Search results for: nonlinear radiation
1707 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations
Authors: Teng Li, Kamran Mohseni
Abstract:
This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, reflecting the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the wire diameter in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied at the level of the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence or sharp interfaces. Over the past several years, the properties of this regularization technique have been investigated and shown capable of simultaneously regularizing shocks and turbulence. The observable method has been applied in direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a feature with shocks and turbulence: the nonlinear irregularity caused by the nonlinear terms in the governing equations, namely, the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of the properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales which sharpen the interface, causing discontinuities.
Many numerical methods for two-phase flows fail in the high Reynolds number case, while others depend on numerical diffusion from the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually about one grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which introduces no numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow
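The convective-term filtering described in this abstract can be illustrated with a minimal one-dimensional sketch. The snippet below applies a Helmholtz-type low-pass filter, (I - α²∂ₓₓ)ū = u, to a sharp-interface profile on a periodic grid, with α playing the role of the observable scale. This is only an illustrative toy under assumed parameters (grid size, α, Jacobi iteration), not the authors' pseudo-spectral two-phase implementation.

```python
def helmholtz_filter(u, alpha, n_iter=2000):
    """Return the 'observable' field u_bar solving (I - alpha^2 d2/dx2) u_bar = u
    on a periodic unit-spacing grid, via Jacobi iteration."""
    n = len(u)
    a2 = alpha * alpha
    ub = list(u)
    for _ in range(n_iter):
        ub = [(u[i] + a2 * (ub[(i - 1) % n] + ub[(i + 1) % n])) / (1.0 + 2.0 * a2)
              for i in range(n)]
    return ub

# A near-discontinuous (sharp-interface) profile on a periodic grid.
n = 64
u = [1.0 if n // 4 <= i < 3 * n // 4 else -1.0 for i in range(n)]
u_bar = helmholtz_filter(u, alpha=2.0)

# The filter conserves the mean but smooths the jump at the interface.
mean_u = sum(u) / n
mean_ub = sum(u_bar) / n
max_jump = max(abs(u_bar[(i + 1) % n] - u_bar[i]) for i in range(n))
```

The observable scale α controls how aggressively sub-resolution features are removed; the original field has a cell-to-cell jump of 2.0 at the interface, while the filtered field varies smoothly.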
Procedia PDF Downloads 502
1706 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for assessing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to develop the best method for distinguishing normal signals from abnormal ones. The data come from both genders, and the recording time varies between several seconds and several minutes. All data are labeled normal or abnormal. Due to the limited positional accuracy and duration of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features, were used to classify normal signals from abnormal ones.
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM was 0.893 and 0.947, respectively. In addition, the results of the proposed algorithm indicated that greater use of nonlinear characteristics in classifying normal versus patient signals gave better performance. Today, research is aimed at quantitatively analyzing the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can indicate the health status of the individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some of this information is hidden from the viewpoint of physicians due to limited time and accuracy, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
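The "return map" mentioned in this abstract is commonly realized as a Poincaré plot of successive R-R intervals, from which nonlinear descriptors such as SD1 and SD2 are computed. The sketch below shows one plausible such feature extraction; the R-R values and the choice of SD1/SD2 are illustrative assumptions, not the paper's exact feature set.

```python
import math

def poincare_sd1_sd2(rr):
    """Poincare-plot (return-map) descriptors SD1 and SD2 from a
    sequence of R-R intervals (seconds). SD1 measures spread across
    the identity line (short-term variability), SD2 along it."""
    x = rr[:-1]  # RR(n)
    y = rr[1:]   # RR(n+1)
    d1 = [(yi - xi) / math.sqrt(2) for xi, yi in zip(x, y)]  # across line of identity
    d2 = [(yi + xi) / math.sqrt(2) for xi, yi in zip(x, y)]  # along line of identity

    def sd(v):
        m = sum(v) / len(v)
        return math.sqrt(sum((vi - m) ** 2 for vi in v) / len(v))

    return sd(d1), sd(d2)

# Hypothetical R-R interval series (seconds) for illustration only.
rr = [0.80, 0.82, 0.79, 0.85, 0.81, 0.83, 0.80]
sd1, sd2 = poincare_sd1_sd2(rr)
```

Features like these would then be concatenated with the statistical (linear) descriptors before being fed to the MLP or SVM classifier.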
Procedia PDF Downloads 262
1705 Comparative Study of Various Treatment Positioning Techniques: A Site-Specific Study - Ca. Breast
Authors: Kamal Kaushik, Dandpani Epili, Ajay G. V., Ashutosh, S. Pradhaan
Abstract:
Introduction: Radiation therapy has come a long way over the decades, from 2-dimensional radiotherapy to intensity-modulated radiation therapy (IMRT) and VMAT. Advanced radiation therapy needs better patient position reproducibility to deliver precise, quality treatment, which raises the need for better image guidance technologies for precise patient positioning. This study presents a two-tattoo simulation with roll correction technique which is comparable to other advanced patient positioning techniques. Objective: This site-specific study aims to compare various treatment positioning techniques used for patients with Ca-breast undergoing radiotherapy. In this study, we compare 5 different positioning methods used for the treatment of ca-breast, namely i) Vacloc with 3 tattoos, ii) breast board with three tattoos, iii) thermoplastic cast with three fiducials, iv) breast board with a thermoplastic mask with 3 tattoos, v) breast board with 2 tattoos (a roll correction method). Methods and material: All-in-one (AIO) solution immobilization was used in all patient positioning techniques. The two-tattoo simulation involves positioning the patient with the help of a thoracic-abdomen wedge, armrest and knee rest. After proper patient positioning, two tattoos are marked on the treatment side of the patient. After positioning, fiducials are placed as per the clinical border markers: (1) sternal notch (lower border of the clavicle head), (2) 2 cm below the contralateral breast, (3) midline between markers 1 and 2, (4) mid-axillary on the same axis as marker 3 (markers 3 and 4 should be on the same axis). During plan implementation, a roll depth correction is applied as per the anterior and lateral positioning tattoos, followed by the shifts required for the isocentre position.
The shifts are then verified by SSD on the patient surface, followed by radiographic verification using Cone Beam Computed Tomography (CBCT). Results: When all five positioning techniques were compared, the shifts produced in the vertical, longitudinal and lateral directions were recorded. The observations clearly suggest that the longitudinal average shifts in the two-tattoo roll correction technique are smaller than in every other patient positioning technique. Vertical and lateral shifts are also comparable to those of other modern positioning techniques. Conclusion: The two-tattoo simulation with roll correction technique provides a better patient setup with a technique that can be implemented easily in most radiotherapy centers across developing nations where 3D verification techniques are not available with the delivery units, as the shifts observed are quite minimal and comparable to those with Vacloc and modern amenities.
Keywords: Ca. breast, breast board, roll correction technique, CBCT
Procedia PDF Downloads 135
1704 Numerical Solution of Space Fractional Order Linear/Nonlinear Reaction-Advection Diffusion Equation Using Jacobi Polynomial
Authors: Shubham Jaiswal
Abstract:
Fractional calculus plays an important role in the modelling of many physical problems and engineering processes, which are well described by fractional differential equations (FDEs). A reliable and efficient technique to solve such FDEs is therefore needed. In this article, a numerical solution of a class of fractional differential equations, namely space fractional order reaction-advection diffusion equations subject to initial and boundary conditions, is derived. In the proposed approach, shifted Jacobi polynomials are used to approximate the solutions, together with the shifted Jacobi operational matrix of fractional order and the spectral collocation method. The main advantage of this approach is that it converts such problems into systems of algebraic equations, which are easier to solve. The proposed approach is effective for linear as well as nonlinear FDEs. To show the reliability, validity and high accuracy of the proposed approach, numerical results for some illustrative examples are reported and compared with existing analytical results from the literature. The error analysis for each case, exhibited through graphs and tables, confirms the exponential convergence rate of the proposed method.
Keywords: space fractional order linear/nonlinear reaction-advection diffusion equation, shifted Jacobi polynomials, operational matrix, collocation method, Caputo derivative
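For reference, the space-fractional operator named in the keywords is typically taken in the Caputo sense. A generic form of the governing equation, with assumed coefficient names rather than the paper's exact notation, is:

```latex
% Caputo space-fractional derivative of order \alpha \in (1,2]
{}^{C}D_{x}^{\alpha} u(x,t)
  = \frac{1}{\Gamma(2-\alpha)} \int_{0}^{x} (x-s)^{1-\alpha}\,
    \frac{\partial^{2} u(s,t)}{\partial s^{2}}\, ds,
  \qquad 1 < \alpha \le 2 .

% Generic space-fractional reaction-advection diffusion equation
\frac{\partial u}{\partial t}
  = a(x)\, {}^{C}D_{x}^{\alpha} u
  - b(x)\, \frac{\partial u}{\partial x}
  + f(u, x, t),
```

where a(x) and b(x) are diffusion and advection coefficients and f is the (possibly nonlinear) reaction term; for α = 2 the classical reaction-advection diffusion equation is recovered.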
Procedia PDF Downloads 445
1703 Predicting Photovoltaic Energy Profile of Birzeit University Campus Based on Weather Forecast
Authors: Muhammad Abu-Khaizaran, Ahmad Faza’, Tariq Othman, Yahia Yousef
Abstract:
This paper presents a study to provide sufficient and reliable information for constructing a Photovoltaic energy profile of the Birzeit University campus (BZU) based on the weather forecast. The developed Photovoltaic energy profile helps to predict the energy yield of the Photovoltaic systems from the weather forecast and hence helps in planning energy production and consumption. Two models are developed in this paper: a Clear Sky Irradiance model and a Cloud-Cover Radiation model, to predict the irradiance for a clear-sky day and a cloudy day, respectively. The adopted procedure for developing these models takes into consideration two levels of abstraction. First, irradiance and weather data were acquired by a sensory (measurement) system installed on the rooftop of the Information Technology College building at the Birzeit University campus. Second, power readings of a fully operational 51 kW commercial Photovoltaic system, installed at the University on the rooftop of the adjacent College of Pharmacy-Nursing and Health Professions building, are used to validate the output of a simulation model and to help refine its structure. Based on a comparison between a mathematical model, which calculates the Clear Sky Irradiance for the University location, and two sets of accumulated measured data, it is found that the simulation system offers an accurate resemblance to the installed PV power station on clear-sky days. However, these comparisons show a divergence between the expected and actual energy yield in extreme weather conditions, including cloud and soiling effects. Therefore, a more accurate prediction model for irradiance, the Cloud-Cover Radiation Model (CRM), was developed; it takes into consideration weather factors that affect irradiance, such as relative humidity and cloudiness. The equivalent mathematical formulas implement corrections to provide more accurate inputs to the simulation system.
The results of the CRM show a very good match with the actual measured irradiance during a cloudy day. The developed Photovoltaic profile helps in predicting the output energy yield of the Photovoltaic system installed at the University campus based on the predicted weather conditions. The simulation and practical results for both models are in very good agreement.
Keywords: clear-sky irradiance model, cloud-cover radiation model, photovoltaic, weather forecast
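A clear-sky irradiance model of the kind described here can be sketched with a standard textbook formulation: extraterrestrial irradiance, an eccentricity correction for the Earth-Sun distance, and a bulk atmospheric transmittance raised to the relative air mass. The constants and transmittance below are generic assumptions, not the calibrated parameters of the BZU model.

```python
import math

def clear_sky_irradiance(day_of_year, zenith_deg, tau=0.75):
    """Simple clear-sky global horizontal irradiance estimate (W/m^2)."""
    I0 = 1367.0  # solar constant, W/m^2
    ecc = 1.0 + 0.033 * math.cos(2.0 * math.pi * day_of_year / 365.0)
    cz = math.cos(math.radians(zenith_deg))
    if cz <= 0.0:
        return 0.0  # sun below the horizon
    air_mass = 1.0 / cz  # plane-parallel approximation, valid away from the horizon
    return I0 * ecc * (tau ** air_mass) * cz

noon = clear_sky_irradiance(172, 10.0)     # near-overhead sun in late June
evening = clear_sky_irradiance(172, 80.0)  # low sun, long atmospheric path
night = clear_sky_irradiance(172, 100.0)   # sun below horizon
```

A cloud-cover correction such as the CRM would then scale this clear-sky value down as a function of cloudiness and humidity.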
Procedia PDF Downloads 132
1702 Simulation of the Collimator Plug Design for Prompt-Gamma Activation Analysis in the IEA-R1 Nuclear Reactor
Authors: Carlos G. Santos, Frederico A. Genezini, A. P. Dos Santos, H. Yorivaz, P. T. D. Siqueira
Abstract:
Prompt-Gamma Activation Analysis (PGAA) is a valuable technique for investigating the elemental composition of various samples. However, the installation of a PGAA system entails specific conditions, such as filtering the neutron beam according to the target and providing adequate shielding for both users and detectors. These requirements incur substantial costs, exceeding $100,000, including manpower. Nevertheless, a cost-effective approach involves leveraging an existing neutron beam facility to create a hybrid system integrating PGAA and Neutron Tomography (NT). The IEA-R1 nuclear reactor at IPEN/USP possesses an NT facility with suitable conditions for adapting and implementing a PGAA device. The NT facility offers a slightly colder thermal flux and provides shielding for user protection. The key additional requirement is designing detector shielding to mitigate the high gamma-ray background and safeguard the HPGe detector from neutron-induced damage. This study employs Monte Carlo simulations with the MCNP6 code to optimize the collimator plug for PGAA within the IEA-R1 NT facility. Three collimator models are proposed and simulated to assess their effectiveness in shielding the gamma and neutron radiation from nuclear fission. The aim is to achieve a focused prompt-gamma signal while shielding ambient gamma radiation. The simulation results indicate that one of the proposed designs is particularly suitable for the PGAA-NT hybrid system.
Keywords: MCNP6.1, neutron, prompt-gamma ray, prompt-gamma activation analysis
Procedia PDF Downloads 75
1701 Constructions of Linear and Robust Codes Based on Wavelet Decompositions
Authors: Alla Levina, Sergey Taranov
Abstract:
The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by the use of perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. Wavelet transforms are applied in various fields of science; some of their applications are cleaning a signal from noise, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first is based on the multiplicative inverse in a finite field; in the second, the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes.
Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability
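The second construction, where the redundancy part is the cube of the information part, can be sketched in miniature. The toy below works over a small prime field GF(p) purely for illustration; practical designs use extension fields GF(2^m), and the field size and error model here are assumptions.

```python
# Toy robust-code sketch: redundancy = (information)^3 in GF(p).
P = 257  # small prime field for illustration; real designs use GF(2^m)

def encode(x):
    """Codeword = (information part, cube of information part mod P)."""
    return (x, pow(x, 3, P))

def check(word):
    """A word is accepted iff the redundancy equals the cube of the info part."""
    x, r = word
    return pow(x, 3, P) == r

cw = encode(42)
ok = check(cw)

# The same additive error applied to both halves is not masked for most
# codewords, because (x + e)^3 != x^3 + e in general.
corrupted = ((cw[0] + 5) % P, (cw[1] + 5) % P)
detected = not check(corrupted)
```

The nonlinearity of the cube map is exactly what spreads detection capability over all error configurations, rather than concentrating it as a linear parity check would.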
Procedia PDF Downloads 489
1700 Assessing Future Isoprene Emissions in Southeast Asia: Climate Change Implications
Authors: Justin Sentian, Franky Herman, Maggie Chel Gee Ooi, Vivian Kong WAN Yee, Teo You Rou, Chin Jia Hui
Abstract:
Isoprene emission is known to depend heavily on temperature and radiation. Considering these environmental factors together is crucial for a comprehensive understanding of the impact of climate change on isoprene emissions and atmospheric chemistry. The aim of this study is therefore to investigate how isoprene emission responds to changing climate scenarios in Southeast Asia (SEA). Two climate change scenarios, RCP4.5 and RCP8.5, were used to simulate climate change using the Weather Research and Forecasting (WRF v3.9.1) model in three different time periods: near future (2030-2039), mid-century (2050-2059), and far future (2090-2099), with 2010 (2005-2014) as the baseline period. The output from WRF was then used to investigate how isoprene emission changes under a changing climate using the Model of Emissions of Gases and Aerosols from Nature (MEGAN v2.1). The results show that the overall isoprene emissions during the baseline period are 1.41 tons hr-1 during DJF and 1.64 tons hr-1 during JJA. The overall emissions for both RCPs increase slightly during DJF, ranging from 0.03 to 0.06 tons hr-1 in the near future, 0.11 to 0.19 tons hr-1 in the mid-century, and 0.24 to 0.52 tons hr-1 in the far future. During the JJA season, environmental conditions often favour higher emission rates in MEGAN due to their optimal state. The future emission rate of isoprene is strongly modulated by both temperature and photosynthetically active radiation (PAR), as indicated by a strong positive correlation (0.81-1.00). This relationship underscores the fact that future warming will not be the sole driver impacting isoprene emissions. It is therefore essential to consider the multifaceted effect of climate change in shaping future isoprene levels.
Keywords: isoprene, climate change, Southeast Asia, WRF, MEGAN
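The 0.81-1.00 figures quoted above are Pearson correlation coefficients between emission rate and each driver. As a minimal sketch, the function below computes Pearson's r; the temperature and emission series are hypothetical values invented for illustration, not data from the study.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical illustration: emissions rising monotonically with temperature.
temp = [24.0, 25.0, 26.5, 27.0, 28.5]       # deg C
emis = [1.40, 1.45, 1.55, 1.58, 1.70]        # tons hr-1
r = pearson_r(temp, emis)
```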
Procedia PDF Downloads 28
1699 Environmental and Safety Studies for Advanced Fuel Cycle Fusion Energy Systems: The ESSENTIAL Approach
Authors: Massimo Zucchetti
Abstract:
In the US, the SPARC and ARC projects of compact tokamaks are being developed: both are aimed at the technological demonstration of fusion power reactors with cutting-edge technology but follow different design approaches. However, they show more similarities than differences in the fuel cycle, safety, radiation protection, environmental, waste and decommissioning aspects: all reactors, whether experimental or demonstration ones, have to fulfill certain "essential" requirements to pass from virtual to real machines, to be built in the real world. The paper will discuss these "essential" requirements. Some of the relevant activities in these fields, carried out by our research group (ESSENTIAL group), will be briefly reported, with the aim of showing some methodological aspects that have been developed and might be of wider interest. A non-competitive comparison between our results for different projects will be included when useful. The question of advanced D-He3 fuel cycles for those machines will be addressed briefly. In the past, the IGNITOR project of a compact high-magnetic-field D-T ignition experiment was found able to sustain limited D-He3 plasmas, while the Candor project was a more decisive step toward D-He3 fusion reactors. The following topics will be treated: waste management and radioactive safety studies for advanced fusion power plants; development of compact high-field advanced fusion reactors; behavior of nuclear materials under irradiation (neutron-induced radioactivity due to side DT reactions, radiation damage); accident analysis; reactor siting.
Keywords: advanced fuel fusion reactors, deuterium-helium3, high-field tokamaks, fusion safety
Procedia PDF Downloads 82
1698 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy
Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro
Abstract:
Over the past few decades, film dosimetry has become a tool used in various radiotherapy modalities, either for clinical quality assurance (QA) or for dose verification. The response of the film to irradiation is usually expressed as optical density (OD) or net optical density (netOD). Since the film's response to radiation is not linear, the use of film as a dosimeter must go through a calibration process. This study aimed to compare the calibration curve functions obtained with various measurement methods: a flatbed scanner, a point densitometer and a spectrophotometer. For every response function, a radiochromic film calibration curve is generated for each method, and accuracy, precision and sensitivity analyses are performed. netOD is obtained by measuring the change in the optical density (OD) of the film before and after irradiation: when using a film scanner, ImageJ is used to extract the pixel value of the film in the red channel of the three (RGB) channels; when using a point densitometer, the change in OD before and after irradiation is calculated; and when using a spectrophotometer, the change in absorbance before and after irradiation is calculated. The results showed that the three calibration methods gave readings with a netOD precision below 3% for doses at an uncertainty of 1σ (one sigma). While the sensitivity of all three methods shows the same trend in response to film readings against radiation, the magnitudes of the sensitivities differ. The accuracy of the three methods is below 3% for doses above 100 cGy and 200 cGy, but for doses below 100 cGy it was found to be above 3% when using point densitometers and spectrophotometers.
When all three methods are used for clinical implementation, the results of the study show accuracy and precision below 2% for the use of scanners and spectrophotometers, and above 3% for precision and accuracy when using point densitometers.
Keywords: calibration methods, EBT3 film dosimetry, flatbed scanner, densitometer, spectrophotometer
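The netOD quantity used throughout this abstract has a simple closed form: the base-10 log of the ratio of transmitted signal before and after irradiation. A minimal sketch, with hypothetical 16-bit red-channel pixel values (the readings and background value are assumptions, not the study's data):

```python
import math

def net_od(pv_unexposed, pv_exposed, pv_background=0.0):
    """Net optical density from scanner pixel values (red channel).
    pv_background is the zero-light (opaque) reading; subtracting it is a
    common refinement and is optional here."""
    return math.log10((pv_unexposed - pv_background) /
                      (pv_exposed - pv_background))

# Hypothetical red-channel readings before and after irradiation.
netod = net_od(45000.0, 30000.0)
```

A calibration curve is then fitted between netOD and delivered dose, which is what the three measurement methods in the study were compared on.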
Procedia PDF Downloads 135
1697 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet
Authors: Amir Moslemi, Amir Movafeghi, Shahab Moradi
Abstract:
One of the most important challenging factors in medical images is noise. Image denoising refers to the improvement of a digital medical image that has been infected by additive white Gaussian noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise and AWGN. Computed tomography (CT) images are subject to low quality due to noise. The quality of CT images depends directly on the dose absorbed by the patient, in such a way that increasing the absorbed radiation, and consequently the absorbed dose to the patient (ADP), enhances CT image quality. Accordingly, noise reduction techniques that enhance image quality without exposing patients to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two different directional 2-dimensional (2D) transformations, i.e., curvelet and contourlet, and discrete wavelet transform (DWT) thresholding methods, BayesShrink and AdaptShrink, compared with each other. We also propose a new threshold in the wavelet domain for both noise reduction and edge retention: the proposed method retains the significant modified coefficients, which results in good visual quality. Evaluations were accomplished using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Keywords: computed tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), absorbed dose to patient (ADP)
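Wavelet-domain thresholding of the kind compared in this abstract centers on two small operations: soft-thresholding of detail coefficients, and choosing the subband threshold. The sketch below shows the standard BayesShrink rule t = σ_n²/σ_x; the coefficient values and sigmas are illustrative assumptions, and a real pipeline would apply this per subband after a 2D DWT.

```python
def soft_threshold(coeffs, t):
    """Soft-threshold detail coefficients: shrink toward zero by t,
    zeroing anything with magnitude below t."""
    return [(c - t) if c > t else (c + t) if c < -t else 0.0 for c in coeffs]

def bayes_shrink_threshold(noise_sigma, coeff_sigma):
    """BayesShrink subband threshold t = sigma_n^2 / sigma_x, where
    sigma_x is the std of the noise-free signal coefficients."""
    if coeff_sigma <= 0.0:
        return float("inf")  # no signal: suppress the whole subband
    return noise_sigma ** 2 / coeff_sigma

# Hypothetical detail-subband coefficients and noise/signal estimates.
detail = [4.0, -0.3, 1.2, -2.5, 0.1]
t = bayes_shrink_threshold(noise_sigma=1.0, coeff_sigma=2.0)  # t = 0.5
shrunk = soft_threshold(detail, t)
```

An "edge-retaining" threshold such as the one the paper proposes would modify either t or the shrinkage rule so that large (edge-carrying) coefficients are shrunk less aggressively.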
Procedia PDF Downloads 441
1696 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. 
For testing and validation, a D-Wave 2X device was used, as well as QxBranch's QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
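The 1-hot encoding mentioned above maps each discrete variable's k possible levels onto k qubits, with a quadratic penalty that vanishes only when exactly one qubit is set. A minimal sketch of that penalty (the strength constant and 3-level variable are assumptions for illustration):

```python
from itertools import product

def one_hot_penalty(bits, strength=1.0):
    """QUBO penalty A * (sum_i x_i - 1)^2: zero iff exactly one bit is set."""
    return strength * (sum(bits) - 1) ** 2

# Enumerate all assignments of 3 bits: only the three one-hot states are
# penalty-free, so the annealer's low-energy states respect the encoding.
free_states = [b for b in product((0, 1), repeat=3) if one_hot_penalty(b) == 0]
violation = one_hot_penalty((1, 1, 0))
```

In a full formulation, this penalty is added (with a sufficiently large A) to the objective being minimized, so that constraint-violating states are pushed out of the low-energy spectrum the annealer samples.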
Procedia PDF Downloads 525
1695 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas
Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi
Abstract:
In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER day- and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified statistical thermal anomalies using the land surface temperature and the residuals calculated from the modeled temperatures and the ASTER-derived surface temperatures. Areas with temperatures or temperature residuals greater than 2σ, or between 1σ and 2σ, were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database. Also, the YNP hot springs and geysers were located within areas identified as anomalous thermal areas. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial-based insolation model, provide an effective means for identifying and locating areas of geothermal activity over large areas and rough terrain.
Keywords: thermal remote sensing, insolation model, land surface temperature, geothermal anomalies
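The 2σ anomaly rule described above is a plain z-score threshold on the temperature residuals. A minimal sketch, with a hypothetical residual field in which one strongly geothermal pixel stands out (the values are invented for illustration):

```python
import math

def flag_anomalies(residuals, k=2.0):
    """Return indices of pixels whose residual exceeds the scene mean by
    more than k standard deviations."""
    n = len(residuals)
    mean = sum(residuals) / n
    sigma = math.sqrt(sum((r - mean) ** 2 for r in residuals) / n)
    return [i for i, r in enumerate(residuals) if (r - mean) > k * sigma]

# Hypothetical residuals (observed minus insolation-modeled temperature, K).
res = [0.2, -0.1, 0.3, 0.0, -0.2, 0.1, 9.0, -0.3, 0.2, -0.2]
hot = flag_anomalies(res, k=2.0)
```

Pixels between 1σ and 2σ would be flagged the same way with k=1.0 and the k=2.0 set subtracted out.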
Procedia PDF Downloads 371
1694 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce the radiation dose for chest CT examinations by adding Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor. Results were compared to those measured with the CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a size conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with Signal-to-Noise Ratio (SNR) measurements in the phantom. SPSS software was used for data analysis. The results showed that with CARE Dose 4D included, ED was lowered by 48.35% and 51.51% using DLP and CT-Expo, respectively. In addition, ED ranges between 7.01 mSv and 6.6 mSv for the standard protocol, and between 3.62 mSv and 3.2 mSv with TCM. Similar results are found for SSDE: the dose was higher without TCM, at 16.25 mGy, and was lower by 48.8% with TCM. The SNR values calculated were significantly different (p=0.03<0.05). The highest SNR was measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study proves the potential of the TCM technique for SSDE and ED reduction while conserving image quality at a high diagnostic reference level for thoracic CT examinations.
Keywords: anthropomorphic phantom, computed tomography, CT-Expo, radiation dose
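Both dose quantities in this abstract are simple products. A minimal sketch follows; the chest conversion coefficient (~0.014 mSv/(mGy·cm)) and the size factor are commonly tabulated values used here as assumptions, and the DLP/CTDIvol inputs are hypothetical, not the study's measurements.

```python
def effective_dose_mSv(dlp_mGy_cm, k=0.014):
    """ED estimate from dose-length product; k is a region-specific
    conversion coefficient (chest ~0.014 mSv/(mGy*cm), an assumed value)."""
    return dlp_mGy_cm * k

def ssde_mGy(ctdi_vol_mGy, f_size):
    """SSDE = CTDIvol times the size-dependent conversion factor f_size,
    looked up against the patient's effective diameter (AAPM Report 204)."""
    return ctdi_vol_mGy * f_size

ed = effective_dose_mSv(500.0)   # hypothetical DLP of 500 mGy*cm -> 7.0 mSv
ssde = ssde_mGy(10.0, 1.3)       # hypothetical CTDIvol and size factor
```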
Procedia PDF Downloads 221
1693 Monte Carlo Simulation Study on Improving the Flattening Filter-Free Radiotherapy Beam Quality Using Filters from Low-Z Material
Authors: H. M. Alfrihidi, H.A. Albarakaty
Abstract:
Flattening filter-free (FFF) photon beam radiotherapy has increased in the last decade, enabled by advancements in treatment planning systems and radiation delivery techniques such as multi-leaf collimators. FFF beams have higher dose rates, which reduces treatment time. On the other hand, FFF beams have a higher surface dose, due to the loss of the beam-hardening effect normally provided by the flattening filter (FF). The possibility of improving FFF beam quality using filters from low-Z materials such as steel and aluminium (Al) was investigated using Monte Carlo (MC) simulations. The attenuation coefficient of low-Z materials is higher for low-energy photons than for high-energy photons, which hardens the FFF beam and, consequently, reduces the surface dose. The BEAMnrc user code, based on the Electron Gamma Shower (EGSnrc) MC code, was used to simulate the beam of a 6 MV TrueBeam linac. A phase-space file provided by Varian Medical Systems was used as the radiation source in the simulation. This phase-space file was scored just above the jaws, at 27.88 cm from the target. The linac from the jaws downward was constructed, and the transmitted radiation was simulated and scored at 100 cm from the target. To study the effect of low-Z filters, steel and Al filters with a thickness of 1 cm were added below the jaws, and the phase-space file was scored at 100 cm from the target. For comparison, the FF beam was simulated using a similar setup. The BEAM Data Processor (BEAMdp) was used to analyse the energy spectra in the phase-space files. Then, the dose distribution resulting from these beams was simulated in a homogeneous water phantom using DOSXYZnrc. The dose profile was evaluated according to the surface dose, the lateral dose distribution, and the percentage depth dose (PDD). The energy spectra of the beams show that the FFF beam is softer than the FF beam.
The energy peaks for the FFF and FF beams are 0.525 MeV and 1.52 MeV, respectively. The FFF beam's energy peak becomes 1.1 MeV with a steel filter, while the Al filter does not affect the peak position. The steel and Al filters reduced the surface dose by 5% and 1.7%, respectively. The dose at a depth of 10 cm (D10) rises by around 2% and 0.5% with the steel and Al filters, respectively. On the other hand, the steel and Al filters reduce the dose rate of the FFF beam by 34% and 14%, respectively. However, their effect on the dose rate is less than that of the tungsten FF, which reduces the dose rate by about 60%. In conclusion, filters from low-Z material decrease the surface dose and increase the D10 dose, allowing a high dose to be delivered to deep tumors with a low skin dose. Although using these filters affects the dose rate, this effect is much lower than that of the FF.
Keywords: flattening filter free, Monte Carlo, radiotherapy, surface dose
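The beam-hardening mechanism exploited above can be illustrated with a toy two-bin spectrum: because the linear attenuation coefficient μ(E) is larger at low energy, a filter preferentially removes soft photons and shifts the mean energy upward. The spectrum and μ values below are illustrative stand-ins, not the paper's simulated data:

```python
import math

def harden(spectrum, mu, t_cm):
    """Attenuate each energy bin by exp(-mu(E) * t) (Beer-Lambert)."""
    return {E: w * math.exp(-mu[E] * t_cm) for E, w in spectrum.items()}

def mean_energy(spectrum):
    """Fluence-weighted mean energy of a binned spectrum (MeV)."""
    total = sum(spectrum.values())
    return sum(E * w for E, w in spectrum.items()) / total

# Toy FFF-like spectrum {MeV: relative fluence} and illustrative
# attenuation coefficients (1/cm) -- larger at low energy
spec = {0.5: 0.7, 2.0: 0.3}
mu_filter = {0.5: 0.65, 2.0: 0.30}
hard = harden(spec, mu_filter, 1.0)   # 1 cm filter, as in the study
```

After filtering, the low-energy bin is depleted more strongly, so the mean energy of `hard` exceeds that of `spec`: the beam is harder, which is what lowers the surface dose.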
Procedia PDF Downloads 73
1692 Development of a Laboratory Laser-Produced Plasma “Water Window” X-Ray Source for Radiobiology Experiments
Authors: Daniel Adjei, Mesfin Getachew Ayele, Przemyslaw Wachulak, Andrzej Bartnik, Luděk Vyšín, Henryk Fiedorowicz, Inam Ul Ahad, Lukasz Wegrzynski, Anna Wiechecka, Janusz Lekki, Wojciech M. Kwiatek
Abstract:
Laser-produced plasma light sources, which emit high-intensity X-ray pulses and deliver high doses, are useful for understanding the mechanisms of high-dose effects on biological samples. In this study, a desk-top laser plasma soft X-ray source developed for radiobiology research is presented. The source is based on a double-stream gas puff target, irradiated with a commercial Nd:YAG laser (EKSPLA), which generates laser pulses of 4 ns duration and energy up to 800 mJ at a 10 Hz repetition rate. The source has been optimized for maximum emission in the “water window” wavelength range from 2.3 nm to 4.4 nm by using pure gases (argon, nitrogen and krypton) and spectral filtering. Results of the source characterization measurements and dosimetry of the produced soft X-ray radiation are shown and discussed. The high brightness of the laser-produced plasma soft X-ray source and the low penetration depth of the produced X-ray radiation in biological specimens allow a high dose rate to be delivered to the specimen: over 28 Gy per shot, and 280 Gy/s at the maximum repetition rate of the laser system. The source has a unique capability for irradiation of cells with a high pulse dose both in vacuum and in a He environment. The ability of the source to induce DNA double- and single-strand breaks will be discussed.
Keywords: laser produced plasma, soft X-rays, radiobiology experiments, dosimetry
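The quoted dose-rate figures follow directly from pulse dose times repetition rate, which a one-liner makes explicit:

```python
def average_dose_rate(dose_per_pulse_gy, rep_rate_hz):
    """Average dose rate of a pulsed source: dose per pulse x repetition rate."""
    return dose_per_pulse_gy * rep_rate_hz

# 28 Gy per shot at the laser's 10 Hz maximum repetition rate
rate = average_dose_rate(28.0, 10.0)   # 280 Gy/s, as quoted in the abstract
```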
Procedia PDF Downloads 587
1691 A Broadband Tri-Cantilever Vibration Energy Harvester with Magnetic Oscillator
Authors: Xiaobo Rui, Zhoumo Zeng, Yibo Li
Abstract:
A novel tri-cantilever energy harvester with a magnetic oscillator is presented, which converts ambient vibration into electrical energy to power low-power devices such as wireless sensor networks. The most common way to harvest vibration energy is based on linear resonant devices such as cantilever beams, since this structure creates the highest strain for a given force. The highest efficiency is achieved when the resonance frequency of the harvester matches the vibration frequency. The limitation of this structure is its narrow effective bandwidth. To overcome this limitation, this article introduces a broadband tri-cantilever harvester with nonlinear stiffness. The energy harvester consists of three thin cantilever beams, vertically arranged, with neodymium (NdFeB) magnets at their free ends and a fixed base at the other end. The three cantilevers are designed with different thicknesses and therefore have different resonant frequencies, providing the same advantage of multiple resonant frequencies as a piezoelectric cantilever array. To achieve broadband energy harvesting, magnetic interaction is used to introduce nonlinear system stiffness that tunes the resonant frequency to match the excitation. Since the three cantilever tips are all free and the magnetic force is distance dependent, the resonant frequencies change in a complex way with the vertical vibration of the free ends. Both a model and an experiment were built. The electromechanically coupled lumped-parameter model is presented, together with an electromechanical formulation and analytical expressions for the coupled nonlinear vibration response and voltage response. The entire structure was fabricated and mechanically attached to an electromagnetic shaker, acting as a vibrating body, via the fixed base, in order to couple the vibrations to the cantilevers. The cantilevers are bonded with piezoelectric macro-fiber composite (MFC) materials (Model: M8514P2).
Each cantilever measures 120 × 20 mm², with thicknesses of 1 mm, 0.8 mm and 0.6 mm, respectively. The prototype generator has a measured performance of 160.98 mW effective electrical power and 7.93 V DC output voltage at an excitation level of 10 m/s². A 130% increase in the operating bandwidth is achieved. This device is promising for supporting low-power devices, peer-to-peer wireless nodes, and small-scale wireless sensor networks in ambient vibration environments.
Keywords: tri-cantilever, ambient vibration, energy harvesting, magnetic oscillator
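Why three thicknesses stagger the resonances can be seen from the Euler-Bernoulli first-mode frequency of a clamped-free beam, which scales linearly with thickness. A sketch using the abstract's 120 × 20 mm planform and the three quoted thicknesses, with assumed steel-like material properties (the paper does not state them):

```python
import math

def cantilever_f1(E, rho, L, b, t):
    """First bending natural frequency (Hz) of a uniform cantilever:
    f1 = (1.875^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)), I = b*t^3/12."""
    I = b * t**3 / 12.0   # second moment of area
    A = b * t             # cross-sectional area
    return (1.875**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))

# Assumed steel-like properties; 120 mm x 20 mm beams as in the abstract
E, rho, L, b = 200e9, 7800.0, 0.12, 0.02
freqs = [cantilever_f1(E, rho, L, b, t) for t in (1e-3, 0.8e-3, 0.6e-3)]
```

Since sqrt(I/A) is proportional to t, the three frequencies sit in the exact ratio 1 : 0.8 : 0.6, giving three staggered resonances that the magnetic nonlinearity then broadens into a continuous band.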
Procedia PDF Downloads 154
1690 Dosimetric Analysis of Intensity Modulated Radiotherapy versus 3D Conformal Radiotherapy in Adult Primary Brain Tumors: Regional Cancer Centre, India
Authors: Ravi Kiran Pothamsetty, Radha Rani Ghosh, Baby Paul Thaliath
Abstract:
Radiation therapy has undergone many advancements and evolved from 2D to 3D. Recently, the rapid pace of drug discovery, cutting-edge technology, and clinical trials, together with innovative advancements in computer technology and treatment planning, has led to intensity modulated radiotherapy (IMRT), which delivers a homogeneous dose to the tumor while sparing normal tissues. The present study was a hospital-based experience comparing two different conformal radiotherapy techniques for brain tumors. This analytical study was conducted at the Regional Cancer Centre, India, from January 2014 to January 2015. Ten patients were selected after applying the inclusion and exclusion criteria. All the patients were treated on a Siemens Artiste linear accelerator. The tolerance level for the maximum dose was 6.0 Gy for the lenses and 54.0 Gy for the brain stem, optic chiasm and optic nerves, as per RTOG criteria. The mean and standard deviation values of PTV 98%, PTV 95% and PTV 2% for IMRT were 93.16±2.9, 95.01±3.4 and 103.1±1.1, respectively; for 3D-CRT, they were 91.4±4.7, 94.17±2.6 and 102.7±0.39, respectively. The PTV max dose (%) in IMRT and 3D-CRT was 104.7±0.96 and 103.9±1.0, respectively. The maximum dose to the tumor can be delivered with IMRT within acceptable toxicity limits. Variables such as expertise, tumor location, patient condition, and the TPS influence the outcome of the treatment.
Keywords: brain tumors, intensity modulated radiotherapy (IMRT), three dimensional conformal radiotherapy (3D-CRT), radiation therapy oncology group (RTOG)
Procedia PDF Downloads 239
1689 Micro-Channel Flows Simulation Based on Nonlinear Coupled Constitutive Model
Authors: Qijiao He
Abstract:
Micro-Electro-Mechanical Systems (MEMS) are one of the most rapidly developing frontier research fields, in both theory and applied technology. The micro-channel is an important component of MEMS. With the research and development of MEMS, the size of micro-devices and micro-channels becomes ever smaller. Compared with macroscale flow, the flow characteristics of gas in a micro-channel change, and the rarefaction effect appears clearly. For rarefied gas and microscale flows, however, the Navier-Stokes-Fourier (NSF) equations are no longer appropriate, due to the breakdown of the continuum hypothesis. A Nonlinear Coupled Constitutive Model (NCCM) has been derived from the Boltzmann equation to describe the characteristics of both continuum and rarefied gas flows. We apply the present scheme to simulate continuum and rarefied gas flows in a micro-channel structure. For comparison, we also apply other widely used methods based on particle simulation or direct solution of the distribution function, such as Direct Simulation Monte Carlo (DSMC), the Unified Gas-Kinetic Scheme (UGKS) and the Lattice Boltzmann Method (LBM), to simulate the flows. The results show that the present solution is in better agreement with the experimental data and the DSMC, UGKS and LBM results than the NSF results in rarefied cases, but is in good agreement with the NSF results in continuum cases. Several characteristics of both continuum and rarefied gas flows are observed and analyzed.
Keywords: continuum and rarefied gas flows, discontinuous Galerkin method, generalized hydrodynamic equations, numerical simulation
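The breakdown of the continuum hypothesis mentioned above is conventionally quantified by the Knudsen number Kn = λ/L (mean free path over channel dimension). A sketch of the standard regime boundaries, with an illustrative air-at-atmospheric-pressure example (λ ≈ 68 nm is a textbook value, not from the paper):

```python
def knudsen(mean_free_path, channel_height):
    """Knudsen number Kn = lambda / L."""
    return mean_free_path / channel_height

def flow_regime(kn):
    """Conventional Knudsen-number regime boundaries for gas flows."""
    if kn < 0.001:
        return "continuum (NSF valid)"
    if kn < 0.1:
        return "slip flow"
    if kn < 10.0:
        return "transition (NCCM/DSMC territory)"
    return "free molecular"

# Air at atmospheric pressure (lambda ~ 68 nm) in a 1-micron channel
kn = knudsen(68e-9, 1e-6)   # ~0.068: already outside the NSF range
```

This is why a micron-scale channel at ordinary pressure already requires a model such as the NCCM, DSMC, or UGKS rather than the NSF equations.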
Procedia PDF Downloads 172
1688 A Variant of Newton's Method with Free Second-Order Derivative
Authors: Young Hee Geum
Abstract:
In this paper, we present an iterative method for solving nonlinear equations and determine the control parameters that make it converge cubically. In addition, we derive the asymptotic error constant.
Keywords: asymptotic error constant, iterative method, multiple root, root-finding, order of convergence
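The abstract does not reproduce its iteration formula, so as a stand-in, here is one classical cubically convergent scheme that likewise avoids the second derivative: the arithmetic-mean Newton method of Weerakoon and Fernando. It is an example of the class of methods discussed, not the authors' specific variant:

```python
def newton_variant(f, fprime, x0, tol=1e-12, max_iter=50):
    """Arithmetic-mean Newton (Weerakoon-Fernando): third-order
    convergence for simple roots, using only first derivatives."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        z = x - fx / fprime(x)                    # ordinary Newton predictor
        x = x - 2 * fx / (fprime(x) + fprime(z))  # averaged-slope corrector
    return x

# Find the cube root of 2 as the root of x^3 - 2 = 0
root = newton_variant(lambda x: x**3 - 2.0, lambda x: 3 * x**2, 1.0)
```

The averaged-slope step costs one extra derivative evaluation per iteration but raises the order from two to three, which is the kind of trade-off such variants study.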
Procedia PDF Downloads 294
1687 Hourly Solar Radiation Predictions for Anticipatory Control of an Electrically Heated Floor: Use of Online Weather Conditions Forecast
Authors: Helene Thieblemont, Fariborz Haghighat
Abstract:
Energy storage systems play a crucial role in decreasing building energy consumption during peak periods and expanding the use of renewable energies in buildings. To provide high building thermal performance, the energy storage system has to be properly controlled to ensure good energy performance while maintaining satisfactory thermal comfort for the building's occupants. In the case of passive-discharge storage, the required amount of energy must be defined in advance to avoid overheating the building. Consequently, anticipatory supervisory control strategies have been developed that forecast future energy demand and production in order to coordinate systems. Anticipatory supervisory control strategies are based on predictions, mainly of the weather. However, while the forecasted hourly outdoor temperature may be found online with high accuracy, solar radiation predictions are usually not available online. To estimate them, this paper proposes an approach based on the forecast of weather conditions. Several methods for correlating hourly weather condition forecasts to real hourly solar radiation are compared. Results show that using weather condition forecasts allows estimating next-day solar radiation with acceptable accuracy. Moreover, this technique yields hourly data that may be used in building models. As a result, this solar radiation prediction model may help implement model-based controllers such as Model Predictive Control.
Keywords: anticipatory control, model predictive control, solar radiation forecast, thermal storage
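One simple way to correlate a forecasted weather variable with measured radiation, in the spirit of the comparison above, is an ordinary least-squares fit. The cloud-cover/radiation pairs below are hypothetical training data, not the paper's measurements:

```python
def fit_linear(x, y):
    """Ordinary least squares y = a + b*x via the normal equations."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical pairs: forecast cloud-cover fraction -> measured W/m^2
cloud = [0.0, 0.25, 0.5, 0.75, 1.0]
rad = [800.0, 650.0, 500.0, 350.0, 200.0]
a, b = fit_linear(cloud, rad)
pred = a + b * 0.4   # predicted hourly radiation at 40% forecast cloud cover
```

A real implementation would fit per hour of day (or include solar geometry) and compare several such regressors, as the abstract describes.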
Procedia PDF Downloads 271
1686 Temperature Distribution for Asphalt Concrete-Concrete Composite Pavement
Authors: Tetsya Sok, Seong Jae Hong, Young Kyu Kim, Seung Woo Lee
Abstract:
The temperature distribution in asphalt concrete (AC)-concrete composite pavement is one of the main factors affecting the performance life of the pavement. The temperature gradient in the concrete slab underneath the AC layer produces critical curling stresses and leads to de-bonding of the AC-concrete interface. These stresses, when enhanced by repetitive axial loadings, also contribute to fatigue damage and eventual crack development within the slab. Moreover, temperature changes within the concrete slab cause the slab to contract and expand, which significantly induces reflective cracking in the AC layer. In this paper, the pavement temperature was predicted numerically using a one-dimensional finite difference method (FDM) in a fully explicit scheme. The numerical model provides a fundamental and clear understanding of the heat energy balance, including incoming and outgoing thermal energies in addition to dissipated heat in the system. Using reliable meteorological data for daily air temperature, solar radiation and wind speed, together with variable pavement surface properties, the predicted pavement temperature profile was validated against field-measured data. Additionally, the effects of AC thickness and daily air temperature on the temperature profile in the underlying concrete were investigated. Based on the results obtained, the numerically predicted temperature of the AC-concrete composite pavement using FDM showed good accuracy compared to the field-measured data, and a thicker AC layer significantly insulates the temperature distribution in the underlying concrete slab.
Keywords: asphalt concrete, finite difference method (FDM), curling effect, heat transfer, solar radiation
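The core of a fully explicit 1-D FDM like the one described is a single update per depth node per time step. A minimal sketch with fixed (Dirichlet) boundary temperatures standing in for the paper's full surface energy balance; the depth, diffusivity, and temperatures are illustrative:

```python
def step_temperature(T, alpha, dz, dt, T_surface, T_bottom):
    """One fully explicit FDM step of 1-D conduction dT/dt = alpha*d2T/dz2
    with fixed boundary temperatures. Stability needs r = alpha*dt/dz^2 <= 0.5."""
    r = alpha * dt / dz**2
    assert r <= 0.5, "explicit scheme unstable"
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
    new[0], new[-1] = T_surface, T_bottom
    return new

# Illustrative AC/concrete column: 0.3 m deep, 7 nodes, alpha ~ 1e-6 m^2/s
T = [40.0] + [25.0] * 5 + [20.0]   # hot surface, uniform interior, cool subgrade
for _ in range(100):               # 100 steps of 600 s (~17 h)
    T = step_temperature(T, 1e-6, 0.05, 600.0, 40.0, 20.0)
```

The real model replaces the fixed surface node with the incoming solar, convective, and radiative fluxes described in the abstract, but the interior update is exactly this stencil.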
Procedia PDF Downloads 269
1685 The Application of Gel Dosimeters and Comparison with Other Dosimeters in Radiotherapy: A Literature Review
Authors: Sujan Mahamud
Abstract:
Purpose: A major challenge in radiotherapy treatment is to deliver a precise dose of radiation to the tumor with minimal dose to the healthy normal tissues. Recently, gel dosimetry has emerged as a powerful tool to measure three-dimensional (3D) dose distributions for complex delivery verification and quality assurance. These dosimeters act both as phantom and detector, confirming the versatility of the technique. The aim of the study is to review the application of gel dosimeters in radiotherapy and to compare them with one- and two-dimensional dosimeters. Methods and Materials: The study is based on the gel dosimeter literature. Secondary data and images have been collected from different sources, such as guidelines, books, and the internet. Result: Analysis, verification and comparison of data from the treatment planning system (TPS) show that the gel dosimeter is an excellent tool for measuring three-dimensional (3D) dose distributions. The TPS-calculated data were in very good agreement with the dose distribution measured by the ferrous gel. The overall uncertainty in the ferrous-gel dose determination was considerably reduced using an optimized MRI acquisition protocol and a new MRI scanner. The method developed for comparing measured gel data with calculated treatment plans, the gel dosimetry method, proved useful for radiation treatment planning verification. For 1D and 2D film, the depth-dose and lateral RMSD values are 1.8% and 2%, and the maximum (Di-Dj) values are 2.5% and 8%, respectively. For 2D+(3D) film-gel and plan-gel comparisons, RMSDstruct and RMSDstoch are 2.3% & 3.6% and 1% & 1%, and the systematic deviations are -0.6% and 2.5%. The study finds that the 2D+(3D) film dosimeter performs better than the 1D and 2D dosimeters. Discussion: Gel dosimeters are quality control and quality assurance tools that will see future clinical application.
Keywords: gel dosimeters, phantom, RMSD, QC, detector
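The RMSD figures quoted above are root-mean-square deviations between measured and planned dose samples. A minimal sketch of the metric, with hypothetical percent-dose samples rather than the review's data:

```python
import math

def rmsd(measured, planned):
    """Root-mean-square deviation between gel-measured and TPS-planned
    dose samples, in the same units as the inputs."""
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, planned)) / n)

# Hypothetical percent-dose samples along a depth-dose curve
dev = rmsd([100.0, 98.0, 95.0, 90.0], [100.0, 97.0, 96.0, 88.0])
```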
Procedia PDF Downloads 152
1684 On the Convergence of the Mixed Integer Randomized Pattern Search Algorithm
Authors: Ebert Brea
Abstract:
We propose a novel direct search algorithm for identifying at least a local minimum of mixed integer nonlinear unconstrained optimization problems. The Mixed Integer Randomized Pattern Search Algorithm (MIRPSA), so called by the author, is based on a randomized pattern search, which is modified by the MIRPSA to find at least a local minimum of our problem. The MIRPSA has two main operations over the randomized pattern search: a moving operation and a shrinking operation. Each operation is carried out by the algorithm when a set of conditions holds. The convergence properties of the MIRPSA are analyzed using a Markov chain approach, represented by an infinite countable state space λ, where each state d(q) is defined by a measure of the qth randomized pattern search Hq, for all q in N. According to the algorithm, when a moving operation is carried out on the qth randomized pattern search Hq, the MIRPSA holds its state. If the MIRPSA instead carries out a shrinking operation on the qth randomized pattern search Hq, the algorithm visits the next state; that is, a shrinking operation at the qth state changes the qth state into the (q+1)th state. It is worth pointing out that the MIRPSA never returns to a visited state, because it only moves between states through shrinking operations. In this article, we describe the MIRPSA for mixed integer nonlinear unconstrained optimization problems in order to study its convergence properties in depth from a Markov chain viewpoint. We include a low-dimensional case to show more details of the MIRPSA when the algorithm is used to identify the minimum of a mixed integer quadratic function. Numerical examples are also shown in order to measure the performance of the MIRPSA.
Keywords: direct search, mixed integer optimization, random search, convergence, Markov chain
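The move/shrink structure described above can be sketched in a simplified, continuous-only analogue: "move" to a better random pattern point when one is found, otherwise "shrink" the pattern. This is not the authors' MIRPSA, which additionally handles integer variables and the conditions governing each operation:

```python
import random

def randomized_pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6,
                              trials=20, seed=1):
    """Simplified continuous analogue of a randomized pattern search.
    Each shrink halves the pattern size, mirroring the q -> q+1 state
    transitions in the Markov-chain analysis."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for _ in range(trials):
            cand = [xi + step * rng.uniform(-1, 1) for xi in x]
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True   # moving operation
                break
        if not improved:
            step *= shrink                         # shrinking operation
    return x, fx

# Minimize a simple quadratic; the minimizer is (3, 0)
best, val = randomized_pattern_search(lambda v: (v[0] - 3) ** 2 + v[1] ** 2,
                                      [0.0, 0.0])
```

Note that, exactly as the abstract argues, the step size (the analogue of the state index) only ever moves forward: moving operations keep the state, shrinking operations advance it, and no state is revisited.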
Procedia PDF Downloads 470
1683 Some Inequalities Related with Starlike Log-Harmonic Mappings
Authors: Melike Aydoğan, Dürdane Öztürk
Abstract:
Let H(D) be the linear space of all analytic functions defined on the open unit disc D. A log-harmonic mapping is a solution of a nonlinear elliptic partial differential equation in which w(z) ∈ H(D) is the second dilatation, with |w(z)| < 1 for all z ∈ D. The aim of this paper is to establish some inequalities for starlike log-harmonic functions of order α (0 ≤ α ≤ 1).
Keywords: starlike log-harmonic functions, univalent functions, distortion theorem
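The abstract does not display the defining equation. In the standard log-harmonic literature (e.g., Abdulhadi and Bshouty), which this formulation appears to follow, the nonlinear elliptic PDE in question is, subject to that identification:

```latex
\frac{\overline{f_{\bar z}}}{\overline{f}} \;=\; w(z)\,\frac{f_z}{f},
\qquad w \in H(\mathbb{D}),\quad |w(z)| < 1 \ \text{for all } z \in \mathbb{D},
```

so that, formally, $\log f$ is harmonic, which is what the name "log-harmonic" records.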
Procedia PDF Downloads 526
1682 Seismic Performance of Concrete Moment Resisting Frames in Western Canada
Authors: Ali Naghshineh, Ashutosh Bagchi
Abstract:
Performance-based seismic design concepts are increasingly being adopted in various jurisdictions. While the National Building Code of Canada (NBCC) is not fully performance-based, it provides some features of a performance-based code, such as displacement control and objective-based solutions. Performance evaluation is an important part of performance-based design. In this paper, the seismic performance of a set of code-designed 4-, 8- and 12-story moment resisting concrete frames located in Victoria, BC, in the western part of Canada, has been studied at different hazard levels, namely SLE (Service Level Event), DLE (Design Level Event) and MCE (Maximum Considered Event). The seismic performance of these buildings has been evaluated based on FEMA 356 and ATC 72 procedures and nonlinear time history analysis. Pushover analysis has been used to investigate the different performance levels of these buildings and adjust their design based on the corresponding target displacements. Since pushover analysis ignores higher mode effects, nonlinear dynamic time history analysis using a set of ground motion records has also been performed. Different types of ground motion records, such as crustal and subduction earthquake records, have been used in the dynamic analysis to determine their effects. Results obtained from pushover analysis for inter-story drift, displacement, shear and overturning moment are compared to those from the dynamic analysis.
Keywords: seismic performance, performance-based design, concrete moment resisting frame, crustal earthquakes, subduction earthquakes
Procedia PDF Downloads 264
1681 Effect of Radioprotectors on DNA Repair Enzyme and Survival of Gamma-Irradiated Cell Division Cycle Mutants of Schizosaccharomyces pombe
Authors: Purva Nemavarkar, Badri Narain Pandey, Jitendra Kumar
Abstract:
Introduction: The objective was to understand the effect of various radioprotectors on a DNA damage repair enzyme and on survival in gamma-irradiated wild-type and cdc mutants of S. pombe (fission yeast) cultured under permissive and restrictive conditions. The DNA repair process, as influenced by radioprotectors, was measured through the activity of DNA polymerase in the cells. The single cell gel electrophoresis (SCGE) or comet assay was employed to follow gamma-irradiation-induced DNA damage and the effect of radioprotectors. In addition, the effect of caffeine at different concentrations on the S-phase of the cell cycle was delineated. Materials and Methods: S. pombe cells were grown at the permissive temperature (25°C) and/or restrictive temperature (36°C) and then gamma-irradiated. Percentage survival and the activity of DNA polymerase (yPol II) were determined after post-irradiation incubation (5 h) with radioprotectors such as caffeine, curcumin, disulfiram and ellagic acid (at doses depending on their individual D37 values). The gamma-irradiated yeast cells (with and without the radioprotectors) were spheroplasted with the enzyme glusulase and subjected to electrophoresis. Radio-resistant cells were obtained by arresting cells in S-phase using transient treatment with hydroxyurea (HU), and the effect of caffeine at different concentrations on the S-phase of the cell cycle was studied. Results: The mutants of S. pombe showed an insignificant difference in survival when grown under permissive conditions. However, growth of these cells at the restrictive temperature leads to arrest in specific phases of the cell cycle in the different cdc mutants (cdc10: G1 arrest; cdc22: early S arrest; cdc17: late S arrest; cdc25: G2 arrest). All the cdc mutants showed a decrease in survival after gamma radiation when grown at permissive and restrictive temperatures. Inclusion of the radioprotectors at their respective concentrations during post-irradiation incubation increased cell survival.
The activity of the DNA polymerase enzyme (yPol II) increased significantly in cdc mutant cells exposed to gamma radiation. Following SCGE, a linear relationship was observed between the irradiation dose and the tail moments of the comets. The radioprotection of the fission yeast by the radioprotectors can be seen in the reduced tail moments of the yeast comets. Caffeine also exhibited its radioprotective ability in the radio-resistant S-phase cells obtained after HU treatment. Conclusions: The radioprotectors offered notable radioprotection in cdc mutants when added during irradiation. The present study showed activation of the DNA damage repair enzyme (yPol II) and an increase in survival after treatment with radioprotectors in gamma-irradiated wild-type and cdc mutants of S. pombe. The results presented here show the feasibility of applying SCGE in fission yeast to follow DNA damage and radioprotection at high doses, which is not feasible with other eukaryotes. Inclusion of caffeine at a 1 mM concentration in S-phase cells offered protection and did not decrease cell viability. At this minimal concentration, caffeine offered marked radioprotection.
Keywords: radiation protection, cell cycle, fission yeast, comet assay, S-phase, DNA repair, radioprotectors, caffeine, curcumin, SCGE
Procedia PDF Downloads 113
1680 Optimal Harmonic Filters Design of Taiwan High Speed Rail Traction System
Authors: Ying-Pin Chang
Abstract:
This paper presents a method combining particle swarm optimization with nonlinear time-varying evolution and orthogonal arrays (PSO-NTVEOA) for the planning of harmonic filters for a high speed railway traction system with specially connected transformers in unbalanced three-phase power systems. The objective is to simultaneously minimize the cost of the filter, the filter loss, and the total harmonic distortion of currents and voltages at each bus. An orthogonal array is first used to obtain the initial solution set. This set is then treated as the initial training sample. Next, the PSO-NTVEOA method parameters are determined by using matrix experiments with an orthogonal array, in which a minimal number of experiments has an effect that approximates full factorial experiments. This PSO-NTVEOA method is then applied to design optimal harmonic filters in the Taiwan High Speed Rail (THSR) traction system, where both rectifiers and inverters with IGBTs are used. From the results of the illustrative examples, the feasibility of the PSO-NTVEOA for designing an optimal passive harmonic filter for the THSR system is verified, and the design approach can greatly reduce the harmonic distortion. Three design schemes are compared: the V-V connection suppresses the 3rd-order harmonic, while the Scott and Le Blanc connections achieve better harmonic improvement than the V-V connection.
Keywords: harmonic filters, particle swarm optimization, nonlinear time-varying evolution, orthogonal arrays, specially connected transformers
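The textbook PSO core underlying the PSO-NTVEOA can be sketched as follows; this omits the nonlinear time-varying evolution of the parameters and the orthogonal-array initialization that are the paper's contributions, and minimizes a toy sphere function rather than a filter cost:

```python
import random

def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Plain particle swarm optimization on the box [-5, 5]^dim.
    In PSO-NTVEOA, w, c1 and c2 would instead evolve nonlinearly
    over the iterations."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                           # personal bests
    pf = [f(x) for x in X]
    g = P[min(range(n), key=lambda i: pf[i])][:]    # global best
    gf = min(pf)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    g, gf = X[i][:], fx
    return g, gf

best, val = pso(lambda v: sum(x * x for x in v), dim=2)
```

For the filter problem, `f` would be a weighted sum of filter cost, filter loss, and THD at each bus, evaluated through the network model.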
Procedia PDF Downloads 392
1679 Buildings Founded on Thermal Insulation Layer Subjected to Earthquake Load
Authors: David Koren, Vojko Kilar
Abstract:
Modern energy-efficient houses are often founded on a thermal insulation (TI) layer placed under the building's RC foundation slab. The purpose of the paper is to identify the potential problems of buildings founded on a TI layer from the seismic point of view. The two main goals of the study were to assess the seismic behavior of such buildings and to search for the critical structural parameters affecting the response of the superstructure as well as of the extruded polystyrene (XPS) layer. As a test building, a multi-storey RC frame structure with and without the XPS layer under the foundation slab was investigated using nonlinear dynamic (time-history) and static (pushover) analyses. The structural response was investigated with reference to the following performance parameters: i) lateral roof displacements of the building, ii) edge compressive and shear strains of the XPS, iii) horizontal accelerations of the superstructure, iv) plastic hinge patterns of the superstructure, v) the part of the foundation in compression, and vi) deformations of the underlying soil and vertical displacements of the foundation slab (i.e., identifying potential uplift). The results show that in the case of taller, stiff structures lying on firm soil, the use of XPS under the foundation slab may amplify the structural peak responses compared to building models without XPS under the foundation slab. The analysis revealed that the response of the superstructure as well as of the XPS is substantially affected by the stiffness of the foundation slab.
Keywords: extruded polystyrene (XPS), foundation on thermal insulation, energy-efficient buildings, nonlinear seismic analysis, seismic response, soil-structure interaction
Procedia PDF Downloads 301
1678 The Techno-Economic Comparison of Solar Power Generation Methods for Turkish Republic of North Cyprus
Authors: Mustafa Dagbasi, Olusola Bamisile, Adii Chinedum
Abstract:
The objective of this work is to examine and compare the economic and environmental feasibility of a 40 MW photovoltaic (PV) power plant and a 40 MW parabolic trough (PT) power plant installed in two different cities, Nicosia and Famagusta, in the Turkish Republic of Northern Cyprus (TRNC). The need for solar power technology around the world is also emphasized. Solar radiation and sunshine data for Nicosia and Famagusta are analyzed to assess the distribution of solar radiation, sunshine duration and air temperature. These two technologies with the same rated power of 40 MW are also compared with the performance of the proposed solar power plant at Bari, Italy. The project viability analysis is performed using the System Advisor Model (SAM) through annual energy production and economic parameters for both cities. It is found that for both cities, Nicosia and Famagusta, the investment is feasible for both the 40 MW PV power plant and the 40 MW PT power plant. The techno-economic analysis of these two solar power technologies with the same rated power and under the same environmental conditions shows that the PT plant produces more energy than the PV plant. It is also seen that if a PT plant is installed near an existing steam turbine power plant, the steam from the PT system can be used to run that turbine, which makes the investment more attractive. The high temperatures used to produce steam for the turbines in the PT plant can be supplemented by a secondary plant based on natural gas or other biofuels, serving as backup. Although the initial investment in the PT plant is higher, it has a higher economic return and occupies a smaller area than a PV plant of the same capacity.
Keywords: solar power, photovoltaic plant, parabolic trough plant, techno-economic analysis
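The annual-energy and economic comparison performed in SAM can be sketched at its simplest level as capacity-factor arithmetic plus a textbook levelized cost of energy. The capacity factors, capital cost, and O&M figures below are hypothetical placeholders, not the study's SAM outputs:

```python
def annual_energy_gwh(rated_mw, capacity_factor):
    """Annual energy production: rated power x capacity factor x 8760 h."""
    return rated_mw * capacity_factor * 8760 / 1000.0

def simple_lcoe(capex, annual_opex, annual_mwh, years, discount=0.08):
    """Textbook LCOE: discounted lifetime cost / discounted lifetime energy."""
    df = [(1 + discount) ** -t for t in range(1, years + 1)]
    cost = capex + annual_opex * sum(df)
    energy = annual_mwh * sum(df)
    return cost / energy   # currency units per MWh

# Hypothetical 40 MW plants: PV at CF ~ 0.20, PT with storage at CF ~ 0.28
pv_gwh = annual_energy_gwh(40, 0.20)
pt_gwh = annual_energy_gwh(40, 0.28)
pv_lcoe = simple_lcoe(capex=48e6, annual_opex=0.8e6,
                      annual_mwh=pv_gwh * 1000, years=25)
```

SAM performs the same comparison with hourly irradiance, system losses, and full financial models, but the direction of the result quoted above (higher PT output at the same rated power) follows from the higher capacity factor.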
Procedia PDF Downloads 283