Search results for: computational physics
1480 Flow Characterization in Complex Terrain for Aviation Safety
Authors: Adil Rasheed, Mandar Tabib
Abstract:
The paper describes the ability of a high-resolution Computational Fluid Dynamics model to predict terrain-induced turbulence and wind shear close to the ground. Various sensitivity studies to choose the optimal simulation setup for modeling the flow characteristics in a complex terrain are presented. The capabilities of the model are demonstrated by applying it to the Sandnessjøen Airport, Stokka in Norway, an airport that is located in a mountainous area. The model is able to forecast turbulence in real time and trigger an alert when atmospheric conditions might result in high wind shear and turbulence.
Keywords: aviation safety, terrain-induced turbulence, atmospheric flow, alert system
Procedia PDF Downloads 416
1479 Alteration of Bone Strength in Osteoporosis of Mouse Femora: Computational Study Based on Micro CT Images
Authors: Changsoo Chon, Sangkuy Han, Donghyun Seo, Jihyung Park, Bokku Kang, Hansung Kim, Keyoungjin Chun, Cheolwoong Ko
Abstract:
The purpose of the study is to develop a finite element model based on 3D micro-CT images of bone structure and to analyze the stress distribution in osteoporotic mouse femora. The finite element results show that early osteoporosis in the mouse model decreased bone density in the trabecular region, whereas bone density in the cortical region increased.
Keywords: micro-CT, finite element analysis, osteoporosis, bone strength
Procedia PDF Downloads 363
1478 Hierarchical Checkpoint Protocol in Data Grids
Authors: Rahma Souli-Jbali, Minyar Sassi Hidri, Rahma Ben Ayed
Abstract:
Grids of computing nodes have emerged as a representative means of connecting distributed computers or resources scattered all over the world for the purpose of computing and distributed storage. Since fault tolerance becomes complex in a decentralized grid environment where resource availability varies, checkpointing can be combined with replication in data grids. The objective of our work is to present fault tolerance in data grids with a data-replication-driven model based on clustering. The performance of the protocol is evaluated with the OMNeT++ simulator. The computational results show the efficiency of our protocol in terms of recovery time and the number of processes involved in rollbacks.
Keywords: data grids, fault tolerance, clustering, chandy-lamport
Procedia PDF Downloads 341
1477 Solving the Pseudo-Geometric Traveling Salesman Problem with the “Union Husk” Algorithm
Authors: Boris Melnikov, Ye Zhang, Dmitrii Chaikovskii
Abstract:
This study explores the pseudo-geometric version of the extensively researched Traveling Salesman Problem (TSP), proposing a novel generalization of existing algorithms which are traditionally confined to the geometric version. By adapting the "onion husk" method and introducing auxiliary algorithms, this research fills a notable gap in the existing literature. Through computational experiments using randomly generated data, several metrics were analyzed to validate the proposed approach's efficacy. Preliminary results align with expected outcomes, indicating a promising advancement in TSP solutions.
Keywords: optimization problems, traveling salesman problem, heuristic algorithms, “onion husk” algorithm, pseudo-geometric version
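As a point of reference for the geometric ingredient mentioned above, the sketch below (an illustrative Python snippet, not the authors' algorithm) peels a random point set into nested convex-hull layers, the "onion husks" that such heuristics subsequently stitch into a tour; the point count and random seed are arbitrary assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def onion_layers(points):
    """Peel a planar point set into nested convex hull layers ("onion husks")."""
    remaining = np.asarray(points, dtype=float)
    layers = []
    while len(remaining) >= 3:
        hull = ConvexHull(remaining)
        layers.append(remaining[hull.vertices])
        remaining = np.delete(remaining, hull.vertices, axis=0)
    if len(remaining):
        layers.append(remaining)  # innermost leftover points
    return layers

# Random instance; a geometric TSP heuristic would now stitch adjacent layers
# into a single tour, which is the part not reproduced here.
rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 100.0, size=(40, 2))
print([len(layer) for layer in onion_layers(pts)])
```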
Procedia PDF Downloads 207
1476 Application of Wavelet Based Approximation for the Solution of Partial Integro-Differential Equation Arising from Viscoelasticity
Authors: Somveer Singh, Vineet Kumar Singh
Abstract:
This work contributes a numerical method based on Legendre wavelet approximation for the treatment of a partial integro-differential equation (PIDE). Operational matrices of Legendre wavelets reduce the solution of the PIDE to a system of algebraic equations. Some useful results concerning the computational order of convergence and error estimates associated with the suggested scheme are presented. Illustrative examples are provided to show the effectiveness and accuracy of the proposed numerical method.
Keywords: legendre wavelets, operational matrices, partial integro-differential equation, viscoelasticity
Procedia PDF Downloads 448
1475 A Hebbian Neural Network Model of the Stroop Effect
Authors: Vadim Kulikov
Abstract:
The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment revealing various mechanisms behind semantic, attentional, behavioral and perceptual processing. The Stroop task is known to exhibit asymmetry. Reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by the incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic interference and how much to response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed which is based on the philosophical idea developed by the author that the mind operates as a collection of different information processing modalities, such as different sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where ‘framework’ attempts to generalize the concepts of modality, perspective and ‘point of view’. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero. The network is trained using Hebbian learning to establish connections (corresponding to ‘coherence’ in framework theory) between these different modalities. The amount of data fed into the network is supposed to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors to spoken color-names. After the training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous time recurrent neural networks (CTRNN). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and the architecture is simpler and biologically more plausible.
Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop
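As a rough illustration of the practice-driven asymmetry described above (and not the author's CTRNN architecture), the Python sketch below applies a plain Hebbian update with unequal amounts of training between a text-reading pathway and a color-naming pathway; the network sizes, learning rate, and practice counts are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # four color concepts: red, green, blue, yellow

# Connections between "framework" blocks start near zero, mirroring the
# untrained network described in the abstract.
W_text_to_speech = np.zeros((n, n))
W_color_to_speech = np.zeros((n, n))

def hebbian_update(W, pre, post, lr=0.01):
    """Plain Hebbian rule: dW = lr * post * pre^T."""
    return W + lr * np.outer(post, pre)

def one_hot(i):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Assumed training regime: word reading is practiced far more often than
# color naming, which is what produces the Stroop asymmetry in the model.
for _ in range(5000):
    c = rng.integers(n)
    W_text_to_speech = hebbian_update(W_text_to_speech, one_hot(c), one_hot(c))
for _ in range(500):
    c = rng.integers(n)
    W_color_to_speech = hebbian_update(W_color_to_speech, one_hot(c), one_hot(c))

# Incongruent stimulus: the word "red" printed in green ink.
word, ink = one_hot(0), one_hot(1)
drive = W_text_to_speech @ word + W_color_to_speech @ ink
print(drive)  # the word pathway dominates, so color naming is slowed
```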
Procedia PDF Downloads 267
1474 Cuckoo Search Optimization for Black Scholes Option Pricing
Authors: Manas Shah
Abstract:
The Black-Scholes option pricing model is one of the most important concepts in the modern world of computational finance. However, its practical use can be challenging, as one of the input parameters must be estimated: the implied volatility of the underlying security. The more precisely this value is estimated, the more accurate the corresponding estimates of theoretical option prices will be. Here, we present a novel model based on Cuckoo Search Optimization (CS) which finds more precise estimates of implied volatility than Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA).
Keywords: black scholes model, cuckoo search optimization, particle swarm optimization, genetic algorithm
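For context, the quantity being searched for is the volatility that makes the Black-Scholes price match an observed market price. The Python sketch below shows the standard call-price formula and the squared-error objective that any of the optimizers named above would minimize; the crude grid search merely stands in for the cuckoo search, and the numeric values are illustrative assumptions rather than data from the paper.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol_objective(sigma, market_price, S, K, T, r):
    """Squared pricing error -- the fitness an optimizer (CS, PSO, GA) would minimize."""
    return (bs_call(S, K, T, r, sigma) - market_price) ** 2

# Crude grid search stands in for the cuckoo search loop (illustrative values only).
candidates = [0.01 * i for i in range(1, 200)]
best = min(candidates, key=lambda s: implied_vol_objective(s, 10.45, 100, 100, 1.0, 0.05))
print(f"implied volatility ~ {best:.2f}")
```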
Procedia PDF Downloads 453
1473 Effect of the Applied Bias on Miniband Structures in Dimer Fibonacci InAs/Ga1-xInxAs Superlattices
Authors: Z. Aziz, S. Terkhi, Y. Sefir, R. Djelti, S. Bentata
Abstract:
The effect of a uniform electric field across multibarrier systems (InAs/InxGa1-xAs) is exhaustively explored by a computational model using the exact Airy function formalism and the transfer-matrix technique. In the case of the biased DFHBSL structure, a strong reduction in transmission properties was observed, and the width of the miniband structure decreases linearly with the increase of the applied bias. This is due to the confinement of the states in the miniband structure, which becomes increasingly important (Wannier-Stark effect).
Keywords: dimer fibonacci height barrier superlattices, singular extended state, exact airy function, transfer matrix formalism
Procedia PDF Downloads 306
1472 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
The skew detection and correction form an important part of digital document analysis. This is because uncompensated skew can deteriorate document features and can complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once the documents have been digitized through the scanning system and binarization also achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria. The most important performance criteria are the accuracy of skew angle detection, the range of detectable skew angles, the speed of processing the image, computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angle step size is. Finer steps consequently consume more time and memory space for increased accuracy, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a tradeoff between accuracy and speed. So a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm presents a solution to the trade-off between memory space, running time and accuracy. Our algorithm starts with a first step of angle estimation accurate to zero decimal places using the standard Hough Transform algorithm, achieving minimal running time and space but lacking relative accuracy. Then, to increase accuracy, supposing the estimated angle found using the basic Hough algorithm is x degrees, we run the basic algorithm again over the range between ±x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. Our skew estimation and correction procedure for text images is implemented using MATLAB. The memory space and processing time are also tabulated under the assumption of skew angles between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
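The coarse-to-fine idea described above can be sketched independently of the authors' MATLAB implementation. The following Python snippet votes in a Hough-style accumulator (with the angle measured along the text lines rather than along their normals), first at whole-degree resolution and then in progressively narrower windows; the window widths, synthetic data, and stopping rule are assumptions for illustration only.

```python
import numpy as np

def dominant_text_angle(points, angles):
    """Hough-style voting: for each candidate skew angle, project ink pixels onto
    the direction perpendicular to the text lines and count how many fall into
    the single most populated bin (the strongest collinear set)."""
    x, y = points[:, 0], points[:, 1]
    best_angle, best_votes = angles[0], -1
    for a in angles:
        t = np.deg2rad(a)
        rho = np.round(y * np.cos(t) - x * np.sin(t)).astype(int)
        votes = np.bincount(rho - rho.min()).max()
        if votes > best_votes:
            best_angle, best_votes = a, votes
    return best_angle

def coarse_to_fine_skew(points, max_angle=45.0, decimals=2):
    """Coarse-to-fine refinement: whole degrees first, then one extra decimal
    place per pass in a narrow window around the previous estimate."""
    estimate = dominant_text_angle(points, np.arange(-max_angle, max_angle + 1.0, 1.0))
    step = 1.0
    for _ in range(decimals):
        step /= 10.0
        window = np.arange(estimate - 10 * step, estimate + 10 * step + step, step)
        estimate = dominant_text_angle(points, window)
    return estimate

# Synthetic "document": ink pixels lying on 20 parallel text lines skewed by 3.7 degrees.
rng = np.random.default_rng(1)
xs = rng.uniform(0, 500, 2000)
ys = np.tan(np.deg2rad(3.7)) * xs + 15.0 * rng.integers(0, 20, 2000)
print(round(coarse_to_fine_skew(np.column_stack([xs, ys])), 2))
```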
Procedia PDF Downloads 159
1471 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The topic of the presented materials concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first is the engine exhaust manifold, turbocharger, and catalytic converters, which are called the "hot part." The second part is the gas exhaust system, which contains elements exclusively for reducing exhaust noise (mufflers, resonators), the accepted designation of which is the "cold part." The design of the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but with the condition of accurately specifying the input parameters, namely, the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of an engine with a "hot part". Getting this data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linearity mode) do not allow the calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part" based on a complex of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic impedance of radiation of the outlet pipe into open space) with the result in the form of the input impedance of the "cold part". The second part is a finite element simulation of the "hot part" of the exhaust system (taking into account acoustic characteristics of catalytic units and geometry of the turbocharger) with the result in the form of the input impedance of the "hot part". The third part of the technique consists of the mathematical processing of the results according to the proposed formula for the convergence of the mathematical series of summation of multiple reflections of the acoustic signal between the "cold part" and the "hot part". This is followed by conducting a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between the "hot part" and "cold part" of the exhaust system and subsequent processing of test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of the mathematical processing of all calculated and experimental data to obtain a result in the form of a spectrum of the amplitude of the engine noise and its acoustic impedance.
Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
Procedia PDF Downloads 59
1470 Monte Carlo Methods and Statistical Inference of Multitype Branching Processes
Authors: Ana Staneva, Vessela Stoimenova
Abstract:
A parametric estimation of the MBP with a Power Series offspring distribution family is considered in this paper. The MLE for the parameters is obtained in the case when the observable data are incomplete and consist only of the generation sizes of the family tree of the MBP. The parameter estimation is carried out using the Monte Carlo EM algorithm. The estimates of the posterior distribution and of the offspring distribution parameters are calculated using the Bayesian approach and the Gibbs sampler. The article proposes various examples with bivariate branching processes together with computational results, simulation and an implementation using R.
Keywords: Bayesian, branching processes, EM algorithm, Gibbs sampler, Monte Carlo methods, statistical estimation
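As a minimal illustration of the kind of incomplete data these estimators work on (the paper itself uses R), the Python sketch below simulates a two-type branching process with Poisson offspring, a member of the Power Series family, and records only the generation sizes; the offspring means and initial population are assumed values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Mean offspring matrix m[i, j]: expected number of type-j children per type-i
# particle. The specific numbers are illustrative assumptions.
m = np.array([[0.6, 0.5],
              [0.4, 0.7]])

def simulate_generation_sizes(z0, generations):
    """Return only the generation sizes Z_n (the incomplete data the MLE uses)."""
    sizes = [np.asarray(z0, dtype=int)]
    for _ in range(generations):
        z = sizes[-1]
        nxt = np.zeros(2, dtype=int)
        for i in range(2):              # each type-i parent reproduces independently
            for j in range(2):
                nxt[j] += rng.poisson(m[i, j], size=z[i]).sum()
        sizes.append(nxt)
        if nxt.sum() == 0:              # extinction
            break
    return np.array(sizes)

print(simulate_generation_sizes([5, 5], 10))
```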
Procedia PDF Downloads 421
1469 Spectra Analysis in Sunset Color Demonstrations with a White-Color LED as a Light Source
Authors: Makoto Hasegawa, Seika Tokumitsu
Abstract:
Spectra of light beams emitted from white-color LED torches are different from those of conventional electric torches. In order to confirm whether white-color LED torches can be used as light sources for popular sunset color demonstrations in spite of such differences, spectra of travelled light beams and scattered light beams, with each of a white-color LED torch (composed of a blue LED and yellow-color fluorescent material) and a conventional electric torch as a light source, were measured and compared with each other in a 50 cm-long water tank for sunset color demonstration experiments. A suspension liquid was prepared from acrylic emulsion and tap water in the water tank, and light beams from the white-color LED torch or the conventional electric torch were allowed to travel through this suspension liquid. Sunset-like color was actually observed when the white-color LED torch was used as the light source in sunset color demonstrations. However, the observed colors, when viewed with the naked eye, look slightly different from those obtainable with the conventional electric torch. At the same time, with the white-color LED, changes in colors in the short to middle wavelength regions were recognized with careful observation. From those results, white-color LED torches are confirmed to be applicable as light sources in sunset color demonstrations, although certain attention has to be paid. More advanced classes can be successfully performed with white-color LED torches as light sources.
Keywords: blue sky demonstration, sunset color demonstration, white LED torch, physics education
Procedia PDF Downloads 284
1468 CFD Effect of the Tidal Grating in Opposite Directions
Authors: N. M. Thao, I. Dolguntseva, M. Leijon
Abstract:
Flow blockages that increase the flow are considered vital equipment for marine current energy conversion. However, the shape of these devices affects the energy extracted during operation. The present work investigates the effect of two configurations of a grating, convergent and divergent, located upstream, on the water flow velocity. A Computational Fluid Dynamics simulation studies the flow characteristics by using the ANSYS Fluent solver for these specified arrangements of the grating. The results indicate distinct differences in flow velocity between the "convergent" and "divergent" grating placements in confined conditions. Furthermore, the velocity in the case of the convergent grating is higher than that of the divergent grating.
Keywords: marine current energy, converter, turbine grating, RANS simulation, water flow velocity
Procedia PDF Downloads 409
1467 Numerical Investigation of Flow Past in a Staggered Tube Bundle
Authors: Kerkouri Abdelkadir
Abstract:
Numerical calculations of turbulent flows are one of the most prominent modern interests in various engineering applications. Due to the difficulty of predicting, following and studying this flow with computational fluid dynamics (CFD), in this paper we present a numerical study of the flow past a staggered tube bundle, using the CFD code ANSYS FLUENT with several turbulence models: the k-ε, k-ω and SST approaches. The flow is modeled based on the experimental studies. The predictions of mean velocities are in very good agreement with detailed LDA (Laser Doppler Anemometry) measurements performed at 8 stations along the depth of the array. The sizes of the recirculation zones behind the cylinders are also predicted. The simulations are conducted for a Reynolds number of 12858, chosen to match the experimental results.
Keywords: flow, tube bundle, ANSYS Fluent, CFD, turbulence, LDA, RANS (k-ε, k-ω, SST)
Procedia PDF Downloads 164
1466 Fast and Accurate Finite-Difference Method Solving Multicomponent Smoluchowski Coagulation Equation
Authors: Alexander P. Smirnov, Sergey A. Matveev, Dmitry A. Zheltkov, Eugene E. Tyrtyshnikov
Abstract:
We propose a new computational technique for the multidimensional (multicomponent) Smoluchowski coagulation equation. Using low-rank approximations in Tensor Train format of both the solution and the coagulation kernel, we accelerate the classical finite-difference Runge-Kutta scheme while keeping its level of accuracy. The complexity of the chosen finite-difference scheme is reduced from O(N^2d) to O(d^2 N log N), where N is the number of grid nodes and d is the dimensionality of the problem. The efficiency and the accuracy of the new method are demonstrated on a concrete problem with a known analytical solution.
Keywords: tensor train decomposition, multicomponent Smoluchowski equation, runge-kutta scheme, convolution
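For orientation, the sketch below evaluates the right-hand side of the discrete one-component Smoluchowski equation by the direct double sum; this is the baseline cost that the low-rank Tensor Train treatment described above is designed to reduce (the multicomponent case adds one size index per component), and the kernel, grid size, and time step are illustrative assumptions.

```python
import numpy as np

def smoluchowski_rhs(n, K):
    """Right-hand side of the discrete (one-component) Smoluchowski equation:
        dn_k/dt = 1/2 * sum_{i+j=k} K_ij n_i n_j  -  n_k * sum_j K_kj n_j
    n[k] stores the concentration of (k+1)-mers. Direct evaluation costs O(N^2)
    per step -- the cost the low-rank treatment is designed to reduce."""
    N = len(n)
    gain = np.zeros(N)
    for k in range(N):
        # all ordered pairs of sizes (i+1) + (j+1) = k+1
        for i in range(k):
            j = k - 1 - i
            gain[k] += 0.5 * K[i, j] * n[i] * n[j]
    loss = n * (K @ n)
    return gain - loss

# Constant kernel, monomers only at t = 0; one explicit Euler step as a smoke test.
N = 64
n = np.zeros(N)
n[0] = 1.0
K = np.ones((N, N))
n = n + 1e-3 * smoluchowski_rhs(n, K)
print(n[:3])
```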
Procedia PDF Downloads 432
1465 Nitrogen Effects on Ignition Delay Time in Supersonic Premixed and Diffusion Flames
Authors: A. M. Tahsini
Abstract:
A computational study of two-dimensional supersonic reacting hydrogen-air flows is performed to investigate the nitrogen effects on ignition delay time for premixed and diffusion flames. The chemical reaction is treated using detailed kinetics, and the advection upstream splitting method is used to calculate the numerical inviscid fluxes. The results show that only in the stoichiometric condition, for both premixed and diffusion flames, is there a monotone dependency of the ignition delay time on the nitrogen addition. In other situations, the optimal condition from the ignition viewpoint should be found using numerical investigations.
Keywords: diffusion flame, ignition delay time, mixing layer, numerical simulation, premixed flame, supersonic flow
Procedia PDF Downloads 463
1464 Reliability Based Topology Optimization: An Efficient Method for Material Uncertainty
Authors: Mehdi Jalalpour, Mazdak Tootkaboni
Abstract:
We present a computationally efficient method for reliability-based topology optimization under material properties uncertainty, which is assumed to be lognormally distributed and correlated within the domain. Computational efficiency is achieved through estimating the response statistics with second-order stochastic perturbation, using these statistics to fit an appropriate distribution that follows the empirical distribution of the response, and employing an efficient gradient-based optimizer. The proposed algorithm is utilized for the design of new structures, and the changes in the optimized topology are discussed for various levels of target reliability and correlation strength. Predictions were verified through comparison with results obtained using Monte Carlo simulation.
Keywords: material uncertainty, stochastic perturbation, structural reliability, topology optimization
Procedia PDF Downloads 605
1463 Aerodynamic Analysis of a Frontal Deflector for Vehicles
Authors: C. Malça, N. Alves, A. Mateus
Abstract:
This work was one of the tasks of the Manufacturing2Client project, whose objective was to develop a frontal deflector to be commercialized in the automotive industry, using new design and manufacturing methods. In this task, in particular, it was proposed to develop the ability to predict computationally the aerodynamic influence of flow over vehicles, in an effort to reduce fuel consumption in vehicles from class 3 to 8. With this aim, two deflector models were developed and their aerodynamic performance analyzed. The aerodynamic study was done using the Computational Fluid Dynamics (CFD) software Ansys CFX and allowed the calculation of the drag coefficient caused by the vehicle motion for the different configurations considered. Moreover, the reduction of diesel consumption and carbon dioxide (CO2) emissions associated with the optimized deflector geometry could be assessed.
Keywords: aerodynamic analysis, CFD, CO2 emissions, drag coefficient, frontal deflector, fuel consumption
Procedia PDF Downloads 407
1462 Non-Isothermal Stationary Laminar Oil Flow Numerical Simulation
Authors: Daniyar Bossinov
Abstract:
This paper considers a non-isothermal stationary waxy crude oil flow in a two-dimensional axisymmetric pipe with the transition of a Newtonian fluid to a non-Newtonian fluid. The viscosity and yield stress of waxy crude oil are highly dependent on temperature changes. During the hot pumping of waxy crude oil through a buried pipeline, a non-isothermal flow occurs due to heat transfer to the surrounding soil. This leads to a decrease in flow temperature, an increase in viscosity, the appearance of yield stress, the crystallization of wax, and the deposition of solid particles on the pipeline's inner wall. The deposition of oil solid particles reduces the pipeline flow area and leads to the appearance of a stagnant zone with thermal insulation in the near-wall area. The waxy crude oil properties change: at low temperatures, the Newtonian fluid transitions to a non-Newtonian fluid. One-dimensional modeling of a non-isothermal waxy crude oil flow in a two-dimensional axisymmetric pipeline, by traditional averaging of temperature and velocity over the pipeline cross-section, does not explain this physical phenomenon. Therefore, in this work, a two-dimensional model of the flow and heat transfer of waxy oil is constructed. The calculated data show the transition of a Newtonian fluid to a non-Newtonian fluid due to the heat exchange of waxy oil with the environment.
Keywords: non-isothermal laminar flow, waxy crude oil, stagnant zone, yield stress
Procedia PDF Downloads 27
1461 Facial Emotion Recognition Using Deep Learning
Authors: Ashutosh Mishra, Nikhil Goyal
Abstract:
A 3D facial emotion recognition model based on deep learning is proposed in this paper. Two convolution layers and a pooling layer are employed in the deep learning architecture. After the convolution process, the pooling is performed. The probabilities for the various classes of human faces are calculated using the sigmoid activation function. To verify the efficiency of deep learning-based systems, a set of face images is used; the Kaggle dataset serves to verify the accuracy of the deep learning-based face recognition model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques, despite significant gains in representation precision due to the nonlinearity of deep image representations.
Keywords: facial recognition, computational intelligence, convolutional neural network, depth map
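A minimal sketch of the architecture as described (two convolution layers, pooling, and a sigmoid output) is given below in Keras; the input resolution, filter counts, and number of emotion classes are assumptions, since the abstract does not state them, and a softmax output would be the more common choice for mutually exclusive classes.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed 48x48 grayscale inputs and 7 emotion classes as placeholders for the
# Kaggle data mentioned in the abstract.
model = keras.Sequential([
    keras.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # first convolution layer
    layers.Conv2D(64, (3, 3), activation="relu"),   # second convolution layer
    layers.MaxPooling2D((2, 2)),                    # pooling after the convolutions
    layers.Flatten(),
    layers.Dense(7, activation="sigmoid"),          # sigmoid class scores, as in the abstract
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```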
Procedia PDF Downloads 231
1460 CFD Simulations to Study the Cooling Effects of Different Greening Modifications
Authors: An-Shik Yang, Chih-Yung Wen, Chiang-Ho Cheng, Yu-Hsuan Juan
Abstract:
The objective of this study is to conduct computational fluid dynamics (CFD) simulations for evaluating the cooling efficacy of vegetation planted in a public park in Taipei, Taiwan. In probing the impacts of park renewal, by means of adding three pavilions and supplementary green areas, on urban microclimates, the simulated results revealed that the park with a higher green coverage ratio (GCR) tended to experience a better cooling effect. These findings can be used to explore the effects of different greening modifications on urban environments for achieving effective thermal comfort in urban public spaces.
Keywords: CFD simulations, Green Coverage Ratio, Urban heat island, Urban Public Park
Procedia PDF Downloads 492
1459 A Method for Improving the Embedded Runge Kutta Fehlberg 4(5)
Authors: Sunyoung Bu, Wonkyu Chung, Philsu Kim
Abstract:
In this paper, we introduce a method for improving the embedded Runge-Kutta-Fehlberg 4(5) method. At each integration step, the proposed method is comprised of two equations, for the solution and the error, respectively. This solution and error are obtained by solving an initial value problem whose solution carries the information of the error at each integration step. The constructed algorithm controls both the error and the time step size simultaneously and shows good performance in computational cost compared to the original method. To assess its effectiveness, the EULR problem is numerically solved.
Keywords: embedded Runge-Kutta-Fehlberg method, initial value problem, EULR problem, integration step
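The abstract does not give the modified equations, but the accept/reject and step-size logic common to embedded pairs can be sketched as follows; the snippet uses the low-order Heun-Euler 2(1) pair to stay short (RKF 4(5) differs only in its Butcher tableau), and the tolerance and test problem are arbitrary choices.

```python
import numpy as np

def embedded_step_control(f, t0, y0, t_end, tol=1e-6, h=0.1):
    """Adaptive integration with an embedded pair: the difference between the
    higher- and lower-order solutions estimates the local error, which drives
    both step acceptance and the next step size."""
    t, y = t0, np.asarray(y0, dtype=float)
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + 0.5 * h * (k1 + k2)   # 2nd-order solution
        y_low = y + h * k1                 # 1st-order embedded solution
        err = np.max(np.abs(y_high - y_low)) + 1e-16
        if err <= tol:                     # accept the step
            t, y = t + h, y_high
        # error and step size are controlled together
        h *= min(2.0, max(0.2, 0.9 * (tol / err) ** 0.5))
    return t, y

# Smoke test on y' = -y, y(0) = 1; the exact answer at t = 1 is exp(-1) ~ 0.3679.
print(embedded_step_control(lambda t, y: -y, 0.0, [1.0], 1.0))
```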
Procedia PDF Downloads 463
1458 Environmental Monitoring by Using Unmanned Aerial Vehicle (UAV) Images and Spatial Data: A Case Study of Mineral Exploitation in Brazilian Federal District, Brazil
Authors: Maria De Albuquerque Bercot, Caio Gustavo Mesquita Angelo, Daniela Maria Moreira Siqueira, Augusto Assucena De Vasconcellos, Rodrigo Studart Correa
Abstract:
Mining is an important socioeconomic activity in Brazil, although it negatively impacts the environment. Mineral operations cause irreversible changes in topography, removal of vegetation and topsoil, habitat destruction, displacement of fauna, loss of biodiversity, soil erosion, siltation of watercourses, and have the potential to exacerbate climate change. Due to these impacts and its pollution potential, mining activity in Brazil is legally subject to environmental licensing. Unlicensed mining operations, or operations that do not abide by the terms of an obtained license, are treated as environmental crimes in the country. This work reports a case analyzed in the Forensic Institute of the Brazilian Federal District Civil Police. The case consisted of detecting illegal aspects of sand exploitation from a licensed mine in the Federal District, near the city of Brasilia. The fieldwork covered an area of roughly 6 ha, which was surveyed with an unmanned aerial vehicle (UAV) (PHANTOM 3 ADVANCED). The UAV overflight took about 20 min, with a maximum flight height of 100 m. In total, 592 georeferenced UAV images were obtained and processed in photogrammetric software (AGISOFT PHOTOSCAN 1.1.4), which generated a mosaic of georeferenced images and a 3D model in less than six working hours. The 3D model was analyzed in forensic software (MAPTEK I-SITE FORENSIC 2.2) for accurate modeling and volumetric analysis. To ensure the 3D model was a true representation of the mine site, coordinates of ten control points and reference measures were taken during fieldwork and compared to the respective spatial data in the model. Finally, these spatial data were used for measuring the mining area, excavation depth and volume of exploited sand. Results showed that the mine holder had not complied with some terms and conditions stated in the granted license, such as sand exploitation beyond the authorized extension, depth and volume. The ease, accuracy and speed of the procedures used in this case highlight UAV imagery and computational photogrammetry as efficient tools for outdoor forensic exams, especially on environmental issues.
Keywords: computational photogrammetry, environmental monitoring, mining, UAV
Procedia PDF Downloads 319
1457 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137
Authors: Abdulsalam M. Alhawsawi
Abstract:
Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons along the energy spectrum. Some photon energies are hard to find in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium Detector (HPGe) based on the 662 keV gamma-ray photon emitted from Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for a point source and a Marinelli beaker geometry. In the case of a Marinelli beaker filled with water, the efficiency of the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5 simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137
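One plausible reading of this procedure, shown purely as a hedged sketch rather than the authors' own formula, is to transfer the measured 662 keV efficiency to other energies through the ratio of MCNP5-simulated efficiencies and then use the result in the usual activity equation; all numbers below are illustrative, not data from the paper.

```python
def scaled_efficiency(eff_cs137_measured, eff_sim_at_energy, eff_sim_at_662):
    """Transfer the measured 662 keV full-energy-peak efficiency to another
    photon energy using the ratio of simulated efficiencies (an assumption
    about the approach, not a formula quoted from the abstract)."""
    return eff_cs137_measured * eff_sim_at_energy / eff_sim_at_662

def activity(net_counts, live_time_s, emission_probability, efficiency):
    """Standard activity estimate in Bq from a net peak area."""
    return net_counts / (live_time_s * emission_probability * efficiency)

# Illustrative numbers only:
eff_59 = scaled_efficiency(eff_cs137_measured=0.020,
                           eff_sim_at_energy=0.035, eff_sim_at_662=0.021)
print(eff_59)
print(activity(net_counts=15000, live_time_s=600,
               emission_probability=0.359, efficiency=eff_59))
```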
Procedia PDF Downloads 117
1456 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering
Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi
Abstract:
In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA. Accordingly, identifying the tissue origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and precise treatment protocols. Open chromatin regions are important epigenetic features of DNA that reflect cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue of origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to substantial cost and time. To overcome these limitations, the idea of predicting OCRs from WGS is of particular importance. In this regard, we proposed a computational approach to target the prediction of open chromatin regions as an important epigenetic feature from cell-free DNA whole genome sequence data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, and the most probable open chromatin regions are predicted from whole genome sequencing data. Our method integrates signal processing with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples of the ATAC-DB database. The percentage of overlap between predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As is evident, OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, which showed agreement of around 52.04% and ~78% with all genes and with the housekeeping genes, respectively. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, which led to a tool named OCRDetector, with some restrictions such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and consideration of multiple features. However, we implemented a graph signal clustering based on a single depth feature in an unsupervised manner, which resulted in faster performance and decent accuracy.
Overall, we tried to investigate the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering
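A heavily simplified Python sketch of the pipeline listed above follows: per-window count normalization, Discrete Fourier Transform features, a signed correlation graph, and a greedy pivot clustering pass standing in for the linear-programming graph cut; the window length, correlation threshold, and toy depth profiles are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def ocr_cluster_sketch(depth_windows, corr_threshold=0.3):
    """Simplified stand-in for the described pipeline.

    depth_windows: (n_windows, window_len) array of local cfDNA sequencing depth."""
    # 1. count normalization (per-window mean depth)
    x = depth_windows / (depth_windows.mean(axis=1, keepdims=True) + 1e-9)
    # 2. Discrete Fourier Transform features (low-frequency magnitudes, DC dropped)
    feats = np.abs(np.fft.rfft(x, axis=1))[:, 1:9]
    # 3. signed graph from pairwise correlations of the spectral features
    corr = np.corrcoef(feats)
    signed = np.where(corr > corr_threshold, 1, -1)
    np.fill_diagonal(signed, 0)
    # 4. greedy pivot correlation clustering (replacing the LP-based graph cut)
    unassigned = list(range(len(x)))
    labels = np.full(len(x), -1)
    cluster = 0
    while unassigned:
        pivot = unassigned.pop(0)
        members = [pivot] + [j for j in unassigned if signed[pivot, j] > 0]
        labels[members] = cluster
        unassigned = [j for j in unassigned if j not in members]
        cluster += 1
    return labels

# Toy data: ten windows with a periodic, nucleosome-like depth pattern and ten
# flat noisy windows; the window length and patterns are illustrative assumptions.
t = np.arange(200)
periodic = 10 + 3 * np.sin(2 * np.pi * t / 50)
windows = np.vstack([periodic + rng.normal(0, 0.5, 200) for _ in range(10)]
                    + [10 + rng.normal(0, 0.5, 200) for _ in range(10)])
print(ocr_cluster_sketch(windows))
```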
Procedia PDF Downloads 150
1455 Optimization of Structures Subjected to Earthquake
Authors: Alireza Lavaei, Alireza Lohrasbi, Mohammadali M. Shahlaei
Abstract:
To reduce the overall time of structural optimization for earthquake loads, two strategies are adopted. In the first strategy, a neural system consisting of a self-organizing map and radial basis function neural networks is utilized to predict the time history responses. In this case, the input space is classified by employing a self-organizing map neural network. Then a distinct RBF neural network is trained in each class. In the second strategy, an improved genetic algorithm is employed to find the optimum design. A 72-bar space truss is designed for optimal weight using exact and approximate analysis for the El Centro (S-E 1940) earthquake loading. The numerical results demonstrate the computational advantages and effectiveness of the proposed method.
Keywords: optimization, genetic algorithm, neural networks, self-organizing map
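The surrogate idea in the first strategy can be illustrated with a small sketch (not the authors' implementation): partition the design space, here with k-means standing in for the self-organizing map, and fit one Gaussian RBF approximation per partition, which a genetic algorithm would then query instead of the exact time-history analysis; the toy response function and all sizes are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

def fit_rbf(X, y, centers, width=1.0):
    """Least-squares fit of a Gaussian RBF network on one partition of the data."""
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
                 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w, width

def rbf_predict(model, X):
    centers, w, width = model
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
                 / (2 * width ** 2))
    return Phi @ w

# Toy data: 2-D "designs" with a cheap stand-in response (illustrative only).
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)   # stands in for the SOM
models = {c: fit_rbf(X[km.labels_ == c], y[km.labels_ == c],
                     X[km.labels_ == c][:10]) for c in range(3)}

# A genetic algorithm would now call rbf_predict(...) instead of the expensive
# exact structural analysis when evaluating candidate designs.
x_new = np.array([[0.2, -0.3]])
print(rbf_predict(models[km.predict(x_new)[0]], x_new))
```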
Procedia PDF Downloads 311
1454 CFD Simulations to Examine Natural Ventilation of a Work Area in a Public Building
Authors: An-Shik Yang, Chiang-Ho Cheng, Jen-Hao Wu, Yu-Hsuan Juan
Abstract:
Natural ventilation has played an important role in many low-energy building designs. It has also been recognized as an essential means of persistently bringing fresh, cool air from the outside into a building. This study carried out computational fluid dynamics (CFD)-based simulations to examine the natural ventilation development of a work area in a public building. The simulated results can be useful to better understand the indoor microclimate and the interaction of wind with buildings. Besides, this CFD simulation procedure can serve as an effective analysis tool to characterize the airing performance and thereby optimize the building ventilation, supporting architects, planners and other decision makers in improving the natural ventilation design of public buildings.
Keywords: CFD simulations, natural ventilation, microclimate, wind environment
Procedia PDF Downloads 574
1453 Development of an Elastic Functionally Graded Interphase Model for the Micromechanics Response of Composites
Authors: Trevor Sabiston, Mohsen Mohammadi, Mohammed Cherkaoui, Kaan Inal
Abstract:
A new micromechanics framework is developed for long fibre reinforced composites using a single fibre surrounded by a functionally graded interphase and matrix as a representative unit cell. The unit cell is formulated to represent any number of aligned fibres by a single fibre. Using this model, the elastic response of long fibre composites is predicted in all directions. The model is calibrated to experimental results and shows very good agreement in the elastic regime. The differences between the proposed model and existing models are discussed.
Keywords: computational mechanics, functionally graded interphase, long fibre composites, micromechanics
Procedia PDF Downloads 319
1452 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor under Liquefaction and Scour
Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg, Christian Windt
Abstract:
When a structure is installed on a seabed, its presence will influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, breaking of wave or current flow, and pressure differentials around the seabed sediment. These changes allow the local seabed sediment to be carried off and result in scour (erosion), which is a threat to the structure's stability. In recent decades, research work and knowledge of scour on fixed structures (bridges and monopiles) in rivers and oceans have developed rapidly, while very limited research has been carried out on scour and liquefaction for gravity anchors, particularly for floating Tension Leg Platform (TLP) substructures. Due to its importance and the need to enhance knowledge of scour and liquefaction around marine structures, MarTERA funded a three-year (2020-2023) research program called NuLIMAS (Numerical Modeling of Liquefaction Around Marine Structures), whose consortium consists of European institutions (universities, laboratories, and consulting companies). The objective of this study is to build a numerical model that replicates reality, which helps to simulate (predict) underwater flow conditions and to study different marine scour and liquefaction situations. It helps to design a heavyweight anchor for the TLP substructure and to minimize the time and expenditure on experiments. The achieved results and the numerical model will also be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM. A conceptual design of a heavyweight anchor for the TLP substructure is developed by taking into consideration the available state-of-the-art knowledge on scour and liquefaction and references to previous existing designs. These conceptual designs are validated against available similar experimental benchmark data and also against CFD numerical benchmark standards (a CFD quality assurance study). A CFD optimization model/tool is designed to minimize the effects of fluid flow, scour, and liquefaction. A parameterized model is also developed to automate the calculation process and reduce user interactions. Parameters such as the anchor lowering process, flow-optimized outer contours, seabed interaction, and FSSI (Fluid-Structure-Seabed Interactions) are investigated and used to refine the model so as to build an optimized anchor.
Keywords: gravity anchor, liquefaction, scour, computational fluid dynamics
Procedia PDF Downloads 144
1451 Using SMS Mobile Technology to Assess the Mastery of Subject Content Knowledge of Science and Mathematics Teachers of Secondary Schools in Tanzania
Authors: Joel S. Mtebe, Aron Kondoro, Mussa M. Kissaka, Elia Kibga
Abstract:
Sub-Saharan Africa is described as having the second fastest growing mobile phone penetration in the world, faster than in the United States or the European Union. Mobile phones have been used to provide many opportunities to improve people's lives in the region, such as in banking, marketing, entertainment, and paying various bills such as water, TV, and electricity. However, the potential of using mobile phones to enhance teaching and learning has not been explored. This study presents an experience of developing and delivering SMS quiz questions that were used to assess mastery of the subject content knowledge of science and mathematics secondary school teachers in Tanzania. The SMS quizzes were used as a follow-up support mechanism for 500 teachers who participated in a project to upgrade subject content knowledge of science and mathematics subjects. Quizzes of 10-15 questions were sent to teachers each week for 8 weeks, and the results were analyzed using SPSS. The results showed that chemistry and biology had better performance compared to mathematics and physics. Teachers reported some challenges that led to poor performance, invalid answers, and non-responses, and these are presented. This research has several practical implications for those who are implementing or planning to use mobile phones for teaching and learning, especially in rural secondary schools in sub-Saharan Africa.
Keywords: mobile learning, elearning, educational technologies, SMS, secondary education, assessment
Procedia PDF Downloads 283