Search results for: FLUKA Monte Carlo Method
18792 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior from the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry, and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be remarkable, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and at the same time demonstrates its universality.
In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analysis for systematically varied process conditions is easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply on the basis of process conditions such as feed streams and inlet temperatures and pressures. Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
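As a hedged illustration of the hybrid Monte Carlo microstructure step described above, the sketch below samples a chain ensemble from a Flory (most-probable) length distribution and draws short- and long-chain branch counts per chain. All numerical values (mean chain length, branching probabilities) are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative kinetic outputs (assumed values, not taken from the paper):
MEAN_LEN = 2000   # number-average chain length in monomer units
P_SCB = 0.02      # short-chain branch probability per monomer unit
P_LCB = 5e-4      # long-chain branch probability per monomer unit

# The Flory most-probable length distribution is geometric; branch counts
# are drawn per chain as one Bernoulli trial per monomer unit.
lengths = rng.geometric(1.0 / MEAN_LEN, size=100_000)
scb = rng.binomial(lengths, P_SCB)
lcb = rng.binomial(lengths, P_LCB)

mn = lengths.mean()              # number-average chain length
mw = (lengths ** 2).mean() / mn  # weight-average chain length
print(f"Mn ~ {mn:.0f}, PDI ~ {mw / mn:.2f}")   # PDI approaches 2 for Flory
print(f"SCB per 1000 units ~ {1000 * scb.sum() / lengths.sum():.1f}")
```

Such an ensemble of explicitly branched chains is the kind of input a branch-on-branch rheology calculation consumes.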
Procedia PDF Downloads 124
18791 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale
Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal
Abstract:
Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and with the current nature of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as an evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline, and Bend Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production from 2008 to at most 2015, with 1835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin, and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The EUR ranges from each basin were loaded into the Palisade Risk software, and a lognormal distribution typical of Barnett shale wells was fitted to the dataset. Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50, and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50, and P90.
The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate per year) and to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million, and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008-2015. One of the major findings of this study was that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett shale wells were not economic at any of the finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle. Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery
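The economic screening described above can be sketched as follows. The decline profile, EUR, price, and cost figures below are illustrative assumptions, not the study's data, and the conversion between £-denominated F&D costs and $-denominated revenue is ignored for simplicity.

```python
import numpy as np

def npv_and_payback(monthly_gas_mcf, gas_price, fd_cost, annual_discount=0.10):
    """NPV (monthly discounting) and simple payback month for a single well.

    monthly_gas_mcf : production profile in MCF per month
    gas_price       : $/MCF
    fd_cost         : finding & development cost, assumed all spent at t=0
    """
    r_month = (1 + annual_discount) ** (1 / 12) - 1
    months = np.arange(1, len(monthly_gas_mcf) + 1)
    cash = monthly_gas_mcf * gas_price
    npv = (cash / (1 + r_month) ** months).sum() - fd_cost
    cum = np.cumsum(cash) - fd_cost          # undiscounted (simple) payback
    payback = int(np.argmax(cum > 0)) + 1 if (cum > 0).any() else None
    return npv, payback

# Illustrative exponential-decline profile rescaled to an assumed P50 EUR of 1.5 BCF
t = np.arange(120)                    # ten years of monthly production
profile = np.exp(-t / 24.0)
profile *= 1.5e6 / profile.sum()      # rescale so the total equals 1.5e6 MCF

npv, payback = npv_and_payback(profile, gas_price=4.0, fd_cost=2.6e6)
print(npv, payback)
```

Sweeping `gas_price` and `fd_cost` over grids and counting the wells whose NPV is positive and whose payback is at most 60 months reproduces the kind of screening the study performs.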
Procedia PDF Downloads 301
18790 Self-Image of Police Officers
Authors: Leo Carlo B. Rondina
Abstract:
Self-image is an important factor in improving the self-esteem of personnel. The purpose of the study is to determine the self-image of the police. The respondents were 503 policemen assigned to different police stations in Davao City, chosen through random sampling. Using Exploratory Factor Analysis (EFA), latent construct variables of police image were identified as follows: professionalism, obedience, morality, and justice and fairness. Further, ordinal regression indicated a statistically significant effect for ages 21-40, meaning that respondent age statistically improves self-image. Keywords: police image, exploratory factor analysis, ordinal regression, Galatea effect
Procedia PDF Downloads 287
18789 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandria: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability within a spatial-ecological model directs attention to urban environments in design review management, in compliance with the Earth system. The natural exchange patterns of ecosystems follow consistent, periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex; the other analytical approach is Failure Mode and Effect Analysis (FMEA) for critical components. Plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed, and assessed for the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and chlorine gas concentration in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The predicted accident consequences are traced as risk contour concentration lines. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input-output analysis to evaluate spillover effects, and Monte Carlo simulations and sensitivity analyses were conducted. These unique structures are balanced within "equilibrium patterns", such as the biosphere, and collectively form a composite index of many distributed feedback flows. Such dynamic structures are related through their physical and chemical properties and enable a gradual, prolonged incremental pattern.
While this spatial model structure argues from ecology, resource savings, static load design, financial, and other pragmatic reasons, the outcomes are not decisive from an artistic or architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structures for developing urban environments using optimization software, applied here to an example of an integrated industrial structure whose process is based on engineering topology as an optimization approach of systems ecology. Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
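The Gaussian plume concentration used in dispersion studies of this kind can be sketched as follows. The source strength, wind speed, release height, and dispersion coefficients below are illustrative assumptions, not values from the study, and the dispersion coefficients are taken as given at one fixed downwind distance rather than computed from a stability class.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (g/m^3).

    Q: source strength (g/s), u: wind speed (m/s), H: effective release
    height (m); sigma_y/sigma_z (m) are the dispersion coefficients
    evaluated at the chosen downwind distance.
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers: 10 g/s release, 3 m/s wind, receptor at breathing
# height on the plume centerline, sigmas typical of ~1 km downwind.
c = gaussian_plume(Q=10.0, u=3.0, y=0.0, z=1.5, H=10.0,
                   sigma_y=80.0, sigma_z=40.0)
print(f"centerline concentration ~ {c:.2e} g/m^3")
```

Evaluating this function over a grid of (x, y) points, with the sigmas grown along x, yields the risk contour concentration lines the abstract mentions.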
Procedia PDF Downloads 80
18788 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison
Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: The increased number of performed computed tomography (CT) examinations raises public concern regarding the associated stochastic risk to patients. In its Publication 102, the International Commission on Radiological Protection (ICRP) emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that gives multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package named "VirtualDose". Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists of simulating: (1) the scanner machine with all its technical specifications and associated irradiation cases (kVp, field collimation, mAs, pitch ...); (2) detailed geometric and compositional information for dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass as well as the elemental composition of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU). Their body dimensions correspond to reference data developed in the United States. Simulated data were verified by clinical measurement. To perform the comparison, 270 adult patients and 150 pediatric patients were used, whose data correspond to exams carried out in French hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis).
The comparison sample of pediatric patients includes the exams of thirty patients for each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients was performed on the "Head" protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, three organs situated at the edges of the scan range show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two software packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing computed tomography exams. Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison
Procedia PDF Downloads 162
18787 A New Computational Package for Using in CFD and Other Problems (Third Edition)
Authors: Mohammad Reza Akhavan Khaleghi
Abstract:
This paper presents changes made to the Reduced Finite Element Method (RFEM) whose result is a highly powerful numerical method (some forms of this method can approximate the most complex equations as simply as the Laplace equation). The Finite Element Method (FEM) is a powerful numerical method that has been used successfully for the solution of problems in various scientific and engineering fields, such as CFD. Many algorithms have been formulated based on FEM, but none has been adopted in popular CFD software. In that domain, the Finite Volume Method (FVM) holds a full monopoly due to its better efficiency and adaptability to the physics of the problems in comparison with FEM. It does not seem that FEM could compete with FVM unless it were fundamentally changed. This paper presents those changes, and the result is a powerful method with much better performance in all subjects in comparison with FVM and other computational methods. This method is intended not to compete with the finite volume method but to replace it. Keywords: reduced finite element method, new computational package, new finite element formulation, new higher-order form, new isogeometric analysis
Procedia PDF Downloads 117
18786 A Study on the Solutions of the 2-Dimensional and Fourth-Order Partial Differential Equations
Abstract:
In this study, we carry out a comparative study of the reduced differential transform method, the Adomian decomposition method, the variational iteration method, and the homotopy analysis method. These methods are used in many fields of engineering. This is achieved by handling a kind of two-dimensional, fourth-order partial differential equation called the Kuramoto–Sivashinsky equation. Three numerical examples have also been carried out to validate and demonstrate the efficiency of the four methods. Furthermore, it is shown that the reduced differential transform method has an advantage over the other methods. This method is very effective and simple and can be applied to nonlinear problems used in engineering. Keywords: reduced differential transform method, Adomian decomposition method, variational iteration method, homotopy analysis method
Procedia PDF Downloads 433
18785 Elvis Improved Method for Solving Simultaneous Equations in Two Variables with Some Applications
Authors: Elvis Adam Alhassan, Kaiyu Tian, Akos Konadu, Ernest Zamanah, Michael Jackson Adjabui, Ibrahim Justice Musah, Esther Agyeiwaa Owusu, Emmanuel K. A. Agyeman
Abstract:
In this paper, we show how to solve simultaneous equations using the Elvis improved method. The method proceeds as follows: make one variable in the first equation the subject; make the same variable in the second equation the subject; equate the results and simplify to obtain the value of the other unknown variable; then substitute the value found into either of the rearranged equations from the first or second steps and simplify for the remaining unknown variable. The difference between our Elvis improved method and the substitution method is that with the Elvis improved method, the same variable is made the subject in both equations and the two resulting equations are equated, unlike the substitution method, where one variable is made the subject of only one equation and substituted into the other equation. After describing the Elvis improved method, findings from 100 secondary students and the views of 5 secondary tutors are presented to demonstrate the effectiveness of the method. The study's purpose is illustrated with hypothetical examples. Keywords: simultaneous equations, substitution method, elimination method, graphical method, Elvis improved method
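The four steps above can be sketched directly for a general 2x2 linear system. This is a minimal sketch assuming both y-coefficients are nonzero (the method requires making the same variable the subject in both equations); exact rational arithmetic is used to avoid rounding.

```python
from fractions import Fraction

def elvis_solve(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by the steps described
    above: make y the subject of BOTH equations, equate, back-substitute.
    Assumes b1 != 0 and b2 != 0 and a unique solution exists."""
    a1, b1, c1, a2, b2, c2 = map(Fraction, (a1, b1, c1, a2, b2, c2))
    # Steps 1-2: y = (c1 - a1*x)/b1 and y = (c2 - a2*x)/b2
    # Step 3: equate and simplify: b2*(c1 - a1*x) = b1*(c2 - a2*x)
    x = (b2 * c1 - b1 * c2) / (b2 * a1 - b1 * a2)
    # Step 4: substitute x back into the first rearranged equation
    y = (c1 - a1 * x) / b1
    return x, y

# Example: 2x + 3y = 12 and x - y = 1  ->  x = 3, y = 2
print(elvis_solve(2, 3, 12, 1, -1, 1))
```

Note that the substitution method would instead rearrange only one equation and substitute it into the other; here both rearranged forms are produced before equating.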
Procedia PDF Downloads 136
18784 Different Methods of Fe3O4 Nano Particles Synthesis
Authors: Arezoo Hakimi, Afshin Farahbakhsh
Abstract:
Herein, we compare Fe3O4 synthesized using the hydrothermal method, mechanochemical processing, and the solvent-thermal method. The hydrothermal technique has been the most popular one, gathering interest from scientists and technologists of different disciplines, particularly in the last fifteen years. The hydrothermal method yields Fe3O4 microspheres in which many nearly monodisperse spherical particles have diameters of about 400 nm; in the mechanochemical method, the regular morphology indicates that the particles are well crystallized; and in the solvent-thermal method, the Fe3O4 nanoparticles show good uniformity of size and good dispersion. Keywords: Fe3O4 nanoparticles, hydrothermal method, mechanochemical processes, solvent thermal method
Procedia PDF Downloads 351
18783 A Comparison of Smoothing Spline Method and Penalized Spline Regression Method Based on Nonparametric Regression Model
Authors: Autcha Araveeporn
Abstract:
This paper presents a study of a nonparametric regression model estimated by a smoothing spline method and a penalized spline regression method. We also compare the techniques used for estimation and prediction with the nonparametric regression model, trying both methods on crude oil prices in dollars per barrel and on the Stock Exchange of Thailand (SET) index. According to the results, the smoothing spline method performs better than the penalized spline regression method. Keywords: nonparametric regression model, penalized spline regression method, smoothing spline method, Stock Exchange of Thailand (SET)
Procedia PDF Downloads 439
18782 Influence of Optimization Method on Parameters Identification of Hyperelastic Models
Authors: Bale Baidi Blaise, Gilles Marckmann, Liman Kaoye, Talaka Dya, Moustapha Bachirou, Gambo Betchewe, Tibi Beda
Abstract:
This work highlights the capability of the particle swarm optimization (PSO) method to identify parameters of hyperelastic models. The study compares this method with the Genetic Algorithm (GA), Least Squares (LS), Pattern Search Algorithm (PSA), Beda-Chevalier (BC), and Levenberg-Marquardt (LM) methods. Four classic hyperelastic models are used to test the different methods through parameter identification. The study then compares the ability of these models to reproduce the experimental Treloar data in simple tension, biaxial tension, and pure shear. Keywords: particle swarm optimization, identification, hyperelastic, model
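The PSO-based identification can be sketched as follows: a global-best particle swarm minimizes a least-squares misfit between a stress model and data. The stress model, its "true" parameters, and the synthetic data below are assumptions for illustration (a one-term Ogden-like form), not the paper's four models or the Treloar data.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = np.clip(x + v, lo, hi)                             # stay in bounds
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy identification: recover (mu, alpha) of a hypothetical one-term model
# sigma = mu*(lam**alpha - lam**(-alpha/2)) from synthetic stretch data.
lam = np.linspace(1.1, 3.0, 20)
true_stress = 0.4 * (lam ** 1.8 - lam ** -0.9)
obj = lambda p: np.sum((p[0] * (lam ** p[1] - lam ** (-p[1] / 2)) - true_stress) ** 2)
best, err = pso(obj, bounds=[(0.01, 2.0), (0.5, 4.0)])
print(best, err)
```

The same objective could be handed to any of the other optimizers compared in the paper; only the search strategy changes.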
Procedia PDF Downloads 171
18781 Mathematical Reconstruction of an Object Image Using X-Ray Interferometric Fourier Holography Method
Authors: M. K. Balyan
Abstract:
The main principles of the X-ray interferometric Fourier holography method are discussed. The object image is reconstructed mathematically by Fourier transformation. Three methods are presented: the approximation method, the iteration method, and the step-by-step method. As an example, the reconstruction of the complex amplitude transmission coefficient of a beryllium wire is considered. The results reconstructed by the three presented methods are compared; the best results are obtained by means of the step-by-step method. Keywords: dynamical diffraction, hologram, object image, X-ray holography
Procedia PDF Downloads 394
18780 Modified Approximation Methods for Finding an Optimal Solution for the Transportation Problem
Authors: N. Guruprasad
Abstract:
This paper presents a modification of approximation methods for transportation problems. The initial basic feasible solution can be computed using either Russell's or Vogel's approximation method. Russell's approximation method provides another excellent criterion that is still quick to implement on a computer (though not manually). In most cases Russell's method yields a better initial solution, though it takes longer than Vogel's method (finding the next entering variable is O(n1*n2) in Russell's method and O(n1+n2) in Vogel's method). However, Russell's method normally has a lower total running time because fewer pivots are required to reach the optimum for all but small problem sizes (n1+n2 ≈ 20). With this motivation, we have incorporated a variation of the same idea, which we call TMC (Total Modified Cost), to obtain fast and efficient solutions. Keywords: computation, efficiency, modified cost, Russell's approximation method, transportation, Vogel's approximation method
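For reference, the Vogel baseline mentioned above can be sketched as follows: each step allocates in the cheapest cell of the row or column with the largest penalty (the gap between its two smallest remaining costs). The cost matrix, supplies, and demands are a classic textbook-sized example, not data from the paper.

```python
import numpy as np

def vogel_initial_solution(cost, supply, demand):
    """Initial basic feasible solution by Vogel's approximation method."""
    cost = np.array(cost, dtype=float)
    supply, demand = list(map(float, supply)), list(map(float, demand))
    alloc = np.zeros_like(cost)
    rows, cols = set(range(len(supply))), set(range(len(demand)))

    def penalty(values):
        vals = sorted(values)
        return vals[1] - vals[0] if len(vals) > 1 else vals[0]

    while rows and cols:
        rp = {i: penalty([cost[i][j] for j in cols]) for i in rows}
        cp = {j: penalty([cost[i][j] for i in rows]) for j in cols}
        best_row, best_col = max(rp, key=rp.get), max(cp, key=cp.get)
        if rp[best_row] >= cp[best_col]:     # ties resolved toward rows
            i = best_row
            j = min(cols, key=lambda j: cost[i][j])
        else:
            j = best_col
            i = min(rows, key=lambda i: cost[i][j])
        q = min(supply[i], demand[j])        # allocate as much as possible
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return alloc

cost = [[19, 30, 50, 10], [70, 30, 40, 60], [40, 8, 70, 20]]
alloc = vogel_initial_solution(cost, supply=[7, 9, 18], demand=[5, 8, 7, 14])
print(alloc, (alloc * np.array(cost)).sum())
```

A Russell-style or TMC variant would replace only the cell-selection rule inside the loop; the allocation mechanics stay the same.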
Procedia PDF Downloads 545
18779 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. It converges slowly toward the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions. The Barzilai and Borwein results have sparked a lot of research on the steepest descent method, including the alternate minimization gradient method and the Yuan method. Inspired by previous works, we modified the step size of the steepest descent method. We then compare the modification against the Barzilai and Borwein method, the alternate minimization gradient method, and the Yuan method for quadratic function cases in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai and Borwein and alternate minimization gradient methods for large dimensions. The new step sizes show faster convergence than the other methods, especially for cases with large dimensions. Keywords: steepest descent, line search, iteration, running time, unconstrained optimization, convergence
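The Barzilai-Borwein baseline referenced above can be sketched for a quadratic test function; the initial step size and the small ill-conditioned test matrix below are assumptions for illustration.

```python
import numpy as np

def bb_gradient(A, b, x0, iters=100):
    """Gradient descent on f(x) = 0.5*x'Ax - b'x with the BB1 step size."""
    x = x0.astype(float)
    g = A @ x - b
    alpha = 1.0 / np.linalg.norm(A)       # conservative first step (assumed)
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, yd = x_new - x, g_new - g
        denom = s @ yd
        if denom <= 1e-15:                # stalled: gradient is (near) zero
            break
        alpha = (s @ s) / denom           # BB1 step: mimics a quasi-Newton scale
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-10:
            break
    return x

# Ill-conditioned quadratic where exact-line-search descent zigzags badly
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
x = bb_gradient(A, b, x0=np.zeros(3))
print(x)   # should approach A^{-1} b = [1, 0.1, 0.01]
```

Any modified step-size rule of the kind the paper proposes would slot in at the single `alpha = ...` line, which makes such comparisons straightforward to run.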
Procedia PDF Downloads 540
18778 Calculating Stress Intensity Factor of Cracked Axis by Using a Meshless Method
Authors: S. Shahrooi, A. Talavari
Abstract:
Numerical studies of cracks and discontinuities using element-free methods have spread widely in recent years. In this study, the stress intensity factor of a cracked axis under torsional loading is calculated using a new element-free method, the MLPG (Meshless Local Petrov-Galerkin) method. The domain is discretized by scattered nodal points, and the moving least squares (MLS) method is used to construct the approximation functions from these nodal points. The results of the meshless method and the finite element method (FEM) were then compared, showing that the element-free method is of good accuracy. Keywords: stress intensity factor, crack, torsional loading, meshless method
Procedia PDF Downloads 565
18777 Towards the Design of Gripper Independent of Substrate Surface Structures
Authors: Annika Schmidt, Ausama Hadi Ahmed, Carlo Menon
Abstract:
End effectors for robotic systems are becoming more and more advanced, resulting in a growing variety of gripping tasks. However, most grippers are application specific. This paper presents a gripper that interacts with an object’s surface rather than being dependent on a defined shape or size. For this purpose, ingressive and astrictive features are combined to achieve the desired gripping capabilities. The developed prototype is tested on a variety of surfaces with different hardness and roughness properties. The results show that the gripping mechanism works on all of the tested surfaces. The influence of the material properties on the amount of the supported load is also studied and the efficiency is discussed. Keywords: claw, dry adhesion, insects, material properties
Procedia PDF Downloads 359
18776 An Efficient Approach to Optimize the Cost and Profit of a Tea Garden by Using Branch and Bound Method
Authors: Abu Hashan Md Mashud, M. Sharif Uddin, Aminur Rahman Khan
Abstract:
In this paper, we formulate a new problem as linear programming and integer programming problems and maximize profit within a limited budget and limited resources, based on the construction of a tea garden problem. The paper describes a new idea about how to optimize profit and focuses on the practical aspects of modeling and the challenges of providing a solution to a complex real-life problem. Finally, a comparative study is carried out among the graphical method, the simplex method, and the branch and bound method. Keywords: integer programming, tea garden, graphical method, simplex method, branch and bound method
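As a hedged sketch of the branch and bound idea, the code below solves a small 0/1 budget-allocation (knapsack-type) problem, using the fractional LP-relaxation profit as the upper bound at each node. The profit/cost data are illustrative, and the formulation is deliberately simpler than the paper's tea garden model.

```python
def knapsack_branch_and_bound(profits, costs, budget):
    """Maximize total profit of a 0/1 selection within a budget.

    Branch on take/skip decisions in profit-per-cost order; prune a node
    when its greedy fractional (LP-relaxation) bound cannot beat the
    best integer solution found so far."""
    items = sorted(range(len(profits)),
                   key=lambda i: profits[i] / costs[i], reverse=True)

    def bound(k, cost, profit):
        # greedy fractional completion over the remaining items
        for i in items[k:]:
            if cost + costs[i] <= budget:
                cost += costs[i]
                profit += profits[i]
            else:
                return profit + profits[i] * (budget - cost) / costs[i]
        return profit

    best = 0
    stack = [(0, 0, 0)]            # (next item rank, cost so far, profit so far)
    while stack:
        k, cost, profit = stack.pop()
        best = max(best, profit)   # every partial selection is feasible
        if k == len(items) or bound(k, cost, profit) <= best:
            continue               # leaf reached, or node pruned by its bound
        i = items[k]
        if cost + costs[i] <= budget:
            stack.append((k + 1, cost + costs[i], profit + profits[i]))  # take
        stack.append((k + 1, cost, profit))                              # skip
    return best

# Illustrative data: profit and cost of candidate garden activities
profits = [60, 100, 120, 80]
costs = [10, 20, 30, 25]
print(knapsack_branch_and_bound(profits, costs, budget=50))
```

The same branch/bound/prune skeleton extends to multiple resource constraints by replacing the bound with a proper LP relaxation at each node.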
Procedia PDF Downloads 623
18775 Comparative Study of Outcome of Patients with Wilms Tumor Treated with Upfront Chemotherapy and Upfront Surgery in Alexandria University Hospitals
Authors: Golson Mohamed, Yasmine Gamasy, Khaled EL-Khatib, Anas Al-Natour, Shady Fadel, Haytham Rashwan, Haytham Badawy, Nadia Farghaly
Abstract:
Introduction: Wilms tumor is the most common malignant renal tumor in children. Much progress has been made in the management of patients with this malignancy over the last three decades. Today, treatments are based on several trials and studies conducted by the International Society of Pediatric Oncology (SIOP) in Europe and the National Wilms Tumor Study Group (NWTS) in the USA. It is necessary to understand why we follow either of the protocols: NWTS, which follows the upfront-surgery principle, or SIOP, which follows the upfront-chemotherapy principle, in all stages of the disease. Objective: The aim is to assess the outcome in patients treated with preoperative chemotherapy and in patients treated with upfront surgery, and to compare their effect on overall survival. Study design: To decide which protocol to follow, a study was carried out on the records of patients aged 1 day to 18 years suffering from Wilms tumor who were admitted to the Alexandria University Hospital pediatric oncology, pediatric urology, and pediatric surgery departments, as a retrospective survey of records from 2010 to 2015, with the transfer sheet designed and edited according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow. Data were fed to the computer and analyzed using the IBM SPSS software package, version 20.0. Qualitative data were described using number and percent. Quantitative data were described using range (minimum and maximum), mean, standard deviation, and median. Comparisons between different groups regarding categorical variables were tested using the chi-square test. When more than 20% of the cells had an expected count less than 5, correction for chi-square was conducted using Fisher's exact test or Monte Carlo correction. The distributions of quantitative variables were tested for normality using the Kolmogorov-Smirnov, Shapiro-Wilk, and D'Agostino tests; if these revealed a normal data distribution, parametric tests were applied.
If the data were abnormally distributed, non-parametric tests were used. For normally distributed data, comparison between two independent populations was done using the independent t-test; for abnormally distributed data, the Mann-Whitney test was used. The significance of the obtained results was judged at the 5% level. Results: A statistically significant difference in survival was observed between the two studied groups, favoring upfront chemotherapy (86.4%) over the upfront-surgery group (59.3%), with P=0.009. As regards complications, 20 cases (74.1%) out of 27 were complicated in the group of patients treated with upfront surgery, while 30 cases (68.2%) out of 44 had complications among patients treated with upfront chemotherapy. Also, the incidence of intraoperative complication (rupture) was lower in the upfront-chemotherapy group than in the upfront-surgery group. Conclusion: Upfront chemotherapy has superiority over upfront surgery, as patients who started with upfront chemotherapy showed a higher survival rate, a lower complication rate, less need for radiotherapy, and a lower recurrence rate. Keywords: Wilms tumor, renal tumor, chemotherapy, surgery
Procedia PDF Downloads 317
18774 Sewer Culvert Installation Method to Accommodate Underground Construction in an Urban Area with Narrow Streets
Authors: Osamu Igawa, Hiroshi Kouchiwa, Yuji Ito
Abstract:
In recent years, a reconstruction project for sewer pipelines has been progressing in Japan with the aim of renewing old sewer culverts. However, it is difficult to secure a sufficient base area for shafts in an urban area because many streets are narrow and have a complex layout. As a result, construction in such urban areas is generally very demanding. In urban areas, there is a strong requirement for a safe, reliable, and economical construction method that does not disturb the public's daily life and urban activities. With this in mind, we developed a new construction method called the 'shield switching type micro-tunneling method', which integrates the micro-tunneling method and the shield method. In this method, the pipeline is first constructed for sections that are gently curved or straight using the economical micro-tunneling method, and then the method is switched to the shield method for sections with a sharp curve or a series of curves, without establishing an intermediate shaft. This paper provides the information, features, and construction examples of this newly developed method. Keywords: micro-tunneling method, secondary lining applied RC segment, sharp curve, shield method, switching type
Procedia PDF Downloads 403
18773 Direct Transient Stability Assessment of Stressed Power Systems
Authors: E. Popov, N. Yorino, Y. Zoka, Y. Sasaki, H. Sugihara
Abstract:
This paper discusses the performance of the critical trajectory method (CTrj) for power system transient stability analysis under various loading settings and heavy fault conditions. The method obtains the controlling unstable equilibrium point (CUEP), which is essential for the estimation of power system stability margins. The CUEP is computed by applying the CTrj to the boundary controlling unstable equilibrium point (BCU) method. The proposed method computes a trajectory on the stability boundary that starts from the exit point and reaches the CUEP under certain assumptions. The robustness and effectiveness of the method are demonstrated on six power system models and five loading conditions. The conventional simulation method is used as a benchmark, and the performance is compared with the BCU shadowing method. Keywords: power system, transient stability, critical trajectory method, energy function method
Procedia PDF Downloads 386
18772 The Democratization of 3D Capturing: An Application Investigating Google Tango Potentials
Authors: Carlo Bianchini, Lorenzo Catena
Abstract:
The appearance of 3D scanners and then, more recently, of image-based systems that generate point clouds directly from common digital images has deeply affected the survey process in terms of both capturing and 2D/3D modelling. In this context, low-cost and mobile systems are increasingly playing a key role, paving the way to the democratization of what in the past was the realm of a few specialized technicians and expensive equipment. The application of Google Tango to the ancient church of Santa Maria delle Vigne in Pratica di Mare, Rome, presented in this paper, is one of these examples. Keywords: the architectural survey, augmented/mixed/virtual reality, Google Tango project, image-based 3D capturing
Procedia PDF Downloads 148
18771 Readout Development of an LGAD-based Hybrid Detector for Microdosimetry (HDM)
Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincenzo Monaco, Boscardin Maurizio, La Tessa Chiara
Abstract:
Clinical outcomes collected over the past three decades have suggested that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome these uncertainties and optimize treatment outcomes, the best possible description of radiation quality, linking the physical dose to its biological effect, is of paramount importance. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of the lineal energy y, defined as the energy deposited in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called 'Mean Chord Length' (MCL) approximation and is related to the detector geometry. To improve the characterization of the radiation field quality, we define a new quantity that replaces the MCL with the actual particle track length inside the microdosimeter. In order to measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC records the energy deposition in a region equivalent to 2 um of tissue, while the LGADs are very suitable for particle tracking because they can be thinned down to tens of micrometers and respond quickly to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations.
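The lineal energy defined above can be sketched numerically. This is a minimal illustration assuming a spherical tissue-equivalent site, so that the mean chord length follows Cauchy's formula l_bar = 4V/S = (2/3)d; the 3 keV deposition below is an illustrative value, not a measurement.

```python
# Sketch: lineal energy from an energy deposition in a spherical TEPC site.
# For a convex body the MCL is 4V/S; for a sphere of diameter d this
# reduces to (2/3) * d. Values are illustrative, not measured data.

def mean_chord_length_sphere(diameter_um: float) -> float:
    """MCL of a sphere via 4V/S = (2/3) * d, in micrometers."""
    return (2.0 / 3.0) * diameter_um

def lineal_energy(energy_kev: float, site_diameter_um: float) -> float:
    """Lineal energy y = deposited energy / mean chord length (keV/um)."""
    return energy_kev / mean_chord_length_sphere(site_diameter_um)

# 2 um tissue-equivalent site, as for the TEPC stage described above
y = lineal_energy(3.0, 2.0)   # 3 keV deposited over MCL = 4/3 um -> 2.25 keV/um
```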
Currently, a dedicated readout is under development. This two-stage detector requires two different systems whose complementary information must be joined for each event: the energy deposition in the TEPC and the respective track length recorded by the LGAD tracker. This challenge is being addressed with System on Chip (SoC) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three signal amplification legs and is carried out by 3 ADCs mounted on an FPGA board. The LGAD strip signals are processed by dedicated chips, and the activated strips are finally stored, again relying on FPGA-based solutions. In this work, we provide a detailed description of the HDM geometry and of the SoC solutions that we are implementing for the readout.
Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry
Procedia PDF Downloads 175
18770 A Comparative Study of Cognitive Factors Affecting Social Distancing among Vaccinated and Unvaccinated Filipinos
Authors: Emmanuel Carlo Belara, Albert John Dela Merced, Mark Anthony Dominguez, Diomari Erasga, Jerome Ferrer, Bernard Ombrog
Abstract:
Social distancing errors are commonly prevalent among both vaccinated and unvaccinated members of the Filipino community. This study aims to identify the cognitive factors behind these errors and relate how they affect daily life. The observed factors include memory, attention, anxiety, decision-making, and stress. Upon applying ergonomic tools and statistical treatments such as the t-test and multiple linear regression, stress and attention turned out to have the greatest impact on social distancing errors.
Keywords: vaccinated, unvaccinated, social distancing, Filipinos
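The statistical treatment named above (a t-test plus multiple linear regression) can be sketched as follows. The data are synthetic and the effect sizes invented for illustration; nothing below comes from the study's dataset.

```python
import numpy as np

# Illustrative sketch: Welch's two-sample t statistic comparing an error
# score between two groups, and an ordinary least-squares fit of errors
# on cognitive factors (stress, attention). All numbers are synthetic.

rng = np.random.default_rng(0)
vaccinated = rng.normal(5.0, 1.0, 40)      # hypothetical error scores
unvaccinated = rng.normal(5.6, 1.0, 40)

# Welch's two-sample t statistic
m1, m2 = vaccinated.mean(), unvaccinated.mean()
v1, v2 = vaccinated.var(ddof=1), unvaccinated.var(ddof=1)
t_stat = (m1 - m2) / np.sqrt(v1 / 40 + v2 / 40)

# Multiple linear regression: errors ~ intercept + stress + attention
stress = rng.normal(0, 1, 80)
attention = rng.normal(0, 1, 80)
errors = 2.0 + 0.8 * stress - 0.5 * attention + rng.normal(0, 0.1, 80)
X = np.column_stack([np.ones(80), stress, attention])
beta, *_ = np.linalg.lstsq(X, errors, rcond=None)
# beta recovers roughly [2.0, 0.8, -0.5]: stress raises errors, attention lowers them
```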
Procedia PDF Downloads 201
18769 Constant Order Predictor Corrector Method for the Solution of Modeled Problems of First Order IVPs of ODEs
Authors: A. A. James, A. O. Adesanya, M. R. Odekunle, D. G. Yakubu
Abstract:
This paper examines the development of a one-step, five hybrid point method for the solution of first order initial value problems. We adopted the method of collocation and interpolation of a power series approximate solution to generate a continuous linear multistep method. The continuous linear multistep method was evaluated at selected grid points to give the discrete linear multistep method. The method was implemented using a constant order predictor of order seven over an overlapping interval. The basic properties of the derived corrector were investigated, and it was found to be zero stable, consistent and convergent. The region of absolute stability was also investigated. The method was tested on some numerical experiments and found to compete favorably with existing methods.
Keywords: interpolation, approximate solution, collocation, differential system, half step, convergence, block method, efficiency
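The predict-then-correct structure underlying such methods can be illustrated with the classical two-step Adams-Bashforth predictor and trapezoidal Adams-Moulton corrector. This is a generic sketch of the idea, not the authors' order-seven hybrid block method; the test problem and step size are illustrative.

```python
# Predictor-corrector sketch on a first order IVP y' = f(t, y):
# 2-step Adams-Bashforth predictor, trapezoidal Adams-Moulton corrector.

def f(t, y):
    return -y          # test problem y' = -y, y(0) = 1, exact y = exp(-t)

h, n = 0.1, 20
ys = [1.0]
# one Euler step to start the 2-step method
ys.append(ys[0] + h * f(0.0, ys[0]))

for k in range(1, n):
    tk = k * h
    # predictor: 2-step Adams-Bashforth
    yp = ys[k] + h * (1.5 * f(tk, ys[k]) - 0.5 * f(tk - h, ys[k - 1]))
    # corrector: trapezoidal Adams-Moulton, evaluated at the prediction
    yc = ys[k] + 0.5 * h * (f(tk + h, yp) + f(tk, ys[k]))
    ys.append(yc)

# ys[-1] approximates y(2.0) = exp(-2.0), about 0.1353
```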
Procedia PDF Downloads 337
18768 Development of 3D Particle Method for Calculating Large Deformation of Soils
Authors: Sung-Sik Park, Han Chang, Kyung-Hun Chae, Sae-Byeok Lee
Abstract:
In this study, a three-dimensional (3D) particle method that requires no grid was developed for analyzing large deformation of soils, instead of using the ordinary finite element method (FEM) or finite difference method (FDM). In the 3D particle method, the governing equations were discretized by various particle interaction models corresponding to differential operators such as the gradient, divergence, and Laplacian. The Mohr-Coulomb failure criterion was incorporated into the 3D particle method to determine soil failure. The yielding and hardening behavior of soil before failure was also considered by varying the viscosity of the soil. First, an unconfined compression test was carried out, and the large deformation following soil yielding or failure was simulated by the developed 3D particle method. The results were also compared with those of the commercial FEM software PLAXIS 3D. The developed 3D particle method was able to simulate the 3D large deformation of soils due to soil yielding and to calculate the variation of normal and shear stresses following clay deformation.
Keywords: particle method, large deformation, soil column, confined compressive stress
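The Mohr-Coulomb failure check mentioned above can be sketched in a few lines: a stress state fails when the shear stress exceeds tau_f = c + sigma_n * tan(phi). The cohesion, friction angle, and stress values below are illustrative, not the paper's soil parameters.

```python
import math

# Sketch of a Mohr-Coulomb failure check: shear strength
# tau_f = c + sigma_n * tan(phi). Parameter values are illustrative.

def mohr_coulomb_fails(sigma_n_kpa: float, tau_kpa: float,
                       c_kpa: float, phi_deg: float) -> bool:
    """True if the shear stress exceeds the Mohr-Coulomb shear strength."""
    tau_f = c_kpa + sigma_n_kpa * math.tan(math.radians(phi_deg))
    return tau_kpa > tau_f

# clay-like material: c = 10 kPa, phi = 20 degrees, sigma_n = 50 kPa
# strength tau_f = 10 + 50 * tan(20 deg), about 28.2 kPa
safe = mohr_coulomb_fails(50.0, 20.0, 10.0, 20.0)    # 20 kPa < 28.2 -> False
failed = mohr_coulomb_fails(50.0, 30.0, 10.0, 20.0)  # 30 kPa > 28.2 -> True
```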
Procedia PDF Downloads 572
18767 The Implementation of Secton Method for Finding the Root of Interpolation Function
Authors: Nur Rokhman
Abstract:
A mathematical function gives the relationship between the variables composing the function. Interpolation can be viewed as a process of finding a mathematical function which goes through some specified points. There are many interpolation methods, namely the Lagrange method, the Newton method, the spline method, etc. Under some conditions, such as a large number of interpolation points, the interpolation function cannot be written explicitly; such a function consists only of computational steps. Solving an equation involving the interpolation function is therefore a problem of solving a nonlinear equation. Newton's method will not work on the interpolation function, as the derivative of the interpolation function cannot be written explicitly. This paper shows the use of the Secton method to determine the numerical solution of equations involving the interpolation function. The experiment shows that the Secton method works better than Newton's method in finding the root of a Lagrange interpolation function.
Keywords: Secton method, interpolation, nonlinear function, numerical solution
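The derivative-free idea can be sketched with a classical secant update applied to a Lagrange interpolation function that exists only as computational steps, so no explicit derivative is needed. This is a generic illustration, not the paper's Secton implementation; the interpolation points and tolerances are illustrative.

```python
# Root finding on a Lagrange interpolation function with a secant-style,
# derivative-free iteration. Points and tolerances are illustrative.

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def secant_root(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant update: x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0.0:
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

# points sampled from f(x) = x^2 - 2; the quadratic interpolant reproduces it
xs, ys = [0.0, 1.0, 2.0], [-2.0, -1.0, 2.0]
root = secant_root(lambda x: lagrange(xs, ys, x), 1.0, 2.0)
# root converges to sqrt(2)
```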
Procedia PDF Downloads 379
18766 The Classification Accuracy of Finance Data through Holder Functions
Authors: Yeliz Karaca, Carlo Cattani
Abstract:
This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset covering 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America and South America) have been examined. These are the countries most affected by the attributes related to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the classification accuracy outcomes have been compared with respect to the attributes that affect the countries' financial development. This study has revealed, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).
Keywords: artificial neural networks, finance data, Holder regularity, multifractals
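The local Holder exponent idea can be sketched with a pointwise estimate: fit the scaling |f(t0 + h) - f(t0)| ~ C * h**alpha in log-log coordinates. This is a minimal illustration on a known test function, not the study's multifractal pipeline.

```python
import math

# Minimal sketch: estimate a pointwise Holder exponent alpha at t0 from
# the oscillation scaling |f(t0 + h) - f(t0)| ~ C * h**alpha, via a
# least-squares slope in log-log coordinates.

def holder_exponent(f, t0, scales):
    logs_h = [math.log(h) for h in scales]
    logs_osc = [math.log(abs(f(t0 + h) - f(t0))) for h in scales]
    n = len(scales)
    mh = sum(logs_h) / n
    mo = sum(logs_osc) / n
    num = sum((lh - mh) * (lo - mo) for lh, lo in zip(logs_h, logs_osc))
    den = sum((lh - mh) ** 2 for lh in logs_h)
    return num / den   # slope of the log-log fit = alpha

# f(t) = sqrt(t) has Holder exponent 1/2 at t0 = 0
alpha = holder_exponent(math.sqrt, 0.0, [2.0 ** -k for k in range(4, 12)])
# alpha comes out as 0.5
```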
Procedia PDF Downloads 246
18765 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy
Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon
Abstract:
Purpose: Inorganic scintillating dosimetry is a recent and promising technique for solving several dosimetric issues and providing quality assurance in radiation therapy. Despite several advantages, the major issue in using scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this research work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment, ensuring a Cerenkov-free signal and the best match between the delivered and prescribed doses during treatment. Methods: A simple, small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2x10-6 mm3 was developed. A prototype of the dose verification system has been introduced, based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and on a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). All measurements were performed in IBA water tank phantoms, following the international Technical Reports Series recommendations (TRS 381) for radiotherapy and the TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters, such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte Carlo (MC) simulations, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization.
The detector provides a fully linear response with dose over the 4 cGy to 800 cGy range, independently of the field size, from 5 x 5 cm² down to 0.5 x 0.5 cm². Excellent repeatability (0.2% variation from the average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that the ISD has superlinear behavior with dose rate (R² = 1) varying from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior, with a build-up maximum depth dose at 15 mm for different small-field irradiations. Field profiles as small as 0.5 x 0.5 cm² have been characterized, and the field cross profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02%, with a very low convolution effect thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, considering energy dependency, measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The scintillating detector proposed in this study shows no Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. Therefore, it is anticipated that the IR-ISD system can be promoted for validation in direct clinical investigations, such as appropriate dose verification and quality control in the Treatment Planning System (TPS).
Keywords: IR scintillating detector, dose measurement, micro-scintillators, Cerenkov effect
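The dose-linearity check reported above can be sketched as a least-squares line through (dose, count) pairs with an R² figure of merit; the dose points follow the stated 4 to 800 cGy range, but the counts are synthetic, not the detector's calibration data.

```python
import numpy as np

# Sketch of a dose-linearity check: fit counts vs dose and compute R^2.
# The response (150 counts/cGy plus an offset) is a hypothetical example.

dose_cgy = np.array([4, 50, 100, 200, 400, 800], dtype=float)
counts = 150.0 * dose_cgy + 20.0        # hypothetical perfectly linear response

slope, intercept = np.polyfit(dose_cgy, counts, 1)
pred = slope * dose_cgy + intercept
ss_res = np.sum((counts - pred) ** 2)
ss_tot = np.sum((counts - counts.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot       # 1.0 for a perfectly linear detector
```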
Procedia PDF Downloads 182
18764 Ductility Spectrum Method for the Design and Verification of Structures
Authors: B. Chikh, L. Moussa, H. Bechtoula, Y. Mehani, A. Zerzour
Abstract:
This study presents a new method, applicable to the evaluation and design of structures, that has been developed and is illustrated by comparison with the capacity spectrum method (CSM, ATC-40). The method uses inelastic spectra and gives peak responses consistent with those obtained from nonlinear time history analysis. Hereafter, this seismic demand assessment method is called the Ductility Spectrum Method (DSM). It is used to estimate the seismic deformation of Single-Degree-Of-Freedom (SDOF) systems based on the Ductility Demand Response Spectrum (DDRS) developed by the author.
Keywords: seismic demand, capacity, inelastic spectra, design and structure
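The ductility-demand quantity underlying a DDRS can be sketched by integrating an elastic-perfectly-plastic SDOF oscillator and reporting mu = max|u| / u_y. The ground motion pulse, period, damping, and yield displacement below are illustrative choices, not the paper's spectra or hysteresis model.

```python
import math

# Sketch: displacement ductility demand of an elastic-perfectly-plastic
# SDOF oscillator under a ground acceleration record. All parameters are
# illustrative.

def ductility_demand(accel, dt, period, zeta, u_y):
    m = 1.0
    k = m * (2.0 * math.pi / period) ** 2
    c = 2.0 * zeta * math.sqrt(k * m)
    f_y = k * u_y                       # yield force of the spring
    u, v, fs, u_max = 0.0, 0.0, 0.0, 0.0
    for ag in accel:
        # explicit (symplectic Euler) integration step
        a = (-m * ag - c * v - fs) / m
        v += a * dt
        du = v * dt
        u += du
        fs = max(-f_y, min(f_y, fs + k * du))   # elastic-perfectly-plastic spring
        u_max = max(u_max, abs(u))
    return u_max / u_y

# half-sine ground acceleration pulse (0.5 s, 2 m/s^2 peak), then free vibration
dt = 0.005
accel = [2.0 * math.sin(math.pi * i * dt / 0.5) if i * dt < 0.5 else 0.0
         for i in range(600)]
mu = ductility_demand(accel, dt, period=0.5, zeta=0.05, u_y=0.01)
# mu > 1 indicates inelastic (post-yield) demand
```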
Procedia PDF Downloads 395
18763 Top-Down Construction Method in Concrete Structures: Advantages and Disadvantages of This Construction Method
Authors: Hadi Rouhi Belvirdi
Abstract:
The construction of underground structures using the traditional method, which begins with excavation and the construction of the foundation of the underground structure, continues with the construction of the main structure from the ground up, and concludes with the completion of the final ceiling, is known as the Bottom-Up Method. In contrast, there is an advanced technique called the Top-Down Method, which in recent years has practically replaced the traditional construction method in large projects in industrialized countries. Unlike the traditional approach, this method starts with the construction of the surrounding walls, columns, and final ceiling and is completed with the excavation and construction of the foundation of the underground structure. The most significant advantages of this method include the elimination or minimization of formwork surfaces, the removal of temporary bracing during excavation, the provision of some traffic facilities during construction, and its applicability in confined, high-traffic urban spaces. Despite these numerous advantages, there is unfortunately still insufficient awareness of this method in our country, to the extent that it can be confidently stated that most stakeholders in the construction industry are unaware that such a construction method exists. It can nevertheless serve as a very important execution option alongside other conventional methods in the construction of underground structures. Therefore, given the extensive practical capabilities of this method, this article aims to present a methodology for constructing underground structures based on the aforementioned advanced method, to examine its advantages and limitations and their impact on time and cost, and to discuss its application in urban spaces.
Finally, some underground structures on the Ahvaz urban rail that, to the best of our knowledge, are being implemented using this advanced method will be introduced.
Keywords: top-down method, bottom-up method, underground structure, construction method
Procedia PDF Downloads 11