Search results for: particle detector

1808 PD Test in Gas Insulated Substation Using UHF Method

Authors: T. Prabakaran

Abstract:

Gas Insulated Substations (GIS) are widely used as important switchgear equipment because of their high reliability, low space requirement, low risk factor and easy maintenance, yet some failures have been reported. Some of these failures are due to the presence of metallic particles inside the GIS compartment. Such defects can be generated during production, maintenance or installation, or can result from ageing of components. The Ultra-High Frequency (UHF) method is used to diagnose the insulation condition of GIS by detecting the PD signals inside it. This paper identifies PD patterns for the free-moving particle defect and the particle-fixed-on-cone defect using the UHF method. As insulation failure usually starts with PD activity, this paper investigates the differences in PD characteristics in SF6 gas for different types of defects. Experimental results show that correct identification of defects can be achieved based on the considered PD characteristics. The method can be applied to prove the quality of assembly work at commissioning, and also on a regular basis after many years in service to detect aged and conducting particles as part of condition-based maintenance.

Keywords: gas insulated substation, partial discharge, free moving particle defect, particle fixed on cone defect, ultra high frequency method

Procedia PDF Downloads 207
1807 Modeling of Particle Reduction and Volatile Compounds Profile during Chocolate Conching by Electronic Nose and Genetic Programming (GP) Based System

Authors: Juzhong Tan, William Kerr

Abstract:

Conching is a critical procedure in chocolate processing, where special flavors develop and the smooth mouthfeel of the chocolate is achieved through particle size reduction of the cocoa mass and other additives. Determination of the particle size and volatile compound profile of the cocoa mass is therefore important for chocolate manufacturers to ensure the quality of chocolate products. Currently, precise particle size measurement is usually done by laser scattering, which is expensive and inaccessible to small and medium-size chocolate manufacturers, while other alternatives, such as micrometers and microscopy, cannot provide accurate measurements and yield little information. Volatile compound analysis of cocoa during conching has similar problems due to its high cost and limited accessibility. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was installed in a conching machine and used to monitor the volatile compound profile of chocolate during conching. A model correlating the volatile compound profiles, along with factors including cocoa content, sugar content and conching temperature, to the particle size of chocolate particles was established by genetic programming. The model was used to predict the particle size reduction of chocolates with different cocoa mass to sugar ratios (1:2, 1:1, 1.5:1, 2:1) at eight conching times (15 min, 30 min, 1 h, 1.5 h, 2 h, 4 h, 8 h, and 24 h), and the predictions were compared to laser scattering measurements of the same chocolate samples. 91.3% of the predictions were within ±5% of the laser scattering measurement, and 99.3% were within ±10%.
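
As a quick illustration of how the quoted accuracy figures can be computed, the sketch below checks what fraction of model predictions fall within a relative tolerance band of the reference measurement; the prediction and measurement arrays are hypothetical stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical stand-ins: GP model predictions and laser-scattering
# reference measurements of median particle size (micrometres).
predicted = np.array([28.1, 24.5, 19.8, 16.2, 14.9])
measured  = np.array([27.0, 25.1, 20.9, 16.0, 14.2])

# Fraction of predictions within a relative tolerance of the reference
# (the abstract reports 91.3% for +/-5% and 99.3% for +/-10%).
for tol in (0.05, 0.10):
    within = np.mean(np.abs(predicted - measured) / measured <= tol)
    print(f"within +/-{tol:.0%}: {within:.1%}")
```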

Keywords: cocoa bean, conching, electronic nose, genetic programming

Procedia PDF Downloads 222
1806 A Model of Foam Density Prediction for Expanded Perlite Composites

Authors: M. Arifuzzaman, H. S. Kim

Abstract:

Multiple sets of variables associated with expanded perlite particle consolidation in foam manufacturing were analyzed to develop a model for predicting perlite foam density. The consolidation of perlite particles based on the flotation method and compaction involves numerous variables leading to the final perlite foam density. The variables include binder content, compaction ratio, perlite particle size, various perlite particle densities and porosities, and various volumes of perlite at different stages of the process. The developed model was found to be useful not only for predicting foam density but also for optimizing the trade-off between compaction ratio and binder content to achieve a desired density. Experimental verification was conducted using a range of foam densities (0.15-0.5 g/cm³) produced with a range of compaction ratios (1.5-3.5), a range of sodium silicate contents in dilution (0.05-0.35 g/ml), a range of expanded perlite particle sizes (1-4 mm), and various perlite densities (skeletal, material, bulk, and envelope). Close agreement between predictions and experimental results was found.

Keywords: expanded perlite, flotation method, foam density, model, prediction, sodium silicate

Procedia PDF Downloads 382
1805 Effect of Roughness and Microstructure on Tribological Behaviour of 35NCD16 Steel

Authors: A. Jourani, C. Trevisiol, S. Bouvier

Abstract:

The aim of this work is to study the coupled effect of microstructure and surface roughness on friction coefficient, wear resistance and wear mechanisms. Friction tests on 35NCD16 steel are performed under different normal loads (50-110 N) in a pin-on-plane configuration under cyclic sliding, with abrasive silicon carbide grains ranging from 35 µm to 200 µm. To vary hardness and microstructure, the specimens are subjected to water quenching and tempering at various temperatures from 200°C to 600°C. The evolution of microstructures and the wear mechanisms of worn surfaces are analyzed using scanning electron microscopy (SEM). For a given microstructure and hardness, the friction coefficient decreases with increasing normal load and decreasing abrasive particle size. The wear rate increases with increasing normal load and abrasive particle size. The results also reveal a critical hardness H_crit around 430 HV which maximizes the friction coefficient and wear rate. This corresponds to a microstructure transition from martensite laths to carbides and equiaxed grains, for tempering around 400°C. Above H_crit, the friction coefficient and the amount of material loss decrease with increasing hardness and martensite volume fraction. This study also shows that the debris size and the spacing between abrasive particles decrease with a reduction in particle size. The coarsest abrasive grains lost their cutting edges, accompanied by particle damage and empty space due to particle detachment from the resin matrix. The compact packing of finer abrasive papers implies less particle detachment and facilitates clogging and the transition from abrasive to adhesive wear.

Keywords: martensite, microstructure, friction, wear, surface roughness

Procedia PDF Downloads 138
1804 Extending Early High Energy Physics Studies with a Tri-Preon Model

Authors: Peter J. Riley

Abstract:

Introductory courses in High Energy Physics (HEP) can be extended with the Tri-Preon (TP) model to both supplement and challenge the Standard Model (SM) theory. TP supplements the SM by simplifying the tracking of conserved quantum numbers at an interaction vertex; e.g., the lepton number can be seen as a di-preon current. TP challenges the SM by extending the particle families to three generations of particle triplets for leptons, quarks, and weak bosons. Extensive examples are discussed at an introductory level in six arXiv publications, covering supersymmetry, hypercolor, and the Higgs. Interesting exercises include pion decay, kaon-antikaon mixing, neutrino oscillations, and K+ decay to muons. It is a revealing exercise for students to weigh the pros and cons of parallel theories at an early stage in their HEP journey.

Keywords: HEP, particle physics, standard model, Tri-Preon model

Procedia PDF Downloads 50
1803 A Hybrid Particle Swarm Optimization-Nelder-Mead Algorithm (PSO-NM) for Nelson-Siegel-Svensson Calibration

Authors: Sofia Ayouche, Rachid Ellaia, Rajae Aboulaich

Abstract:

Today, insurers may use the yield curve as an indicator for evaluating the profit or performance of their portfolios; therefore, they model it with a class of models able to fit and forecast the future term structure of interest rates. This class is the Nelson-Siegel-Svensson (NSS) model. Unfortunately, many authors have reported difficulties when calibrating the model, because the optimization problem is not convex and has multiple local optima. In this context, we implement a hybrid Particle Swarm Optimization and Nelder-Mead algorithm in order to minimize, by the least squares method, the difference between the zero-coupon curve and the NSS curve.
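
A minimal sketch of this hybrid calibration in Python is given below: a basic PSO explores the six-dimensional NSS parameter space globally, and SciPy's Nelder-Mead then refines the best particle. The market rates, bounds and PSO hyperparameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.optimize import minimize

def nss_yield(tau, beta0, beta1, beta2, beta3, lam1, lam2):
    """Nelson-Siegel-Svensson zero-coupon yield at maturity tau (years)."""
    t1, t2 = tau / lam1, tau / lam2
    f1 = (1 - np.exp(-t1)) / t1
    f2 = f1 - np.exp(-t1)
    f3 = (1 - np.exp(-t2)) / t2 - np.exp(-t2)
    return beta0 + beta1 * f1 + beta2 * f2 + beta3 * f3

# Hypothetical market zero-coupon rates (maturity in years, rate in %)
tau = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])
y_mkt = np.array([1.1, 1.2, 1.4, 1.8, 2.1, 2.6, 2.9, 3.2, 3.6, 3.7])

def sse(p):
    """Least-squares objective; penalize non-positive decay parameters."""
    if p[4] <= 0 or p[5] <= 0:
        return 1e10
    return np.sum((nss_yield(tau, *p) - y_mkt) ** 2)

# --- Stage 1: global exploration with a basic PSO ---
rng = np.random.default_rng(0)
n_part, n_iter, dim = 40, 200, 6
lo = np.array([0, -10, -10, -10, 0.1, 0.1])
hi = np.array([10, 10, 10, 10, 10, 20])
x = rng.uniform(lo, hi, (n_part, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([sse(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([sse(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

# --- Stage 2: local refinement with Nelder-Mead ---
res = minimize(sse, gbest, method="Nelder-Mead")
print("calibrated parameters:", res.x, " SSE:", res.fun)
```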

Keywords: optimization, zero-coupon curve, Nelson-Siegel-Svensson, particle swarm optimization, Nelder-Mead algorithm

Procedia PDF Downloads 404
1802 Liquid Chromatographic Determination of Alprazolam with ACE Inhibitors in Bulk, Respective Pharmaceutical Products and Human Serum

Authors: Saeeda Nadir Ali, Najma Sultana, Muhammad Saeed Arayne, Amtul Qayoom

Abstract:

The present study describes a simple and fast liquid chromatographic method using an ultraviolet detector for simultaneous determination of the anxiety-relief medicine alprazolam with ACE inhibitors, i.e., lisinopril, captopril and enalapril, employing a Purospher Star C18 column (25 cm × 0.46 cm, 5 µm). Separation was achieved within 5 min at ambient temperature using methanol:water (8:2 v/v) adjusted to pH 2.9, monitoring the detector response at 220 nm. Optimum parameters were set up as per ICH (2006) guidelines. The calibration range was found to be 0.312-10 µg mL⁻¹ for alprazolam and 0.625-20 µg mL⁻¹ for all the ACE inhibitors, with correlation coefficients > 0.998 and detection limits of 85, 37, 68 and 32 ng mL⁻¹ for lisinopril, captopril, enalapril and alprazolam, respectively. Intra-day and inter-day precision and accuracy of the assay were in the acceptable ranges of 0.05-1.62% RSD and 98.85-100.76% recovery. The method was determined to be robust and effectively useful for the estimation of the studied drugs in dosage formulations and human serum without interference from excipients or serum components.

Keywords: alprazolam, ACE inhibitors, RP HPLC, serum

Procedia PDF Downloads 488
1801 3-D Modeling of Particle Size Reduction from Micro to Nano Scale Using Finite Difference Method

Authors: Himanshu Singh, Rishi Kant, Shantanu Bhattacharya

Abstract:

This paper adopts a top-down approach to mathematical modeling to predict the size reduction from micro to nano scale through persistent etching. The process is simulated using a finite difference approach. Previously, various researchers have simulated the etching process for 1-D and 2-D substrates. The process consists of two parts: 1) convection-diffusion in the etchant domain; 2) chemical reaction at the surface of the particle. Since the process requires analysis along a moving boundary, the partial differential equations involved cannot be solved using conventional methods. In 1-D, this problem is very similar to Stefan's problem of the moving ice-water boundary. A fixed-grid method using the finite volume method is very popular for modelling etching on one- and two-dimensional substrates; other popular approaches include the moving grid method and the level set method. In the present work, the finite difference method was used to discretize the spherical diffusion equation. Due to the symmetrical distribution of the etchant, the angular terms in the equation can be neglected. The concentration is assumed to be constant at the outer boundary. At the particle boundary, the concentration of the etchant is assumed to be zero, since the rate of reaction is much faster than the rate of diffusion. The rate of reaction is proportional to the velocity of the moving boundary of the particle. Modelling of the above reaction was carried out using Matlab. The initial particle size was taken to be 50 microns. The density, molecular weight and diffusion coefficient of the substrate were taken as 2.1 g/cm³, 60 g/mol and 10⁻⁵ cm²/s, respectively. The etch rate was found to decline initially and gradually become constant at 0.02 µm/s (1.2 µm/min). The concentration profile was plotted against radial position at different time intervals. Initially, a sudden drop is observed at the particle boundary due to the high etch rate; this change becomes more gradual with time as the etch rate declines.
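
A minimal explicit finite-difference sketch of the scheme just described (spherical diffusion with the angular terms dropped, zero etchant concentration at the particle surface, a fixed concentration at the outer boundary, and a Stefan condition moving the interface) is given below in Python rather than Matlab. The bulk concentration, far-field radius and grid settings are assumptions chosen to keep the demo fast; the substrate properties are those quoted in the abstract.

```python
import numpy as np

# Substrate properties from the abstract (CGS units)
D   = 1e-5      # etchant diffusion coefficient, cm^2/s
rho = 2.1       # substrate density, g/cm^3
M   = 60.0      # substrate molecular weight, g/mol
R0  = 25e-4     # initial particle radius (50 um diameter), cm
C_b = 1e-3      # bulk etchant concentration, mol/cm^3 (assumed)

N     = 200
r_out = 10 * R0                    # far-field boundary (assumed)
r     = np.linspace(0.0, r_out, N)
dr    = r[1] - r[0]
dt    = 0.2 * dr**2 / D            # explicit stability limit
C     = np.full(N, C_b)

R, t = R0, 0.0
while R > 0.1 * R0:
    i0 = int(round(R / dr))        # node nearest the particle surface
    C[: i0 + 1] = 0.0              # reaction much faster than diffusion
    C[-1] = C_b                    # constant far-field concentration
    # explicit update of dC/dt = D * (C'' + (2/r) C') in the fluid region
    i = np.arange(i0 + 1, N - 1)
    C[i] = C[i] + dt * D * ((C[i + 1] - 2 * C[i] + C[i - 1]) / dr**2
                            + (2.0 / r[i]) * (C[i + 1] - C[i - 1]) / (2 * dr))
    # Stefan condition: the surface recedes in proportion to the surface flux
    grad = (C[i0 + 1] - C[i0]) / dr
    R -= dt * (M / rho) * D * grad
    t += dt

print(f"radius reduced to {R * 1e4:.2f} um after {t:.1f} s of etching")
```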

Keywords: particle size reduction, micromixer, FDM modelling, wet etching

Procedia PDF Downloads 400
1800 Large Eddy Simulation of Particle Clouds Using Open-Source CFD

Authors: Ruo-Qian Wang

Abstract:

Open-source CFD has become increasingly popular and promising. Recent progress in multiphase flow enables new CFD applications, providing an economic and flexible research tool for complex flow problems. Our numerical study uses four-way coupled Euler-Lagrangian Large-Eddy Simulations to resolve particle cloud dynamics with OpenFOAM and CFDEM: the volume-fraction-weighted Navier-Stokes equations are solved numerically for the fluid-phase motion, the solid-phase motion is addressed by Lagrangian tracking of every single particle, and total momentum is conserved by fluid-solid inter-phase coupling. A grid convergence test was performed, showing that the current mesh resolution is appropriate. We then validated the code by comparing numerical results with experiments in terms of particle cloud settlement and growth; good agreement was obtained, demonstrating the reliability of the present numerical schemes. The time and height at phase separation were defined and analyzed for a variety of initial release conditions, and empirical formulas were derived to fit the results.

Keywords: four-way coupling, dredging, land reclamation, multiphase flows, oil spill

Procedia PDF Downloads 400
1799 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as a 'Quantum Leap'

Authors: Anthony Coogan

Abstract:

Born's probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. The author's suggestion is that the actual experimental results were not of a single electron; rather, they were groups of reflected x-ray photons. The vertical waveform used by Schrödinger in his particle-in-a-box theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart: separate panels would represent different energy levels, and the proposed bar chart would be populated by reflected photons. Expanding on these basic ideas: part of Schrödinger's particle-in-a-box theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate rather than instantaneously. However, there may be one notable exception: the Uncertainty Principle was supposedly derived from this theory, so may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevalent belief was that such waveforms did not exist. Complex waveforms representing a particle are usually assumed to be continuous. The actual observations made were x-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born's perspective, doing similar work in the years in question, 1926-7, he would also have considered a single electron, leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events. Born's interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g., collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moved in the observer's direction after the electron had moved away. Astronomers may say that they 'look out into the universe', but this is logic opposed to the views of Newton and Hooke and of many observers such as Romer, in that light carries information from a source or reflector to an observer, rather than the reverse. Conclusion: due to the controversial nature of these ideas, especially their implications for the nature of complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.

Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle

Procedia PDF Downloads 169
1798 Optimal Allocation of Distributed Generation Sources for Loss Reduction and Voltage Profile Improvement by Using Particle Swarm Optimization

Authors: Muhammad Zaheer Babar, Amer Kashif, Muhammad Rizwan Javed

Abstract:

Nowadays, distributed generation integration is the best way to overcome increasing load demand, and optimal allocation of distributed generation plays a vital role in reducing system losses and improving the voltage profile. In this paper, a metaheuristic technique is proposed for the allocation of DG in order to reduce power losses and improve the voltage profile. The proposed technique is based on Multi-Objective Particle Swarm Optimization; fewer control parameters are needed in this algorithm, and a modification is made to the search space of PSO. The effectiveness of the proposed technique is tested on the IEEE 33-bus test system, with both single-DG and multiple-DG scenarios considered. The proposed method is more effective than other metaheuristic techniques and gives better results regarding system losses and voltage profile.

Keywords: Distributed generation (DG), Multi Objective Particle Swarm Optimization (MOPSO), particle swarm optimization (PSO), IEEE standard Test System

Procedia PDF Downloads 421
1797 Estimation of Particle Size Distribution Using Magnetization Data

Authors: Navneet Kaur, S. D. Tiwari

Abstract:

Magnetic nanoparticles possess fascinating properties which make their behavior unique in comparison to the corresponding bulk materials. Superparamagnetism is one such interesting phenomenon, exhibited only by small particles of magnetic materials. In this state, the thermal energy of the particles becomes larger than their magnetic anisotropy energy, so the particle magnetic moment vectors fluctuate between states of minimum energy. This situation is similar to the paramagnetism of non-interacting ions and is termed superparamagnetism. The magnetization of such systems has been described by the Langevin function, but the fit parameters estimated this way are found to be unphysical, because the particle size distribution is not taken into consideration. In this work, an analysis of magnetization data on NiO nanoparticles is presented that includes the effect of the particle size distribution. NiO nanoparticles of two different sizes are prepared by heating freshly synthesized Ni(OH)₂ at different temperatures. Room-temperature X-ray diffraction patterns confirm the formation of a single phase of NiO. The diffraction lines are quite broad, indicating the nanocrystalline nature of the samples; the average crystallite sizes are estimated to be about 6 and 8 nm. The samples are also characterized by transmission electron microscopy. The magnetization of both samples is measured as a function of temperature and applied magnetic field. Zero-field-cooled and field-cooled magnetization are measured as a function of temperature to determine the bifurcation temperature, and the magnetization is also measured at several temperatures in the superparamagnetic region. The data are fitted to an appropriate expression incorporating a distribution of particle sizes, following a least-squares fit procedure; the computer codes are written in Python. The presented analysis is found to be very useful for estimating the particle size distribution present in the samples, and the estimated distributions are compared with those determined from transmission electron micrographs.
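
The sketch below illustrates the kind of fit described: a Langevin response averaged over a lognormal distribution of particle moments, fitted by least squares. The synthetic data, the lognormal form and all parameter values are illustrative assumptions, not the NiO results.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 1.380649e-16                  # Boltzmann constant, erg/K (CGS)

def langevin(x):
    x = np.where(np.abs(x) < 1e-8, 1e-8, x)   # avoid 0/0 near zero field
    return 1.0 / np.tanh(x) - 1.0 / x

def m_model(H, Ms, mu_m, s, T=300.0):
    """Superparamagnetic magnetization averaged over a lognormal
    distribution of particle moments (median mu_m, log-width s)."""
    mu = np.geomspace(mu_m / 50, mu_m * 50, 256)        # moment grid, emu
    f = np.exp(-np.log(mu / mu_m) ** 2 / (2 * s ** 2)) / (mu * s * np.sqrt(2 * np.pi))
    w = f * mu * np.gradient(mu)   # each particle contributes ~ its moment
    L = langevin(np.outer(H, mu) / (kB * T))
    return Ms * (L @ w) / w.sum()

# Synthetic "measured" curve at 300 K (parameters are illustrative)
H = np.linspace(100.0, 5e4, 60)                         # applied field, Oe
true = (2.0, 1.0e-17, 0.6)                              # Ms, mu_m (emu), s
rng = np.random.default_rng(1)
M_data = m_model(H, *true) + rng.normal(0.0, 0.01, H.size)

# Least-squares fit of the weighted-Langevin expression
popt, _ = curve_fit(m_model, H, M_data, p0=(1.5, 5e-18, 0.4))
print("fitted Ms, mu_m, s:", popt)
```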

Keywords: anisotropy, magnetization, nanoparticles, superparamagnetism

Procedia PDF Downloads 104
1796 Reactive Fabrics for Chemical Warfare Agent Decomposition Using Particle Crystallization

Authors: Myungkyu Park, Minkun Kim, Sunghoon Kim, Samgon Ryu

Abstract:

Recently, research on reactive fabrics which can decompose CWAs (Chemical Warfare Agents) has been performed actively. The level of CWA decomposition achieved under various environmental conditions is one of the critical factors in applicability as a protective material for NBC (Nuclear, Biological, and Chemical) protective clothing. In this study, results of CWA decomposition performance tests on reactive fabrics made of electrospun webs and reactive particles are presented. Currently, the MOF (metal-organic framework) UiO-66-NH₂ is frequently studied as a material for decomposing CWAs, especially the blister agent HD [bis(2-chloroethyl) sulfide]. When the decomposition rate was tested with an electrospun web made of PVB (polyvinyl butyral) polymer and UiO-66-NH₂ particles, much higher protective performance was obtained than in cases where other particles were applied. Furthermore, if a repellent surface fabric is added on the outer side of the reactive material as a component of the protective fabric, the HD decomposition performance of the layer-by-layer reactive fabric can approach the level of current NBC protective fabrics. The reactive fabric used in this study was manufactured by electrospinning a polymer containing the reactive UiO-66-NH₂ particles, followed by a second crystallization step performed on the fiber web in solvent systems. Three kinds of polymer materials were tried, but PVB was most suitable as the electrospinning fiber polymer considering the shape of the product. The particle density on the fiber web and the HD decomposition rate are enhanced by the secondary crystallization compared with unprocessed webs: the amount of HD penetration in a 24-h AVLAG (Aerosol Vapor Liquid Assessment Group) swatch test through the reactive fabrics with and without secondary crystallization is 24 and 146 μg/cm², respectively. Even though all of the reactive fiber webs for this test were combined with a repellent surface layer on the outer side of the swatch, the effect of the secondary crystallization of the particles is remarkable.

Keywords: CWA (Chemical Warfare Agent), gas decomposition, particle growth, protective clothing, reactive fabric, swatch test

Procedia PDF Downloads 250
1795 Insights into Particle Dispersion, Agglomeration and Deposition in Turbulent Channel Flow

Authors: Mohammad Afkhami, Ali Hassanpour, Michael Fairweather

Abstract:

The work described in this paper was undertaken to gain insight into fundamental aspects of turbulent gas-particle flows with relevance to processes employed in a wide range of applications, such as oil and gas flow assurance in pipes, powder dispersion from dry powder inhalers, and particle resuspension in nuclear waste ponds, to name but a few. In particular, the influence of particle interaction and fluid phase behavior in turbulent flow on particle dispersion in a horizontal channel is investigated. The mathematical modeling technique used is based on the large eddy simulation (LES) methodology embodied in the commercial CFD code FLUENT, with flow solutions provided by this approach coupled to a second commercial code, EDEM, based on the discrete element method (DEM) which is used for the prediction of particle motion and interaction. The results generated by LES for the fluid phase have been validated against direct numerical simulations (DNS) for three different channel flows with shear Reynolds numbers, Reτ = 150, 300 and 590. Overall, the LES shows good agreement, with mean velocities and normal and shear stresses matching those of the DNS in both magnitude and position. The research work has focused on the prediction of those conditions favoring particle aggregation and deposition within turbulent flows. Simulations have been carried out to investigate the effects of particle size, density and concentration on particle agglomeration. Furthermore, particles with different surface properties have been simulated in three channel flows with different levels of flow turbulence, achieved by increasing the Reynolds number of the flow. The simulations mimic the conditions of two-phase, fluid-solid flows frequently encountered in domestic, commercial and industrial applications, for example, air conditioning and refrigeration units, heat exchangers, oil and gas suction and pressure lines. The particle size, density, surface energy and volume fractions selected are 45.6, 102 and 150 µm, 250, 1000 and 2159 kg m⁻³, 50, 500, and 5000 mJ m⁻² and 7.84 × 10⁻⁶, 2.8 × 10⁻⁵, and 1 × 10⁻⁴, respectively; such particle properties are associated with particles found in soil, as well as metals and oxides prevalent in turbulent bounded fluid-solid flows due to erosion and corrosion of inner pipe walls. It has been found that the turbulence structure of the flow dominates the motion of the particles, creating particle-particle interactions, with most of these interactions taking place at locations close to the channel walls and in regions of high turbulence where their agglomeration is aided both by the high levels of turbulence and the high concentration of particles. A positive relationship between particle surface energy, concentration, size and density, and agglomeration was observed. Moreover, the results derived for the three Reynolds numbers considered show that the rate of agglomeration is strongly influenced for high surface energy particles by, and increases with, the intensity of the flow turbulence. In contrast, for lower surface energy particles, the rate of agglomeration diminishes with an increase in flow turbulence intensity.

Keywords: agglomeration, channel flow, DEM, LES, turbulence

Procedia PDF Downloads 291
1794 Influence of Modified and Unmodified Cow Bone on the Mechanical Properties of Reinforced Polyester Composites for Biomedical Applications

Authors: I. O. Oladele, J. A. Omotoyinbo, A. M. Okoro, A. G. Okikiola, J. L. Olajide

Abstract:

This work investigated comparatively the effects of modified and unmodified cow bone particles on the mechanical properties of polyester matrix composites, in order to assess the suitability of the materials as biomaterials. Cow bones were procured from an abattoir, sun-dried for 4 weeks and crushed. The crushed bones were divided into two parts: one part was turned to ash while the other was pulverized with a laboratory ball mill, before the two grades were sieved using a 75 µm sieve. Bone-ash and bone-particle reinforced tensile and flexural composite samples were developed from pre-determined proportions of 2, 4, 6, and 8 wt%. After curing, the samples were stripped from the moulds and allowed to cure further for 3 weeks before tensile and flexural tests were performed on them. The tensile test results showed that the 8 wt% bone particle reinforced polyester composites had the higher tensile properties, except for the modulus of elasticity, where the 8 wt% bone ash reinforced composites had the higher value; in the flexural tests, the bone ash reinforced composites demonstrated the best flexural properties. The results show that these materials are structurally compatible.

Keywords: biomedical, composites, cow bone, mechanical properties, polyester, reinforcement

Procedia PDF Downloads 249
1793 A Unified Model for Predicting Particle Settling Velocity in Pipe, Annulus and Fracture

Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li

Abstract:

Transport of solid particles through the drill pipe, the drill string-hole annulus and hydraulically generated fractures is an important dynamic process encountered in oil and gas well drilling and completion operations. Unlike particle transport in infinite space, the transport of cuttings, proppants and formation sand is hindered by finite boundaries. Therefore, an accurate description of particle transport behavior under the bounded-wall conditions encountered in drilling and hydraulic fracturing operations is needed to improve drilling safety and efficiency. In this study, particle settling experiments were carried out to investigate particle settling behavior in a pipe, an annulus and between parallel plates filled with power-law fluids. The experimental conditions covered particle Reynolds numbers of 0.01-123.87, dimensionless diameters of 0.20-0.80 and fluid flow behavior indices of 0.48-0.69. First, the wall effect of the annulus is revealed by analyzing the settling process of particles in annular geometries with variable inner pipe diameter. Then, the geometric continuity among the pipe, annulus and parallel plates is established by introducing the ratio of inner to outer diameter of the annulus. Further, a unified dimensionless diameter is defined to capture the relationship among the three geometries in terms of the wall effect. In addition, a dimensionless term independent of the settling velocity is introduced to establish a unified explicit settling velocity model applicable to pipes, annuli and fractures, with a mean relative error of 8.71%. An example case study demonstrates the application of the unified model for predicting particle settling velocity. This paper is the first study of annulus wall effects based on the geometric continuity concept, and the unified model presented here will provide theoretical guidance for improved hydraulic design of cuttings transport, proppant placement and sand management operations.

Keywords: wall effect, particle settling velocity, cuttings transport, proppant transport in fracture

Procedia PDF Downloads 133
1792 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction

Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey

Abstract:

In this paper, we propose a novel approach combining neural networks and Particle Swarm Optimization for software reliability prediction. We first explain how to apply a compound function in a neural network so as to derive a Flexible Logistic (S-shaped) Growth Curve (FLGC) model. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid becoming trapped in local minima, we apply the Particle Swarm Optimization method to train the proposed model on failure test data sets, driving the model with computational-intelligence-based modeling; the proposed model thus becomes a Neuro-Particle Swarm Optimization (NPSO) model. We test the model with different inertia weights for the particle position and velocity updates, and the results obtained with the best inertia weight are compared with a personal-best-oriented PSO (pPSO), which helps choose the local best in the network neighborhood. The applicability of the proposed model is demonstrated on a real failure data set. The results obtained from the experiments show that the proposed model has fairly accurate prediction capability for software reliability.
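
For illustration, the sketch below fits a generic S-shaped (logistic) cumulative-failure curve to a hypothetical test data set by ordinary least squares; the paper itself trains the FLGC model with PSO to avoid local minima, so this shows only the curve-fitting idea, not the NPSO algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(t, a, b, c):
    """Generic S-shaped (logistic) cumulative-failure curve:
    a = total expected failures, b = growth rate, c = shape offset."""
    return a / (1.0 + c * np.exp(-b * t))

# Hypothetical cumulative failure counts over 20 weeks of testing
t = np.arange(1, 21)
m = np.array([  3,   6,  11,  19,  30,  44,  60,  75,  88,  98,
              106, 111, 115, 118, 120, 121, 122, 123, 123, 124])

popt, _ = curve_fit(s_curve, t, m, p0=(130, 0.5, 30), maxfev=10000)
a, b, c = popt
print(f"expected total failures a = {a:.1f}, rate b = {b:.3f}, shape c = {c:.1f}")
print("predicted cumulative failures by week 25:", round(s_curve(25, *popt), 1))
```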

Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization

Procedia PDF Downloads 321
1791 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient μ(E) of any substance, for energy E, is the sum of contributions from Compton scattering, μ_Com(E), and the photoelectric effect, μ_Ph(E). In terms of the electron density ρe and the effective atomic number Zeff, μ_Com(E) is proportional to ρe·f_KN(E), while μ_Ph(E) is proportional to (ρe·Zeff^x)/E^y, with f_KN(E) being the Klein-Nishina formula and x and y the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since μ_Com(E) and μ_Ph(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <μ(V)> = <μw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging <...> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of μ(E) with respect to X and Y implies that (a) <μ(V)> is a linear combination of X and Y, and (b) for inversion, X and Y can be written as linear combinations of two independent observations <μ(V1)> and <μ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used for cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum superposed on aluminium filters of different thickness l_Al, with 7 mm ≤ l_Al ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid-state or GdOS scintillator detector. In the values of X and Y found using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X within 1%; for high-Zeff materials like KOH, Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference in the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials, whereas the effect of the source must be considered an important factor in calculating the coefficients of inversion.
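
The two-voltage inversion amounts to solving a 2x2 linear system. The sketch below works one hypothetical example; the water attenuation values, HU readings and inversion coefficients a(V), b(V) are placeholders standing in for the spectrum- and detector-dependent values tabulated in the paper.

```python
import numpy as np

# HU -> effective attenuation: <mu(V)> = <mu_w(V)> * [1 + HU(V)/1000]
mu_w = {100: 0.170, 140: 0.155}   # water values, cm^-1 (assumed effective)
HU   = {100: 80.0,  140: 55.0}    # sample HU readings (hypothetical)
mu = {V: mu_w[V] * (1 + HU[V] / 1000) for V in (100, 140)}

# Linearity in X = rho_e and Y = rho_e * Zeff^x gives, per voltage,
#   <mu(V)> = a(V) * X + b(V) * Y
# The coefficients below are placeholders for the tabulated, spectrum- and
# detector-dependent values; X and Y come out in relative (water) units.
a = {100: 0.165, 140: 0.152}
b = {100: 0.025, 140: 0.012}

A   = np.array([[a[100], b[100]], [a[140], b[140]]])
rhs = np.array([mu[100], mu[140]])
X, Y = np.linalg.solve(A, rhs)
print(f"X = rho_e ~ {X:.3f}   Y = rho_e * Zeff^x ~ {Y:.3f}")
```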

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 372
1790 Micro- and Nanoparticle Transport and Deposition in Elliptic Obstructed Channels by Lattice Boltzmann Method

Authors: Salman Piri

Abstract:

In this study, a two-dimensional lattice Boltzmann method (LBM) was used for the numerical simulation of fluid flow in a channel, with the Lagrangian method used for particle tracking under one-way coupling. Three hundred spherical particles of specified diameters were released at the channel entry, and an elliptical object was placed in the channel as a flow obstruction. The effects of gravity, the drag force, the Saffman lift and Brownian forces were included in the particle motion. The effects of the geometrical parameter (ellipse aspect ratio) and the flow characteristic (Reynolds number) on the transport and deposition of particles were surveyed, and particle diameters between 0.01 and 10 µm were investigated. Results indicated that at small Reynolds numbers, more inertial and gravitational trapping occurred on the obstacle surface for particles with larger diameters, whereas for nanoparticles, influenced by Brownian diffusion and the vortices behind the obstacle, the inertial and gravitational mechanisms were insignificant and diffusion was the dominant deposition mechanism. In addition, at Reynolds numbers larger than 400, there was no significant difference between the deposition of finer and larger particles. Also, at higher aspect ratios of the ellipse, more inertial trapping occurred for particles of larger diameter (10 µm), while at lower aspect ratios, interception and gravitational mechanisms were dominant.
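
A minimal sketch of the Lagrangian side of such a simulation is shown below: one particle is advanced under Stokes drag, gravity and a Brownian kick in a prescribed channel flow (standing in for the LBM velocity field). Saffman lift is omitted for brevity, and all property values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fluid/particle properties (SI units)
mu_f  = 1.8e-5          # air dynamic viscosity, Pa.s
rho_p = 1000.0          # particle density, kg/m^3
d_p   = 1.0e-6          # particle diameter, m
T     = 293.0           # temperature, K
kB    = 1.380649e-23    # Boltzmann constant, J/K
g     = np.array([0.0, -9.81])

tau_p = rho_p * d_p**2 / (18 * mu_f)         # Stokes relaxation time, s
D_B   = kB * T / (3 * np.pi * mu_f * d_p)    # Brownian diffusivity, m^2/s

def step(xp, vp, u_fluid, dt):
    """One Lagrangian step: Stokes drag + gravity + Brownian kick.
    (Saffman lift, included in the paper, is omitted for brevity.)"""
    vp = vp + dt * ((u_fluid(xp) - vp) / tau_p + g)
    xp = xp + dt * vp + np.sqrt(2 * D_B * dt) * rng.normal(size=2)
    return xp, vp

# Example: release one particle in a parabolic channel profile
h = 1e-3                                               # half-height, m
u = lambda x: np.array([0.1 * (1 - (x[1] / h) ** 2), 0.0])
xp, vp = np.array([0.0, 0.5e-3]), np.zeros(2)
dt = 0.2 * tau_p                                       # stable explicit step
for _ in range(5000):
    xp, vp = step(xp, vp, u, dt)
print("final particle position (m):", xp)
```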

Keywords: ellipse aspect ratio, particle tracking, diffusion, lattice Boltzmann method, Lagrangian particle tracking

Procedia PDF Downloads 53
1789 Artificial Neural Networks Based Calibration Approach for Six-Port Receiver

Authors: Nadia Chagtmi, Nejla Rejab, Noureddine Boulejfen

Abstract:

This paper presents a calibration approach based on artificial neural networks (ANN) to determine the envelope signal (I+jQ) of a six-port based receiver (SPR). Memory effects (also called dynamic behavior) and the nonlinearity introduced by the diode-based power detector are taken into consideration by the ANN. An experimental setup was built to validate the efficiency of this method, and the obtained waveforms confirm its effectiveness. Moreover, the error vector magnitude (EVM) and the mean absolute error (MAE) were calculated in order to test the ANN's ability to achieve I/Q recovery from the output voltage of the power detector. The baseband signal was recovered using the ANN with EVMs no higher than 1% and an MAE no higher than 17.26 for the SPR excited with different types of signals, such as QAM (quadrature amplitude modulation) and LTE (Long Term Evolution) signals.
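
For reference, one common way to compute the two figures of merit quoted above is sketched below (EVM normalized by the RMS of the reference constellation); the 16-QAM symbols and noise level are hypothetical, not the measured SPR data.

```python
import numpy as np

def evm_percent(recovered, reference):
    """Error vector magnitude of recovered I+jQ samples, in percent
    (RMS-normalized definition)."""
    err = recovered - reference
    return 100 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

def mae(recovered, reference):
    """Mean absolute error between recovered and reference envelopes."""
    return np.mean(np.abs(recovered - reference))

# Hypothetical 16-QAM reference symbols and ANN-recovered samples
rng = np.random.default_rng(3)
levels = np.array([-3, -1, 1, 3])
ref = rng.choice(levels, 500) + 1j * rng.choice(levels, 500)
rec = ref + (rng.normal(0, 0.02, 500) + 1j * rng.normal(0, 0.02, 500))

print(f"EVM = {evm_percent(rec, ref):.2f} %   MAE = {mae(rec, ref):.4f}")
```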

Keywords: six-port based receiver, calibration, nonlinearity, memory effect, artificial neural network

Procedia PDF Downloads 39
1788 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required; thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single energy peak and thus could compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when identifying radionuclides and determining their activity concentrations, where high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one first has to perform an adequate full-energy peak (FEP) efficiency calibration of the equipment. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a set of reference calibration sources to cover the broader energy ranges of interest. To overcome these difficulties, many researchers have turned towards software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as these have proven time and time again to be very powerful tools. In the process of creating a reliable model, one has to have well-established and well-described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose; furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of models of two HPGe detectors using the Geant4 toolkit developed by CERN is described, with the goal of improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimental data from measurements of a set of point-like radioactive sources. The results for both detectors displayed good agreement with the experimental data, falling within average statistical uncertainties of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector, within the energy ranges of 59.4−1836.1 keV and 59.4−1212.9 keV, respectively.

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 85
1787 Quantification of Hydrogen Sulfide and Methyl Mercaptan in Air Samples from a Waste Management Facility

Authors: R. F. Vieira, S. A. Figueiredo, O. M. Freitas, V. F. Domingues, C. Delerue-Matos

Abstract:

The presence of sulphur compounds like hydrogen sulphide and mercaptans is one of the reasons why wastewater treatment and waste management are associated with odour emissions. In this context, a quantification method for these compounds helps in optimizing treatment with the goal of their elimination, namely by biofiltration processes. The aim of this study was the development of a method for the quantification of odorous gases in air samples from waste treatment plants. A method based on headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography and flame photometric detection (GC-FPD) was used to analyse H2S and methyl mercaptan (MM). The extraction was carried out with a 75-μm Carboxen-polydimethylsiloxane fiber coating at 22 ºC for 20 min, and analysis was performed on a GC 2010 Plus A from Shimadzu with a sulphur filter detector: splitless mode (0.3 min), with a column temperature program from 60 ºC, increased by 15 ºC/min to 100 ºC (held 2 min). The injector temperature was held at 250 ºC and the detector at 260 ºC. For the calibration curve, a gas dilutor (digital Hovagas G2 Multi-Component Gas Mixer) was used to prepare the standards. This unit had two input connections, one for a stream of the gas to be diluted and another for a stream of nitrogen, and an output connected to a glass bulb. Cylinders of 40 ppm H2S and 50 ppm MM were used. The equipment was programmed to the selected concentration and automatically carried out the dilution into the glass bulb. The mixture was left flowing through the glass bulb for 5 min, and then the extremities were closed. This method allowed calibration in the ranges 1-20 ppm for H2S, and 0.02-0.1 ppm and 1-3.5 ppm for MM. Several quantifications of air samples from the inlet and outlet of a biofilter operating in a waste management facility in the north of Portugal allowed the evaluation of the biofilter's performance.

Keywords: biofiltration, hydrogen sulphide, mercaptans, quantification

Procedia PDF Downloads 445
1786 Design and Simulation of a Radiation Spectrometer Using Scintillation Detectors

Authors: Waleed K. Saib, Abdulsalam M. Alhawsawi, Essam Banoqitah

Abstract:

The idea of this research is to design a radiation spectrometer using an LSO scintillation detector coupled to a C-series SiPM (silicon photomultiplier). The device detects gamma and X-ray radiation and is also designed to estimate the activity of source contamination. The SiPM detects light in the visible range above the threshold and reads it as counts. Three gamma sources were used for the experiments, Cs-137, Am-241 and Co-60, with various activities, in four experiments: operating the SiPM as a spectrometer, energy resolution, pile-up settings and efficiency. The SiPM is connected to an MCA to perform as a spectrometer, with cerium-doped lutetium silicate (Lu₂SiO₅), light yield 26000 photons/MeV, coupled to the SiPM. As a result, all the main spectral features of Cs-137, Am-241 and Co-60 are identified in the MCA. The experiment shows how photon energy and probability of interaction are inversely related: total attenuation decreases as photon energy increases. An analytical calculation was made to obtain the FWHM resolution for each gamma source: the FWHM resolution for Am-241 (59 keV) is 28.75%, for Cs-137 (662 keV) 7.85%, for Co-60 (1173 keV) 4.46% and for Co-60 (1332 keV) 3.70%. Moreover, the experiment shows that the dead time and the number of counts decreased when pile-up rejection was disabled, and the FWHM decreased when pile-up rejection was enabled. The efficiencies were calculated at four different distances from the detector (2, 4, 8 and 16 cm); the detection efficiency was observed to decline exponentially with increasing distance from the detector face. Conclusively, the SiPM board operated with an LSO scintillator crystal as a spectrometer, and the SiPM energy resolution for the three gamma sources compares decently with other PMTs.
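
The analytical calculation mentioned is simply R = FWHM/E; the short sketch below backs out the absolute FWHM implied by the quoted percentages.

```python
# Energy resolution R = FWHM / E; back out the absolute FWHM (keV)
# implied by the percentages quoted in the abstract.
peaks = {"Am-241": (59.0, 28.75), "Cs-137": (662.0, 7.85),
         "Co-60 #1": (1173.0, 4.46), "Co-60 #2": (1332.0, 3.70)}
for src, (E, R_pct) in peaks.items():
    print(f"{src:9s}: E = {E:6.0f} keV  ->  FWHM ~ {E * R_pct / 100:5.1f} keV")
```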

Keywords: PMT, radiation, radiation detection, scintillation detectors, silicon photomultiplier, spectrometer

Procedia PDF Downloads 129
1785 Interaction of Metals with Non-Conventional Solvents

Authors: Evgeny E. Tereshatov, C. M. Folden

Abstract:

Ionic liquids and deep eutectic mixtures represent so-called non-conventional solvents. The former, composed of discrete ions, are salts with melting temperatures below 100°C. The latter, consisting of hydrogen bond donors and acceptors, are mixtures of at least two compounds, resulting in a melting temperature depression in comparison with those of the individual moieties. These systems can also be water-immiscible, which makes them applicable to metal extraction. This work covers the interactions of In, Tl, Ir, and Rh in hydrochloric acid media with eutectic mixtures, and of Er, Ir, and At in the gas phase with chemically modified α-detectors. The purpose is to study chemical systems based on non-conventional solvents in terms of their interaction with metals. Once promising systems are found, the next step is to modify the surface of the α-detectors used in online element production at cyclotrons to give the detectors chemical selectivity. Initially, the metal interactions are studied by means of the liquid-liquid extraction technique; then appropriate molecules are chemisorbed on a surrogate surface to assess the coating quality; finally, a detector is covered with the same molecule, and the metal sorption on such detectors is studied in the online regime. It was found that chemical treatment of the surface can result in 99% coverage with monolayer formation. This surface is chemically active and can adsorb metals from hydrochloric acid solutions. Similarly, a detector surface was modified and tested during cyclotron-based experiments. Thus, a procedure for detector functionalization has been developed, which opens an interesting opportunity for studying the chemisorption of elements that have no stable isotopes.

Keywords: mechanism, radioisotopes, solvent extraction, gas phase sorption

Procedia PDF Downloads 78
1784 The MCNP Simulation of Prompt Gamma-Ray Neutron Activation Analysis at TRR-1/M1

Authors: S. Sangaroon, W. Ratanatongchai, S. Khaweerat, R. Picha, J. Channuie

Abstract:

A prompt gamma-ray neutron activation analysis (PGNAA) system was constructed and installed at a 6-inch-diameter neutron beam port of the Thai Research Reactor-1/Modification 1 (TRR-1/M1) in 1989, designed for a reactor operating power of 1.2 MW and intended for elemental and isotopic analysis. In 2016, the PGNAA facility will be upgraded to reduce the leakage and background of neutrons and gamma radiation at the sample and detector positions. In this work, the design conditions of the facility are studied using the Monte Carlo method with the MCNP5 computer code. Conditions with different modification materials, thicknesses and structures of the PGNAA facility, including the gamma collimator and the radiation shields of the detector, are simulated, and the optimal structural parameters, giving significantly improved facility performance, are obtained.

Keywords: MCNP simulation, PGNAA, Thai research reactor (TRR-1/M1), radiation shielding

Procedia PDF Downloads 349
1783 Thinned Elliptical Cylindrical Antenna Array Synthesis Using Particle Swarm Optimization

Authors: Rajesh Bera, Durbadal Mandal, Rajib Kar, Sakti P. Ghoshal

Abstract:

This paper describes optimal thinning of an Elliptical Cylindrical Array (ECA) of uniformly excited isotropic antennas which can generate a directive beam with minimum relative Side Lobe Level (SLL). The Particle Swarm Optimization (PSO) method, which represents a new approach to optimization problems in electromagnetics, is used in the optimization process to determine the optimal set of 'ON-OFF' elements that provides a radiation pattern with maximum SLL reduction. Optimization is done without prefixing the value of the First Null Beamwidth (FNBW), and the variation of SLL with element spacing of the thinned array is also reported. Simulation results show that the number of array elements can be reduced by more than 50% of the total number of elements in the array, with a simultaneous reduction in SLL to less than -27 dB.
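
A toy version of this thinning problem is sketched below for a uniformly spaced linear array rather than the elliptical cylindrical geometry, with a thresholded continuous PSO standing in for the ON-OFF optimization; the element count, pattern sampling and PSO settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 40                                    # elements, half-wavelength spacing
u = np.linspace(-1.0, 1.0, 2001)          # u = cos(theta)
phase = np.exp(1j * np.pi * np.outer(u, np.arange(N)))

def sll_db(on):
    """Relative side lobe level (dB) of an ON/OFF element mask."""
    af = np.abs(phase @ on)
    af /= af.max()
    main = int(np.argmax(af))
    left = main
    while left > 0 and af[left - 1] < af[left]:       # walk to first null
        left -= 1
    right = main
    while right < af.size - 1 and af[right + 1] < af[right]:
        right += 1
    side = np.concatenate([af[:left], af[right + 1:]])
    return 20 * np.log10(side.max())

def fitness(xi):
    on = (xi > 0.5).astype(float)         # threshold decodes the ON/OFF state
    if on.sum() < N / 3:                  # keep a minimum number of elements ON
        return 0.0
    return sll_db(on)

P, iters = 30, 150
x = rng.random((P, N))
v = np.zeros_like(x)
pb, pbf = x.copy(), np.array([fitness(p) for p in x])
gb = pb[np.argmin(pbf)].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, P, N))
    v = 0.7 * v + 1.5 * r1 * (pb - x) + 1.5 * r2 * (gb - x)
    x = np.clip(x + v, 0.0, 1.0)
    f = np.array([fitness(p) for p in x])
    better = f < pbf
    pb[better], pbf[better] = x[better], f[better]
    gb = pb[np.argmin(pbf)].copy()

print(f"{int((gb > 0.5).sum())}/{N} elements ON, SLL = {fitness(gb):.2f} dB")
```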

Keywords: thinned array, particle swarm optimization, elliptical cylindrical array, side lobe level

Procedia PDF Downloads 414
1782 Nonlinear Free Surface Flow Simulations Using Smoothed Particle Hydrodynamics

Authors: Abdelraheem M. Aly, Minh Tuan Nguyen, Sang-Wook Lee

Abstract:

The incompressible smoothed particle hydrodynamics (ISPH) method is used to simulate impact free-surface flows. In ISPH, the pressure is evaluated by solving a pressure Poisson equation using a semi-implicit algorithm based on the projection method. The ISPH method is applied to simulate dam-break flow over an inclined plane with different inclination angles, and the effects of the inclination angle on the wave-front velocity and the pressure distribution are discussed. The water entry impact of a circular cylinder in a tank has also been simulated using the ISPH method, and the computed pressures on the solid boundaries are studied and compared with experimental results.

Keywords: incompressible smoothed particle hydrodynamics, free surface flow, inclined plane, water entry impact

Procedia PDF Downloads 372
1781 Improved Multi-Objective Particle Swarm Optimization Applied to Design Problem

Authors: Kapse Swapnil, K. Shankar

Abstract:

Aiming to optimize the weight and deflection of a cantilever beam subject to maximum stress and maximum deflection constraints, Multi-Objective Particle Swarm Optimization (MOPSO) with a utopia-point-based local search is implemented. The utopia point is used to guide the search towards the Pareto-optimal set. The elite candidates obtained during the iterations are stored in an archive according to non-dominated sorting, and the archive is truncated based on least crowding distance. Local search is also performed on the elite candidates, and the most diverse particle is selected as the global best. The method is implemented on standard test functions, and it is observed that the improved algorithm gives better convergence and diversity than NSGA-II in fewer iterations. Implementation on a practical structural problem shows that within 5 to 6 iterations the improved algorithm converges with better diversity, as evidenced by improvements in the cantilever beam design averaging 0.78% in weight and 9.28% in deflection compared to NSGA-II.
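
A sketch of the crowding-distance truncation step described above is given below; the archive of (weight, deflection) trade-offs is hypothetical, and the function follows the standard NSGA-II definition, with boundary points always kept.

```python
import numpy as np

def crowding_distance(F):
    """NSGA-II-style crowding distance for objective vectors F
    (n_points x n_objectives); larger means more isolated."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j]
        if span == 0.0:
            span = 1.0
        d[order[0]] = d[order[-1]] = np.inf      # always keep the extremes
        d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d

# Truncate a hypothetical archive of (weight, deflection) trade-offs
# to 5 members, dropping the most crowded points first.
archive = np.array([[2.10, 9.0], [2.25, 7.6], [2.40, 6.9], [2.60, 6.1],
                    [2.90, 5.5], [3.30, 5.1], [3.80, 4.9], [4.50, 4.8]])
keep = np.argsort(-crowding_distance(archive))[:5]
print("kept archive members:\n", archive[np.sort(keep)])
```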

Keywords: Utopia point, multi-objective particle swarm optimization, local search, cantilever beam

Procedia PDF Downloads 483
1780 X-Corner Detection for Camera Calibration Using Saddle Points

Authors: Abdulrahman S. Alturki, John S. Loomis

Abstract:

This paper discusses a corner detection algorithm for camera calibration. Calibration is a necessary step in many computer vision and image processing applications. Robust corner detection for an image of a checkerboard is required to determine intrinsic and extrinsic parameters. In this paper, an algorithm for fully automatic and robust X-corner detection is presented. Checkerboard corner points are automatically found in each image without user interaction or any prior information regarding the number of rows or columns. The approach represents each X-corner with a quadratic fitting function. Using the fact that the X-corners are saddle points, the coefficients in the fitting function are used to identify each corner location. The automation of this process greatly simplifies calibration. Our method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of our method using actual images acquired at different camera locations and orientations.
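
A minimal sketch of the saddle-point refinement idea is given below: fit the quadratic f(x,y) = ax² + by² + cxy + dx + ey + f to an image patch, require the saddle condition 4ab - c² < 0, and solve the zero-gradient system for the sub-pixel corner location. The patch size, coordinate convention and synthetic test pattern are assumptions.

```python
import numpy as np

def refine_x_corner(patch):
    """Fit f(x,y) = ax^2 + by^2 + cxy + dx + ey + f to a grayscale patch
    and return the saddle-point location (sub-pixel), or None if the
    fitted surface is not a saddle."""
    h, w = patch.shape
    y, x = np.mgrid[:h, :w].astype(float)
    x, y = x - w // 2, y - h // 2            # center the coordinates
    A = np.stack([x**2, y**2, x*y, x, y, np.ones_like(x)], -1).reshape(-1, 6)
    a, b, c, d, e, f = np.linalg.lstsq(A, patch.ravel(), rcond=None)[0]
    if 4 * a * b - c**2 >= 0:                # Hessian test: a saddle needs < 0
        return None
    # zero gradient: solve [2a c; c 2b] [xs; ys] = [-d; -e]
    xs, ys = np.linalg.solve([[2 * a, c], [c, 2 * b]], [-d, -e])
    return xs + w // 2, ys + h // 2

# Synthetic X-corner patch: sign(x*y) checker pattern, corner at (7.5, 7.5)
g = np.fromfunction(lambda i, j: np.sign((i - 7.5) * (j - 7.5)), (16, 16))
print(refine_x_corner(g))
```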

Keywords: camera calibration, corner detector, edge detector, saddle points

Procedia PDF Downloads 375
1779 Measurement of Steady Streaming from an Oscillating Bubble Using Particle Image Velocimetry

Authors: Yongseok Kwon, Woowon Jeong, Eunjin Cho, Sangkug Chung, Kyehan Rhee

Abstract:

Steady streaming flow fields induced by a 500 µm bubble oscillating at 12 kHz were measured using microscopic particle image velocimetry (PIV). The accuracy of velocity measurement with the micro-PIV system was checked by comparing the measured velocity fields with theoretical velocity profiles in fully developed laminar flow. The steady streaming velocities were measured in the sagittal plane of a bubble attached to the wall. The measured velocity fields showed an upward jet flow with two symmetric counter-rotating vortices, and the maximum streaming velocity was about 12 mm/s, which is within the range of velocities measured by other researchers. The measured streamlines were compared with the analytic solution and also showed reasonable agreement.

Keywords: oscillating bubble, particle image velocimetry, microstreaming, vortices

Procedia PDF Downloads 380