Search results for: asymmetrical three phase work.
487 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals
Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty
Abstract:
A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge in the efficient implementation of quantum chemistry software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest approach was used to identify promising features via recursive feature elimination; it performed best for learning the sign of each coefficient, but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
Keywords: Quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction.
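As an illustration of the final step described above, the sketch below fuses a small ensemble of two-hidden-layer networks with a median rule. It is a minimal Python sketch, not the authors' implementation: the features, targets and network sizes are placeholders.

```python
# Minimal sketch of median-rule decision fusion over a small ensemble of
# neural networks. The feature matrix X and coefficient targets y are
# synthetic placeholders, not the authors' actual data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))          # hypothetical orbital-derived features
y = rng.normal(size=500)                # hypothetical moment coefficients

ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=seed)
    for seed in range(5)
]
for net in ensemble:
    net.fit(X, y)

# Median rule: fuse the ensemble members' predictions element-wise.
predictions = np.stack([net.predict(X) for net in ensemble])
fused = np.median(predictions, axis=0)
```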
486 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work, we use the discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical simulations obtained from finite element models. The outcomes of this work will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest in the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. When using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The Proper Orthogonal Decomposition transform is a powerful tool for processing such databases. It will be used to study the coupled dynamics of thin-walled basic structures, which are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.
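As a minimal sketch of the decomposition step, the snippet below performs a snapshot POD via the singular value decomposition of a synthetic displacement history; in practice, each column would hold one snapshot exported from the finite element simulation.

```python
# Snapshot POD sketch: the columns of U are time snapshots of the
# displacement field; the left singular vectors are the POD modes.
import numpy as np

n_dof, n_snap = 300, 200
t = np.linspace(0.0, 1.0, n_snap)
x = np.linspace(0.0, 1.0, n_dof)[:, None]
# Synthetic two-mode response standing in for FE output.
U = np.sin(np.pi * x) * np.cos(2 * np.pi * t) \
    + 0.1 * np.sin(3 * np.pi * x) * np.sin(6 * np.pi * t)

U_mean = U.mean(axis=1, keepdims=True)
Phi, s, Vt = np.linalg.svd(U - U_mean, full_matrices=False)

energy = s**2 / np.sum(s**2)            # fraction of "energy" captured per mode
print("first three POD mode energies:", energy[:3])
```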
485 Effect of Initial Conditions on Aerodynamic and Acoustic Characteristics of High Subsonic Jets from Sharp Edged Circular Orifice
Authors: Murugan, K. N., Sharma, S. D.
Abstract:
The present work involves measurements to examine the effects of initial conditions on the aerodynamic and acoustic characteristics of a jet at M = 0.8 by changing the orientation of a sharp-edged orifice plate. A thick plate with a chamfered orifice presented divergent and convergent openings when it was flipped over. The centerline velocity was found to decay more rapidly for the divergent orifice, consistent with the enhanced mass entrainment suggesting a quicker spread of the jet compared with that from the convergent orifice. The mixing layer region elucidated this effect of initial conditions at an early stage: the growth was found to be comparatively more pronounced for the divergent orifice, resulting in a reduced potential core size. The acoustic measurements, carried out in the near-field noise region outside the jet within the potential core length, showed the jet from the divergent orifice to be less noisy. The frequency spectra of the noise signal exhibited that, in the initial region of the comparatively thin mixing layer for the convergent orifice, the peak registered a higher SPL and a higher frequency as well. The noise spectra and the mixing layer development suggested a direct correlation between the coherent structures developing in the initial region of the jet and the noise captured in the surrounding near field.
Keywords: Convergent orifice jet, divergent orifice jet, mass entrainment, mixing layer, near field noise, frequency spectrum, SPL, Strouhal number, wave number, reactive pressure field, propagating pressure field.
484 The Design of a Vehicle Traffic Flow Prediction Model for a Gauteng Freeway Based on an Ensemble of Multi-Layer Perceptron
Authors: Tebogo Emma Makaba, Barnabas Ndlovu Gatsheni
Abstract:
The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway which connects these two cities can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model, which will predict the traffic flow pattern in advance. The model will enable motorists to make appropriate travel decisions ahead of time. The data used was collected by Mikro's Traffic Monitoring (MTM). A multi-layer perceptron (MLP) was used on its own to construct the model, and the MLP was also combined with the bagging ensemble method to train on the data. The cross-validation method was used for evaluating the models. The results obtained from the two techniques were compared using prediction costs, computed from a combination of the loss matrix and the confusion matrix. The models designed show that the status of the traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume and day of the month. The implications of this work are that commuters will be able to spend less time travelling on the route and more time with their families, and the logistics industry will save more than twice what it is currently spending.
Keywords: Bagging ensemble methods, confusion matrix, multi-layer perceptron, vehicle traffic flow.
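A hedged sketch of the modelling setup described above, using scikit-learn: an MLP on its own versus an MLP wrapped in a bagging ensemble, both scored by cross-validation. The data and labels are synthetic placeholders for the MTM records.

```python
# Sketch: MLP alone vs. bagged MLP ensemble, evaluated by cross-validation.
# Feature names follow the abstract (travel time, average speed, traffic
# volume, day of month); the data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))                 # [travel_time, avg_speed, volume, day]
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int) # hypothetical congested / free-flow label

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
bagged = BaggingClassifier(mlp, n_estimators=10, random_state=0)

print("MLP alone :", cross_val_score(mlp, X, y, cv=5).mean())
print("Bagged MLP:", cross_val_score(bagged, X, y, cv=5).mean())
```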
483 Natural Gas Dehydration Process Simulation and Optimization: A Case Study of Khurmala Field in Iraqi Kurdistan Region
Authors: R. Abdulrahman, I. Sebastine
Abstract:
Natural gas is the most popular fossil fuel in the current era and for the future as well. Natural gas exists in underground reservoirs, so it may contain many non-hydrocarbon components, for instance hydrogen sulfide, nitrogen and water vapor. These impurities are undesirable compounds that cause several technical problems, for example corrosion and environmental pollution, and they should therefore be reduced or removed from the natural gas stream. The Khurmala dome is located southwest of Erbil in the Kurdistan region. The Kurdistan regional government has paid great attention to this dome to provide fuel for the region. However, the Khurmala associated natural gas is currently flared at the field, and there is now a plan to recover and trade this gas, either as feedstock for a power station or for sale on the global market. Laboratory analysis has shown that the Khurmala sour gas contains large quantities of H2S (about 5.3%) and CO2 (about 4.4%). A Khurmala gas sweetening process was already simulated in a previous study using Aspen HYSYS; however, the Khurmala sweet gas still contains some water, about 23 ppm in the sweet gas stream, which should be removed or reduced since water content in natural gas causes several technical problems such as hydrate formation and corrosion. Therefore, this study aims to simulate the prospective Khurmala gas dehydration process using the Aspen HYSYS V7.3 program. The simulation succeeded in reducing the water content to less than 0.1 ppm. In addition, the work achieves process optimization by using several desiccant types, for example TEG and DEG, and it also studies the relationship between absorbent type and its circulation rate and HC losses from the glycol regenerator tower.
Keywords: Aspen HYSYS, process simulation, gas dehydration, process optimization.
482 Formant Tracking Linear Prediction Model using HMMs for Noisy Speech Processing
Authors: Zaineb Ben Messaoud, Dorra Gargouri, Saida Zribi, Ahmed Ben Hamida
Abstract:
This paper presents a formant-tracking linear prediction (FTLP) model for speech processing in noise. The main focus of this work is the detection of formant trajectories based on Hidden Markov Models (HMM), for improved formant estimation in noise. The approach proposed in this paper provides a systematic framework for modelling and utilizing a time sequence of peaks which satisfies continuity constraints on the parameters; the peaks themselves are modelled by the LP parameters. The formant-tracking LP model estimation is composed of three stages: (1) a pre-cleaning multi-band spectral subtraction stage to reduce the effect of residual noise on formants; (2) an estimation stage where an initial estimate of the LP model of speech for each frame is obtained; (3) a formant classification using probability models of formants and Viterbi decoders. The evaluation results for the estimation of the formant-tracking LP model, tested against a Gaussian white noise background, demonstrate that the proposed combination of the initial noise reduction stage with formant tracking and variable-order LPC analysis results in a significant reduction in errors and distortions. The performance was evaluated with noisy natural vowels extracted from French and English vocabulary speech signals at an SNR of 10 dB. In each case, the estimated formants are compared to reference formants.
Keywords: Formant estimation, HMM, multi-band spectral subtraction, variable-order LPC coding, white Gaussian noise.
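The pre-cleaning stage lends itself to a short illustration. The sketch below shows plain magnitude spectral subtraction on a single frame; it is an assumption-laden simplification of the multi-band variant used in the paper, with a synthetic signal and a flat noise estimate.

```python
# Single-band magnitude spectral subtraction sketch: subtract an estimated
# noise magnitude spectrum, floor the result, and resynthesize with the
# noisy phase. Frame length and noise estimate are placeholders.
import numpy as np

def spectral_subtract(frame, noise_mag, over_sub=1.0, floor=0.01):
    """Subtract an estimated noise magnitude spectrum from one frame."""
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    cleaned = np.maximum(mag - over_sub * noise_mag, floor * mag)
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))

rng = np.random.default_rng(2)
frame = np.sin(2 * np.pi * 0.05 * np.arange(256)) + 0.3 * rng.normal(size=256)
noise_mag = 0.3 * np.sqrt(256 / 2) * np.ones(129)   # rough flat white-noise estimate
clean = spectral_subtract(frame, noise_mag)
```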
481 The MUST ADS Concept
Authors: J-B. Clavel, N. Thiollière, B. Mouginot
Abstract:
The presented work is motivated by a French law regarding nuclear waste management. A new conceptual Accelerator Driven System (ADS) designed for minor actinide (MA) transmutation has been assessed by numerical simulation. The MUltiple Spallation Target (MUST) ADS combines high thermal power (up to 1.4 GWth) and high specific power. A 30 mA, 1 GeV proton beam is divided into three secondary beams transmitted onto three liquid lead-bismuth spallation targets. Neutronic and thermal-hydraulic simulations have been performed with the code MURE, based on the Monte Carlo transport code MCNPX. A methodology has been developed to define the characteristics of the MUST ADS concept according to a specific transmutation scenario. The reference scenario is based on an MA flux (neptunium, americium and curium) provided by European Pressurized Reactors (EPR), and a plutonium multi-reprocessing strategy is accounted for. The MUST ADS reference concept is a sodium-cooled fast reactor. The MA fuel at equilibrium is mixed with an MgO inert matrix to limit the core reactivity and improve the fuel thermal conductivity. The fuel is irradiated over five years; five years of cooling and two years for fuel fabrication are taken into account. The MUST ADS reference concept burns about 50% of the initial MA inventory during a complete cycle. In terms of mass, up to 570 kg/year is transmuted in one unit. The methodology to design the MUST ADS and to calculate the fuel composition at equilibrium is precisely described in the paper. A detailed fuel evolution analysis is performed, and the reference scenario is compared to a scenario where only americium transmutation is performed.
Keywords: Accelerator Driven System, double strata scenario, minor actinides, MUST, transmutation.
480 Near-Infrared Hyperspectral Imaging Spectroscopy to Detect Microplastics and Pieces of Plastic in Almond Flour
Authors: H. Apaza, L. Chévez, H. Loro
Abstract:
Plastic and microplastic pollution in the human food chain is a serious problem for human health that requires more elaborate techniques able to identify their presence in different kinds of food. Hyperspectral imaging is an optical technique that can detect the presence of different elements in an image and can be used to detect plastics and microplastics in a scene. To do this, statistical techniques are required that need to be evaluated and compared in order to find the most efficient ones. In this work, two problems related to the presence of plastics are addressed: the first is to detect and identify pieces of plastic immersed in almond seeds, and the second is to detect and quantify microplastic in almond flour. To do this, we analyse hyperspectral images taken in the range of 900 to 1700 nm using four hyperspectral unmixing techniques: least squares unmixing (LSU), non-negatively constrained least squares unmixing (NCLSU), fully constrained least squares unmixing (FCLSU), and scaled constrained least squares unmixing (SCLSU). The NCLSU, FCLSU and SCLSU techniques manage to find the region where the plastic is located and also to quantify the amount of microplastic contained in the almond flour. The SCLSU technique estimated a 13.03% abundance of microplastics and 86.97% of almond flour, compared to the 16.66% microplastics and 83.33% almond flour prepared for the experiment. The results show the feasibility of applying near-infrared hyperspectral image analysis for the detection of plastic contaminants in food.
Keywords: Food, plastic, microplastic, NIR hyperspectral imaging, unmixing.
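As an illustration of the constrained unmixing idea, the sketch below solves the non-negatively constrained case (NCLSU) for one pixel with SciPy's NNLS solver; FCLSU would additionally enforce that the abundances sum to one. The endmember spectra are synthetic stand-ins for laboratory reference spectra.

```python
# NCLSU sketch for one pixel spectrum: solve min ||E a - x|| with a >= 0.
# The endmember spectra E (almond flour, plastic) are hypothetical Gaussians
# standing in for measured references in the 900-1700 nm range.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(900, 1700, 120)
flour = np.exp(-((wavelengths - 1200) / 250) ** 2)     # hypothetical endmember
plastic = np.exp(-((wavelengths - 1400) / 150) ** 2)   # hypothetical endmember
E = np.column_stack([flour, plastic])

x = 0.87 * flour + 0.13 * plastic                      # mixed pixel spectrum
abundances, residual = nnls(E, x)
print("estimated abundances (flour, plastic):", abundances)
```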
479 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation
Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu
Abstract:
This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to regulate the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is derived from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller shows satisfactory performance, controlling and tracking the load power effectively and smoothly compared to the PSO-PID control technique. This study can benefit the design of supervisory controllers for control applications in nuclear engineering research.
Keywords: machine learning, neural network, pressurized water reactor, supervisory controller
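The PSO-based tuning lends itself to a compact sketch. Below, particles encode (Kp, Ki, Kd) and the fitness is the integral absolute error (IAE) of a step response; the first-order plant is a deliberately simple stand-in for the PWR core transfer function, and all constants are assumptions.

```python
# PSO tuning of PID gains against the IAE of a unit-step response.
# The plant dy/dt = -y + u is a placeholder, not the paper's core model.
import numpy as np

def iae(gains, dt=0.01, T=10.0):
    """Integral absolute error of a unit-step response under PID control."""
    kp, ki, kd = gains
    y = integ = prev_err = err_sum = 0.0
    for _ in range(int(T / dt)):
        err = 1.0 - y                        # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                   # stand-in plant: dy/dt = -y + u
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e9                       # penalize unstable gain sets
        prev_err = err
        err_sum += abs(err) * dt
    return err_sum

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 5.0, size=(20, 3))    # 20 particles encoding (Kp, Ki, Kd)
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([iae(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(30):                          # standard global-best PSO update
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    f = np.array([iae(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("PSO-tuned (Kp, Ki, Kd):", gbest, "IAE:", pbest_f.min())
```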
478 Digital Automatic Gain Control Integrated on WLAN Platform
Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel
Abstract:
In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compatible with the IEEE 802.11a/g standards. These standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver that we used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). The control over those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC is used to automatically control the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate) and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is performed in a few steps: measuring the power levels of baseband samples of an RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting a gain factor from the accumulation, and applying the adjusted gain factor to the baseband values. Based on measurements of the RSSI signal's dependence on input power, we concluded that this digital AGC can be implemented by applying a simple linearization of the RSSI. This solution is very simple but also effective, and it reduces the complexity and power consumption of the DAGC. This DAGC was implemented and tested both in FPGA and in ASIC as a part of our WLAN baseband processor. Finally, we integrated this circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips designed by us.
Keywords: WLAN, AGC, RSSI, baseband processor.
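The four DAGC steps listed above can be illustrated with a short loop. The sketch below is a floating-point model, not the fixed-point ASIC block: the set point, loop gain and block size are placeholder values.

```python
# Illustrative DAGC loop: measure block power, accumulate the error
# against a set point, scale the accumulation into a gain, apply it.
import numpy as np

SET_POINT_DB = 0.0        # desired average baseband power (dB, hypothetical)
LOOP_GAIN = 0.1           # scaling of the accumulated error (hypothetical)

def dagc(samples, block=64):
    gain_db, acc, out = 0.0, 0.0, []
    for i in range(0, len(samples) - block + 1, block):
        x = samples[i:i + block] * 10 ** (gain_db / 20)   # apply current gain
        power_db = 10 * np.log10(np.mean(np.abs(x) ** 2)) # measure block power
        acc += SET_POINT_DB - power_db                    # accumulate the error
        gain_db = LOOP_GAIN * acc                         # adjust gain factor
        out.append(x)
    return np.concatenate(out), gain_db

rng = np.random.default_rng(4)
weak_signal = 0.05 * (rng.normal(size=4096) + 1j * rng.normal(size=4096))
_, final_gain = dagc(weak_signal)
print("converged gain (dB):", final_gain)
```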
477 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, the crashworthiness of an automotive body structure has become something that can be improved from the beginning of the design stage, thanks to the development of specific optimization tools. It is well known that finite element codes can help the designer investigate the crash performance of structures under dynamic impact; therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, optimization methods that are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method appears excellent in accuracy, robustness and efficiency compared to the others when applied to crashworthiness optimization. Such a meta-model was therefore used in this work in order to improve the structural optimization of a bumper for a racing car in composite material subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters subjected to design constraints are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: Composite material, crashworthiness, finite element analysis, optimization.
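A hedged sketch of the meta-model step: a Kriging (Gaussian process) surrogate is fitted to a small DOE and queried for the design maximizing the specific energy absorption (SEA). The analytic "crash response" and the two design variables are invented stand-ins for LS-DYNA runs.

```python
# Kriging surrogate over a small DOE, then a grid search on the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fe_crash_sea(thickness, taper):           # placeholder for an FE run
    return -(thickness - 2.0) ** 2 - (taper - 10.0) ** 2 / 25.0 + 30.0

rng = np.random.default_rng(5)
doe = np.column_stack([rng.uniform(1.0, 3.0, 12), rng.uniform(5.0, 15.0, 12)])
sea = np.array([fe_crash_sea(t, a) for t, a in doe])

kriging = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 5.0]),
                                   normalize_y=True).fit(doe, sea)

grid = np.column_stack([g.ravel() for g in np.meshgrid(
    np.linspace(1.0, 3.0, 50), np.linspace(5.0, 15.0, 50))])
best = grid[kriging.predict(grid).argmax()]
print("predicted optimum (thickness, taper):", best)
```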
476 Quality Assessment of Hollow Sandcrete Blocks in Minna, Nigeria
Authors: M. Abdullahi, S. Sadiku, Bashar S. Mohammed, J. I. Aguwa
Abstract:
The properties of hollow sandcrete blocks produced in Minna, Nigeria are presented. Sandcrete block is made of cement, water and sand bound together in certain mix proportions. For the purpose of this work, fifty (50) commercial sandcrete block industries were visited in Minna, Nigeria to obtain block samples and the aggregates used for manufacture, and to take inventory of the mix composition and the production process. Sieve analysis tests were conducted on the soil samples from the various block industries to ascertain their suitability for block making, and the mix ratios were also investigated. Five (5) nine-inch (225 mm) blocks were obtained from each block industry and tested for dimensional compliance and compressive strength. The results of the soil tests show that the grading falls within the limits for natural aggregates and can easily be used to obtain a workable mix. Physical examination of the block sizes shows slight deviation from the standard requirement in NIS 87:2000. Compressive strengths of the hollow sandcrete blocks in the range of 0.12 N/mm2 to 0.54 N/mm2 were obtained, which is below the recommended value of 3.45 N/mm2 for load-bearing hollow sandcrete blocks. This indicates that these blocks are below the standard for load-bearing sandcrete blocks and cannot be used as load-bearing walling units. The mix compositions also indicated low cement content, resulting in low compressive strength. Most of the commercial block industries visited do not take curing very seriously: water was only sprinkled once or twice before the blocks were stacked and made readily available for sale. It is recommended that a mix ratio of 1:4 to 1:6 be used for the production of sandcrete blocks and that proper curing practice be adhered to; blocks should also be cured for 14 days before making them available to consumers.
Keywords: Compressive strength, dimensions, mix proportions, sandcrete blocks.
475 Stress Analysis of Non-persistent Rock Joints under Biaxial Loading
Authors: Omer S. Mughieda
Abstract:
A two-dimensional finite element model was created in this work to investigate the stress distribution within rock-like samples with offset open non-persistent joints under biaxial loading. The results of this study explain the fracture mechanisms observed in tests on rock-like material with open non-persistent offset joints [1]. The finite element code SAP2000 was used to study the stress distribution within the specimens. Four-noded isoparametric plane strain elements with two degrees of freedom per node, and three-noded constant strain triangular elements with two degrees of freedom per node, were used in the present study. The results of the present study explain the formation of wing cracks at the tips of the joints for low confining stress, as well as the formation of wing cracks at the middle of the joint for higher confining stress. The high shear stresses found in the numerical study at the tips of the joints explain the formation of secondary cracks at the joint tips in the experimental study. The results coincide with the experimental observations, which showed that for a bridge inclination of 0° the coalescence occurred due to shear failure, for a bridge inclination of 90° the coalescence occurred due to tensile failure, while for the other bridge inclinations coalescence occurred due to mixed tensile and shear failure.
Keywords: Finite element, open offset rock joint, SAP2000, biaxial loading.
474 Method of Estimating Absolute Entropy of Municipal Solid Waste
Authors: Francis Chinweuba Eboh, Peter Ahlström, Tobias Richards
Abstract:
Entropy, as an outcome of the second law of thermodynamics, measures the level of irreversibility associated with any process. The identification and reduction of irreversibility in the energy conversion process helps to improve the efficiency of the system. The entropy of pure substances, known as absolute entropy, is determined at an absolute reference point and is useful in the thermodynamic analysis of chemical reactions; however, municipal solid waste (MSW) is a structurally complicated material with unknown absolute entropy. In this work, an empirical model to calculate the absolute entropy of MSW based on the content of carbon, hydrogen, oxygen, nitrogen, sulphur and chlorine on a dry ash free basis (daf) is presented. The proposed model was derived by statistical analysis from 117 relevant organic substances with known standard entropies, which represent the main constituents of MSW. The substances were divided into different waste fractions, namely food, wood/paper, textiles/rubber and plastics waste, and the standard entropies of each waste fraction and of the complete mixture were calculated. The correlation for the standard entropy of the complete waste mixture was found to be s°msw = 0.0101C + 0.0630H + 0.0106O + 0.0108N + 0.0155S + 0.0084Cl (kJ/(K·kg)), and the present correlation can be used for estimating the absolute entropy of MSW from the elemental composition of the fuel within the ranges 10.3% ≤ C ≤ 95.1%, 0.0% ≤ H ≤ 14.3%, 0.0% ≤ O ≤ 71.1%, 0.0% ≤ N ≤ 66.7%, 0.0% ≤ S ≤ 42.1%, 0.0% ≤ Cl ≤ 89.7%. The model is also applicable for the efficient modelling of a combustion system in a waste-to-energy plant.
Keywords: Absolute entropy, irreversibility, municipal solid waste, waste-to-energy.
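The correlation quoted above translates directly into a small function with the stated validity ranges checked; the example composition at the end is illustrative, not a measured waste sample.

```python
# Direct implementation of the correlation above. Inputs are elemental
# mass percentages on a dry ash free basis; the result is in kJ/(K.kg).
def entropy_msw(C, H, O, N, S, Cl):
    ranges = {"C": (C, 10.3, 95.1), "H": (H, 0.0, 14.3), "O": (O, 0.0, 71.1),
              "N": (N, 0.0, 66.7), "S": (S, 0.0, 42.1), "Cl": (Cl, 0.0, 89.7)}
    for name, (val, lo, hi) in ranges.items():
        if not lo <= val <= hi:
            raise ValueError(f"{name} = {val}% outside correlation range [{lo}, {hi}]%")
    return (0.0101 * C + 0.0630 * H + 0.0106 * O
            + 0.0108 * N + 0.0155 * S + 0.0084 * Cl)

# Illustrative daf composition, not a measured sample.
print(entropy_msw(C=50.0, H=7.0, O=40.0, N=1.5, S=0.5, Cl=1.0), "kJ/(K.kg)")
```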
473 Implementation of Sprite Animation for Multimedia Application
Authors: Ms. Yi Mon Thant
Abstract:
Animation is simply defined as the sequencing of a series of static images to generate the illusion of movement. Most people believe that the actual drawing or creation of the individual images is the animation, when in actuality it is the arrangement of those static images that conveys the motion. It is often assumed that to become an animator one needs the ability to quickly design masterpiece after masterpiece; although some semblance of artistic skill is a necessity for the job, the real key to becoming a great animator is the comprehension of timing. This paper uses a combination of sprite animation, frame animation, and some other techniques to cause a group of multi-colored static images to slither around in a bounded area. In addition to slithering, the images also change the color of different parts of their body, much like the real-world creatures that have the ability to change the colors on their bodies. The work was implemented using Java 2 Standard Edition (J2SE). It is both time-consuming and expensive to create animations, regardless of whether they are created by hand or by using motion-capture equipment; if animators could reuse old animations and even blend different animations together, a lot of work would be saved in the process. The main objective of this paper is to examine a method for blending several animations together in real time. The paper presents and analyses a solution using Weighted Skeleton Animation (WSA), resulting in limited CPU time and memory waste as well as saving time for the animators. The idea presented is described in detail and implemented. Text animation, vertex animation, sprite part animation and whole sprite animation were tested, and the resolution, smoothness and movement of the animated images are evaluated from parameters obtained in the experimental work.
Keywords: Weighted Skeleton Animation.
472 Prediction of Product Size Distribution of a Vertical Stirred Mill Based on Breakage Kinetics
Authors: C. R. Danielle, S. Erik, T. Patrick, M. Hugh
Abstract:
In the last decade there has been an increase in demand for fine grinding due to the depletion of coarse-grained orebodies and an increase in the processing of finely disseminated minerals and complex orebodies. These ores have provided new challenges in concentrator design because fine and ultra-fine grinding is required to achieve acceptable recovery rates. Therefore, the correct design of a grinding circuit is important for minimizing unit costs and increasing product quality. The use of ball mills for grinding in fine size ranges is inefficient, and vertical stirred grinding mills are therefore becoming increasingly popular in the mineral processing industry due to their well-known high energy efficiency. This work presents a methodology, proposed as a hypothesis, to predict the product size distribution of a vertical stirred mill using a Bond ball mill. The Population Balance Model (PBM) was used to empirically analyze the performance of a vertical mill and a Bond ball mill. The breakage parameters obtained for both grinding mills are compared to determine the possibility of predicting the product size distribution of a vertical mill based on the results obtained from the Bond ball mill. The biggest advantage of this methodology is that most mineral processing laboratories already have a Bond ball mill to perform the tests suggested in this study. Preliminary results show the possibility of predicting the performance of a laboratory vertical stirred mill using a Bond ball mill.
Keywords: Bond ball mill, population balance model, product size distribution, vertical stirred mill.
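As a sketch of the breakage-kinetics idea behind the PBM, the snippet below time-steps the batch grinding equation dm_i/dt = -S_i m_i + Σ_{j&lt;i} b_ij S_j m_j for a few size classes; the selection and breakage parameters are illustrative, not fitted mill data.

```python
# Batch-grinding population balance sketch with explicit Euler stepping.
import numpy as np

n = 6                                       # size classes, coarse to fine
S = 0.8 * np.geomspace(1.0, 0.05, n)        # selection function (1/min), hypothetical
S[-1] = 0.0                                 # finest class does not break further
b = np.zeros((n, n))                        # breakage distribution b[i, j]: j breaks into i
for j in range(n - 1):
    frac = np.geomspace(1.0, 0.2, n - 1 - j)
    b[j + 1:, j] = frac / frac.sum()        # broken mass from class j reports below j

m = np.zeros(n)
m[0] = 100.0                                # all feed mass in the top size class (%)
dt, t_end = 0.01, 10.0
for _ in range(int(t_end / dt)):
    m = m + dt * (-S * m + b @ (S * m))     # dm/dt = -S m + B S m

print("product size distribution (%):", np.round(m, 2))
```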
471 Turbine Compressor Vibration Analysis and Rotor Movement Evaluation by Shaft Center Line Method (The Case History Related to Main Turbine Compressor of an Olefin Plant in Iran Oil Industries)
Authors: Omid A. Zargar
Abstract:
Vibration monitoring of the most critical equipment, such as main turbines and compressors, always plays an important role in preventive maintenance and management in large industrial plants. There are a number of traditional methods, like monitoring the overall vibration data from a Bently Nevada panel and time wave form (TWF) or fast Fourier transform (FFT) monitoring. Besides these, the shaft centerline monitoring method has developed considerably in recent years. There are a number of arguments both in favor of and against this method among people who work in preventive maintenance and condition monitoring (vibration analysts). In this paper, the basic principles of turbine compressor vibration analysis and rotor movement evaluation by the shaft centerline method are discussed in detail through a case history. This case history is related to the main turbine compressor of an olefin plant in the Iranian oil industry. In addition, some common mistakes that vibration analysts may make during the process are discussed in detail; it is worth knowing that these mistakes may be one of the reasons why this method sometimes seems ineffective. Furthermore, recent patents and innovations in shaft position and movement evaluation are discussed.
Keywords: Shaft centerline position, attitude angle, journal bearing, sleeve bearing, tilting pad, steam turbine, main compressor, multistage compressor, condition monitoring, non-contact probe
470 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector
Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu
Abstract:
In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied to various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured into a muonic atomic orbit around a nucleus. Because muonic X-rays have a higher energy than electronic X-rays due to the muon mass, they can be measured without being absorbed by the material. Thus, estimating the two-dimensional (2D) elemental distribution of a sample becomes possible using an X-ray imaging detector. In this work, we report a nondestructive imaging experiment using muonic X-rays at the Japan Proton Accelerator Research Complex. The irradiated target consisted of a polypropylene material, and a double-sided silicon strip detector, which was developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at an energy of 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image of the target. This result demonstrates the potential of the nondestructive elemental imaging method based on muonic X-ray measurements. To obtain a higher position resolution for imaging smaller targets, a new detector system will be developed to improve the statistical analysis in further research.
Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis
469 Enhancement of Mechanical Properties for Al-Mg-Si Alloy Using Equal Channel Angular Pressing
Authors: A. Nassef, S. Samy, W. H. El Garaihy
Abstract:
Equal channel angular pressing (ECAP) of a commercial Al-Mg-Si alloy was conducted using two strain rates. The ECAP processing was conducted at room temperature and at 250°C, and route A was adopted up to a total of four passes in the present work. The structural evolution of the aluminum alloy discs was investigated before and after ECAP processing using optical microscopy (OM). Following ECAP, simple compression tests and Vickers hardness measurements were performed. OM micrographs showed that the average grain size of the as-received Al-Mg-Si disc tends to be larger than that of the ECAP-processed discs; moreover, a significant difference in the grain morphologies of the as-received and processed discs was observed. The intensity of deformation was observed via the alignment of the Al-Mg-Si consolidated particles (grains) in the direction of shear, which increased with an increasing number of ECAP passes. Increasing the number of passes up to four resulted in an increase of the grain aspect ratio up to about 5. It was found that the pressing temperature has a significant influence on the microstructure, Hv-values and compressive strength of the processed discs. Hardness measurements demonstrated that one pass increased the Hv-value by 42% compared to that of the as-received alloy, and four passes of ECAP processing resulted in an additional increase in the Hv-value. A similar trend was observed for the yield and compressive strength. The experimental data of the Hv-values demonstrated a lack of any significant dependence on the processing strain rate.
Keywords: Al-Mg-Si alloy, Equal channel angular pressing, Grain refinement, Severe plastic deformation.
468 Application Reliability Method for Concrete Dams
Authors: Mustapha Kamel Mihoubi, Mohamed Essadik Kerkar
Abstract:
Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of works, including when calculating the stability of large structures against a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods, including the methods used in engineering; in our case, level 2 methods are applied via the study of the limit state. The probability of failure is thus estimated by analytical methods of the first-order reliability method (FORM) and second-order reliability method (SORM) type. By way of comparison, a level 3 method was also used, which generates a full analysis of the problem and involves an integration of the probability density function of the random variables, extended to the safety domain, using the Monte Carlo simulation method. Taking into account the change in stress under the normal, exceptional and extreme load combinations acting on the dam, the results obtained provide acceptable failure probability values which largely corroborate the theory: the probability of failure tends to increase with increasing load intensities, causing a significant decrease in strength; the induced shear forces then produce sliding that threatens the reliability of the structure through intolerable values of the probability of failure, especially if uplift increases following a hypothetical failure of the drainage system.
Keywords: Dam, failure, limit-state, Monte Carlo simulation, reliability, probability, simulation, sliding, Taylor.
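The level 3 analysis described above can be sketched as a direct Monte Carlo estimate of the failure probability for a linear sliding limit state g = R - S; for this linear Gaussian case the FORM reliability index is exact and is printed for comparison. All distribution parameters are hypothetical.

```python
# Direct Monte Carlo estimate of P(failure) for g = R - S, with resistance
# R and load effect S as independent normals (illustrative parameters).
import numpy as np

rng = np.random.default_rng(6)
N = 1_000_000
R = rng.normal(loc=10.0, scale=1.5, size=N)   # sliding resistance (MN, hypothetical)
S = rng.normal(loc=6.0, scale=1.0, size=N)    # shear demand (MN, hypothetical)

g = R - S                                     # limit state: failure when g < 0
pf = np.mean(g < 0.0)
beta = (10.0 - 6.0) / np.hypot(1.5, 1.0)      # exact reliability index here
print(f"Monte Carlo Pf = {pf:.2e}, analytic beta = {beta:.2f}")
```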
467 Simulation of Lean Principles Impact in a Multi-Product Supply Chain
Authors: M. Rossini, A. Portioli Studacher
Abstract:
Market competition is moving from the single firm to the whole supply chain because of increasing competition and a growing need for operational efficiency and customer orientation. Supply chain management allows companies to look beyond their organizational boundaries to develop and leverage the resources and capabilities of their supply chain partners. This creates competitive advantages in the marketplace, and because of this SCM has acquired strategic importance. The lean approach is a management strategy that focuses on reducing every type of waste present in an organization, and it is becoming more and more popular among supply chain managers. However, supply chain applications of the lean approach are not frequent; in particular, it is not well studied what the impacts of lean principles are in a supply chain context, and in the literature there are only a few studies aimed at understanding the qualitative impact of the lean approach in supply chains. Therefore, the goal of this research work is to study the impacts of lean principles implementation along a supply chain. To achieve this, a simulation model of a three-echelon multi-product supply chain was built. A Kanban system (with several priority policies) and setup time reduction degrees are implemented in the lean-configured supply chain to apply the pull and lot-size reduction principles, respectively. To evaluate the benefits of the lean approach, the lean supply chain is compared with an EOQ-configured supply chain. The simulation results show that the Kanban system and setup-time reduction improve the inventory stock level, and that logistics efforts are affected by the degree of lean implementation. The paper concludes by describing the performance of the lean supply chain in different contexts.
Keywords: Inventory policy, Kanban, lean supply chain, simulation study, supply chain management, planning.
466 The Effects of TiO2 Nanoparticles on Tumor Cell Colonies: Fractal Dimension and Morphological Properties
Authors: T. Sungkaworn, W. Triampo, P. Nalakarn, D. Triampo, I. M. Tang, Y. Lenbury, P. Picha
Abstract:
Semiconductor nanomaterials like TiO2 nanoparticles (TiO2-NPs), approximately less than 100 nm in diameter, have become a new generation of advanced materials due to their novel and interesting optical, dielectric and photo-catalytic properties. Despite the increasing use of NPs in commerce, to date few studies have investigated the toxicological and environmental effects of NPs. Motivated by the importance of TiO2-NPs, which may contribute to the cancer research field especially from the treatment perspective, together with the fractal analysis technique, we have investigated the effect of TiO2-NPs on colony morphology in the dark condition using the fractal dimension as a key morphological characterization parameter. The aim of this work is mainly to investigate the cytotoxic effects of TiO2-NPs in the dark on the growth of human cervical carcinoma (HeLa) cell colonies from the morphological aspect. The in vitro studies were carried out together with image processing techniques and fractal analysis. It was found that the treated colonies were abnormal in shape and size, and the control colonies appeared to be larger than those of the treated group. The mean Df ± SEM of the colonies in untreated cultures was 1.085 ± 0.019 (N = 25), while that of the cultures treated with TiO2-NPs was 1.287 ± 0.045. The circularity of the control group (0.401 ± 0.071) is higher than that of the treated group (0.103 ± 0.042), and the same tendency was found in the diameters, which are 1161.30 ± 219.56 μm and 852.28 ± 206.50 μm for the control and treated groups respectively. Possible explanations of the results are discussed, though more work needs to be done on the mechanism aspects. Finally, our results indicate that the fractal dimension can serve as a useful feature, by itself or in conjunction with other shape features, in the classification of cancer colonies.
Keywords: Tumor growth, cell colonies, TiO2, nanoparticles, fractal, morphology, aggregation.
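As an illustration of the key parameter, the sketch below estimates a box-counting fractal dimension from a binarized image by fitting log N(s) against log(1/s); the random-walk "colony" is synthetic, standing in for a thresholded colony photograph.

```python
# Box-counting dimension sketch: count occupied boxes at several scales
# and fit the slope of log N(s) vs log(1/s).
import numpy as np

rng = np.random.default_rng(7)
img = np.zeros((256, 256), dtype=bool)
x = y = 128
for _ in range(20000):                        # crude random aggregate as a test image
    x = int(np.clip(x + rng.integers(-1, 2), 0, 255))
    y = int(np.clip(y + rng.integers(-1, 2), 0, 255))
    img[x, y] = True

sizes, counts = [2, 4, 8, 16, 32], []
for s in sizes:
    blocks = img.reshape(256 // s, s, 256 // s, s).any(axis=(1, 3))
    counts.append(blocks.sum())               # boxes touching the "colony"

Df = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
print("box-counting dimension:", round(Df, 3))
```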
465 Numerical Simulation of the Dynamic Behavior of a LaNi5 Water Pumping System
Authors: Miled Amel, Ben Maad Hatem, Askri Faouzi, Ben Nasrallah Sassi
Abstract:
A metal hydride water pumping system uses hydrogen as the working fluid to pump water at low head and high discharge. The principal operation of this pump is based on the desorption of hydrogen at high pressure and its absorption at low pressure by a metal hydride. This work is devoted to studying the dynamic behavior of a metal hydride pump using an unsteady model and LaNi5 as the hydriding alloy. The study shows that with the MHP it is possible to pump 340 l/kg-cycle of water in 15,000 s using 1 kg of LaNi5 at a desorption temperature of 360 K, a pumping head of 5 m and a desorption gear ratio of 33; it also reveals that the error given by the steady model using LaNi5 is about 2%. A dimensional mathematical model and the governing equations of the pump are presented to predict the coupled heat and mass transfer within the MHP. A numerical simulation is then carried out to present the time evolution of the specific water discharge and to test the effect of different parameters (desorption temperature, absorption temperature, desorption gear ratio) on the performance of the water pumping system (specific water discharge, pumping efficiency and pumping time). In addition, a comparison between the results obtained with the steady and unsteady models is performed for different hydride masses. Finally, a geometric configuration of the reactor is simulated to optimize the pumping time.
Keywords: Dynamic behavior, unsteady model, LaNi5, performance of the water pumping system.
464 Characterization of Brewery Wastewater Composition
Authors: Abimbola M. Enitan, Josiah Adeyemo, Sheena Kumari, Feroz M. Swalaha, Faizal Bux
Abstract:
Industries produce millions of cubic meters of effluent every year, and the wastewater produced may be released into the surrounding water bodies, treated on-site or treated at municipal treatment plants. The determination of the organic matter in the wastewater generated is very important to avoid any negative effect on the aquatic ecosystem. The scope of the present work is to assess the physicochemical composition of the wastewater produced by one of the brewery industries in South Africa, in order to estimate the environmental impact of its discharge into the receiving water bodies or the municipal treatment plant. The parameters monitored for the quantitative analysis of the brewery wastewater include biological oxygen demand (BOD5), chemical oxygen demand (COD), total suspended solids, volatile suspended solids, ammonia, total oxidized nitrogen, nitrate, nitrite, phosphorus and alkalinity. On average, the COD concentration of the brewery effluent was 5340.97 mg/l, with pH values of 4.0 to 6.7. The BOD5 and the solids content of the wastewater from the brewery industry were high, which means that the effluent is very rich in organic content and its discharge into water bodies or the municipal treatment plant could cause environmental pollution or damage the treatment plant. In addition, there were variations in the wastewater composition throughout the monitoring period. This might be a result of the different activities that take place during the production process, as well as the effects of peak periods of beer production on water usage.
Keywords: Brewery wastewater, environmental pollution, industrial effluents, physicochemical composition.
463 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)
Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude
Abstract:
A comparative analysis of the properties of melon seed and coconut fruit and of their oil yields was carried out in this work using standard AOAC analytical techniques. The results of the analysis revealed that the moisture contents of the samples studied are 11.15% (melon) and 7.59% (coconut), and the crude lipid contents are 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour having a higher percentage range of oil yield (41.30-52.90%) than coconut (36.25-49.83%). Physical characterization of the extracted oils was also carried out: the values obtained for the refractive index are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). The chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon oil) and 10.050 mg NaOH/g oil (coconut oil), and saponification values of 187.00 mg/KOH (melon oil) and 183.26 mg/KOH (coconut oil). The iodine value of the melon oil is 75.00 mg I2/g, and 81.00 mg I2/g for the coconut oil. The standard statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA), and the same software was used to optimize the leaching process. Both samples gave their highest oil yields at the same optimal conditions: a solute-solvent ratio of 40 g/ml, a leaching time of 2 hours and a leaching temperature of 50°C, giving yields of ≥ 52% (melon seed) and ≥ 48% (coconut). Both samples studied have potential for oil production, with melon seed giving the higher yield.
Keywords: Coconut, melon, optimization, processing.
462 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes
Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono
Abstract:
Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is widely used for LV segmentation, but it suffers from the drawback that the initialization of the shape model may not be sufficiently close to the target, especially when dealing with the abnormal shapes found in disease. In this work, a two-step framework is improved to achieve fast and efficient LV segmentation. First, a robust and efficient detection based on a Hough forest localizes cardiac feature points; such feature points are used to predict the initial fit of the LV shape model. Second, the ASM is applied to further fit the LV shape model to the cardiac ultrasound image. With the robust initialization, the ASM is able to achieve more accurate segmentation. The performance of the proposed method is evaluated on a dataset of 810 cardiac ultrasound images, most of which show abnormal shapes. The proposed method is compared with several combinations of ASM and existing initialization methods. Our experimental results demonstrate that the accuracy of the proposed method's feature point detection for initialization was 40% higher than that of the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops and thus speeds up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes in cardiac diseases, such as left atrial enlargement.
Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle.
461 Hierarchies Based On the Number of Cooperating Systems of Finite Automata on Four-Dimensional Input Tapes
Authors: Makoto Sakamoto, Yasuo Uchida, Makoto Nagatomo, Takao Ito, Tsunehiro Yoshinaga, Satoshi Ikeda, Masahiro Yokomichi, Hiroshi Furutani
Abstract:
In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]; it is a simple mathematical model of computers [9]. Later, M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have been investigating the properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from both the theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19], [21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape where these finite automata work independently (in parallel). Those finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the cell it is scanning at the moment. In this paper, we mainly investigate the accepting powers of cooperating systems of eight- or seven-way four-dimensional finite automata. A seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, down, or in the future, but not in the past, on a four-dimensional input tape.
Keywords: computational complexity, cooperating system, finite automaton, four-dimension, hierarchy, multihead.
460 The DAQ Debugger for iFDAQ of the COMPASS Experiment
Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius
Abstract:
In general, state-of-the-art data acquisition systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to be able to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, with thousands of lines of code, the debugging process is absolutely essential to reveal all software issues; unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating the source of the problem, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information necessary for a deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run; it helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
Keywords: DAQ Debugger, data acquisition system, FPGA, system signals, Qt framework.
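The signal-handling idea can be illustrated in a few lines. The sketch below is a Python analogue, not the iFDAQ's actual C++/Qt implementation: it installs handlers that append a stack-trace report to a file when a fatal signal arrives and otherwise leaves the process untouched.

```python
# Minimal signal-based report hook: on a catchable fatal signal, dump a
# stack trace to a report file and exit. File name and signals are
# placeholder choices.
import faulthandler
import signal
import sys
import traceback

def report(signum, frame):
    with open("daq_debug_report.txt", "a") as f:
        f.write(f"--- caught signal {signum} ---\n")
        traceback.print_stack(frame, file=f)   # where the process was
    sys.exit(1)

faulthandler.enable()                          # tracebacks on SIGSEGV/SIGFPE/...
for sig in (signal.SIGTERM, signal.SIGABRT):   # catchable fatal signals
    signal.signal(sig, report)

print("handlers installed; process continues normally until a signal arrives")
```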
459 Response Surface Methodology Approach to Defining Ultrafiltration of Steepwater from Corn Starch Industry
Authors: Zita I. Šereš, Ljubica P. Dokić, Dragana M. Šoronja Simović, Cecilia Hodur, Zsuzsanna Laszlo, Ivana Nikolić, Nikola Maravić
Abstract:
In this work, the concentration of steepwater from the corn starch industry is monitored using an ultrafiltration membrane. The aim was to examine the conditions of ultrafiltration of steepwater through a 2.5 nm membrane. The parameters varied during ultrafiltration were the transmembrane pressure and the flow rate, while the permeate flux and the dry matter content of the permeate and retentate were the dependent parameters constantly monitored during the process. The ultrafiltration experiments were conducted on samples of steepwater obtained from the starch wet milling plant "Jabuka" Pancevo, using a single-channel membrane of 250 mm length, 6.8 mm inner diameter and 10 mm outer diameter. The membrane is made of α-Al2O3 with a TiO2 layer and was obtained from GEA (Germany). The experiments were carried out at flow rates ranging from 100 to 200 l/h and transmembrane pressures of 1-3 bar. During the steepwater ultrafiltration experiments, the change of the permeate flux, the dry matter content of the permeate and retentate, and the absorbance changes of the permeate and retentate were monitored. The experimental results showed that the maximum flux reaches about 40 l/(m2 h). For the responses obtained from the experiments, a polynomial model of the second degree is established to evaluate and quantify the influence of the variables; the quadratic equation fits the experimental values, with a coefficient of determination for flux of 0.96. The dry matter content of the retentate increased by about 6%, while the dry matter content of the permeate was reduced by about 35-40%; during steepwater ultrafiltration, the permeate contains about 40% less dry matter than the feed.
Keywords: Ultrafiltration, steepwater, starch industry, ceramic membrane.
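The second-degree polynomial model can be sketched as a response-surface fit of flux against transmembrane pressure and flow rate; the data points below are made-up placeholders for the measured fluxes, so the reported R² is illustrative only.

```python
# Second-order response surface: flux as a full quadratic in transmembrane
# pressure (TMP) and flow rate (Q). Data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

TMP = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 2.0, 1.5, 2.5])   # bar
Q = np.array([100, 200, 100, 200, 100, 200, 150, 150, 150])     # l/h
flux = np.array([18, 22, 27, 33, 34, 40, 31, 25, 36])           # l/(m2 h), hypothetical

poly = PolynomialFeatures(degree=2)
X = poly.fit_transform(np.column_stack([TMP, Q]))
model = LinearRegression().fit(X, flux)
print("R^2 =", round(model.score(X, flux), 3))
print("flux at TMP=3 bar, Q=200 l/h:",
      model.predict(poly.transform([[3.0, 200]]))[0])
```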
458 Modification of Electrical and Switching Characteristics of a Non Punch-Through Insulated Gate Bipolar Transistor by Gamma Irradiation
Authors: Hani Baek, Gwang Min Sun, Chansun Shin, Sung Ho Ahn
Abstract:
Fast neutron irradiation using nuclear reactors is an effective method to improve the switching loss and short circuit durability of power semiconductors (insulated gate bipolar transistors (IGBTs), insulated gate transistors (IGTs), etc.). However, not only fast neutrons but also thermal neutrons, epithermal neutrons and gamma rays exist in a nuclear reactor, and the electrical properties of the IGBT may be deteriorated by gamma irradiation. Gamma irradiation damage is known to be caused by the Total Ionizing Dose (TID) effect, Single Event Effects (SEE) and displacement damage. In particular, the TID effect deteriorates electrical properties such as the leakage current and threshold voltage of a power semiconductor. This work confirms the effect of gamma irradiation on the electrical properties of a 600 V NPT-IGBT. Gamma irradiation forms lattice defects in the gate oxide and the Si-SiO2 interface of the IGBT. It was confirmed that these lattice defects act as trap centers and affect the threshold voltage, which is thereby negatively shifted with increasing TID. In addition to the change in carrier mobility, the conductivity modulation decreases in the n-drift region, with the negative consequence that the forward voltage drop decreases. The turn-off delay time of the device before irradiation was 212 ns; those at 2.5, 10, 30, 70 and 100 kRad(Si) were 225, 258, 311, 328 and 350 ns, respectively. The gamma irradiation thus increased the turn-off delay time of the IGBT by approximately 65%, and the switching characteristics deteriorated.
Keywords: NPT-IGBT, gamma irradiation, switching, turn-off delay time, recombination, trap center.