Search results for: large grain size
3032 Effect of Processing Methods on Texture Evolution in AZ31 Mg Alloy Sheet
Authors: Jung-Ho Moon, Tae Kwon Ha
Abstract:
Textures of AZ31 Mg alloy sheets were evaluated using the neutron diffraction method in this study. The AZ31 sheets were fabricated either by conventional casting followed by hot rolling or by strip casting. The effect of warm rolling was investigated using the AZ31 Mg alloy sheet produced by conventional casting. Warm rolling with a 30% thickness reduction per pass was possible without any side cracks at temperatures as low as 200 °C at a roll speed of 30 m/min. The initial microstructure of the conventionally cast specimen was found to be partially recrystallized. Grain refinement was found to occur actively during warm rolling. The (0002), (10-10), (10-11), and (10-12) complete pole figures were measured using the HANARO FCD (Neutron Four Circle Diffractometer), and ODFs were calculated. The major texture of all specimens can be described as an ND//(0001) fiber texture. The texture of the hot-rolled specimen showed the strongest fiber component, while that of the strip-cast sheet was close to a random distribution.
Keywords: Mg alloy, texture, pole figure, ODF, neutron diffraction, warm rolling.
3031 Effect of Atmospheric Pressure on the Flow at the Outlet of a Propellant Nozzle
Authors: R. Haoui
Abstract:
The purpose of this work is to simulate the flow at the exit of the Vulcan 1 engine of the European launcher Ariane 5. The geometry of the propellant nozzle was determined beforehand using the method of characteristics. The pressure in the outlet section of the nozzle is lower than the atmospheric pressure on the ground, causing oblique and normal shock waves at the exit. During the ascent of the launcher, the atmospheric pressure decreases and the shock wave disappears. The code allows the capture of the shock wave at the nozzle exit. The numerical technique uses the Flux Vector Splitting method of Van Leer to ensure convergence and avoid calculation instabilities. The Courant-Friedrichs-Lewy (CFL) coefficient and the mesh size are selected to ensure numerical convergence. The system of nonlinear partial differential equations governing this flow is solved with an explicit unsteady scheme based on the finite volume method. The accuracy of the solution depends on the size of the mesh and on the time step used in the discretized equations. In this study, we chose the mesh that gives a stationary solution with good accuracy.
Keywords: Launchers, supersonic flow, finite volume, nozzles, shock wave.
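As a brief illustration of two ingredients named in the abstract above, the sketch below shows the standard 1D Van Leer flux vector splitting for the Euler equations together with a CFL-limited explicit time step. It is a minimal stand-alone example, not the authors' solver; the value of gamma and all function names are assumptions.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats (assumed value for illustration)

def van_leer_split_fluxes(rho, u, p):
    """Van Leer flux vector splitting for the 1D Euler equations.
    rho, u, p are 1D arrays; returns (F_plus, F_minus), each of shape (3, n)."""
    a = np.sqrt(GAMMA * p / rho)                 # speed of sound
    M = u / a                                    # local Mach number
    E = p / (GAMMA - 1.0) + 0.5 * rho * u**2
    F = np.array([rho * u, rho * u**2 + p, (E + p) * u])   # physical flux

    F_plus = np.where(M >= 1.0, F, 0.0)          # fully supersonic: one-sided flux
    F_minus = np.where(M <= -1.0, F, 0.0)

    sub = np.abs(M) < 1.0                        # subsonic points: split the flux
    f1p = 0.25 * rho[sub] * a[sub] * (M[sub] + 1.0)**2
    f1m = -0.25 * rho[sub] * a[sub] * (M[sub] - 1.0)**2
    up = ((GAMMA - 1.0) * u[sub] + 2.0 * a[sub]) / GAMMA
    um = ((GAMMA - 1.0) * u[sub] - 2.0 * a[sub]) / GAMMA
    c = GAMMA**2 / (2.0 * (GAMMA**2 - 1.0))
    F_plus[:, sub] = [f1p, f1p * up, f1p * up**2 * c]
    F_minus[:, sub] = [f1m, f1m * um, f1m * um**2 * c]
    return F_plus, F_minus

def cfl_time_step(u, a, dx, cfl=0.8):
    """Explicit time step limited by the CFL condition."""
    return cfl * dx / np.max(np.abs(u) + a)
```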
3030 The Effect of Pyridoxine and Different Levels of Nitrogen on Physiological Indices of Corn (Zea mays L. var. SC 704)
Authors: Gholamreza Farrokhi, Babak Paykarestan
Abstract:
A field experiment was conducted on corn (Zea mays L. var. SC 704) to study the effect of three basic levels of nitrogen (90, 140, and 190 kg/ha as urea) combined with 0.01% and 0.02% pyridoxine pre-sowing seed soaking for 8 hours. Water-soaked seeds served as the control. Biomass production was recorded at 45, 70, and 95 days after sowing. Total dry matter (TDM), leaf area index (LAI), crop growth rate (CGR), relative growth rate (RGR), and net assimilation rate (NAR) were calculated from 45 to 95 days after sowing. Yield and its components, such as kernel yield, grain weight, biological yield, harvest index, and protein percentage, were measured at harvest. In general, 0.02% pyridoxine with 190 kg pure nitrogen/ha gave the maximum values for the growth and yield parameters. N190 + 0.02% pyridoxine enhanced seed yield and biological yield by 57.15% and 62.98%, respectively, compared to the 90 kg N and water-soaked treatment.
Keywords: Corn, Growth Indices, Nitrogen Levels, Physiological Indices, Pyridoxine.
3029 Evaluating Some Feature Selection Methods for an Improved SVM Classifier
Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp
Abstract:
Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory requirements can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document representation vector. Four feature selection methods are evaluated: Random Selection, Information Gain (IG), Support Vector Machine-based selection (SVM_FS), and Genetic Algorithm with SVM (GA_FS). We show that the best results were obtained with the SVM_FS and GA_FS methods for a relatively small dimension of the feature vector, compared with the IG method, which requires longer vectors for quite similar classification accuracies. We also present a novel method to better correlate the SVM kernel parameters (polynomial or Gaussian kernel).
Keywords: Features selection, learning with kernels, support vector machine, genetic algorithms and classification.
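The pipeline described above, ranking features and then training an SVM, can be sketched with off-the-shelf tools. The fragment below is only illustrative: it uses scikit-learn's mutual information score as an information-gain-like criterion and random placeholder data, rather than the authors' SVM_FS/GA_FS implementations and document collection.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder document-term matrix X and class labels y (random stand-in data)
rng = np.random.default_rng(0)
X = rng.random((500, 2000))      # 500 documents, 2000 candidate features
y = rng.integers(0, 4, 500)      # 4 predefined classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Rank features by an information-gain-like score and keep a small subset
selector = SelectKBest(mutual_info_classif, k=200).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Train SVM classifiers with polynomial and Gaussian (RBF) kernels
for kernel in ("poly", "rbf"):
    clf = SVC(kernel=kernel, degree=2, gamma="scale").fit(X_tr_sel, y_tr)
    print(kernel, "accuracy:", clf.score(X_te_sel, y_te))
```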
3028 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG
Authors: Mamta Garg
Abstract:
While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root mean square error and the peak signal-to-noise ratio. The method analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both chroma components are normally downsampled simultaneously, but in this paper we compare the results when only a single chroma component is downsampled. We demonstrate that a higher compression ratio is achieved when chrominance blue is downsampled than when chrominance red is downsampled, whereas the peak signal-to-noise ratio is higher when chrominance red is downsampled. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visual differences with either method.
Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio, Compression Ratio.
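To make the comparison concrete, a minimal sketch of the chroma subsampling step is given below: it converts RGB to YCbCr, subsamples only one chroma channel, and measures the resulting PSNR. It deliberately omits the DCT and quantization stages of the full JPEG pipeline, and hats.jpg is replaced by a random placeholder array.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """BT.601 full-range RGB -> YCbCr conversion (img: HxWx3, float in [0, 255])."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def downsample_then_upsample(chan, factor=2):
    """Crude subsampling of one chroma channel: block average, then repeat."""
    h, w = chan.shape
    small = chan.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.kron(small, np.ones((factor, factor)))

def psnr(ref, test):
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# Compare the effect of subsampling only Cb vs. only Cr (placeholder image)
img = np.random.default_rng(1).random((64, 64, 3)) * 255.0
y, cb, cr = rgb_to_ycbcr(img)
print("PSNR, Cb subsampled:", psnr(cb, downsample_then_upsample(cb)))
print("PSNR, Cr subsampled:", psnr(cr, downsample_then_upsample(cr)))
```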
3027 Optimal Capacitor Allocation for Loss Reduction in Distribution System Using Fuzzy and Plant Growth Simulation Algorithm
Authors: R. Srinivasa Rao
Abstract:
This paper presents a new and efficient approach for capacitor placement in radial distribution systems that determines the optimal locations and sizes of capacitors with the objective of improving the voltage profile and reducing power loss. The solution methodology has two parts: in part one, loss sensitivity factors are used to select candidate locations for capacitor placement, and in part two, a new algorithm that employs the Plant Growth Simulation Algorithm (PGSA) is used to estimate the optimal size of the capacitors at the optimal buses determined in part one. The main advantage of the proposed method is that it does not require any external control parameters. The other advantage is that it handles the objective function and the constraints separately, avoiding the trouble of determining barrier factors. The proposed method is applied to 9- and 34-bus radial distribution systems. The solutions obtained by the proposed method are compared with other methods. The proposed method has outperformed the other methods in terms of the quality of the solution.
Keywords: Distribution systems, Capacitor allocation, Loss reduction, Fuzzy, PGSA.
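A common form of the loss sensitivity factor used for candidate-bus screening in such studies is dPloss/dQ = 2*Qeff*R/V^2, evaluated for each branch and its receiving bus. The sketch below ranks buses with this formula on hypothetical feeder data; the exact normalization, units, and numbers are assumptions, not values taken from the paper.

```python
import numpy as np

def loss_sensitivity_factors(q_eff, r_branch, v_bus):
    """Loss sensitivity factor dPloss/dQ = 2*Q_eff*R / V^2 per branch/receiving bus.
    q_eff: effective reactive power flowing through each branch,
    r_branch: branch resistance, v_bus: voltage at the receiving bus."""
    return 2.0 * np.asarray(q_eff) * np.asarray(r_branch) / np.asarray(v_bus) ** 2

# Hypothetical 5-branch feeder data (illustrative numbers only)
q_eff = [120.0, 95.0, 60.0, 40.0, 15.0]
r_branch = [0.10, 0.16, 0.12, 0.20, 0.08]
v_bus = [11.0, 10.9, 10.8, 10.7, 10.7]

lsf = loss_sensitivity_factors(q_eff, r_branch, v_bus)
candidates = np.argsort(lsf)[::-1]  # buses ranked by descending sensitivity
print("candidate buses (0-indexed, best first):", candidates)
```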
3026 Investigation of Improved Chaotic Signal Tracking by Echo State Neural Networks and Multilayer Perceptron via Training of Extended Kalman Filter Approach
Authors: Farhad Asadi, S. Hossein Sadati
Abstract:
This paper presents the prediction performance of a feedforward Multilayer Perceptron (MLP) and Echo State Networks (ESN) trained with an extended Kalman filter. Feedforward neural networks and ESNs are powerful networks that can track and predict nonlinear signals. However, their tracking performance depends on the specific signals or data sets, with a risk of instability accompanied by large errors. In this study we explore this process by applying different network sizes and leaking rates for the prediction of nonlinear or chaotic signals with MLP neural networks. Major problems of ESN training, such as initialization of the network and improvement of the prediction performance, are tackled. The influence of the coefficient of the activation function in the hidden layer and of other key parameters is investigated through simulation results. The extended Kalman filter is employed in order to improve the sequential learning and the regulation of the learning rate of the feedforward neural networks. This training approach has vital features when the signals have a chaotic or non-stationary sequential pattern. Minimization of the variance in each step of the computation, and hence smoothing of the tracking, was observed in the results, indicating satisfactory tracking characteristics under certain conditions. In addition, the simulation results confirmed satisfactory performance of both neural networks with modified parameterization in tracking the nonlinear signals.
Keywords: Feedforward neural networks, nonlinear signal prediction, echo state neural networks approach, leaking rates, capacity of neural networks.
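For readers unfamiliar with the reservoir side of this comparison, the sketch below shows a minimal leaky-integrator Echo State Network predicting a chaotic signal one step ahead. It trains the readout with ridge regression rather than the extended Kalman filter used in the paper, and all sizes, the leaking rate, and the logistic-map data are placeholder assumptions.

```python
import numpy as np

class EchoStateNetwork:
    """Minimal leaky-integrator ESN with a ridge-regression readout."""
    def __init__(self, n_in, n_res, leak=0.3, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.leak = leak
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in + 1))      # input weights (+bias)
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.w = w * (rho / np.max(np.abs(np.linalg.eigvals(w))))  # set spectral radius
        self.w_out = None

    def _states(self, inputs):
        x = np.zeros(self.w.shape[0])
        states = []
        for u in inputs:
            pre = self.w_in @ np.r_[1.0, u] + self.w @ x
            x = (1.0 - self.leak) * x + self.leak * np.tanh(pre)    # leaky update
            states.append(x.copy())
        return np.array(states)

    def fit(self, inputs, targets, ridge=1e-6):
        s = self._states(inputs)
        self.w_out = np.linalg.solve(s.T @ s + ridge * np.eye(s.shape[1]), s.T @ targets)

    def predict(self, inputs):
        return self._states(inputs) @ self.w_out

# One-step-ahead prediction of a chaotic-looking signal (logistic map, placeholder data)
t = np.empty(600); t[0] = 0.5
for i in range(599):
    t[i + 1] = 3.9 * t[i] * (1.0 - t[i])
u, y = t[:-1].reshape(-1, 1), t[1:]
esn = EchoStateNetwork(n_in=1, n_res=200, leak=0.3)
esn.fit(u[:400], y[:400])
print("test RMSE:", np.sqrt(np.mean((esn.predict(u[400:]) - y[400:]) ** 2)))
```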
3025 Lattice Boltzmann Method for Turbulent Heat Transfer in Wavy Channel Flows
Authors: H.Y. Lai, S. C. Chang, W. L. Chen
Abstract:
The hydrodynamic and thermal lattice Boltzmann methods are applied to investigate turbulent convective heat transfer in wavy channel flows. In this study, the turbulent phenomena are modeled by large-eddy simulation with the Smagorinsky model. As a benchmark, the laminar and turbulent backward-facing step flows are simulated first. The results agree well with other numerical and experimental data. For wavy channel flows, the distribution of the Nusselt number and the skin-friction coefficients are calculated to evaluate the heat transfer and the drag force. The results indicate that the vortices at the trough affect the magnitude of the drag and weaken the heat convection on the wavy surface. In the turbulent cases, if the amplitude of the wavy boundary is large enough, secondary vortices are generated at the troughs and contribute to the heat convection. Finally, the effects of different Reynolds numbers on the turbulent transport phenomena are discussed.
Keywords: Heat transfer, lattice Boltzmann method, turbulence, wavy channel.
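The Smagorinsky closure mentioned above can be summarized in a few lines: an eddy viscosity nu_t = (Cs*dx)^2*|S| is added to the molecular viscosity and mapped onto the LBM relaxation time through tau = 3*nu + 0.5 (D2Q9 lattice units). The sketch below computes this from resolved velocity gradients purely for illustration; in LBM-LES practice the strain rate is usually recovered from non-equilibrium moments, and the value of Cs is an assumption.

```python
import numpy as np

def smagorinsky_relaxation_time(u, v, nu0, dx=1.0, cs_const=0.1):
    """Effective LBM relaxation time with a Smagorinsky subgrid viscosity.

    u, v : resolved 2D velocity fields (lattice units), axis 0 taken as y
    nu0  : molecular kinematic viscosity (lattice units)
    Assumes the usual D2Q9 relation nu = (tau - 0.5)/3, i.e. tau = 3*nu + 0.5.
    """
    dudy, dudx = np.gradient(u, dx)
    dvdy, dvdx = np.gradient(v, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    strain = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))   # |S| = sqrt(2 Sij Sij)
    nu_t = (cs_const * dx) ** 2 * strain                       # Smagorinsky eddy viscosity
    return 3.0 * (nu0 + nu_t) + 0.5
```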
3024 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators
Authors: Wei Zhang
Abstract:
With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hot spot in the past few years. However, the size of the networks has become increasingly large due to the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also have strict requirements on performance and low power consumption of the hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators for different devices and network models are reviewed and compared against Graphics Processing Units (GPUs), Application Specific Integrated Circuits (ASICs), and Digital Signal Processors (DSPs) to present our own critical analysis and comments. Finally, we discuss different perspectives on these acceleration and optimization methods for FPGA platforms to further explore the opportunities and challenges for future research, and we give an outlook on the future development of FPGA-based accelerators.
Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.
3023 Titanium Dioxide Modified with Glutathione as Potential Drug Carrier with Reduced Toxic Properties
Authors: Olga Długosz, Jolanta Pulit-Prociak, Marcin Banach
Abstract:
The paper presents a process for obtaining glutathione-modified titanium oxide nanoparticles. The processes were carried out in a microwave radiation field. The influence of the molar ratio of glutathione to titanium oxide, and the effect of the excess of NaOH relative to the stoichiometric amount, on the size of the formed TiO2 nanoparticles was determined. The physicochemical properties of the obtained products were evaluated using dynamic light scattering (DLS), transmission electron microscopy with energy-dispersive X-ray spectroscopy (TEM-EDS), the low-temperature nitrogen adsorption method (BET), X-ray diffraction (XRD), and Fourier-transform infrared spectroscopy (FTIR). The size of the TiO2 nanoparticles ranged from 30 nm to 336 nm. The release of titanium ions from the prepared products was evaluated. These studies were carried out using different media in which the powders were incubated for a specific time: water, SBF, and Ringer's solution. The release of titanium ions from the modified products is weaker compared to unmodified titanium oxide nanoparticles. This reduced release of titanium ions may allow the use of such modified materials as substances in drug delivery systems.
Keywords: Titanium dioxide, nanoparticles, drug carrier, glutathione.
3022 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure
Authors: T. Nozu, K. Hibi, T. Nishiie
Abstract:
This paper discusses the applicability of a numerical model for damage prediction in accidental hydrogen explosions occurring at a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code “NuFD/FrontFlowRed”. For simulating unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model express the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we simulated two previous types of hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m3 volume with a 30% hydrogen-air mixture. A reinforced concrete wall was set 4 m away from the front surface of the source, which was ignited at the bottom center by a spark. The other is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m2 on one side. The test was performed with ignition at the center of the wall opposite the vent. Hydrogen-air mixtures with hydrogen concentrations close to 18 vol.% were used in the tests. The results from the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two explosion tests.
Keywords: Deflagration, Large Eddy Simulation, Turbulent combustion, Vented enclosure.
3021 Optimal Capacitor Placement in a Radial Distribution System Using Plant Growth Simulation Algorithm
Authors: R. Srinivasa Rao, S. V. L. Narasimham
Abstract:
This paper presents a new and efficient approach for capacitor placement in radial distribution systems that determines the optimal locations and sizes of capacitors with the objective of improving the voltage profile and reducing power loss. The solution methodology has two parts: in part one, loss sensitivity factors are used to select candidate locations for capacitor placement, and in part two, a new algorithm that employs the Plant Growth Simulation Algorithm (PGSA) is used to estimate the optimal size of the capacitors at the optimal buses determined in part one. The main advantage of the proposed method is that it does not require any external control parameters. The other advantage is that it handles the objective function and the constraints separately, avoiding the trouble of determining barrier factors. The proposed method is applied to 9-, 34-, and 85-bus radial distribution systems. The solutions obtained by the proposed method are compared with other methods. The proposed method has outperformed the other methods in terms of the quality of the solution.
Keywords: Distribution systems, Capacitor placement, loss reduction, Loss sensitivity factors, PGSA.
3020 Complex Flow Simulation Using a Partially Lagging One-Equation Turbulence Model
Authors: M. Elkhoury
Abstract:
A recently developed one-equation turbulence model has been successfully applied to simulate turbulent flows with various complexities. The model, which is based on a transformation of the k-ε closure, is wall-distance free and equipped with lagging destruction/dissipation terms. Test cases included shock-boundary-layer interaction flows over the NACA 0012 airfoil, an axisymmetric bump, and the ONERA M6 wing. The capability of the model to operate in a Scale Resolved Simulation (SRS) mode is demonstrated through the simulation of massive flow separation over a circular cylinder at Re = 1.2 x 10^6. The results are assessed against available experiments, the Menter one-equation (k-ε) model, and the Spalart-Allmaras model, which belongs to the same single-equation closure family.
Keywords: Turbulence modeling, complex flow simulation, scale adaptive simulation, one-equation turbulence model.
3019 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data
Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu
Abstract:
Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the findings of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which accounts for the abnormal fluctuation scaling known as Taylor's law. This method is extended to handle sales data that are incomplete because of stock-outs by introducing maximum likelihood estimation for censored data. A way of determining the optimal stock by pricing the cost of waste reduction is also proposed. This study focuses on the examination of the methods for large sales numbers, where Taylor's law is obvious. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss achieves substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around a 1% profit loss achieves a halving of disposal at a proportionality constant of 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially with large sales numbers.
Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.
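As a rough illustration of how a Taylor's-law-type fluctuation scaling feeds into stock decisions, the sketch below predicts the demand standard deviation from the mean and sets the stock at a chosen service level. The functional form sigma^2 = c1*mu + (c2*mu)^2, the constants, and the normal approximation are assumptions for illustration; the paper's actual method is a particle filter with censored-data maximum likelihood.

```python
import numpy as np
from scipy.stats import norm

def predicted_std(mu, c1=1.0, c2=0.12):
    """Demand standard deviation under an assumed Taylor's-law-type scaling:
    sigma^2 = c1*mu + (c2*mu)^2, i.e. a Poisson-like term plus a term whose
    relative size is governed by the proportionality constant c2."""
    return np.sqrt(c1 * mu + (c2 * mu) ** 2)

def optimal_stock(mu, service_level=0.95, **kw):
    """Stock level covering demand with the given probability, assuming the
    daily sales number is approximately normal around the predicted mean mu."""
    return mu + norm.ppf(service_level) * predicted_std(mu, **kw)

for mu in (10, 100, 1000):   # small vs. large sales numbers
    print(mu, round(optimal_stock(mu), 1))
```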
3018 Colour Image Compression Method Based on Fractal Block Coding Technique
Authors: Dibyendu Ghoshal, Shimal Das
Abstract:
Image compression based on fractal coding is a lossy method, normally applied to gray-level images using rectangular range and domain blocks. Fractal-based digital image compression provides a large compression ratio, and in this paper a method is proposed that uses the YUV colour space and fractal theory based on iterated transformations. Fractal geometry is applied in the current study to colour image compression coding. Colour images possess correlations among the colour components, and hence a high compression ratio can be achieved by exploiting these redundancies. The proposed method utilizes the self-similarity within the colour image as well as the cross-correlations between the components. Experimental results show that a greater compression ratio can be achieved with large domain blocks, while the trade-off in image quality remains good to acceptable at less than 1 bit per pixel.
Keywords: Fractal coding, Iterated Function System (IFS), Image compression, YUV colour space.
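For orientation, a highly simplified single-channel fractal (IFS) encoder is sketched below: each range block is matched against spatially contracted domain blocks, and the best scale and offset are stored; decoding would iterate these maps from an arbitrary starting image. The block sizes, the brute-force search, and the omission of isometries are assumptions for brevity, and the YUV colour handling described in the paper is not shown.

```python
import numpy as np

def encode_channel(img, r=4):
    """Brute-force fractal (IFS) encoding of one channel with r x r range blocks.
    Domain blocks are 2r x 2r, averaged down to r x r; for each range block the
    best (domain position, scale s, offset o) in the least-squares sense is kept."""
    h, w = img.shape
    domains = []
    for i in range(0, h - 2 * r + 1, r):
        for j in range(0, w - 2 * r + 1, r):
            d = img[i:i + 2 * r, j:j + 2 * r]
            d = d.reshape(r, 2, r, 2).mean(axis=(1, 3))      # shrink 2x
            domains.append(((i, j), d))
    code = []
    for i in range(0, h, r):
        for j in range(0, w, r):
            rng_blk = img[i:i + r, j:j + r]
            best = None
            for pos, d in domains:
                dm, rm = d.mean(), rng_blk.mean()
                var = ((d - dm) ** 2).sum()
                s = 0.0 if var == 0 else ((d - dm) * (rng_blk - rm)).sum() / var
                err = ((s * (d - dm) + rm - rng_blk) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, pos, s, rm - s * dm)
            code.append(((i, j), best[1], best[2], best[3]))  # range pos, domain, s, o
    return code

img = np.random.default_rng(0).random((32, 32)) * 255.0      # placeholder channel
print(len(encode_channel(img)), "range blocks encoded")
```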
3017 Air Classification of Dust from Steel Converter Secondary De-dusting for Zinc Enrichment
Authors: C. Lanzerstorfer
Abstract:
The off-gas from the basic oxygen furnace (BOF), where pig iron is converted into steel, is treated in the primary ventilation system. This system is in full operation only during oxygen blowing, when the BOF converter vessel is in a vertical position. When pig iron and scrap are charged into the BOF, and when slag or steel is tapped, the vessel is tilted. The emissions generated during charging and tapping cannot be captured by the primary off-gas system. To capture these emissions, a secondary ventilation system is usually installed. The emissions are captured by a canopy hood installed just above the converter mouth in the tilted position. The aim of this study was to investigate the dependence of Zn and other components on the particle size of BOF secondary ventilation dust. Because of the high temperature of the BOF process, it can be expected that Zn will be enriched in the fine dust fractions. If Zn is enriched in the fine fractions, classification could be applied to split the dust into two size fractions with different Zn contents. For this purpose, air classification experiments with dust from the secondary ventilation system of a BOF were performed. The results show that Zn and Pb are highly enriched in the finest dust fraction. For Cd, Cu, and Sb the enrichment is weaker. In contrast, the non-volatile metals Al, Fe, Mn, and Ti were depleted in the fine fractions. Thus, air classification could be considered for the treatment of dust from secondary BOF off-gas cleaning.
Keywords: Air classification, converter dust, recycling, zinc.
3016 Bioprocessing of Proximally Analyzed Wheat Straw for Enhanced Cellulase Production through Process Optimization with Trichoderma viride under SSF
Authors: Ishtiaq Ahmed, Muhammad Anjum Zia, Hafiz Muhammad Nasir Iqbal
Abstract:
The purpose of the present work was to study the production and process parameter optimization for the synthesis of cellulase from Trichoderma viride in solid state fermentation (SSF) using agricultural wheat straw as the substrate, since fungal conversion of lignocellulosic biomass to cellulase is in increasing demand for various biotechnological applications. Optimization of process parameters is a necessary step towards a higher product yield. Several parameters, including pretreatment, extraction solvent, substrate concentration, initial moisture content, pH, incubation temperature, and inoculum size, were optimized for enhanced production of the industrially important cellulase. The maximum cellulase activity of 398.10 ± 2.43 μM/mL/min was achieved when the proximally analyzed lignocellulosic substrate wheat straw was pretreated with 2% HCl and extracted with distilled water, at 3% substrate concentration, 40% moisture content, an optimum pH of 5.5, an incubation temperature of 45°C, and a 10% inoculum size.
Keywords: Cellulase, Lignocellulosic residue, Process optimization, Proximal analysis, SSF, Trichoderma viride.
3015 Joint Adaptive Block Matching Search (JABMS) Algorithm
Authors: V.K.Ananthashayana, Pushpa.M.K
Abstract:
In this paper a new Joint Adaptive Block Matching Search (JABMS) algorithm is proposed to generate motion vectors and search for the best matching macroblock by classifying the motion vector movement based on the prediction error. The Diamond Search (DS) algorithm gives high estimation accuracy when the motion vector is small, while the Adaptive Rood Pattern Search (ARPS) algorithm can handle large motion vectors but is not very accurate. The proposed JABMS algorithm, which is capable of considering both small and large motions, gives improved estimation accuracy; its computational cost is 15.2 times lower than that of the Exhaustive Search (ES) algorithm and 1.3 times lower than that of the Diamond Search algorithm.
Keywords: Adaptive rood pattern search, Block matching, Diamond search, Joint Adaptive search, Motion estimation.
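For reference, the Exhaustive Search baseline that the reported speed-ups are measured against can be written in a few lines: slide the macroblock over a ±p search window in the reference frame and keep the displacement with the lowest sum of absolute differences (SAD). The sketch below uses synthetic frames and placeholder sizes; the adaptive JABMS pattern switching itself is not reproduced.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def full_search(ref, cur, top, left, block=16, p=7):
    """Exhaustive Search motion estimation for one macroblock.
    Returns the motion vector (dy, dx) minimizing SAD within a +/- p window."""
    target = cur[top:top + block, left:left + block]
    best, best_mv = None, (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cost = sad(ref[y:y + block, x:x + block], target)
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv, best

# Two synthetic frames: the second is the first shifted by (2, 3) pixels
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(full_search(ref, cur, top=16, left=16))   # expected motion vector: (-2, -3)
```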
3014 Simulation of Complex-Shaped Particle Breakage Using the Discrete Element Method
Authors: Felix Platzer, Eric Fimbinger
Abstract:
In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to the failure and often only break locally instead of fracturing completely, some of these principles do not lead to realistic results. The reason for this is that in said cases, the methods in question, such as the Particle Replacement Method (PRM) or Voronoi Fracture, replace the initial particle (that is intended to break) into several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy. That is why those methods are commonly used for the simulation of materials that fracture completely instead of breaking locally. That being the case, when simulating local failure, it is advisable to pre-build the initial particle from sub-particles that are bonded together. The dimensions of these sub-particles consequently define the minimum size of the fracture results. This structure of bonded sub-particles enables the initial particle to break at the location of the highest local loads – due to the failure of the bonds in those areas – with several sub-particle clusters being the result of the fracture, which can again also break locally. In this project, different methods for the generation and calibration of complex-shaped particle conglomerates using bonded particle modeling (BPM) to enable the ability to depict more realistic fracture behavior were evaluated based on the example of filter cake. The method that proved suitable for this purpose and which furthermore allows efficient and realistic simulation of breakage behavior of complex-shaped particles applicable to industrial-sized simulations is presented in this paper.
Keywords: Bonded particle model (BPM), DEM, filter cake, particle breakage, particle fracture.
3013 Mining Sequential Patterns Using Hybrid Evolutionary Algorithm
Authors: Mourad Ykhlef, Hebah ElGibreen
Abstract:
Mining sequential patterns in large databases has become an important data mining task with broad applications; it describes potential sequenced relationships among items in a database. Many different algorithms have been introduced for this task. Conventional algorithms can find the exact optimal sequential pattern rule, but they take a long time, particularly when applied to large databases. More recently, evolutionary algorithms such as Particle Swarm Optimization and the Genetic Algorithm have been proposed and applied to this problem. This paper introduces a new kind of hybrid evolutionary algorithm that combines the Genetic Algorithm (GA) with Particle Swarm Optimization (PSO) to mine sequential patterns, in order to improve the convergence speed of evolutionary algorithms. This algorithm is referred to as SP-GAPSO.
Keywords: Genetic Algorithm, Hybrid Evolutionary Algorithm, Particle Swarm Optimization algorithm, Sequential Pattern mining.
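A core ingredient of any evolutionary sequential pattern miner is the fitness of a candidate pattern, usually its support in the sequence database. The snippet below computes that support with an order-preserving subsequence test; it is only the evaluation step, under assumed toy data, and the SP-GAPSO crossover, mutation, and velocity updates are not reproduced.

```python
def is_subsequence(pattern, sequence):
    """True if every item of `pattern` appears in `sequence` in the same order."""
    it = iter(sequence)
    return all(item in it for item in pattern)

def support(pattern, database):
    """Fraction of sequences in the database containing the pattern; a typical
    fitness value for an individual in GA/PSO-based sequential pattern mining."""
    return sum(is_subsequence(pattern, s) for s in database) / len(database)

# Toy sequence database (placeholder data)
db = [["a", "b", "c", "d"], ["a", "c", "d"], ["b", "d"], ["a", "b", "d"]]
print(support(["a", "d"], db))   # 0.75
```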
3012 Seasonal Heat Stress Effect on Cholesterol, Estradiol and Progesterone during Follicular Development in Egyptian Buffalo
Authors: Heba F. Hozyen, Hodallah H. Ahmed, S. I. A. Shalaby, G. E. S. Essawy
Abstract:
Biochemical and hormonal changes occurring in both follicular fluid and blood are involved in the control of ovarian physiology. The present study was conducted on follicular fluid and serum samples obtained from 708 buffaloes. Samples were examined for estradiol, progesterone, and cholesterol concentrations in relation to seasonal changes, ovarian follicular size, and stage of the estrous cycle. The obtained results revealed that follicular fluid and serum levels of estradiol, progesterone, and cholesterol were significantly lower during summer and autumn than during winter and spring. With increasing follicular size, the follicular fluid levels of progesterone and cholesterol decreased significantly, while estradiol levels increased significantly. Estradiol and progesterone levels were significantly higher in follicular fluid than in blood, while cholesterol was significantly lower in follicular fluid than in serum. In conclusion, the current study sheds light on the hormonal changes in follicular fluid and blood under heat stress, which could be related to the low fertility of buffalo in the summer.
Keywords: Buffalo, follicular fluid, follicular development, seasonal changes, steroids.
3011 Seed-Based Region Growing (SBRG) vs Adaptive Network-Based Fuzzy Inference System (ANFIS) vs Fuzzy c-Means (FCM): Brain Abnormalities Segmentation
Authors: Shafaf Ibrahim, Noor Elaiza Abdul Khalid, Mazani Manaf
Abstract:
Segmentation of Magnetic Resonance Imaging (MRI) images is one of the most challenging problems in medical imaging. This paper compares the performance of Seed-Based Region Growing (SBRG), the Adaptive Network-Based Fuzzy Inference System (ANFIS), and Fuzzy c-Means (FCM) in brain abnormalities segmentation. Controlled experimental data are used, designed in such a way that the size of the abnormalities is known in advance. This is done by cutting abnormalities of various sizes and pasting them onto normal brain tissue. The normal tissues, or the background, are divided into three different categories. The segmentation is performed on fifty-seven data sets of each category. The known abnormality sizes, expressed in numbers of pixels, are then compared with the segmentation results of the three techniques. It was shown that ANFIS returns the best segmentation performance for light abnormalities, whereas SBRG performed best in dark abnormalities segmentation.
Keywords: Seed-Based Region Growing (SBRG), Adaptive Network-Based Fuzzy Inference System (ANFIS), Fuzzy c-Means (FCM), Brain segmentation.
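Of the three techniques compared above, SBRG is the simplest to state: starting from a seed pixel, neighbouring pixels are added to the region as long as their intensity stays within a tolerance of the seed. A minimal sketch on a synthetic slice with an abnormality of known size is given below; the connectivity rule, tolerance, and data are assumptions, and the ANFIS and FCM counterparts are not shown.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Seed-Based Region Growing: flood-fill from `seed`, accepting 4-connected
    neighbours whose intensity differs from the seed by at most `tol`."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
               and abs(float(img[ny, nx]) - seed_val) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Synthetic slice: a bright "abnormality" of known size pasted onto background
img = np.full((40, 40), 60.0)
img[10:18, 12:22] = 140.0                      # 8 x 10 = 80 pixels
mask = region_grow(img, seed=(12, 15), tol=20.0)
print("segmented pixels:", mask.sum())         # expected: 80
```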
3010 Bread Quality Improvement with Special Novel Additives
Authors: Mónika Bartalné-Berceli, Eszter Izsó, Szilveszter Gergely, András Salgó
Abstract:
Presently, a significant portion of the Earth's population does not have access to healthy food, either because they cannot afford it or because they do not know which foods are healthy. The aim of the European Union's 7th Framework Chance project (No. 266331) has been to develop relatively cheap food with favourable nutritional value and acceptable quality for consumers. As one task of the project, we manufactured bread products as a basic food. We examined the enrichment of bread products with four kinds of bran, with a special milling product of the grain industry (aleurone-rich flour), and with a soy-based sprouted additive. The applied concentration of the six additives was optimized, and the physical properties of the bread products were monitored. The weight and density of the enriched breads increased slightly, while the volume and height decreased slightly compared to the corresponding data of the control bread. The optimized composition of the final product is favourably affected by these additives, which have a highly preferred composition from a nutritional point of view.
Keywords: Aleurone-rich flour, Brans, Bread products, Sprouted soybean, YASO.
3009 Energy-Level Structure of a Confined Electron-Positron Pair in Nanostructure
Authors: Tokuei Sako, Paul-Antoine Hervieux
Abstract:
The energy-level structure of an electron-positron pair confined in a quasi-one-dimensional nano-scale potential well has been investigated, focusing on its trend in the limit of small confinement strength ω, namely, the Wigner molecular regime. Anisotropic Gaussian-type basis functions, supplemented by high angular momentum functions with l as large as 19, have been used to obtain reliable full configuration interaction (FCI) wave functions. The resultant energy spectrum shows a band structure characterized by ω in the large-ω regime, whereas in the small-ω regime it shows an energy-level pattern dominated by excitation into the in-phase motion of the two particles. The observed trend has been rationalized on the basis of the nodal patterns of the FCI wave functions.
Keywords: Confined systems, positron, wave function, Wigner molecule, quantum dots.
3008 Fabrication of Nanoengineered Radiation Shielding Multifunctional Polymeric Sandwich Composites
Authors: Nasim Abuali Galehdari, Venkat Mani, Ajit D. Kelkar
Abstract:
Space radiation has become one of the major factors in successful long-duration space exploration. Exposure to space radiation not only can affect the health of astronauts but also can disrupt or damage materials and electronics. Hazards to materials include degradation of properties such as modulus, strength, or glass transition temperature. Electronics may experience single event effects, gate rupture, burnout of field effect transistors, and noise. Presently, aluminum is the major component of most space structures due to its light weight and good structural properties. However, aluminum is ineffective at blocking space radiation. Therefore, most past research focused on polymers, which contain large amounts of hydrogen; these, however, are not structural materials and would require large amounts of material to achieve the structural properties needed. One class of materials that can alleviate this problem is polymeric composites, which have good structural properties and use polymers containing large amounts of hydrogen. This paper presents the steps involved in the fabrication of multi-functional hybrid sandwich panels that can provide beneficial radiation shielding as well as structural strength. Multifunctional hybrid sandwich panels were manufactured using the vacuum assisted resin transfer molding process and were subjected to radiation treatment. The study indicates that various nanoparticles, including boron nanopowder, boron carbide, and gadolinium nanoparticles, can be successfully used to block space radiation without sacrificing structural integrity.
Keywords: Multi-functional, polymer composites, radiation shielding, sandwich composites.
3007 Large-Dimensional Shells under Mining Tremors from Various Mining Regions in Poland
Authors: Joanna M. Dulińska, Maria Fabijańska
Abstract:
In the paper, a detailed analysis of the dynamic response of a cooling tower shell to mining tremors originating from the two main regions of mining activity in Poland (the Upper Silesian Coal Basin and the Legnica-Glogow Copper District) is presented. Representative time histories registered in both regions were used as ground motion data in the calculations of the dynamic response of the structure. It was shown that the dynamic response of the shell strongly depends not only on the level of the vibration amplitudes but also on the dominant frequency range of the mining shock, which is typical for the mining region. The vertical component of the vibrations also turned out to have a considerable influence on the total dynamic response of the shell. Finally, the non-uniformity of the kinematic excitation resulting from the spatial variability of the ground motion plays a significant role in the dynamic analysis of large-dimensional shells under mining shocks.
Keywords: Cooling towers, dynamic response, mining tremors, non-uniform kinematic excitation.
3006 Correlation between Heat Treatment, Microstructure and Properties of TRIP-Assisted Steels
Authors: A. Talapatra, N. R. Bandhyopadhyay, J. Datta
Abstract:
In the present study, two TRIP-assisted steels, designated as A (with no Cr or Cu content) and B (with higher Ni, Cr, and Cu content), were heat treated under different conditions, and the correlation between heat treatment, microstructure, and properties was investigated. Microstructural examination was carried out by optical microscopy and scanning electron microscopy after electrolytic etching. Non-destructive electrochemical and ultrasonic testing of the two TRIP-assisted steels was used to determine the corrosion and mechanical properties of steels with different microstructural phases. The microstructural studies, accompanied by the evaluation of mechanical properties, revealed that steels containing martensite phases, which showed higher corrosion rates and hardness values, exhibited lower ultrasonic velocities, and that steels whose microstructure had finer grains, and therefore more grain boundary area, were less corrosion resistant. The steel containing more Cu, Ni, and Cr was less corrosive compared to other steels with the same processing or microstructure.
Keywords: TRIP-assisted steels, heat treatment, corrosion, electrochemical techniques, micro-structural characterization, non-destructive (ultrasonic) technique.
3005 Degree of Milling Effects on the Sorghum (Sorghum bicolor) Flours, Physicochemical Properties and Kinetics of Starch Digestion
Authors: Brou K., Guéhi T., Konan A. G., Gbakayoro J. B., Gnakri D.
Abstract:
Two types of crushing were applied to grains of red sorghum: manual crushing using a kitchen mortar and pestle, and mechanical crushing using a hammer mill. The flours obtained from these crushing methods were sieved and subdivided into fractions according to the mesh diameter of the sieves (0.16 mm, 0.25 mm, 0.315 mm, 0.4 mm, 0.63 mm, ...). Some physical, chemical, and nutritional traits of these flours were evaluated using Association of Official Analytical Chemists (AOAC) methods. The in vitro digestibility of these flours was also studied, using 1% flour as substrate and α-amylase from B. licheniformis (E.C.3.2.1.1; Megazyme, Wicklow, Ireland). The results revealed that the flour fractions with the finest diameters, 0.16 mm and 0.25 mm, are the richest in nutrients and also the most digestible. Mechanical crushing is also the best means of obtaining a significant amount of flour. In conclusion, the type of crushing and the particle size have an impact on the final concentration of some nutrients in the flours obtained. Indeed, the finest particles (0.16 mm, 0.25 mm, 0.315 mm) obtained after sifting the flours are more nutritive and more digestible than the other sizes, so the finest fractions could be recommended for processing cereals, namely sorghum, for the production of infant foods.
Keywords: Nutrients, digestibility, crush, flour, milling, granulometry.
3004 Application of Single Subject Experimental Designs in Adapted Physical Activity Research: A Descriptive Analysis
Authors: Jiabei Zhang, Ying Qi
Abstract:
The purpose of this study was to develop a descriptive profile of adapted physical activity research using single subject experimental designs. All research articles using single subject experimental designs published in the journal Adapted Physical Activity Quarterly from 1984 to 2013 were employed as the data source. Each article was coded into a subcategory of seven categories: (a) the size of the sample; (b) the age of the participants; (c) the type of disabilities; (d) the type of data analysis; (e) the type of design; (f) the independent variable; and (g) the dependent variable. Frequencies, percentages, and trend inspection were used to analyze the data and develop a profile. The resulting profile shows that a small portion of research articles used single subject designs, and that most researchers used a small sample size, recruited children as subjects, emphasized learning and behavior impairments, selected visual inspection with descriptive statistics, preferred a multiple baseline design, focused on the effects of therapy, inclusion, and strategy, and measured desired behaviors most often, with a decreasing trend over the years.
Keywords: Adapted physical activity research, single subject experimental designs.
3003 Estimating Shortest Circuit Path Length Complexity
Authors: Azam Beg, P. W. Chandana Prasad, S.M.N.A Senenayake
Abstract:
When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for larger numbers of Monte Carlo data points. The neural model shows a strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. Therefore, the model can be considered a method for predicting path length complexities; this is expected to lead to minimum time complexity of very large-scale integrated circuits and of related computer-aided design tools that use binary decision diagrams.
Keywords: Monte Carlo circuit simulation data, binary decision diagrams, neural network modeling, shortest path length estimation.
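The modeling workflow described above, fitting a small neural network that maps (number of variables, number of minterms) to an expected shortest path length, can be sketched as follows. The data-generating formula here is a synthetic placeholder standing in for the Monte Carlo/ISCAS data, so only the regression workflow, not the reported accuracy, is illustrated.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical training table: (number of variables, number of minterms) ->
# average shortest path length of the resulting BDD. The formula below is a
# synthetic placeholder, not the paper's data set.
rng = np.random.default_rng(0)
n_vars = rng.integers(4, 20, 2000)
minterms = rng.integers(8, 4096, 2000)
path_len = 0.6 * n_vars + 0.4 * np.log2(minterms) + rng.normal(0, 0.3, 2000)

X = np.column_stack([n_vars, minterms])
X_tr, X_te, y_tr, y_te = train_test_split(X, path_len, test_size=0.25, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
rms = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print("RMS error on held-out data:", round(rms, 3))
```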