Search results for: wavelet particle decomposition

1989 Annular Hyperbolic Profile Fins with Variable Thermal Conductivity Using Laplace Adomian Transform and Double Decomposition Methods

Authors: Yinwei Lin, Cha'o-Kuang Chen

Abstract:

In this article, the Laplace Adomian transform method (LADM) and the double decomposition method (DDM) are used to solve annular hyperbolic profile fins with variable thermal conductivity. When the thermal conductivity parameter ε is relatively large, the numerical solution obtained with DDM becomes inaccurate. Moreover, when more than seven terms of the DDM series are used, the DDM solution becomes very cumbersome to compute. The present method, in contrast, can be evaluated easily beyond seven terms and yields more precise numerical solutions. When the thermal conductivity parameter ε is relatively large, LADM also achieves better accuracy than DDM.

Keywords: fins, thermal conductivity, Laplace transform, Adomian, nonlinear

Procedia PDF Downloads 318
1988 Urban-Rural Inequality in Mexico after NAFTA: A Quantile Regression Analysis

Authors: Rene Valdiviezo-Issa

Abstract:

In this paper, we use Mexico's Households Income and Expenditures (ENIGH) survey to explain the behaviour of the urban-rural expenditure gap since Mexico's entry into the North American Free Trade Agreement (NAFTA) in 1994, comparing it with the latest available survey, which took place in 2014. We use real trimestral (quarterly) expenditure per capita (RTEPC) as the measure of welfare. We use quantile regressions and a quantile regression decomposition to describe the gap between the urban and rural distributions of log RTEPC. We find that the decrease in the difference between the urban and rural distributions of log RTEPC, i.e., inequality, is driven by a deprivation of the urban areas in very specific characteristics rather than an improvement of the rural areas. Using the decomposition, we observe that the gap is primarily brought about by differences in returns to covariates between the urban and rural areas.
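
As an illustration of the kind of analysis described above, the following minimal sketch runs separate quantile regressions for urban and rural households and splits the gap at each quantile into a returns part and a covariates part. It is not the authors' code: the data, the column names (log_rtepc, urban, schooling, hh_size), and the simplified Oaxaca-Blinder-style split are assumptions made for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
urban = rng.integers(0, 2, n)                    # 1 = urban household, 0 = rural
schooling = rng.normal(8, 3, n)                  # years of schooling of household head
hh_size = rng.integers(1, 9, n)
log_rtepc = 7.0 + 0.08 * schooling - 0.05 * hh_size + 0.3 * urban + rng.normal(0, 0.5, n)
df = pd.DataFrame({"log_rtepc": log_rtepc, "urban": urban,
                   "schooling": schooling, "hh_size": hh_size})

gaps = {}
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    # Separate quantile regressions for urban and rural households
    fit_u = smf.quantreg("log_rtepc ~ schooling + hh_size", df[df.urban == 1]).fit(q=q)
    fit_r = smf.quantreg("log_rtepc ~ schooling + hh_size", df[df.urban == 0]).fit(q=q)
    # Oaxaca-Blinder-style split at each quantile (a simplification of the full
    # quantile decomposition): returns part + covariates (endowment) part
    x_r = np.r_[1.0, df.loc[df.urban == 0, ["schooling", "hh_size"]].mean().values]
    x_u = np.r_[1.0, df.loc[df.urban == 1, ["schooling", "hh_size"]].mean().values]
    returns_part = (fit_u.params.values - fit_r.params.values) @ x_r
    covariates_part = fit_u.params.values @ (x_u - x_r)
    gaps[q] = {"returns": round(returns_part, 3), "covariates": round(covariates_part, 3)}

print(gaps)
```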

Keywords: quantile regression, urban-rural inequality, inequality in Mexico, income decomposition

Procedia PDF Downloads 266
1987 Comparison of ANFIS Update Methods Using Genetic Algorithm, Particle Swarm Optimization, and Artificial Bee Colony

Authors: Michael R. Phangtriastu, Herriyandi Herriyandi, Diaz D. Santika

Abstract:

This paper presents a comparison of metaheuristic algorithms implemented to train the antecedent and consequent parameters of the adaptive network-based fuzzy inference system (ANFIS). The algorithms compared are the genetic algorithm (GA), particle swarm optimization (PSO), and artificial bee colony (ABC). The objective of this paper is to benchmark these well-known metaheuristic algorithms. The algorithms are applied to several data sets of different natures, and various combinations of the algorithms' parameters are tested. In all algorithms, different population sizes are tested; in PSO, combinations of the velocity coefficients are tested; and in ABC, different abandonment limits are tested. The experiments show that ABC is more reliable than the other algorithms, managing to achieve a lower mean square error (MSE) than the other algorithms on all data sets.
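
The following is a minimal, generic PSO sketch of the kind of parameter tuning compared above, minimizing the MSE of a small parametric model on a toy data set; it is not an ANFIS implementation, and the model, search bounds, inertia weight, and acceleration coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
y_true = 1.5 * np.tanh(0.8 * x) + 0.3            # toy "data set" to fit

def mse(theta):                                   # fitness = mean square error
    a, b, c = theta
    return np.mean((a * np.tanh(b * x) + c - y_true) ** 2)

n_particles, n_iter, dim = 30, 200, 3
w, c1, c2 = 0.7, 1.5, 1.5                         # inertia and acceleration weights
pos = rng.uniform(-2, 2, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best parameters:", gbest, "MSE:", mse(gbest))
```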

Keywords: ANFIS, artificial bee colony, genetic algorithm, metaheuristic algorithm, particle swarm optimization

Procedia PDF Downloads 336
1986 A Gene Selection Algorithm for Microarray Cancer Classification Using an Improved Particle Swarm Optimization

Authors: Arfan Ali Nagra, Tariq Shahzad, Meshal Alharbi, Khalid Masood Khan, Muhammad Mugees Asif, Taher M. Ghazal, Khmaies Ouahada

Abstract:

Gene selection is an essential step in the classification of microarray cancer data. Gene expression cancer data (DNA microarray) facilitates computing the robust and concurrent expression of various genes. Particle swarm optimization (PSO) requires simple operators and few parameters for tuning a gene selection model. However, the selection of prognostic genes with small redundancy is a great challenge for researchers, as there are several complications in PSO-based selection methods. In this research, a new variant of PSO, the self-inertia weight adaptive PSO (SIW-APSO), is proposed and combined with an extreme learning machine (SIW-APSO-ELM) to achieve high gene selection and prediction accuracy. The new algorithm balances the exploration capability of the inertia weight adaptive particle swarm optimization with exploitation. The SIW-APSO is used to search the solution space and is updated through an evolutionary process in which each particle iteratively improves its velocity and position. The extreme learning machine (ELM) is designed for the selection procedure. The proposed method has been applied to identify a number of genes in the cancer dataset. The classification stage employs ELM, k-centroid nearest neighbor (KCNN), and support vector machine (SVM) classifiers to attain high forecast accuracy compared to state-of-the-art methods on microarray cancer datasets, which shows the effectiveness of the proposed method.
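
A minimal extreme learning machine (ELM) sketch is given below to illustrate the classifier used in the pipeline; the gene-expression matrix and labels are synthetic, and the SIW-APSO selection step is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 50))                   # 100 samples x 50 selected genes (toy data)
y = (X[:, :5].sum(axis=1) > 0).astype(int)       # toy binary cancer label
Y = np.eye(2)[y]                                 # one-hot targets

n_hidden = 64
W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (kept fixed)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                           # hidden-layer activations
beta = np.linalg.pinv(H) @ Y                     # output weights by pseudo-inverse

pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```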

Keywords: microarray cancer, improved PSO, ELM, SVM, evolutionary algorithms

Procedia PDF Downloads 66
1985 Characterization of Enhanced Thermostable Polyhydroxyalkanoates

Authors: Ahmad Idi

Abstract:

The biosynthesis and properties of polyhydroxyalkanoate (PHA) are determined by the bacterial strain and the culture conditions. Hence, this study elucidates the structure and properties of PHA produced by a newly isolated strain of the photosynthetic bacterium Rhodobacter sphaeroides ADZ101 grown under optimized culture conditions. The properties of the accumulated PHA were determined via FTIR, NMR, TGA, and GC-MS analyses. The results showed that acetate and ammonium chloride gave the highest PHA accumulation at a ratio of 32.5 mM at neutral pH. The structural analyses showed that the polymer comprises both short- and medium-chain-length monomers (C5, C13, C14, and C18), as well as novel PHA monomers. The thermal analysis revealed that the maximum decomposition temperatures occurred at 395°C and 454°C, indicating two major decomposition reactions. Thus, this bacterial strain, the optimized culture conditions, and the abundance of novel monomers enhanced the thermostability of the accumulated PHA.

Keywords: bioplastic, polyhydroxyalkanoates, Rhodobacter sphaeroides ADZ101, thermostable PHA

Procedia PDF Downloads 129
1984 Fault Detection of Pipeline in Water Distribution Network System

Authors: Shin Je Lee, Go Bong Choi, Jeong Cheol Seo, Jong Min Lee, Gibaek Lee

Abstract:

Water pipe networks are installed underground, and once in place, it is difficult to recognize the state of the pipes when a leak or burst happens. Accordingly, post-fault management is often delayed after the fault occurs. Therefore, a systematic fault management system for the water pipe network is required to prevent accidents and minimize losses. In this work, we develop an online fault detection system for a water pipe network using pipe data such as flow rate and pressure. A transient model describing water flow in pipelines is presented and simulated using MATLAB. Fault situations such as a leak or burst can also be simulated, and the flow rate and pressure data at the time of the fault are collected. Faults are detected using the fast Fourier transform and the discrete wavelet transform, and the two methods are compared to find which shows the better fault detection performance.
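
The sketch below illustrates the two transforms being compared, applied to a simulated pressure trace with a step drop standing in for a leak; the signal model, the wavelet choice (db4), and the coarse index mapping are assumptions for the example, not the authors' simulation.

```python
import numpy as np
import pywt

fs = 100.0                                        # sampling frequency, Hz
t = np.arange(0, 60, 1 / fs)
pressure = 5.0 + 0.05 * np.random.randn(t.size)   # nominal pressure signal
pressure[3000:] -= 0.4                            # step drop standing in for a leak

# Fast Fourier transform: inspect how the low-frequency energy shifts
spectrum = np.abs(np.fft.rfft(pressure - pressure.mean()))
freqs = np.fft.rfftfreq(pressure.size, d=1 / fs)
low_freq_energy = spectrum[freqs < 1.0].sum()

# Discrete wavelet transform: finest detail coefficients localize the abrupt change
coeffs = pywt.wavedec(pressure, "db4", level=4)
detail1 = coeffs[-1]                              # level-1 detail coefficients
fault_sample = np.argmax(np.abs(detail1)) * 2     # approximate map back to sample index
print("low-frequency energy:", low_freq_energy)
print("suspected fault near t =", fault_sample / fs, "s")
```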

Keywords: fault detection, water pipeline model, fast Fourier transform, discrete wavelet transform

Procedia PDF Downloads 497
1983 Preparation and Characterization of Titania-Coated Glass Fibrous Filters Using Aqueous Peroxotitanium Acid Solution

Authors: Ueda Honoka, Yasuo Hasegawa, Fumihiro Nishimura, Jae-Ho Kim, Susumu Yonezawa

Abstract:

An aqueous peroxotitanium acid solution prepared from TiO₂ fluorinated by F₂ gas was used for TiO₂ coating of glass fibrous filters in this study. The coating of TiO₂ onto the surface of the glass fibers was carried out at 120℃ for 15 min to 24 h with the aqueous peroxotitanium acid solution using a hydrothermal synthesis autoclave reactor. The morphology of the TiO₂ coating layer was largely dependent on the reaction time, as shown by scanning electron microscopy and energy-dispersive X-ray spectroscopy. As the reaction time increased, the TiO₂ layer on the glass expanded uniformly. Moreover, surface fluorination of the glass fibers can promote the formation of the TiO₂ layer on the surface. The photocatalytic activity of the prepared titania-coated glass fibrous filters was investigated by both the degradation of methylene blue (MB) and the decomposition of gaseous acetaldehyde. The MB decomposition ratio with the fluorinated samples was about 95% after 30 min of UV irradiation, much higher than that with the untreated samples (70%). The decomposition ratio of gaseous acetaldehyde with the fluorinated samples (50%) was also higher than that with the untreated samples (30%). Consequently, the photocatalytic activity is enhanced by surface fluorination.

Keywords: aqueous peroxotitanium acid solution, titania-coated glass fibrous filters, photocatalytic activity, surface fluorination

Procedia PDF Downloads 73
1982 Empirical Mode Decomposition Based Denoising by Customized Thresholding

Authors: Wahiba Mohguen, Raïs El’hadi Bekka

Abstract:

This paper presents a denoising method called EMD-Custom that is based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). Then, all the noisy IMFs are thresholded by applying the presented thresholding function to suppress noise and to improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising methods using soft and hard thresholding. The results show the superior performance of the proposed EMD-Custom denoising over the traditional approaches. The performances were evaluated in terms of SNR in dB and mean square error (MSE).
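
A minimal sketch of EMD-based thresholding denoising in the spirit of the method is shown below, using the PyEMD package; the soft-threshold rule applied to each IMF is a standard universal threshold, not the customized thresholding function proposed in the paper.

```python
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
noisy = clean + 0.3 * np.random.randn(t.size)

imfs = EMD().emd(noisy, t)                        # adaptive decomposition into IMFs

denoised = np.zeros_like(noisy)
for imf in imfs:
    sigma = np.median(np.abs(imf)) / 0.6745       # robust per-IMF noise estimate
    thr = sigma * np.sqrt(2 * np.log(imf.size))   # universal threshold
    denoised += np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)   # soft thresholding

snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - denoised) ** 2))
print("output SNR (dB):", round(snr, 2))
```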

Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft thresholding

Procedia PDF Downloads 292
1981 Solutions of Fractional Reaction-Diffusion Equations Used to Model the Growth and Spreading of Biological Species

Authors: Kamel Al-Khaled

Abstract:

Reaction-diffusion equations are commonly used in population biology to model the spread of biological species. In this paper, we propose a fractional reaction-diffusion equation, where the classical second-derivative diffusion term is replaced by a fractional derivative of order less than two. Based on the symbolic computation system Mathematica, the Adomian decomposition method, developed for fractional differential equations, is directly extended to derive explicit and numerical solutions of space-fractional reaction-diffusion equations. The fractional derivative is described in the Caputo sense. Finally, the recent appearance of fractional reaction-diffusion equations as models in fields such as cell biology, chemistry, physics, and finance makes it necessary to apply the results reported here to some numerical examples.

Keywords: fractional partial differential equations, reaction-diffusion equations, Adomian decomposition, biological species

Procedia PDF Downloads 358
1980 Particle Swarm Optimization Based Method for Minimum Initial Marking in Labeled Petri Nets

Authors: Hichem Kmimech, Achref Jabeur Telmoudi, Lotfi Nabli

Abstract:

The estimation of the minimum initial marking (MIM) is a crucial problem in labeled Petri nets. In the case of multiple choices, the search for the initial marking leads to a problem of optimizing the minimum allocation of resources under two constraints. The first concerns the firing sequence, which must be legal on the initial marking with respect to the firing vector. The second deals with the total number of tokens, which must be minimal. In this article, the MIM problem is solved by the particle swarm optimization (PSO) metaheuristic. The proposed approach exploits the advantages of PSO to satisfy the two previous constraints and find all possible combinations of the minimum initial marking with the best computing time. This method, more efficient than conventional ones, has an excellent impact on the resolution of the MIM problem. We show, through a set of definitions, lemmas, and examples, the effectiveness of our approach.

Keywords: marking, production system, labeled Petri nets, particle swarm optimization

Procedia PDF Downloads 163
1979 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan, V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to give higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a sparse autoencoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted by identifying audio data segments that are locally coherent with Gabor or gammatone seed atoms, found by matching pursuit, and are formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from the Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors; this display is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the best denoising basis vectors.
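
The following minimal sketch shows the matching-pursuit step over a Gabor-style dictionary that underlies the envelope-sample extraction described above; the dictionary, the signal, and the number of iterations are toy choices, and the sparse autoencoder training is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256
n = np.arange(N)

# Build a small dictionary of unit-norm Gabor-like atoms (windowed sinusoids)
atoms = []
for f in np.linspace(0.01, 0.4, 20):              # normalized frequencies
    for c in range(0, N, 32):                     # atom centers
        g = np.exp(-0.5 * ((n - c) / 16.0) ** 2) * np.cos(2 * np.pi * f * n)
        atoms.append(g / np.linalg.norm(g))
D = np.array(atoms)                               # dictionary, one atom per row

signal = D[17] * 2.0 + D[90] * 1.2 + 0.05 * rng.standard_normal(N)

residual, decomposition = signal.copy(), []
for _ in range(5):                                # greedy matching-pursuit iterations
    corr = D @ residual
    k = np.argmax(np.abs(corr))                   # most coherent atom
    decomposition.append((k, corr[k]))
    residual = residual - corr[k] * D[k]

print("selected atoms:", decomposition)
print("residual energy fraction:", np.sum(residual ** 2) / np.sum(signal ** 2))
```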

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 236
1978 The Time-Frequency Domain Reflection Method for Aircraft Cable Defects Localization

Authors: Reza Rezaeipour Honarmandzad

Abstract:

This paper introduces an aircraft cable fault detection and location method based on time-frequency domain reflectometry (TFDR), with the goal of recognizing intermittent faults adequately and coping with serial and after-connector faults that are hard to distinguish with time-domain reflectometry. In this method, the correlation function of the reflected and reference signals is used to detect and locate the aircraft cable fault according to the characteristics of the reflected and reference signals in the time-frequency domain, so the hit rate of detecting and locating intermittent faults can be improved effectively. In practice, the reflected signal is corrupted by noise and false alarms occur frequently, so a threshold denoising technique based on wavelet decomposition is used to reduce the noise interference and lower the false alarm rate. Then the time-frequency cross-correlation function of the reference signal and the reflected signal, based on the Wigner-Ville distribution, is computed in order to locate the fault position. Finally, LabVIEW is used to implement the operation and control interface, whose primary function is to connect to and control MATLAB and LabSQL. Using the strong computing capability and extensive function library of MATLAB, the signal processing is easily realized, and LabVIEW makes the framework more dependable and easy to upgrade.
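
A minimal sketch of the underlying idea, locating a fault from the delay between a reference pulse and its reflection, is given below; it uses a plain time-domain cross-correlation rather than the Wigner-Ville time-frequency correlation, and the cable velocity, pulse shape, and sampling rate are illustrative assumptions.

```python
import numpy as np

fs = 1e9                                          # 1 GS/s sampling
t = np.arange(0, 2e-6, 1 / fs)
ref = np.exp(-((t - 0.1e-6) ** 2) / (2 * (10e-9) ** 2)) * np.cos(2 * np.pi * 50e6 * t)

v = 2e8                                           # assumed propagation velocity in the cable, m/s
fault_distance = 12.0                             # metres (ground truth for the demo)
delay = 2 * fault_distance / v                    # round-trip delay
reflected = 0.4 * np.roll(ref, int(delay * fs)) + 0.02 * np.random.randn(t.size)

xcorr = np.correlate(reflected, ref, mode="full")
lag = np.argmax(xcorr) - (ref.size - 1)           # lag in samples at the correlation peak
print("estimated fault distance:", v * (lag / fs) / 2, "m")
```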

Keywords: aircraft cable, fault location, TFDR, LabVIEW

Procedia PDF Downloads 464
1977 A Novel Geometrical Approach toward the Mechanical Properties of Particle Reinforced Composites

Authors: Hamed Khezrzadeh

Abstract:

Many investigations of the micromechanical structure of materials indicate that fractal patterns exist at the micro scale in some of the main construction and industrial materials. A recently presented micro-fractal theory brings together the well-known periodic homogenization and fractal geometry to construct an appropriate model for determining the mechanical properties of particle-reinforced composite materials. The proposed multi-step homogenization scheme considers the mechanical properties of the different constituent phases in the composite together with the interaction between these phases through a step-by-step homogenization technique. The effect of fiber grading on the mechanical properties can also be studied with this method. The theoretical outcomes are compared to experimental data for different types of particle-reinforced composites, and very good agreement with the experimental data is observed.

Keywords: fractal geometry, homogenization, micromechanics, particulate composites

Procedia PDF Downloads 275
1976 Denoising Convolutional Neural Network Assisted Electrocardiogram Signal Watermarking for Secure Transmission in E-Healthcare Applications

Authors: Jyoti Rani, Ashima Anand, Shivendra Shivani

Abstract:

In recent years, physiological signals obtained in telemedicine have been stored independently of patient information, and people have increasingly turned to mobile devices for information on health-related topics. Major authentication and security issues may arise from this storage, degrading the reliability of diagnostics. This study introduces a reversible watermarking approach that ensures security by utilizing the electrocardiogram (ECG) signal as a carrier for embedding patient information. In the proposed work, Pan-Tompkins++ is employed to convert the 1D ECG signal into a 2D signal. The frequency subbands of the signal are extracted using the redundant discrete wavelet transform (RDWT), and then one of the subbands is subjected to multiresolution singular value decomposition (MSVD) for masking. Finally, the encrypted watermark is embedded within the signal. The experimental results show that the watermarked signal is indistinguishable from the original signal, ensuring the preservation of all diagnostic information. In addition, a denoising convolutional neural network (DnCNN) is used to denoise the retrieved watermark for improved accuracy. The proposed ECG signal-based watermarking method is supported by experimental results and evaluations of its effectiveness. The results of the robustness tests demonstrate that the watermark withstands the most prevalent watermarking attacks.
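
The sketch below illustrates subband-SVD watermark embedding in the spirit of the scheme, with a standard 2D DWT standing in for the RDWT and a plain SVD standing in for MSVD; the host array, the watermark, and the embedding strength are toy assumptions, and the Pan-Tompkins++ conversion, encryption, and DnCNN stages are omitted.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
host = rng.random((128, 128))                     # stand-in for the 2D ECG representation
watermark = rng.random((64, 64))                  # patient-information image (toy)

LL, (LH, HL, HH) = pywt.dwt2(host, "haar")        # frequency subbands
U, S, Vt = np.linalg.svd(LL, full_matrices=False)
Uw, Sw, Vwt = np.linalg.svd(watermark, full_matrices=False)

alpha = 0.05                                      # embedding strength
S_marked = S + alpha * Sw                         # embed watermark singular values
LL_marked = U @ np.diag(S_marked) @ Vt
marked = pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

# Extraction (non-blind): recover the watermark singular values from the marked signal
LL2, _ = pywt.dwt2(marked, "haar")
S2 = np.linalg.svd(LL2, compute_uv=False)
Sw_rec = (S2 - S) / alpha
print("max singular-value recovery error:", np.abs(Sw_rec - Sw).max())
```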

Keywords: ECG, VMD, watermarking, PanTompkins++, RDWT, DnCNN, MSVD, chaotic encryption, attacks

Procedia PDF Downloads 79
1975 Synthesis and Characterization of CaZrTi2O7 from Tartrate Precursor Employing Microwave Heating Technique

Authors: B. M. Patil, S. R. Dharwadkar

Abstract:

Zirconolite (CaZrTi2O7) is one of the three major phases in the synthetic ceramic 'SYNROC', which is used for immobilization of high-level nuclear waste, and it also exhibits photocatalytic and photophysical properties. In the present work, nanocrystalline CaZrTi2O7 was synthesized from a calcium zirconyl titanate tartrate (CZTT) precursor employing two different heating techniques: conventional heating (muffle furnace) and microwave heating (microwave oven). Thermal decomposition of the CZTT precursor in air yielded nanocrystalline CaZrTi2O7 powder as the end product. The products obtained by annealing the CZTT precursor using both heating methods were characterized using simultaneous TG-DTA, FTIR, XRD, SEM, TEM, NTA, and thermodilatometric study. The physical characteristics, such as crystallinity, morphology, and particle size, of the products obtained by heating the CZTT precursor at different temperatures in the muffle furnace and the microwave oven were found to be significantly different. The microwave heating technique considerably lowered the synthesis temperature of CaZrTi2O7, and its influence was more pronounced than that of muffle furnace heating. The details of the synthesis of CaZrTi2O7 from the CZTT precursor are discussed.

Keywords: CZTT, CaZrTi2O7, microwave, SYNROC, zirconolite

Procedia PDF Downloads 152
1974 Experimental Study on Capturing of Magnetic Nanoparticles Transported in an Implant Assisted Cylindrical Tube under Magnetic Field

Authors: Anurag Gaur Nidhi

Abstract:

Targeted drug delivery is a method of delivering medication to a patient in a manner that increases the concentration of the medication in some parts of the body relative to others. It seeks to concentrate the medication in the tissues of interest while reducing its relative concentration in the remaining tissues, which improves the efficacy of the drug while reducing side effects. In the present work, we investigate the effect of magnetic field, flow rate, and particle concentration on the capture of magnetic particles transported in a stent-implanted fluidic channel. Iron oxide (Fe3O4) magnetic nanoparticles were synthesized via the co-precipitation method. The synthesized Fe3O4 nanoparticles were added to de-ionized (DI) water to prepare a fluid with suspended Fe3O4 magnetic particles. This fluid is transported in a cylindrical tube of 8 mm diameter with the help of a peristaltic pump at different flow rates (25-40 ml/min). A ferromagnetic coil of SS 430 has been implanted inside the cylindrical tube to enhance the capture of magnetic nanoparticles under a magnetic field. The capture of magnetic nanoparticles was observed at different magnetic fields, flow rates, and particle concentrations. It is observed that the capture efficiency increases from 47 to 67% as the magnetic field increases from 2 to 5 kG, at a particle concentration of 0.6 mg/ml and a flow rate of 30 ml/min. However, the capture efficiency decreases from 65 to 44% as the flow rate increases from 25 to 40 ml/min. Furthermore, the capture efficiency increases from 51 to 67% as the particle concentration increases from 0.3 to 0.6 mg/ml.

Keywords: capture efficiency, implant-assisted magnetic drug targeting (IA-MDT), magnetic nanoparticles, in-vitro study

Procedia PDF Downloads 288
1973 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with lung cancer (LC) have an average life expectancy of five years provided early diagnosis, detection, and prediction, which reduces the number of treatment options carrying the risk of invasive surgery and increases the survival rate. Computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) are common modalities for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and histogram equalization (HE) used for image enhancement gives the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region property measurements (area, perimeter, diameter, centroid, and eccentricity) are computed for the segmented tumor image, while texture is characterized by gray-level co-occurrence matrix (GLCM) functions; this feature extraction provides the region of interest (ROI) given as input to the classifier. Two levels of classification are employed: k-nearest neighbor (KNN) is used to determine whether the patient's condition is normal or abnormal, while an artificial neural network (ANN) is used to identify the cancer stage. The discrete wavelet transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technique shows encouraging results for real-time information and online detection in future research.
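
A minimal sketch of the texture-feature stage is shown below: GLCM statistics computed on synthetic ROIs and fed to a KNN classifier; the ROIs, the chosen property set, and the train/test split are assumptions, and the DWT features and ANN staging step are not reproduced.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)

def glcm_features(roi):
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Toy "normal" vs "abnormal" ROIs with different gray-level ranges as a texture proxy
labels = rng.integers(0, 2, 60)
rois = [rng.integers(0, 256 if lbl == 0 else 128, (32, 32), dtype=np.uint8) for lbl in labels]
X = np.array([glcm_features(roi) for roi in rois])

clf = KNeighborsClassifier(n_neighbors=3).fit(X[:40], labels[:40])
print("held-out accuracy:", clf.score(X[40:], labels[40:]))
```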

Keywords: artificial neural networks, ANN, discrete wavelet transform, DWT, gray-level co-occurrence matrix, GLCM, k-nearest neighbor, KNN, region of interest, ROI

Procedia PDF Downloads 131
1972 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area

Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya

Abstract:

In this paper, we propose an optimized brain-computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG acquisition module based on a non-invasive headset with 14 electrodes; (ii) the preprocessing module to remove noise and artifacts, using the common average reference method; (iii) the feature extraction module, using the wavelet packet transform (WPT); (iv) the classification module based on a one-hidden-layer artificial neural network. The present study consists of comparing the recognition accuracy of 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the effect of selecting the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction of the dataset size, to approximately 19% of the total dataset, with an accuracy rate of 67.5%. This reduction appears particularly important for the design of a low-cost and simple-to-use BCI trained for several words.
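
The sketch below shows wavelet-packet feature extraction for a single EEG channel, keeping only a few middle subbands as described; the signal is random, and the wavelet, decomposition level, and subband selection are illustrative assumptions rather than the study's exact settings.

```python
import numpy as np
import pywt

fs = 128
eeg = np.random.randn(fs * 30)                    # 30 s of one EEG channel (toy data)

wp = pywt.WaveletPacket(data=eeg, wavelet="db4", mode="symmetric", maxlevel=8)
nodes = [node.path for node in wp.get_level(8, order="freq")]      # 256 subbands
middle = nodes[len(nodes) // 2 - 3: len(nodes) // 2 + 3]           # 6 "middle" subbands

# Subband energies as the feature vector for this channel
features = np.array([np.sum(np.asarray(wp[p].data) ** 2) for p in middle])
print(dict(zip(middle, np.round(features, 3))))
```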

Keywords: brain-computer interface, speech recognition, artificial neural network, electroencephalography, EEG, Wernicke area

Procedia PDF Downloads 256
1971 Analysis of the Significance of Multimedia Channels Using Sparse PCA and Regularized SVD

Authors: Kourosh Modarresi

Abstract:

The abundance of media channels and devices has given users a variety of options to extract, discover, and explore information in the digital world. Since there is often a long and complicated path that a typical user may take before performing any significant action (such as purchasing goods and services), it is critical to know how each node (media channel) in the user's path has contributed to the final action. In this work, the significance of each media channel is computed using statistical analysis and machine learning techniques. More specifically, regularized singular value decomposition and sparse principal component analysis are used to compute the significance of each channel toward the final action. The results of this work are a considerable improvement over existing approaches.
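
As a minimal illustration of the two techniques named above, the sketch below applies scikit-learn's TruncatedSVD (as a stand-in for regularized SVD) and SparsePCA to a synthetic user-path matrix whose columns are media channels; the channel names, the data, and the simple loading-based significance read-out are assumptions, not the author's attribution procedure.

```python
import numpy as np
from sklearn.decomposition import SparsePCA, TruncatedSVD

rng = np.random.default_rng(6)
channels = ["email", "search", "display", "social", "video"]
X = rng.poisson(1.0, size=(500, len(channels))).astype(float)   # touches per user path

# Truncated SVD keeps only the leading structure of the path matrix
svd = TruncatedSVD(n_components=2, random_state=0).fit(X)

# Sparse PCA drives many channel loadings to exactly zero, highlighting the
# channels that carry most of the variance across paths
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

for name, comp in zip(("TruncatedSVD", "SparsePCA"), (svd.components_, spca.components_)):
    significance = np.abs(comp).sum(axis=0)       # crude per-channel importance score
    print(name, dict(zip(channels, np.round(significance, 3))))
```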

Keywords: multimedia attribution, sparse principal component, regularization, singular value decomposition, feature significance, machine learning, linear systems, variable shrinkage

Procedia PDF Downloads 292
1970 Effect of Impact Angle on Erosive Abrasive Wear of Ductile and Brittle Materials

Authors: Ergin Kosa, Ali Göksenli

Abstract:

Erosion and abrasion are wear mechanisms that reduce the lifetime of machine elements such as valves, pumps, and pipe systems. Both wear mechanisms act at the same time, causing a “synergy” effect that leads to rapid damage of the surface. Several parameters affect the erosive-abrasive wear rate. In this study, the effect of particle impact angle on the wear rate and wear mechanism of ductile and brittle materials was investigated. A new slurry pot was designed for the experimental investigation. Silica sand with a particle size between 200 and 500 µm was used as the abrasive. All tests were carried out in a sand-water mixture of 20% concentration for four hours, with a particle impact velocity of 4.76 m/s. Steel St 37 with a Brinell hardness number (BHN) of 245 was used as the ductile material, and quenched St 37 with 510 BHN as the brittle material. After the wear tests, the morphology of the eroded surfaces was investigated by optical microscopy and scanning electron microscopy for a better understanding of the wear mechanisms acting at different impact angles. The results indicated that the wear rate of the ductile material was higher than that of the brittle material. Maximum wear was observed for the ductile material at a particle impact angle of 30°. On the contrary, the wear rate of the brittle material increased with impact angle and reached its maximum value at 45°. A high number of craters was detected on the ductile material surface, along with plastic deformation zones, which are typical failure modes for ductile materials; the craters formed by the particles were deeper than those on the worn surface of the brittle material. The number of craters decreased on the brittle material surface, and microcracks around the craters were detected, which are typical failure modes of brittle materials; deformation wear was the dominant wear mechanism on the brittle material. It is concluded that the wear rate cannot be directly related to the impact angle of the hard particle because of the different responses of ductile and brittle materials.

Keywords: erosive wear, particle impact angle, silica sand, wear rate, ductile-brittle material

Procedia PDF Downloads 370
1969 Model-Based Control for Piezoelectric-Actuated Systems Using Inverse Prandtl-Ishlinskii Model and Particle Swarm Optimization

Authors: Jin-Wei Liang, Hung-Yi Chen, Lung Lin

Abstract:

In this paper, a feedforward controller is designed to eliminate the nonlinear hysteresis behavior of a piezoelectric stack actuator (PSA) driven system. The control design is based on an inverse Prandtl-Ishlinskii (P-I) hysteresis model identified using the particle swarm optimization (PSO) technique. Based on the identified P-I model, both the inverse P-I hysteresis model and the feedforward controller can be determined. Experimental results obtained using the inverse P-I feedforward control are compared with their counterparts using hysteresis estimates obtained from an identified Bouc-Wen model, and the effectiveness of the proposed feedforward control scheme is demonstrated. To improve control performance, feedback compensation using a traditional PID scheme is integrated with the feedforward controller.

Keywords: Bouc-Wen hysteresis model, particle swarm optimization, Prandtl-Ishlinskii model, automation engineering

Procedia PDF Downloads 500
1968 Eco-Fashion Dyeing of Denim and Knitwear with Particle-Dyes

Authors: Adriana Duarte, Sandra Sampaio, Catia Ferreira, Jaime I. N. R. Gomes

Abstract:

With the fashion for faded, worn garments, the textile industry has moved from indigo and pigments to dyes that are fixed by cationization, with products that can be toxic; the faded effect is then produced by washing down the dye with friction and/or treating with enzymes in a subsequent operation. Increasingly, garments are also treated with bleaches such as hypochlorite and permanganate, both toxic substances. An alternative process is presented in this work for both garment and jet dyeing, without the use of pre-cationization, through the alternative use of “particle-dyes”. These are hybrid products made up of an inorganic particle and an organic dye. With standard soluble dyes, it is not possible to avoid diffusion into the interior of the fiber without previous cationization; only in this way can diffusion be avoided, keeping the centre of the fibres undyed so as to produce the faded effect by removing the surface dye and showing the white fiber beneath. With “particle-dyes”, previous cationization is avoided: applied at low temperatures, the dye does not diffuse completely into the interior of the fiber, since it is a particle and not a soluble dye, and is thus able to give the faded effect. Bleaching can still be used but can also be avoided, since friction and enzymes can be used just as for other dyes. This fashion brought about new ways of applying reactive dyes through previous cationization of cotton, lowering the salt and temperatures that reactive dyes usually need for reacting, with the side benefit of a more environmentally friendly process. However, cationization can be problematic outside garment dyeing, for example in jet dyeing, where it is difficult to obtain level dyeings; it should also be applied by a pad-fix or pad-batch process due to the low affinity of the pre-cationization products, which makes it a more expensive process and adds the risk of unlevelness in processes such as jet dyeing. With particle-dyes, since no pre-cationization is necessary, they can be applied in jet dyeing. The excess dye is fixed by a fixing agent, which fixes the insoluble dye onto the surface of the fibers. With the fixing agent, only 1-3 rinses in water at room temperature are necessary, saving water and improving the washfastness.

Keywords: denim, garment dyeing, worn look, eco-fashion

Procedia PDF Downloads 522
1967 Correlation to Predict the Effect of Particle Type on Axial Voidage Profile in Circulating Fluidized Beds

Authors: M. S. Khurram, S. A. Memon, S. Khan

Abstract:

Bed voidage behavior across different flow regimes for Geldart A, B, and D particles (fluid catalytic cracking (FCC) catalyst, particle A, and glass beads) with diameters of 57-872 μm, apparent densities of 1470-3092 kg/m3, and bulk densities of 890-1773 kg/m3 was investigated in a plexiglass gas-solid circulating fluidized bed of 0.1 m i.d. and 2.56 m height. The effects of gas velocity, particle properties, and static bed height on bed voidage were analyzed. The axial voidage profile showed a typical trend along the riser: a dense bed at the lower part, followed by a transition in the splash zone and a lean phase in the freeboard. Bed expansion and dense bed voidage increased with gas velocity, as usual. From the experimental results, a generalized model relationship based on the inverse fluidization number was presented for dense bed voidage from the bubbling to the fast fluidization regime.

Keywords: axial voidage, circulating fluidized bed, splash zone, static bed

Procedia PDF Downloads 272
1966 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar, and a nucleic base (A, T, C, or G). DNA barcodes provide good sources of the information needed to classify living species, an intuition that has been confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes, and this task has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using multiple sequence alignment, which is known to be NP-complete; to make this type of analysis feasible, heuristics such as progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs short regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable; this approach avoids the complex problem of form and structure in different classes of organisms. The method is evaluated on empirical data, and its classification performance is compared with other methods. Our system consists of three phases. The first, transformation, is composed of three steps: electron-ion interaction pseudopotential (EIIP) coding of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second, approximation, is empowered by the use of multi-library wavelet neural networks (MLWNN). The third, classification of the DNA barcodes, is realized by applying a hierarchical classification algorithm.
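
A minimal sketch of the transformation phase is given below: EIIP coding of a barcode sequence followed by an FFT power spectrum; the sequence is a toy example, and the MLWNN approximation and hierarchical classification phases are not shown.

```python
import numpy as np

# Electron-ion interaction pseudopotential values for the four bases
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def barcode_power_spectrum(seq):
    x = np.array([EIIP[b] for b in seq.upper()])  # numeric coding of the barcode
    x = x - x.mean()                              # remove the DC component
    X = np.fft.rfft(x)
    return np.abs(X) ** 2                         # power spectrum used as the feature vector

barcode = "ACTTGCTTAGCTTGATCGGATCCAATTGCAGGATCCGTTAAGCT"   # toy barcode fragment
ps = barcode_power_spectrum(barcode)
print("feature length:", ps.size, "peak bin:", int(np.argmax(ps[1:]) + 1))
```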

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 301
1965 Synthesis and Characterization of LiCoO2 Cathode Material by Sol-Gel Method

Authors: Nur Azilina Abdul Aziz, Tuti Katrina Abdullah, Ahmad Azmin Mohamad

Abstract:

Lithium-transition metal compounds such as LiCoO2, LiMn2O4, LiFePO4, and LiNiO2 have been used as cathode materials in high-performance lithium-ion rechargeable batteries. Among these cathode materials, LiCoO2 has the potential to be widely used in lithium-ion batteries because of its layered crystalline structure, good capacity, high cell voltage, high specific energy density, high power rate, low self-discharge, and excellent cycle life. This cathode material has been widely used in commercial lithium-ion batteries due to its low irreversible capacity loss and good cycling performance. However, several problems interfere with the production of material with good electrochemical properties, including the crystallinity, the average particle size, and the particle size distribution. In recent years, the synthesis of nanoparticles has been intensively investigated. Powders prepared by the traditional solid-state reaction have a large particle size and broad size distribution, whereas solution methods can reduce the particle size to the nanometer range and control the particle size distribution. In this study, LiCoO2 was synthesized using the sol-gel preparation method, in which lithium acetate and cobalt acetate were used as reactants. Stoichiometric amounts of the reactants were dissolved in deionized water, and the solutions were stirred for 30 hours using a magnetic stirrer, followed by heating at 80°C under vigorous stirring until a viscous gel was formed. The as-formed gel was calcined at 700°C for 7 h in a room atmosphere. The structure and morphology of the LiCoO2 were characterized using X-ray diffraction and scanning electron microscopy. The diffraction pattern of the material can be indexed based on the α-NaFeO2 structure, and the clear splitting of the hexagonal doublets (006)/(102) and (108)/(110) in the pattern indicates that the material is formed in a well-ordered hexagonal structure. No impurity phase can be seen in this range, probably due to the homogeneous mixing of the cations in the precursor. Furthermore, the SEM micrograph of the LiCoO2 shows an almost uniform particle size distribution, with a particle size between 0.3 and 0.5 microns. In conclusion, LiCoO2 powder was successfully synthesized using the sol-gel method; it shows a hexagonal crystal structure, and the prepared sample clearly indicates the pure LiCoO2 phase. The morphology of the sample shows that the particle size and size distribution are almost uniform.

Keywords: cathode material, LiCoO2, lithium-ion rechargeable batteries, Sol-Gel method

Procedia PDF Downloads 358
1964 Variable Tree Structure QR Decomposition-M Algorithm (QRD-M) in Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing (MIMO-OFDM) Systems

Authors: Jae-Hyun Ro, Jong-Kwang Kim, Chang-Hee Kang, Hyoung-Kyu Song

Abstract:

In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, the QR decomposition-M algorithm (QRD-M) offers suboptimal error performance. However, the QRD-M still has high complexity due to the many calculations at each layer of the tree structure. To reduce the complexity of the QRD-M, the proposed QRD-M modifies the existing tree structure by eliminating unnecessary candidates at almost all layers. Candidates whose accumulated squared Euclidean distances are larger than a calculated threshold are discarded. The simulation results show that the proposed QRD-M has the same bit error rate (BER) performance as the conventional QRD-M with lower complexity.
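
The sketch below is a minimal QRD-M detector for a toy 2x2 MIMO system with QPSK, including threshold-based pruning of candidates by accumulated squared Euclidean distance; the constellation, the number of surviving candidates M, and the threshold rule are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # QPSK alphabet

Nt = 2
H = (rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))) / np.sqrt(2)
x = const[rng.integers(0, 4, Nt)]
noise = 0.05 * (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt))
y = H @ x + noise

Q, R = np.linalg.qr(H)
z = Q.conj().T @ y                                # rotated receive vector: z = R x + n'

M = 4                                             # surviving candidates per layer
candidates = [([], 0.0)]                          # (symbols for upper layers, accumulated metric)
for layer in range(Nt - 1, -1, -1):               # detect from the bottom row of R upward
    expanded = []
    for syms, metric in candidates:               # syms = [x_{layer+1}, ..., x_{Nt-1}]
        interference = sum(R[layer, layer + 1 + k] * syms[k] for k in range(len(syms)))
        for s in const:
            m = metric + abs(z[layer] - R[layer, layer] * s - interference) ** 2
            expanded.append(([s] + syms, m))
    expanded.sort(key=lambda c: c[1])
    threshold = 4.0 * expanded[0][1] + 1e-9       # prune branches far from the current best
    candidates = [c for c in expanded if c[1] <= threshold][:M]

print("transmitted:", x)
print("detected:   ", np.array(candidates[0][0]))
```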

Keywords: complexity, MIMO-OFDM, QRD-M, squared Euclidean distance

Procedia PDF Downloads 319
1963 The Utilization of Particle Swarm Optimization Method to Solve Nurse Scheduling Problem

Authors: Norhayati Mohd Rasip, Abd. Samad Hasan Basari, Nuzulha Khilwani Ibrahim, Burairah Hussin

Abstract:

The allocation of working schedules, especially in a shift environment, makes it hard to ensure fairness among staff. In the case of nurse scheduling, setting up the working timetable is time-consuming and complicated, since it must consider many factors, including rules, regulations, and human factors. The scenario is further complicated because most nurses are women, who have personal constraints and maternity leave to account for. An undesirable schedule can affect nurse productivity and social life, and the resulting absenteeism can significantly affect patients' lives as well. This paper aims to enhance the scheduling process by utilizing particle swarm optimization to solve the nurse scheduling problem. The results show that the generated multiple initial schedules fulfil the requirements and produce the lowest cost of constraint violation.

Keywords: nurse scheduling, particle swarm optimisation, nurse rostering, hard and soft constraint

Procedia PDF Downloads 352
1962 Improving Coverage in Wireless Sensor Networks Using Particle Swarm Optimization Algorithm

Authors: Ehsan Abdolzadeh, Sanaz Nouri, Siamak Khalaj

Abstract:

Today, WSNs have many applications in different fields, such as the environment, military operations, discovery, and monitoring operations. Coverage size and energy consumption are important challenges that these networks need to face. This paper addresses the coverage problem with a k-coverage requirement and minimum energy consumption. In order to minimize energy consumption, visual sensor networks are used that observe and process only those targets located in their view direction; as a result, sensor rotations are reduced and, subsequently, energy consumption is minimized. To solve the coverage problem, particle swarm optimization is used; the coverage optimization is able to ensure the coverage requirement together with minimizing sensor rotations while meeting the problem requirement of k≤14. Energy consumption thus decreases, which can subsequently extend the sensors' lifetime.

Keywords: k-coverage, particle swarm optimization algorithm, wireless sensor networks, visual sensor networks

Procedia PDF Downloads 99
1961 A Comparison of Sequential Quadratic Programming, Genetic Algorithm, Simulated Annealing, Particle Swarm Optimization for the Design and Optimization of a Beam Column

Authors: Nima Khosravi

Abstract:

This paper describes an integrated optimization study with concurrent use of sequential quadratic programming (SQP), a genetic algorithm, simulated annealing, and particle swarm optimization for the design and optimization of a beam column. In this research, the four optimization methods are compared. It is found that all the methods meet the required constraints, and the lowest value of the objective function is achieved by SQP, which was also the fastest optimizer. SQP is a gradient-based optimizer, hence its results are usually the same after every run; the only thing that affects the results is the initial conditions given. The initial conditions given in the various test runs differed widely; hence, the solutions converged at different points. The rest of the methods are heuristic methods, which provide different values for different runs even if every parameter is kept constant.
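
As a minimal illustration of the SQP side of the comparison, the sketch below uses SciPy's SLSQP to minimize a stand-in cross-section objective under one inequality constraint and bounds; the objective, the constraint, and the numbers are placeholders, not the paper's beam-column design equations.

```python
import numpy as np
from scipy.optimize import minimize

def weight(x):                          # objective: cross-section "weight" ~ b * h
    b, h = x
    return b * h

def bending_capacity(x):                # inequality constraint: section modulus >= demand
    b, h = x
    return b * h ** 2 / 6.0 - 2.0e-4    # must be >= 0 (units are illustrative)

res = minimize(weight, x0=[0.05, 0.10], method="SLSQP",
               bounds=[(0.01, 0.5), (0.01, 0.5)],
               constraints=[{"type": "ineq", "fun": bending_capacity}])
print("optimal (b, h):", res.x, "objective:", res.fun)
```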

Keywords: beam column, genetic algorithm, particle swarm optimization, sequential quadratic programming, simulated annealing

Procedia PDF Downloads 372
1960 An Efficient Encryption Scheme Using DWT and Arnold Transforms

Authors: Ali Abdrhman M. Ukasha

Abstract:

Data security is needed in data transmission, storage, and communication to ensure security. The color image is decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel pixels as an image scrambling process. Then all these channels are encrypted separately using a key image that has the same size as the original and is generated using private keys and modulo operations. XOR and modulo operations are performed between the encrypted channel images to change the image pixel values. The extracted contours of the recovered color image can be obtained with an accepted level of distortion using the Canny edge detector. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D color image and completely reconstruct it without any distortion. It is shown that the color image can be protected with a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
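
A minimal sketch of the Arnold-transform scrambling step on a single square channel, together with its exact inverse, is given below; the channel is random data, and the DWT compression, key-image generation, and XOR/modulo stages of the full scheme are not reproduced.

```python
import numpy as np

def arnold(img, iterations=1, inverse=False):
    """Arnold cat-map scrambling of a square N x N channel (or its inverse)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                if inverse:
                    nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]   # inverse map
                else:
                    nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]   # forward map
        out = nxt
    return out

red = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in red channel
scrambled = arnold(red, iterations=10)                       # pixel-location scrambling
restored = arnold(scrambled, iterations=10, inverse=True)
print("exactly restored:", np.array_equal(restored, red))
```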

Keywords: color image, wavelet transform, edge detector, Arnold transform, lossy image encryption

Procedia PDF Downloads 466