Search results for: Floor estimation algorithm
1350 An Improved Switching Median filter for Uniformly Distributed Impulse Noise Removal
Authors: Rajoo Pandey
Abstract:
The performance of an image filtering system depends on its ability to detect the presence of noisy pixels in the image. Most impulse detection schemes assume the presence of salt-and-pepper noise and do not work satisfactorily for uniformly distributed impulse noise. In this paper, a new algorithm is presented to improve the performance of the switching median filter in detecting uniformly distributed impulse noise. The performance of the proposed scheme is demonstrated by results obtained from computer simulations on various images.
Keywords: Switching median filter, impulse noise, image filtering, impulse detection.
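As a point of reference for the detect-then-replace idea the abstract builds on, a minimal switching median filter sketch is given below; the window size and threshold are illustrative assumptions, not the paper's detector for uniformly distributed impulse noise.

```python
import numpy as np

def switching_median_filter(img, window=3, threshold=40):
    """Generic switching median filter sketch (assumed parameters).

    A pixel is flagged as impulse-corrupted when it deviates from the
    median of its neighbourhood by more than `threshold`; only flagged
    pixels are replaced, so uncorrupted detail is preserved."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = img.astype(float).copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            block = padded[r:r + window, c:c + window]
            med = np.median(block)
            if abs(out[r, c] - med) > threshold:   # impulse detected
                out[r, c] = med                    # switch to the median value
    return out.astype(img.dtype)
```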
1349 Comparison of Alternative Models to Predict Lean Meat Percentage of Lamb Carcasses
Authors: Vasco A. P. Cadavez, Fernando C. Monteiro
Abstract:
The objective of this study was to develop and compare alternative prediction equations for the lean meat proportion (LMP) of lamb carcasses. Forty (40) male lambs were used: 22 of the Churra Galega Bragançana Portuguese local breed and 18 of the Suffolk breed. Lambs were slaughtered, and carcasses were weighed approximately 30 min later in order to obtain the hot carcass weight (HCW). After cooling at 4 °C for 24 h, a set of seventeen carcass measurements was recorded. The left side of each carcass was dissected into muscle, subcutaneous fat, intermuscular fat, bone, and remainder (major blood vessels, ligaments, tendons, and thick connective tissue sheets associated with muscles), and the LMP was evaluated as the dissected muscle percentage. Prediction equations for LMP were developed, and fitting quality was evaluated through the coefficient of determination of estimation (R²e) and the standard error of estimate (SEE). Model validation was performed by k-fold cross-validation, and the coefficient of determination of prediction (R²p) and the standard error of prediction (SEP) were computed. The BT2 measurement was the best single predictor and accounted for 37.8% of the LMP variation with a SEP of 2.30%. The prediction of LMP of lamb carcasses can be based on simple models, using the HCW and one fat thickness measurement as predictors.
Keywords: Bootstrap, Carcass, Lambs, Lean meat
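To illustrate how such simple predictors can be fitted and validated by k-fold cross-validation, the sketch below uses scikit-learn on synthetic stand-in data; the generated HCW, BT2 and LMP values are placeholders and not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in for the 40 carcasses: HCW (kg) and a fat thickness
# measurement (mm); LMP is generated from them plus noise purely to
# exercise the validation mechanics.
n = 40
HCW = rng.normal(12.0, 2.0, n)
BT2 = rng.normal(4.0, 1.0, n)
LMP = 65.0 - 1.8 * BT2 + 0.4 * HCW + rng.normal(0.0, 2.0, n)

X = np.column_stack([HCW, BT2])
y_hat = cross_val_predict(LinearRegression(), X, LMP,
                          cv=KFold(n_splits=5, shuffle=True, random_state=0))

ss_res = np.sum((LMP - y_hat) ** 2)
r2_p = 1.0 - ss_res / np.sum((LMP - LMP.mean()) ** 2)   # cross-validated R²
sep = np.sqrt(ss_res / n)                               # standard error of prediction
print(f"R2p = {r2_p:.3f}, SEP = {sep:.2f} %")
```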
1348 On Improving Breast Cancer Prediction Using GRNN-CP
Authors: Kefaya Qaddoum
Abstract:
The aim of this study is to predict breast cancer and to construct a supportive model that enables more reliable prediction, a factor that is fundamental for public health. We utilize general regression neural networks (GRNN) to replace point predictions with prediction intervals in order to achieve a reasonable level of confidence. The mechanism employed here uses a machine learning framework called conformal prediction (CP) to assign consistent confidence measures to predictions, which are combined with the GRNN. We apply the resulting algorithm to the problem of breast cancer diagnosis. The results show that the predictions constructed by this method are reasonable and could be useful in practice.
Keywords: Neural network, conformal prediction, cancer classification, regression.
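A minimal sketch of one standard way to pair a GRNN (in its kernel-regression form) with split-conformal prediction intervals is shown below; the bandwidth and the absolute-residual nonconformity measure are assumptions and not necessarily the exact GRNN-CP construction of the paper.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN in its kernel-regression (Nadaraya-Watson) form: the prediction
    is a Gaussian-kernel-weighted average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def conformal_interval(X_tr, y_tr, X_cal, y_cal, X_new, alpha=0.1, sigma=0.5):
    """Split-conformal wrapper: absolute residuals on a calibration set give
    a quantile that turns point predictions into intervals with roughly
    (1 - alpha) coverage."""
    resid = np.sort(np.abs(y_cal - grnn_predict(X_tr, y_tr, X_cal, sigma)))
    k = int(np.ceil((1.0 - alpha) * (len(resid) + 1))) - 1
    q = resid[min(k, len(resid) - 1)]
    pred = grnn_predict(X_tr, y_tr, X_new, sigma)
    return pred - q, pred + q
```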
1347 Arabic Character Recognition using Artificial Neural Networks and Statistical Analysis
Authors: Ahmad M. Sarhan, Omar I. Al Helalat
Abstract:
In this paper, an Arabic letter recognition system based on Artificial Neural Networks (ANNs) and statistical analysis for feature extraction is presented. The ANN is trained using the Least Mean Squares (LMS) algorithm. In the proposed system, each typed Arabic letter is represented by a matrix of binary numbers that is used as input to a simple feature extraction system, whose output, together with the input matrix, is fed to an ANN. Simulation results are provided and show that the proposed system always produces a lower Mean Squared Error (MSE) and higher success rates than the current ANN solutions.
Keywords: ANN, backpropagation, Gaussian, LMS, MSE, neuron, standard deviation, Widrow-Hoff rule.
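For reference, the Widrow-Hoff (LMS) update mentioned in the abstract has the generic single-layer form sketched below; the flattening of the binary letter matrices into row vectors and the one-hot class coding are assumptions for illustration.

```python
import numpy as np

def lms_train(X, targets, lr=0.01, epochs=50):
    """Least Mean Squares (Widrow-Hoff) training of a single linear layer.

    X: (n_samples, n_features) binary letter matrices flattened into rows.
    targets: (n_samples, n_classes) one-hot class codes (assumed encoding).
    """
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], targets.shape[1]))
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = x @ W                      # linear output
            e = t - y                      # error signal
            W += lr * np.outer(x, e)       # Widrow-Hoff update: W += lr * x * e
    return W
```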
1346 Design and Development of Automatic Leveling and Equalizing Hoist Device for Spacecraft
Authors: Fu Hao, Sun Gang, Tang Laiying, Cui Junfeng
Abstract:
To solve the problem of quick and accurate level adjustment during the precise mating of spacecraft, an automatic leveling and equalizing hoist device for spacecraft is developed. Based on lifting-point adjustment using an XY-workbench, a leveling and equalizing controller with a self-adaptive control algorithm is proposed. Through simulation analysis and lifting tests with an engineering prototype, the validity and reliability of the hoist device are verified, showing that it can meet the precision mating requirements of practical spacecraft applications.
Keywords: Automatic leveling and equalizing, hoist device, lifting point adjustment, self-adaptive control.
1345 Influencing of Rice Residue Management Method on GHG Emission from Rice Cultivation
Authors: Cheewaphongphan P., Garivait S., Pongpullponsak A., Patumsawad S.
Abstract:
Thailand is one of the world's leading rice producers and exporters. Farmers have to increase the frequency of rice cultivation to meet the nation's increasing export demand. This leads to the elimination of rice residues by open burning, which is the quickest and least costly management method. Open burning of rice residue is one of the major causes of air pollutant and greenhouse gas (GHG) emissions. Under the ASEAN agreement on transboundary haze, Thailand set up a master plan to mitigate air pollutant emissions from open burning of agricultural residues. In this master plan, residue incorporation is promoted as an alternative to open burning. However, assessments of both options in terms of GHG emissions, which would reveal their contribution to long-term global warming, are still scarce or nonexistent. In this study, a method for rice residue assessment was first developed in order to estimate and compare GHG emissions from rice cultivation under open burning of rice residues and under incorporation of the same amount of residues, using the 2006 IPCC guidelines for emission estimation and a Life Cycle Analysis technique. Emissions from rice cultivation under different area preparation practices are also discussed.
Keywords: Greenhouse gases, incorporation, rice cultivation, rice field residue, rice residue management.
1344 Evaluation of Settlement of Coastal Embankments Using Finite Elements Method
Authors: Sina Fadaie, Seyed Abolhassan Naeini
Abstract:
Coastal embankments play an important role in coastal structures by reducing the effect of wave forces and controlling the movement of sediments. Many coastal areas are underlain by weak and compressible soils. Estimation of the settlement of coastal embankments during construction is highly important for the design and safety control of embankments and appurtenant structures. Accordingly, selecting and establishing an appropriate model with a reasonable level of complication is one of the challenges for engineers. Although there are advanced models in the literature regarding the design of embankments, there is not enough information on the prediction of their associated settlement, particularly in coastal areas with considerable soft soils. Marine engineering studies are important in Iran due to the existence of two important coastal areas located in the northern and southern parts of the country. In the present study, the validity of Terzaghi's consolidation theory has been investigated. In addition, the settlement of these coastal embankments during construction is predicted using special methods in the PLAXIS software, with appropriate boundary conditions and soil layers. The results indicate that, for the existing soil condition at the site, some parameters are important to consider in the analysis. Consequently, a model is introduced to estimate the settlement of embankments in such geotechnical conditions.
Keywords: Consolidation, coastal embankments, settlement, numerical methods, finite elements method.
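For reference, the classical one-dimensional relations at the heart of Terzaghi's consolidation theory, whose validity the study investigates, are the primary consolidation settlement of a normally consolidated layer and the dimensionless time factor (standard soil mechanics, not equations taken from the paper):

```latex
S_c = \frac{C_c\,H}{1 + e_0}\,\log_{10}\!\left(\frac{\sigma'_0 + \Delta\sigma'}{\sigma'_0}\right),
\qquad
T_v = \frac{c_v\,t}{H_{dr}^{2}}
```

where C_c is the compression index, H the layer thickness, e_0 the initial void ratio, σ'_0 the initial effective stress, Δσ' the stress increment, c_v the coefficient of consolidation and H_dr the drainage path length.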
1343 A Visual Control Flow Language and Its Termination Properties
Authors: László Lengyel, Tihamér Levendovszky, Hassan Charaf
Abstract:
This paper presents the visual control flow support of Visual Modeling and Transformation System (VMTS), which facilitates composing complex model transformations out of simple transformation steps and executing them. The VMTS Visual Control Flow Language (VCFL) uses stereotyped activity diagrams to specify control flow structures and OCL constraints to choose between different control flow branches. This work discusses the termination properties of VCFL and provides an algorithm to support the termination analysis of VCFL transformations.
Keywords: Control Flow, Metamodel-Based Visual Model Transformation, OCL, Termination Properties, UML.
1342 Economic Evaluation Offshore Wind Project under Uncertainly and Risk Circumstances
Authors: Sayed Amir Hamzeh Mirkheshti
Abstract:
Offshore wind energy, as a strategic renewable energy, has been growing rapidly due to its availability, abundance and clean nature. On the other hand, the budget of such a project is considerably higher than that of other renewable energies, and it takes longer to complete. Accordingly, precise estimation of time and cost is needed in order to raise awareness among developers and society and to convince them to develop this kind of energy despite its difficulties. Risks occurring during the project cause its duration and cost to change constantly. Therefore, to develop offshore wind power, it is critical to consider all potential risks that impact the project and to simulate their effects. Knowing these risks is useful for selecting the most effective response strategies, such as avoidance, transfer, and acceptance, in order to decrease their probability and impact. This paper presents an evaluation of the feasibility of a 500 MW offshore wind project in the Persian Gulf and examines its situation with respect to uncertain resources and risk. The purpose of this study is to evaluate the time and cost of an offshore wind project under risk circumstances and uncertain resources by using Monte Carlo simulation. We analyzed each risk and activity along with their distribution functions and their effect on the project.
Keywords: Wind energy project; uncertain resources; risks; Monte Carlo simulation.
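A minimal Monte Carlo sketch of the kind of time/cost simulation described is given below; the activity names, triangular distributions and all numbers are placeholders, and activities are simply summed as if sequential, which is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # Monte Carlo trials

# Hypothetical activities: (most-likely duration in months, most-likely cost in M$),
# each perturbed by a triangular distribution to mimic risk-driven uncertainty.
activities = {
    "foundation supply":     (10, 150.0),
    "turbine supply":        (12, 300.0),
    "offshore installation": (8,  120.0),
    "grid connection":       (6,   80.0),
}

total_time = np.zeros(N)
total_cost = np.zeros(N)
for mode_t, mode_c in activities.values():
    # triangular(low, mode, high): optimistic / most likely / pessimistic values
    total_time += rng.triangular(0.8 * mode_t, mode_t, 1.5 * mode_t, N)
    total_cost += rng.triangular(0.9 * mode_c, mode_c, 1.6 * mode_c, N)

print("P50 duration: %.1f months, P80 duration: %.1f months"
      % (np.percentile(total_time, 50), np.percentile(total_time, 80)))
print("P50 cost: %.0f M$, P80 cost: %.0f M$"
      % (np.percentile(total_cost, 50), np.percentile(total_cost, 80)))
```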
1341 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: G. Candel, D. Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely adopted. It helps with two main tasks: displaying results by coloring items according to their class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure-preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped to exactly the same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same per-embedding complexity as t-SNE, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of the dynamics of high-dimensional datasets.
Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.
1340 Axisymmetric Nonlinear Analysis of Point Supported Shallow Spherical Shells
Authors: M. Altekin, R. F. Yükseler
Abstract:
Geometrically nonlinear axisymmetric bending of a shallow spherical shell with a point support at the apex under linearly varying axisymmetric load was investigated numerically. The edge of the shell was assumed to be simply supported or clamped. The solution was obtained by the finite difference and the Newton-Raphson methods. The thickness of the shell was considered to be uniform and the material was assumed to be homogeneous and isotropic. Sensitivity analysis was made for two geometrical parameters. The accuracy of the algorithm was checked by comparing the deflection with the solution of point supported circular plates and good agreement was obtained.
Keywords: Bending, nonlinear, plate, point support, shell.
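The shell equations themselves are not given in the abstract, but the numerical core it names, a Newton-Raphson iteration on the nonlinear algebraic system produced by the finite-difference discretisation, follows the usual pattern sketched below; the residual function is a toy stand-in for the discretised shell equations.

```python
import numpy as np

def newton_raphson(residual, u0, tol=1e-10, max_iter=50, eps=1e-7):
    """Solve residual(u) = 0 for a discretised nonlinear system.
    The Jacobian is assembled column by column with forward differences."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((len(r), len(u)))
        for j in range(len(u)):
            du = np.zeros_like(u)
            du[j] = eps
            J[:, j] = (residual(u + du) - r) / eps
        u = u - np.linalg.solve(J, r)   # Newton step
    return u

# Toy two-equation residual standing in for the discretised shell equations.
sol = newton_raphson(lambda u: np.array([u[0] ** 3 + u[1] - 1.0,
                                         u[1] ** 3 - u[0] + 1.0]),
                     u0=[0.5, 0.5])     # converges to (1, 0)
```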
1339 Estimation of the Parameters of Muskingum Methods for the Prediction of the Flood Depth in the Moudjar River Catchment
Authors: Fares Laouacheria, Said Kechida, Moncef Chabi
Abstract:
The objective of the study was the hydrological routing modelling needed for continuous monitoring of the hydrological situation in the Moudjar river catchment, especially during floods, with the Hydrologic Engineering Center–Hydrologic Modelling System (HEC-HMS). HEC-GeoHMS was used to transfer data from a geographic information system (GIS) to HEC-HMS for delineating and modelling the catchment in order to estimate the runoff volume, which is used as input to the hydrological routing model. Two hydrological routing models, namely the Muskingum and Muskingum-Cunge routing models, were used for this study. A comparison between the parameters of the Muskingum and Muskingum-Cunge routing models in HEC-HMS was used for modelling flood routing in the Moudjar river catchment and for determining the relationship between these parameters and the physical characteristics of the river. The results indicate that the effects of input parameters such as the weighting factor "X" and the travel time "K" on the output are significant, and that the Muskingum routing model is more sensitive to its input parameters than the Muskingum-Cunge routing model. This study can contribute to understanding and improving knowledge of the mechanisms of river floods, especially in ungauged river catchments.
Keywords: HEC-HMS, hydrological modelling, Muskingum routing model, Muskingum-Cunge routing model.
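For reference, the standard Muskingum recursion that ties the parameters K and X to the routed outflow is sketched below; the example hydrograph and parameter values are illustrative, and the HEC-HMS implementation details are not reproduced.

```python
def muskingum_route(inflow, K, X, dt):
    """Standard Muskingum channel routing:
    O[i] = C0*I[i] + C1*I[i-1] + C2*O[i-1], with coefficients from K, X, dt.
    K: travel time, X: weighting factor (0 <= X <= 0.5), dt: time step."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom      # c0 + c1 + c2 = 1
    outflow = [inflow[0]]                     # assume initial outflow = initial inflow
    for i in range(1, len(inflow)):
        outflow.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * outflow[-1])
    return outflow

# Example: route a simple triangular flood hydrograph (m^3/s)
hydrograph = [10, 30, 60, 90, 70, 50, 30, 20, 10, 10]
routed = muskingum_route(hydrograph, K=2.0, X=0.2, dt=1.0)
```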
1338 Model Predictive Control of Turbocharged Diesel Engine with Exhaust Gas Recirculation
Authors: U. Yavas, M. Gokasan
Abstract:
Control of the diesel engine air path has drawn a lot of attention due to its multi-input multi-output, closely coupled, non-linear nature. Today, precise control of the amount of air to be combusted is a must in order to meet tight emission limits and performance targets. In this study, a passenger-car-size diesel engine is modeled in AVL Boost RT and then simulated with standard, industry-level PID controllers. Finally, a linear model predictive controller is designed and simulated. This study shows the importance of modeling and control of diesel engines with flexible algorithm development in computer-based systems.
Keywords: Predictive control, engine control, engine modeling, PID control, feedforward compensation.
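A minimal sketch of the industry-style PID-with-feedforward baseline that the study compares against is shown below; the controlled variable, gains and sampling time are placeholder assumptions, and the AVL Boost RT engine model is not reproduced.

```python
class PIDFeedforward:
    """Discrete PID with static feedforward, the kind of baseline air-path
    controller (e.g. for EGR valve or VGT position) that model predictive
    control is benchmarked against. All gains here are placeholders."""

    def __init__(self, kp, ki, kd, kff, dt):
        self.kp, self.ki, self.kd, self.kff, self.dt = kp, ki, kd, kff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement, feedforward_input=0.0):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative
                + self.kff * feedforward_input)   # e.g. fuel demand as feedforward

# ctrl = PIDFeedforward(kp=0.8, ki=0.2, kd=0.05, kff=0.1, dt=0.01)
# u = ctrl.step(setpoint=boost_target, measurement=boost_measured,
#               feedforward_input=fuel_rate)      # hypothetical signal names
```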
1337 A High Quality Speech Coder at 600 bps
Authors: Yong Zhang, Ruimin Hu
Abstract:
This paper presents a vocoder that obtains high-quality synthetic speech at 600 bps. To reduce the bit rate, the algorithm is based on a sinusoidally excited linear prediction model that extracts few coding parameters; three consecutive frames are grouped into a superframe, and joint vector quantization is used to obtain high coding efficiency. The inter-frame redundancy is exploited with distinct quantization schemes for the different unvoiced/voiced frame combinations in the superframe. Experimental results show that the quality of the proposed coder is better than that of 2.4 kbps LPC10e, is approximately the same as that of 2.4 kbps MELP, and is achieved with high robustness.
Keywords: Speech coding, vector quantization, linear prediction, mixed sinusoidal excitation.
1336 Simulation of the Visco-Elasto-Plastic Deformation Behaviour of Short Glass Fibre Reinforced Polyphthalamides
Authors: V. Keim, J. Spachtholz, J. Hammer
Abstract:
The importance of fibre-reinforced plastics continually increases due to their excellent mechanical properties and low material and manufacturing costs combined with significant weight reduction. Today, components are usually designed and calculated numerically by using finite element methods (FEM) to avoid expensive laboratory tests. These programs are based on material models that include material-specific deformation characteristics. In this research project, material models for short glass fibre reinforced plastics are presented to simulate the visco-elasto-plastic deformation behaviour. Prior to modelling, specimens of the material EMS Grivory HTV-5H1, consisting of a polyphthalamide matrix reinforced by 50 wt.% of short glass fibres, are characterized experimentally in terms of the highly time-dependent deformation behaviour of the matrix material. To minimize the experimental effort, the cyclic deformation behaviour under tensile and compressive loading (R = −1) is characterized by isothermal complex low cycle fatigue (CLCF) tests. By combining, within one experiment, cycles at two strain amplitudes and strain rates spanning three orders of magnitude together with relaxation intervals, the visco-elastic deformation is characterized. To identify visco-plastic deformation, monotonous tensile tests, either displacement controlled or strain controlled (CERT), are compared. All relevant modelling parameters for this complex superposition of simultaneously varying mechanical loadings are quantified by these experiments. Subsequently, two different material models are compared with respect to their accuracy in describing the visco-elasto-plastic deformation behaviour. First, an extended 12-parameter model based on Chaboche (EVP-KV2) is used to model cyclic visco-elasto-plasticity on two time scales. The parameters of this model, which includes a total separation of elastic and plastic deformation, are obtained by computational optimization using a genetic algorithm, an evolutionary algorithm driven by a fitness function. Second, the 12-parameter visco-elasto-plastic material model by Launay is used. In detail, this model contains a different type of flow function based on the definition of the visco-plastic deformation as a part of the overall deformation. The accuracy of the models is verified by corresponding experimental LCF testing.
Keywords: Complex low cycle fatigue, material modelling, short glass fibre reinforced polyphthalamides, visco-elasto-plastic deformation.
1335 Dynamic Admission Control for Quality of Service in IP Networks
Authors: J. Kasigwa, V. Baryamureeba, D. Williams
Abstract:
The goal of admission control is to support the Quality of Service demands of real-time applications via resource reservation in IP networks. In this paper, we introduce a novel Dynamic Admission Control (DAC) mechanism for IP networks. The DAC dynamically allocates network resources using the previous network pattern for each path and uses a dynamic admission algorithm to improve bandwidth utilization through bandwidth brokers. We evaluate the performance of the proposed mechanism through trace-driven simulation experiments in terms of blocking probability, throughput, and normalized utilization.
Keywords: Bandwidth broker, dynamic admission control (DAC), IP networks, quality of service, real-time flows.
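A toy sketch of the admission decision a per-path bandwidth broker could make is given below; the utilisation-history headroom rule is purely an assumption for illustration, not the DAC algorithm itself.

```python
class BandwidthBroker:
    """Toy per-path bandwidth broker: admit a real-time flow only if the
    path still has enough unreserved capacity. The paper's use of the
    previous network pattern is reduced here to a simple utilisation history."""

    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0
        self.history = []                    # past utilisation samples for this path

    def admit(self, requested_mbps):
        # leave headroom proportional to the recent average utilisation (assumed rule)
        avg_util = sum(self.history) / len(self.history) if self.history else 0.0
        headroom = 0.1 * self.capacity * avg_util
        if self.reserved + requested_mbps + headroom <= self.capacity:
            self.reserved += requested_mbps  # flow admitted, resources reserved
            return True
        return False                         # flow blocked

    def record_utilisation(self):
        self.history.append(self.reserved / self.capacity)
```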
1334 Minimizing Examinee Collusion with a Latin-Square Treatment Structure
Authors: M. H. Omar
Abstract:
Cheating on standardized tests has been a major concern, as it potentially undermines measurement precision. One major way to reduce cheating by collusion is to administer multiple forms of a test. Even with this approach, the potential for collusion is still quite large. A Latin-square treatment structure for distributing multiple forms is proposed to further reduce the colluding potential, and an index to measure the extent of colluding potential is also proposed. Finally, with a simple algorithm, various Latin squares were explored to find the structure that keeps the colluding potential to a minimum.
Keywords: Colluding pairs, scale for colluding potential, Latin-square structure, minimization of cheating.
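A small sketch of the idea: a cyclic Latin square assigns forms so that no form repeats within a row or column of a seating block, and a simple adjacency count stands in for the proposed colluding-potential index (the paper's actual index is not reproduced).

```python
def latin_square(n):
    """Cyclic n x n Latin square: entry (r, c) is the form given to seat
    column c in row r, so no form repeats within a row or a column."""
    return [[(r + c) % n for c in range(n)] for r in range(n)]

def colluding_pairs(square):
    """Count adjacent left-right seat pairs that received the same form --
    a simple stand-in for a colluding-potential index."""
    count = 0
    for row in square:
        count += sum(1 for a, b in zip(row, row[1:]) if a == b)
    return count

forms = latin_square(4)        # 4 test forms, 4 x 4 seating block
print(forms)                   # [[0,1,2,3],[1,2,3,0],[2,3,0,1],[3,0,1,2]]
print(colluding_pairs(forms))  # 0 adjacent same-form pairs within rows
```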
1333 Re-interpreting Ruskin with Respect to the Wall
Authors: Anjali Sadanand, R. V. Nagarajan
Abstract:
Architecture morphs with advances in technology, and the roof, wall, and floor, as basic elements of a building, follow in redefining themselves over time. Their contribution is bound by time and held by design principles that deal with function, sturdiness, and beauty. Architecture engages with people to give joy through its form, material, design structure, and spatial qualities. This paper attempts to re-interpret John Ruskin's "Seven Lamps of Architecture" in the context of the architecture of the modern and present periods. The paper focuses on the "wall" as an element of study in this context. Four of Ruskin's seven lamps are discussed, namely beauty, truth, life, and memory, through examples of architecture ranging from modernism to the contemporary architecture of today. The study focuses on the relevance of Ruskin's principles to the "wall" in particular, in buildings of different materials and across a range of typologies from all parts of the world. Two examples are analyzed for each lamp, and it is shown that in each case Ruskin's lamps remain significant in modern and contemporary architecture. Nature, to which Ruskin alludes for his lamp of "beauty", is found in the different expressions of interpretation used by Corbusier in his Villa Stein façade, based on proportion found in nature, and in Toyo Ito's direct translation of an understanding of the structure of trees into his façade design for the showroom of a Japanese bag boutique. "Truth" is shown in Mies van der Rohe's Crown Hall building, with its clarity of material and structure, and in Studio Mumbai's Palmyra House, which celebrates the use of natural materials and local craftsmanship. "Life" is reviewed with a sustainable house in Kerala by Ashrams Ravi and with Alvar Aalto's summer house, which illustrate walls as repositories of intellectual thought and craft. "Memory" is discussed with Charles Correa's Jawahar Kala Kendra and Venturi's Vanna Venturi House, which disclose façades as text in the context of materiality and iconography. Beauty is reviewed in the Villa Stein and in Toyo Ito's branded retail building in Tokyo. The paper thus concludes that Ruskin's lamps can be interpreted in today's context and add richness and meaning to the understanding of architecture.
Keywords: Beauty, design, façade, modernism.
1332 Statically Fused Unbiased Converted Measurements Kalman Filter
Authors: Zhengkun Guo, Yanbin Li, Wenqing Wang, Bo Zou
Abstract:
Active radar and sonar systems often report Doppler measurements in addition to position measurements such as range and bearing. A tracker can perform better by making full use of the Doppler measurements. However, due to the high nonlinearity of the Doppler measurements with respect to the target state in Cartesian coordinates, those measurements are not always fully exploited. This paper focuses on processing Doppler measurements as well as position measurements reported in polar coordinates. The Statically Fused Converted Position and Doppler Measurements Kalman Filter (SF-CMKF) with additive debiased measurement conversion has been presented previously. However, the exact compensation for the bias of the measurement conversion is multiplicative and depends on the statistics of the cosine of the angle measurement error. As a result, the consistency and performance of the SF-CMKF may be suboptimal in large-angle-error situations. In this paper, the multiplicative unbiased position and Doppler measurement conversion for two-dimensional (polar-to-Cartesian) tracking is derived, and the SF-CMKF is improved by using this conversion. Monte Carlo simulations are presented to demonstrate the statistical consistency of the multiplicative unbiased conversion and the superior performance of the modified SF-CMKF (SF-UCMKF).
Keywords: Measurement conversion, Doppler, Kalman filter, estimation, tracking.
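For reference, the classical multiplicatively unbiased polar-to-Cartesian position conversion (the compensation being multiplicative, as the abstract notes) can be sketched as follows; the Doppler part of the paper's derived conversion is not reproduced here.

```python
import numpy as np

def unbiased_polar_to_cartesian(r, b, sigma_b):
    """Classical multiplicatively unbiased 2-D position conversion:
    the factor lambda_b = E[cos(bearing error)] = exp(-sigma_b^2 / 2)
    compensates the bias that plain r*cos(b), r*sin(b) would carry.
    r: measured range, b: measured bearing (rad), sigma_b: bearing std (rad)."""
    lam = np.exp(-sigma_b ** 2 / 2.0)
    x = r * np.cos(b) / lam
    y = r * np.sin(b) / lam
    return x, y
```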
1331 Proposal of Additional Fuzzy Membership Functions in Smoothing Transition Autoregressive Models
Authors: Ε. Giovanis
Abstract:
In this paper we present, propose and examine additional membership functions for Smoothing Transition Autoregressive (STAR) models. More specifically, we present the hyperbolic tangent, Gaussian and generalized bell functions. Because Smoothing Transition Autoregressive (STAR) models follow a fuzzy logic approach, more fuzzy membership functions should be tested. Furthermore, fuzzy rules can be incorporated, or other training or computational methods, such as error backpropagation or a genetic algorithm, can be applied instead of nonlinear least squares. We examine two macroeconomic variables of the US economy: the inflation rate and the 6-month treasury bill interest rate.
Keywords: Forecast, fuzzy membership functions, smoothing transition, time series.
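The three proposed transition/membership functions have standard closed forms; a short sketch is given below, with the usual textbook parameterisations, which may differ in detail from the paper's.

```python
import numpy as np

def tanh_membership(s, gamma, c):
    """Hyperbolic tangent transition, rescaled to the interval [0, 1]."""
    return 0.5 * (np.tanh(gamma * (s - c)) + 1.0)

def gaussian_membership(s, c, sigma):
    """Gaussian membership centred at c with width sigma."""
    return np.exp(-((s - c) ** 2) / (2.0 * sigma ** 2))

def generalized_bell_membership(s, a, b, c):
    """Generalized bell: 1 / (1 + |(s - c)/a|^(2b))."""
    return 1.0 / (1.0 + np.abs((s - c) / a) ** (2 * b))

# In a STAR model y_t = phi1'x_t + G(s_t) * phi2'x_t + e_t, any of these G(.)
# can weight the smooth switch between the two autoregressive regimes.
s = np.linspace(-3, 3, 7)
print(generalized_bell_membership(s, a=1.0, b=2.0, c=0.0))
```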
1330 Robust Steam Temperature Regulation for Distillation of Essential Oil Extraction Process using Hybrid Fuzzy-PD plus PID Controller
Authors: Nurhani Kasuan, Zakariah Yusuf, Mohd Nasir Taib, Mohd Hezri Fazalul Rahiman, Nazurah Tajuddin, Mohd Azri Abdul Aziz
Abstract:
This paper presents a hybrid fuzzy-PD plus PID (HFPP) controller and its application to a steam distillation process for an essential oil extraction system. Steam temperature is one of the most significant parameters that can influence the composition of the essential oil yield. Due to parameter variations and changes in operating conditions during distillation, a robust steam temperature controller is needed to avoid degradation of the essential oil quality. Initially, a PRBS input is applied to the system, and the steam temperature output is modeled using an ARX model structure. Parameter estimation and tuning of the HFPP controller scheme are carried out by simulation. The effectiveness and robustness of the proposed control technique are validated by real-time implementation on the system. The performance of the HFPP using 25 and 49 fuzzy rules is compared. The experimental results demonstrate that the proposed HFPP with 49 fuzzy rules achieves better, more consistent and more robust control than PID when tested on set-point tracking and on the effects of disturbance.
Keywords: Fuzzy logic controller, steam temperature, steam distillation, real-time control.
1329 A Parallel Implementation of the Reverse Converter for the Moduli Set {2^n, 2^n − 1, 2^(n−1) − 1}
Authors: Mehdi Hosseinzadeh, Amir Sabbagh Molahosseini, Keivan Navi
Abstract:
In this paper, a new reverse converter for the moduli set {2^n, 2^n − 1, 2^(n−1) − 1} is presented. We improve a previously introduced conversion algorithm to derive an efficient hardware design for the reverse converter. The hardware architecture of the proposed converter is based on carry-save adders and regular binary adders, without requiring modular adders. The presented design is faster than the latest reverse converter introduced for the moduli set {2^n, 2^n − 1, 2^(n−1) − 1}. It also performs better than the reverse converters for the recently introduced moduli set {2^(n+1) − 1, 2^n, 2^n − 1}.
Keywords: Residue arithmetic, Residue number system, Residue-to-Binary converter, Reverse converter
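The dedicated adder-based architecture is hardware-specific, but the underlying residue-to-binary arithmetic for this moduli set can be checked with a generic Chinese Remainder Theorem sketch such as the one below (this is not the paper's conversion algorithm).

```python
from math import prod

def crt_reverse_convert(residues, moduli):
    """Generic Chinese-Remainder-Theorem residue-to-binary conversion.
    The paper derives a dedicated carry-save-adder architecture for the set
    {2^n, 2^n - 1, 2^(n-1) - 1}; this sketch only checks the arithmetic."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # pow(Mi, -1, m): modular inverse of Mi mod m
    return x % M

n = 5
moduli = [2**n, 2**n - 1, 2**(n - 1) - 1]   # 32, 31, 15: pairwise coprime
value = 12345
residues = [value % m for m in moduli]
assert crt_reverse_convert(residues, moduli) == value
```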
1328 Iterative Clustering Algorithm for Analyzing Temporal Patterns of Gene Expression
Authors: Seo Young Kim, Jae Won Lee, Jong Sung Bae
Abstract:
Microarray experiments are information rich; however, extensive data mining is required to identify the patterns that characterize the underlying mechanisms of action. For biologists, a key aim when analyzing microarray data is to group genes based on the temporal patterns of their expression levels. In this paper, we used an iterative clustering method to find temporal patterns of gene expression. We evaluated the performance of this method by applying it to real sporulation data and simulated data. The patterns obtained using the iterative clustering were found to be superior to those obtained using existing clustering algorithms.
Keywords: Clustering, microarray experiment, temporal pattern of gene expression data.
1327 Adaptive Weighted Averaging Filter Using the Appropriate Number of Consecutive Frames
Authors: Mahmoud Saeidi, Ali Nazemipour
Abstract:
In this paper, we propose a novel adaptive spatiotemporal filter that utilizes image sequences in order to remove noise. The consecutive frames comprise the current, previous and next noisy frames. The filter proposed in this paper is based upon weighted averaging of pixel intensities and the noise variance in the image sequence. It utilizes an Appropriate Number of Consecutive Frames (ANCF) based on the intensities of the noisy pixels among the frames. The number of consecutive frames is adaptively calculated for each region in the image, and its value may change from one region to another depending on the pixel intensities within the region. The weights are determined by a well-defined mathematical criterion, which is adaptive to the features of the spatiotemporal pixels of the consecutive frames. It is experimentally shown that the proposed filter can preserve image structures and edges under motion while suppressing noise, and thus can be effectively used in filtering image sequences. In addition, the AWA filter using ANCF is particularly well suited for filtering sequences that contain segments with abruptly changing scene content due to, for example, rapid zooming and changes in the view of the camera.
Keywords: Appropriate number of consecutive frames, adaptive weighted averaging, motion estimation, noise variance, motion compensation.
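A toy sketch of weighted averaging over three consecutive frames, with weights shrinking where a neighbouring frame deviates from the current one relative to the noise variance, is given below; the weighting rule and the omission of the ANCF selection are simplifying assumptions.

```python
import numpy as np

def awa_filter(prev, curr, nxt, noise_var, eps=1e-6):
    """Toy adaptive weighted averaging over three consecutive frames.
    Each neighbouring frame is weighted inversely to how much its pixel
    deviates from the current frame relative to the noise variance, so
    moving regions lean on fewer frames than static ones."""
    frames = [prev.astype(float), curr.astype(float), nxt.astype(float)]
    weights = []
    for f in frames:
        d2 = (f - curr.astype(float)) ** 2
        # deviation beyond the noise level reduces the weight of that frame
        weights.append(1.0 / (1.0 + np.maximum(d2 - noise_var, 0.0) / (noise_var + eps)))
    w = np.stack(weights)
    return (w * np.stack(frames)).sum(axis=0) / w.sum(axis=0)
```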
1326 Using PFA in Feature Analysis and Selection for H.264 Adaptation
Authors: Nora A. Naguib, Ahmed E. Hussein, Hesham A. Keshk, Mohamed I. El-Adawy
Abstract:
Classification of video sequences based on their contents is a vital process for adaptation techniques. It helps decide which adaptation technique best fits the resource reduction requested by the client. In this paper we used the principal feature analysis algorithm to select a reduced subset of video features. The main idea is to select only one feature from each class based on the similarities between the features within that class. Our results showed that using this feature reduction technique the source video features can be completely omitted from future classification of video sequences.
Keywords: Adaptation, feature selection, H.264, Principal Feature Analysis (PFA)
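A sketch of the usual Principal Feature Analysis recipe, projecting each feature into the leading principal-component space, clustering those feature vectors and keeping one representative per cluster, is shown below using scikit-learn; details such as the number of retained components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def principal_feature_analysis(X, n_select, n_components=None):
    """Principal Feature Analysis sketch: cluster the features' loading
    vectors in principal-component space and keep, from each cluster, the
    feature closest to the cluster centroid."""
    if n_components is None:
        n_components = n_select
    pca = PCA(n_components=n_components).fit(X)
    A = pca.components_.T                      # one q-dimensional row per feature
    km = KMeans(n_clusters=n_select, n_init=10, random_state=0).fit(A)
    selected = []
    for c in range(n_select):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(A[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(d)])  # representative feature of cluster c
    return sorted(selected)

# X: (n_videos, n_features) matrix of extracted video features (hypothetical)
# keep = principal_feature_analysis(X, n_select=5)
```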
1325 A Complexity-Based Approach in Image Compression using Neural Networks
Authors: Hadi Veisi, Mansour Jamzad
Abstract:
In this paper we present an adaptive method for image compression that is based on the complexity level of the image. The basic compressor/de-compressor structure of this method is a multilayer perceptron artificial neural network. In the adaptive approach, different back-propagation artificial neural networks are used as compressor and de-compressor; this is done by dividing the image into blocks, computing the complexity of each block, and then selecting one network for each block according to its complexity value. Three complexity measures, called Entropy, Activity and Pattern-based, are used to determine the level of complexity in image blocks, and their ability to estimate complexity is evaluated and compared. In training and evaluation, each image block is assigned to a network based on its complexity value. Best-SNR is another alternative for selecting the compressor network for an image block in the evaluation phase: it chooses the trained network that yields the best SNR when compressing the input image block. In our evaluations, the best results are obtained when overlapping of the blocks is allowed and the compressor networks are chosen based on Best-SNR. In this case, the results demonstrate the superiority of this method compared with previous similar works and with JPEG standard coding.
Keywords: Adaptive image compression, image complexity, multi-layer perceptron neural network, JPEG standard, PSNR.
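As an illustration of the entropy complexity measure named in the abstract, the sketch below computes a per-block entropy and maps it to a complexity class; the thresholds and the three-class split are assumptions, not the paper's values.

```python
import numpy as np

def block_entropy(block, levels=256):
    """Entropy complexity measure for one image block: higher entropy means
    a busier block, which would be routed to a compressor network trained
    for that complexity class."""
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def assign_network(block, thresholds=(3.0, 5.5)):
    """Map the entropy value onto one of three complexity classes
    (illustrative thresholds): 0 = low, 1 = medium, 2 = high."""
    return int(np.searchsorted(thresholds, block_entropy(block)))
```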
1324 A High Bitrate Information Hiding Algorithm for Video in Video
Authors: Wang Shou-Dao, Xiao Chuang-Bai, Lin Yu
Abstract:
In high-bitrate information hiding techniques, 1 bit is embedded within each 4 x 4 Discrete Cosine Transform (DCT) coefficient block by means of vector quantization, and the hidden bit can then be effectively extracted at the terminal end. In this paper, high-bitrate information hiding algorithms are summarized, and a video-in-video scheme is implemented. Experimental results show that a host video in which a large amount of auxiliary information is embedded suffers little decline in visual quality: the luminance PSNR (PSNR-Y) of the host video degrades by only 0.22 dB on average, while the hidden information has a high survival rate and remains highly robust under H.264/AVC compression, with an average Bit Error Rate (BER) of the hidden information of 0.015%.
Keywords: Information hiding, embedding, quantization, extraction.
1323 Solving Machine Loading Problem in Flexible Manufacturing Systems Using Particle Swarm Optimization
Authors: S. G. Ponnambalam, Low Seng Kiat
Abstract:
In this paper, a particle swarm optimization (PSO) algorithm is proposed to solve the machine loading problem in a flexible manufacturing system (FMS), with the bi-criterion objectives of minimizing system unbalance and maximizing system throughput in the presence of technological constraints such as available machining time and tool slots. A mathematical model is used to select machines and to assign operations and the required tools. The performance of the PSO is tested on 10 sample datasets, and the results are compared with the heuristics reported in the literature. The results show that the proposed PSO is comparable with the algorithms reported in the literature.
Keywords: Machine loading problem, FMS, particle swarm optimization.
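A plain PSO loop of the kind referred to is sketched below on a generic cost function; the machine-loading encoding, the unbalance/throughput objective and the constraint handling of the paper are not reproduced.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    """Plain particle swarm optimisation. For the machine-loading problem,
    each particle would encode machine/tool assignments and `cost` would
    combine system unbalance and (negative) throughput with penalties for
    violated time and tool-slot constraints; here `cost` is generic."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
        x = np.clip(x + v, lo, hi)                                  # position update
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example with a toy cost function (shifted sphere):
best, best_val = pso_minimize(lambda p: np.sum((p - 0.3) ** 2), dim=5)
```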
1322 A Content-Based Optimization of Data Stream Television Multiplex
Authors: Jaroslav Polec, Martin Šimek, Michal Martinovič, Elena Šikudová
Abstract:
A television multiplex has a fixed reserved capacity, and therefore only a limited number of videos can be propagated through it. The composition of the multiplex has a major impact on how many videos it can carry. Therefore, this paper designs a simple algorithm to optimize the capacity utilization of the multiplex. The content from which the multiplex is composed also has a significant impact on the number of programs it can hold: the content may be movies, news, sport, animated stories, documentaries, etc., and each of these types has its own specific characteristics that affect the resulting data stream. The paper also analyses the impact of the composition of the multiplex, by video content, on the utilization of its capacity.
Keywords: Multiplex, content, group of pictures, frame, capacity.
1321 Elimination of Redundant Links in Web Pages – Mathematical Approach
Authors: G. Poonkuzhali, K.Thiagarajan, K.Sarukesi
Abstract:
With the enormous growth of the web, users easily get lost in its rich hyper-structure. Thus, developing user-friendly and automated tools that provide relevant information without redundant links is the primary task for website owners wishing to cater to their users' needs. Most existing web mining algorithms concentrate on finding frequent patterns while neglecting the less frequent ones, which are likely to contain outlying data such as noise and irrelevant and redundant data. This paper proposes a new algorithm for mining web content that detects redundant links in web documents using set-theoretic operations (classical mathematics) such as subset, union and intersection. The redundant links are then removed from the original web content so that the user receives only the required information.
Keywords: Web documents, web content mining, redundant link, outliers, set theory.
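A small sketch of the set-theoretic idea, treating links common to (the intersection of) all pages as redundant and removing them by set difference, is given below; the exact redundancy criterion of the proposed algorithm may differ, and the URLs are hypothetical.

```python
def remove_redundant_links(pages):
    """Set-theoretic sketch of redundant-link elimination: a link that
    appears on every page of the site (the intersection of the pages'
    link sets) is treated as navigational boilerplate and dropped.
    `pages` maps a page URL to the set of outgoing links found on it."""
    common = set.intersection(*pages.values())      # links shared by all pages
    return {url: links - common for url, links in pages.items()}

site = {
    "index.html":  {"about.html", "contact.html", "news/1.html"},
    "about.html":  {"about.html", "contact.html", "team.html"},
    "news/1.html": {"about.html", "contact.html", "news/2.html"},
}
cleaned = remove_redundant_links(site)
# 'about.html' and 'contact.html' appear on every page and are removed as redundant
```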