Search results for: PID controller and Genetic Algorithm (GA).
Smartphone Video Source Identification Based on Sensor Pattern Noise
Authors: Raquel Ramos López, Anissa El-Khattabi, Ana Lucila Sandoval Orozco, Luis Javier García Villalba
Abstract:
The increasing number of mobile devices with integrated cameras means that most digital video now comes from these devices. These videos can be made anytime, anywhere and for different purposes; they can also be shared on the Internet within a short period of time and may sometimes contain recordings of illegal acts. When such videos are used for forensic purposes, the need to reliably trace their origin becomes evident. This work proposes an algorithm to identify the brand and model of the mobile device that generated a video. Its procedure is as follows: after obtaining the relevant video information, a classification algorithm based on sensor noise and the Wavelet Transform performs the identification. We also present experimental results that support the validity of the techniques used and show promising results.
Keywords: Digital video, forensics analysis, key frame, mobile device, PRNU, sensor noise, source identification.
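A minimal sketch of the sensor-noise idea described in this abstract, assuming grayscale frames as float arrays and using scikit-image's wavelet denoiser; the function names and the correlation-based matching are illustrative, not the authors' implementation.

```python
# Minimal PRNU-style sketch (illustrative only, not the paper's algorithm).
# Assumes frames are 2D grayscale float arrays on a common scale.
import numpy as np
from skimage.restoration import denoise_wavelet

def noise_residual(frame):
    """Sensor noise estimate: frame minus its wavelet-denoised version."""
    return frame - denoise_wavelet(frame)

def device_fingerprint(frames):
    """Average the residuals of several frames from the same device."""
    return np.mean([noise_residual(f) for f in frames], axis=0)

def identify(query_frames, fingerprints):
    """Return the device whose fingerprint correlates best with the query residual."""
    q = np.mean([noise_residual(f) for f in query_frames], axis=0).ravel()
    scores = {dev: np.corrcoef(q, fp.ravel())[0, 1] for dev, fp in fingerprints.items()}
    return max(scores, key=scores.get), scores
```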
ORank: An Ontology Based System for Ranking Documents
Authors: Mehrnoush Shamsfard, Azadeh Nematzadeh, Sarah Motiee
Abstract:
The growing volume of information on the Internet creates an increasing need for (semi-)automatic methods to retrieve documents and rank them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods; this combination preserves the precision of ranking without losing speed. The approach exploits natural language processing techniques to extract phrases and stem words. An ontology-based conceptual method is then used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to perform the expansion based on a weighted combination of different conceptual relationships and (5) allowing variable document vector dimensions. A ranking system called ORank has been developed to implement and test the proposed model. The test results are included at the end of the paper.
Keywords: Document ranking, Ontology, Spread activation algorithm, Annotation.
FPGA based Relative Distance Measurement using Stereo Vision Technology
Authors: Manasi Pathade, Prachi Kadam, Renuka Kulkarni, Tejas Teredesai
Abstract:
In this paper, we propose a novel concept of relative distance measurement using stereo vision technology and discuss its implementation on an FPGA-based real-time image processor. We capture two images using two CCD cameras and compare them. Disparity is calculated for each pixel using a real-time dense disparity calculation algorithm based on the concept of an indexed histogram for matching. Since disparity is inversely proportional to distance (proved later), we can thus obtain the relative distances of objects in front of the camera. The output is displayed on a TV screen in the form of a depth image (optionally using pseudo colors). This system works in real time at the full PAL frame rate (720 x 576 active pixels @ 25 fps).
Keywords: Stereo Vision, Relative Distance Measurement, Indexed Histogram, Real time FPGA Image Processor
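The inverse relationship between disparity and distance mentioned above follows from the standard parallel-rig relation Z = f·B/d. A toy sketch with hypothetical camera parameters (the FPGA indexed-histogram matcher is not reproduced):

```python
# Toy illustration of the disparity-depth relation used in stereo vision:
# depth Z = f * B / d, so disparity d is inversely proportional to distance.
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return depth in metres for a pixel disparity (parallel stereo rig)."""
    if disparity_px <= 0:
        return float("inf")          # zero disparity -> point at infinity
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 12 cm baseline (hypothetical values)
for d in (4, 8, 16, 32):
    print(d, "px ->", round(depth_from_disparity(d, 800, 0.12), 2), "m")
```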
An Alternative Approach for Assessing the Impact of Cutting Conditions on Surface Roughness Using Single Decision Tree
Authors: S. Ghorbani, N. I. Polushin
Abstract:
In this study, an approach to identifying the factors affecting surface roughness in a machining process is presented. The study is based on 81 surface roughness observations over a wide range of cutting tools (conventional, cutting tool with holes, cutting tool with composite material), workpiece materials (AISI 1045 steel, AA2024 aluminum alloy, A48-class 30 gray cast iron), spindle speeds (630-1000 rpm), feed rates (0.05-0.075 mm/rev), depths of cut (0.05-0.15 mm) and tool overhangs (41-65 mm). A single decision tree (SDT) analysis was carried out to identify the factors for predicting surface roughness, and the CART algorithm was employed for building and evaluating the regression tree. Results show that the single decision tree outperforms traditional regression models, with higher forecast accuracy.
Keywords: Cutting condition, surface roughness, decision tree, CART algorithm.
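A minimal sketch of fitting a CART regression tree to cutting-condition data in the spirit of this study, using scikit-learn's DecisionTreeRegressor as a stand-in CART implementation; the data below are synthetic placeholders, not the 81 measurements of the paper.

```python
# CART-style regression tree on synthetic cutting-condition data (illustrative).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Columns: spindle speed (rpm), feed rate (mm/rev), depth of cut (mm), tool overhang (mm)
X = np.column_stack([
    rng.uniform(630, 1000, 81),
    rng.uniform(0.05, 0.075, 81),
    rng.uniform(0.05, 0.15, 81),
    rng.uniform(41, 65, 81),
])
# Fake roughness response for demonstration purposes only
y = 0.5 + 0.002 * X[:, 3] + 5.0 * X[:, 1] + rng.normal(0, 0.05, 81)

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=5).fit(X, y)
print("R^2 on training data:", round(tree.score(X, y), 3))
print("Feature importances:", tree.feature_importances_.round(3))
```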
Hybrid Artificial Immune System for Job Shop Scheduling Problem
Authors: Bin Cai, Shilong Wang, Haibo Hu
Abstract:
The job shop scheduling problem (JSSP) is a notoriously difficult problem in combinatorial optimization. This paper presents a hybrid artificial immune system for the JSSP with the objective of minimizing makespan. The proposed approach combines the artificial immune system, which has a powerful global exploration capability, with a local search method, which can exploit the optimal antibody. The antibody coding scheme is based on the operation-based representation. The decoding procedure limits the search space to the set of full active schedules. In each generation, a local search heuristic based on the neighborhood structure proposed by Nowicki and Smutnicki is applied to improve the solutions. The approach is tested on 43 benchmark problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
Keywords: Artificial immune system, Job shop scheduling problem, Local search, Metaheuristic algorithm
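For readers unfamiliar with the operation-based representation mentioned above, the sketch below decodes a permutation-with-repetition chromosome into a schedule and returns its makespan; the immune operators and the Nowicki-Smutnicki local search are not reproduced, and the tiny instance is hypothetical.

```python
# Decode an operation-based JSSP chromosome: each occurrence of job j means
# "schedule the next unscheduled operation of job j".
def makespan(chromosome, jobs):
    """jobs[j] is a list of (machine, processing_time) in technological order."""
    next_op = [0] * len(jobs)                 # next operation index per job
    job_ready = [0] * len(jobs)               # completion time of each job's last op
    mach_ready = {}                           # completion time per machine
    for j in chromosome:
        machine, p = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0))
        job_ready[j] = mach_ready[machine] = start + p
        next_op[j] += 1
    return max(job_ready)

# Hypothetical 3-job, 3-machine instance
jobs = [[(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)]]
print(makespan([0, 1, 2, 0, 2, 1, 1, 0, 2], jobs))
```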
Using Artificial Neural Network Algorithm for Voltage Stability Improvement
Authors: Omid Borazjani, Mahmoud Roosta, Khodakhast Isapour, Ali Reza Rajabi
Abstract:
This paper presents an application of the Artificial Neural Network (ANN) algorithm for improving power system voltage stability. The training data are obtained by solving several normal and abnormal conditions using the linear programming technique. The selected objective function gives the minimum deviation of the reactive power control variables, which leads to the maximization of the minimum eigenvalue of the load flow Jacobian. The considered reactive power control variables are switchable VAR compensators, OLTC transformers and excitation of generators. The method has been implemented on a modified IEEE 30-bus test system. The results obtained from the test clearly show that the trained neural network is capable of improving the voltage stability in the power system with a high level of precision and speed.
Keywords: Artificial Neural Network (ANN), Load Flow, Voltage Stability, Power Systems.
Multiphase Flow Regime Detection Algorithm for Gas-Liquid Interface Using Ultrasonic Pulse-Echo Technique
Authors: Serkan Solmaz, Jean-Baptiste Gouriet, Nicolas Van de Wyer, Christophe Schram
Abstract:
The efficiency of the cooling process for cryogenic propellant boiling in engine cooling channels of space applications is strongly affected by the phase change that occurs during boiling. The effectiveness of the cooling process depends strongly on the type of boiling regime, such as nucleate or film boiling. Geometric constraints, such as a non-transparent cooling channel, make it impossible to use visualization methods. The ultrasonic (US) technique, as a non-destructive testing (NDT) method, has therefore been applied in almost every engineering field for different purposes. Discontinuities emerge at the boundaries between mediums, i.e. among different phases. The sound wave emitted by the US transducer is both transmitted and reflected at a gas-liquid interface, which makes it possible to detect different phases. Due to thermal and structural concerns, it is impractical to maintain direct contact between the US transducer and the working fluid. Hence, the transducer must be located outside the cooling channel, which introduces additional interfaces and creates ambiguities about the applicability of the present method. In this work, an exploratory study is conducted to determine the detection ability and applicability of the US technique to the cryogenic boiling process in a cooling cycle where the US transducer is placed outside the channel. Boiling of cryogenics is a complex phenomenon whose thermal properties bring several hindrances to the experimental protocol. Thus, substitute materials are purposefully selected to simplify the experiments. In addition, the nucleate and film boiling regimes emerging during the boiling process are simulated using non-deformable stainless steel balls, air-bubble injection apparatuses and air clearances instead of conducting a real-time boiling process. A versatile detection algorithm is then developed based on these exploratory studies. According to the algorithm developed, the phases can be distinguished with 99% accuracy as no-phase, air-bubble, and air-film presences. The results show the detection ability and applicability of the US technique for exploratory purposes.
Keywords: Ultrasound, ultrasonic, multiphase flow, boiling, cryogenics, detection algorithm.
Forecasting Direct Normal Irradiation at Djibouti Using Artificial Neural Network
Authors: Ahmed Kayad Abdourazak, Abderafi Souad, Zejli Driss, Idriss Abdoulkader Ibrahim
Abstract:
In this paper, an Artificial Neural Network (ANN) is used to predict solar irradiation in Djibouti for the first time, which is useful for the integration of Concentrating Solar Power (CSP) and for site selection for new or future solar plants as part of solar energy development. An ANN algorithm was developed to establish a forward/reverse correspondence between latitude, longitude, altitude and monthly solar irradiation. For this purpose, German Aerospace Centre (DLR) data for eight Djibouti sites were used for training and testing a standard three-layer network with the Levenberg-Marquardt back-propagation algorithm. The results show very good agreement for solar irradiation prediction in Djibouti and prove that the proposed approach can be used as an efficient tool for the prediction of solar irradiation, providing helpful information for site selection, design and planning of solar plants.
Keywords: Artificial neural network, solar irradiation, concentrated solar power, Levenberg-Marquardt.
Object Speed Estimation by using Fuzzy Set
Authors: Hossein Pazhoumand-Dar, Amir Mohsen Toliyat Abolhassani, Ehsan Saeedi
Abstract:
Speed estimation is one of the important and practical tasks in machine vision, robotics and mechatronics. The availability of high-quality and inexpensive video cameras and the increasing need for automated video analysis have generated a great deal of interest in machine vision algorithms. Numerous approaches for speed estimation have been proposed, so a classification and survey of these methods can be very useful. The goal of this paper is first to review and verify these methods. We then propose a novel algorithm to estimate the speed of a moving object by using fuzzy concepts. There is a direct relation between motion blur parameters and object speed. In our new approach, we use the Radon transform to find the direction of the blurred image, and fuzzy sets to estimate the motion blur length. The main benefit of this algorithm is its robustness and precision in noisy images. Our method was tested on many images over a wide range of SNR and gives satisfactory results.
Keywords: Blur Analysis, Fuzzy sets, Speed estimation.
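A common way to recover the blur direction mentioned above is to take the Radon transform of the log-magnitude spectrum of the blurred image and pick the angle whose projection has maximal variance. The sketch below follows that classical recipe; the paper's fuzzy estimation of blur length is not reproduced.

```python
# Estimate motion-blur orientation from the ripple pattern that linear motion
# blur leaves in the Fourier spectrum (classical recipe, not the paper's method).
import numpy as np
from skimage.transform import radon

def blur_direction(image, angles=np.arange(0.0, 180.0, 1.0)):
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
    spectrum -= spectrum.mean()                      # remove the DC bias
    sinogram = radon(spectrum, theta=angles, circle=False)
    # The projection with the largest variance is aligned with the blur ripples;
    # the reported angle may differ from the blur angle by a 90-degree convention.
    return angles[np.argmax(sinogram.var(axis=0))]

# usage (hypothetical): angle_deg = blur_direction(blurred_gray_image)
```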
A Comparison between Hybrid and Experimental Extended Polars for the Numerical Prediction of Vertical-Axis Wind Turbine Performance using Blade Element-Momentum Algorithm
Authors: Gabriele Bedon, Marco Raciti Castelli, Ernesto Benini
Abstract:
A dynamic stall-corrected Blade Element-Momentum algorithm based on a hybrid polar is validated through comparison with Sandia experimental measurements on a 5-m diameter Troposkien-shaped wind turbine. Different dynamic stall models are evaluated. The numerical predictions obtained using the extended aerodynamic coefficients provided by both Sheldahl and Klimas and by Raciti Castelli et al. are compared to experimental data, determining the potential of the hybrid database for the numerical prediction of vertical-axis wind turbine performance.
Keywords: Darrieus wind turbine, Blade Element-Momentum Theory, extended airfoil database, hybrid database, Sandia 5-m wind turbine.
Automatic Detection and Classification of Diabetic Retinopathy Using Retinal Fundus Images
Authors: A. Biran, P. Sobhe Bidari, A. Almazroe, V. Lakshminarayanan, K. Raahemifar
Abstract:
Diabetic Retinopathy (DR) is a severe retinal disease caused by diabetes mellitus. It leads to blindness when it progresses to the proliferative stage. Early indications of DR are the appearance of microaneurysms, hemorrhages and hard exudates. In this paper, an automatic algorithm for the detection of DR is proposed. The algorithm is based on a combination of several image processing techniques including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering and thresholding. A Support Vector Machine (SVM) classifier is also used to classify retinal images as normal or abnormal cases, the latter including non-proliferative and proliferative DR. The proposed method has been tested on images selected from the Structured Analysis of the Retina (STARE) database using MATLAB code. The method reliably detects DR: the sensitivity, specificity, and accuracy of this approach are 90%, 87.5%, and 91.4%, respectively.
Keywords: Diabetic retinopathy, fundus images, STARE, Gabor filter, SVM.
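A sketch of the enhancement, feature-extraction and classification stages named in this abstract (CLAHE, a Gabor filter bank, and an SVM), assuming 8-bit grayscale fundus images and caller-provided labels; the CHT/thresholding lesion detection and the STARE data handling are omitted.

```python
# CLAHE + Gabor-filter features + SVM classification (illustrative pipeline only).
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(gray):
    """gray: 8-bit grayscale image. Returns mean/std of 4 Gabor responses."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):     # 4 orientations
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        resp = cv2.filter2D(enhanced, cv2.CV_32F, kernel)
        feats += [resp.mean(), resp.std()]
    return np.array(feats)

def train_classifier(gray_images, labels):           # labels: 0 = normal, 1 = DR
    X = np.vstack([gabor_features(g) for g in gray_images])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)
```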
A Nonoblivious Image Watermarking System Based on Singular Value Decomposition and Texture Segmentation
Authors: Soroosh Rezazadeh, Mehran Yazdi
Abstract:
In this paper, a robust digital image watermarking scheme for copyright protection applications using the singular value decomposition (SVD) is proposed. In this scheme, an entropy masking model is applied to the host image for texture segmentation. Moreover, the local luminance and textures of the host image are considered in the watermark embedding procedure to increase the robustness of the watermarking scheme. In contrast to existing SVD-based watermarking systems that have been designed to embed visual watermarks, our system uses a pseudo-random sequence as the watermark. We have tested the performance of our method using a wide variety of image processing attacks on different test images. A comparison is made between the results of our proposed algorithm and those of a wavelet-based method to demonstrate the superior performance of our algorithm.
Keywords: Watermarking, copyright protection, singular value decomposition, entropy masking, texture segmentation.
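A minimal sketch of embedding a pseudo-random sequence into the singular values of an image block and detecting it by correlation, in the spirit of the scheme above; the entropy masking and texture segmentation steps are not reproduced, and the embedding strength alpha is an arbitrary choice.

```python
# SVD-based watermark embedding and (non-blind) correlation detection, illustrative.
import numpy as np

def embed(block, alpha=0.05, seed=42):
    U, S, Vt = np.linalg.svd(block.astype(float), full_matrices=False)
    w = np.random.default_rng(seed).choice([-1.0, 1.0], size=S.shape)  # PN sequence
    return U @ np.diag(S + alpha * S * w) @ Vt, S, w

def detect(suspect, original_S, w, alpha=0.05):
    S_sus = np.linalg.svd(suspect.astype(float), full_matrices=False, compute_uv=False)
    extracted = (S_sus - original_S) / (alpha * original_S + 1e-12)
    return float(np.corrcoef(extracted, w)[0, 1])    # near 1 => watermark present

# usage (hypothetical): marked, S0, w = embed(img_block); score = detect(marked, S0, w)
```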
A Method of Planar-Template-Based Camera Self-Calibration for Single-View
Abstract:
Camera calibration is an important step in 3D reconstruction. Camera calibration methods may be classified into two major types, traditional calibration and self-calibration; a calibration method using a checkerboard is intermediate between the two. In this paper, a self-calibration method based on a square is proposed. With only a square in the planar template, camera self-calibration can be completed from a single view. In the proposed algorithm, a virtual circle and straight lines are established from the square on the planar template, and the circular points, the vanishing points of the straight lines and the relations between them are used to obtain the image of the absolute conic (IAC) and establish the camera intrinsic parameters. The calibration template is simpler than that of Zhang Zhengyou's method. Simulated and real experiments show that this algorithm is feasible and available, and has a certain precision and robustness.
Keywords: Absolute conic, camera calibration, circular point, vanishing point.
Texture Characterization Based on a Chandrasekhar Fast Adaptive Filter
Authors: Mounir Sayadi, Farhat Fnaiech
Abstract:
In the framework of adaptive parametric modelling of images, we propose in this paper a new technique based on the Chandrasekhar fast adaptive filter for texture characterization. An Auto-Regressive (AR) linear model of texture is obtained by scanning the image row by row and modelling the data with an adaptive Chandrasekhar linear filter. The characterization efficiency of the obtained model is compared with that of a model adapted with the Least Mean Square (LMS) 2-D adaptive algorithm and with co-occurrence method features. The comparison criterion is based on the computation of a characterization degree using the ratio of "between-class" variances to "within-class" variances of the estimated coefficients. Extensive experiments show that the coefficients estimated with the Chandrasekhar adaptive filter give better results in texture discrimination than those estimated by the other algorithms, even in a noisy context.
Keywords: Texture analysis, statistical features, adaptive filters, Chandrasekhar algorithm.
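The characterization degree mentioned above compares texture classes through the ratio of between-class to within-class variance of the estimated model coefficients. A small sketch of that criterion (the Chandrasekhar filter itself is not implemented here):

```python
# Ratio of between-class to within-class variance of estimated AR coefficients.
import numpy as np

def characterization_degree(coeffs_per_class):
    """coeffs_per_class: list of arrays, each (n_samples, n_coeffs) for one texture."""
    class_means = np.array([c.mean(axis=0) for c in coeffs_per_class])
    overall_mean = class_means.mean(axis=0)
    between = ((class_means - overall_mean) ** 2).mean(axis=0)
    within = np.array([c.var(axis=0) for c in coeffs_per_class]).mean(axis=0)
    return float((between / (within + 1e-12)).mean())  # larger => better separation
```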
Multiple Object Tracking using Particle Swarm Optimization
Authors: Chen-Chien Hsu, Guo-Tang Dai
Abstract:
This paper presents a particle swarm optimization (PSO) based approach for multiple object tracking based on histogram matching. To start with, gray-level histograms are calculated to establish a feature model for each target object. The difference between the gray-level histogram corresponding to each particle in the search space and that of the target object is used as the fitness value. Multiple swarms are created depending on the number of target objects under tracking. Because of the efficiency and simplicity of the PSO algorithm for global optimization, target objects can be tracked as the iterations continue. Experimental results confirm that the proposed PSO algorithm converges rapidly, allowing real-time tracking of each target object. When the objects being tracked move outside the tracking range, the global search capability of the PSO resumes to re-trace the target objects.
Keywords: multiple object tracking, particle swarm optimization, gray-level histogram, image
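A compact sketch of PSO-driven window tracking with a gray-level-histogram fitness, as outlined above, for a single object and 8-bit grayscale frames; the window size, particle count and PSO coefficients are illustrative choices rather than the paper's settings.

```python
# One swarm searches a frame for the window whose histogram best matches the target.
import numpy as np

def hist(patch, bins=32):
    h, _ = np.histogram(patch, bins=bins, range=(0, 256), density=True)
    return h

def track(frame, target_hist, win=32, n_particles=30, iters=20, seed=0):
    """frame: 2D uint8 array; target_hist: histogram of the template from hist()."""
    rng = np.random.default_rng(seed)
    H, W = frame.shape
    pos = rng.uniform([0, 0], [H - win, W - win], size=(n_particles, 2))
    vel = np.zeros_like(pos)
    def fitness(p):
        y, x = int(p[0]), int(p[1])
        return np.abs(hist(frame[y:y + win, x:x + win]) - target_hist).sum()
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [0, 0], [H - win, W - win])
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest          # (row, col) of the best-matching window's top-left corner
```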
Knowledge Discovery and Data Mining Techniques in Textile Industry
Authors: Filiz Ersoz, Taner Ersoz, Erkin Guler
Abstract:
This paper addresses issues and techniques for the textile industry using data mining. Data mining has been applied to the stitching of garment products obtained from a textile company. The techniques applied to the data include the CHAID algorithm, the CART algorithm, regression analysis and artificial neural networks. Classification-based analyses were used, and a decision model for production per person and the variables affecting production was obtained by this method. The results show that as daily working time increases, production per person decreases; the relationship between total daily working time and production per person is negative and is the strongest relationship observed.
Keywords: Data mining, textile production, decision trees, classification.
Algorithm for Reconstructing 3D-Binary Matrix with Periodicity Constraints from Two Projections
Authors: V. Masilamani, Kamala Krithivasan
Abstract:
We study the problem of reconstructing three-dimensional binary matrices whose interiors are only accessible through a few projections. This question is prominently motivated by the demand in materials science for tools to reconstruct crystalline structures from images obtained by high-resolution transmission electron microscopy. Various approaches have been suggested to reconstruct a 3D object (crystalline structure) by reconstructing slices of the 3D object. To handle the ill-posedness of the problem, a priori information such as convexity, connectivity and periodicity is used to limit the number of possible solutions. Formally, a 3D object (crystalline structure) with a priori information is modeled by a class of 3D binary matrices satisfying that information. We consider 3D binary matrices with periodicity constraints, and we propose a polynomial-time algorithm to reconstruct them from two orthogonal projections.
Keywords: 3D-Binary Matrix Reconstruction, Computed Tomography, Discrete Tomography, Integral Max Flow Problem.
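The paper reconstructs periodic 3D matrices via a flow-based formulation; as background, the sketch below shows the classical greedy reconstruction of a single 2D binary matrix from its row and column sums, without any periodicity constraint.

```python
# Greedy reconstruction of a 2D binary matrix from row and column sums
# (the classical unconstrained sub-problem; periodicity handling is not included).
def reconstruct(row_sums, col_sums):
    m, n = len(row_sums), len(col_sums)
    A = [[0] * n for _ in range(m)]
    remaining = list(col_sums)
    # Process rows with the largest sums first, filling the fullest columns.
    for i in sorted(range(m), key=lambda i: -row_sums[i]):
        cols = sorted(range(n), key=lambda j: -remaining[j])[:row_sums[i]]
        if any(remaining[j] <= 0 for j in cols):
            return None                      # no binary matrix with these projections
        for j in cols:
            A[i][j] = 1
            remaining[j] -= 1
    return A if all(r == 0 for r in remaining) else None

print(reconstruct([2, 1, 2], [2, 2, 1]))
```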
Applying p-Balanced Energy Technique to Solve Liouville-Type Problems in Calculus
Authors: Lina Wu, Ye Li, Jia Liu
Abstract:
We are interested in solving Liouville-type problems to explore constancy properties for maps or differential forms on Riemannian manifolds. Geometric structures on manifolds, the existence of constancy properties for maps or differential forms, and energy growth for maps or differential forms are intertwined. In this article, we concentrate on the discovery of solutions to Liouville-type problems where the manifolds are Euclidean spaces (i.e. flat Riemannian manifolds) and the maps become real-valued functions. Liouville-type results on vanishing properties for functions are obtained. The original contribution of our research is to extend the q-energy of a function from finite in the Lq space to infinite in the non-Lq space by applying the p-balanced technique where q = p = 2. Calculation tools such as Hölder's inequality and tests for series have been used to evaluate limits and integrations of the function energy. The calculation ideas and computational techniques for solving Liouville-type problems shown in this article, which are applied in Euclidean spaces, can be generalized into an algorithm that works for both maps and differential forms on Riemannian manifolds. This algorithm has a far-reaching impact on research into Liouville-type problems in general settings involving infinite energy. The p-balanced technique in this algorithm provides a clue to extending the q-energy from finite to infinite.
Keywords: Differential Forms, Hölder Inequality, Liouville-type problems, p-balanced growth, p-harmonic maps, q-energy growth, tests for series.
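For reference, the Hölder inequality invoked above is the standard integral form (a textbook fact, stated here for convenience rather than taken from the paper):

```latex
\int_M |f\,g|\, d\mu \;\le\; \left(\int_M |f|^{p}\, d\mu\right)^{1/p}
\left(\int_M |g|^{q}\, d\mu\right)^{1/q},
\qquad \frac{1}{p}+\frac{1}{q}=1,\quad p,q>1.
```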
Algorithm for Information Retrieval Optimization
Authors: Kehinde K. Agbele, Kehinde Daniel Aruleba, Eniafe F. Ayetiran
Abstract:
When using Information Retrieval Systems (IRS), users often present search queries made of ad-hoc keywords. It is then up to the IRS to obtain a precise representation of the user's information need and the context of the information. This paper investigates the optimization of an IRS for individual information needs in order of relevance. The study addresses the development of algorithms that optimize the ranking of documents retrieved from an IRS, and describes a Document Ranking Optimization (DROPT) algorithm for information retrieval (IR) in an Internet-based or designated-database environment. As the volume of information available online and in designated databases grows continuously, ranking algorithms can play a major role in the context of search results. In this paper, a DROPT technique for documents retrieved from a corpus is developed with respect to document index keywords and the query vectors. This is based on calculating the weight (
Keywords: Internet ranking,
Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outspeed the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period without taking into account the load of the system. But with the huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data; they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence by using the Matching algorithm. Then, we propose a new cost model used for database partitioning, keeping the data amount of each partition within a balanced limit and providing parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This contributes to QoS (Quality of Service) improvement in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
Keywords: Real-Time Spatial Big Data, Quality Of Service, Vertical partitioning, Horizontal partitioning, Matching algorithm, Hamming distance, Stream query.
A Novel Convergence Accelerator for the LMS Adaptive Algorithm
Authors: Jeng-Shin Sheu, Jenn-Kaie Lain, Tai-Kuo Woo, Jyh-Horng Wen
Abstract:
The least mean square (LMS) algorithm is one of the most well-known algorithms for mobile communication systems due to its implementation simplicity. However, its main limitation is its relatively slow convergence rate. In this paper, a booster using the concept of Markov chains is proposed to speed up the convergence rate of LMS algorithms. The nature of Markov chains makes it possible to exploit past information in the updating process. Moreover, since the transition matrix has a smaller variance than that of the weight itself, by the central limit theorem, the weight transition matrix converges faster than the weight itself. Accordingly, the proposed Markov-chain based booster has the ability to track variations in signal characteristics and, meanwhile, accelerates the convergence rate of LMS algorithms. Simulation results show that the LMS algorithm can effectively increase its convergence rate and further approach the Wiener solution if the Markov-chain based booster is applied. The mean square error is also remarkably reduced, while the convergence rate is improved.
Keywords: LMS, Markov chain, convergence rate, accelerator.
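A baseline LMS loop, to make the convergence behaviour discussed above concrete; the Markov-chain booster itself is not implemented, and the unknown channel is a made-up example.

```python
# Baseline LMS update w <- w + mu * e * x for FIR system identification.
import numpy as np

rng = np.random.default_rng(1)
h_true = np.array([0.8, -0.4, 0.2])              # hypothetical unknown FIR channel
N, mu = 2000, 0.02
x = rng.standard_normal(N)
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)   # desired signal

w = np.zeros(3)
for n in range(2, N):
    x_vec = x[n-2:n+1][::-1]                     # [x[n], x[n-1], x[n-2]]
    e = d[n] - w @ x_vec                         # a-priori error
    w += mu * e * x_vec                          # LMS weight update
print("estimated taps:", w.round(3))             # should approach h_true
```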
The Catalytic Activity of Cu2O Microparticles
Authors: Kanda Wongwailikhit
Abstract:
Copper (I) oxide microparticles with cubic and hollow-sphere morphologies were synthesized with the assistance of a surfactant as the shape controller. Both types of particles were then used to study catalytic activity and to observe the effect of catalyst shape on the rate of the catalytic reaction. The decolorization reaction of crystal violet with sodium hydroxide was chosen, and the decrease of the reactant with respect to time was measured using a spectrophotometer. The results revealed that the crystal morphology had no effect on the catalytic activity for the crystal violet reaction; the total surface area was the predominant contribution.
Keywords: Copper (I) oxide, Catalytic activity, Crystal violet.
Modeling and Implementation of an Oceanic-Robot Glider
Authors: C. Clements, M. Hasenohr, A. Anvar
Abstract:
A glider is in essence an unpowered vehicle, and in this project we designed and built an oceanic glider intended to operate underwater. The glider was designed to collect ocean data such as temperature and pressure (and, in future, physical dimensions of the operating environment) and to output this data to an external source. Development of the oceanic glider required research into the various actuation systems that control buoyancy, pitch and yaw, and into the dynamics of these systems. It also involved the design and manufacture of the glider and the design and implementation of a controller that enabled the glider to navigate and move in an appropriate manner.
Keywords: Ocean Glider, Robot, Automation, Command, Control, Navigation.
X-Corner Detection for Camera Calibration Using Saddle Points
Authors: Abdulrahman S. Alturki, John S. Loomis
Abstract:
This paper discusses a corner detection algorithm for camera calibration. Calibration is a necessary step in many computer vision and image processing applications. Robust corner detection in an image of a checkerboard is required to determine the intrinsic and extrinsic parameters. In this paper, an algorithm for fully automatic and robust X-corner detection is presented. Checkerboard corner points are found automatically in each image without user interaction or any prior information regarding the number of rows or columns. The approach represents each X-corner with a quadratic fitting function. Using the fact that X-corners are saddle points, the coefficients of the fitting function are used to identify each corner location. The automation of this process greatly simplifies calibration. Our method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of our method using actual images acquired at different camera locations and orientations.
Keywords: Camera Calibration, Corner Detector, Saddle Points, X-Corners.
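A sketch of the saddle-point idea described above: fit a quadratic surface to a small patch around a candidate corner, verify that the critical point is a saddle (negative Hessian determinant), and return its sub-pixel location. The patch size and the surrounding detection pipeline are left to the caller.

```python
# Sub-pixel X-corner location from a least-squares quadratic fit of a patch.
import numpy as np

def saddle_point(patch):
    """patch: small 2D intensity array centred on a candidate X-corner."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ys, xs, z = ys.ravel(), xs.ravel(), patch.ravel().astype(float)
    # f(x, y) = a x^2 + b y^2 + c x y + d x + e y + g, fitted by least squares
    A = np.column_stack([xs**2, ys**2, xs*ys, xs, ys, np.ones_like(xs)])
    a, b, c, d, e, g = np.linalg.lstsq(A, z, rcond=None)[0]
    H = np.array([[2*a, c], [c, 2*b]])                # Hessian of the quadratic
    if np.linalg.det(H) >= 0:
        return None                                   # not a saddle point
    x0, y0 = np.linalg.solve(H, [-d, -e])             # where the gradient vanishes
    return x0, y0                                     # sub-pixel corner coordinates
```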
Clustering Unstructured Text Documents Using Fading Function
Authors: Pallav Roxy, Durga Toshniwal
Abstract:
Clustering unstructured text documents is an important issue in the data mining community and has a number of applications such as document archive filtering, document organization, topic detection and subject tracing. In the real world, some already clustered documents may lose importance while new, more significant documents may evolve. Most of the work done so far on clustering unstructured text documents overlooks this aspect of clustering. This paper addresses the issue by using a fading function. The unstructured text documents are clustered, and for each cluster a statistics structure called a Cluster Profile (CP) is maintained. The cluster profile incorporates the fading function, which keeps account of the time-dependent importance of the cluster. The work proposes a novel algorithm, the Clustering n-ary Merge Algorithm (CnMA), for unstructured text documents that uses the Cluster Profile and the fading function. Experimental results illustrating the effectiveness of the proposed technique are also included.
Keywords: Clustering, Text Mining, Unstructured Text Documents, Fading Function.
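A toy illustration of a time-fading cluster weight of the kind described above; the exponential fading form and decay constant are assumptions, and the paper's full Cluster Profile statistics are not reproduced.

```python
# A cluster's importance decays while it receives no new documents.
class ClusterProfile:
    def __init__(self, decay=0.05):
        self.decay = decay              # how quickly an idle cluster loses importance
        self.weight = 0.0
        self.last_time = 0.0

    def fade(self, now):
        self.weight *= 2.0 ** (-self.decay * (now - self.last_time))
        self.last_time = now

    def add_document(self, now):
        self.fade(now)
        self.weight += 1.0              # each newly assigned document refreshes the cluster

cp = ClusterProfile()
for t in (0, 1, 2, 60):                 # a burst of documents, then a long gap
    cp.add_document(t)
print(round(cp.weight, 3))              # earlier contributions have largely faded by t = 60
```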
Dynamic Slope Scaling Procedure for Stochastic Integer Programming Problem
Authors: Takayuki Shiina
Abstract:
Mathematical programming has been applied to various problems. For many actual problems, the assumption that the parameters involved are deterministic, known data is often unjustified. In such cases, these data contain uncertainty and are thus represented as random variables, since they represent information about the future. Decision-making under uncertainty involves potential risk. Stochastic programming is a commonly used method for optimization under uncertainty. A stochastic programming problem with recourse is referred to as a two-stage stochastic problem. In this study, we consider a stochastic programming problem with simple integer recourse, in which the value of the recourse variable is restricted to a multiple of a nonnegative integer. A dynamic slope scaling procedure for solving this problem is developed by using a property of the expected recourse function. Numerical experiments demonstrate that the proposed algorithm is quite efficient. The stochastic programming model defined in this paper is quite useful for a variety of design and operational problems.
Keywords: stochastic programming problem with recourse, simple integer recourse, dynamic slope scaling procedure
A Neuro-Fuzzy Approach Based Voting Scheme for Fault Tolerant Systems Using Artificial Bee Colony Training
Authors: D. Uma Devi, P. Seetha Ramaiah
Abstract:
Voting algorithms are extensively used to make decisions in fault tolerant systems where redundant modules give inconsistent outputs. Popular voting algorithms include majority voting, weighted voting, and inexact majority voters. Each of these techniques suffers from scenarios where agreement does not exist for the given voter inputs. This has been successfully overcome in the literature using fuzzy theory. Our previous work concentrated on a neuro-fuzzy algorithm in which training of the neural system substantially improved the prediction result of the voting system. However, weight training of the neural network is sub-optimal. This study proposes to optimize the weights of the neural network using the Artificial Bee Colony algorithm. Experimental results show that the proposed system improves the decision making of the voting algorithms.
Keywords: Voting algorithms, Fault tolerance, Fault masking, Neuro-Fuzzy System (NFS), Artificial Bee Colony (ABC)
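For context on the voter types mentioned above, a tiny inexact-majority voter: module outputs agree when they differ by at most eps, and the voter returns the median of the largest agreeing group, or None when no majority exists, which is exactly the situation the neuro-fuzzy approach is meant to handle. The threshold value is illustrative.

```python
# Inexact majority voter over numeric module outputs.
import statistics

def inexact_majority(outputs, eps=0.05):
    best_group = []
    for x in outputs:
        group = [y for y in outputs if abs(y - x) <= eps]
        if len(group) > len(best_group):
            best_group = group
    if len(best_group) * 2 > len(outputs):        # a strict majority agrees
        return statistics.median(best_group)
    return None                                    # disagreement: no safe output

print(inexact_majority([1.00, 1.02, 1.01]))        # -> about 1.01
print(inexact_majority([1.00, 1.50, 2.00]))        # -> None
```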
Relay Node Selection Algorithm for Cooperative Communications in Wireless Networks
Authors: Sunmyeng Kim
Abstract:
IEEE 802.11a/b/g standards support multiple transmission rates. Even though the use of multiple transmission rates increases WLAN capacity, this feature leads to the performance anomaly problem. Cooperative communication was introduced to relieve the performance anomaly problem: data packets are delivered to the destination much faster through a relay node at a high rate than through direct transmission to the destination at a low rate. In the legacy cooperative protocols, a source node chooses a relay node based only on the transmission rate. Therefore, they are not feasible in multi-flow environments, since they do not consider the effect of other flows. To alleviate this effect, we propose a new relay node selection algorithm based on both the transmission rate and the channel contention level. Performance evaluation is conducted using simulation and shows that the proposed protocol significantly outperforms the previous protocol in terms of throughput and delay.
Keywords: Cooperative communications, MAC protocol, Relay node, WLAN.
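A toy version of a relay-selection metric in the spirit of the proposal above: the effective rate of a two-hop transmission combines the two hop rates harmonically, and each candidate is penalised by its contention level. The exact weighting is an assumption, not the paper's formula.

```python
# Choose a relay only if its contention-penalised two-hop rate beats the direct rate.
def effective_rate(r_src_relay, r_relay_dst):
    """Mbps achieved when the same data crosses both hops sequentially."""
    return 1.0 / (1.0 / r_src_relay + 1.0 / r_relay_dst)

def select_relay(direct_rate, candidates):
    """candidates: list of (name, r_sr, r_rd, contention in [0, 1])."""
    best_name, best_rate = None, direct_rate
    for name, r_sr, r_rd, contention in candidates:
        rate = effective_rate(r_sr, r_rd) * (1.0 - contention)
        if rate > best_rate:
            best_name, best_rate = name, rate
    return best_name, best_rate      # None means direct transmission is better

print(select_relay(2.0, [("A", 11.0, 11.0, 0.2), ("B", 54.0, 5.5, 0.7)]))
```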
Mamdani Model based Adaptive Neural Fuzzy Inference System and its Application
Authors: Yuanyuan Chai, Limin Jia, Zundong Zhang
Abstract:
Hybrid algorithms are a hot issue in Computational Intelligence (CI) research. Starting from an in-depth discussion of the Simulation Mechanism Based (SMB) classification method and composite patterns, this paper presents the Mamdani model based Adaptive Neural Fuzzy Inference System (M-ANFIS) and its weight updating formula, taking into consideration the qualitative representation of inference consequent parts in fuzzy neural networks. The M-ANFIS model adopts the Mamdani fuzzy inference system, which has advantages in the consequent part. Experimental results of applying M-ANFIS to evaluate traffic level of service show that M-ANFIS, as a new hybrid algorithm in computational intelligence, has great advantages in non-linear modeling, membership functions in consequent parts, scale of training data and the number of adjusted parameters.
Keywords: Fuzzy neural networks, Mamdani fuzzy inference, M-ANFIS
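A minimal Mamdani inference step (min for rule firing, max for aggregation, centroid defuzzification) to make the consequent handling discussed above concrete; the membership functions and rules are made-up examples, not the M-ANFIS traffic model.

```python
# Two-input, one-output Mamdani fuzzy inference with triangular memberships.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on a scalar or array x."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def mamdani(flow, density):
    y = np.linspace(0.0, 1.0, 201)                   # discretised output universe
    # Rule 1: IF flow is low AND density is low THEN output is high
    w1 = min(tri(flow, 0.0, 0.0, 0.5), tri(density, 0.0, 0.0, 0.5))
    # Rule 2: IF flow is high OR density is high THEN output is low
    w2 = max(tri(flow, 0.5, 1.0, 1.0), tri(density, 0.5, 1.0, 1.0))
    high = np.minimum(w1, tri(y, 0.5, 1.0, 1.0))     # clipped consequent sets
    low = np.minimum(w2, tri(y, 0.0, 0.0, 0.5))
    agg = np.maximum(high, low)                      # max aggregation
    return float((y * agg).sum() / (agg.sum() + 1e-12))   # centroid defuzzification

print(round(mamdani(0.2, 0.3), 3))                   # mostly rule 1 -> output near 1
print(round(mamdani(0.9, 0.8), 3))                   # mostly rule 2 -> output near 0
```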
Groebner Bases Computation in Boolean Rings is P-SPACE
Authors: Quoc-Nam Tran
Abstract:
The theory of Groebner Bases, which has recently been honored with the ACM Paris Kanellakis Theory and Practice Award, has become a crucial building block of computer algebra and is widely used in science, engineering, and computer science. It is well known that Groebner bases computation is EXP-SPACE in a general polynomial ring setting. However, for many important applications in computer science, such as satisfiability and automated verification of hardware and software, computations are performed in a Boolean ring. In this paper, we give an algorithm to show that Groebner bases computation is P-SPACE in Boolean rings. We also show that, with this discovery, the Groebner bases method can theoretically be as efficient as other methods for automated verification of hardware and software. Additionally, Groebner bases have many useful and interesting properties, including the ability to efficiently convert a basis between different variable orderings, making Groebner bases a promising method in automated verification.
Keywords: Algorithm, Complexity, Groebner basis, Applications of Computer Science.
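A small illustration of the Boolean-ring setting using SymPy, where coefficients are taken modulo 2 and the field polynomials v^2 + v impose idempotence; this only demonstrates the setting, not the paper's P-SPACE algorithm, and assumes a SymPy version whose groebner accepts the modulus option.

```python
# Groebner basis of a Boolean system: work over GF(2) and add v**2 + v for each variable.
from sympy import symbols, groebner

x, y, z = symbols("x y z")
field = [x**2 + x, y**2 + y, z**2 + z]            # Boolean-ring relations
system = [x*y + z + 1]                            # encodes (x AND y) XOR z = 1
G = groebner(system + field, x, y, z, modulus=2, order="lex")
print(G)                                          # reduced basis over GF(2)
```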