Search results for: evolutionary algorithm.
1812 Image Adaptive Watermarking with Visual Model in Orthogonal Polynomials based Transformation Domain
Authors: Krishnamoorthi R., Sheba Kezia Malarchelvi P. D.
Abstract:
In this paper, an image adaptive, invisible digital watermarking algorithm with Orthogonal Polynomials based Transformation (OPT) is proposed for copyright protection of digital images. The proposed algorithm utilizes a visual model to determine the watermarking strength necessary to invisibly embed the watermark in the mid-frequency AC coefficients of the cover image, chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low-level image characteristics such as textures, edges and luminance of the cover image in the orthogonal polynomials based transformation domain. Since the secret key is required for both embedding and extraction of the watermark, it is not possible for an unauthorized user to extract the embedded watermark. The proposed scheme is robust to common image processing distortions like filtering, JPEG compression and additive noise. Experimental results show that the quality of OPT domain watermarked images is better than that of their DCT counterparts.
Keywords: Orthogonal Polynomials based Transformation, Digital Watermarking, Copyright Protection, Visual model.
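As an illustration of the embedding step described in this abstract, the following is a minimal Python sketch of JND-scaled additive embedding of a single watermark bit into a mid-band coefficient of a transformed block. The helper names, the key-seeded choice of position and the ±1 watermark symbol are assumptions made for illustration only; the OPT itself and the visual model that produces the JND mask are defined in the paper and are not reproduced here.

import numpy as np

def embed_bit(coeffs, wm_bit, jnd, mid_band_idx, key_seed):
    """Additively embed one watermark bit into a mid-frequency AC coefficient
    of a transformed block, with strength limited by the local JND value.
    `coeffs` is the block in the transform domain (OPT in the paper; any 2D
    block transform could stand in for this sketch)."""
    rng = np.random.default_rng(key_seed)        # the secret key drives the position choice
    r, c = mid_band_idx[rng.integers(len(mid_band_idx))]
    out = coeffs.copy()
    out[r, c] += jnd[r, c] * (1.0 if wm_bit else -1.0)   # +/- 1 watermark symbol
    return out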
1811 Bibliometric Analysis of the Research Progress on Graphene Inks from 2008 to 2018
Authors: Jean C. A. Sousa, Julio Cesar Maciel Santos, Andressa J. Rubio, Edneia A. S. Paccola, Natália U. Yamaguchi
Abstract:
A bibliometric analysis in the Web of Science database was used to identify the overall scientific output on graphene inks to date (2008 to 2018). The objective of this study was to evaluate the evolutionary tendency of graphene inks research and to identify its aspects, aiming to provide data that can guide future work. The contributions of different researchers, languages, thematic categories, periodicals, places of publication, institutes, funding agencies, cited articles and applications were analyzed. The results revealed a growing number of annual publications; of the 258 papers found, 107 met the inclusion criteria and were included. Three main applications were identified: synthesis and characterization, electronics and surfaces. The most relevant research on graphene inks is summarized in this article, and graphene inks for electronic devices was the most prevalent theme according to the research trends during the studied period. It is estimated that this theme will remain prominent and will contribute to the direction of future research in this area.
Keywords: Bibliometric, coating, nanomaterials, scientometrics.
1810 DACS3: Embedding Individual Ant Behavior in Ant Colony System
Authors: Zulaiha Ali Othman, Helmi Md Rais, Abdul Razak Hamdan
Abstract:
Ants are fascinating creatures that demonstrate the ability to find food and bring it back to their nest. Their ability, as a colony, to find paths to food sources has inspired the development of algorithms known as Ant Colony Systems (ACS). The principle of cooperation forms the backbone of such algorithms, commonly used to find solutions to problems such as the Traveling Salesman Problem (TSP). Ants communicate with each other through chemical substances called pheromones. Modeling individual ants' ability to manipulate this substance can help an ACS find the best solution. This paper introduces a Dynamic Ant Colony System with three-level updates (DACS3) that enhances an existing ACS. Experiments were conducted to observe single ant behavior in a colony of Malaysian House Red Ants, and this behavior was incorporated into the DACS3 algorithm. We benchmark the performance of DACS3 against DACS on TSP instances ranging from 14 to 100 cities. The results show that the DACS3 algorithm achieves shorter distances in most cases and also performs considerably faster than DACS.
Keywords: Dynamic Ant Colony System (DACS), Traveling Salesman Problem (TSP), Optimization, Swarm Intelligence.
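As a point of reference for the pheromone-guided tour construction mentioned above, here is a minimal sketch of the standard ACS state-transition (pseudo-random proportional) rule in Python. It illustrates generic ACS behavior only; the three-level pheromone updates that distinguish DACS3 are described in the paper and are not shown.

import random

def next_city(current, unvisited, tau, eta, q0=0.9, alpha=1.0, beta=2.0):
    """ACS state-transition rule: with probability q0 pick the best city
    greedily (exploitation), otherwise sample proportionally to pheromone
    tau and heuristic desirability eta (exploration). tau and eta are dicts
    keyed by (i, j) city pairs; this is a generic ACS sketch, not DACS3."""
    scores = {j: (tau[(current, j)] ** alpha) * (eta[(current, j)] ** beta)
              for j in unvisited}
    if random.random() < q0:
        return max(scores, key=scores.get)
    total = sum(scores.values())
    r, acc = random.random() * total, 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j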
1809 MATLAB/SIMULINK Based Model of Single-Machine Infinite-Bus with TCSC for Stability Studies and Tuning Employing GA
Authors: Sidhartha Panda, Narayana Prasad Padhy
Abstract:
With constraints on data availability, and for the study of power system stability, it is adequate to model the synchronous generator with the field circuit and one equivalent damper on the q-axis, known as model 1.1. This paper presents a systematic procedure for modelling and simulation of a single-machine infinite-bus power system installed with a thyristor controlled series compensator (TCSC), where the synchronous generator is represented by model 1.1, so that the impact of the TCSC on power system stability can be more reasonably evaluated. The model of the example power system is developed using MATLAB/SIMULINK and can be used for teaching power system stability phenomena, as well as for research work, especially the development of generator controllers using advanced technologies. Further, the parameters of the TCSC controller are optimized using a genetic algorithm. Non-linear simulation results are presented to validate the effectiveness of the proposed approach.
Keywords: Genetic algorithm, MATLAB/SIMULINK, modelling and simulation, power system stability, single-machine infinite-bus power system, thyristor controlled series compensator.
1808 Efficient Large Numbers Karatsuba-Ofman Multiplier Designs for Embedded Systems
Authors: M. Machhout, M. Zeghid, W. El Hadj Youssef, B. Bouallegue, A. Baganne, R. Tourki
Abstract:
Long-number multiplications (n ≥ 128 bits) are a primitive in most cryptosystems. They can be performed better by using the Karatsuba-Ofman technique. This algorithm is easy to parallelize on workstation networks and on distributed memory, and it is known as the practical method of choice. Multiplying long numbers using the Karatsuba-Ofman algorithm is fast but highly recursive. In this paper, we propose different designs implementing the Karatsuba-Ofman multiplier. A mixture of sequential and combinational system design techniques involving pipelining is applied to our proposed designs. The multipliers can be adapted flexibly to time, area and power criteria. In computationally and area constrained embedded systems, such as smart cards and mobile phones, multiplication of finite field elements can thus be achieved more efficiently. The proposed designs are compared to other existing techniques. Mathematical models (Area(n), Delay(n)) of our proposed designs are also elaborated and evaluated on different FPGA devices.
Keywords: finite field, Karatsuba-Ofman, long numbers, multiplication, mathematical model, recursion.
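For readers unfamiliar with the underlying arithmetic, the following short Python sketch shows the recursive Karatsuba-Ofman multiplication that the hardware designs implement: each recursion replaces four half-size products with three. The software recursion is only an illustration of the algorithm, not of the proposed pipelined hardware architectures.

def karatsuba(x, y, cutoff=64):
    """Recursive Karatsuba-Ofman multiplication of non-negative integers.
    Three half-size products replace the four of the schoolbook method."""
    if x < (1 << cutoff) or y < (1 << cutoff):
        return x * y                      # small operands: plain multiplication
    n = max(x.bit_length(), y.bit_length())
    m = n // 2
    xh, xl = x >> m, x & ((1 << m) - 1)   # split x = xh*2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)
    p1 = karatsuba(xh, yh, cutoff)
    p2 = karatsuba(xl, yl, cutoff)
    p3 = karatsuba(xh + xl, yh + yl, cutoff) - p1 - p2
    return (p1 << (2 * m)) + (p3 << m) + p2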
1807 Estimation of Systolic and Diastolic Pressure using the Pulse Transit Time
Authors: Soo-young Ye, Gi-Ryon Kim, Dong-Keun Jung, Seong-wan Baik, Gye-rok Jeon
Abstract:
In this paper, an algorithm for estimating blood pressure from the pulse transit time (PTT) is proposed as a more convenient method of measuring blood pressure. After measuring the ECG, the pressure pulse and the photoplethysmogram, the PTT was calculated from the acquired signals. A system to indirectly measure the systolic and diastolic pressures was then built using a statistical method. Comparing the blood pressure indirectly measured by the proposed estimation algorithm with the real blood pressure measured by a conventional sphygmomanometer, the systolic pressure shows a mean error of ±3.24 mmHg with a standard deviation of 2.53 mmHg, while the diastolic pressure shows a satisfactory result of a mean error of ±1.80 mmHg with a standard deviation of 1.39 mmHg. These results satisfy the ANSI/AAMI regulation for certification of sphygmomanometers, which requires the measurement error to be within a mean error of ±5 mmHg and a standard deviation of 8 mmHg. The results suggest the possibility of applying the method to portable and long-term blood pressure monitoring systems.
Keywords: Blood pressure, Systolic, Diastolic, Pulse transit time.
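A minimal sketch of the kind of statistical mapping described above is shown below: a least-squares linear fit from PTT to blood pressure, calibrated against reference cuff measurements. The linear form and the numbers in the example are assumptions for illustration; the paper's exact regression model and coefficients are not reproduced.

import numpy as np

def fit_bp_model(ptt, bp):
    """Fit BP ~ a*PTT + b by least squares from calibration pairs measured
    against a reference sphygmomanometer (a statistical mapping in the same
    spirit as the paper; the exact regression form there may differ)."""
    A = np.vstack([ptt, np.ones_like(ptt)]).T
    (a, b), *_ = np.linalg.lstsq(A, bp, rcond=None)
    return a, b

# Hypothetical calibration data (seconds, mmHg) purely for illustration.
ptt = np.array([0.26, 0.28, 0.30, 0.32, 0.34])
sbp = np.array([128.0, 122.0, 117.0, 111.0, 106.0])
a, b = fit_bp_model(ptt, sbp)
print("estimated SBP at PTT = 0.29 s:", a * 0.29 + b)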
1806 Optimized Data Fusion in an Intelligent Integrated GPS/INS System Using Genetic Algorithm
Authors: Ali Asadian, Behzad Moshiri, Ali Khaki Sedigh, Caro Lucas
Abstract:
Most integrated inertial navigation system (INS) and global positioning system (GPS) implementations have used the Kalman filtering technique, with its drawbacks of requiring a predefined INS error model and observability of at least four satellites. More recently, a method using a hybrid adaptive network based fuzzy inference system (ANFIS) has been proposed, which is trained while the GPS signal is available to map the error between the GPS and the INS; it is then used to predict the error of the INS position components during GPS signal blockage. This paper introduces a genetic optimization algorithm that updates the ANFIS parameters with respect to the INS/GPS error function used as the objective function to be minimized. The results demonstrate the advantages of the genetically optimized ANFIS for INS/GPS integration in comparison with a conventional ANFIS, especially in cases of satellite outages. Coping with this problem plays an important role in assessing the fusion approach for land navigation.
Keywords: Adaptive Network based Fuzzy Inference System (ANFIS), Genetic optimization, Global Positioning System (GPS), Inertial Navigation System (INS).
1805 Low Energy Method for Data Delivery in Ubiquitous Network
Authors: Tae Kyung Kim, Hee Suk Seo
Abstract:
Recent advances in wireless sensor networks have led to many routing methods designed for energy efficiency. Although many routing methods have been proposed for ubiquitous sensor networks (USNs), no single routing method remains energy-efficient when the network environment varies. We also consider controlling network access to the various hosts and the services they offer, rather than securing them one by one, within a network security model. When ubiquitous sensor networks are deployed in hostile environments, an adversary may compromise some sensor nodes and use them to inject false sensing reports. False reports can lead not only to false alarms but also to the depletion of the limited energy resources of battery-powered networks. The interleaved hop-by-hop authentication scheme detects such false reports through interleaved authentication. This paper presents a LMDD (low energy method for data delivery) algorithm that provides energy efficiency by dynamically changing the protocols installed at the sensor nodes. The algorithm changes protocols based on the output of fuzzy logic, which expresses the fitness level of each protocol for the environment.
Keywords: Data delivery, routing, simulation.
1804 The Tag Authentication Scheme using Self-Shrinking Generator on RFID System
Authors: HangRok Lee, DoWon Hong
Abstract:
Since communications between the tag and reader in an RFID system are by radio, anyone can access the tag and obtain its information. Moreover, a tag always replies with the same ID, so it is hard to distinguish between a real and a fake tag. Thus, there are many security problems in today's RFID systems. First, an unauthorized reader can easily read the ID information of any tag. Second, an adversary can easily cheat a legitimate reader using collected tag ID information, acting as a legitimate tag. These security problems are typically solved by encrypting the messages transmitted between tag and reader and by authenticating the tag. In this paper, to solve these security problems in RFID systems, we propose a tag authentication scheme based on the self-shrinking generator (SSG). The SSG algorithm used in our scheme was proposed by W. Meier and O. Staffelbach at EUROCRYPT '94. The algorithm is organized so that only one LFSR and selection logic are needed to generate a random stream. It is therefore well suited to hardware implementation on devices with extremely limited resources, and the output generated by the SSG at each step acts as a random stream, allowing us to design a lightweight authentication scheme that is secure against some network attacks. Therefore, we propose a novel tag authentication scheme which uses the SSG to encrypt the tag ID transmitted from tag to reader and achieves authentication of the tag.
Keywords: RFID system, RFID security, self-shrinking generator, authentication, protocol.
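Since the SSG is a published construction (Meier and Staffelbach), a compact Python sketch of it is shown below: an LFSR produces bit pairs (a, b), and b is output only when a = 1. The LFSR length, taps and seed in the example are illustrative choices, not parameters from the paper's scheme.

def lfsr(state, taps, nbits):
    """Fibonacci LFSR: yields one output bit per step."""
    while True:
        out = state & 1
        fb = 0
        for t in taps:                 # XOR of tapped bits gives the feedback
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
        yield out

def self_shrinking(bit_source, n):
    """Self-shrinking generator: take bit pairs (a, b); output b only when
    a == 1, otherwise discard the pair."""
    out = []
    while len(out) < n:
        a, b = next(bit_source), next(bit_source)
        if a == 1:
            out.append(b)
    return out

# Example with a 16-bit LFSR; taps and seed are illustrative only.
print(self_shrinking(lfsr(0xACE1, taps=(0, 2, 3, 5), nbits=16), 32))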
1803 Fast Intra Prediction Algorithm for H.264/AVC Based on Quadratic and Gradient Model
Authors: A. Elyousfi, A. Tamtaoui, E. Bouyakhf
Abstract:
The H.264/AVC standard uses intra prediction with 9 directional modes for 4x4 and 8x8 luma blocks and 4 directional modes for 16x16 macroblocks and 8x8 chroma blocks. This means that, for a macroblock, 736 different RDO calculations have to be performed before the best RDO mode is determined. With this multiple intra-mode prediction, intra coding in H.264/AVC offers a considerably higher improvement in coding efficiency compared to other compression standards, but the computational complexity is increased significantly. This paper presents a fast intra prediction algorithm for H.264/AVC based on homogeneity information. In this study, a gradient prediction method is used to predict homogeneous areas and a quadratic prediction function is used to predict non-homogeneous areas. Based on the correlation between homogeneity and block size, smaller blocks are predicted by both gradient and quadratic prediction, while bigger blocks are predicted by gradient prediction only. Experimental results show that the proposed method reduces the complexity by up to 76.07% while maintaining similar PSNR quality, with about a 1.94% bit rate increase on average.
Keywords: Intra prediction, H.264/AVC, video coding, encoder complexity.
1802 Beam Orientation Optimization Using Ant Colony Optimization in Intensity Modulated Radiation Therapy
Authors: Xi Pei, Ruifen Cao, Hui Liu, Chufeng Jin, Mengyun Cheng, Huaqing Zheng, Yican Wu, FDS Team
Abstract:
In intensity modulated radiation therapy (IMRT) treatment planning, beam angles are usually preselected on the basis of experience and intuition, so obtaining an appropriate beam configuration takes a very long time. Given this situation, the paper puts forward beam orientation optimization using ant colony optimization (ACO). We use ACO to select the beam configurations and, after obtaining a beam configuration, use the Conjugate Gradient (CG) algorithm to optimize the intensity profiles. Combined with information on the effect of the pencil beams, the global optimal solution can be reached more quickly. In order to verify the feasibility of the presented method, a simulated and a clinical case were tested, comparing the dose-volume histograms and isodose lines between the target area and the organs at risk. The results showed that the effect was improved after optimizing the beam configurations. The optimization approach could make treatment planning meet clinical requirements more efficiently, so it has broad application prospects.
Keywords: intensity modulated radiation therapy, ant colony optimization, Conjugate Gradient algorithm.
1801 Optimal Control Strategy for High Performance EV Interior Permanent Magnet Synchronous Motor
Authors: Mehdi Karbalaye Zadeh, Ehsan M. Siavashi
Abstract:
The controllable electrical loss, which consists of the copper loss and iron loss, can be minimized by optimal control of the armature current vector. A control algorithm for the current vector minimizing the electrical loss is proposed, and the optimal current vector can be decided according to the operating speed and the load conditions. The proposed control algorithm is applied to an experimental PM motor drive system, and this paper presents a modern approach to speed control of a permanent magnet synchronous motor (PMSM) applied to an electric vehicle using nonlinear control. The regulation algorithms are based on the feedback linearization technique. The direct component of the current is controlled to be zero, which ensures maximum torque operation. Near-unity power factor operation is also achieved. Moreover, among the features of EV electric propulsion motors, energy efficiency is a basic characteristic that is influenced by vehicle dynamics and system architecture. For this reason, the EV dynamics are taken into account.
Keywords: PMSM, Electric Vehicle, Optimal control, Traction.
1800 Hybrid Anomaly Detection Using Decision Tree and Support Vector Machine
Authors: Elham Serkani, Hossein Gharaee Garakani, Naser Mohammadzadeh, Elaheh Vaezpour
Abstract:
Intrusion detection systems (IDS) are the main components of network security. These systems analyze network events for intrusion detection. An IDS is designed by training on normal traffic data or attack data, and machine learning methods are among the best ways to design IDSs. In the method presented in this article, the pruning algorithm of the C5.0 decision tree is used to reduce the features of the traffic data, and the IDS is trained with the least squares support vector machine (LS-SVM) algorithm. The remaining features are then ranked according to the predictor importance criterion, and the least important features are eliminated in order. The features remaining at this stage, which produce the highest accuracy in the LS-SVM, are selected as the final features. Compared to other similar articles that have examined feature selection for the least squares support vector machine model, the obtained features give better accuracy, true positive rate and false positive rate. The results are tested on the UNSW-NB15 dataset.
Keywords: Intrusion detection system, decision tree, support vector machine, feature selection.
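A rough sketch of this feature-selection-plus-classifier pipeline is given below using scikit-learn. Note the substitutions: a CART DecisionTreeClassifier stands in for C5.0 pruning and a standard RBF SVC stands in for LS-SVM, since neither C5.0 nor LS-SVM ships with scikit-learn; the ranking-and-elimination loop follows the procedure described in the abstract.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def select_features(X, y, min_keep=5):
    """Rank features by decision-tree importance, then drop the least
    important ones one at a time, keeping the subset that maximizes
    cross-validated SVM accuracy."""
    imp = DecisionTreeClassifier(random_state=0).fit(X, y).feature_importances_
    order = np.argsort(imp)[::-1]                  # most important first
    best_subset, best_acc = order, 0.0
    for k in range(X.shape[1], min_keep - 1, -1):
        subset = order[:k]
        acc = cross_val_score(SVC(kernel="rbf"), X[:, subset], y, cv=3).mean()
        if acc > best_acc:
            best_subset, best_acc = subset, acc
    return best_subset, best_acc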
1799 Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise
Authors: Hyunsup Yoon, Youngjoon Han, Hernsoo Hahn
Abstract:
In order to enhance the contrast in regions where pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, producing pixels that are too bright or too dark, while local equalization schemes produce unexpected discontinuities at the boundaries of the blocks. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram with a limited extent of equalization, considering its mean and variance. The final image is determined as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since these regions are equalized separately. The paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested with more than 100 images of various contrasts, and the results are compared to conventional approaches to show its superiority.
Keywords: Contrast Enhancement, Histogram Equalization, Histogram Region Equalization, Equalization Noise
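The following is a minimal numpy sketch of the core idea of splitting the histogram at a brightness level (here simply the global mean) and equalizing each sub-histogram only within its own range. The variance-based limits on the extent of equalization and the weighted recombination described in the abstract are omitted for brevity.

import numpy as np

def sub_histogram_equalize(img):
    """Split the histogram at the mean intensity and equalize each
    sub-histogram only within its own intensity range, which avoids the
    over-equalization of plain global histogram equalization."""
    img = img.astype(np.uint8)
    out = np.empty_like(img)
    mean = int(img.mean())
    for lo, hi, mask in [(0, mean, img <= mean), (mean + 1, 255, img > mean)]:
        vals = img[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = hist.cumsum() / vals.size
        # map each sub-range back onto itself instead of onto [0, 255]
        out[mask] = lo + np.round(cdf[vals - lo] * (hi - lo)).astype(np.uint8)
    return out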
1798 Two States Mapping Based Neural Network Model for Decreasing of Prediction Residual Error
Authors: Insung Jung, lockjo Koo, Gi-Nam Wang
Abstract:
The objective of this paper is to design a model for human vital sign prediction that decreases the prediction error by using a two-state mapping based time series neural network back-propagation (BP) model. Many industries have applied neural network models trained in a supervised manner with the error back-propagation algorithm for time series prediction systems. However, a residual error still remains between the real value and the prediction output. We therefore designed a two-state neural network model to compensate for this residual error, which can be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. We found that most simulation cases were handled satisfactorily by the two-state mapping based time series prediction model compared to the normal BP model. In particular, small-sample time series were predicted more accurately than with the standard MLP model. We expect that this algorithm can be applied to sudden death prevention and monitoring agent systems in a ubiquitous homecare environment.
Keywords: Neural network, U-healthcare, prediction, time series, computer aided prediction.
1797 Design and Implementation of Secure Electronic Payment System (Client)
Authors: Pyae Pyae Hun
Abstract:
A secure electronic payment system is presented in this paper. The electronic payment system is intended to be secure for clients such as customers and shop owners. The security architecture of the system is designed around the RC5 encryption/decryption algorithm, which eliminates the fraud that occurs today with stolen credit card numbers. The symmetric key cryptosystem RC5 can protect conventional transaction data such as account numbers, amounts and other information. This is done electronically using an RC5 encryption/decryption program written in Microsoft Visual Basic 6.0. There is no danger of any data sent within the system being intercepted and replaced. The alternative is to use the existing network and to encrypt all data transmissions. The system with encryption is acceptably secure, but the level of encryption has to be stepped up as computing power increases. To secure the system, the communication between modules is encrypted using the symmetric key cryptosystem RC5. The system uses a simple user name, password, user ID, user type and cipher authentication mechanism for identification when the user first enters the system; this is the most common method of authentication in most computer systems.
Keywords: A 128-bit block cipher, Microsoft Visual Basic 6.0, RC5 encryption/decryption algorithm and TCP/IP protocol.
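RC5 itself is a published algorithm (Rivest, 1994), so a compact Python sketch of its key expansion and block encryption is shown below for reference. The paper's keyword list mentions a 128-bit block, which corresponds to the 64-bit-word variant; this sketch uses the more common 32-bit-word, 12-round variant (RC5-32/12) and is not the paper's Visual Basic 6.0 implementation.

P32, Q32 = 0xB7E15163, 0x9E3779B9   # RC5 magic constants for 32-bit words
MASK = 0xFFFFFFFF

def rotl(x, y):
    y &= 31
    return ((x << y) | (x >> (32 - y))) & MASK

def rc5_key_schedule(key, rounds=12):
    """RC5-32 key expansion: pack the key bytes into words L, fill S with
    the magic constants, then mix S and L together."""
    c = max(1, (len(key) + 3) // 4)
    L = [0] * c
    for i, b in enumerate(key):
        L[i // 4] = (L[i // 4] + (b << (8 * (i % 4)))) & MASK
    t = 2 * (rounds + 1)
    S = [(P32 + i * Q32) & MASK for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):
        A = S[i] = rotl((S[i] + A + B) & MASK, 3)
        B = L[j] = rotl((L[j] + A + B) & MASK, (A + B) & MASK)
        i, j = (i + 1) % t, (j + 1) % c
    return S

def rc5_encrypt_block(pt, S, rounds=12):
    """Encrypt one 64-bit block given as a pair of 32-bit words."""
    A = (pt[0] + S[0]) & MASK
    B = (pt[1] + S[1]) & MASK
    for r in range(1, rounds + 1):
        A = (rotl(A ^ B, B) + S[2 * r]) & MASK
        B = (rotl(B ^ A, A) + S[2 * r + 1]) & MASK
    return A, B

S = rc5_key_schedule(bytes.fromhex("00" * 16))
print(rc5_encrypt_block((0, 0), S))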
1796 Analysis of Genotype Size for an Evolvable Hardware System
Authors: Emanuele Stomeo, Tatiana Kalganova, Cyrille Lambert
Abstract:
The evolution of logic circuits, which falls under the heading of evolvable hardware, is carried out by evolutionary algorithms. These algorithms are able to automatically configure reconfigurable devices. One of the main difficulties in developing evolvable hardware with the ability to design functional electrical circuits is choosing the most favourable EA features, such as the fitness function, chromosome representation, population size, genetic operators and individual selection. Until now, several researchers from the evolvable hardware community have used and tuned these parameters, and various rules on how to select the value of a particular parameter have been proposed. However, to date, no one has presented a study regarding the size of the chromosome representation (circuit layout) to be used as a platform for the evolution in order to increase evolvability, reduce the number of generations and optimize the digital logic circuits by reducing the number of logic gates. In this paper this topic is thoroughly investigated and optimal parameters for these EA features are proposed. The evolution of logic circuits is carried out by an extrinsic evolvable hardware system which uses a (1+λ) evolution strategy as the core of the evolution.
Keywords: Evolvable hardware, genotype size, computational intelligence, design of logic circuits.
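As a reference point for the search core mentioned in the abstract, here is a minimal Python sketch of a (1+λ) evolution strategy over a fixed-length binary genotype (standing in for an encoded circuit layout). The toy fitness function, mutation rate and genotype length are illustrative assumptions; the actual fitness evaluation of evolved circuits is defined by the evolvable hardware system.

import random

def one_plus_lambda_es(fitness, genotype_len, lam=4, mut_rate=0.05, generations=1000):
    """(1+lambda) evolution strategy: keep one parent, create lambda mutated
    offspring per generation, and promote the best offspring only if it is
    at least as fit as the parent."""
    parent = [random.randint(0, 1) for _ in range(genotype_len)]
    best = fitness(parent)
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            child = [g ^ (random.random() < mut_rate) for g in parent]  # bit-flip mutation
            offspring.append((fitness(child), child))
        f, child = max(offspring, key=lambda t: t[0])
        if f >= best:            # ">=" lets the search drift across fitness plateaus
            parent, best = child, f
    return parent, best

# Toy fitness purely for illustration: count of 1s in the genotype.
print(one_plus_lambda_es(sum, genotype_len=32)[1])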
1795 An Energy-Efficient Distributed Unequal Clustering Protocol for Wireless Sensor Networks
Authors: Sungju Lee, Jangsoo Lee, Hongjoong Sin, Seunghwan Yoo, Sanghyuck Lee, Jaesik Lee, Yongjun Lee, Sungchun Kim
Abstract:
Wireless sensor networks have been extensively deployed and researched. One of the major issues in wireless sensor networks is developing an energy-efficient clustering protocol, since clustering provides an effective way to prolong the lifetime of a wireless sensor network. In this paper, we compare several clustering protocols which significantly affect the balancing of energy consumption, and we propose an Energy-Efficient Distributed Unequal Clustering (EEDUC) algorithm which provides a new way of creating distributed clusters. In EEDUC, each sensor node sets a waiting time, considered as a function of its residual energy and the number of neighboring nodes; EEDUC uses this waiting time to distribute the cluster heads. We also propose an unequal clustering mechanism to solve the hot-spot problem. Simulation results show that EEDUC distributes the cluster heads well, balances the energy consumption among the cluster heads, and increases the network lifetime.
Keywords: Wireless Sensor Network, Distributed Unequal Clustering, Multi-hop, Lifetime.
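A small sketch of the waiting-time idea is given below: nodes with more residual energy and more neighbors delay less before announcing themselves as cluster heads. The abstract states only that the waiting time is a function of residual energy and neighborhood size, so the linear weighting used here is purely a hypothetical choice.

def waiting_time(residual_energy, max_energy, num_neighbors, max_neighbors,
                 t_max=1.0, w_energy=0.7):
    """Cluster-head candidacy delay: stronger candidates wait less and so
    announce themselves first. The weighting is a hypothetical choice; the
    abstract only names the two inputs."""
    e = residual_energy / max_energy
    d = num_neighbors / max_neighbors if max_neighbors else 0.0
    return t_max * (1.0 - (w_energy * e + (1.0 - w_energy) * d))

print(waiting_time(0.9, 1.0, 8, 10))   # strong candidate -> short wait
print(waiting_time(0.3, 1.0, 2, 10))   # weak candidate -> long wait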
1794 A New Source Code Auditing Algorithm for Detecting LFI and RFI in PHP Programs
Authors: Seyed Ali Mir Heydari, Mohsen Sayadiharikandeh
Abstract:
Static analysis of source code is used when auditing web applications to detect vulnerabilities. In this paper, we propose a new algorithm to analyze PHP source code for detecting potential LFI and RFI vulnerabilities. In our approach, we first define patterns for finding functions which have the potential to be abused because of unhandled user inputs. More precisely, we use regular expressions as a fast and simple method to define patterns for the detection of vulnerabilities. As inclusion functions can also be used in a safe way, many false positives (FPs) could occur. The first cause of these FPs could be that the function does not use a user-supplied variable as an argument, so we extract a list of user-supplied variables to be used for detecting vulnerable lines of code. On the other side, as a vulnerability can spread among variables, for example through multi-level assignment, we also try to extract the hidden user-supplied variables. We use the resulting list to decrease the false positives of our method. Finally, as there exist ways to prevent the vulnerability of inclusion functions, we also define patterns to detect them and further decrease our false positives.
Keywords: User-supplied variables, hidden user-supplied variables, PHP vulnerabilities.
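A minimal sketch of the pattern-based scanning stage is shown below in Python (the paper targets PHP source as the input, not as the implementation language). The regular expressions and the simple taint set are illustrative simplifications of the approach; multi-level assignments, hidden user-supplied variables and the sanitization patterns described above are not handled here.

import re

# Inclusion calls whose argument contains a PHP variable are flagged as
# candidate LFI/RFI sinks; the variable is then checked against the list
# of user-supplied sources such as $_GET/$_POST (simplified).
INCLUDE_RE = re.compile(
    r'\b(include|include_once|require|require_once)\s*\(?\s*[^;]*\$(\w+)', re.I)
SOURCE_RE = re.compile(r'\$(\w+)\s*=\s*\$_(GET|POST|REQUEST|COOKIE)\b')

def scan_php(lines):
    tainted = set()
    findings = []
    for no, line in enumerate(lines, 1):
        for m in SOURCE_RE.finditer(line):
            tainted.add(m.group(1))                 # direct user-supplied variable
        m = INCLUDE_RE.search(line)
        if m and m.group(2) in tainted:
            findings.append((no, line.strip()))
    return findings

code = ['$page = $_GET["page"];', 'include($page . ".php");']
print(scan_php(code))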
1793 Join and Meet Block Based Default Definite Decision Rule Mining from IDT and an Incremental Algorithm
Authors: Chen Wu, Jingyu Yang
Abstract:
Using maximal consistent blocks of the tolerance relation on the universe of an incomplete decision table, the concepts of join block and meet block are introduced and studied. Other blocks of an object, such as the tolerance class, tolerant kernel and compatible kernel, are also discussed. Upper and lower approximations based on those blocks are defined as well. Default definite decision rules acquired from an incomplete decision table are proposed in the paper, and an incremental algorithm to update default definite decision rules is suggested for effective mining from an incomplete decision table into which data are appended. Through an example, we demonstrate how default definite decision rules based on maximal consistent blocks, join blocks and meet blocks are acquired, and how optimization is done with the support of the discernibility matrix and discernibility function in the incomplete decision table.
Keywords: rough set, incomplete decision table, maximal consistent block, default definite decision rule, join and meet block.
1792 Automatic 3D Reconstruction of Coronary Artery Centerlines from Monoplane X-ray Angiogram Images
Authors: Ali Zifan, Panos Liatsis, Panagiotis Kantartzis, Manolis Gavaises, Nicos Karcanias, Demosthenes Katritsis
Abstract:
We present a new method for the fully automatic 3D reconstruction of coronary artery centerlines, using two X-ray angiogram projection images from a single rotating monoplane acquisition system. During the first stage, the input images are smoothed using curve evolution techniques. Next, a simple yet efficient multiscale method for the enhancement of the vascular structure, based on the information in the Hessian matrix, is introduced. Hysteresis thresholding using different image quantiles is used to threshold the arteries. This stage is followed by a thinning procedure to extract the centerlines. The resulting skeleton image is then pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines, combined with the disparity map information, allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
Keywords: Vessel enhancement, centerline extraction, symbiotic reconstruction.
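The 2D front end of this pipeline can be approximated with standard tools, as in the short sketch below, which uses scikit-image's Frangi filter for Hessian-based multiscale vessel enhancement followed by thresholding and thinning. The simple global threshold replaces the quantile-based hysteresis thresholding of the paper, and the stereo correspondence and 3D back-projection stages are not shown.

import numpy as np
from skimage.filters import frangi
from skimage.morphology import skeletonize

def centerlines_2d(angiogram, threshold=0.15):
    """Multiscale Hessian-based vessel enhancement, thresholding and
    thinning: the 2D steps described in the abstract, approximated with
    off-the-shelf filters."""
    vesselness = frangi(angiogram)                    # multiscale Hessian filter
    vesselness = vesselness / (vesselness.max() + 1e-12)
    mask = vesselness > threshold                     # simple global threshold
    return skeletonize(mask)                          # 1-pixel-wide centerlines

# Usage: skeleton = centerlines_2d(gray_image_as_float_array)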
1791 A Survey: Bandwidth Management in an IP Based Network
Authors: M. Kassim, M. Ismail, K. Jumari, M.I Yusof
Abstract:
This paper presents a survey of network bandwidth management based on papers published in the IEEE Xplore database over the three years from 2009 to 2011. Network bandwidth management is a current issue in computer engineering applications and systems. A detailed comparison of the published papers is presented to look further into the critical research areas of bandwidth management in IP-based networks. Important information is presented, such as the network focus area, modeling approaches for IP-based networks, and the filtering or scheduling used at the network application layer. Much research on bandwidth management has been done in the broad network area, but less has been done specifically for IP-based networks at the application layer. A few studies have contributed new schemes or enhanced models, but the issue of bandwidth management still arises at the application layer. This survey is intended as basic research towards the implementation of bandwidth management techniques, new framework models and scheduling schemes or algorithms in an IP-based network, focusing on bandwidth control mechanisms that prioritize network traffic at the application layer.
Keywords: Bandwidth Management (BM), IP Based network, modeling, algorithm, internet traffic, network management, Quality of Service (QoS).
1790 Prediction of the Solubility of Benzoic Acid in Supercritical CO2 Using the PC-SAFT EoS
Authors: Hamidreza Bagheri, Alireza Shariati
Abstract:
There are many difficulties in the purification of raw components and products, and researchers are seeking better purification methods. One of the more recent methods is extraction using supercritical fluids. In this study, the phase equilibria of the benzoic acid-supercritical carbon dioxide system were investigated. The solid-supercritical fluid behavior of this system was modeled using the Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) and Peng-Robinson (PR) equations of state. For this purpose, the five PC-SAFT EoS parameters for pure benzoic acid were obtained from its experimental vapor pressure. Benzoic acid has association sites, and the behavior of the benzoic acid-supercritical fluid system was well predicted using both equations of state, although the binary interaction parameter values for the PR EoS were negative. A genetic algorithm, which is one of the most accurate global optimization algorithms, was also used to optimize the pure benzoic acid parameters and the binary interaction parameters. The AAD% value for the PC-SAFT EoS was 0.22 for the carbon dioxide-benzoic acid system.
Keywords: Supercritical fluids, Solubility, Solid, PC-SAFT EoS, Genetic algorithm.
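A hedged sketch of the genetic-algorithm fitting step is given below: a real-coded GA minimizes the AAD% between experimental and calculated vapor pressures. A full PC-SAFT implementation is well beyond a sketch, so a placeholder Antoine-type model stands in for the equation of state, and the GA operators (truncation selection, blend crossover, Gaussian mutation) are generic choices rather than those necessarily used in the paper.

import numpy as np

def aad_percent(params, T, p_exp, model):
    """Average absolute deviation (%) between experimental and calculated
    vapor pressures, the objective minimized by the genetic algorithm."""
    p_calc = model(params, T)
    return 100.0 * np.mean(np.abs((p_calc - p_exp) / p_exp))

def genetic_fit(T, p_exp, model, bounds, pop=60, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))           # initial population
    for _ in range(gens):
        fit = np.array([aad_percent(x, T, p_exp, model) for x in X])
        parents = X[np.argsort(fit)[: pop // 2]]               # truncation selection
        kids = []
        while len(kids) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(len(bounds))
            child = w * a + (1 - w) * b                        # blend crossover
            child += rng.normal(0, 0.01 * (hi - lo))           # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        X = np.vstack([parents, kids])
    fit = np.array([aad_percent(x, T, p_exp, model) for x in X])
    return X[np.argmin(fit)], fit.min()

# Placeholder model standing in for the EoS: log10(p) = A - B/(T + C).
antoine = lambda prm, T: 10 ** (prm[0] - prm[1] / (T + prm[2]))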
1789 Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map
Authors: Anurag Sharma, Christian W. Omlin
Abstract:
The self-organizing map (SOM) is a well-known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from the raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by SOM, but they generally do not take into account the distribution of the code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the so-called U-matrix of the SOM to determine cluster boundaries; the results of this automatic method compare very favorably to boundary detection through traditional algorithms, namely k-means and hierarchical clustering, which are normally used to interpret the output of SOM.
Keywords: cluster boundaries, clustering, code vectors, data mining, particle swarm optimization, self-organizing maps, U-matrix.
1788 Random Projections for Dimensionality Reduction in ICA
Authors: Sabrina Gaito, Andrea Greppi, Giuliano Grossi
Abstract:
In this paper we present a technique to speed up ICA based on the idea of reducing the dimensionality of the data set while preserving the quality of the results. In particular, we refer to the FastICA algorithm, which uses kurtosis as the statistical property to be maximized. By performing a particular Johnson-Lindenstrauss-like projection of the data set, we find the minimum dimensionality reduction rate ρ, defined as the ratio between the size k of the reduced space and the original one d, which guarantees a narrow confidence interval for this estimator with a high confidence level. The derived dimensionality reduction rate depends on a system control parameter β that is easily computed a priori on the basis of the observations only. Extensive simulations have been done on different sets of real-world signals. They show that the achievable dimensionality reduction is very high, that it preserves the quality of the decomposition, and that it impressively speeds up FastICA. On the other hand, a set of signals for which the estimated reduction rate is greater than 1 exhibits bad decomposition results when reduced, thus validating the reliability of the parameter β. We are confident that our method will lead to a better approach to real-time applications.
Keywords: Independent Component Analysis, FastICA algorithm, Higher-order statistics, Johnson-Lindenstrauss lemma.
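The idea of projecting first and running FastICA in the reduced space can be sketched with standard scikit-learn components, as below. The Gaussian random projection here plays the role of the Johnson-Lindenstrauss-like projection; the paper's criterion for choosing the reduction rate ρ = k/d from the control parameter β is not reproduced, so k is simply passed in.

import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import FastICA

def reduced_fastica(X, k, n_sources, seed=0):
    """Project the observations onto a k-dimensional random subspace
    (Johnson-Lindenstrauss style), then run FastICA there. X has shape
    (n_samples, n_observed_signals)."""
    Xp = GaussianRandomProjection(n_components=k, random_state=seed).fit_transform(X)
    ica = FastICA(n_components=n_sources, random_state=seed)
    return ica.fit_transform(Xp)          # estimated independent components

# Toy usage: 3 sources mixed into 40 observed channels, reduced to k = 10.
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t)), rng.laplace(size=t.size)]
X = S @ rng.normal(size=(3, 40))
print(reduced_fastica(X, k=10, n_sources=3).shape)   # (2000, 3)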
1787 Optimal Allocation of DG Units for Power Loss Reduction and Voltage Profile Improvement of Distribution Networks using PSO Algorithm
Authors: K. Varesi
Abstract:
This paper proposes a Particle Swarm Optimization (PSO) based technique for the optimal allocation of Distributed Generation (DG) units in power systems. Our aim is to decide the optimal number, type, size and location of DG units for voltage profile improvement and power loss reduction in the distribution network. Two types of DGs are considered, and the distribution load flow is used to calculate the exact loss. The load flow algorithm is combined with PSO until acceptable results are obtained. The suggested method is programmed in MATLAB. Test results indicate that the PSO method can obtain better results than a simple heuristic search method on the 30-bus and 33-bus radial distribution systems. It obtains the maximum loss reduction for each of the two types of optimally placed multi-DGs, and a voltage profile improvement is achieved as well.
Keywords: Distributed Generation (DG), Optimal Allocation, Particle Swarm Optimization (PSO), Power Loss Minimization, Voltage Profile Improvement.
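A minimal sketch of the PSO core is shown below. In the paper the objective would be the total power loss returned by the distribution load flow for a candidate DG placement and size; here a placeholder quadratic loss stands in for that evaluation, and the inertia and acceleration coefficients are common textbook values rather than the paper's settings.

import numpy as np

def pso_minimize(loss, bounds, particles=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    """Standard global-best PSO. `loss` is any function of the position
    vector; in the paper it would be the power loss from the load flow."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([loss(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Placeholder loss standing in for "power loss from load flow at (bus, size)".
print(pso_minimize(lambda p: (p[0] - 12) ** 2 + (p[1] - 1.5) ** 2,
                   bounds=[(1, 33), (0, 5)]))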
1786 Sinusoidal Roughness Elements in a Square Cavity
Abstract:
Numerical studies were conducted using the Lattice Boltzmann Method (LBM) to study natural convection in a square cavity in the presence of roughness. An algorithm based on a single relaxation time Bhatnagar-Gross-Krook (BGK) model of the LBM was developed. Roughness was introduced on both the hot and cold walls in the form of sinusoidal roughness elements. The study was conducted for a Newtonian fluid of Prandtl number (Pr) 1.0, and the Rayleigh number (Ra) was explored from 10^3 to 10^6 in the laminar region. The thermal and hydrodynamic behavior of the fluid was analyzed using a differentially heated square cavity with roughness elements present on both the hot and cold walls. Neumann boundary conditions were imposed on the horizontal walls, with the vertical walls isothermal; the roughness elements were given the same boundary condition as the corresponding walls. The computational algorithm was validated against previous benchmark studies performed with different numerical methods, and good agreement was found. Results indicate that the maximum reduction in the average heat transfer was 16.66 percent at Ra number 10^5.
Keywords: Lattice Boltzmann Method, Natural convection, Nusselt number, Rayleigh number, Roughness.
1785 Optimal Design of Airfoil Platform Shapes with High Aspect Ratio Using Genetic Algorithm
Authors: Kyoungwoo Park, Byeong-Sam Kim
Abstract:
Unmanned aerial vehicles (UAVs) that perform their operations for a long time have been attracting much attention in the military and civil aviation industries for the past decade, and the applicable field of UAVs is expanding from purely military purposes to civil ones. Because of their low operating cost, high reliability and the variety of application areas, numerous development programs have been initiated around the world. To obtain optimal values of the design variables (i.e., sectional airfoil profile, wing taper ratio and sweep) for high UAV performance, both the lift and the lift-to-drag ratio are maximized while the pitching moment is simultaneously minimized. It is found that the lift force and lift-to-drag ratio are linearly dependent, and a unique, dominant solution exists for them. However, a trade-off is observed between the lift-to-drag ratio and the pitching moment. As a result of the optimization, sixty-five (65) non-dominated Pareto individuals at the cutting edge of the design space, determined by the airfoil shapes, are obtained.
Keywords: Unmanned aerial vehicle (UAV), Airfoil, CFD, Shape optimization, Genetic Algorithm.
1784 Image Dehazing Using Dark Channel Prior and Fast Guided Filter in Daubechies Lifting Wavelet Transform Domain
Authors: Harpreet Kaur, Sudipta Majumdar
Abstract:
In this paper, a method for image dehazing in the lifting wavelet transform domain is proposed. The lifting Daubechies (D4) wavelet is used to obtain the approximation image and the detail images. As the haze is contained in the low-frequency part, only the approximation image is used for further processing. This region is processed by a dehazing algorithm based on the dark channel prior (DCP). The dehazed approximation image is then recombined with the detail images using the inverse lifting wavelet transform. Implementing the lifting wavelet transform has the advantages of auxiliary memory saving, fast implementation and simplicity. The proposed method also deals with the near-white scene problem, the blue horizon issue and localized light sources in a way that enhances image quality and makes the algorithm robust. Simulation results show improvement in terms of visual quality and in parameters such as root mean square (RMS) contrast, structural similarity index (SSIM), entropy and execution time.
Keywords: Dark channel prior, image dehazing, lifting wavelet transform.
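The DCP stage applied to the approximation image can be sketched with a few lines of numpy/scipy, as below (following He et al.'s formulation of the dark channel and transmission estimate). The lifting wavelet decomposition, the fast guided filter refinement and the special handling of near-white scenes described above are not included in this sketch.

import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over the RGB channels followed by a local minimum
    filter, as in the dark channel prior."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, patch=15, omega=0.95):
    """Estimate atmospheric light A from the brightest dark-channel pixels,
    then the transmission map t = 1 - omega * dark_channel(I / A)."""
    dc = dark_channel(img, patch)
    n = max(1, int(0.001 * dc.size))                 # brightest 0.1% of pixels
    idx = np.argsort(dc.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].max(axis=0)          # atmospheric light per channel
    t = 1.0 - omega * dark_channel(img / np.maximum(A, 1e-6), patch)
    return np.clip(t, 0.1, 1.0), A

# Usage: t, A = estimate_transmission(hazy_rgb_float_array)  # t in [0.1, 1]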
1783 Super Resolution Blind Reconstruction of Low Resolution Images using Wavelets based Fusion
Authors: Liyakathunisa, V. K. Ananthashayana
Abstract:
Crucial information barely visible to the human eye is often embedded in a series of low resolution images taken of the same scene. Super resolution reconstruction is the process of combining several low resolution images into a single higher resolution image. The ideal algorithm should be fast and should add sharpness and detail, both at edges and in regions, without adding artifacts. In this paper we propose a super resolution blind reconstruction technique for linearly degraded images. In the proposed technique the algorithm is divided into three parts: image registration, wavelet-based fusion and image restoration. Three low resolution images are considered, which may be sub-pixel shifted, rotated, blurred or noisy. The sub-pixel shifted images are registered using an affine transformation model, a wavelet-based fusion is performed, and the noise is removed using soft thresholding. The proposed technique reduces blocking artifacts, smooths the edges and is also able to restore high-frequency details in an image. The technique is efficient and computationally fast, with a clear prospect of real-time implementation.
Keywords: Affine Transforms, Denoising, DWT, Fusion, Image registration.
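A rough sketch of the fusion and denoising stages is given below using PyWavelets, assuming the inputs are already registered: approximation bands are averaged, the maximum-magnitude detail coefficients are kept, and soft thresholding suppresses noise. The db4 wavelet, the threshold value and the max-selection fusion rule are illustrative assumptions; the registration and restoration stages of the proposed technique are not shown.

import numpy as np
import pywt

def wavelet_fuse_denoise(images, wavelet="db4", thresh=0.02):
    """Fuse registered low-resolution images in the wavelet domain: average
    the approximation bands, keep the maximum-magnitude detail coefficients,
    and soft-threshold the details to suppress noise."""
    coeffs = [pywt.dwt2(im, wavelet) for im in images]
    cA = np.mean([c[0] for c in coeffs], axis=0)             # fuse approximations
    fused_details = []
    for band in range(3):                                    # cH, cV, cD bands
        stack = np.stack([c[1][band] for c in coeffs])
        pick = np.take_along_axis(stack, np.abs(stack).argmax(axis=0)[None], axis=0)[0]
        fused_details.append(pywt.threshold(pick, thresh, mode="soft"))
    return pywt.idwt2((cA, tuple(fused_details)), wavelet)

# Usage: hr_estimate = wavelet_fuse_denoise([img1, img2, img3])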