Search results for: Chaos Optimization Algorithm
3108 Performance Analysis of Proprietary and Non-Proprietary Tools for Regression Testing Using Genetic Algorithm
Authors: K. Hema Shankari, R. Thirumalaiselvi, N. V. Balasubramanian
Abstract:
The present paper addresses research in the area of regression testing with emphasis on automated tools as well as prioritization of test cases. The uniqueness of regression testing and its cyclic nature is pointed out. The difference in approach between industry, with a business model as its basis, and academia, with a focus on data mining, is highlighted. Test metrics are discussed as a prelude to our formula for prioritization, and a case study is then discussed to illustrate this methodology. An industrial case study is also described in the paper, where the number of test cases is so large that they have to be grouped into test suites. In such situations, the genetic algorithm we propose can be used to reconfigure these test suites in each cycle of regression testing. A comparison is made between a proprietary tool and an open source tool using the above-mentioned metrics. Our approach is clarified through several tables.
Keywords: APFD metric, genetic algorithm, regression testing, RFT tool, test case prioritization, Selenium tool.
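The APFD metric listed in the keywords has a standard closed form. As a minimal illustration of how a prioritized ordering could be scored with it, the sketch below computes APFD for a hypothetical fault matrix and test ordering (neither is taken from the paper).

```python
# Minimal sketch: computing APFD for a prioritized test ordering.
# The fault matrix below is illustrative, not data from the paper.

def apfd(order, faults_detected_by):
    """APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n),
    where TFi is the 1-based position of the first test revealing fault i."""
    n = len(order)
    all_faults = sorted({f for fs in faults_detected_by.values() for f in fs})
    m = len(all_faults)
    first_position = {}
    for pos, test in enumerate(order, start=1):
        for f in faults_detected_by.get(test, ()):
            first_position.setdefault(f, pos)
    tf_sum = sum(first_position[f] for f in all_faults)
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

# Hypothetical example: 4 tests, 3 faults.
faults = {"T1": {"F1"}, "T2": {"F2", "F3"}, "T3": {"F1", "F3"}, "T4": set()}
print(apfd(["T2", "T3", "T1", "T4"], faults))  # score of this prioritized ordering
```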
3107 Big Bang – Big Crunch Learning Method for Fuzzy Cognitive Maps
Authors: Engin Yesil, Leon Urbas
Abstract:
Modeling complex dynamic systems, for which mathematical models are very complicated to establish, requires new and modern methodologies that exploit existing expert knowledge, human experience, and historical data. Fuzzy cognitive maps are very suitable, simple, and powerful tools for the simulation and analysis of such dynamic systems. However, human experts are subjective and can handle only relatively simple fuzzy cognitive maps; therefore, there is a need to develop new approaches for the automated generation of fuzzy cognitive maps from historical data. In this study, a new learning algorithm, called Big Bang-Big Crunch, is proposed for the first time in the literature for the automated generation of fuzzy cognitive maps from data. Two real-world examples, namely a process control system and a radiation therapy process, and one synthetic model are used to emphasize the effectiveness and usefulness of the proposed methodology.
Keywords: Big Bang-Big Crunch optimization, dynamic systems, fuzzy cognitive maps, learning.
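For readers unfamiliar with the optimizer, the sketch below shows one common formulation of the Big Bang-Big Crunch loop on a generic cost function. The fitness-weighted center of mass and the 1/k shrinking spread are assumptions of this particular variant; the paper's FCM learning objective is not reproduced.

```python
import numpy as np

# Minimal sketch of a Big Bang - Big Crunch loop on a generic (positive) cost
# function; the FCM-specific objective from the paper is replaced by a placeholder.
def bb_bc_minimize(cost, lower, upper, pop_size=50, iterations=100, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lower)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))   # initial Big Bang
    for k in range(1, iterations + 1):
        fitness = np.array([cost(x) for x in pop])
        # Big Crunch: fitness-weighted "center of mass" (smaller cost -> larger weight)
        weights = 1.0 / (fitness + 1e-12)
        center = (weights[:, None] * pop).sum(axis=0) / weights.sum()
        # Big Bang: scatter new candidates around the center, shrinking with k
        spread = (upper - lower) * rng.standard_normal((pop_size, dim)) / k
        pop = np.clip(center + spread, lower, upper)
    return center

# Hypothetical usage: minimize a sphere function in 5 dimensions.
print(bb_bc_minimize(lambda x: float(np.sum(x**2)),
                     lower=np.full(5, -1.0), upper=np.full(5, 1.0)))
```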
3106 Evaluation of the MCFLIRT Correction Algorithm in Head Motion from Resting State fMRI Data
Authors: V. Sacca, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
In the last few years, resting-state functional MRI (rs-fMRI) has been widely used to investigate the architecture of brain networks by examining the Blood Oxygenation Level Dependent (BOLD) response. This technique represents an interesting, robust and reliable approach for comparing pathological and healthy subjects in order to investigate the evolution of neurodegenerative diseases. On the other hand, the processing of rs-fMRI data is very prone to noise due to confounding factors, especially head motion. Head motion has long been known to be a source of artefacts in task-based functional MRI studies, but it has become a particularly challenging problem in recent studies using rs-fMRI. The aim of this work was to evaluate, in multiple sclerosis (MS) patients, a well-known motion correction algorithm from the FMRIB Software Library, MCFLIRT, which can be applied to minimize head motion distortions and allow rs-fMRI results to be interpreted correctly.
Keywords: Head motion correction, MCFLIRT algorithm, multiple sclerosis, resting state fMRI.
3105 Parallel Algorithm for Numerical Solution of Three-Dimensional Poisson Equation
Authors: Alibek Issakhov
Abstract:
In this paper, an entirely new algorithm for solving the three-dimensional Poisson equation is developed and implemented. This equation is used in research on turbulent mixing, computational fluid dynamics, atmospheric fronts, ocean flows, and so on. Moreover, to raise the productivity of these demanding calculations, the most up-to-date and effective parallel programming technologies were applied: MPI in combination with OpenMP directives, which makes it possible to handle problems with very large data volumes. The resulting software can be used for solving important applied and fundamental problems in mathematics and physics.
Keywords: MPI, OpenMP, three-dimensional Poisson equation.
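As a point of reference for the numerical scheme, the sketch below shows a plain serial Jacobi sweep for the 3D Poisson equation on a uniform grid; the MPI/OpenMP domain decomposition described in the abstract is not shown.

```python
import numpy as np

# Serial sketch of a Jacobi iteration for the 3D Poisson equation
#   laplacian(u) = f  on a uniform grid with zero Dirichlet boundaries;
# the MPI/OpenMP parallelization from the paper is not reproduced here.
def jacobi_poisson_3d(f, h, iterations=200):
    u = np.zeros_like(f)
    for _ in range(iterations):
        u_new = u.copy()
        u_new[1:-1, 1:-1, 1:-1] = (
            u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
            u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
            u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] -
            h * h * f[1:-1, 1:-1, 1:-1]) / 6.0      # 7-point stencil update
        u = u_new
    return u

# Hypothetical usage on a small 32^3 grid.
n, h = 32, 1.0 / 31
u = jacobi_poisson_3d(np.ones((n, n, n)), h)
```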
3104 A Parallel Implementation of k-Means in MATLAB
Authors: Dimitris Varsamis, Christos Talagkozis, Alkiviadis Tsimpiris, Paris Mastorocostas
Abstract:
The aim of this work is the parallel implementation of k-means in MATLAB, in order to reduce the execution time. Specifically, a new MATLAB function for the serial k-means algorithm is developed, which meets all the requirements for conversion to a MATLAB function with parallel computations. Additionally, two different variants for the definition of the initial values are presented. Subsequently, the parallel approach is presented. Finally, performance tests of the computation times with respect to the numbers of features and classes are presented.
Keywords: k-means algorithm, clustering, parallel computations, MATLAB.
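The serial Lloyd iteration that such an implementation parallelizes could look like the sketch below (in Python rather than MATLAB); the random-sample initialization is only one of several possible variants and is an assumption here.

```python
import numpy as np

# Sketch of the serial k-means (Lloyd) iteration that the paper parallelizes;
# the MATLAB parallel implementation itself is not reproduced.
def kmeans(X, k, iterations=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # one initialization variant
    for _ in range(iterations):
        # assign each sample to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned samples
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

labels, centers = kmeans(np.random.rand(500, 4), k=3)   # hypothetical data
```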
3103 Fingerprint Image Encryption Using a 2D Chaotic Map and Elliptic Curve Cryptography
Authors: D. M. S. Bandara, Yunqi Lei, Ye Luo
Abstract:
Fingerprints are suitable as long-term markers of human identity since they provide detailed and unique individual features which are difficult to alter and durable over a lifetime. In this paper, we propose an algorithm to encrypt and decrypt fingerprint images using a specially designed Elliptic Curve Cryptography (ECC) procedure based on block ciphers. In addition, to increase the confusion effect of fingerprint encryption, we also utilize a chaotic method called the Arnold Cat Map (ACM) for 2D scrambling of pixel locations. Various types of efficiency and security analyses are carried out experimentally. As a result, we demonstrate that the proposed fingerprint encryption/decryption algorithm is advantageous in several different aspects, including efficiency, security and flexibility. In particular, using this algorithm, we achieve a margin of about 0.1% in the test of Number of Pixel Changing Rate (NPCR) values compared to state-of-the-art performance.
Keywords: Arnold cat map, biometric encryption, block cipher, elliptic curve cryptography, fingerprint encryption, Koblitz's encoding.
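The Arnold Cat Map scrambling step mentioned in the abstract can be illustrated in a few lines; the sketch below applies the classical map (x, y) -> (x + y, x + 2y) mod N to a square image, while the ECC block-cipher stage of the proposed scheme is not shown.

```python
import numpy as np

# Sketch of Arnold Cat Map (ACM) pixel scrambling on a square image;
# the ECC encryption stage of the proposed scheme is not shown.
def arnold_cat_map(img, rounds=1):
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "ACM as written needs a square image"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(rounds):
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out   # pixel (x, y) -> (x+y, x+2y) mod n
        out = scrambled
    return out

scrambled = arnold_cat_map(np.arange(64).reshape(8, 8), rounds=3)  # toy 8x8 "image"
```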
3102 Reliability Assessment of Bangladesh Power System Using Recursive Algorithm
Authors: Nahid-Al-Masood, Jubaer Ahmed, Amina Hasan Abedin, S. R. Deeba, Faeza Hafiz, Mahmuda Begum
Abstract:
An electric utility's main concern is to plan, design, operate and maintain its power supply to provide an acceptable level of reliability to its users. This clearly requires that standards of reliability be specified and used in all three sectors of the power system, i.e., generation, transmission and distribution. That is why the reliability of a power system is always a major concern to power system planners. This paper presents a reliability analysis of the Bangladesh Power System (BPS). The reliability index loss of load probability (LOLP) of BPS is evaluated using a recursive algorithm, considering no de-rated states of generators. BPS has sixty-one generators and a total installed capacity of 5275 MW. The maximum demand of BPS is about 5000 MW. The relevant data of the generators and hourly load profiles are collected from the National Load Dispatch Center (NLDC) of Bangladesh, and the reliability index LOLP is assessed for the last ten years.
Keywords: Recursive algorithm, LOLP, forced outage rate, cumulative probability.
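The recursive algorithm referred to here is commonly a recursive build-up of the capacity outage probability table. A minimal sketch is given below; the unit sizes, forced outage rates and hourly loads are illustrative placeholders, not BPS data.

```python
# Sketch of the recursive capacity-outage calculation behind LOLP; units, forced
# outage rates (FOR) and the hourly load profile below are illustrative only.
def outage_table(units, step=1):
    """units: list of (capacity_MW, FOR). Returns P[X] = prob(outage >= X MW)."""
    total = sum(c for c, _ in units)
    n_states = total // step + 1
    P = [1.0] + [0.0] * (n_states - 1)          # no units added yet
    for cap, q in units:
        shift = cap // step
        # recursive convolution of one unit: P'(X) = (1-q)P(X) + q P(X - C)
        P = [(1 - q) * P[x] + q * (P[x - shift] if x >= shift else 1.0)
             for x in range(n_states)]
    return P

def lolp(units, hourly_load):
    installed = sum(c for c, _ in units)
    P = outage_table(units)
    risk = 0.0
    for load in hourly_load:
        margin = installed - load            # reserve at this hour
        if margin < 0:
            risk += 1.0                      # demand exceeds installed capacity
        elif margin + 1 < len(P):
            risk += P[margin + 1]            # prob(outage > margin)
        # else: outage cannot exceed installed capacity, contribution is 0
    return risk / len(hourly_load)

units = [(100, 0.02)] * 10 + [(50, 0.05)] * 5   # hypothetical generating units
print(lolp(units, hourly_load=[900, 950, 1000, 1100]))
```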
3101 Design and Operation of a Multicarrier Energy System Based On Multi Objective Optimization Approach
Authors: Azadeh Maroufmashat, Sourena Sattari Khavas, Halle Bakhteeyar
Abstract:
Multi-energy systems will enhance system reliability and power quality. This paper presents an integrated approach for the design and operation of distributed energy resources (DER) systems, based on energy hub modeling. A multi-objective optimization model is developed, considering an integrated view of the electricity and natural gas networks, to analyze the optimal design and operating condition of DER systems under two conflicting objectives, namely minimization of the total cost and minimization of the environmental impact, which is assessed in terms of CO2 emissions. The mathematical model considers the energy demands of the site, local climate data, and the utility tariff structure, as well as the technical and financial characteristics of the candidate DER technologies. To meet the energy demands, energy systems including photovoltaic and co-generation systems, a boiler, and the central power grid are considered. As an illustrative example, a hotel in Iran demonstrates potential applications of the proposed method. The results show that increasing the satisfaction degree of the environmental objective leads to an increased total cost.
Keywords: Multi-objective optimization, DER systems, Energy hub, Cost, CO2 emission.
3100 Efficient Feature Fusion for Noise Iris in Unconstrained Environment
Authors: Yao-Hong Tsai
Abstract:
This paper presents an efficient fusion algorithm for iris images to generate stable features for recognition in unconstrained environments. Recently, iris recognition systems have focused on real scenarios in our daily life without the subject's cooperation. Under large variations in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image which is more stable for further iris recognition than each original noisy iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. The detection of the iris is based on the AdaBoost algorithm, and a local binary pattern (LBP) histogram is then applied to texture classification with a weighting scheme. Experiments showed that the features generated by the proposed fusion algorithm can improve the performance of a verification system based on iris recognition.
Keywords: Image fusion, iris recognition, local binary pattern, wavelet.
3099 Determination of the Proper Quality Costs Parameters via Variable Step Size Steepest Descent Algorithm
Authors: Danupun Visuwan, Pongchanun Luangpaiboon
Abstract:
This paper presents the determination of the proper quality-cost parameters which provide the optimum return. System dynamics simulation was applied. The simulation model was constructed from real data from the case of an electronic devices manufacturer in Thailand. The steepest descent algorithm was employed for the optimisation. The experimental results show that the company should spend 850 and 10 Baht/day on prevention and appraisal activities, respectively. This provides the minimum cumulative total quality cost, which is 258,000 Baht over twelve months. The effect of the step size used when improving the variables towards the optimum was also investigated. It can be stated that a smaller step size provided a better result but required more experimental runs. However, the difference in yield in this case is not significant in practice. Therefore, the greater step size is recommended, because the region of the optima can be reached more easily and rapidly.
Keywords: Quality costs, steepest descent algorithm, step size, system dynamics simulation.
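A steepest-descent search with a variable step size, of the kind investigated in the paper, could be sketched as below; the quadratic cost surface is an illustrative stand-in for the system dynamics simulation, placed so that the toy example converges near the spending levels quoted in the abstract.

```python
# Sketch of steepest descent with a variable (shrinking) step size; the cost
# function is an illustrative stand-in, not the system dynamics model.
def numerical_gradient(f, x, eps=1e-4):
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

def steepest_descent(f, x0, step=10.0, shrink=0.5, iterations=200):
    x = list(x0)
    for _ in range(iterations):
        g = numerical_gradient(f, x)
        candidate = [xi - step * gi for xi, gi in zip(x, g)]
        if f(candidate) < f(x):
            x = candidate            # accept the move
        else:
            step *= shrink           # reduce the step size near the optimum
    return x

# Hypothetical quality-cost surface over (prevention, appraisal) spending.
cost = lambda v: (v[0] - 850) ** 2 + 5 * (v[1] - 10) ** 2
print(steepest_descent(cost, x0=[100.0, 100.0]))
```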
3098 Multiple Moving Talker Tracking by Integration of Two Successive Algorithms
Authors: Kenji Suyama, Masahiro Oshida, Noboru Owada
Abstract:
In this paper, the estimation accuracy of tracking multiple moving talkers using a microphone array is improved. The tracking is achieved by an adaptive method in which two algorithms are integrated, namely the PAST (Projection Approximation Subspace Tracking) algorithm and the IPLS (Interior Point Least Square) algorithm. When a talker begins to speak again after a silent period, an appropriate feasible region for the evaluation function of the IPLS algorithm might not be set, and the tracking then fails due to the incorrect updating. Therefore, if an increase in the number of active talkers is detected, the feasible region must be reset. A low-cost realization is required for high-speed tracking, and a high-accuracy realization is desired for precise tracking. In this paper, the directions roughly estimated using the delay-and-sum array method are used for the resetting. Several results of experiments performed in an actual room environment show the effectiveness of the proposed method.
Keywords: Moving talker tracking, microphone array, signal subspace.
3097 Application of Feed-Forward Neural Networks Autoregressive Models in Gross Domestic Product Prediction
Authors: E. Giovanis
Abstract:
In this paper, we present an autoregressive model with neural network modeling, trained with the standard error back-propagation algorithm, in order to predict the gross domestic product (GDP) growth rate of four countries. Specifically, we propose a kind of weighted regression, which can be used for econometric purposes, where the initial inputs are multiplied by the neural network's final optimum input-hidden layer weights obtained after the training process. The forecasts are compared with those of the ordinary autoregressive model, and we conclude that the proposed regression's forecasting results significantly outperform those of the autoregressive model in the out-of-sample period. The idea behind this approach is to propose a parametric regression with weighted variables in order to test the statistical significance and the magnitude of the estimated autoregressive coefficients and, simultaneously, to estimate the forecasts.
Keywords: Autoregressive model, error back-propagation, feed-forward neural networks, gross domestic product.
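A minimal neural-network autoregressive setup of this kind, on a synthetic series and without the paper's weighted-regression step, might look like the following; scikit-learn's MLPRegressor is assumed here as the back-propagation-trained feed-forward network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of a neural-network autoregressive model: lagged growth rates are fed to
# a feed-forward network. The series below is synthetic, not the four-country GDP data.
def make_lagged(series, p):
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    return X, y

rng = np.random.default_rng(0)
growth = np.cumsum(rng.normal(0, 0.5, 200)) * 0.01 + 2.0   # synthetic growth-rate series
X, y = make_lagged(growth, p=4)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:-20], y[:-20])                    # in-sample training
forecasts = model.predict(X[-20:])             # out-of-sample one-step forecasts
print(np.mean((forecasts - y[-20:]) ** 2))     # out-of-sample MSE
```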
3096 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation
Authors: Aicha Majda, Abdelhamid El Hassani
Abstract:
Lung CT image segmentation is a prerequisite for lung CT image analysis. Most of the conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor perturbations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on the obtained new term. The graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric.
3095 Parallel Computation of Data Summation for Multiple Problem Spaces on Partitioned Optical Passive Stars Network
Authors: Khin Thida Latt, Mineo Kaneko, Yoichi Shinoda
Abstract:
In a Partitioned Optical Passive Stars (POPS) network, nodes and couplers become free from slot to slot during some computations. It is necessary to utilize free couplers and nodes efficiently in order to be cost effective. Improving parallelism, we present a fast data summation algorithm for multiple problem spaces on POPS(g, g) with a smaller number of nodes for the case d = n = g. For the case d > n > g, we simulate the calculation of a large number of data items dedicated to a larger system with many nodes on a smaller system with fewer nodes. The algorithm is faster than the best known algorithm, and using a smaller number of nodes and groups makes the system low cost and practical.
Keywords: Partitioned optical passive stars network, parallel computing, optical computing, data summation.
3094 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing
Authors: S. Aziz, B. Alexander, C. Gengnagel, S. Weinzierl
Abstract:
This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz resonators for full-scale 3D-printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for immediate application in the AEC industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additively prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the Building Information Modeling (BIM) software Revit. Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the Multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D-concrete-printed physical twin models which were tested in a 0.25 m x 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
Keywords: Acoustical design, additive manufacturing, computational design, multimodal optimization.
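The bottleneck and cavity dimensions mentioned above are exactly the quantities entering the classical lumped Helmholtz-resonator formula, so a quick sanity check of a target resonance frequency can be sketched as follows; the 1.7*r end correction and the dimensions are illustrative assumptions, not values from the study.

```python
import math

# Back-of-the-envelope estimate of the resonance frequency such a geometry targets,
# using the classical lumped Helmholtz formula
#   f0 = (c / (2*pi)) * sqrt(A / (V * L_eff)),  with L_eff ~ L + 1.7 * r (end correction).
# Dimensions below are illustrative, not the optimized values from the paper.
def helmholtz_f0(neck_radius, neck_length, cavity_volume, c=343.0):
    area = math.pi * neck_radius ** 2
    l_eff = neck_length + 1.7 * neck_radius
    return (c / (2 * math.pi)) * math.sqrt(area / (cavity_volume * l_eff))

# Roughly 140 Hz for a 1 cm radius, 3 cm long neck on a 1-litre cavity.
print(round(helmholtz_f0(neck_radius=0.01, neck_length=0.03,
                         cavity_volume=0.001), 1), "Hz")
```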
3093 A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting
Authors: Analise Borg, Paul Micallef
Abstract:
Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying different ways to organise this amount of audio information without the need for human intervention to generate metadata. In the past few years, many applications have emerged on the market which are capable of identifying a piece of music in a short time. Different audio effects and degradations make it much harder to identify the unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel spectrum coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that non-parametric analysis offers promising results, comparable to those mentioned in the literature.
Keywords: Audio fingerprinting, mapping algorithm, Gaussian Mixture Models, MFCC, MPEG-7.
3092 A Memetic Algorithm for an Energy-Costs-Aware Flexible Job-Shop Scheduling Problem
Authors: Christian Böning, Henrik Prinzhorn, Eric C. Hund, Malte Stonis
Abstract:
In this article, the flexible job-shop scheduling problem is extended by considering energy costs which arise owing to the power peak, and further decision variables such as work in process and throughput time are incorporated into the objective function. This enables a production plan to be optimized simultaneously with respect to the actual energy and logistics costs incurred. The resulting energy-costs-aware flexible job-shop scheduling problem (EFJSP) is described mathematically, and a memetic algorithm (MA) is presented as a solution. In the MA, the evolutionary process is supplemented with a local search. Furthermore, repair procedures are used in order to rectify any infeasible solutions that have arisen in the evolutionary process. The potential for lowering the actual costs of a production plan by taking energy consumption levels into account is highlighted.
Keywords: Energy costs, flexible job-shop scheduling, memetic algorithm, power peak.
3091 High-Speed Pipeline Implementation of Radix-2 DIF Algorithm
Authors: Christos Meletis, Paul Bougas, George Economakos, Paraskevas Kalivas, Kiamal Pekmestzi
Abstract:
In this paper, we propose a new architecture for the implementation of the N-point Fast Fourier Transform (FFT), based on the radix-2 decimation-in-frequency algorithm. This architecture is based on a pipeline circuit that can process a stream of samples and produce two FFT transform samples every clock cycle. Compared to existing implementations, the proposed architecture achieves double the processing speed with the same circuit complexity.
Keywords: Digital signal processing, systolic circuits, FFT algorithm.
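The radix-2 DIF butterfly that the pipeline implements in hardware can be expressed recursively in software; a minimal reference sketch follows (the sample values are arbitrary).

```python
import cmath

# Recursive sketch of the radix-2 decimation-in-frequency (DIF) FFT; N must be a
# power of two. This is a software reference, not the pipeline architecture itself.
def fft_dif(x):
    n = len(x)
    if n == 1:
        return list(x)
    half = n // 2
    a = [x[i] + x[i + half] for i in range(half)]                       # top branch
    b = [(x[i] - x[i + half]) * cmath.exp(-2j * cmath.pi * i / n)       # bottom branch
         for i in range(half)]
    even, odd = fft_dif(a), fft_dif(b)
    out = [0j] * n
    out[0::2], out[1::2] = even, odd       # even-indexed bins from a, odd-indexed from b
    return out

# Quick check on a small, arbitrary input.
x = [1, 2, 3, 4, 0, -1, -2, -3]
print([round(abs(v), 3) for v in fft_dif(x)])
```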
3090 An Improved Face Recognition Algorithm Using Histogram-Based Features in Spatial and Frequency Domains
Authors: Qiu Chen, Koji Kotani, Feifei Lee, Tadahiro Ohmi
Abstract:
In this paper, we propose an improved face recognition algorithm using histogram-based features in the spatial and frequency domains. To add spatial information of the face and improve recognition performance, a region-division (RD) method is utilized. The facial area is first divided into several regions, then feature vectors of each facial part are generated by a Binary Vector Quantization (BVQ) histogram using DCT coefficients in the low-frequency domain, as well as by a Local Binary Pattern (LBP) histogram in the spatial domain. Recognition results with different regions are first obtained separately and then fused by weighted averaging. The publicly available ORL database, which consists of 40 subjects with 10 images per subject containing variations in lighting, pose, and expression, is used for the evaluation of our proposed algorithm. It is demonstrated that face recognition using the RD method can achieve a much higher recognition rate.
Keywords: Face recognition, Binary vector quantization (BVQ), Local Binary Patterns (LBP), DCT coefficients.
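The region-division idea with LBP histograms can be sketched as follows; the 3x3 LBP operator, the 4x4 grid and the uniform weights are assumptions made for illustration, and the DCT/BVQ branch of the proposed algorithm is not shown.

```python
import numpy as np

# Sketch of region-divided LBP features: a basic 3x3 LBP code image is computed and
# per-region histograms are concatenated with weights (grid and weights illustrative).
def lbp_image(gray):
    c = gray[1:-1, 1:-1]
    neighbors = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:], gray[1:-1, 2:],
                 gray[2:, 2:], gray[2:, 1:-1], gray[2:, :-2], gray[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbors):          # one common bit ordering
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def region_histograms(gray, grid=(4, 4), weights=None):
    code = lbp_image(gray)
    feats = []
    for r, row in enumerate(np.array_split(code, grid[0], axis=0)):
        for c, block in enumerate(np.array_split(row, grid[1], axis=1)):
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            w = 1.0 if weights is None else weights[r][c]
            feats.append(w * hist / max(hist.sum(), 1))
    return np.concatenate(feats)

feature = region_histograms(np.random.randint(0, 256, (64, 64)))  # toy grayscale image
```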
3089 MCDM Spectrum Handover Models for Cognitive Wireless Networks
Authors: Cesar Hernández, Diego Giral, Fernando Santa
Abstract:
Spectrum handover is a significant topic in cognitive radio networks for ensuring efficient data transmission in the cognitive radio users' communications. This paper presents a comparison between three spectrum handover models: VIKOR, SAW and MEW. Four evaluation metrics are used: the cumulative average of failed handovers, the cumulative average of handovers performed, the cumulative average of transmission bandwidth, and the cumulative average of transmission delay. In contrast to related work, the performance of the three spectrum handover models was validated with captured data of spectrum occupancy in experiments performed in the GSM frequency band (824 MHz - 849 MHz). These data represent the actual behavior of the licensed users in this wireless frequency band. The results of the comparison show that the VIKOR algorithm provides a 15.8% performance improvement compared to the SAW algorithm and is 12.1% better than the MEW algorithm.
Keywords: Cognitive radio, decision making, MEW, SAW, spectrum handover, VIKOR.
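Of the three models compared, SAW is the simplest to sketch; the example below ranks hypothetical candidate channels with illustrative metrics and weights, not the GSM-band measurements used in the paper.

```python
import numpy as np

# Sketch of Simple Additive Weighting (SAW) ranking of candidate channels;
# metrics and weights below are illustrative placeholders.
def saw_rank(matrix, weights, benefit):
    """matrix: candidates x criteria; benefit[j] is True if larger is better."""
    m = np.asarray(matrix, dtype=float)
    norm = np.empty_like(m)
    for j in range(m.shape[1]):
        col = m[:, j]
        norm[:, j] = col / col.max() if benefit[j] else col.min() / col
    scores = norm @ np.asarray(weights, dtype=float)
    return scores.argsort()[::-1], scores        # best candidate channel first

# criteria: available bandwidth (benefit), expected delay (cost), failure probability (cost)
channels = [[2.0, 10.0, 0.05],
            [1.5,  6.0, 0.02],
            [2.5, 14.0, 0.10]]
order, scores = saw_rank(channels, weights=[0.5, 0.3, 0.2],
                         benefit=[True, False, False])
print(order[0], scores)
```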
3088 Bit Error Rate Monitoring for Automatic Bias Control of Quadrature Amplitude Modulators
Authors: Naji Ali Albakay, Abdulrahman Alothaim, Isa Barshushi
Abstract:
The most common quadrature amplitude modulator (QAM) applies two Mach-Zehnder modulators (MZMs) and one phase shifter to generate high-order modulation formats. The bias of an MZM changes over time due to temperature, vibration, and aging factors. The change in the biasing causes distortion of the generated QAM signal, which leads to deterioration of the bit error rate (BER) performance. Therefore, it is critical to be able to lock the MZM's Q point to the required operating point for good performance. We propose a technique for automatic bias control (ABC) of a QAM transmitter using BER measurements and a gradient descent optimization algorithm. The proposed technique is attractive because it uses the pertinent metric, BER, which compensates for bias drifting independently of other system variations such as the laser source output power. The performance and operating principles of the proposed scheme are simulated using the OptiSystem simulation software for 4-QAM and 16-QAM transmitters.
Keywords: Automatic bias control, optical fiber communication, optical modulation, optical devices.
3087 A New Secure Communication Model Based on Synchronization of Coupled Multidelay Feedback Systems
Authors: Thang Manh Hoang
Abstract:
Recent research results have shown that two multidelay feedback systems can synchronize with each other under different schemes, i.e., lag, projective-lag, anticipating, or projective-anticipating synchronization. There, the driving signal is significantly complex because it is constituted by multiple nonlinear transformations of delayed state variables. In this paper, a secure communication model is proposed based on the synchronization of coupled multidelay feedback systems, in which the plain signal is mixed with a complex signal at the transmitter side and is precisely retrieved at the receiver side. The effectiveness of the proposed model is demonstrated and verified in a specific example, where the message signal is masked directly by the complex signal and security is examined under the breaking method of power spectrum analysis.
Keywords: Chaos synchronization, time-delayed system, chaos-based secure communications.
3086 Optimization of Deglet-Nour Date (Phoenix dactylifera L.) Phenol Extraction Conditions
Authors: Lekbir Adel, Alloui-Lombarkia Ourida, Mekentichi Sihem, Noui Yassine, Baississe Salima
Abstract:
The objective of this study was to optimize the extraction conditions for phenolic compounds, total flavonoids, and antioxidant activity from the Deglet-Nour variety. The extraction of active components from natural sources depends on different factors. Knowledge of the effects of the different extraction parameters is useful for the optimization of the process, as well as for the ability to predict the extraction yield. The effects of the extraction variables, namely the type of solvent (methanol, ethanol and acetone) and the extraction time (1 h, 6 h, 12 h and 24 h), on the phenolics extraction yield were evaluated. It has been shown that the extraction time and the type of solvent have a statistically significant influence on the extraction of phenolic compounds from the Deglet-Nour variety. The optimised conditions, methanol as solvent and an extraction time of 6 hours, yielded values of 80.19 ± 6.37 mg GAE/100 g FW for TPC, 2.34 ± 0.27 mg QE/100 g FW for TFC and 90.20 ± 1.29% for antioxidant activity. According to the results obtained in this study, the Deglet-Nour variety can be considered a natural source of phenolic compounds with good antioxidant capacity.
Keywords: Deglet-Nour variety, Date palm Fruit, Phenolic compounds, Total flavonoids, Antioxidant activity, Extraction, Optimization.
3085 A Novel FFT-Based Frequency Offset Estimator for OFDM Systems
Authors: Mahdi Masoumi, Mehrdad Ardebilipoor, Seyed Aidin Bassam
Abstract:
This paper proposes a novel frequency offset (FO) estimator for orthogonal frequency division multiplexing. Simplicity is the most significant feature of this algorithm, and it can be repeated to achieve acceptable accuracy. The fractional and integer parts of the FO are also estimated jointly with the same algorithm. To do so, instead of using conventional algorithms that usually rely on a correlation function, we use the DFT of the received signal. Therefore, the complexity is reduced, and the synchronization procedure can be carried out by the same hardware that is used to demodulate the OFDM symbol. Finally, computer simulations show that the accuracy of this method is better than that of other conventional methods.
Keywords: DFT, Estimator, Frequency Offset, IEEE802.11a, OFDM.
3084 Optimal Placement and Sizing of Distributed Generation in Microgrid for Power Loss Reduction and Voltage Profile Improvement
Authors: Ferinar Moaidi, Mahdi Moaidi
Abstract:
Environmental issues and the ever-increasing demand for electrical energy make it necessary to have distributed generation (DG) resources in the power system. In this research, in order to realize the goals of reducing losses and improving the voltage profile in a microgrid, the allocation and sizing of DGs are addressed. A Genetic Algorithm (GA), drawn from the array of artificial intelligence methods, is proposed for solving the problem. The algorithm is implemented on the IEEE 33-bus network. The study is presented in two scenarios: primarily, the location and sizing of DGs are determined to reduce losses and improve the voltage profile. On the other hand, decisions made under a single-level load assumption are not universally valid for all load levels. Therefore, in this study, load modelling is performed and the results are presented for a multi-level load state.
Keywords: Distributed generation, genetic algorithm, microgrid, load modelling, loss reduction, voltage improvement.
3083 Lego Mindstorms as a Simulation of Robotic Systems
Authors: Miroslav Popelka, Jakub Nožička
Abstract:
In this paper, we deal with the use of Lego Mindstorms in the simulation of robotic systems with respect to cost reduction. The Lego Mindstorms kit contains a broad variety of hardware components which are required to simulate, program and test robotic systems in practice. Algorithms were programmed both in the development environment supplied with the Lego kit and in the programming language C#. The line-following algorithm, which we deal with in this paper, uses theoretical findings from the area of control circuits. A PID controller was chosen as the control circuit, and its individual components were experimentally tuned for optimal motion of the robot tracking the line. The data processed by the algorithm are collected by sensors which scan the interface between the black and white surfaces followed by the robot. Based on these findings, Lego Mindstorms can be considered a low-cost and capable kit for simulating real robotic systems.
Keywords: LEGO Mindstorms, PID controller, low-cost robotics systems, line follower, sensors, programming language C#, EV3 Home Edition Software.
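The PID control law used for line following can be sketched as below; the gains, set-point and the sensor/motor callbacks are hypothetical placeholders rather than the tuned values or EV3 API from the paper.

```python
# Sketch of a PID line follower: the error is the deviation of the measured
# reflectance from the black/white edge set-point, and the output steers the motors.
# Gains, set-point and the sensor/motor callbacks are hypothetical placeholders.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def follow_line(read_reflectance, set_motors, target=50.0, base_speed=30.0,
                steps=1000, dt=0.01):
    pid = PID(kp=1.2, ki=0.02, kd=0.3, dt=dt)      # illustrative gains
    for _ in range(steps):
        turn = pid.update(read_reflectance() - target)
        set_motors(base_speed - turn, base_speed + turn)

# Hypothetical usage with stub sensor/motor callbacks.
readings = iter([45, 48, 52, 55] * 250)
follow_line(lambda: next(readings), lambda left, right: None, steps=1000)
```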
3082 Transmitter Macrodiversity in Multihopping-SFN Based Algorithm for Improved Node Reachability and Robust Routing
Authors: Magnus Eriksson, Arif Mahmud
Abstract:
A novel idea presented in this paper is to combine multihop routing with single-frequency networks (SFNs) for a broadcasting scenario. An SFN is a set of multiple nodes that transmit the same data simultaneously, resulting in transmitter macrodiversity. Two of the most important performance factors of multihop networks, node reachability and routing robustness, are analyzed. Simulation results show that our proposed SFN-D routing algorithm improves the node reachability by 37 percentage points compared to non-SFN multihop routing. It shows a diversity gain of 3.7 dB, meaning that 3.7 dB lower transmission powers are required for the same reachability. Even better results are possible for larger networks. If an important node becomes inactive, this algorithm can find new routes that a non-SFN scheme would not be able to find. Thus, two of the major problems in multihopping are addressed: achieving robust routing as well as improving node reachability or reducing transmission power.
Keywords: OFDM, single-frequency networks (SFN), DSFN, MANET, multihop routing, transmitter macrodiversity, broadcasting.
3081 Maximum Common Substructure Extraction in RNA Secondary Structures Using Clique Detection Approach
Authors: Shih-Yi Chao
Abstract:
The similarity comparison of RNA secondary structures is important in studying the functions of RNAs. In recent years, most existing tools have represented secondary structures by tree-based presentations and calculated the similarity by tree alignment distance. Differently from previous approaches, we propose a new method based on a maximum clique detection algorithm to extract the maximum common structural elements in the compared RNA secondary structures. A new graph-based similarity measurement and a maximum common subgraph detection procedure for comparing pure RNA secondary structures are introduced. Given two RNA secondary structures, the proposed algorithm consists of a process to determine the score of the structural similarity, followed by comparison of the vertex labelling, the labelled edges and the exact degree of each vertex. The proposed algorithm also includes a process to extract the common structural elements between the compared secondary structures based on a proposed maximum clique formulation of the problem. This graph-based model can also work with the NC-IUB code to perform pattern-based searching. Therefore, it can be used to identify functional RNA motifs in databases or to extract common substructures between complex RNA secondary structures. We have demonstrated the performance of the proposed algorithm through experimental results. It provides a new way of comparing RNA secondary structures. This tool is helpful to those who are interested in structural bioinformatics.
Keywords: Clique detection, labeled vertices, RNA secondary structures, subgraph, similarity.
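The clique-detection step can be illustrated with a generic maximum-clique search; in the sketch below a toy compatibility graph (whose vertices are assumed pairs of matching structural elements) stands in for the graph that would be built from two real secondary structures.

```python
import networkx as nx

# Sketch of the clique-detection step: once compared structures are encoded as a
# compatibility graph (vertices = pairs of matching structural elements, edges =
# mutually consistent pairs), the maximum common substructure corresponds to a
# maximum clique. The toy graph below is purely illustrative.
G = nx.Graph()
G.add_edges_from([("a1|b1", "a2|b2"), ("a1|b1", "a3|b3"), ("a2|b2", "a3|b3"),
                  ("a4|b2", "a5|b5")])
max_clique = max(nx.find_cliques(G), key=len)    # enumerate maximal cliques, keep the largest
print(max_clique)                                # -> the three mutually consistent pairs
```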
3080 Using Genetic Algorithm to Improve Information Retrieval Systems
Authors: Ahmed A. A. Radwan, Bahgat A. Abdel Latef, Abdel Mgeid A. Ali, Osman A. Sadek
Abstract:
This study investigates the use of genetic algorithms in information retrieval. The method is shown to be applicable to three well-known document collections, where more relevant documents are presented to users after the genetic modification. In this paper, we present a new fitness function for approximate information retrieval which is very fast and very flexible compared to the cosine similarity fitness function.
Keywords: Cosine similarity, fitness function, genetic algorithm, information retrieval, query learning.
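The cosine-similarity fitness that the proposed function is compared against can be sketched in a few lines; the term-weight vectors below are illustrative, not taken from the three test collections.

```python
import math

# Sketch of a baseline cosine-similarity fitness: a candidate query (chromosome)
# is scored by its average similarity to the known relevant documents.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def fitness(query_vector, relevant_docs):
    return sum(cosine(query_vector, d) for d in relevant_docs) / len(relevant_docs)

relevant = [[0.2, 0.0, 0.7, 0.1], [0.3, 0.1, 0.5, 0.0]]   # hypothetical document vectors
print(fitness([0.25, 0.05, 0.6, 0.05], relevant))
```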
3079 Embedding a Large Amount of Information Using High Secure Neural Based Steganography Algorithm
Authors: Nameer N. EL-Emam
Abstract:
In this paper, we construct and implement a new steganography algorithm based on a learning system to hide a large amount of information in a color BMP image. We have used adaptive image filtering and adaptive non-uniform image segmentation with bit replacement on the appropriate pixels. These pixels are selected randomly rather than sequentially, using a new concept defined by main cases with sub-cases for each byte in one pixel. Following the design steps, we arrive at 16 main cases with their sub-cases that cover all aspects of embedding the input information into a color bitmap image. High security has been achieved through four layers of security that make it difficult to break the encryption of the input information and also confuse steganalysis. A learning system is introduced at the fourth layer of security through a neural network; this layer is used to increase the difficulty of statistical attacks. Our results against statistical and visual attacks are discussed before and after using the learning system, and a comparison is made with a previous steganography algorithm. We show that our algorithm can efficiently embed a large amount of information, up to 75% of the image size (replacing at most 18 bits per pixel), with high output quality.
Keywords: Adaptive image segmentation, hiding with high capacity, hiding with high security, neural networks, steganography.