Search results for: elliptic curve digital signature algorithm
2352 Gray Level Image Encryption
Authors: Roza Afarin, Saeed Mozaffari
Abstract:
The aim of this paper is image encryption using a Genetic Algorithm (GA). The proposed encryption method consists of two phases. In the modification phase, pixel locations are altered to reduce the correlation among adjacent pixels. Then, pixel values are changed in the diffusion phase to encrypt the input image. Both phases are performed by a GA with binary chromosomes. For the modification phase, these binary patterns are generated by the Local Binary Pattern (LBP) operator, while for the diffusion phase the binary chromosomes are obtained by Bit Plane Slicing (BPS). The initial population in the GA consists of the rows and columns of the input image. Instead of subjective selection of parents from this initial population, a random generator with a predefined key is utilized; this key is necessary to decrypt the coded image and reconstruct the initial input image. The fitness function is defined as the average number of 0-to-1 transitions in the LBP image for the modification phase and as histogram uniformity for the diffusion phase. The randomness of the encrypted image is measured by entropy, correlation coefficients and histogram analysis. Experimental results show that the proposed method is fast enough and can be used effectively for image encryption.
Keywords: Correlation coefficients, Genetic algorithm, Image encryption, Image entropy.
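For readers who want to reproduce the randomness metrics named above, the sketch below (not the authors' code) computes the image entropy and the adjacent-pixel correlation coefficient for an 8-bit grayscale cipher image; the test image is a random stand-in.

import numpy as np

def image_entropy(img):
    # Shannon entropy of the gray-level histogram (bits per pixel, ideal ~8).
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def adjacent_correlation(img, axis=1):
    # Correlation between horizontally (axis=1) or vertically (axis=0) adjacent pixels.
    a = img.astype(float)
    if axis == 1:
        x, y = a[:, :-1].ravel(), a[:, 1:].ravel()
    else:
        x, y = a[:-1, :].ravel(), a[1:, :].ravel()
    return np.corrcoef(x, y)[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cipher = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in cipher image
    print("entropy:", image_entropy(cipher))
    print("adjacent correlation:", adjacent_correlation(cipher))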
2351 Daemon-Based Distributed Deadlock Detection and Resolution
Authors: Z. RahimAlipour, A. T. Haghighat
Abstract:
Detecting deadlocks is one of the important problems in distributed systems, and different solutions have been proposed for it. Among the many deadlock detection algorithms, edge-chasing has been the most widely used. In an edge-chasing algorithm, a special message called a probe is created and sent along dependency edges. When the initiator of a probe receives its own probe back, the existence of a deadlock is revealed. However, these algorithms are not problem-free: they can miss some deadlocks and even report false deadlocks. A key point not addressed in the literature is how a process whose execution is blocked while it waits for the required resources can actually respond to probe messages in the system. The question of which process should be victimized in order to achieve better performance when multiple cycles pass through one single process has also received little attention. In this paper, one of the basic concepts of operating systems, the daemon, is used to solve these problems. The proposed algorithm sends probe messages to the mandatory daemons and collects enough information to effectively identify and resolve multi-cycle deadlocks in distributed systems.
Keywords: Distributed system, distributed deadlock detection and resolution, daemon, false deadlock.
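The basic edge-chasing idea referred to above can be illustrated with a short sketch: a probe travels along the wait-for edges, and the initiator declares a deadlock when its own probe returns. This is a minimal illustration only; the daemon-based scheme proposed in the paper is not reproduced here.

def edge_chasing(wait_for, initiator):
    # wait_for maps a blocked process to the processes it is waiting for.
    seen = set()
    frontier = list(wait_for.get(initiator, []))      # processes the initiator's probe reaches first
    while frontier:
        receiver = frontier.pop()
        if receiver == initiator:
            return True                               # probe came back: cycle (deadlock) detected
        if receiver in seen:
            continue
        seen.add(receiver)
        frontier.extend(wait_for.get(receiver, []))   # forward the probe along dependency edges
    return False

if __name__ == "__main__":
    wfg = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}   # P1 -> P2 -> P3 -> P1
    print(edge_chasing(wfg, "P1"))                     # True: deadlock involving the initiator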
2350 Gaze Patterns of Skilled and Unskilled Sight Readers Focusing on the Cognitive Processes Involved in Reading Key and Time Signatures
Authors: J. F. Viljoen, Catherine Foxcroft
Abstract:
Expert sight readers rely on their ability to recognize patterns in scores, their inner hearing and their prediction skills in order to perform complex sight reading exercises. They also have the ability to observe deviations from expected patterns in musical scores. This increases the “eye-hand span” (reading ahead of the point of playing) in order to process the elements in the score. The study aims to investigate the gaze patterns of expert and non-expert sight readers, focusing on key and time signatures. Twenty musicians were tasked with playing 12 sight reading examples composed for one hand and five examples composed for two hands on a piano keyboard. These examples were composed in different keys and time signatures and included accidentals and changes of time signature to test this theory. Results showed that the experts fixate more often and for longer on key and time signatures, as well as on deviations, in the examples for two hands than the non-expert group. The inverse was true for the examples for one hand, where expert sight readers showed fewer and shorter fixations on key and time signatures as well as deviations. This seems to suggest that experts focus more on the key and time signatures and on deviations in complex scores to facilitate sight reading. The examples written for one hand appeared to be too easy for the expert sight readers, compromising the gaze patterns.
Keywords: Cognition, eye tracking, musical notation, sight reading.
2349 Cooperative Sensing for Wireless Sensor Networks
Authors: Julien Romieux, Fabio Verdicchio
Abstract:
Wireless Sensor Networks (WSNs), which sense environmental data with battery-powered nodes, require multi-hop communication. This power-demanding task adds an extra workload that is unfairly distributed across the network. As a result, nodes run out of battery at different times, which requires an impractical individual node maintenance scheme. We therefore investigate a new Cooperative Sensing approach that extends the WSN operational life and allows a more practical network maintenance scheme (where all nodes deplete their batteries at almost the same time). We propose a novel cooperative algorithm that derives a piecewise representation of the sensed signal while controlling the approximation accuracy. Simulations show that our algorithm increases the WSN operational life and spreads the communication workload evenly. The results convey a counterintuitive conclusion: distributing the workload fairly amongst nodes may not decrease the network power consumption and yet extends the WSN operational life. This is achieved because our cooperative approach decreases the workload of the most burdened cluster in the network.
Keywords: Cooperative signal processing, power management, signal representation, signal approximation, wireless sensor networks.
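The compact signal representation referred to above can be illustrated with a piecewise-linear approximation under a controlled error bound; the sketch below shows the single-node version of that idea, not the cooperative, distributed algorithm proposed in the paper.

import numpy as np

def piecewise_linear(signal, max_err):
    # Greedy segmentation: extend each segment while the linear-fit error stays below max_err.
    breakpoints, start, n = [0], 0, len(signal)
    while start < n - 1:
        end = start + 1
        while end + 1 < n:
            xs = np.arange(start, end + 2)
            coeffs = np.polyfit(xs, signal[start:end + 2], 1)
            if np.max(np.abs(np.polyval(coeffs, xs) - signal[start:end + 2])) > max_err:
                break
            end += 1
        breakpoints.append(end)
        start = end
    return breakpoints  # indices delimiting the linear pieces

if __name__ == "__main__":
    t = np.linspace(0, 6, 300)
    x = np.sin(t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)
    print("number of segments:", len(piecewise_linear(x, max_err=0.05)) - 1)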
2348 A Review on Comparative Analysis of Path Planning and Collision Avoidance Algorithms
Authors: Divya Agarwal, Pushpendra S. Bharti
Abstract:
Autonomous mobile robots (AMRs) are expected to serve as smart tools for operations in every automation industry. Path planning and obstacle avoidance are the backbone of AMRs, as robots have to reach their goal location while avoiding obstacles and traversing an optimized path defined according to criteria such as distance, time or energy. Path planning can be classified into global and local path planning, where the environmental information is known and unknown/partially known, respectively. A number of sensors are used for data collection. Algorithms such as the artificial potential field (APF), rapidly exploring random trees (RRT), bidirectional RRT, fuzzy approaches, Pure Pursuit, the A* algorithm, the vector field histogram (VFH) and modified local path planning algorithms have been used in the last three decades for path planning and obstacle avoidance for AMRs. This paper attempts to review some of the path planning and obstacle avoidance algorithms used in the field of AMRs. The review includes a comparative analysis of simulations and mathematical computations of path planning and obstacle avoidance algorithms using MATLAB 2018a. From the review, it can be concluded that different algorithms may complete the same task (i.e. with a different set of instructions) in less or more time, space, effort, etc.
Keywords: Autonomous mobile robots, obstacle avoidance, path planning, and processing time.
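Of the algorithms surveyed above, the artificial potential field (APF) method is compact enough to sketch: the robot moves along the sum of an attractive force toward the goal and repulsive forces away from nearby obstacles. The gains, influence range and scenario below are illustrative, not taken from the reviewed comparisons.

import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, d0=2.0, step=0.05):
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                      # attractive component
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                             # repulsion acts only inside range d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

if __name__ == "__main__":
    p, goal, obstacles = [0.0, 0.0], [10.0, 10.0], [[5.0, 5.2]]
    for _ in range(400):
        p = apf_step(p, goal, obstacles)
    print("final position:", np.round(p, 2))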
2347 Unsupervised Feature Learning by Pre-Route Simulation of Auto-Encoder Behavior Model
Authors: Youngjae Jin, Daeshik Kim
Abstract:
This paper describes cycle-accurate simulation results for the weight values learned by an auto-encoder behavior model in pre-route simulation. Given the results, we visualized the first-layer representations with natural images. Many common deep learning threads have focused on learning high-level abstractions of unlabeled raw data by unsupervised feature learning. However, in the process of handling such a huge amount of data, the computational complexity and running time of the learning methods have limited advanced research. These limitations come from the fact that these algorithms were computed using only single-core CPUs. For this reason, parallel hardware, such as FPGAs, was seen as a possible solution to overcome these limitations. We adopted and simulated a ready-made auto-encoder to design a behavior model in Verilog HDL before designing the hardware. With the auto-encoder behavior model pre-route simulation, we obtained cycle-accurate results for the parameters of each hidden layer using ModelSim. Cycle-accurate results are a very important factor in designing parallel digital hardware. Finally, this paper shows the appropriate operation of behavior-model-based pre-route simulation. Moreover, we visualized the learned latent representations of the first hidden layer with the Kyoto natural image dataset.
Keywords: Auto-encoder, Behavior model simulation, Digital hardware design, Pre-route simulation, Unsupervised feature learning.
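The computation whose learned first-layer weights the behavior model simulates can be summarized with a small NumPy sketch of a tied-weight auto-encoder forward pass; the layer sizes and tied-weight choice are illustrative, and the Verilog HDL behavior model itself is not reproduced here.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_visible, n_hidden = 64, 25            # e.g. 8x8 image patches, 25 hidden units
W = rng.normal(0.0, 0.1, (n_hidden, n_visible))
b_hidden, b_visible = np.zeros(n_hidden), np.zeros(n_visible)

x = rng.random((10, n_visible))         # a small batch of input patches
h = sigmoid(x @ W.T + b_hidden)         # encoder: first-hidden-layer activations
x_hat = sigmoid(h @ W + b_visible)      # decoder with tied weights
print("reconstruction MSE:", np.mean((x - x_hat) ** 2))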
2346 A Hidden Markov Model-Based Isolated and Meaningful Hand Gesture Recognition
Authors: Mahmoud Elmezain, Ayoub Al-Hamadi, Jörg Appenrodt, Bernd Michaelis
Abstract:
Gesture recognition is a challenging task of extracting meaningful gestures from continuous hand motion. In this paper, we propose an automatic system that recognizes isolated gestures, as well as meaningful gestures from continuous hand motion, for Arabic numbers from 0 to 9 in real time based on Hidden Markov Models (HMMs). In order to handle isolated gestures, HMMs using Ergodic, Left-Right (LR) and Left-Right Banded (LRB) topologies are applied over the discrete feature vectors extracted from stereo color image sequences. These topologies are considered with different numbers of states ranging from 3 to 10. A new system is developed to recognize meaningful gestures based on zero-codeword detection with static velocity motion for continuous gestures. The LRB topology in conjunction with the Baum-Welch (BW) algorithm for training and the forward algorithm with the Viterbi path for testing presents the best performance. Experimental results show that the proposed system can successfully recognize isolated and meaningful gestures and achieves average recognition rates of 98.6% and 94.29%, respectively.
Keywords: Computer Vision & Image Processing, Gesture Recognition, Pattern Recognition, Application.
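The Left-Right Banded topology and the Viterbi decoding named above are illustrated below for a discrete-observation HMM with toy parameters; the models actually trained with Baum-Welch in the paper are not reproduced.

import numpy as np

def lrb_transitions(n_states, stay=0.6):
    # LRB topology: each state may only stay where it is or move to the next state.
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i], A[i, i + 1] = stay, 1.0 - stay
    A[-1, -1] = 1.0
    return A

def viterbi(obs, A, B, pi):
    # Most likely state path for the observation sequence obs (log domain).
    n_states, T = A.shape[0], len(obs)
    logA, logB, logpi = np.log(A + 1e-300), np.log(B + 1e-300), np.log(pi + 1e-300)
    delta = logpi + logB[:, obs[0]]
    psi = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(n_states)] + logB[:, obs[t]]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

if __name__ == "__main__":
    A = lrb_transitions(3)
    B = np.array([[0.7, 0.2, 0.1],        # emission probabilities per state
                  [0.1, 0.7, 0.2],
                  [0.1, 0.2, 0.7]])
    pi = np.array([1.0, 0.0, 0.0])        # left-right models start in the first state
    print(viterbi([0, 0, 1, 1, 2, 2], A, B, pi))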
2345 Adaptive Motion Estimator Based on Variable Block Size Scheme
Authors: S. Dhahri, A. Zitouni, H. Chaouch, R. Tourki
Abstract:
This paper presents an adaptive motion estimator that can be dynamically reconfigured with the best algorithm depending on the nature of the video during the runtime of an application. The 4 Step Search (4SS) and the Gradient Search (GS) algorithms are integrated in the estimator in order to be used for rapid and slow video sequences, respectively. The Full Search Block Matching (FSBM) algorithm has also been integrated in order to be used for video sequences that are not real-time oriented. In order to efficiently reduce the computational cost while achieving better visual quality with low power cost, the proposed motion estimator is based on a Variable Block Size (VBS) scheme that uses only the 16x16, 16x8, 8x16 and 8x8 modes. Experimental results show that the adaptive motion estimator gives better results in terms of Peak Signal to Noise Ratio (PSNR), computational cost, FPGA occupied area, and dissipated power relative to the most popular variable block size schemes presented in the literature.
Keywords: H264, Configurable Motion Estimator, Variable Block Size, PSNR, Dissipated power.
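The Full Search Block Matching baseline mentioned above reduces to an exhaustive SAD minimization over a search window; the sketch below shows it for a single 16x16 block, while the 4SS/GS fast searches and the variable-block-size mode decision are not reproduced.

import numpy as np

def full_search(ref, cur, top, left, block=16, radius=7):
    # Return the motion vector (dy, dx) minimizing the SAD for the block at (top, left).
    target = cur[top:top + block, left:left + block].astype(int)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(int)
            sad = np.abs(target - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(2, -3), axis=(0, 1))     # simulate a known displacement
    print(full_search(ref, cur, top=16, left=16))      # expected motion vector (-2, 3)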
2344 Aerodynamics and Optimization of Airfoil Under Ground Effect
Authors: Kyoungwoo Park, Byeong Sam Kim, Juhee Lee, Kwang Soo Kim
Abstract:
The prediction of aerodynamic characteristics and the shape optimization of an airfoil under the ground effect have been carried out by integrating computational fluid dynamics and a multi-objective Pareto-based genetic algorithm. The main flow characteristics around an airfoil of a WIG craft are the lift force, the lift-to-drag ratio and the static height stability (H.S). However, they show a strong trade-off, so it is not easy to satisfy the design requirements simultaneously. This difficulty can be resolved by optimal design. The three above-mentioned characteristics are chosen as the objective functions, and the NACA0015 airfoil is considered as the baseline model in the present study. The airfoil profile is constructed by Bezier curves with fourteen control points, and these control points are adopted as the design variables. For multi-objective optimization problems, the optimal solutions are not unique but a set of non-dominated optima, called Pareto frontiers or Pareto sets. As a result of the optimization, forty non-dominated Pareto optima were obtained after thirty generations.
Keywords: Aerodynamics, Shape optimization, Airfoil on WIG craft, Genetic algorithm, Computational fluid dynamics (CFD).
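The Bezier-curve parameterization used above as the design-variable encoding can be evaluated directly from its control points via the Bernstein form; the control points in the sketch are arbitrary placeholders, not the fourteen points of the optimized airfoil, and the CFD/GA loop is not shown.

import numpy as np
from math import comb

def bezier(control_points, n_samples=100):
    P = np.asarray(control_points, dtype=float)       # shape (n+1, 2)
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    curve = np.zeros((n_samples, 2))
    for i in range(n + 1):
        bernstein = comb(n, i) * t**i * (1 - t)**(n - i)
        curve += bernstein * P[i]
    return curve

if __name__ == "__main__":
    ctrl = [(0.0, 0.0), (0.1, 0.08), (0.5, 0.12), (0.9, 0.03), (1.0, 0.0)]  # toy upper surface
    xy = bezier(ctrl)
    print(xy[0], xy[-1])   # the curve interpolates the first and last control points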
2343 Optimisation of Intermodal Transport Chain of Supermarkets on Isle of Wight, UK
Authors: Jingya Liu, Yue Wu, Jiabin Luo
Abstract:
This work investigates an intermodal transportation system for delivering goods from a Regional Distribution Centre to supermarkets on the Isle of Wight (IOW) via the port of Southampton or Portsmouth in the UK. We consider this integrated logistics chain as a 3-echelon transportation system. In such a system, there are two types of transport methods used to deliver goods across the Solent Channel: one is accompanied transport, which is used by most supermarkets on the IOW, such as Spar, Lidl and Co-operative Food; the other is unaccompanied transport, which is used by Aldi. Five transport scenarios are studied based on different transport modes and ferry routes. The aim is to determine an optimal delivery plan for supermarkets of different business scales on the IOW, in order to minimise the total running cost, fuel consumption and carbon emissions. The problem is modelled as a vehicle routing problem with time windows and solved by a genetic algorithm. The computational results suggest that accompanied transport is more cost-efficient for small and medium business-scale supermarket chains on the IOW, while unaccompanied transport has the potential to improve the efficiency and effectiveness of large business-scale supermarket chains.
Keywords: Genetic algorithm, intermodal transport system, Isle of Wight, optimization, supermarket.
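A GA chromosome for this problem is a visiting order of stores, and its fitness can be scored as travel cost plus a penalty for violating a store's time window; the sketch below uses made-up distances, windows and penalty weights, not the IOW data or the full GA of the paper.

import random

DIST = {("D", "A"): 10, ("A", "B"): 6, ("B", "C"): 8, ("C", "D"): 12,
        ("D", "B"): 9, ("B", "A"): 6, ("A", "C"): 11, ("C", "A"): 11,
        ("D", "C"): 12, ("A", "D"): 10, ("B", "D"): 9, ("C", "B"): 8}
WINDOWS = {"A": (0, 15), "B": (10, 25), "C": (20, 40)}   # earliest/latest arrival times
SPEED, LATE_PENALTY = 1.0, 100.0

def route_cost(order, depot="D"):
    time, cost, prev = 0.0, 0.0, depot
    for stop in order:
        leg = DIST[(prev, stop)]
        cost += leg
        time += leg / SPEED
        earliest, latest = WINDOWS[stop]
        time = max(time, earliest)                   # wait if arriving early
        if time > latest:
            cost += LATE_PENALTY * (time - latest)   # penalize late arrival
        prev = stop
    return cost + DIST[(prev, depot)]                # return trip to the depot

if __name__ == "__main__":
    random.seed(0)
    population = [random.sample(["A", "B", "C"], 3) for _ in range(20)]
    best = min(population, key=route_cost)           # the selection step of a simple GA
    print(best, route_cost(best))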
2342 High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform
Authors: Hanaa M. Abdelgawad, Mona Safar, Ayman M. Wahba
Abstract:
Real-time image and video processing is in demand in many computer vision applications, e.g. video surveillance, traffic management and medical imaging. The processing of such video applications requires high computational power. Thus, the optimal solution is the collaboration of the CPU with hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Edge detection is one of the basic building blocks of video and image processing applications and a common block in the pre-processing phase of the image and video processing pipeline. Our approach targets offloading the Canny edge detection algorithm from the processing system (PS) to the programmable logic (PL), taking advantage of the High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration. The CPU utilization drops and the frame rate reaches 60 fps for a 1080p full HD input video stream.
Keywords: High Level Synthesis, Canny edge detection, Hardware accelerators, and Computer Vision.
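A CPU-side software reference for the offloaded kernel can be written in a few lines with OpenCV's Canny detector; this is only a functional reference, not the HLS/PL implementation, and the file name and thresholds are placeholders.

import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # placeholder input frame
if frame is None:
    raise SystemExit("frame.png not found")
blurred = cv2.GaussianBlur(frame, (5, 5), 1.4)            # noise-reduction stage
edges = cv2.Canny(blurred, 50, 150)                       # hysteresis thresholds (low, high)
cv2.imwrite("edges.png", edges)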
2341 An Ontology Based Question Answering System on Software Test Document Domain
Authors: Meltem Serhatli, Ferda N. Alpaslan
Abstract:
Processing data by computers and performing reasoning tasks is an important aim in Computer Science; the Semantic Web is one step towards it. The use of ontologies to semantically enrich information is the current trend. Huge amounts of domain-specific, unstructured online data need to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services. They also want to use their native languages to express what they are searching for. In this paper, an ontology-based automated question answering system on the software test document domain is presented. The system allows users to enter a question about the domain in natural language and returns the exact answer to the question. Conversion of the natural language question into an ontology-based query is the challenging part of the system. To achieve this, a new algorithm for converting free text into an ontology-based search engine query is proposed. The algorithm is based on identifying the question type and parsing the words of the question sentence.
Keywords: Description Logics, ontology, question answering, reasoning.
2340 Fault Detection and Diagnosis of Broken Bar Problem in Induction Motors Based on Wavelet Analysis and EMD Method: Case Study of Mobarakeh Steel Company in Iran
Authors: M. Ahmadi, M. Kafil, H. Ebrahimi
Abstract:
Nowadays, induction motors have a significant role in industries. Condition monitoring (CM) of this equipment has gained remarkable importance during recent years due to huge production losses, substantial imposed costs and increases in vulnerability, risk and uncertainty levels. Motor current signature analysis (MCSA) is one of the most important techniques in CM, and it can be used for rotor broken bar detection. Signal processing methods such as the fast Fourier transform (FFT), the wavelet transform and Empirical Mode Decomposition (EMD) are used for analyzing MCSA output data. In this study, these signal processing methods are used for broken bar detection in induction motors of the Mobarakeh Steel Company. Based on the wavelet transform method, an index for fault detection, CF, is introduced, defined as the variation of the maximum relative to the mean of the wavelet transform coefficients. We find that, in the broken bar condition, the CF factor is greater than in the healthy condition. Based on the EMD method, the energy of the intrinsic mode functions (IMFs) is calculated, and it is found that when motor bars become broken, the energy of the IMFs increases.
Keywords: Broken bar, condition monitoring, diagnostics, empirical mode decomposition, Fourier transform, wavelet transform.
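One reading of the CF index described above (the variation of the maximum relative to the mean of the wavelet coefficients) is sketched below for a synthetic stator-current signal; the signal, wavelet and decomposition level are illustrative, and PyWavelets is assumed to be available.

import numpy as np
import pywt

def cf_index(current, wavelet="db4", level=4):
    coeffs = pywt.wavedec(current, wavelet, level=level)
    detail = np.abs(np.concatenate(coeffs[1:]))          # detail coefficients only
    return detail.max() / detail.mean()

if __name__ == "__main__":
    fs = 5000
    t = np.arange(0, 1, 1 / fs)
    healthy = np.sin(2 * np.pi * 50 * t)
    # Broken-bar signatures appear as sidebands around the supply frequency.
    faulty = healthy + 0.05 * np.sin(2 * np.pi * 46 * t) + 0.05 * np.sin(2 * np.pi * 54 * t)
    print("CF healthy:", round(cf_index(healthy), 3), " CF faulty:", round(cf_index(faulty), 3))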
2339 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers
Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice
Abstract:
In recent years, there has been ongoing change and development in the telecommunications sector in the global market. In this sector, churn analysis techniques are commonly used for analysing why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector since it causes substantial business loss. Many companies carry out various research efforts in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed that aims to obtain the feature reducts to predict customer churn. The framework is a cost-based, optional pre-processing stage to remove redundant features for churn management. In addition, this cost-based feature selection algorithm is applied in a telecommunication company in Turkey, and the results obtained with this algorithm are reported.
Keywords: Churn prediction, data mining, decision-theoretic rough set, feature selection.
2338 A 3D Approach for Extraction of the Coronary Artery and Quantification of the Stenosis
Authors: Mahdi Mazinani, S. D. Qanadli, Rahil Hosseini, Tim Ellis, Jamshid Dehmeshki
Abstract:
Segmentation and quantification of stenosis is an important task in assessing coronary artery disease. One of the main challenges is measuring the real diameter of curved vessels. Moreover, uncertainty in the segmentation of different tissues in the narrow vessel is an important issue that affects accuracy. This paper proposes an algorithm to extract coronary arteries and measure the degree of stenosis. A Markovian fuzzy clustering method is applied to model the uncertainty arising from the partial volume effect. The algorithm comprises segmentation, centreline extraction, estimation of the plane orthogonal to the centreline, and measurement of the degree of stenosis. To evaluate the accuracy and reproducibility, the approach has been applied to a vascular phantom and the results are compared with the real diameter. The results for 10 patient datasets have been visually judged by a qualified radiologist. The results reveal the superiority of the proposed method compared to the conventional thresholding method (CTM) on both datasets.
Keywords: 3D coronary artery tree extraction, segmentation, quantification, fuzzy clustering, Markov random field.
2337 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm has an underestimation problem, which leads to estimation errors in the covariance matrix that cause beam distortion, so that the output pattern cannot form a dot-shape beam. It also suffers from main lobe deviation and high sidelobe levels in the case of limited snapshots. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace and the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, counteracting the divergence of the small eigenvalues in the noise subspace and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better performance.
Keywords: Multi-carrier frequency diverse array, adaptive beamforming, correction index, limited snapshot, robust.
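The correction step described above can be sketched generically: estimate a sample covariance from a few snapshots, exponentially correct the small (noise-subspace) eigenvalues, and form LCMV/MVDR-type weights. The half-wavelength uniform linear array and the correction exponent below are illustrative; the multi-carrier FDA angle-range steering model of the paper is not reproduced.

import numpy as np

def steering(n_elem, theta_deg):
    n = np.arange(n_elem)
    return np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))   # half-wavelength ULA

def corrected_weights(snapshots, a_desired, n_interf=1, alpha=0.5):
    n_elem = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]         # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                              # eigenvalues in ascending order
    n_noise = n_elem - n_interf
    eigval[:n_noise] = eigval[:n_noise] ** alpha                    # exponential correction of the
                                                                    # small noise-subspace eigenvalues
    R_corr = eigvec @ np.diag(eigval) @ eigvec.conj().T
    w = np.linalg.solve(R_corr, a_desired)
    return w / (a_desired.conj() @ w)                               # distortionless toward a_desired

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_elem, n_snap = 8, 10                                          # limited snapshots
    a0, a1 = steering(n_elem, 0.0), steering(n_elem, 30.0)
    interferer = a1[:, None] * (3.0 * rng.standard_normal(n_snap))
    noise = (rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap))) / np.sqrt(2)
    w = corrected_weights(interferer + noise, a0)
    print("gain toward 0 deg:", abs(w.conj() @ a0), " toward 30 deg:", abs(w.conj() @ a1))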
2336 Dynamic-Stochastic Influence Diagrams: Integrating Time-Sliced IDs and Discrete Event Systems Modeling
Authors: Xin Zhao, Yin-fan Zhu, Wei-ping Wang, Qun Li
Abstract:
Influence Diagrams (IDs) are a kind of probabilistic belief network for graphical modeling. The use of IDs can improve communication among field experts, modelers and decision makers by showing the issue frame under discussion from a high-level point of view. This paper enhances the Time-Sliced Influence Diagrams (TSIDs, also called Dynamic IDs) based formalism from a Discrete Event Systems Modeling and Simulation (DES M&S) perspective, for Exploring Analysis (EA) modeling. The enhancements enable a modeler to specify the occurrence times of endogenous events dynamically, with stochastic sampling as the model runs, and to describe the inter-influences among them with variable nodes in dynamic situations that the existing TSIDs fail to capture. The new class of model is named Dynamic-Stochastic Influence Diagrams (DSIDs). The paper includes a description of the modeling formalism and the hierarchy of simulators implementing its simulation algorithm, and presents a case study to illustrate the enhancements.
Keywords: Time-sliced influence diagrams, discrete event systems, dynamic-stochastic influence diagrams, modeling formalism, simulation algorithm.
2335 Complex Network Approach to International Trade of Fossil Fuel
Authors: Semanur Soyyiğit Kaya, Ercan Eren
Abstract:
Energy has a prominent role in the development of nations. Countries that have energy resources also have strategic power in the international trade of energy, since energy is essential for all stages of production in the economy. Thus, it is important for countries to analyze the weaknesses and strengths of the system. On the other hand, international trade is one of the fields that can be analyzed as a complex network via network analysis. Complex network analysis is one of the tools for analyzing complex systems with heterogeneous agents and the interactions between them. A complex network consists of nodes and the interactions between these nodes. In complex systems, the aggregate properties that emerge as a result of these interactions are distinct (more or less) from the sum of the small parts. Thus, standard approaches to international trade are too superficial to analyze these systems, whereas network analysis provides a new approach to analyze international trade as a network. In this network, countries constitute the nodes and trade relations (export or import) constitute the edges. It becomes possible to analyze the international trade network in terms of indicators specific to complex networks such as connectivity, clustering, assortativity/disassortativity, centrality, etc. In this analysis, the international trade of crude oil and coal, which are types of fossil fuel, has been analyzed from 2005 to 2014 via network analysis. First, it has been analyzed in terms of some topological parameters such as density, transitivity, clustering, etc. Afterwards, the fit to a Pareto distribution has been analyzed via the Kolmogorov-Smirnov test. Finally, the weighted HITS algorithm has been applied to the data as a centrality measure to determine the real prominence of countries in these trade networks. The weighted HITS algorithm is a strong tool for analyzing the network by ranking countries with regard to the prominence of their trade partners. We have calculated both an export centrality and an import centrality by applying the w-HITS algorithm to the data. As a result, the impacts of the trading countries are presented in terms of these indicators.
Keywords: Complex network approach, fossil fuel, international trade, network theory.
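The weighted HITS ranking used above can be reproduced with a short power iteration: hub scores play the role of export centrality and authority scores of import centrality. The toy trade matrix below is made up and stands in for the 2005-2014 crude oil and coal data analyzed in the paper.

import numpy as np

def weighted_hits(W, n_iter=100):
    # W[i, j] = weight (trade value) of the edge from exporter i to importer j.
    n = W.shape[0]
    hub, auth = np.ones(n), np.ones(n)
    for _ in range(n_iter):
        auth = W.T @ hub
        hub = W @ auth
        auth /= np.linalg.norm(auth)
        hub /= np.linalg.norm(hub)
    return hub, auth

if __name__ == "__main__":
    countries = ["A", "B", "C", "D"]
    W = np.array([[0, 5, 2, 0],
                  [0, 0, 4, 1],
                  [1, 0, 0, 6],
                  [0, 2, 0, 0]], dtype=float)
    hub, auth = weighted_hits(W)
    for c, h, a in zip(countries, hub, auth):
        print(f"{c}: export (hub) {h:.3f}, import (authority) {a:.3f}")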
2334 Feedrate Optimization for Ball-End Milling of Sculptured Surfaces Using Fuzzy Logic Controller
Authors: Njiri J. G., Ikua B. W., Nyakoe G. N.
Abstract:
Optimization of cutting parameters is important in precision machining with regard to the efficiency and surface integrity of the machined part. Usually, productivity and precision in machining are limited by the forces emanating from the cutting process. Due to the inherently varying nature of the workpiece in terms of geometry and material composition, the peak cutting forces vary from point to point during the machining process. In order to increase productivity without compromising machining accuracy, it is important to control these cutting forces. In this paper, a fuzzy logic control algorithm is developed that can be applied to the control of peak cutting forces in the milling of spherical surfaces using ball-end mills. The controller can adaptively vary the feedrate to maintain an allowable cutting force on the tool. This control algorithm is implemented on a computer numerical control (CNC) machine. It has been demonstrated that the controller can provide stable machining and improve the performance of the CNC milling process by varying the feedrate.
Keywords: Ball-end mill, feedrate, fuzzy logic controller, machining optimization, spherical surface.
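The control idea described above can be illustrated with a toy rule base: fuzzify the cutting-force error (reference minus measured force), apply a few rules and defuzzify to a feedrate change. The membership functions, rules and gains below are illustrative, not the controller designed in the paper.

def tri(x, a, b, c):
    # Triangular membership function with support [a, c] and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_feedrate_change(force_error):
    # Fuzzify the force error (N) into three linguistic terms.
    negative = tri(force_error, -200, -100, 0)      # measured force too high -> slow down
    zero = tri(force_error, -100, 0, 100)
    positive = tri(force_error, 0, 100, 200)        # force below the limit -> speed up
    # Each rule maps a term to a percentage change of feedrate.
    rules = [(negative, -20.0), (zero, 0.0), (positive, 20.0)]
    num = sum(mu * out for mu, out in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0                # weighted-average defuzzification

if __name__ == "__main__":
    for err in (-150, -50, 0, 80):
        print(err, "N ->", round(fuzzy_feedrate_change(err), 1), "% feedrate change")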
2333 Educational Use of Interactive Multimedia Based on Museum Collection
Authors: Ji-Hye Lee, Jongdeok Kim
Abstract:
This research investigates the use of digital technology, namely interactive multimedia, in effective art education provided by museums. Several multimedia experiences created for art education are examined as case studies of how they assist audience learning, within the context of existing theory in the field of interactive multimedia.
Keywords: E-learning, Fine Arts, Interactivity, Multimedia.
2332 Seismic Control of Tall Building Using a New Optimum Controller Based on GA
Authors: A. Shayeghi, H. Eimani Kalasar, H. Shayeghi
Abstract:
This paper emphasizes the application of a genetic algorithm (GA) to optimize the parameters of a tuned mass damper (TMD) for achieving the best results in the reduction of the building response under earthquake excitations. The Integral of Time multiplied Absolute Error (ITAE), based on the relative displacement of all floors in the building, is taken as the performance index of the optimization criterion. The problem of robust TMD controller design is formulated as an optimization problem based on the ITAE performance index, to be solved using the GA, which has a strong ability to find the most optimal results. An 11-story realistic building, located in the city of Rasht, Iran, is considered as a test system to demonstrate the effectiveness of the proposed GA-based TMD (GATMD) controller without specifying which mode should be controlled. The results of the proposed GATMD controller are compared with the uncontrolled structure through time-domain simulation and several performance indices. The analysis of the results reveals that the designed GA-based TMD controller has an excellent capability in reducing the response of the seismically excited example building, and that the ITAE performance index, which has so far remained largely unexplored in this context, can be introduced as a new criterion for structural dynamic design.
Keywords: Tuned Mass Damper, Genetic Algorithm, Tall Buildings, Structural Dynamics.
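The ITAE fitness used above is simply the integral of time multiplied by the absolute error; the sketch below evaluates it numerically, with a damped sinusoid standing in for a floor's simulated relative displacement (the building model itself is not reproduced).

import numpy as np

def itae(t, error):
    # Integral of Time multiplied Absolute Error, evaluated by the trapezoidal rule.
    return np.trapz(t * np.abs(error), t)

if __name__ == "__main__":
    t = np.linspace(0.0, 20.0, 2001)
    uncontrolled = 0.05 * np.exp(-0.02 * t) * np.sin(2 * np.pi * t)
    controlled = 0.05 * np.exp(-0.30 * t) * np.sin(2 * np.pi * t)     # extra damping from the TMD
    print("ITAE uncontrolled:", round(itae(t, uncontrolled), 4))
    print("ITAE controlled:  ", round(itae(t, controlled), 4))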
2331 Optimizing Electrospinning Parameters for Finest Diameter of Nano Fibers
Authors: M. Maleki, M. Latifi, M. Amani-Tehran
Abstract:
Nano fibers produced by electrospinning attract industrial and scientific attention due to their special characteristics such as long length, small diameter and high surface area. Applications of electrospun structures in nanotechnology include tissue scaffolds, fibers for drug delivery, composite reinforcement, chemical sensing, enzyme immobilization, membrane-based filtration, protective clothing, catalysis, solar cells, electronic devices and others. Many polymer and ceramic precursor nano fibers have been successfully electrospun with diameters in the range from 1 nm to several microns. The process is complex, and the fiber diameter is influenced by various material, design and operating parameters. The objective of this work is to apply a genetic algorithm to the electrospinning parameters that have the most significant effect on the nano fiber diameter, in order to determine the optimum parameter values before the experimental set-up. Effective factors including the initial polymer concentration, initial jet radius, electrical potential, relaxation time, initial elongation, viscosity and distance between the nozzle and collector are considered to determine the finest diameter, which is selected by the user.
Keywords: Electrospinning, genetic algorithm, nano fiber diameter, optimization.
2330 Performance Evaluation of Discrete Fourier Transform Algorithm Based PMU for Wide Area Measurement System
Authors: Alpesh Adeshara, Rajendrasinh Jadeja, Praghnesh Bhatt
Abstract:
Implementation of advanced technologies requires sophisticated instruments that deal with the operation, control, restoration and protection of the rapidly growing power system network under normal and abnormal conditions. Presently, Phasor Measurement Units (PMUs) are widely applied in the real-time operation, monitoring, control and analysis of the power system network, as they eliminate the various limitations of the supervisory control and data acquisition (SCADA) system conventionally used in power systems. The use of PMU data is rapidly gaining importance for online and offline analysis. The wide area measurement system (WAMS) has been developed as a new technology through the use of multiple PMUs in the power system. The present paper proposes a Matlab-based PMU model using the Discrete Fourier Transform (DFT) algorithm and evaluates its operation under different contingencies. In this paper, a PMU-based two-bus system with a WAMS network is presented as a case study.
Keywords: DFT-Discrete Fourier Transform, GPS-Global Positioning System, PMU-Phasor Measurement Unit, WAMS-Wide Area Monitoring System.
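The one-cycle DFT phasor estimate at the heart of such a PMU can be sketched in a few lines: from N samples spanning one fundamental cycle, the RMS magnitude and phase angle of the 50 Hz component are recovered. The sampling setup is illustrative, not the Matlab model of the paper.

import numpy as np

def dft_phasor(samples):
    # Phasor of the fundamental from exactly one cycle of N samples.
    N = len(samples)
    n = np.arange(N)
    X1 = np.sum(samples * np.exp(-2j * np.pi * n / N)) * 2.0 / N     # fundamental DFT bin
    return np.abs(X1) / np.sqrt(2.0), np.degrees(np.angle(X1))       # RMS magnitude, angle in degrees

if __name__ == "__main__":
    f0, samples_per_cycle = 50.0, 32
    fs = f0 * samples_per_cycle
    t = np.arange(samples_per_cycle) / fs
    v = 230.0 * np.sqrt(2.0) * np.cos(2 * np.pi * f0 * t + np.deg2rad(30.0))
    print(dft_phasor(v))                                             # approximately (230.0, 30.0)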
2329 Spectral Amplitude Coding Optical CDMA: Performance Analysis of PIIN Reduction Using VC Code Family
Authors: Hassan Yousif Ahmed, Ibrahima Faye, N.M.Saad, S.A. Aljined
Abstract:
Multi-user interference (MUI) is the main reason for system deterioration in Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) systems. MUI increases with the number of simultaneous users, resulting in a higher bit error probability and limiting the maximum number of simultaneous users. On the other hand, the phase induced intensity noise (PIIN) problem, which originates from the spontaneous emission of the broadband source under MUI and severely limits the system performance, should be addressed as well. Since the MUI is caused by the interference of simultaneous users, reducing the MUI value as much as possible is desirable. In this paper, an extensive study of the system performance in terms of MUI and PIIN reduction is presented. Vector Combinatorial (VC) code families are adopted as signature sequences for the performance analysis, and a comparison with reported codes is performed. The results show that, when the received power increases, the PIIN noise for all the codes increases linearly. The results also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio and bit error probability. A comparative study between the proposed code and existing codes such as Modified Frequency Hopping (MFH) and Modified Quadratic-Congruence (MQC) has been carried out.
Keywords: FBG, MUI, PIIN, SAC-OCDMA, VCC.
2328 Main Control Factors of Fluid Loss in Drilling and Completion in Shunbei Oilfield by Unmanned Intervention Algorithm
Authors: Peng Zhang, Lihui Zheng, Xiangchun Wang, Xiaopan Kou
Abstract:
Quantitative research on the main control factors of lost circulation has received little attention and has typically relied on a single data source. Using the Unmanned Intervention Algorithm to find the main control factors of lost circulation makes use of all measurable parameters. The degree of lost circulation is characterized by the loss rate, which is taken as the objective function. Geological, engineering and fluid data are used as layers, and 27 factors such as wellhead coordinates and Weight on Bit (WOB) are used as dimensions. Data classification is implemented to determine the independent variables of the function. The mathematical equation relating the loss rate to the 27 influencing factors is established by the multiple regression method, and the undetermined coefficient method is used to solve for the undetermined coefficients of the equation. Only three factors in the t-test are greater than the test value of 40, and the F-test value is 96.557%, indicating that the correlation of the model is good. The funnel viscosity, final shear force and drilling time were selected as the main control factors by the elimination method, the contribution rate method and the functional method. The calculated values for the two wells used for verification differ from the actual values by -3.036 m3/h and -2.374 m3/h, with errors of 7.21% and 6.35%. The influence of engineering factors on the loss rate is greater than that of funnel viscosity and final shear force, and the influence of these three factors is less than that of geological factors. The best combination of funnel viscosity, final shear force and drilling time is obtained through quantitative calculation. The minimum loss rate of lost circulation wells in the Shunbei area is 10 m3/h. It can be seen that man-made main control factors can only slow down the leakage, but cannot fundamentally eliminate it. This is in line with the characteristics of karst caves and fractures in the Shunbei fault-solution oil and gas reservoir.
Keywords: Drilling fluid, loss rate, main controlling factors, Unmanned Intervention Algorithm.
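The multiple-regression step described above amounts to a least-squares fit of the loss rate against the measured factors; the sketch below uses synthetic data for three of the 27 factors, and the elimination/contribution-rate analysis of the paper is not reproduced.

import numpy as np

rng = np.random.default_rng(0)
n = 200
funnel_viscosity = rng.uniform(40, 90, n)      # s
final_shear = rng.uniform(5, 25, n)            # Pa
drilling_time = rng.uniform(1, 30, n)          # h
# Synthetic relation plus noise, for illustration only.
loss_rate = (2.0 + 0.15 * funnel_viscosity - 0.30 * final_shear
             + 0.40 * drilling_time + rng.normal(0, 1.0, n))

X = np.column_stack([np.ones(n), funnel_viscosity, final_shear, drilling_time])
coeffs, _, _, _ = np.linalg.lstsq(X, loss_rate, rcond=None)   # least-squares coefficients
pred = X @ coeffs
r2 = 1 - np.sum((loss_rate - pred) ** 2) / np.sum((loss_rate - loss_rate.mean()) ** 2)
print("coefficients:", np.round(coeffs, 3), "R^2:", round(r2, 3))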
2327 Batch and Continuous Packed Column Studies of Biosorption by Yeast Supported onto Granular Pozzolana
Authors: A. Djafer, S. Kouadri Moustefai, A. Idou, M. Douani
Abstract:
The removal of chromium by living yeast biomass immobilized onto pozzolana was studied. The results obtained in batch experiments indicate that the yeast immobilized onto pozzolana is an excellent biosorbent of Cr(VI), with good removal rates of 85–90%. The initial solution concentration and agitation speed affected Cr(VI) removal. The batch study data were described using the Freundlich and Langmuir models, but the best fit was obtained with the Langmuir model. The breakthrough curve from the continuous flow studies shows that the immobilized yeast in the fixed-bed column is capable of decreasing the Cr(VI) concentration from 15 mg/l to an adequate level.
Keywords: Biosorption, yeast, chromium, kinetic biosorption, fixed biomass
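Fitting the Langmuir and Freundlich isotherms mentioned above to batch equilibrium data is a small curve-fitting exercise; the data points below are made up for illustration, and SciPy is assumed to be available.

import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 15.0])       # residual concentration, mg/L
qe = np.array([2.1, 3.6, 5.5, 7.4, 8.9, 9.8])        # uptake at equilibrium, mg/g

pL, _ = curve_fit(langmuir, Ce, qe, p0=[10.0, 0.5])
pF, _ = curve_fit(freundlich, Ce, qe, p0=[3.0, 2.0])
for name, model, p in [("Langmuir", langmuir, pL), ("Freundlich", freundlich, pF)]:
    ss_res = np.sum((qe - model(Ce, *p)) ** 2)
    ss_tot = np.sum((qe - qe.mean()) ** 2)
    print(name, "parameters:", np.round(p, 3), "R^2:", round(1 - ss_res / ss_tot, 4))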
2326 Establishing a New Simple Formula for Buckling Length Factor (K) of Rigid Frame Columns
Authors: Ehab Hasan Ahmed Hasan Ali
Abstract:
The calculation of the buckling length factor (K) for steel frame columns is a major and governing process in determining the dimensions of steel frame column cross sections during design. The buckling length of steel frame columns has a direct effect on the cost (weight) of the cross section used. A formula that determines the buckling length factor (K) in a simplified way is therefore required. In this research, a new formula for the buckling length factor (K) was established by an accurate method for a limited interval of column end rigidities (GA, GB). The new formula can be used with ease to evaluate the buckling length factor without needing complicated equations or difficult charts.
Keywords: Buckling length, New formula, Curve fitting, Simplification, Steel column design.
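The new simplified formula itself is not given in the abstract. As a point of reference, the classical route it is meant to replace solves the sway-frame alignment-chart equation (GA*GB*(pi/K)^2 - 36) / (6*(GA + GB)) = (pi/K) / tan(pi/K) numerically for K given the end restraint factors GA and GB; a minimal sketch is shown below, with SciPy assumed to be available.

import numpy as np
from scipy.optimize import brentq

def k_factor_sway(GA, GB):
    # Solve the standard sway-frame alignment-chart equation for K (K > 1).
    def f(K):
        eta = np.pi / K
        return (GA * GB * eta**2 - 36.0) / (6.0 * (GA + GB)) - eta / np.tan(eta)
    return brentq(f, 1.0001, 100.0)

if __name__ == "__main__":
    for GA, GB in [(1.0, 1.0), (2.0, 2.0), (10.0, 10.0)]:
        print(f"GA={GA}, GB={GB} -> K = {k_factor_sway(GA, GB):.3f}")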
2325 Effect of Peak-to-Average Power Ratio Reduction on the Multicarrier Communication System Performance Parameters
Authors: Sanjay Singh, M Sathish Kumar, H. S Mruthyunjaya
Abstract:
Multicarrier transmission systems such as Orthogonal Frequency Division Multiplexing (OFDM) are a promising technique for high bit rate transmission in wireless communication systems. OFDM is a spectrally efficient modulation technique that can achieve high-speed data transmission over multipath fading channels without the need for powerful equalization techniques. However, the price paid for this high spectral efficiency and less intensive equalization is low power efficiency. OFDM signals are very sensitive to nonlinear effects due to their high Peak-to-Average Power Ratio (PAPR), which leads to power inefficiency in the RF section of the transmitter. This paper investigates the effect of PAPR reduction on the performance parameters of a multicarrier communication system. The performance parameters considered are the power consumption of the Power Amplifier (PA) and the Digital-to-Analog Converter (DAC), the power amplifier efficiency, the SNR of the DAC and the BER performance of the system. From our analysis, it is found that irrespective of the PAPR reduction technique employed, the power consumption of the PA and DAC decreases and the power amplifier efficiency increases due to the reduction in PAPR. Moreover, it has been shown that for a given BER performance, the required Input Back-Off (IBO) decreases with the reduction in PAPR.
Keywords: BER, Crest Factor (CF), Digital-to-Analog Converter (DAC), Input Back-Off (IBO), Orthogonal Frequency Division Multiplexing (OFDM), Peak-to-Average Power Ratio (PAPR), Power Amplifier efficiency, SNR.
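The PAPR that drives these results can be measured directly: map random QPSK data onto the subcarriers, take the IFFT and compute peak power over mean power per OFDM symbol. The subcarrier count and modulation below are illustrative.

import numpy as np

def ofdm_papr_db(n_subcarriers=256, n_symbols=1000, seed=0):
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, (n_symbols, n_subcarriers, 2))
    qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)
    x = np.fft.ifft(qpsk, axis=1)                    # time-domain OFDM symbols
    power = np.abs(x) ** 2
    papr = power.max(axis=1) / power.mean(axis=1)
    return 10 * np.log10(papr)

if __name__ == "__main__":
    papr_db = ofdm_papr_db()
    print("mean PAPR: %.2f dB, 99th percentile: %.2f dB"
          % (papr_db.mean(), np.percentile(papr_db, 99)))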
2324 Optimal Supplementary Damping Controller Design for TCSC Employing RCGA
Authors: S. Panda, S. C. Swain, A. K. Baliarsingh, C. Ardil
Abstract:
Optimal supplementary damping controller design for the Thyristor Controlled Series Compensator (TCSC) is presented in this paper. For the proposed controller design, a multi-objective fitness function consisting of both the damping factor and the real part of the system electromechanical eigenvalue is used, and a Real-Coded Genetic Algorithm (RCGA) is employed to obtain the optimal supplementary controller parameters. The performance of the designed supplementary TCSC-based damping controller is tested on a weakly connected power system under different disturbances, loading conditions and parameter variations. Simulation results are presented and compared with those of a conventional power system stabilizer and of the TCSC-based supplementary controller with non-optimized parameters, to show the effectiveness and robustness of the proposed approach over a wide range of loading conditions and disturbances.
Keywords: Power System Oscillations, Real-Coded Genetic Algorithm (RCGA), Thyristor Controlled Series Compensator (TCSC), Damping Controller, Power System Stabilizer.
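One way such a fitness can be scored inside an RCGA loop is sketched below: from the electromechanical eigenvalue of a candidate controller, penalize a real part (damping factor) and a damping ratio that miss their targets. The eigenvalues and target values are illustrative placeholders, not the linearized system of the paper.

import numpy as np

def damping_fitness(eigenvalue, sigma_target=-1.0, zeta_target=0.1):
    sigma, omega = eigenvalue.real, eigenvalue.imag
    zeta = -sigma / np.hypot(sigma, omega)                 # damping ratio
    # Quadratic penalty when the real part or the damping ratio misses its target.
    return max(0.0, sigma - sigma_target) ** 2 + max(0.0, zeta_target - zeta) ** 2

if __name__ == "__main__":
    candidates = {"non-optimized": -0.15 + 6.2j, "optimized": -1.30 + 6.0j}
    for name, lam in candidates.items():
        print(name, "fitness (lower is better):", round(damping_fitness(lam), 4))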
2323 A Strategy for a Robust Design of Cracked Stiffened Panels
Authors: Francesco Caputo, Giuseppe Lamanna, Alessandro Soprano
Abstract:
This work is focused on the numerical prediction of the fracture resistance of a flat stiffened panel made of the aluminium alloy 2024-T3 under a monotonic traction condition. The performed numerical simulations are based on the micromechanical Gurson-Tvergaard (GT) model for ductile damage. The applicability of the GT model to this kind of structural problem has been studied and assessed by comparing numerical results, obtained using the WARP3D finite element code, with experimental data available in the literature. In the sequel, a home-made procedure is presented, which aims to increase the residual strength of a cracked stiffened aluminium panel and which is based on the stochastic design improvement (SDI) technique; a full application example is then given to illustrate the technique.
Keywords: Residual strength, R-Curve, Gurson model, SDI.