Search results for: Bimodal processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1595

725 GPU Implementation for Solving Incompressible Two-Phase Flows

Authors: Sheng-Hsiu Kuo, Pao-Hsiung Chiu, Reui-Kuo Lin, Yan-Ting Lin

Abstract:

A one-step conservative level set method, combined with a global mass correction method, is developed in this study to simulate incompressible two-phase flows. The present framework does not need to solve the conservative level set scheme in two separate steps, and the global mass can be exactly conserved, which makes the method more efficient than the two-step conservative level set scheme. Dispersion-relation-preserving schemes are utilized for the advection terms. The pressure Poisson equation solver is applied to GPU computation using the pCDR library developed by the National Center for High-Performance Computing, Taiwan. SMP parallelization is used to accelerate the rest of the calculations. Three benchmark problems were solved for the performance evaluation, and good agreement with the reference solutions is demonstrated for all the investigated problems.
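
As a rough illustration of the global mass correction idea (a sketch only, not the authors' exact scheme), the snippet below shifts a level set field by a constant so that the enclosed mass matches a reference value; the smoothed Heaviside form, the bisection bracket, and the tolerance are assumptions.

import numpy as np

def heaviside(phi, eps=1.5):
    # Smoothed Heaviside of the level set field (assumed form)
    h = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi > eps, 1.0, np.where(phi < -eps, 0.0, h))

def global_mass_correction(phi, target_mass, dx, tol=1e-10):
    # Find a constant shift c so that the mass of (phi + c) equals target_mass
    lo, hi = -1.0, 1.0          # assumed bracket for the shift
    c = 0.0
    for _ in range(100):
        c = 0.5 * (lo + hi)
        mass = heaviside(phi + c).sum() * dx * dx
        if abs(mass - target_mass) < tol:
            break
        if mass < target_mass:
            lo = c              # a larger shift enlarges the phi > 0 region
        else:
            hi = c
    return phi + c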

Keywords: Conservative level set method, two-phase flow, dispersion-relation-preserving, graphics processing unit (GPU), multi-threading.

PDF Downloads: 2065
724 Optical Flow Based Moving Object Detection and Tracking for Traffic Surveillance

Authors: Sepehr Aslani, Homayoun Mahdavi-Nasab

Abstract:

Automated motion detection and tracking is a challenging task in traffic surveillance. In this paper, a system is developed to gather useful information from stationary cameras for detecting moving objects in digital videos. The motion detection and tracking system is developed based on optical flow estimation, together with the application and combination of various relevant computer vision and image processing techniques to enhance the process. A median filter is used to remove noise, and unwanted objects are removed by applying thresholding algorithms within morphological operations. Object type restrictions are also set using blob analysis. The results show that the proposed system successfully detects and tracks moving objects in urban videos.
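
A minimal OpenCV sketch of the kind of pipeline described above (dense optical flow, thresholding of the motion magnitude, median filtering, morphological cleanup, and blob analysis); the parameter values and the minimum blob area are illustrative assumptions, not the authors' settings.

import cv2
import numpy as np

def detect_moving_objects(prev_gray, gray, mag_thresh=1.0, min_area=150):
    # Dense optical flow between two consecutive grayscale frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Threshold the motion magnitude, then suppress noise with a median filter
    mask = (mag > mag_thresh).astype(np.uint8) * 255
    mask = cv2.medianBlur(mask, 5)

    # Morphological opening/closing to remove small artifacts and fill holes
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Blob analysis: keep connected components above a minimum area
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]   # (x, y, w, h) boxes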

Keywords: Optical flow estimation, moving object detection, tracking, morphological operation, blob analysis.

PDF Downloads: 10156
723 Incremental Learning of Independent Topic Analysis

Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda

Abstract:

In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of documents. The number of documents has been increasing since the spread of the Internet, and ITA was proposed as one method to analyze such data. ITA extracts independent topics from document data by using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing collection, because ITA must use all of the document data, so the temporal and spatial cost is very high. Therefore, we present Incremental ITA, which extracts independent topics from an increasing number of documents by updating the topics whenever new documents are added after the topics have been extracted from the previous data. We also show the results of applying Incremental ITA to benchmark datasets.
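
One way to read the ICA step is sketched below with scikit-learn's FastICA applied to a term-document matrix: each independent component, viewed as a signal over the vocabulary, is reported through its highest-weighted terms. The TF-IDF preprocessing and the number of topics are assumptions, and the incremental update itself is not shown.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.text import TfidfVectorizer

def extract_independent_topics(documents, n_topics=10, top_terms=10):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(documents).toarray()      # documents x terms (dense for clarity)
    ica = FastICA(n_components=n_topics, random_state=0)
    S = ica.fit_transform(X.T)                      # terms x topics: independent signals over terms
    terms = np.array(vec.get_feature_names_out())
    return [terms[np.argsort(np.abs(S[:, k]))[::-1][:top_terms]]
            for k in range(n_topics)]               # top terms per independent topic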

Keywords: Text mining, topic extraction, independent, incremental, independent component analysis.

PDF Downloads: 1059
722 New Multisensor Data Fusion Method Based on Probabilistic Grids Representation

Authors: Zhichao Zhao, Yi Liu, Shunping Xiao

Abstract:

A new data fusion method called the joint probability density matrix (JPDM) is proposed, which can associate and fuse measurements from spatially distributed heterogeneous sensors to identify the real target in a surveillance region. Using the probabilistic grids representation, we numerically combine the uncertainty regions of all the measurements in a general framework. The NP-hard multisensor data fusion problem is thus converted into a peak picking problem on the grids map. Unlike most of the existing data fusion methods, the JPDM method does not need association processing and does not lead to combinatorial explosion. Its convergence to the CRLB with a diminishing grid size has been proved. Simulation results are presented to illustrate the effectiveness of the proposed technique.
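
A toy numpy sketch of the probabilistic grid idea: each sensor contributes a likelihood surface over a common grid, the surfaces are combined cell by cell (here in log space), and the target estimate is read off as the peak. The Gaussian range-measurement model and the grid extent are assumptions.

import numpy as np

def fuse_on_grid(sensor_positions, ranges, sigma, grid_size=200, extent=100.0):
    xs = np.linspace(0.0, extent, grid_size)
    ys = np.linspace(0.0, extent, grid_size)
    gx, gy = np.meshgrid(xs, ys)

    log_joint = np.zeros_like(gx)
    for (sx, sy), r in zip(sensor_positions, ranges):
        dist = np.hypot(gx - sx, gy - sy)
        # Gaussian likelihood of each grid cell given this sensor's range measurement
        log_joint += -0.5 * ((dist - r) / sigma) ** 2

    # Peak picking: the cell with the highest joint likelihood is the estimate
    iy, ix = np.unravel_index(np.argmax(log_joint), log_joint.shape)
    return xs[ix], ys[iy]

# Three sensors observing a target near (60, 40)
estimate = fuse_on_grid([(0, 0), (100, 0), (0, 100)],
                        [np.hypot(60, 40), np.hypot(40, 40), np.hypot(60, 60)],
                        sigma=2.0)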

Keywords: Cramer-Rao lower bound (CRLB), data fusion, probabilistic grids, joint probability density matrix, localization, sensor network.

PDF Downloads: 1803
721 Single Frame Supercompression of Still Images, Video, High Definition TV and Digital Cinema

Authors: Mario Mastriani

Abstract:

Super-resolution is nowadays used to produce a high-resolution image from several low-resolution noisy frames. In this work, we consider the problem of high-quality interpolation of a single noise-free image. Such images may come from different sources, i.e., they may be frames of videos, individual pictures, etc. In the encoder we apply a downsampling via bidimensional interpolation of each frame, and in the decoder we apply an upsampling that restores the original size of the image. If the compression ratio is very high, we then use a convolutive mask that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. In fact, the mentioned mask is coded inside the texture memory of a GPGPU.
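
A crude sketch of the encode/decode round trip (downsampling by bidimensional interpolation, upsampling back, then a convolutive mask that restores edges); the 3x3 sharpening kernel and the scale factor are assumptions, and no GPGPU code is shown.

import cv2
import numpy as np

def supercompress_roundtrip(img, scale=4):
    h, w = img.shape[:2]
    # Encoder: downsample via bidimensional (bicubic) interpolation
    small = cv2.resize(img, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    # Decoder: upsample back to the original size
    restored = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)
    # Convolutive mask that sharpens edges and reduces blur (assumed kernel)
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=np.float32)
    return cv2.filter2D(restored, -1, kernel)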

Keywords: General-Purpose computation on Graphics Processing Units, Image Compression, Interpolation, Super-resolution.

PDF Downloads: 1999
720 Development and Optimization of Automated Dry-Wafer Separation

Authors: Tim Giesen, Christian Fischmann, Fabian Böttinger, Alexander Ehm, Alexander Verl

Abstract:

In a state-of-the-art industrial production line of photovoltaic products, the handling and automation processes are of particular importance. While a fully functional crystalline solar cell is being processed, an as-cut photovoltaic wafer is subject to numerous repeated handling steps. With respect to stronger requirements in productivity and decreasing rejections due to defects, the mechanical stress on the thin wafers has to be reduced to a minimum, as fragility increases with decreasing wafer thickness. In relation to the increasing wafer fragility, research at the Fraunhofer Institutes IPA and CSP showed a negative correlation between multiple handling processes and wafer integrity. Recent work therefore focused on the analysis and optimization of the dry wafer stack separation process with compressed air. Achieving a wafer-sensitive process capability and a high production throughput rate is the basic motivation of this research.

Keywords: Automation, Photovoltaic Manufacturing, Thin Wafer, Material Handling

PDF Downloads: 1671
719 Digital Watermarking Based on Visual Cryptography and Histogram

Authors: R. Rama Kishore, Sunesh

Abstract:

Nowadays, a robust and secure watermarking algorithm and its optimization have become the need of the hour. A watermarking algorithm is presented to achieve copyright protection for the owner based on visual cryptography, the histogram shape property, and entropy. Both the host image and the watermark are preprocessed: the host image with a Butterworth filter and the watermark with visual cryptography. Applying visual cryptography to the watermark generates two shares; one share is used for embedding the watermark, and the other is used for resolving any dispute with the aid of a trusted authority. The use of the histogram shape makes the process more robust against geometric and signal processing attacks. The combination of visual cryptography, the Butterworth filter, histogram, and entropy makes the algorithm more robust and imperceptible and strengthens the copyright protection of the owner.

Keywords: Butterworth filter, digital watermarking, histogram, visual cryptography.

PDF Downloads: 1678
718 Prediction of Tool and Nozzle Flow Behavior in Ultrasonic Machining Process

Authors: Vinod Kumar, Jatinder Kumar

Abstract:

The use of hard and brittle materials has become increasingly extensive in recent years, so processing these materials for parts fabrication has become a challenging problem. It is time-consuming to machine hard brittle materials with the traditional metal-cutting technique that uses abrasive wheels, and the tool suffers excessive wear as well. However, if ultrasonic energy is applied to the machining process and coupled with the use of hard abrasive grits, hard and brittle materials can be machined effectively. The ultrasonic machining process is mostly used for brittle materials. The present research work develops models using a finite element approach to predict the mechanical stresses and strains produced in the tool during the ultrasonic machining process. The flow behavior of the abrasive slurry coming out of the nozzle has also been simulated using the ANSYS CFX module. Different abrasives of different grit sizes were used for the experimental work.

Keywords: Stress, MRR, Flow, Ultrasonic Machining

PDF Downloads: 2810
717 High Performance VLSI Architecture of 2D Discrete Wavelet Transform with Scalable Lattice Structure

Authors: Juyoung Kim, Taegeun Park

Abstract:

In this paper, we propose a fully-utilized, block-based 2D DWT (discrete wavelet transform) architecture, which consists of four 1D DWT filters with a two-channel QMF lattice structure. The proposed architecture requires about 2MN-3N registers to save the intermediate results for higher level decomposition, where M and N stand for the filter length and the row width of the image, respectively. Furthermore, the proposed 2D DWT processes in the horizontal and vertical directions simultaneously without an idle period, so that it computes the DWT for an N×N image in a period of N^2(1-2^(-2J))/3. Compared to the existing approaches, the proposed architecture shows 100% hardware utilization and high throughput rates. To mitigate the long critical path delay due to the cascaded lattices, we can apply a four-stage pipeline technique while retaining 100% hardware utilization. The proposed architecture can be applied in real-time video signal processing.
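
For concreteness, a small calculation of the storage and cycle-count figures quoted above (the values of M, N and J are only an example, not from the paper).

# Illustrative figures for the quoted complexity estimates
M, N, J = 9, 512, 3                        # filter length, image row width, decomposition levels
registers = 2 * M * N - 3 * N              # intermediate storage: 2MN - 3N = 7680 registers
cycles = N**2 * (1 - 2**(-2 * J)) / 3      # processing period: N^2 (1 - 2^(-2J)) / 3 = 86016 cycles
print(registers, cycles)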

Keywords: discrete wavelet transform, VLSI architecture, QMF lattice filter, pipelining.

PDF Downloads: 1782
716 Classifier Combination Approach in Motion Imagery Signals Processing for Brain Computer Interface

Authors: Homayoon Zarshenas, Mahdi Bamdad, Hadi Grailu, Akbar A. Shakoori

Abstract:

In this study we focus on improving the performance of a cue-based Motor Imagery Brain Computer Interface (BCI). For this purpose, a data fusion approach is applied to the results of different classifiers to make the best decision. In the first step, the Distinction Sensitive Learning Vector Quantization method is used as a feature selection method to determine the most informative frequencies in the recorded signals, and its performance is evaluated by a frequency search method. The informative features are then extracted by the wavelet packet transform. In the next step, five different types of classification methods are applied. The methodologies are tested on BCI Competition II dataset III; the best obtained accuracy is 85% and the best kappa value is 0.8. In the final step, the ordered weighted averaging (OWA) method is used to properly aggregate the classifiers' outputs. Using OWA enhances the system accuracy to 95% and the kappa value to 0.9, while the OWA calculation itself takes only 50 milliseconds.
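
A minimal sketch of the ordered weighted averaging step used to aggregate the classifier outputs; the example scores and the weight vector are illustrative assumptions (in the study the weights would be chosen to maximize accuracy).

import numpy as np

def owa(scores, weights):
    # Ordered weighted averaging: sort the scores in descending order,
    # then take the weighted sum with a fixed weight vector summing to 1.
    ordered = np.sort(np.asarray(scores, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert ordered.shape == weights.shape and np.isclose(weights.sum(), 1.0)
    return float(np.dot(ordered, weights))

# Five classifiers report the probability of one motor imagery class
print(owa([0.9, 0.6, 0.8, 0.55, 0.7], [0.3, 0.25, 0.2, 0.15, 0.1]))   # 0.755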

Keywords: BCI, EEG, Classifier, Fuzzy operator, OWA.

PDF Downloads: 1876
715 Water Vapor Plasma Torch: Design, Characteristics and Applications

Authors: A. Tamošiūnas, P. Valatkevičius, V. Grigaitiene, V. Valinčius

Abstract:

An atmospheric pressure plasma torch with a direct current arc discharge stabilized by a water vapor vortex was experimentally investigated. Water vapor superheated up to 450 K was used as the plasma-forming gas. The plasma torch design is one of the most important factors leading to stable operation of the device. The electrical and thermal characteristics of the plasma torch were determined during the experimental investigations, and the design and basic characteristics of the water vapor plasma torch are presented in the paper. Plasma torches with the electric arc stabilized by a water vapor vortex provide special performance characteristics in some plasma processing applications, such as thermal plasma neutralization and destruction of organic wastes, enabling the extraction of high-calorific-value synthesis gas as a by-product of the process. The syngas could be used as a surrogate fuel, partly reducing dependence on fossil fuels, or as a feedstock for hydrogen and methanol production.

Keywords: Arc discharge, atmospheric pressure thermal plasma, plasma torch, water vapor.

PDF Downloads: 4482
714 Renovation of Industrial Zones in Ho Chi Minh City: An Approach from Changing Function of Processing to Urban Warehousing

Authors: Thu Le Thi Bao

Abstract:

Industrial parks play an active role in promoting economic development, but they are also a source of boarding houses and slums in adjacent areas lacking infrastructure, which causes many social problems. The recent pandemic and climate change on a global scale pose issues that need to be resolved for sustainable development. Ho Chi Minh City aims to develop housing for migrant workers to stabilize human resources and, at the same time, to solve the problems of social evils caused by poor living conditions. The paper focuses on renovating existing industrial parks and worker accommodation in Ho Chi Minh City in order to propose appropriate models, contributing to the goal of urban embellishment and to solutions that allow industrial parks to adapt to Decree 29/2008/ND-CP and to abnormal impact conditions such as pandemics, climate change, and crises.

Keywords: Industrial park, social housing, accommodation, distribution centre.

PDF Downloads: 194
713 Color Image Segmentation and Multi-Level Thresholding by Maximization of Conditional Entropy

Authors: R.Sukesh Kumar, Abhisek Verma, Jasprit Singh

Abstract:

In this work, a novel approach for color image segmentation using higher-order entropy as a textural feature for determining thresholds over a two-dimensional image histogram is discussed. A similar approach is applied to achieve multi-level thresholding in both grayscale and color images. The paper discusses two methods of color image segmentation using the RGB space as the standard processing space. The threshold for segmentation is decided by the maximization of conditional entropy in the two-dimensional histograms of the color image separated into the three grayscale images of R, G, and B. The features are first developed independently for the three (R, G, B) spaces and then combined to obtain the color component segmentation. Considering local maxima instead of the maximum of conditional entropy yields multiple thresholds for the same image, which forms the basis for multilevel thresholding.
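
A simplified sketch of maximum-entropy thresholding on a 2D (gray value, local mean) histogram, applied to one color plane at a time; this single-threshold variant and the 3x3 neighborhood are assumptions, not the exact formulation of the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def max_entropy_threshold_2d(channel):
    # 2D histogram of pixel value vs. local 3x3 mean for one grayscale plane (R, G or B)
    local_mean = uniform_filter(channel.astype(float), size=3)
    hist, _, _ = np.histogram2d(channel.ravel(), local_mean.ravel(),
                                bins=256, range=[[0, 256], [0, 256]])
    p = hist / hist.sum()

    def quadrant_entropy(q):
        s = q.sum()
        if s <= 0:
            return 0.0
        q = q[q > 0] / s
        return float(-(q * np.log(q)).sum())

    # Pick the threshold maximizing the sum of background and object quadrant entropies
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        h = quadrant_entropy(p[:t, :t]) + quadrant_entropy(p[t:, t:])
        if h > best_h:
            best_t, best_h = t, h
    return best_t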

Keywords: conditional entropy, multi-level thresholding, segmentation, two dimensional image histogram

PDF Downloads: 2998
712 Arabic Word Semantic Similarity

Authors: Faaza A, Almarsoomi, James D, O'Shea, Zuhair A, Bandar, Keeley A, Crockett

Abstract:

This paper is concerned with the production of an Arabic word semantic similarity benchmark dataset, the first of its kind for Arabic, developed specifically to assess the accuracy of word semantic similarity measurements. Semantic similarity is an essential component of numerous applications in fields such as natural language processing, artificial intelligence, linguistics, and psychology. Most of the reported work has been done for English, and to the best of our knowledge there is no word similarity measure developed specifically for Arabic. In this paper, an Arabic benchmark dataset of 70 word pairs is presented. New methods and the best available techniques have been used to produce the Arabic dataset, including selecting and creating the materials, collecting human ratings from a representative sample of participants, and calculating the overall ratings. This dataset will make a substantial contribution to future work in the field of Arabic word semantic similarity (WSS) and will hopefully be considered a reference basis from which to evaluate and compare different methodologies in the field.

Keywords: Arabic categories, benchmark dataset, semantic similarity, word pair, stimulus Arabic words

PDF Downloads: 3107
711 Nature Inspired Metaheuristic Algorithms for Multilevel Thresholding Image Segmentation - A Survey

Authors: C. Deepika, J. Nithya

Abstract:

Segmentation is one of the essential tasks in image processing, and thresholding is one of the simplest techniques for performing image segmentation. Multilevel thresholding is a simple and effective technique. The primary objective of bi-level or multilevel thresholding for image segmentation is to determine the best threshold values, and various techniques have been proposed to achieve this. In this survey, we study several nature-inspired metaheuristic algorithms for multilevel thresholding in image segmentation: the particle swarm optimization (PSO) algorithm, artificial bee colony optimization (ABC), the ant colony optimization (ACO) algorithm, and the cuckoo search (CS) algorithm.

Keywords: Ant colony optimization, Artificial bee colony optimization, Cuckoo search algorithm, Image segmentation, Multilevel thresholding, Particle swarm optimization.

PDF Downloads: 3523
710 A Bionic Approach to Dynamic, Multimodal Scene Perception and Interpretation in Buildings

Authors: Rosemarie Velik, Dietmar Bruckner

Abstract:

Today, building automation is advancing from simple monitoring and control tasks for lighting and heating towards more and more complex applications that require a dynamic perception and interpretation of the different scenes occurring in a building. Current approaches cannot handle these newly emerging demands. In this article, a bionically inspired approach for multimodal, dynamic scene perception and interpretation is presented, which is based on neuroscientific and neuropsychological research findings about the perceptual system of the human brain. The approach relies on data from diverse sensory modalities being processed in a so-called neuro-symbolic network. With its parallel structure and with its basic elements being information processing and storage units at the same time, a very efficient method for scene perception is provided, overcoming the problems and bottlenecks of classical dynamic scene interpretation systems.

Keywords: building automation, biomimetics, dynamic scene interpretation, human-like perception, neuro-symbolic networks.

PDF Downloads: 1617
709 Remote Sensing, GIS, and AHP for Assessing Physical Vulnerability to Tsunami Hazard

Authors: Abu Bakar Sambah, Fusanori Miura

Abstract:

Remote sensing image processing, spatial data analysis through a GIS approach, and the analytic hierarchy process were combined in this study to assess the vulnerability and inundation areas due to tsunami hazard in Rikuzentakata, Iwate Prefecture, Japan. Appropriate input parameters were derived from GSI DEM data, ALOS AVNIR-2 imagery, and field data; we used elevation, slope, shoreline distance, and vegetation density. Five classes of vulnerability were defined and weighted via a pairwise comparison matrix. The assessment results show that 14.35 km² of the study area lies within the tsunami vulnerability zone, with the inundation areas being those of high and slightly high vulnerability. The farthest area reached by the tsunami was about 7.50 km from the shoreline, which shows that rivers act as flooding strips that transport tsunami waves into the hinterland. This study can be used for setting priorities in land-use planning within the scope of tsunami hazard risk management.
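
A small sketch of the pairwise-comparison weighting step: AHP weights are taken from the principal eigenvector of the comparison matrix and checked with a consistency ratio. The example judgments for the four parameters are invented for illustration.

import numpy as np

def ahp_weights(pairwise, random_index=0.90):      # 0.90 is Saaty's random index for n = 4
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                    # weights from the principal eigenvector
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)            # consistency index
    return w, ci / random_index                     # weights and consistency ratio

# Illustrative judgments for elevation, slope, shoreline distance, vegetation density
M = [[1,   2,   3,   5],
     [1/2, 1,   2,   3],
     [1/3, 1/2, 1,   2],
     [1/5, 1/3, 1/2, 1]]
weights, cr = ahp_weights(M)                        # cr < 0.1 indicates acceptable consistency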

Keywords: AHP, GIS, remote sensing, tsunami vulnerability.

PDF Downloads: 3335
708 Real Time Object Tracking in H.264/AVC Using Polar Vector Median and Block Coding Modes

Authors: T. Kusuma, K. Ashwini

Abstract:

This paper presents a real time video surveillance system that is capable of tracking multiple objects in real time using the Polar Vector Median (PVM) and Block Coding Modes (BCM) with Global Motion Compensation (GMC). The strategy works in the compressed domain, utilizing the motion vectors and BCM from the compressed bit stream to perform real time object tracking. We propose to do this on the basis of the neighboring Motion Vectors (MVs) using a method called PVM. Since global motion adds to the object's native motion, it is important for accurate tracking to remove it from the MV field prior to further processing. The proposed method is tested on a number of standard sequences, and the results show its advantages over some current methods.

Keywords: Block coding mode, global motion compensation, object tracking, polar vector median, video surveillance.

PDF Downloads: 748
707 Parallel Double Splicing on Iso-Arrays

Authors: V. Masilamani, D.K. Sheena Christy, D.G. Thomas

Abstract:

Image synthesis is an important area in image processing, and various systems have been proposed in the literature to synthesize images. In this paper, we propose a bio-inspired system to synthesize images and, to study its generating power, we define the class of languages generated by the system. We refer to an image as an array, and we use a primitive called an iso-array to synthesize an image/array. The operation is double splicing on iso-arrays; the double splicing operation is used in DNA computing, and we use it here to synthesize images. A comparison of the family of languages generated by the proposed self-restricted double splicing systems on iso-arrays with the existing family of local iso-picture languages is made. Certain closure properties such as union, concatenation, and rotation are studied for the family of languages generated by the proposed model.

Keywords: DNA computing, splicing system, iso-picture languages, iso-array double splicing system, iso-array self splicing.

PDF Downloads: 1545
706 Edge Detection Using Multi-Agent System: Evaluation on Synthetic and Medical MR Images

Authors: A. Nachour, L. Ouzizi, Y. Aoura

Abstract:

Recent developments in multi-agent systems have opened a new research field in image processing. Several algorithms are used simultaneously and improved in different applications while new methods are investigated. This paper presents a new automatic method for edge detection using several agents and many different actions. The proposed multi-agent system is based on parallel agents that locally perceive their environment, that is to say, pixels and additional environmental information. This environment is built using Vector Field Convolution, which attracts free agents to the edges. Problems of partial or hidden edges and of edge linking are solved through cooperation between agents. The presented method was implemented and evaluated using several examples of different synthetic and medical images. The obtained experimental results confirm the efficiency and accuracy of the detected edges.

Keywords: Edge detection, medical MR images, multi-agent systems, vector field convolution.

PDF Downloads: 1904
705 Tree-on-DAG for Data Aggregation in Sensor Networks

Authors: Prakash G L, Thejaswini M, S H Manjula, K R Venugopal, L M Patnaik

Abstract:

Computing and maintaining network structures for efficient data aggregation incurs high overhead for dynamic events where the set of nodes sensing an event changes with time. Moreover, structured approaches are sensitive to the waiting time used by nodes to wait for packets from their children before forwarding the packet to the sink. An optimal routing and data aggregation scheme for wireless sensor networks is proposed in this paper. We propose Tree on DAG (ToD), a semi-structured approach that uses dynamic forwarding on an implicitly constructed structure composed of multiple shortest path trees to support network scalability. The key principle behind ToD is that adjacent nodes in a graph will have low stretch in one of these trees, thus resulting in early aggregation of packets. Based on simulations of a 2,000-node Mica2-based network, we conclude that efficient aggregation in large-scale networks can be achieved by our semi-structured approach.

Keywords: Aggregation, Packet Merging, Query Processing.

PDF Downloads: 1931
704 Sub-Image Detection Using Fast Neural Processors and Image Decomposition

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

In this paper, an approach to reduce the computation steps required by fast neural networks for the searching process is presented. The principle of the divide and conquer strategy is applied through image decomposition: each image is divided into small sub-images, and then each one is tested separately using a fast neural network. The operation of fast neural networks is based on applying cross correlation in the frequency domain between the input image and the weights of the hidden neurons. Compared to conventional and fast neural networks, experimental results show that a speed-up ratio is achieved when applying this technique to locate human faces automatically in cluttered scenes. Furthermore, faster face detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time with the same number of fast neural networks. In contrast to using only fast neural networks, the speed-up ratio increases with the size of the input image when using fast neural networks and image decomposition.
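
The frequency-domain cross correlation that the fast networks rely on can be sketched with FFTs; this shows the generic operation and a naive tile-wise decomposition, not the authors' network, and it assumes each sub-image is larger than the weight kernel.

import numpy as np

def cross_correlate_fft(image, weights):
    # Circular 2D cross correlation: pointwise product with the conjugate kernel spectrum
    H, W = image.shape
    f_img = np.fft.fft2(image)
    f_ker = np.conj(np.fft.fft2(weights, s=(H, W)))
    return np.real(np.fft.ifft2(f_img * f_ker))

def correlate_subimages(image, weights, tile=64):
    # Divide-and-conquer: split the image into tiles and correlate each one separately
    H, W = image.shape
    return [cross_correlate_fft(image[r:r + tile, c:c + tile], weights)
            for r in range(0, H, tile) for c in range(0, W, tile)]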

Keywords: Fast Neural Networks, 2D-FFT, Cross Correlation, Image decomposition, Parallel Processing.

PDF Downloads: 2179
703 CSOLAP (Continuous Spatial On-Line Analytical Processing)

Authors: Taher Omran Ahmed, Abdullatif Mihdi Buras

Abstract:

Decision support systems are usually based on multidimensional structures, which use the concept of a hypercube. Dimensions are the axes on which facts are analyzed and form a space where a fact is located by a set of coordinates at the intersections of members of dimensions. Conventional multidimensional structures deal with discrete facts linked to discrete dimensions. However, when dealing with natural continuous phenomena, the discrete representation is not adequate; there is a need to integrate spatiotemporal continuity within multidimensional structures to enable analysis and exploration of continuous field data. The research issues that lead to such an integration are numerous. In this paper, we discuss research issues related to the integration of continuity in multidimensional structures, briefly present a multidimensional model for continuous field data, and define new aggregation operations. The model and the associated operations and measures are validated by a prototype.

Keywords: Continuous Data, Data warehousing, Decision Support, SOLAP

PDF Downloads: 1595
702 Wireless Distributed Load-Shedding Management System for Non-Emergency Cases

Authors: Taha Landolsi, A. R. Al-Ali, Tarik Ozkul, Mohammad A. Al-Rousan

Abstract:

In this paper, we present a cost-effective wireless distributed load shedding system for non-emergency scenarios. In power transformer locations where a SCADA system cannot be used, the proposed solution provides a reasonable alternative that combines the use of microcontrollers and the existing GSM infrastructure to send early warning SMS messages to users, advising them to proactively reduce their power consumption before system capacity is reached and a systematic power shutdown takes place. A novel communication protocol and message set have been devised to handle the messaging between the transformer sites, where the microcontrollers are located and where the measurements take place, and the central processing site where the database server is hosted. Moreover, the system sends warning messages to the end users' mobile devices, which are used as communication terminals. The system has been implemented and tested, and different experimental results are presented.

Keywords: Smart Grid, Load shedding, Demand Side Management, GSM Wireless Networks, SCADA systems.

PDF Downloads: 2628
701 Laboratory Scale Extraction of Sugar Cane using High Electric Field Pulses

Authors: M. N. Eshtiaghi, N. Yoswathana

Abstract:

The aim of this study was to extract sugar from sugarcane using high electric field pulses (HELP) as a non-thermal cell permeabilization method. The results of this study showed that it is possible to permeabilize sugar cane cells using HELP in very short times (less than 10 s) and at room temperature. Increasing the field strength (from 0.5 kV/cm to 2 kV/cm) and the pulse number (1 to 12) led to increased permeabilization of the sugar cane cells. The energy consumption during HELP treatment of sugar cane (2.4 kJ/kg) was about 100 times less than that of thermal cell disintegration at 85 °C (about 271.7 kJ/kg). In addition, it was possible to extract sugar cane at a moderate temperature (45 °C) using HELP pretreatment. With a combination of HELP pretreatment followed by thermal extraction at 75 °C, extraction yielded up to 3% more sugar (on the basis of total extractable sugar) compared to samples without HELP pretreatment.

Keywords: Cell permeabilization, High electric field pulses, Non-thermal processing, Sugar cane extraction.

PDF Downloads: 2748
700 Unsupervised Feature Selection Using Feature Density Functions

Authors: Mina Alibeigi, Sattar Hashemi, Ali Hamzeh

Abstract:

Since dealing with high dimensional data is computationally complex and sometimes even intractable, several feature reduction methods have recently been developed to reduce the dimensionality of the data in order to simplify analysis in various applications such as text categorization, signal processing, image retrieval, and gene expression analysis. Among feature reduction techniques, feature selection is one of the most popular methods due to its preservation of the original features. In this paper, we propose a new unsupervised feature selection method that removes redundant features from the original feature space by using the probability density functions of the various features. To show the effectiveness of the proposed method, popular feature selection methods have been implemented and compared. Experimental results on several datasets derived from the UCI repository illustrate the effectiveness of our proposed method in comparison with the other methods in terms of both classification accuracy and the number of selected features.

Keywords: Feature, Feature Selection, Filter, Probability Density Function

PDF Downloads: 2077
699 Analytical and Finite Element Analysis of Hydroforming Deep Drawing Process

Authors: Maziar Ramezani, Thomas Neitzert

Abstract:

This paper gives an overview of a deep drawing process by pressurized liquid medium separated from the sheet by a rubber diaphragm. Hydroforming deep drawing processing of sheet metal parts provides a number of advantages over conventional techniques. It generally increases the depth to diameter ratio possible in cup drawing and minimizes the thickness variation of the drawn cup. To explore the deformation mechanism, analytical and numerical simulations are used for analyzing the drawing process of an AA6061-T4 blank. The effects of key process parameters such as coefficient of friction, initial thickness of the blank and radius between cup wall and flange are investigated analytically and numerically. The simulated results were in good agreement with the results of the analytical model. According to finite element simulations, the hydroforming deep drawing method provides a more uniform thickness distribution compared to conventional deep drawing and decreases the risk of tearing during the process.

Keywords: Deep drawing, Hydroforming, Rubber diaphragm

PDF Downloads: 2908
698 Fabrication and Characterization of Gelatin Nanofibers Dissolved in Concentrated Acetic Acid

Authors: Kooshina Koosha, Sima Habibi, Azam Talebian

Abstract:

Electrospinning is a simple, versatile, and widely accepted technique for producing ultra-fine fibers ranging from the nanometer to the micron scale. Recently there has been great interest in developing this technique to produce nanofibers with novel properties and functionalities. The electrospinning field is extremely broad, and consequently there have been many useful reviews discussing various aspects, from the detailed fiber formation mechanism to the formation of nanofibers and a wide range of applications. The focus of this study, on the other hand, is quite narrow, highlighting electrospinning parameters. This work briefly covers the solution and processing parameters (for instance, concentration, solvent type, voltage, flow rate, and the distance between the collector and the tip of the needle) impacting the morphological characteristics of nanofibers, such as diameter. In this paper, a comprehensive study is presented on producing nanofibers from the natural polymer gelatin.

Keywords: Electrospinning, solution parameters, process parameters, natural fiber.

PDF Downloads: 1348
697 Native Language Identification with Cross-Corpus Evaluation Using Social Media Data: 'Reddit'

Authors: Yasmeen Bassas, Sandra Kuebler, Allen Riddell

Abstract:

Native Language Identification (NLI) is one of the growing subfields in Natural Language Processing (NLP). The task is mainly concerned with predicting the native language of an author's writing in a second language. In this paper, we investigate the performance of two types of features, content-based features vs. content-independent features, when they are evaluated on a different corpus (using social media data from Reddit). In this NLI task, the predefined models are trained on one corpus (TOEFL) and the trained models are then evaluated on different data from an external corpus (Reddit). Three classifiers are used in this task: the baseline, a linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content-independent ones when tested both within corpus and across corpora.
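
A minimal scikit-learn sketch of the cross-corpus setup (train on one corpus, evaluate on another); the word n-gram TF-IDF features stand in for the content-based features, and the specific settings are assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

def cross_corpus_nli(train_texts, train_langs, test_texts, test_langs):
    # Train on one corpus (e.g. TOEFL essays), evaluate on another (e.g. Reddit posts)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),   # content-based word n-grams (assumed)
        LogisticRegression(max_iter=1000),
    )
    model.fit(train_texts, train_langs)
    return accuracy_score(test_langs, model.predict(test_texts))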

Keywords: NLI, NLP, content-based features, content independent features, social media corpus, ML.

PDF Downloads: 416
696 Behavioral Signature Generation using Shadow Honeypot

Authors: Maros Barabas, Michal Drozd, Petr Hanacek

Abstract:

A novel behavioral detection framework is proposed to detect zero-day buffer overflow vulnerabilities (based on network behavioral signatures) using zero-day exploits, instead of the signature-based or anomaly-based detection solutions currently available for IDPS techniques. First, we present the detection model, which uses a shadow honeypot. Our system is used for the online processing of network attacks and for generating a behavior detection profile. The detection profile represents a dataset of 112 types of metrics describing the exact behavior of malware in the network. In this paper we present examples of generating behavioral signatures for two attacks: a buffer overflow exploit on an FTP server and the well-known Conficker worm. We demonstrate the visualization of important aspects by showing the differences between valid behavior and the attacks. Based on these metrics we can detect attacks with a very high probability of success; the detection process is, however, very expensive.

Keywords: behavioral signatures, metrics, network, security design

PDF Downloads: 2054