Search results for: Search Algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2095

295 Automatic Detection of Defects in Ornamental Limestone Using Wavelets

Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas

Abstract:

A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects depending on the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The algorithm is based on wavelet decomposition executed on two instances of the original image, to detect both hypotheses – dark and clear defects. The existence and/or size of these defects are the gauge to classify the quality grade of the stone products. The tuning of parameters possible in the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimension of defects allowed.
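A minimal sketch of the two-hypothesis idea, not the authors' exact pipeline: wavelet detail coefficients are thresholded on the image and on its negative so that both dark and clear anomalies are flagged. The wavelet, decomposition level and threshold factor are illustrative assumptions.

import numpy as np
import pywt

def defect_map(gray, wavelet="db2", level=2, k=3.0):
    """Return a boolean mask of candidate defect pixels (illustrative sketch)."""
    mask = np.zeros_like(gray, dtype=bool)
    for img in (gray, gray.max() - gray):            # dark and clear hypotheses
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
        detail = coeffs[1]                           # coarsest detail sub-bands
        energy = sum(np.abs(d) for d in detail)      # |LH| + |HL| + |HH|
        thr = energy.mean() + k * energy.std()       # data-driven threshold
        hits = energy > thr
        # upsample the coarse detection back to image resolution
        up = np.kron(hits, np.ones((2 ** level, 2 ** level))) > 0
        mask |= up[: gray.shape[0], : gray.shape[1]]
    return mask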

Keywords: Automatic detection, wavelets, defects, fracture lines.

294 Porul: Option Generation and Selection and Scoring Algorithms for a Tamil Flash Card Game

Authors: Anitha Narasimhan, Aarthy Anandan, Madhan Karky, C. N. Subalalitha

Abstract:

Games can be excellent tools for teaching a language. There are a few e-learning games in Indian languages, such as word scrabble, crossword and quiz games, which were developed mainly for educational purposes. This paper proposes a Tamil word game called "Porul", which focuses on education as well as on players' thinking and decision-making skills. Porul is a multiple-choice quiz game in which the players attempt to answer questions correctly from the given options, which are generated using a unique algorithm called the Option Selection algorithm that explores the semantics of the question in various dimensions, namely synonym, rhyme and Universal Networking Language semantic category. This kind of semantic exploration of the question not only increases the complexity of the game but also makes it more interesting. The paper also proposes a Scoring Algorithm which allots a score based on the popularity score of the question word. The proposed game has been tested using 20,000 Tamil words.
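A toy sketch of the option-selection and scoring ideas, not the paper's exact algorithms: distractors are drawn from the question word's synonym, rhyme and semantic-category pools so wrong options stay semantically close, and rarer (less popular) words earn more points. The pool dictionaries and weighting are assumptions.

import random

def generate_options(word, synonyms, rhymes, same_category, n_options=4):
    pools = [synonyms.get(word, []), rhymes.get(word, []), same_category.get(word, [])]
    distractors = []
    for pool in pools:
        candidates = [w for w in pool if w != word and w not in distractors]
        if candidates:
            distractors.append(random.choice(candidates))
    options = distractors[: n_options - 1] + [word]
    random.shuffle(options)
    return options

def score(word, popularity, base=100):
    # popularity maps a word to a relative frequency; rarer words score higher
    return round(base / (1.0 + popularity.get(word, 0.0)))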

Keywords: Porul game, Tamil word game, option selection, flash card, scoring, algorithm.

293 ISC–Intelligent Subspace Clustering, A Density Based Clustering Approach for High Dimensional Dataset

Authors: Sunita Jahirabadkar, Parag Kulkarni

Abstract:

Many real-world data sets have a very high dimensional feature space. Most clustering techniques use the distance or similarity between objects as a measure to build clusters, but in high dimensional spaces distances between points become relatively uniform. In such cases, density based approaches may give better results. Subspace clustering algorithms automatically identify lower dimensional subspaces of the higher dimensional feature space in which clusters exist. In this paper, we propose a new clustering algorithm, ISC – Intelligent Subspace Clustering, which tries to overcome three major limitations of existing state-of-the-art techniques. First, ISC determines the input parameter, such as the ε-distance, at various levels of subspace clustering, which helps in finding meaningful clusters; a uniform-parameter approach is not suitable for different kinds of databases. Second, ISC implements dynamic and adaptive determination of meaningful clustering parameters based on a hierarchical filtering approach. Third, and most important, ISC supports incremental learning and dynamic inclusion and exclusion of subspaces, which leads to better cluster formation.

Keywords: Density based clustering, high dimensional data, subspace clustering, dynamic parameter setting.

292 Constructing a Bayesian Network for Solar Energy in Egypt Using Life Cycle Analysis and Machine Learning Algorithms

Authors: Rawaa H. El-Bidweihy, Hisham M. Abdelsalam, Ihab A. El-Khodary

Abstract:

In an era where machines run and shape our world, the need for a stable, non-ending source of energy emerges. This study focuses on solar energy in Egypt as a renewable source. The most important factors that could affect solar energy's market share throughout its life cycle production were analyzed and filtered, and the relationships between them were derived before structuring a Bayesian network. In addition, forecasting models were built for multiple factors to predict their states in Egypt by 2035, based on historical data and patterns, to be used as the nodes' states in the network. Thirty-seven factors were initially found to potentially have an impact on the use of solar energy; these were reduced to the 12 factors considered most relevant to solar energy's life cycle in Egypt, based on expert surveys and data analysis, and some of the factors were found to recur in multiple stages. The presented Bayesian network could later be used for scenario and decision analysis of using solar energy in Egypt as a stable renewable source for generating any type of energy needed.
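A minimal sketch of the forecasting step using a SARIMA-family model from statsmodels; the annual series and the (p, d, q)(P, D, Q, s) orders below are placeholders, not the paper's fitted models or data.

import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical annual series for one factor (e.g. electricity demand)
history = pd.Series(
    [30.1, 31.4, 32.0, 33.8, 35.2, 36.9, 38.1, 39.7, 41.0, 42.6],
    index=pd.period_range("2011", periods=10, freq="Y"),
)
model = SARIMAX(history, order=(1, 1, 1), seasonal_order=(0, 0, 0, 0))
fitted = model.fit(disp=False)
forecast = fitted.forecast(steps=2035 - 2020)   # extend the series out to 2035
print(forecast.round(1))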

Keywords: ARIMA, auto correlation, Bayesian network, forecasting models, life cycle, partial correlation, renewable energy, SARIMA, solar energy.

291 Statistical Distributions of the Lapped Transform Coefficients for Images

Authors: Vijay Kumar Nath, Deepika Hazarika, Anil Mahanta

Abstract:

Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT, with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and availability of fast implementation algorithms. However, no proper study has been reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ²-test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. Knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers that may lead to minimum distortion and hence achieve optimal coding efficiency.
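A sketch of the goodness-of-fit procedure on a synthetic coefficient band; in the paper the data would be a sub-band of LOT/LBT coefficients of a real image, and the bin count and degrees-of-freedom correction here are assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
coeffs = stats.gennorm.rvs(beta=0.7, scale=5.0, size=5000, random_state=rng)

# Fit a Generalized Gaussian and run the KS test against the fitted model
beta, loc, scale = stats.gennorm.fit(coeffs, floc=0)
ks_stat, p_value = stats.kstest(coeffs, "gennorm", args=(beta, loc, scale))
print(f"shape={beta:.2f}  KS statistic={ks_stat:.4f}  p-value={p_value:.3f}")

# Chi-square test against the fitted model on equal-width bins
counts, edges = np.histogram(coeffs, bins=50)
expected = np.diff(stats.gennorm.cdf(edges, beta, loc, scale))
expected *= counts.sum() / expected.sum()        # rescale so totals match
chi2_stat, chi2_p = stats.chisquare(counts, expected, ddof=3)
print(f"chi-square={chi2_stat:.1f}  p-value={chi2_p:.3f}")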

Keywords: Lapped orthogonal transform, Lapped biorthogonal transform, Image compression, KS test, χ²-test.

290 Scenarios for a Sustainable Energy Supply: Results of a Case Study for Austria

Authors: Petra Wächter

Abstract:

A comprehensive discussion of feasible strategies for sustainable energy supply is urgently needed to achieve a turnaround of the current energy situation. The fundamentals required for the development of a long-term energy vision are lacking to a great extent due to the absence of reasonable long-term scenarios that fulfill the requirements of climate protection and sustainable energy use. The contribution of this study is based on a search for sustainable long-run energy paths for Austria. The analysis predominantly makes use of secondary data. The measures developed to avoid CO2 emissions and other ecological risk factors vary to a great extent among economic sectors, as shown by the calculation of CO2 abatement cost curves. The study demonstrates that the most effective technical measures with the lowest CO2 abatement costs yield solutions to the current energy problems. Various scenarios are presented concerning how the technological and environmental options for a sustainable energy system for Austria could look in the long run. It is shown how sustainable energy can be supplied even with today's technological knowledge and the options available. The scenarios developed include an evaluation of the economic costs and ecological impacts. The results are not only applicable to Austria but demonstrate feasible and cost-efficient paths towards a sustainable future.

Keywords: Cost of CO2 Abatement, Energy Economics, Energy Efficiency, Renewable Energy Technologies, Sustainable Energy and Development.

289 Clustering Categorical Data Using the K-Means Algorithm and the Attribute’s Relative Frequency

Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami

Abstract:

Clustering is a well known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can contain either categorical or numeric data, and each type of data has its own specific clustering algorithm: the k-means algorithm for numeric datasets and the k-modes algorithm for categorical datasets. A frequently encountered problem in data mining applications is the clustering of categorical datasets. One way to achieve clustering on categorical values is to transform the categorical attributes into numeric measures and directly apply the k-means algorithm instead of the k-modes. In this paper, we propose and evaluate an approach that transforms the categorical values into numeric ones using the relative frequency of each modality in the attributes. The proposed approach is compared with a previous method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are evaluated experimentally. The obtained results show that our proposed method outperforms the binary method in all cases.
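A small sketch of the proposed encoding, assuming pandas and scikit-learn: every categorical value is replaced by the relative frequency of that value within its attribute, after which ordinary k-means is applied. The toy data frame is illustrative only.

import pandas as pd
from sklearn.cluster import KMeans

df = pd.DataFrame({
    "colour": ["red", "blue", "red", "green", "blue", "red"],
    "shape":  ["round", "round", "square", "square", "round", "round"],
})
encoded = df.copy()
for col in df.columns:
    freq = df[col].value_counts(normalize=True)   # relative frequency of each modality
    encoded[col] = df[col].map(freq)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(encoded)
print(encoded.assign(cluster=labels))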

Keywords: Clustering, k-means, categorical datasets, pattern recognition, unsupervised learning, knowledge discovery.

288 Multi-objective Optimization with Fuzzy Based Ranking for TCSC Supplementary Controller to Improve Rotor Angle and Voltage Stability

Authors: S. Panda, S. C. Swain, A. K. Baliarsingh, A. K. Mohanty, C. Ardil

Abstract:

Many real-world optimization problems involve multiple conflicting objectives and the use of evolutionary algorithms to solve the problems has attracted much attention recently. This paper investigates the application of multi-objective optimization technique for the design of a Thyristor Controlled Series Compensator (TCSC)-based controller to enhance the performance of a power system. The design objective is to improve both rotor angle stability and system voltage profile. A Genetic Algorithm (GA) based solution technique is applied to generate a Pareto set of global optimal solutions to the given multi-objective optimisation problem. Further, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto solution set. Simulation results are presented to show the effectiveness and robustness of the proposed approach.
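A compact sketch of the fuzzy ranking step described above: each objective value on the Pareto front is mapped to a membership in [0, 1] (1 = best, 0 = worst), and the solution with the largest normalized membership sum is taken as the best compromise. The toy front values are assumptions, not results from the paper.

import numpy as np

def best_compromise(pareto_objectives):
    f = np.asarray(pareto_objectives, dtype=float)       # rows: solutions, cols: objectives (minimized)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    mu = (f_max - f) / np.where(f_max > f_min, f_max - f_min, 1.0)
    mu_norm = mu.sum(axis=1) / mu.sum()                   # normalized membership per solution
    return int(np.argmax(mu_norm)), mu_norm

front = [[0.12, 3.4], [0.20, 2.1], [0.35, 1.5]]           # hypothetical (angle dev., voltage dev.)
idx, memberships = best_compromise(front)
print("best compromise solution:", idx, memberships.round(3))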

Keywords: Multi-objective optimisation, thyristor controlled series compensator, power system stability, genetic algorithm, pareto solution set, fuzzy ranking.

287 Influence of Ambiguity Cluster on Quality Improvement in Image Compression

Authors: Safaa Al-Ali, Ahmad Shahin, Fadi Chakik

Abstract:

Image coding based on clustering provides immediate access to targeted features of interest in a high quality decoded image. This approach is useful for intelligent devices, as well as for multimedia content-based description standards. The result of image clustering cannot be precise in some positions, especially on pixels carrying edge information, which produces ambiguity among the clusters. Even with a good enhancement operator based on PDEs, the quality of the decoded image will strongly depend on the clustering process. In this paper, we introduce an ambiguity cluster in image coding to represent pixels with vagueness properties. The presence of such a cluster allows preserving details inherent to edges as well as to uncertain pixels. It is also very useful during the decoding phase, in which an anisotropic diffusion operator, such as Perona-Malik, enhances the quality of the restored image. This work also offers a comparative study to demonstrate the effectiveness of a fuzzy clustering technique in detecting the ambiguity cluster without losing much of the essential image information. Several experiments have been carried out to demonstrate the usefulness of the ambiguity concept in image compression. The coding results and the performance of the proposed algorithms are discussed in terms of the peak signal-to-noise ratio and the quantity of ambiguous pixels.

Keywords: Ambiguity Cluster, Anisotropic Diffusion, Fuzzy Clustering, Image Compression.

286 An Investigation into the Impact of Techno-Entrepreneurship Education on Self-Employment

Authors: F. Farzin

Abstract:

Research has shown that techno-entrepreneurship is economically significant. Therefore, it is suggested that teaching techno-entrepreneurship may be important because such programmes would prepare current and future generations of learners to recognise and act on high-technology opportunities. Education in techno-entrepreneurship may increase knowledge of how to start one's own enterprise and how to recognise technological opportunities for commercialisation, improve decision-making about starting a new venture, and influence decisions about capturing business opportunities and turning them into successful ventures. Universities can play a main role in connecting and networking techno-entrepreneurship students towards a cooperative attitude with real business practice and industry knowledge. To investigate whether education for techno-entrepreneurs really helps, this paper chooses a comparison of literature reviews as its method of research. After reviewing literature related to the impact of techno-entrepreneurship education on self-employment, six studies with aims and objectives similar to this paper were selected. These particular papers were chosen based on a keyword search and because their aims, objectives and gaps were close to the current research; all of them address the influence of techno-entrepreneurship education on self-employment and on students' intention to start new ventures. The findings showed that techno-entrepreneurship education may have an influence on students' intention and their future self-employment, but which courses should be covered and the duration of programmes need further investigation.

Keywords: Techno-entrepreneurship education, training, higher education, intention, self-employment.

285 Genetic Algorithm Optimization of the Economical, Ecological and Self-Consumption Impact of the Energy Production of a Single Building

Authors: Ludovic Favre, Thibaut M. Schafer, Jean-Luc Robyr, Elena-Lavinia Niederhäuser

Abstract:

This paper presents an optimization method based on a genetic algorithm for energy management inside buildings, developed in the frame of the Smart Living Lab (SLL) project in Fribourg (Switzerland). The algorithm optimizes the interaction between renewable energy production, storage systems and energy consumers. In comparison with standard algorithms, the innovative aspect of this project is the extension of smart regulation over three simultaneous criteria: energy self-consumption, the decrease of greenhouse gas emissions, and operating costs. The genetic algorithm approach was chosen due to the large number of optimization variables and the non-linearity of the optimization function. The optimization process also includes real-time data of the building as well as weather forecasts and users' habits. This information is used by a physical model of the building energy resources to predict future energy production and needs, to select the best energy strategy, and to combine production and storage of energy in order to guarantee the demand for electrical and thermal energy. The principle of operation of the algorithm, as well as a typical output example, is presented.

Keywords: Building’s energy, control system, energy management, modelling, genetic optimization algorithm, renewable energy, greenhouse gases, energy storage.

284 Multi-VSS Scheme by Shifting Random Grids

Authors: Joy Jo-Yi Chang, Justie Su-Tzu Juan

Abstract:

Visual secret sharing (VSS) was proposed by Naor and Shamir in 1995. Visual secret sharing schemes encode a secret image into two or more share images, and no single share image reveals any information about the secret image. When the shares are superimposed, the secret can be restored by human vision. However, traditional VSS has problems such as pixel expansion and high implementation cost, and it can encode only one secret image. Schemes for encrypting more secret images by random grids into two shares were proposed by Chen et al. in 2008, but when the restored secret images show much distortion, those schemes are limited in decoding; in other words, if there is too much distortion, not much information can be encrypted. If the distortion can be kept very small, more secret images can be encrypted. In this paper, four new algorithms based on Chang et al.'s scheme of 2010 are proposed. The first algorithm keeps distortion very small. The second algorithm distributes the distortion between the two restored secret images. The third algorithm achieves no distortion for special secret images. The fourth algorithm encrypts three secret images, which not only retains the advantages of VSS but also improves on the decoding problems.
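A minimal (2, 2) random-grid sketch for a single binary secret, included to illustrate the underlying mechanism rather than the paper's multi-secret algorithms: the first share is fully random, the second copies it where the secret is white and complements it where the secret is black, so stacking (logical OR) reveals the secret by contrast.

import numpy as np

def encrypt_random_grid(secret, seed=0):
    secret = np.asarray(secret, dtype=bool)            # True = black pixel
    rng = np.random.default_rng(seed)
    share1 = rng.integers(0, 2, size=secret.shape).astype(bool)
    share2 = np.where(secret, ~share1, share1)
    return share1, share2

secret = np.array([[1, 0, 1],
                   [0, 1, 0]])
s1, s2 = encrypt_random_grid(secret)
print((s1 | s2).astype(int))   # stacked result: fully black wherever the secret is black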

Keywords: Visual cryptography, visual secret sharing, random grids, multiple secret image sharing.

283 On-line Recognition of Isolated Gestures of Flight Deck Officers (FDO)

Authors: Deniz T. Sodiri, Venkat V S S Sastry

Abstract:

The paper presents an on-line recognition machine (RM) for continuous/isolated, dynamic and static gestures that arise in Flight Deck Officer (FDO) training. The RM is based on a generic pattern recognition framework. Gestures are represented as templates using summary statistics. The proposed recognition algorithm exploits temporal and spatial characteristics of gestures via dynamic programming and a Markovian process. The algorithm predicts the corresponding index of incremental input data in the templates in an on-line mode. Accumulated consistency in the sequence of predictions provides a similarity measurement (score) between the input data and the templates. The algorithm provides an intuitive mechanism for automatic detection of start/end frames of continuous gestures. In the present paper, we consider isolated gestures. The performance of the RM is evaluated using four datasets – artificial (W TTest), hand motion (Yang) and FDO (tracker, vision-based). The RM achieves comparable results which are in agreement with other on-line and off-line algorithms such as hidden Markov models (HMM) and dynamic time warping (DTW). The proposed algorithm has the additional advantage of providing timely feedback for training purposes.
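For reference, a baseline dynamic time warping distance between a gesture and a template, since the paper benchmarks its recognizer against DTW and HMM; this is the classic off-line formulation, not the authors' incremental on-line algorithm, and the toy trajectories are made up.

import numpy as np

def dtw_distance(a, b):
    a, b = np.atleast_2d(a), np.atleast_2d(b)          # frames x features
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

template = np.array([[0.0], [1.0], [2.0], [1.0]])
gesture  = np.array([[0.1], [0.9], [2.1], [2.0], [1.1]])
print(round(dtw_distance(template, gesture), 3))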

Keywords: On-line recognition algorithm, isolated dynamic/static gesture recognition, on-line Markovian/dynamic programming, training in virtual environments.

282 Modeling and Simulating Reaction-Diffusion Systems with State-Dependent Diffusion Coefficients

Authors: Paola Lecca, Lorenzo Dematte, Corrado Priami

Abstract:

The present models and simulation algorithms of intracellular stochastic kinetics are usually based on the premise that diffusion is so fast that the concentrations of all the involved species are homogeneous in space. However, recent experimental measurements of intracellular diffusion constants indicate that the assumption of a homogeneous, well-stirred cytosol is not necessarily valid even for small prokaryotic cells. In this work, a mathematical treatment of diffusion that can be incorporated into a stochastic algorithm simulating the dynamics of a reaction-diffusion system is presented. The movement of a molecule A from a region i to a region j of the space is represented as a first-order reaction Ai → Aj whose rate constant k depends on the diffusion coefficient. The diffusion coefficients are modeled as functions of the local concentration of the solutes, their intrinsic viscosities, their frictional coefficients and the temperature of the system. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step, an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the intrinsic reaction kinetics and diffusion dynamics. To demonstrate the method, simulation results for the reaction-diffusion system of chaperone-assisted protein folding in cytoplasm are shown.
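A Gillespie-style sketch of the core idea, with diffusion treated as a first-order hop Ai → Aj between regions; the concentration-dependent diffusion rate used in the paper is reduced here to a constant per-molecule hop rate for brevity, and no chemical reactions are included.

import numpy as np

rng = np.random.default_rng(1)
A = np.array([100, 0, 0])          # copies of A in three spatial regions
k_diff = 0.5                       # per-molecule hop rate between neighbouring regions
t, t_end = 0.0, 10.0

while t < t_end:
    # propensities of hops i -> i-1 and i -> i+1
    hops = [(i, j) for i in range(3) for j in (i - 1, i + 1) if 0 <= j < 3]
    props = np.array([k_diff * A[i] for i, _ in hops], dtype=float)
    total = props.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)             # waiting time to the next event
    i, j = hops[rng.choice(len(hops), p=props / total)]
    A[i] -= 1                                      # one molecule hops from region i to j
    A[j] += 1

print("final distribution:", A)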

Keywords: Reaction-diffusion systems, diffusion coefficient, stochastic simulation algorithm.

281 Proxisch: An Optimization Approach of Large-Scale Unstable Proxy Servers Scheduling

Authors: Xiaoming Jiang, Jinqiao Shi, Qingfeng Tan, Wentao Zhang, Xuebin Wang, Muqian Chen

Abstract:

Nowadays, big companies such as Google and Microsoft, which have adequate proxy servers, have implemented their web crawlers for a certain website in parallel. But due to the lack of expensive proxy servers, it is still a puzzle for researchers to crawl large amounts of information from a single website in parallel. In this case, it is a good choice for researchers to use free public proxy servers which are crawled from the Internet. In order to improve the efficiency of a web crawler, the following two issues should be considered primarily: (1) tasks may fail owing to the instability of free proxy servers; (2) a proxy server will be blocked if it visits a single website frequently. In this paper, we propose Proxisch, an optimization approach for large-scale unstable proxy server scheduling, which allows anyone to run a web crawler efficiently at extremely low cost. Proxisch is designed to work efficiently by making maximum use of reliable proxy servers. To solve the second problem, it establishes a frequency control mechanism which can keep the visiting frequency of any chosen proxy server below the website's limit. The results show that our approach performs better than other scheduling algorithms.
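A sketch of the frequency-control idea using a priority queue, assuming a fixed per-proxy minimum visit interval and a simple failure back-off; the class, interval and penalty values are illustrative assumptions, not Proxisch's actual scheduling policy.

import heapq
import time

class ProxyScheduler:
    def __init__(self, proxies, min_interval=5.0):
        self.min_interval = min_interval                      # seconds between visits per proxy
        self.heap = [(0.0, p) for p in proxies]               # (next_allowed_time, proxy)
        heapq.heapify(self.heap)

    def acquire(self):
        next_time, proxy = heapq.heappop(self.heap)
        wait = max(0.0, next_time - time.time())
        if wait:
            time.sleep(wait)                                  # respect the site's rate limit
        return proxy

    def release(self, proxy, ok=True):
        penalty = 0.0 if ok else 4 * self.min_interval        # back off unreliable proxies
        heapq.heappush(self.heap, (time.time() + self.min_interval + penalty, proxy))

sched = ProxyScheduler(["10.0.0.1:8080", "10.0.0.2:8080"])    # hypothetical proxy addresses
p = sched.acquire()
sched.release(p, ok=True)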

Keywords: Proxy server, priority queue, optimization approach, distributed web crawling.

280 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic

Authors: Diogen Babuc

Abstract:

The motivating question of this work is how many devote themselves to discovery in a world of science where much is discerned and revealed, but at the same time much remains unknown. The insightful elements of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming a specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between the values a and b = a+3. The obtained value is stored in a variable so that it remains constant during the run of the algorithm. In correlation with the given key, the string is divided into several groups of substrings, and each substring has a length of k characters. The next step involves encoding each substring from the list of existing substrings. Encoding is performed on the basis of the Caesar algorithm, i.e., shifting by k characters; however, k is incremented by 1 when moving to the next substring in the list. When the value of k becomes greater than b + 1, it returns to its initial value. The algorithm is executed, following the same procedure, until the last substring in the list is traversed. Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The character x is not used when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it is questionable whether it works better than other methods from the point of view of execution time and storage space.
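A sketch following the description above: the string is cut into blocks of length k, each block is Caesar-shifted, the shift grows by one per block, and it wraps back once it exceeds b + 1. The alphabet, the value of a, and the padding behaviour are guesses, not the authors' exact implementation.

import random
import string

ALPHABET = string.ascii_lowercase

def shift_char(c, s):
    if c not in ALPHABET:
        return c
    return ALPHABET[(ALPHABET.index(c) + s) % len(ALPHABET)]

def encipher(text, a=3):
    b = a + 3
    k = random.randint(a, b)                   # block length, constant for the whole run
    blocks = [text[i:i + k] for i in range(0, len(text), k)]
    out, shift = [], k
    for block in blocks:
        out.append("".join(shift_char(c, shift) for c in block))
        shift += 1                             # next block uses a larger shift
        if shift > b + 1:
            shift = k                          # wrap back to the initial key
    return "".join(out), k

ciphertext, key = encipher("latindjokovic")
print(ciphertext, key)

Deciphering follows the same walk over the blocks with the shifts negated.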

Keywords: Ciphering and deciphering, Authentic Algorithm, Polyalphabetic Cipher, Random Key, methods comparison.

279 Sensor Network Based Emergency Response and Navigation Support Architecture

Authors: Dilusha Weeraddana, Ashanie Gunathillake, Samiru Gayan

Abstract:

In an emergency, combining a Wireless Sensor Network's data with the knowledge gathered from various other information sources and navigation algorithms could help safely guide people to a building exit while avoiding risky areas. This paper presents an emergency response and navigation support architecture for data gathering, knowledge manipulation, and navigational support in an emergency situation. In the normal state, the system monitors the environment. When an emergency event is detected, the system sends messages to first responders and immediately distinguishes risky areas from safe areas to establish escape paths. The main functionalities of the system include gathering data from a wireless sensor network deployed in a multi-story indoor environment, processing it with information available in a knowledge base, and sharing the decisions made with first responders and people in the building. The proposed architecture acts to reduce the risk of losing human lives by evacuating people much faster and with the least congestion in an emergency environment.

Keywords: Emergency response, Firefighters, Navigation, Wireless sensor network.

278 The Effects of Negative Electronic Word-of-Mouth and Webcare on Thai Online Consumer Behavior

Authors: Pongsatorn Tantrabundit, Lersak Phothong, Ong-art Chanprasitchai

Abstract:

The emergence of the Internet has extended traditional Word-of-Mouth (WOM) to a new form called Electronic Word-of-Mouth (eWOM). Unlike traditional WOM, eWOM is able to present information in various ways by applying different components, and each eWOM component generates different effects on online consumer behavior. This research investigates the effects of Webcare (responding messages) from product/service providers on negative eWOM by applying two types of products (search and experience). The proposed conceptual model was developed based on a combination of the stages in the consumer decision-making process, the theory of reasoned action (TRA), the theory of planned behavior (TPB), the technology acceptance model (TAM), the information integration theory and the elaboration likelihood model. The methodology techniques used in this study included multivariate analysis of variance (MANOVA) and multiple regression analysis. The results suggest that Webcare slightly increases Thai online consumers' perceptions of eWOM trustworthiness, information diagnosticity and quality. For negative eWOM, we also found that perceived eWOM trustworthiness, perceived eWOM diagnosticity and quality have a positive relationship with eWOM influence, whereas perceived valence has a negative relationship with eWOM influence among Thai online consumers.

Keywords:

277 Riemannian Manifolds for Brain Extraction on Multi-modal Resonance Magnetic Images

Authors: Mohamed Gouskir, Belaid Bouikhalene, Hicham Aissaoui, Benachir Elhadadi

Abstract:

In this paper, we present an application of Riemannian geometry for processing non-Euclidean image data. We consider the image as residing in a Riemannian manifold in order to develop a new method for brain edge detection and brain extraction. Automating this process is a challenge due to the high diversity in the appearance of brain tissue among different patients and sequences. The main contribution of this paper is the use of an edge-based anisotropic diffusion tensor for the segmentation task, integrating both image edge geometry and the Riemannian manifold (geodesic, metric tensor) to regularize the converging contour and extract complex anatomical structures. We check the accuracy of the segmentation results on simulated brain MRI scans of single T1-weighted, T2-weighted and Proton Density sequences. We validate our approach using two different databases: the BrainWeb database and the MRI Multiple Sclerosis Database (MRI MS DB). We have compared our approach, qualitatively and quantitatively, with well-known brain extraction algorithms. We show that applying Riemannian manifolds to medical image analysis improves brain extraction results in real time, outperforming standard techniques.

Keywords: Riemannian manifolds, Riemannian Tensor, Brain Segmentation, Non-Euclidean data, Brain Extraction.

276 Robotic End-Effector Impedance Control without Expensive Torque/Force Sensor

Authors: Shiuh-Jer Huang, Yu-Chi Liu, Su-Hai Hsiang

Abstract:

A novel low-cost impedance control structure is proposed for monitoring the contact force between the end-effector and the environment without installing an expensive force/torque sensor. Theoretically, the end-effector contact force can be estimated from the superposition of each joint control torque. There is a nonlinear matrix mapping between each joint motor control input and the end-effector actuating force/torque vector. This new force control structure can be implemented based on this estimated mapping matrix. First, the robot end-effector is moved to specified positions, then the force controller is actuated based on the Hall-sensor current feedback of each joint motor. The model-free fuzzy sliding mode control (FSMC) strategy is employed to design the position and force controllers, respectively. All the hardware circuits and software control programs are implemented on an Altera Nios II embedded development kit to constitute an embedded system structure for a retrofitted Mitsubishi 5-DOF robot. Experimental results show that the PI and FSMC force control algorithms can achieve a reasonable contact force monitoring objective based on this hardware control structure.
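One common way to express the torque-to-wrench mapping mentioned above is the static relation tau = J(q)^T F, which can be inverted to estimate the contact force from joint torques. The sketch below assumes that relation and uses a made-up planar Jacobian and torque vector; in the paper the joint torques are inferred from each motor's Hall-sensor current feedback.

import numpy as np

def estimate_contact_force(jacobian, joint_torques):
    # tau = J(q)^T · F   =>   F ≈ pinv(J(q)^T) · tau
    return np.linalg.pinv(jacobian.T) @ joint_torques

J = np.array([[0.30, 0.15, 0.05],      # hypothetical 2x3 planar Jacobian [m]
              [0.00, 0.25, 0.10]])
tau = np.array([1.2, 0.8, 0.3])        # joint torques inferred from motor currents [N·m]
force = estimate_contact_force(J, tau)
print("estimated end-effector force [N]:", force.round(2))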

Keywords: Robot, impedance control, fuzzy sliding mode control, contact force estimator.

275 Dominating Set Algorithm and Trust Evaluation Scheme for Secured Cluster Formation and Data Transferring

Authors: Y. Harold Robinson, M. Rajaram, E. Golden Julie, S. Balaji

Abstract:

This paper describes a proficient way of choosing cluster heads based on a dominating set algorithm in a wireless sensor network (WSN). The algorithm overcomes energy deterioration problems through this cluster head selection process. Clustering algorithms such as LEACH, EEHC and HEED enhance scalability in WSNs; the dominating set algorithm keeps the first node alive longer than these previously used protocols. As the dominating set of cluster heads is directly connected to each node, the energy of the network is saved by eliminating the intermediate nodes in the WSN. Security and trust are pivotal in network messaging. Each cluster head is secured with a unique key, and a member node can connect to the cluster head only if it is secured as well. The secured trust model provides security for data transmission in the dominating set network with the group key. The concept can be extended by adding a mobile sink for each cluster, or for a number of clusters, to transmit data or messages between cluster heads and the base station. Data security is kept high and data loss can be prevented. The simulation demonstrates the concept of choosing cluster heads by the dominating set algorithm and trust evaluation using DSTE. The research done is rationalized.
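A greedy sketch of dominating-set-based cluster-head selection: repeatedly pick the node covering the most still-uncovered neighbours until every node is a head or adjacent to one. This yields a (not necessarily minimum) dominating set and ignores the paper's energy and trust criteria; the toy connectivity graph is an assumption.

def dominating_set(adjacency):
    uncovered = set(adjacency)
    heads = []
    while uncovered:
        best = max(adjacency, key=lambda n: len(({n} | adjacency[n]) & uncovered))
        heads.append(best)
        uncovered -= {best} | adjacency[best]      # best and its neighbours are now covered
    return heads

graph = {                       # toy WSN connectivity graph
    "A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B"},
    "D": {"B", "E"}, "E": {"D", "F"}, "F": {"E"},
}
print("cluster heads:", dominating_set(graph))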

Keywords: Wireless Sensor Networks, LEACH, EEHC, HEED, DSTE.

274 Integrating Fast Karnough Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new fast simplification method is presented. The method realizes Karnough maps with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion; this is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components. This is done by using modular neural networks (MNNs) that divide the input space into several homogeneous regions. This approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
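The core speed-up is cross-correlation computed through the FFT instead of by direct sliding. The sketch below applies that idea to flag positions where a 2x2 group of ones sits in a small binary map; the map and template sizes are illustrative, not the paper's experiments.

import numpy as np

def xcorr_fft(grid, template):
    shape = tuple(g + t - 1 for g, t in zip(grid.shape, template.shape))
    G = np.fft.rfft2(grid, shape)
    T = np.fft.rfft2(template[::-1, ::-1], shape)   # flipping turns convolution into correlation
    return np.fft.irfft2(G * T, shape)

grid = np.array([[1, 1, 0, 1],
                 [1, 1, 0, 1],
                 [0, 0, 1, 1],
                 [0, 1, 1, 0]])
template = np.ones((2, 2))
corr = xcorr_fft(grid, template)
hits = np.argwhere(np.isclose(corr, template.sum()))  # full-match positions (bottom-right corners)
print(hits)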

Keywords: Boolean Functions, Simplification, Karnough Map, Implementation of Logic Functions, Modular Neural Networks.

273 Optimization of Reaction Rate Parameters in Modeling of Heavy Paraffins Dehydrogenation

Authors: Leila Vafajoo, Farhad Khorasheh, Mehrnoosh Hamzezadeh Nakhjavani, Moslem Fattahi

Abstract:

In the present study, a procedure was developed to determine the optimum reaction rate constants in generalized Arrhenius form, optimized through the Nelder-Mead method. For this purpose, a comprehensive mathematical model of a fixed bed reactor for the dehydrogenation of heavy paraffins over a Pt–Sn/Al2O3 catalyst was developed. Utilizing appropriate kinetic rate expressions for the main dehydrogenation reaction as well as for side reactions and catalyst deactivation, a detailed model for the radial flow reactor was obtained. The reactor model is composed of a set of partial differential equations (PDE), ordinary differential equations (ODE) and algebraic equations, all of which were solved numerically to determine variations in the components' concentrations, in terms of mole percent, as functions of time and reactor radius. It was demonstrated that the most significant variations were observed at the entrance of the bed, and the initial olefin production obtained was rather high. The aforementioned method utilized a direct-search optimization algorithm along with the numerical solution of the governing differential equations. The usefulness and validity of the method were demonstrated by comparing the predicted values of the kinetic constants obtained using the proposed method with a series of experimental values reported in the literature for different systems.
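A stripped-down sketch of the parameter-estimation step: fitting the Arrhenius parameters (A, Ea) of a single rate constant to synthetic rate data with the Nelder-Mead simplex method from SciPy. The temperatures, "measured" values and starting point are assumptions, and the paper's full model couples this search with the numerical reactor solution.

import numpy as np
from scipy.optimize import minimize

R = 8.314                                   # J/(mol·K)
T = np.array([700.0, 750.0, 800.0, 850.0])  # K
k_obs = 2.0e6 * np.exp(-120e3 / (R * T))    # synthetic "measured" rate constants

def objective(p):
    log_A, Ea = p
    k_model = np.exp(log_A) * np.exp(-Ea / (R * T))
    return np.sum((np.log(k_model) - np.log(k_obs)) ** 2)

result = minimize(objective, x0=[10.0, 1.0e5], method="Nelder-Mead",
                  options={"maxiter": 5000})
print("A =", np.exp(result.x[0]), " Ea [J/mol] =", result.x[1])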

Keywords: Dehydrogenation, Pt-Sn/Al2O3 Catalyst, Modeling, Nelder-Mead, Optimization

272 An Optimal Load Shedding Approach for Distribution Networks with DGs considering Capacity Deficiency Modelling of Bulked Power Supply

Authors: A. R. Malekpour, A.R. Seifi

Abstract:

This paper discusses a genetic algorithm (GA) based optimal load shedding that can be applied to electrical distribution networks with and without dispersed generators (DG). The proposed method can also consider constant and variable capacity deficiency caused by unscheduled outages in the bulked generation and transmission system of the bulked power supply. The GA is employed to search for the optimal load shedding strategy in distribution networks considering DGs for two cases of constant and variable modelling of the bulked power supply of distribution networks. Electrical power distribution systems have a radial network and unidirectional power flows; with the advent of dispersed generation, the electrical distribution system has a locally looped network and bidirectional power flows. Therefore, DG installed in electrical distribution systems can cause operational problems and impact existing operational schemes. The introduction of DGs in electrical distribution systems has raised many new issues at the operational and planning levels, and load shedding, as one of these operational issues, is no exception. The objective is to minimize the sum of curtailed load and system losses within the framework of system operational and security constraints. The proposed method is tested on a radial distribution system with 33 load points for more practical applications.

Keywords: DG, Load shedding, Optimization, Capacity Deficiency Modelling.

271 Cultivating Individuality and Equality in Education: Ideas on Respecting Dimensions of Diversity within the Classroom

Authors: Melissa C. LaDuke

Abstract:

This systematic literature review sought to explore the dimensions of diversity that can affect classroom learning. This review is significant as it can aid educators in reaching more of their diverse student population and creating supportive classrooms for teachers and students. For this study, peer-reviewed articles were found and compiled using Google Scholar. Key terms used in the search include student individuality, classroom equality, student development, teacher development, and teacher individuality. Relevant educational standards such as Common Core and Partnership for the 21st Century were also included as part of this review. Student and teacher individuality and equality is discussed as well as methods to grow both within educational settings. Embracing student and teacher individuality was found to be key as it may affect how each person interacts with given information. One method to grow individuality and equality in educational settings included drafting and employing revised teaching standards which include various Common Core and US State standards. Another was to use educational theories such as constructivism, cognitive learning, and Experiential Learning Theory. However, barriers to growing individuality, such as not acknowledging differences in a population’s dimensions of diversity, still exist. Studies found preserving the dimensions of diversity owned by both teachers and students yielded more positive and beneficial classroom experiences.

Keywords: Classroom equality, student development, student individuality, teacher development, teacher individuality.

270 Nine-Level Shunt Active Power Filter Associated with a Photovoltaic Array Coupled to the Electrical Distribution Network

Authors: Zahzouh Zoubir, Bouzaouit Azzeddine, Gahgah Mounir

Abstract:

The use of more and more electronic power switches with nonlinear behavior generates non-sinusoidal currents in distribution networks, which causes damage to domestic and industrial equipment. The multi-level shunt active power filter is subsequently shown to be an adequate solution to the problem raised. Nevertheless, the difficulty of adjusting the active filter's DC supply voltage requires another technology to ensure it. In this article, a photovoltaic generator is associated with the DC bus power terminals of the active filter. The proposed system consists of a field of solar panels, three multi-level voltage inverters connected to the power grid, and a nonlinear load consisting of a six-diode rectifier bridge supplying a resistive-inductive load. Current control techniques for active and reactive power are used to compensate for both harmonic currents and reactive power as well as to inject active solar power into the distribution network. A maximum power point search algorithm of the Perturb and Observe type is applied. Simulation results for the proposed system under the MATLAB/Simulink environment show that the control commands ensure solar power injection into the network, harmonic current compensation and power factor correction.
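A minimal Perturb and Observe step to illustrate the MPPT logic: nudge the operating voltage, keep the perturbation direction if power increased, and reverse it otherwise. The PV power curve and step size below are stand-ins, not the paper's simulated array.

def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    if p >= p_prev:
        direction = 1 if v > v_prev else -1      # last move helped: keep going
    else:
        direction = -1 if v > v_prev else 1      # power dropped: reverse direction
    return v + direction * step

def pv_power(v):
    # hypothetical PV curve with a maximum near 30 V, used only to exercise the loop
    return max(0.0, v * (8.0 - 0.003 * v ** 2))

v_prev, v = 20.0, 20.5
p_prev = pv_power(v_prev)
for _ in range(60):
    p = pv_power(v)
    v_new = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_new
print(f"operating point settled near {v:.1f} V")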

Keywords: MPPT, active power filter, PV array, perturb and observe algorithm, PWM-control.

269 Integrating Fast Karnough Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new fast simplification method is presented. The method realizes Karnough maps with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion; this is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components. This is done by using modular neural networks (MNNs) that divide the input space into several homogeneous regions. This approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.

Keywords: Boolean functions, simplification, Karnough map, implementation of logic functions, modular neural networks.

268 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

DNA barcodes provide a good source of the information needed to classify living species. The classification problem has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be simultaneously compared using Multiple Sequence Alignment, which is known to be NP-complete; however, all the methods used are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. In fact, our method avoids the complex problem of form and structure in different classes of organisms. The empirical data and their classification performance are compared with other methods. In this study, we present our system, which consists of three phases. The first phase, transformation, is composed of three sub-steps: Electron-Ion Interaction Pseudopotential (EIIP) coding of the DNA barcodes, the Fourier transform, and power spectrum signal processing. The second phase, approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). Finally, the third phase, classification of the DNA barcodes, is realized by applying a hierarchical classification algorithm.
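A sketch of the transformation phase only: a DNA barcode is mapped to its EIIP numerical sequence, and the normalized power spectrum of that signal would then be the feature vector handed to the wavelet-network stage. The EIIP values used are the commonly published ones, and the toy fragment is not a real barcode.

import numpy as np

EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(sequence):
    signal = np.array([EIIP[b] for b in sequence.upper() if b in EIIP])
    signal = signal - signal.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum / spectrum.sum()         # normalized power spectrum

barcode = "ACTGGTACCATTGGAACT"               # toy fragment
features = power_spectrum(barcode)
print(features.round(4))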

Keywords: DNA Barcode, Electron-Ion Interaction Pseudopotential, Multi Library Wavelet Neural Networks.

267 Relevance Feedback within CBIR Systems

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

We present here the results of a comparative study of some techniques available in the literature related to the relevance feedback mechanism in the case of short-term learning. Only one of the methods considered here, the K-nearest neighbors (KNN) algorithm, belongs to the data mining field; the rest of the methods relate purely to the information retrieval field and fall under the purview of the following three major axes: query shifting, feature weighting, and optimization of the parameters of the similarity metric. As a contribution, and in addition to the comparative purpose, we propose a new version of the KNN algorithm, referred to as incremental KNN, which is distinct from the original version in the sense that, besides the influence of the seeds, the rating of the actual target image is also influenced by the images already rated. The results presented here have been obtained after experiments conducted on the Wang database for one iteration and utilizing color moments in the RGB space; this compact descriptor is adequate for the efficiency required in interactive systems. The results obtained allow us to claim that the proposed algorithm achieves good results; it even outperforms a wide range of techniques available in the literature.
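For concreteness, a sketch of the color-moments descriptor used as the compact image signature: the mean, standard deviation and skewness of each RGB channel, giving nine values per image. The random image below is only a stand-in for a Wang-database image.

import numpy as np

def color_moments(image):
    feats = []
    for c in range(3):
        channel = image[..., c].astype(float).ravel()
        mean = channel.mean()
        std = channel.std()
        skew = np.cbrt(((channel - mean) ** 3).mean())   # signed cube root of third central moment
        feats.extend([mean, std, skew])
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3))    # stand-in for a real RGB image
print(color_moments(img).round(2))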

Keywords: CBIR, Category Search, Relevance Feedback (RFB), Query Point Movement, Standard Rocchio’s Formula, Adaptive Shifting Query, Feature Weighting, Optimization of the Parameters of Similarity Metric, Original KNN, Incremental KNN.

266 Predicting Application Layer DDoS Attacks Using Machine Learning Algorithms

Authors: S. Umarani, D. Sharmila

Abstract:

A Distributed Denial of Service (DDoS) attack is a major threat to cyber security. It originates from the network layer or the application layer of compromised/attacker systems which are connected to the network. The impact of this attack ranges from simple inconvenience in using a particular service to major failures at the targeted server. When there is heavy traffic flow to a target server, it is necessary to distinguish legitimate accesses from attacks. In this paper, a novel method is proposed to detect DDoS attacks from the traces of traffic flow. An access matrix is created from the traces. As the access matrix is multi-dimensional, Principal Component Analysis (PCA) is used to reduce the attributes used for detection. Two classifiers, Naive Bayes and K-nearest neighbor, are used to classify the traffic as normal or abnormal. The performance of the classifiers with PCA-selected attributes and with the actual attributes of the access matrix is compared using the detection rate and False Positive Rate (FPR).
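A sketch of the detection pipeline on a synthetic "access matrix", assuming scikit-learn: PCA reduces the attributes, then Naive Bayes and K-nearest neighbors label traffic as normal (0) or attack (1). The synthetic data, component count and neighbor count are assumptions, not the paper's dataset or settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 20))
attack = rng.normal(loc=2.0, scale=1.5, size=(500, 20))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pca = PCA(n_components=5).fit(X_tr)
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

for name, clf in [("NaiveBayes", GaussianNB()), ("KNN", KNeighborsClassifier(5))]:
    acc = clf.fit(X_tr_p, y_tr).score(X_te_p, y_te)
    print(f"{name}: detection accuracy = {acc:.3f}")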

Keywords: Distributed Denial of Service (DDoS) attack, Application layer DDoS, DDoS Detection, K-Nearest neighbor classifier, Naive Bayes Classifier, Principal Component Analysis.
