Search results for: Recognition based graphical user authentication
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12297

9897 Recursive Algorithms for Image Segmentation Based on a Discriminant Criterion

Authors: Bing-Fei Wu, Yen-Lin Chen, Chung-Cheng Chiu

Abstract:

In this study, a new criterion for determining the number of classes into which an image should be segmented is proposed. This criterion is based on discriminant analysis, which measures the separability among the segmented classes of pixels. Based on the new discriminant criterion, two algorithms for recursively segmenting the image into the determined number of classes are proposed. The proposed methods can automatically and correctly segment objects under various illuminations into separate images for further processing. Experiments on the extraction of text strings from complex document images demonstrate the effectiveness of the proposed methods.
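
The abstract does not spell out the exact discriminant criterion or stopping rule, so the following is only a minimal sketch under assumptions: an Otsu-style between-class variance as the separability measure, and a hypothetical cutoff `min_criterion` for deciding when to stop splitting a pixel class.

```python
import numpy as np

def discriminant_criterion(pixels, threshold):
    """Otsu-style between-class variance of the 2-class split at `threshold`.
    A larger value means the two pixel classes are better separated."""
    lo = pixels[pixels <= threshold].astype(float)
    hi = pixels[pixels > threshold].astype(float)
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / pixels.size, hi.size / pixels.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

def best_split(pixels):
    """Pick the grey level that maximizes the discriminant criterion."""
    return max((discriminant_criterion(pixels, t), t) for t in range(1, 255))

def recursive_segment(pixels, min_criterion=100.0, depth=0, max_depth=3):
    """Keep splitting a pixel class while separability stays above the cutoff,
    so the number of classes is decided automatically."""
    score, t = best_split(pixels)
    if score < min_criterion or depth >= max_depth:
        return [pixels]
    return (recursive_segment(pixels[pixels <= t], min_criterion, depth + 1, max_depth)
            + recursive_segment(pixels[pixels > t], min_criterion, depth + 1, max_depth))

# Toy usage: segment the grey-level population of a random 8-bit "image"
image = np.random.randint(0, 256, size=(64, 64))
classes = recursive_segment(image.ravel())
print(len(classes), [c.size for c in classes])
```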

Keywords: image segmentation, multilevel thresholding, clustering, discriminant analysis

9896 Complementary Energy Path Adiabatic Logic based Full Adder Circuit

Authors: Shipra Upadhyay, R. K. Nagaria, R. A. Mishra

Abstract:

In this paper, we present the design and experimental evaluation of a complementary energy path adiabatic logic (CEPAL) based 1-bit full adder circuit. A simulative investigation of the proposed full adder has been done using the VIRTUOSO SPECTRE simulator of Cadence in 0.18 μm UMC technology, and its performance has been compared with the conventional CMOS full adder circuit. The CEPAL-based full adder circuit exhibits an energy saving of 70% relative to the conventional CMOS full adder circuit at a 100 MHz frequency and 1.8 V operating voltage.

Keywords: Adiabatic, CEPAL, full adder, power clock

9895 Efficient Frontier - Comparing Different Volatility Estimators

Authors: Tea Poklepović, Zdravka Aljinović, Mario Matković

Abstract:

Modern Portfolio Theory (MPT), according to Markowitz, states that investors form mean-variance efficient portfolios which maximize their utility. Markowitz proposed the standard deviation as a simple measure of portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator based on intraday data and compares the three resulting efficient frontiers on the Croatian stock market. The results show that the range-based volatility estimator outperforms both the mean-variance and the lower semi-variance models.
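
The abstract does not name the specific range-based estimator, so the sketch below uses Parkinson's high-low estimator as one common intraday-range choice; the price series are hypothetical and for illustration only.

```python
import numpy as np

def close_to_close_vol(close):
    """Classical volatility: standard deviation of log returns."""
    r = np.diff(np.log(close))
    return r.std(ddof=1)

def lower_semivariance(close):
    """Downside risk: average squared deviation below the mean return."""
    r = np.diff(np.log(close))
    downside = np.minimum(r - r.mean(), 0.0)
    return np.mean(downside ** 2)

def parkinson_vol(high, low):
    """Parkinson's range-based estimator built from intraday high/low prices."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    return np.sqrt(np.mean(hl ** 2) / (4.0 * np.log(2.0)))

# Toy daily data (hypothetical prices, for illustration only)
close = np.array([100.0, 101.2, 100.5, 102.3, 101.9])
high  = np.array([101.0, 102.0, 101.4, 103.0, 102.5])
low   = np.array([ 99.5, 100.4,  99.9, 101.1, 101.0])
print(close_to_close_vol(close), lower_semivariance(close), parkinson_vol(high, low))
```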

Keywords: Variance, lower semi-variance, range-based volatility.

9894 Measuring the Development Level of Chinese Regional Service Industry: An Empirical Analysis based on Entropy Weight and TOPSIS

Authors: Nan Li, Ying Wang

Abstract:

Using the entropy weight and TOPSIS methods, this paper carries out a comprehensive evaluation of the development level of the Chinese regional service industry. Firstly, based on existing research results, an evaluation index system is constructed from the scale of development, the industrial structure and the economic benefits. An evaluation model is then built on entropy weight and TOPSIS, and an empirical analysis is conducted on the development level of service industries in 31 Chinese provinces from 2006 to 2009 along the two dimensions of time series and cross-section, which provides a new approach for assessing the regional service industry. Furthermore, the 31 provinces are classified into four categories based on the evaluation results, and a deeper analysis of the evaluation results is carried out.
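
A minimal sketch of the entropy-weight and TOPSIS steps described above; the indicator matrix and the assumption that all indicators are benefit-type are illustrative, not taken from the paper.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: indicators with greater dispersion across provinces
    receive larger weights. X is a (provinces x indicators) matrix, all positive."""
    P = X / X.sum(axis=0)
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    d = 1.0 - (-k * plogp.sum(axis=0))            # degree of diversification
    return d / d.sum()

def topsis_scores(X, w):
    """TOPSIS closeness of each province to the ideal solution (benefit criteria)."""
    V = (X / np.sqrt((X ** 2).sum(axis=0))) * w   # weighted, vector-normalised matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

# Toy data: 4 provinces x 3 indicators (scale, structure, benefit) - hypothetical values
X = np.array([[120.0, 0.42, 8.1],
              [ 95.0, 0.35, 6.4],
              [310.0, 0.51, 9.7],
              [ 60.0, 0.28, 5.2]])
scores = topsis_scores(X, entropy_weights(X))
print(scores.round(3))   # higher score = higher development level
```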

Keywords: Chinese regional service industry, Development level, Entropy weight, TOPSIS Evaluation Method

9893 Knowledge Management Model for Modern Retail Business: A Conceptual Framework

Authors: M. W. Yip, H. H. Ng, S. Din, N. Abu Bakar

Abstract:

This paper reviews the relationships between Knowledge Management (KM) activities and their perceived benefits in knowledge-based organisations. KM activities include knowledge identification, knowledge acquisition, knowledge application, knowledge sharing, knowledge creation and knowledge preservation. The perceived benefits of KM are fast customer responsiveness, operational excellence and high innovative intensity. Based on the above review, a conceptual framework for KM implementation in retail business organisations is proposed. Finally, the paper notes some limitations of the framework and, based on these, suggests directions for future research.

Keywords: Knowledge Management, Knowledge Management Activities, Retail Business, Knowledge Economy.

9892 Detection and Classification of Faults on Parallel Transmission Lines Using Wavelet Transform and Neural Network

Authors: V. S. Kale, S. R. Bhide, P. P. Bedekar, G. V. K. Mohan

Abstract:

The protection of parallel transmission lines has been a challenging task due to mutual coupling between the adjacent circuits of the line. This paper presents a novel scheme for the detection and classification of faults on parallel transmission lines. The proposed approach uses a combination of wavelet transform and neural network to solve the problem. While the wavelet transform is a powerful mathematical tool which can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying different patterns of the associated signals. The proposed algorithm consists of time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the type of fault. MATLAB/Simulink is used to generate fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of fault and varying the fault resistance, fault location and fault inception time on a given power system model. The simulation results show that the proposed scheme for fault diagnosis is able to classify all the faults on the parallel transmission line rapidly and correctly.
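
The abstract does not state the mother wavelet or the exact feature set fed to the network; the sketch below assumes db4 detail-band energies (via PyWavelets) as wavelet features that a neural network could then classify.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """Decompose a fault-current signal with the discrete wavelet transform and
    return the energy of each detail band as a feature vector for the ANN."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
    details = coeffs[1:]
    return np.array([np.sum(c ** 2) for c in details])

# Hypothetical example: a noisy 50 Hz current waveform sampled at 1 kHz
t = np.linspace(0, 0.2, 200)
current = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)
features = wavelet_energy_features(current)
print(features)  # one energy value per detail level; an ANN would classify these patterns
```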

Keywords: Artificial neural network, fault detection and classification, parallel transmission lines, wavelet transform.

9891 Examining the Performance of Three Multiobjective Evolutionary Algorithms Based on Benchmarking Problems

Authors: Konstantinos Metaxiotis, Konstantinos Liagkouras

Abstract:

The objective of this study is to examine the performance of three well-known multiobjective evolutionary algorithms for solving optimization problems. The first algorithm is the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), the second is the Strength Pareto Evolutionary Algorithm 2 (SPEA-2), and the third is the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D). The examined multiobjective algorithms are analyzed and tested on the ZDT set of test functions using three performance metrics. The results indicate that NSGA-II performs better than the other two algorithms with respect to the three performance metrics.
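
The abstract does not list which ZDT instances or performance metrics were used; as one representative member of the suite, a sketch of the ZDT1 test problem is shown below.

```python
import numpy as np

def zdt1(x):
    """ZDT1 benchmark: two conflicting objectives over x in [0, 1]^n.
    The true Pareto front is f2 = 1 - sqrt(f1), attained when x[1:] = 0."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].mean()
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

print(zdt1(np.array([0.25, 0.0, 0.0, 0.0])))  # a point on the Pareto front: (0.25, 0.5)
```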

Keywords: MOEAs, Multiobjective optimization, ZDT test functions, performance metrics.

9890 Data Mining Using Learning Automata

Authors: M. R. Aghaebrahimi, S. H. Zahiri, M. Amiri

Abstract:

In this paper, a data miner based on learning automata, called LA-miner, is proposed. The LA-miner extracts classification rules from data sets automatically. The proposed algorithm is established on function optimization using learning automata. The experimental results on three benchmarks indicate that the performance of the proposed LA-miner is comparable with (and sometimes better than) that of Ant-miner (a data mining algorithm based on the Ant Colony Optimization algorithm) and CN2 (a well-known data mining algorithm for classification).
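
The rule-construction details of LA-miner are not given in the abstract; the sketch below only illustrates the underlying learning-automaton idea, using a linear reward-inaction (L_RI) probability update in a hypothetical stationary environment.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_lri(n_actions, reward_prob, alpha=0.05, steps=5000):
    """Linear reward-inaction (L_RI) automaton: the probability of the chosen
    action is reinforced only when the environment returns a reward."""
    p = np.full(n_actions, 1.0 / n_actions)
    for _ in range(steps):
        a = rng.choice(n_actions, p=p)
        rewarded = rng.random() < reward_prob[a]
        if rewarded:                      # move probability mass toward action a
            p = (1.0 - alpha) * p
            p[a] += alpha
    return p

# Hypothetical environment: action 2 is rewarded most often, so its probability should dominate
print(run_lri(4, reward_prob=[0.2, 0.4, 0.8, 0.3]))
```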

Keywords: Data mining, Learning automata, Classification rules, Knowledge discovery.

9889 Small Signal Stability Assessment Employing PSO Based TCSC Controller with Comparison to GA Based Design

Authors: D. Mondal, A. Chakrabarti, A. Sengupta

Abstract:

This paper aims to select the optimal location and setting parameters of a TCSC (Thyristor Controlled Series Compensator) controller using Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA) to mitigate small-signal oscillations in a multimachine power system. Though Power System Stabilizers (PSSs) are the prime choice for this issue, installation of a FACTS device has been suggested here in order to achieve appreciable damping of system oscillations. However, the performance of any FACTS device depends highly upon its parameters and a suitable location in the power network. In this paper, PSO- and GA-based techniques are used separately and their performances are compared to investigate this problem. The results of the small-signal stability analysis are presented using eigenvalues as well as time-domain responses in the face of two common power system disturbances, e.g., varying load and transmission line outage. It is revealed that the PSO-based TCSC controller is more effective than the GA-based controller, even during critical loading conditions.
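
A generic PSO loop of the kind that could tune TCSC parameters; the power system model and the small-signal damping objective are not reproduced here, so a smooth toy cost function and hypothetical parameter bounds stand in for them.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(cost, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Generic particle swarm optimiser; in the paper's setting `cost` would be a
    damping index computed from an eigenvalue analysis of the power system model."""
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles inside the bounds
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy stand-in for the TCSC tuning problem: minimise a smooth 2-parameter cost
toy_cost = lambda p: (p[0] - 0.6) ** 2 + (p[1] - 25.0) ** 2 / 100.0
best, best_f = pso(toy_cost, (np.array([0.1, 5.0]), np.array([0.9, 50.0])))
print(best, best_f)
```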

Keywords: Genetic Algorithm, Particle Swarm Optimization, Small Signal Stability, Thyristor Controlled Series Compensator.

9888 Characterization of an Almond Shell Composite Based on PHBH

Authors: J. Ivorra-Martinez, L. Quiles-Carrillo, J. Gomez-Caturla, T. Boronat, R. Balart

Abstract:

The utilization of almond crop by-products to obtain poly(3-hydroxybutyrate-co-3-hydroxyhexanoate) (PHBH)-based composites was carried out by using an extrusion process followed by injection moulding to obtain test samples. To improve the properties of the resulting composite, the incorporation of oligomeric lactic acid (OLA 8) as a coupling agent and plasticizer was additionally considered. A characterization process was carried out by measuring the mechanical properties, thermal properties, surface morphology, and water absorption ability. The use of the almond residue makes it possible to obtain PHBH-based composites of greater environmental interest and lower cost.

Keywords: Almond shell, PHBH, composite, polymer.

9887 Genetic Algorithm Based Design of Fuzzy Logic Power System Stabilizers in Multimachine Power System

Authors: Manisha Dubey, Aalok Dubey

Abstract:

This paper presents an approach for the design of fuzzy logic power system stabilizers using genetic algorithms. In the proposed fuzzy expert system, the speed deviation and its derivative have been selected as fuzzy inputs. In this approach, the parameters of the fuzzy logic controllers have been tuned using a genetic algorithm. Incorporation of the GA in the design of the fuzzy logic power system stabilizer adds an intelligent dimension to the stabilizer and significantly reduces computational time in the design process. It is shown in this paper that the system dynamic performance can be improved significantly by incorporating a genetic-based searching mechanism. To demonstrate the robustness of the genetic-based fuzzy logic power system stabilizer (GFLPSS), simulation studies on a multimachine system subjected to small perturbations and a three-phase fault have been carried out. Simulation results show the superiority and robustness of the GA-based power system stabilizer as compared to a conventionally tuned controller in enhancing system dynamic performance over a wide range of operating conditions.

Keywords: Dynamic stability, Fuzzy logic power system stabilizer, Genetic Algorithms, Genetic-based power system stabilizer

9886 Ontology-Based Systemizing of the Science Information Devoted to Waste Utilizing by Methanogenesis

Authors: Ye. Shapovalov, V. Shapovalov, O. Stryzhak, A. Salyuk

Abstract:

Over the past decades, the amount of scientific information has been growing exponentially, and it has become more complicated to process and systemize this amount of data. An approach to the systematization of scientific information on the production of biogas, based on the ontological IT platform “T.O.D.O.S.”, has been developed. It has been proposed to select the semantic characteristics of each work for their further introduction into the IT platform “T.O.D.O.S.”. An ontological graph with a ranking function for previous scientific research and for a system of selection of microorganisms has been worked out. These systems provide high-performance management of scientific information.

Keywords: Ontology-based analysis, analysis of scientific data, methanogenesis, microorganism hierarchy, T.O.D.O.S.

9885 Quantification of Technology Innovation Using a Risk-Based Framework

Authors: Gerard E. Sleefe

Abstract:

There is significant interest in achieving technology innovation through new product development activities. It is recognized, however, that traditional project management practices focused only on performance, cost, and schedule attributes can often lead to risk mitigation strategies that limit new technology innovation. In this paper, a new approach is proposed for formally managing and quantifying technology innovation. This approach uses a risk-based framework that simultaneously optimizes innovation attributes along with traditional project management and systems engineering attributes. To demonstrate the efficacy of the new risk-based approach, a comprehensive product development experiment was conducted. This experiment simultaneously managed the innovation risks and the product delivery risks through the proposed risk-based framework. Quantitative metrics for technology innovation were tracked, and the experimental results indicate that the risk-based approach can simultaneously achieve both project deliverable and innovation objectives.

Keywords: innovation, risk assessment, product development, technology management.

9884 Play in College: Shifting Perspectives and Creative Problem-Based Play

Authors: Agni Stylianou-Georgiou, Eliza Pitri

Abstract:

This study is a design narrative that discusses researchers’ new learning based on changes made in pedagogies and learning opportunities in the context of a Cognitive Psychology and an Art History undergraduate course. The purpose of this study was to investigate how to encourage creative problem-based play in tertiary education engaging instructors and student-teachers in designing educational games. Course instructors modified content to encourage flexible thinking during game design problem-solving. Qualitative analyses of data sources indicated that Thinking Birds’ questions could encourage flexible thinking as instructors engaged in creative problem-based play. However, student-teachers demonstrated weakness in adopting flexible thinking during game design problem solving. Further studies of student-teachers’ shifting perspectives during different instructional design tasks would provide insights for developing the Thinking Birds’ questions as tools for creative problem solving.

Keywords: Creative problem-based play, educational games, flexible thinking, tertiary education.

9883 CMOS-Compatible Plasmonic Nanocircuits for On-Chip Integration

Authors: Shiyang Zhu, G. Q. Lo, D. L. Kwong

Abstract:

Silicon photonics is emerging as a unified platform for driving photonic-based telecommunications and local photonic-based interconnects, but it suffers from a large footprint compared with nanoelectronics. Plasmonics is an attractive alternative for nanophotonics. In this work, two CMOS-compatible plasmonic waveguide platforms are compared. One is the horizontal metal-insulator-Si-insulator-metal nanoplasmonic waveguide and the other is the metal-insulator-Si hybrid plasmonic waveguide. Various passive and active photonic devices have been experimentally demonstrated based on these two plasmonic waveguide platforms.

Keywords: Plasmonics, on-chip integration, Silicon photonics.

9882 Optimizing Allocation of Two Dimensional Irregular Shapes using an Agent Based Approach

Authors: Ramin Halavati, Saeed B. Shouraki, Mahdieh Noroozian, Saman H. Zadeh

Abstract:

Packing problems arise in a wide variety of application areas. The basic problem is that of determining an efficient arrangement of different objects in a region without any overlap and with minimal wasted gap between shapes. This paper presents a novel population-based approach for optimizing the arrangement of irregular shapes. In this approach, each shape is coded as an agent, and the agents' reproduction and grouping policies result in arrangements of the objects in positions with the least wasted area between them. The approach is implemented in an application for cutting sheets, and test results on several problems from the literature are presented.

Keywords: Optimization, Bin Packing, Agent Based Systems.

9881 Design of an Innovative Accelerant Detector

Authors: Esther T. Akinlabi, Milan Isvarial, Stephen A. Akinlabi

Abstract:

Today, canines are still used effectively in accelerant detection situations. However, this method is becoming impractical in the modern age, and a new automated replacement for the canine is required. This paper reports the design of an innovative accelerant detector. Designing an accelerant detector is a long process, as is any design process; therefore, a solution to the need for a mobile, effective accelerant detector is hereby presented. The device is simple and efficient, ensuring that any accelerant detection can be conducted quickly and easily. The design utilizes ultraviolet (UV) light to detect the accelerant. When the UV light shines on an accelerant, the hydrocarbons in the accelerant emit fluorescence. The advantages of using UV light to detect accelerants are also outlined in this paper. The mobility of the device is achieved by using a direct current (DC) motor to run tank tracks. Tank tracks were chosen so as to ensure that the device will be mobile in the rough terrain of a fire site. The materials selected for the various parts are also presented. A SolidWorks simulation was also conducted on the stresses in the shafts, and the results are presented. This design is an innovative solution which offers a user-friendly interface. The design is also environmentally friendly, ecologically sound and safe to use.

Keywords: Accelerant detector, Canines, Gas Chromatography- Mass Spectrometry (GC-MS), Ultra Violet light.

9880 Comparison between Higher-Order SVD and Third-order Orthogonal Tensor Product Expansion

Authors: Chiharu Okuma, Jun Murakami, Naoki Yamamoto

Abstract:

In digital signal processing it is important to approximate multi-dimensional data by the method called rank reduction, in which we reduce the rank of multi-dimensional data from higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. Additionally, an outer product expansion extended from SVD was proposed and implemented for multi-dimensional data, and it has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion has high computational complexity and lacks orthogonality between the expansion terms. Therefore we have proposed an alternative method, the Third-order Orthogonal Tensor Product Expansion, abbreviated as 3-OTPE. 3-OTPE uses the power method instead of a nonlinear optimization method to decrease the computing time. At the same time, the group of De Lathauwer proposed the Higher-Order SVD (HOSVD), which is also developed as an SVD extension for multi-dimensional data. 3-OTPE and HOSVD are similar with regard to the rank reduction of multi-dimensional data. Using these two methods we can obtain computation results respectively; some of them are the same while others are slightly different. In this paper, we compare 3-OTPE to HOSVD in terms of calculation accuracy and computing time, and clarify the difference between these two methods.
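
A minimal HOSVD sketch for a third-order tensor, computing one orthogonal factor matrix from the SVD of each mode unfolding plus the core tensor; the tensor is random and the implementation is illustrative rather than the authors' code.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the mode-n fibres become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, A, mode):
    """Mode-n product of tensor T with matrix A (contracts A's columns with mode n)."""
    return np.moveaxis(np.tensordot(A, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """Higher-Order SVD: factor matrices from the unfoldings, then the core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(T.ndim)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)
    return core, factors

T = np.random.rand(4, 5, 6)
core, factors = hosvd(T)
# Reconstruction check: applying the factors to the core recovers the tensor
R = core
for m, U in enumerate(factors):
    R = mode_product(R, U, m)
print(np.allclose(R, T))  # True
```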

Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), higher-order tensor, outer product expansion, power method.

9879 An ICA Algorithm for Separation of Convolutive Mixture of Speech Signals

Authors: Rajkishore Prasad, Hiroshi Saruwatari, Kiyohiro Shikano

Abstract:

This paper describes an Independent Component Analysis (ICA) based fixed-point algorithm for the blind separation of a convolutive mixture of speech, picked up by a linear microphone array. The proposed algorithm extracts independent sources by non-Gaussianizing the Time-Frequency Series of Speech (TFSS) in a deflationary way. The degree of non-Gaussianization is measured by negentropy. The relative performances of the algorithm under random initialization and Null Beamformer (NBF) based initialization are studied. It has been found that an NBF-based initial value gives speedy convergence as well as better separation performance.
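
A sketch of a deflationary fixed-point ICA iteration with a tanh-based negentropy contrast; it is applied to a toy instantaneous mixture, whereas the paper works on convolutive speech mixtures in the time-frequency domain, so this only illustrates the core update.

```python
import numpy as np

def whiten(X):
    """Centre and whiten the observations X (mixtures x samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    return (E @ np.diag(1.0 / np.sqrt(d)) @ E.T) @ X

def fixed_point_ica(X, n_sources, iters=200, tol=1e-6):
    """Deflationary fixed-point ICA: each unit maximises a negentropy
    approximation (tanh contrast), with Gram-Schmidt deflation between units."""
    Z = whiten(X)
    n = Z.shape[0]
    W = np.zeros((n_sources, n))
    rng = np.random.default_rng(0)
    for k in range(n_sources):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(iters):
            wx = w @ Z
            g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
            w_new = (Z * g).mean(axis=1) - g_prime.mean() * w   # fixed-point update
            w_new -= W[:k].T @ (W[:k] @ w_new)                  # deflation (orthogonalise)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < tol
            w = w_new
            if converged:
                break
        W[k] = w
    return W @ Z                                                # estimated sources

# Toy instantaneous mixture of two signals (the paper handles the harder convolutive case)
t = np.linspace(0, 1, 4000)
S = np.vstack([np.sin(2 * np.pi * 7 * t), np.sign(np.sin(2 * np.pi * 3 * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
estimates = fixed_point_ica(A @ S, n_sources=2)
```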

Keywords: Blind signal separation, independent component analysis, negentropy, convolutive mixture.

9878 From Separatism to Coalition: Variants in Language Politics and Leadership Pattern in Dravidian Movement

Authors: Subramaniam Chandran

Abstract:

This paper describes the evolution of language politics and the part played by political leaders with reference to the Dravidian parties in Tamil Nadu. It explores the interesting evolution from separatism to coalition in sustaining the values of parliamentary democracy and federalism. It seems that the appropriation of language politics is fully ascribed to the DMK leadership under Annadurai and Karunanidhi. For them, the Tamil language is a self-determining power, a terrain of nationhood, and a perennial source of social and political power. The DMK remains a symbol of the Tamil nationalist party playing language politics in the interest of the Tamils. Though electoral alliances largely determine success, language politics still has significant space in the politics of Tamil Nadu. Ironically, the DMK moves from the periphery to the centre to gain national recognition for the Tamils as well as to maximize its own power. The evolution can be seen in two major phases: language politics for party building, and language politics for state building, with three successive political processes, namely, language politics in the processes of separatism, representative politics and coalition. The much-pronounced Dravidian Movement has been radical enough to democratize its party ideology and survive within the spirit of parliamentary democracy. This has secured its own rewards in terms of political power. Political power provides the means to achieve the social and political goals of the political party. Language politics and the leadership pattern actualized this trend even though the movement shifted from separatism to coalition.

Keywords: Language politics, cultural nationalism, leadership, social justice

9877 Content-Based Image Retrieval Using HSV Color Space Features

Authors: Hamed Qazanfari, Hamid Hassanpour, Kazem Qazanfari

Abstract:

In this paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches an image database with a query image, based on its visual content, to retrieve similar images. In this paper, with the aim of simulating the human visual system's sensitivity to an image's edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in the HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features capture the content of images most efficiently. The proposed method has been evaluated on three standard databases: Corel 5k, Corel 10k and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods.
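
The CDH also encodes edge orientations and perceptual colour differences between neighbouring pixels; the sketch below only shows the simpler HSV colour histogram feature mentioned above, with an L1 distance as a stand-in similarity measure.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hsv_histogram(rgb_image, bins=(8, 4, 4)):
    """Quantised HSV colour histogram used as a global CBIR feature vector.
    rgb_image: float array in [0, 1] with shape (H, W, 3)."""
    hsv = rgb_to_hsv(rgb_image)
    hist, _ = np.histogramdd(hsv.reshape(-1, 3), bins=bins,
                             range=[(0, 1), (0, 1), (0, 1)])
    hist = hist.ravel()
    return hist / hist.sum()                 # normalise so images of any size compare

def histogram_distance(h1, h2):
    """Simple L1 distance between two normalised histograms (smaller = more similar)."""
    return np.abs(h1 - h2).sum()

# Toy query: compare two small random images (stand-ins for database entries)
img_a, img_b = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
print(histogram_distance(hsv_histogram(img_a), hsv_histogram(img_b)))
```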

Keywords: Content-based image retrieval, color difference histogram, efficient features selection, entropy, correlation.

9876 Margin-Based Feed-Forward Neural Network Classifiers

Authors: Han Xiao, Xiaoyan Zhu

Abstract:

The margin-based principle was proposed long ago, and it has been proved that this principle can reduce structural risk and improve performance in both theoretical and practical respects. Meanwhile, the feed-forward neural network is a traditional classifier that is currently attracting much attention with deeper architectures. However, the training algorithm of the feed-forward neural network is derived from the Widrow-Hoff principle, which minimizes the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the margin-based principle, which can effectively improve the accuracy and generalization ability of neural network classifiers with fewer labelled samples and a flexible network. We have conducted experiments on four UCI open datasets and achieved good results as expected. In conclusion, our model can handle sparsely labelled, high-dimensional datasets with high accuracy, while the modification from the old ANN method to our method is easy and requires almost no extra work.
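
A minimal sketch of margin-based training for a one-hidden-layer network: the squared error of the Widrow-Hoff rule is replaced by a hinge loss. The architecture, learning rate and data are hypothetical, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_margin_net(X, y, hidden=16, lr=0.05, epochs=300):
    """One-hidden-layer network trained with a hinge (max-margin) loss.
    y must be in {-1, +1}."""
    n, d = X.shape
    W1 = rng.standard_normal((d, hidden)) * 0.1
    b1 = np.zeros(hidden)
    w2 = rng.standard_normal(hidden) * 0.1
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                  # hidden activations
        s = h @ w2 + b2                           # raw margin scores
        viol = (y * s) < 1.0                      # samples violating the margin
        ds = np.where(viol, -y, 0.0) / n          # subgradient of the mean hinge loss
        grad_w2 = h.T @ ds
        grad_b2 = ds.sum()
        dh = np.outer(ds, w2) * (1.0 - h ** 2)    # backprop through tanh
        grad_W1 = X.T @ dh
        grad_b1 = dh.sum(axis=0)
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        w2 -= lr * grad_w2; b2 -= lr * grad_b2
    return lambda Xq: np.sign(np.tanh(Xq @ W1 + b1) @ w2 + b2)

# Toy separable data
X = rng.standard_normal((200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
predict = train_margin_net(X, y)
print((predict(X) == y).mean())                   # training accuracy
```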

Keywords: Max-Margin Principle, Feed-Forward Neural Network, Classifier.

9875 Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem

Authors: G. M. Karthik, Ramachandra V. Pujeri

Abstract:

The Generalized Center String (GCS) problem is generalized from the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the longest center string must be found without knowing in advance which sequences may not contain any motifs in a particular biological gene process. GCS can be solved by frequent pattern-mining techniques and is known to be fixed-parameter tractable with respect to the fixed input sequence length and symbol set size. Efficient methods known as Bpriori algorithms can solve GCS with reasonable time/space complexities; the Bpriori 2 and Bpriori 3-2 algorithms find center strings of any length and the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm by a Constraint-Based Frequent Pattern mining (CBFP) technique which integrates the ideas of constraint-based mining and FP-tree mining. The CBFP mining technique solves the GCS problem not only for center strings of any length, but also for the positions of all their mutated copies in the input sequences. The CBFP mining technique constructs a TRIE-like structure together with an FP-tree to represent the mutated copies of center strings of any length, along with constraints that restrain the growth of the consensus tree. The complexity analysis of the constraint-based FP mining technique and the Bpriori algorithm is carried out for both the worst-case and average-case approaches. The algorithm's correctness is demonstrated by comparison with the Bpriori algorithm on artificial data.

Keywords: Constraint Based Mining, FP tree, Data mining, GCS problem, CBFP mining technique.

9874 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
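
A minimal Monte-Carlo PF sketch: material properties are sampled from assumed distributions, a factor of safety is computed for each sample, and PF is the fraction of samples with FS below 1. The simplified planar-failure FS formula and all input values are illustrative, not the mine's geotechnical model.

```python
import numpy as np

rng = np.random.default_rng(42)

def factor_of_safety(cohesion, friction_deg, unit_weight, height, slope_deg):
    """Simplified planar-failure FS for an illustrative slope (not the case-study model):
    resisting forces (cohesion + frictional resistance) over driving forces on a plane
    assumed to dip at half the slope angle."""
    plane = np.radians(slope_deg) / 2.0
    weight = 0.5 * unit_weight * height ** 2 / np.tan(plane)     # sliding block weight per metre
    driving = weight * np.sin(plane)
    resisting = (cohesion * height / np.sin(plane)
                 + weight * np.cos(plane) * np.tan(np.radians(friction_deg)))
    return resisting / driving

n = 100_000
# Hypothetical input distributions (values for illustration only)
cohesion = rng.normal(25.0, 5.0, n)          # kPa
friction = rng.normal(35.0, 3.0, n)          # degrees
unit_w   = rng.normal(26.0, 0.5, n)          # kN/m3
fs = factor_of_safety(cohesion, friction, unit_w, height=30.0, slope_deg=55.0)
pf = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.3%}")
```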

Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.

9873 A Complexity-Based Approach in Image Compression using Neural Networks

Authors: Hadi Veisi, Mansour Jamzad

Abstract:

In this paper we present an adaptive method for image compression that is based on the complexity level of the image. The basic compressor/de-compressor structure of this method is a multilayer perceptron artificial neural network. In the adaptive approach, different back-propagation artificial neural networks are used as compressors and de-compressors; this is done by dividing the image into blocks, computing the complexity of each block and then selecting one network for each block according to its complexity value. Three complexity measure methods, called Entropy, Activity and Pattern-based, are used to determine the level of complexity in image blocks, and their ability to estimate complexity is evaluated and compared. In training and evaluation, each image block is assigned to a network based on its complexity value. Best-SNR is another alternative for selecting the compressor network for image blocks in the evaluation phase, which chooses the trained network that yields the best SNR when compressing the input image block. In our evaluations, the best results are obtained when overlapping of the blocks is allowed and the compressor network is chosen based on Best-SNR. In this case, the results demonstrate the superiority of this method compared with previous similar works and the JPEG standard coding.
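
A sketch of the Entropy complexity measure only: each block's grey-level entropy is computed and mapped, via hypothetical thresholds, to the index of the compressor network that would handle it.

```python
import numpy as np

def block_entropy(block, levels=256):
    """Shannon entropy of a grey-level block: one of the complexity measures that
    can route a block to a suitably sized compressor network."""
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    p = hist[hist > 0] / block.size
    return -(p * np.log2(p)).sum()

def assign_networks(image, block=8, thresholds=(2.0, 4.5)):
    """Split an image into blocks and label each as low / medium / high complexity."""
    h, w = image.shape
    labels = np.zeros((h // block, w // block), dtype=int)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            e = block_entropy(image[i:i + block, j:j + block])
            labels[i // block, j // block] = int(np.digitize(e, thresholds))
    return labels   # 0, 1 or 2: index of the compressor network to use

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(np.bincount(assign_networks(img).ravel(), minlength=3))
```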

Keywords: Adaptive image compression, Image complexity, Multi-layer perceptron neural network, JPEG Standard, PSNR.

9872 Colour Image Compression Method Based On Fractal Block Coding Technique

Authors: Dibyendu Ghoshal, Shimal Das

Abstract:

Image compression based on fractal coding is a lossy compression method normally used for gray-level images, with range and domain blocks of rectangular shape. Fractal-based digital image compression techniques provide a large compression ratio, and in this paper a method is proposed using the YUV colour space and fractal theory, which is based on iterated transformations. Fractal geometry is mainly applied in the current study to colour image compression coding. Colour images possess correlations among the colour components, and hence a high compression ratio can be achieved by exploiting these redundancies. The proposed method utilises the self-similarity within the colour image as well as the cross-correlations between the colour components. Experimental results show that a greater compression ratio can be achieved with large domain blocks, with a trade-off in image quality that remains good to acceptable at less than 1 bit per pixel.

Keywords: Fractal coding, Iterated Function System (IFS), Image compression, YUV colour space.

9871 The Potential of Tempo-Oxidized Cellulose Nanofibers to Replace Ethylene-Propylene-Diene Monomer Rubber

Authors: S. Dikmen Kucuk, A. Tozluoglu, Y. Guner

Abstract:

In recent years, petroleum-based polymers have begun to be restricted in many countries due to their effects on humans and the environment. Thus, organic-based biodegradable materials have attracted much interest in the composite industry because of environmental concerns. As a result, it has been demanded that inorganic and petroleum-based materials be reduced and replaced with biodegradable materials. At this point, this study aims to investigate the potential of using TEMPO (2,2,6,6-tetramethylpiperidine-1-oxyl)-mediated oxidation nano-fibrillated cellulose instead of EPDM (ethylene-propylene-diene monomer) rubber, which is a petroleum-based material. Thus, the exchange of petroleum-based EPDM rubber for organic-based cellulose nanofibers, which are environmentally friendly (green) and biodegradable, will be realized. The effect of TEMPO-oxidized cellulose nanofibers (TCNF) used instead of EPDM rubber was analyzed by rheological, mechanical, chemical, thermal and aging analyses. The aged surfaces were visually scrutinized and surface morphological changes were examined via scanning electron microscopy (SEM). The results obtained showed that TEMPO-oxidation nano-fibrillated cellulose can be used at amounts of 1.0 and 2.2 phr, with the resulting values staying within tolerance according to the customer standard and without any chemical degradation, cracking, colour change or staining.

Keywords: EPDM, cellulose, green materials, nanofibrillated cellulose, TCNF, tempo-oxidized nanofiber.

9870 Visualization and Indexing of Spectral Databases

Authors: Tibor Kulcsar, Gabor Sarossy, Gabor Bereznai, Robert Auer, Janos Abonyi

Abstract:

On-line (near-infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and monitor the operation of the process. These techniques are based on looking for similar spectra with nearest neighbor algorithms and distance-based searching methods. The search for nearest neighbors in the spectral space is an NP-hard problem; the computational complexity increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional indexing that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables the utilization of advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means the prediction does not have to use the high-dimensional space but can be based on the mapped space as well. The results illustrate that the proposed method is able to segment (cluster) spectral databases and detect outliers that are not suitable for instance-based learning algorithms.

Keywords: indexing high dimensional databases, dimensional reduction, clustering, similarity, k-nn algorithm.

9869 Improving Spatiotemporal Change Detection: A High Level Fusion Approach for Discovering Uncertain Knowledge from Satellite Image Database

Authors: Wadii Boulila, Imed Riadh Farah, Karim Saheb Ettabaa, Basel Solaiman, Henda Ben Ghezala

Abstract:

This paper investigates the problem of tracking the spatiotemporal changes of a satellite image through the use of Knowledge Discovery in Databases (KDD). The purpose of this study is to help a given user effectively discover interesting knowledge and then build prediction and decision models. Unfortunately, the KDD process for spatiotemporal data is always marked by several types of imperfections. In our paper, we take these imperfections into consideration in order to provide more accurate decisions. To achieve this objective, different KDD methods are used to discover knowledge in satellite image databases. Each method presents a different point of view of the spatiotemporal evolution of a query model (which represents an object extracted from a satellite image). In order to combine these methods, we use evidence fusion theory, which considerably improves the spatiotemporal knowledge discovery process and increases our belief in the spatiotemporal model change. Experimental results on satellite images representing the region of Auckland in New Zealand show an improvement in overall change detection compared to classical methods.
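
Evidence fusion here presumably follows Dempster-Shafer theory; the sketch below shows Dempster's rule of combination applied to two hypothetical mass assignments about whether a query object has changed.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: fuse two basic probability assignments over a frame of
    discernment. Keys are frozensets of hypotheses (e.g. change classes)."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2                 # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical masses from two KDD methods about the change state of a query object
CHANGED, STABLE = frozenset({"changed"}), frozenset({"stable"})
EITHER = CHANGED | STABLE                           # mass expressing ignorance
m_method1 = {CHANGED: 0.6, STABLE: 0.1, EITHER: 0.3}
m_method2 = {CHANGED: 0.5, STABLE: 0.2, EITHER: 0.3}
print(dempster_combine(m_method1, m_method2))       # fused belief after normalising conflict
```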

Keywords: Knowledge discovery in satellite databases, knowledge fusion, data imperfection, data mining, spatiotemporal change detection.

9868 A Survey of Business Component Identification Methods and Related Techniques

Authors: Zhongjie Wang, Xiaofei Xu, Dechen Zhan

Abstract:

With the deep development of software reuse, component-related technologies have been widely applied in the development of large-scale complex applications. Component identification (CI) is one of the primary research problems in software reuse: domain business models are analyzed to obtain a set of business components with high reuse value and good reuse performance to support effective reuse. Based on the concept and classification of CI, its technical stack is briefly discussed from four views, i.e., the form of the input business models, identification goals, identification strategies, and the identification process. Then the various CI methods presented in the literature are classified into four types, i.e., domain analysis based methods, cohesion-coupling based clustering methods, CRUD matrix based methods, and other methods, with comparisons of their advantages and disadvantages. Additionally, some insufficiencies in the study of CI are discussed, and their causes are explained. Finally, the paper concludes with some promising tendencies in research on this problem.

Keywords: Business component, component granularity, component identification, reuse performance.
