Search results for: Concordance Analysis Techniques
10236 A Hybrid Method for Determination of Effective Poles Using Clustering Dominant Pole Algorithm
Authors: Anuj Abraham, N. Pappa, Daniel Honc, Rahul Sharma
Abstract:
In this paper, an analysis of some model order reduction techniques is presented. A new hybrid algorithm for model order reduction of linear time-invariant systems is compared with the conventional techniques, namely Balanced Truncation, Hankel Norm reduction, and the Dominant Pole Algorithm (DPA). The proposed hybrid algorithm, known as the Clustering Dominant Pole Algorithm (CDPA), is able to compute the full set of dominant poles and their cluster centers efficiently. The dominant poles of a transfer function are specific eigenvalues of the state space matrix of the corresponding dynamical system. The effectiveness of this novel technique is shown through the simulation results.
Keywords: Balanced truncation, Clustering, Dominant pole, Hankel norm, Model reduction.
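To illustrate the dominance notion used above, the following is a minimal sketch, not the authors' CDPA, that ranks the poles of a SISO state-space model (A, b, c) by the common residue-based measure |residue| / |Re(pole)| and then clusters the leading poles; the system matrices here are made-up stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_poles(A, b, c, n_dominant=4):
    """Rank poles of a SISO system (A, b, c) by |residue| / |Re(pole)|."""
    lam, V = np.linalg.eig(A)         # eigenvalues and right eigenvectors
    W = np.linalg.inv(V)              # rows of W are left eigenvectors
    residues = (c @ V) * (W @ b)      # residue of each pole in H(s) = sum r_i / (s - lam_i)
    dominance = np.abs(residues) / np.abs(lam.real)
    order = np.argsort(dominance)[::-1]
    return lam[order[:n_dominant]]

# hypothetical stable 6th-order system (illustrative data only)
rng = np.random.default_rng(0)
A = np.diag([-0.1, -0.5, -1.0, -2.0, -5.0, -10.0]) + 0.01 * rng.standard_normal((6, 6))
b = rng.standard_normal(6)
c = rng.standard_normal(6)

poles = dominant_poles(A, b, c)
# cluster the dominant poles (real/imag parts) and report cluster centres
pts = np.column_stack([poles.real, poles.imag])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)
print("dominant poles:", poles)
print("cluster centres:", km.cluster_centers_)
```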
10235 A New Image Encryption Approach using Combinational Permutation Techniques
Authors: A. Mitra, Y. V. Subba Rao, S. R. M. Prasanna
Abstract:
This paper proposes a new approach for image encryption using a combination of different permutation techniques. The main idea behind the present work is that an image can be viewed as an arrangement of bits, pixels, and blocks. The intelligible information present in an image is due to the correlations among the bits, pixels, and blocks in a given arrangement. This perceivable information can be reduced by decreasing the correlation among the bits, pixels, and blocks using certain permutation techniques. This paper presents an approach for a random combination of the aforementioned permutations for image encryption. From the results, it is observed that the permutation of bits is effective in significantly reducing the correlation, thereby decreasing the perceptual information, whereas the permutations of pixels and blocks are better at producing higher-level security compared to bit permutation. A random combination method employing all three techniques is thus observed to be useful for tactical security applications, where protection is needed only against a casual observer.
Keywords: Encryption, Permutation, Good key, Combinational permutation, Pseudo-random index generator.
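A minimal sketch of the pixel-permutation stage, with a seed standing in for the key of a pseudo-random index generator; bit- and block-level permutations follow the same pattern on unpacked bits or block indices. This is an illustration, not the paper's implementation.

```python
import numpy as np

def permute_pixels(img, key):
    """Encrypt by shuffling pixel positions with a keyed pseudo-random index generator."""
    flat = img.reshape(-1)
    idx = np.random.default_rng(key).permutation(flat.size)
    return flat[idx].reshape(img.shape), idx

def unpermute_pixels(enc, idx):
    """Invert the permutation: put each shuffled pixel back at its source index."""
    flat = np.empty_like(enc.reshape(-1))
    flat[idx] = enc.reshape(-1)
    return flat.reshape(enc.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy 4x4 "image"
enc, idx = permute_pixels(img, key=1234)            # key is a hypothetical secret seed
dec = unpermute_pixels(enc, idx)
assert np.array_equal(img, dec)
```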
10234 Solving 94-bit ECDLP with 70 Computers in Parallel
Authors: Shunsuke Miyoshi, Yasuyuki Nogami, Takuya Kusaka, Nariyoshi Yamai
Abstract:
The elliptic curve discrete logarithm problem (ECDLP) is one of the problems on which the security of pairing-based cryptography is based. This paper considers Pollard’s rho method to evaluate the security of the ECDLP on the Barreto-Naehrig (BN) curve, which is an efficient pairing-friendly curve. Some techniques are proposed to make the rho method efficient. In particular, the group structure on the BN curve, the distinguished point method, and the Montgomery trick are well-known techniques. This paper applies these techniques and shows their optimization. According to the experimental results, for which a large-scale parallel system with MySQL was applied, a 94-bit ECDLP was solved in about 28 hours by parallelizing 71 computers.
Keywords: Pollard’s rho method, BN curve, Montgomery multiplication.
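For readers unfamiliar with the rho method, here is a minimal sketch of Pollard's rho for discrete logarithms, shown on the multiplicative group of a small prime field rather than a BN curve, and without the distinguished-point or Montgomery optimizations the paper discusses.

```python
import random

def pollard_rho_dlog(g, h, p, order):
    """Solve g^x = h (mod p) for x, where g has the given prime order,
    using Pollard's rho: a three-way partition walk with Floyd cycle
    detection, and random restarts for the rare degenerate collision."""
    def step(x, a, b):
        s = x % 3
        if s == 0:                 # multiply by g: exponent a increases
            return x * g % p, (a + 1) % order, b
        if s == 1:                 # multiply by h: exponent b increases
            return x * h % p, a, (b + 1) % order
        return x * x % p, 2 * a % order, 2 * b % order   # squaring doubles both

    while True:
        a, b = random.randrange(order), random.randrange(order)
        x = pow(g, a, p) * pow(h, b, p) % p
        X, A, B = x, a, b
        while True:
            x, a, b = step(x, a, b)            # tortoise: one step
            X, A, B = step(*step(X, A, B))     # hare: two steps
            if x == X:
                break
        r = (b - B) % order
        if r:   # g^a * h^b == g^A * h^B  =>  x = (A - a) / (b - B) mod order
            return (A - a) * pow(r, -1, order) % order

# toy instance: 4 has prime order 509 in the multiplicative group mod 1019
p, order, g = 1019, 509, 4
h = pow(g, 123, p)
assert pollard_rho_dlog(g, h, p, order) == 123
```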
10233 On the Application of Meta-Design Techniques in Hardware Design Domain
Authors: R. Damaševičius
Abstract:
System-level design based on high-level abstractions is becoming increasingly important in hardware and embedded system design. This paper analyzes meta-design techniques oriented at developing meta-programs and meta-models for well-understood domains. Meta-design techniques include meta-programming and meta-modeling. At the programming level of the design process, meta-design means developing generic components that are usable in a wider context of application than the original domain components. At the modeling level, meta-design means developing design patterns that describe general solutions to common recurring design problems, and meta-models that describe the relationship between different types of design models and abstractions. The paper describes and evaluates the implementation of meta-design in the hardware design domain using object-oriented and meta-programming techniques. The presented ideas are illustrated with a case study.
Keywords: Design patterns, meta-design, meta-modeling, meta-programming.
10232 Analysis of Supply Chain Risk Management Strategies: Case Study of Supply Chain Disruptions
Authors: Marcelo Dias Carvalho, Leticia Ishikawa
Abstract:
Supply Chain Risk Management refers to a set of strategies used by companies to avoid supply chain disruption caused by damage at production facilities, natural disasters, capacity issues, inventory problems, incorrect forecasts, and delays. Many companies use the techniques of the Toyota Production System, which in some ways works against better management of supply chain risks. This paper studies key events in some multinationals to analyze the trade-off between the best supply chain risk management techniques and management policies designed to create lean enterprises. The result of a good balance of these actions is the reduction of losses, increased customer trust in the company, and better preparedness to face the general risks of a supply chain.
Keywords: Supply chain disruptions, supply chain management, supply chain resilience, just-in-time production, lean manufacturing.
10231 Comparative Study of Evolutionary Model and Clustering Methods in Circuit Partitioning Pertaining to VLSI Design
Authors: K. A. Sumitra Devi, N. P. Banashree, Annamma Abraham
Abstract:
Partitioning is a critical area of VLSI CAD. In order to build complex digital logic circuits, it is often essential to sub-divide a multi-million transistor design into manageable pieces. This paper looks at various partitioning techniques in VLSI CAD, targeted at various applications. We propose an evolutionary time-series model and a statistical glitch prediction system using a neural network, with selection of global features by means of a clustering method, for partitioning a circuit. For the evolutionary time-series model, we made use of genetic, memetic, and neuro-memetic techniques. Our work focused on the use of the clustering methods K-means and EM. A comparative study is provided for all techniques to solve the problem of circuit partitioning pertaining to VLSI design. The performance of all approaches is compared using benchmark data provided by the MCNC standard cell placement benchmark netlists. Analysis of the experimental results showed that the neuro-memetic model achieves greater performance than the other models in recognizing sub-circuits with a minimum amount of interconnections between them.
Keywords: VLSI, circuit partitioning, memetic algorithm, genetic algorithm.
10230 Fractal - Wavelet Based Techniques for Improving the Artificial Neural Network Models
Authors: Reza Bazargan Lari, Mohammad H. Fattahi
Abstract:
Natural resources management, including water resources, requires reliable estimations of time-variant environmental parameters. Small improvements in the estimation of environmental parameters would have great effects on management decisions. Noise reduction using wavelet techniques is an effective approach for preprocessing of practical data sets. Predictability enhancement of river flow time series is assessed using fractal approaches before and after applying wavelet-based preprocessing. Time series correlation and persistency, the minimum sufficient length for training the predicting model, and the maximum valid length of predictions were also investigated through a fractal assessment.
Keywords: Wavelet, de-noising, predictability, time series fractal analysis, valid length, ANN.
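The wavelet de-noising step described above can be sketched as follows using the PyWavelets library; the wavelet choice, decomposition level, and the universal soft threshold are illustrative defaults rather than the authors' settings, and the series is synthetic.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold detail coefficients with the universal (VisuShrink) threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise std from finest details
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))          # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# hypothetical noisy "river flow" series for demonstration
t = np.linspace(0, 10, 1024)
clean = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 0.05 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)
denoised = wavelet_denoise(noisy)
```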
10229 Hybridized Technique to Analyze Workstress Related Data via the StressCafé
Authors: Anusua Ghosh, Andrew Nafalski, Jeffery Tweedale, Maureen Dollard
Abstract:
This paper presents an approach of hybridizing two or more artificial intelligence (AI) techniques, which are being used to fuzzify the workstress level ranking and categorize the rating accordingly. The use of two or more techniques (a hybrid approach) has been considered in this case, as combining different techniques may lead to neutralizing each other's weaknesses, generating a superior hybrid solution. Recent research has shown that there is a need for more valid and reliable tools for assessing work stress. Thus, artificial intelligence techniques have been applied in this instance to provide a solution to a psychological application. An overview of the novel and autonomous interactive model for analysing work stress that has been developed using multi-agent systems is also presented in this paper. The establishment of the intelligent multi-agent decision analyser (IMADA), using a hybridized technique of neural networks and fuzzy logic within the multi-agent based framework, is also described.
Keywords: Fuzzy logic, intelligent agent, multi-agent systems, neural network, workplace stress.
10228 Evolutionary Search Techniques to Solve Set Covering Problems
Authors: Darwin Gouwanda, S. G. Ponnambalam
Abstract:
The set covering problem is a classical problem in computer science and complexity theory. It has many applications, such as the airline crew scheduling problem, the facility location problem, vehicle routing, and the assignment problem. In this paper, three different techniques are applied to solve the set covering problem. Firstly, a mathematical model of the set covering problem is introduced and solved by using the optimization solver LINGO. Secondly, the Genetic Algorithm Toolbox available in MATLAB is used to solve the set covering problem. Lastly, an ant colony optimization method is programmed in the MATLAB programming language. Results obtained from these methods are presented in tables. In order to assess the performance of the techniques used in this project, benchmark problems available in the open literature are used.
Keywords: Set covering problem, genetic algorithm, ant colony optimization, LINGO.
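For intuition about the problem itself, the sketch below is the classic greedy approximation (repeatedly pick the subset with the lowest cost per newly covered element), offered as a baseline rather than any of the three techniques benchmarked in the paper; the instance data are made up.

```python
def greedy_set_cover(universe, subsets, costs):
    """Greedy approximation: pick the cheapest subset per newly covered
    element until everything is covered (ln-n approximation guarantee)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (i for i, s in enumerate(subsets) if uncovered & s),
            key=lambda i: costs[i] / len(uncovered & subsets[i]),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# toy instance (hypothetical data)
universe = range(1, 11)
subsets = [set(range(1, 6)), set(range(4, 9)), {7, 8, 9, 10}, {1, 10}]
costs = [3.0, 2.0, 2.5, 1.0]
print(greedy_set_cover(universe, subsets, costs))  # indices of the chosen subsets
```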
10227 Determination of Surface Roughness by Ball Burnishing Process Using Factorial Techniques
Authors: P. S. Dabeer, G. K. Purohit
Abstract:
Burnishing is a method of finishing and hardening machined parts by plastic deformation of the surface. Experimental work based on a central composite second-order rotatable design has been carried out on a lathe machine to establish the effects of ball burnishing parameters on the surface roughness of brass material. Analysis of the results by the analysis of variance technique and the F-test shows that the parameters considered have significant effects on the surface roughness.
Keywords: Ball burnishing, Response Surface Methodology.
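A significance test of the kind reported above can be reproduced with a one-way ANOVA F-test; the roughness readings below are hypothetical stand-ins, not the paper's data.

```python
from scipy import stats

# hypothetical surface roughness (Ra, in µm) measured at three burnishing forces
ra_low = [0.82, 0.79, 0.85, 0.81, 0.83]
ra_mid = [0.61, 0.64, 0.60, 0.63, 0.62]
ra_high = [0.55, 0.52, 0.57, 0.54, 0.56]

f_stat, p_value = stats.f_oneway(ra_low, ra_mid, ra_high)
if p_value < 0.05:
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}: burnishing force has a significant effect")
```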
10226 Role of Association Rule Mining in Numerical Data Analysis
Authors: Sudhir Jagtap, Kodge B. G., Shinde G. N., Devshette P. M.
Abstract:
Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century, the life sciences and even the arts have adopted elements of scientific computations. Numerical data analysis has become a key process in research and development across all fields [6]. In this paper, we have made an attempt to analyze specified numerical patterns with reference to association rule mining techniques, using minimum confidence and minimum support mining criteria. The extracted rules and analyzed results are graphically demonstrated. Association rules are a simple but very useful form of data mining that describe the probabilistic co-occurrence of certain events within a database [7]. They were originally designed to analyze market-basket data, in which the likelihood of items being purchased together within the same transaction is analyzed.
Keywords: Numerical data analysis, Data Mining, Association Rule Mining.
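The support/confidence criteria mentioned above can be made concrete with a brute-force miner over pairwise rules; the transactions here are toy discretized numerical ranges, and the thresholds are arbitrary examples rather than the paper's settings.

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.4, min_confidence=0.7):
    """Brute-force support/confidence mining over pairwise rules."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    support = lambda s: sum(1 for t in transactions if s <= t) / n
    rules = []
    for a, b in combinations(items, 2):
        s_ab = support({a, b})
        if s_ab < min_support:
            continue                      # itemset not frequent enough
        for lhs, rhs in ((a, b), (b, a)):
            conf = s_ab / support({lhs})  # P(rhs | lhs)
            if conf >= min_confidence:
                rules.append((lhs, rhs, s_ab, conf))
    return rules

# toy transactions over discretized numerical ranges (hypothetical)
transactions = [{"low", "high"}, {"low", "mid"}, {"low", "high"}, {"mid"}, {"low", "high"}]
for lhs, rhs, s, c in mine_rules(transactions):
    print(f"{lhs} -> {rhs}  support={s:.2f}  confidence={c:.2f}")
```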
10225 Wavelet-Based ECG Signal Analysis and Classification
Authors: Madina Hamiane, May Hashim Ali
Abstract:
This paper presents the processing and analysis of ECG signals. The study is based on the wavelet transform and uses exclusively the MATLAB environment. The study includes removing baseline wander and further de-noising through the wavelet transform; metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean squared error (MSE) are used to assess the efficiency of the de-noising techniques. Feature extraction is subsequently performed, whereby signal features such as heart rate and rise and fall levels are extracted, and the QRS complex is detected, which helps in classifying the ECG signal. Classification is the last step in the analysis of the ECG signals, and it is shown that these are successfully classified as normal rhythm or abnormal rhythm. The final result proved the adequacy of using the wavelet transform for the analysis of ECG signals.
Keywords: ECG Signal, QRS detection, thresholding, wavelet decomposition, feature extraction.
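The three quality metrics named above have standard definitions that fit in a few lines; the ECG-like trace below is synthetic and only serves to exercise the functions.

```python
import numpy as np

def mse(clean, denoised):
    """Mean squared error between reference and processed signals."""
    return np.mean((clean - denoised) ** 2)

def snr_db(clean, denoised):
    """Signal-to-noise ratio in dB, with the residual as noise."""
    return 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - denoised) ** 2))

def psnr_db(clean, denoised):
    """Peak signal-to-noise ratio in dB."""
    peak = np.max(np.abs(clean))
    return 10 * np.log10(peak ** 2 / mse(clean, denoised))

# synthetic ECG-like trace (illustrative only)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
denoised = clean + 0.01 * np.random.default_rng(1).standard_normal(t.size)
print(f"MSE={mse(clean, denoised):.2e}  SNR={snr_db(clean, denoised):.1f} dB  "
      f"PSNR={psnr_db(clean, denoised):.1f} dB")
```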
10224 Web Page Watermarking: XML files using Synonyms and Acronyms
Authors: Nighat Mir, Sayed Afaq Hussain
Abstract:
Advances in the field of computing have massively increased the use of web-based electronic documents. Current copyright protection laws are inadequate to prove ownership of electronic documents and do not provide strong features against copying and manipulating information from the web. This has opened many channels for securing information, and significant evolutions have been made in the area of information security. Digital watermarking has developed into a very dynamic area of research and has addressed challenging issues for digital content. Watermarking can be visible (logos or signatures) or invisible (encoding and decoding). Many visible watermarking techniques have been studied for text documents, but there are very few for web-based text. XML files are used to trade information on the internet and contain important information. In this paper, two invisible watermarking techniques using synonyms and acronyms are proposed for XML files to prove intellectual ownership and to achieve security. An analysis is made for different attacks, and the embedding capacity of the XML file is also noted. A comparative analysis of capacity is also made for both methods. The system has been implemented using the C# language, and all tests were made practically to get the results.
Keywords: Watermarking, Extensible Markup Language (XML), Synonyms, Acronyms, Copyright protection.
10223 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis
Authors: A.K. Tangirala, S. Babji
Abstract:
In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used in determining the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure is prescribed based on the notion of a sparseness index to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
Keywords: Non-negative matrix factorization, PCA, source separation, plant-wide diagnosis.
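The spectral-domain factorization idea can be sketched as follows: stack the measurements' power spectra into a non-negative matrix and factor it, so one factor holds spectral signatures and the other mixing weights. The plant data are simulated and the scikit-learn NMF settings are illustrative; this shows the basic linear mixing formulation the paper critiques, not its post-processing algorithm.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import NMF

# hypothetical plant data: 5 measurements mixing two oscillatory sources
rng = np.random.default_rng(0)
t = np.arange(4096) / 100.0                       # 100 Hz sampling
s1, s2 = np.sin(2 * np.pi * 0.8 * t), np.sin(2 * np.pi * 2.5 * t)
X = np.array([a * s1 + b * s2 for a, b in [(1, 0), (0, 1), (1, 1), (2, 0.5), (0.3, 2)]])
X += 0.1 * rng.standard_normal(X.shape)

# row-wise power spectra form a non-negative matrix: measurements x frequencies
f, P = welch(X, fs=100.0, nperseg=512, axis=-1)
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(P)                        # mixing weights per measurement
H = model.components_                             # spectral signatures of the sources
print("estimated source frequencies (Hz):", f[H.argmax(axis=1)])
```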
10222 An Approach for Reducing the Computational Complexity of LAMSTAR Intrusion Detection System using Principal Component Analysis
Authors: V. Venkatachalam, S. Selvan
Abstract:
The security of computer networks plays a strategic role in modern computer systems. Intrusion Detection Systems (IDS) act as the 'second line of defense' placed inside a protected network, looking for known or potential threats in network traffic and/or audit data recorded by hosts. We developed an Intrusion Detection System using the LAMSTAR neural network to learn patterns of normal and intrusive activities and to classify observed system activities, and compared the performance of the LAMSTAR IDS with other classification techniques using 5 classes of KDDCup99 data. The LAMSTAR IDS gives better performance at the cost of high computational complexity, training time, and testing time when compared to other classification techniques (Binary Tree classifier, RBF classifier, Gaussian Mixture classifier). We further reduced the computational complexity of the LAMSTAR IDS by reducing the dimension of the data using principal component analysis, which in turn reduces the training and testing time with almost the same performance.
Keywords: Binary Tree Classifier, Gaussian Mixture, Intrusion Detection System, LAMSTAR, Radial Basis Function.
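The dimensionality-reduction step can be sketched with scikit-learn; the random matrix below merely stands in for KDDCup99-style records (41 features), and the 95% variance target is an assumed illustrative choice.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# hypothetical stand-in for KDDCup99-style records: 1000 samples x 41 features
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 41))

# keep as many principal components as needed to explain 95% of the variance
pipe = make_pipeline(StandardScaler(), PCA(n_components=0.95))
X_reduced = pipe.fit_transform(X)
print(X.shape, "->", X_reduced.shape)   # fewer inputs => cheaper training and testing
```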
10221 Noise Optimization Techniques for 1V 1GHz CMOS Low-Noise Amplifiers Design
Authors: M. Zamin Khan, Yanjie Wang, R. Raut
Abstract:
A 1 V, 1 GHz low noise amplifier (LNA) has been designed and simulated using the Spectre simulator in a standard TSMC 0.18 µm CMOS technology. With low-power and noise optimization techniques, the amplifier provides a gain of 24 dB, a noise figure of only 1.2 dB, and power dissipation of 14 mW from a 1 V power supply.
10220 Harnessing Replication in Object Allocation
Authors: H. T. Barney, G. C. Low
Abstract:
The design of distributed systems involves the partitioning of the system into components or partitions and the allocation of these components to physical nodes. Techniques have been proposed for both the partitioning and the allocation process. However, these techniques suffer from a number of limitations. For instance, object replication has the potential to greatly improve the performance of an object-oriented distributed system, but it can be difficult to use effectively, and there are few techniques that support the developer in harnessing object replication. This paper presents a methodological technique that helps developers decide how objects should be allocated in order to improve performance in a distributed system that supports replication. The performance of the proposed technique is demonstrated and tested on an example system.
Keywords: Allocation, Distributed Systems, Replication.
10219 Model Discovery and Validation for the QSAR Problem using Association Rule Mining
Authors: Luminita Dumitriu, Cristina Segal, Marian Craciun, Adina Cocu, Lucian P. Georgescu
Abstract:
There are several approaches to solving the Quantitative Structure-Activity Relationship (QSAR) problem. These approaches are based either on statistical methods or on predictive data mining. Among the statistical methods, one should consider regression analysis, pattern recognition (such as cluster analysis, factor analysis, and principal components analysis), or partial least squares. Predictive data mining techniques use neural networks, genetic programming, or neuro-fuzzy knowledge. These approaches have a low explanatory capability or none at all. This paper attempts to establish a new approach to solving QSAR problems using descriptive data mining. This way, the relationship between the chemical properties and the activity of a substance would be comprehensibly modeled.
Keywords: Association rules, classification, data mining, Quantitative Structure-Activity Relationship.
10218 Software Maintenance Severity Prediction for Object Oriented Systems
Authors: Parvinder S. Sandhu, Roma Jaswal, Sandeep Khimta, Shailendra Singh
Abstract:
As the majority of faults are found in a few of a system's modules, there is a need to investigate the modules that are affected severely compared to other modules, and proper maintenance needs to be done in time, especially for critical applications. Neural networks have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques that are able to model complex functions. Neural network techniques are used when the exact nature of the inputs and outputs is not known. A key feature is that they learn the relationship between input and output through training. In the present work, various neural-network-based techniques are explored, and a comparative analysis is performed for predicting the level of maintenance needed by predicting the severity of faults present in NASA's public domain defect dataset. The comparison of the different algorithms is made on the basis of Mean Absolute Error, Root Mean Square Error, and accuracy values. It is concluded that the Generalized Regression Network is the best algorithm for classifying the software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
Keywords: Neural Network, Software faults, Software Metric.
10217 Design and Optimization of a Microstrip Patch Antenna for Increased Bandwidth
Authors: Ankit Jain, Archana Agrawal
Abstract:
With the ever-increasing need for wireless communication and the emergence of many systems, it is important to design broadband antennas to cover a wide frequency range. The aim of this paper is to design a broadband patch antenna, employing the three techniques of slotting, adding directly coupled parasitic elements, and fractal EBG structures. The bandwidth is improved from 9.32% to 23.77%, and a wideband ranging from 4.15 GHz to 5.27 GHz is obtained. A comparative analysis of embedding EBG structures at different heights is also done. The composite effect of integrating these techniques in the design provides a simple and efficient method for obtaining a low-profile, broadband, high-gain antenna. With the addition of parasitic elements alone, the bandwidth increased to only 18.04%; by further embedding EBG structures, the bandwidth increased up to 23.77%. The design is suitable for a variety of wireless applications such as WLAN and radar.
Keywords: Bandwidth, broadband, EBG structures, parasitic elements, Slotting.
10216 Using PFA in Feature Analysis and Selection for H.264 Adaptation
Authors: Nora A. Naguib, Ahmed E. Hussein, Hesham A. Keshk, Mohamed I. El-Adawy
Abstract:
Classification of video sequences based on their contents is a vital process for adaptation techniques. It helps decide which adaptation technique best fits the resource reduction requested by the client. In this paper, we used the principal feature analysis algorithm to select a reduced subset of video features. The main idea is to select only one feature from each class, based on the similarities between the features within that class. Our results showed that, using this feature reduction technique, the non-selected source video features can be completely omitted from future classification of video sequences.
Keywords: Adaptation, feature selection, H.264, Principal Feature Analysis (PFA).
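Principal feature analysis, as commonly described in the literature, projects features into principal-component space, clusters them, and keeps the feature nearest each cluster centre. The sketch below follows that recipe with scikit-learn; the feature matrix is random stand-in data, not H.264 features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def principal_feature_analysis(X, n_features, n_components=None):
    """Pick one representative feature per cluster of PCA loading rows."""
    pca = PCA(n_components=n_components).fit(X)
    A = pca.components_.T                  # each row: one feature in PC space
    km = KMeans(n_clusters=n_features, n_init=10, random_state=0).fit(A)
    selected = []
    for k in range(n_features):
        members = np.where(km.labels_ == k)[0]
        d = np.linalg.norm(A[members] - km.cluster_centers_[k], axis=1)
        selected.append(members[d.argmin()])   # feature closest to its cluster centre
    return sorted(selected)

# hypothetical video-feature matrix: 200 sequences x 12 features
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))
print("selected feature indices:", principal_feature_analysis(X, n_features=4, n_components=5))
```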
10215 Analysis of Diverse Clustering Tools in Data Mining
Authors: S. Sarumathi, N. Shanthi, M. Sharmila
Abstract:
Clustering in data mining is an unsupervised learning technique that aggregates data objects into meaningful groups such that the intra-cluster similarity of objects is maximized and the inter-cluster similarity is minimized. Over the past decades, several clustering tools have emerged in which clustering algorithms are inbuilt and which make it easy to use and extract the expected results. Data mining mainly deals with huge databases, which imposes rigorous computational constraints on cluster analysis. These challenges pave the way for the emergence of powerful, scalable data mining clustering software. In this survey, a variety of clustering tools used in data mining are elucidated, along with the pros and cons of each tool.
Keywords: Cluster Analysis, Clustering Algorithms, Clustering Techniques, Association, Visualization.
10214 Building Information Modeling-Based Approach for Automatic Quantity Take-off and Cost Estimation
Authors: Lo Kar Yin, Law Ka Mei
Abstract:
Architectural, engineering, construction and operations (AECO) industry practitioners have been adapting well to the dynamic construction market from the fundamental training of their disciplines. Further triggered by the pandemic since 2019, great steps have been taken in virtual environments, and the best collaboration is sought with project teams without boundaries. With the adoption of a Building Information Modeling-based approach and qualitative analysis, this paper reviews the quantity take-off (QTO) and cost estimation process through modeling techniques, in liaison with suppliers, fabricators, subcontractors, contractors, designers, consultants, and service providers in the construction industry value chain. The aim is automatic project cost budgeting, project cost control, and cost evaluation of design options for in-situ reinforced-concrete construction and Modular Integrated Construction (MiC) at the design stage, as well as variation of works and cash flow/spending analysis at the construction stage, as far as practicable, with a view to sharing the findings for enhancing mutual trust and co-operation among AECO industry practitioners. It is also to foster development through a common prototype of the design-and-build project delivery method under NEC4 Engineering and Construction Contract (ECC) Options A and C.
Keywords: Building Information Modeling, cost estimation, quantity take-off, modeling techniques.
10213 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction
Authors: M. D. Haneef, R. B. Randall, Z. Peng
Abstract:
Journal bearings used in IC engines are prone to premature failures and are likely to fail earlier than the rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotary machinery. However, both techniques involve a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques for fault detection in reciprocating machinery, due to the complex nature of impact forces that confounds the extraction of fault signals for vibration-based analysis and wear prediction. In the present study, a simulation model was developed to investigate the bearing wear behaviour resulting from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine were established first, based on which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. In addition, the essential outputs of interest in this study, critical to determining wear rates, are the tangential velocity and oil film thickness between the journals and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing, which would cause accelerated wear. A limiting value of 1 μm was used as the minimum oil film thickness needed to prevent contact. The increased wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the referenced vibration signals. Thus, on one hand, the developed model demonstrated its capability to explain wear behaviour, and on the other hand, it also helps to establish a correlation between wear-based and vibration-based analysis. The model therefore provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
Keywords: Condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction.
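Archard's wear model referenced above reduces to the worked formula V = K·F·s/H: worn volume from a wear coefficient, normal load, sliding distance, and hardness. The numbers below are hypothetical bearing values for illustration only.

```python
def archard_wear_volume(K, load_N, sliding_distance_m, hardness_Pa):
    """Archard's law: V = K * F * s / H (worn volume in m^3)."""
    return K * load_N * sliding_distance_m / hardness_Pa

# hypothetical journal-bearing numbers (illustrative, not from the paper)
K = 1e-7       # dimensionless wear coefficient, boundary-lubricated contact
F = 5000.0     # normal load on the bearing, N
s = 120.0      # sliding distance over the interval of interest, m
H = 4e8        # hardness of the softer surface, Pa

V = archard_wear_volume(K, F, s, H)
print(f"worn volume = {V * 1e9:.2f} mm^3")   # 1 m^3 = 1e9 mm^3
```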
10212 Spatio-Temporal Video Slice Edges Analysis for Shot Transition Detection and Classification
Authors: Aissa Saoudi, Hassane Essafi
Abstract:
In this work, we present a new approach for automatic shot transition detection. Our approach is based on the analysis of Spatio-Temporal Video Slice (STVS) edges extracted from videos. The proposed approach is capable of efficiently detecting both abrupt shot transitions ('cuts') and gradual ones such as fade-in, fade-out, and dissolve. Compared to other techniques, our method is distinguished by its high level of precision and speed. This performance is obtained by reducing the shot boundary detection problem to a simple 2D image partitioning problem.
Keywords: Boundary shot detection, Shot transition detection, Video analysis, Video indexing.
10211 Sensitivity Analysis for Direction of Arrival Estimation Using Capon and Music Algorithms in Mobile Radio Environment
Authors: Mustafa Abdalla, Khaled A. Madi, Rajab Farhat
Abstract:
An array antenna system with innovative signal processing can improve the resolution of a source direction of arrival (DoA) estimation. High-resolution techniques take advantage of array antenna structures to better process the incoming waves. They also have the capability to identify the directions of multiple targets. This paper investigates the performance of the DoA estimation algorithms Capon and MUSIC on the uniform linear array (ULA). The simulation results show that for the Capon and MUSIC algorithms, the resolution of the DoA techniques improves as the number of snapshots, the number of array elements, the signal-to-noise ratio, and the separation angle θ between the two sources increase.
Keywords: Antenna array, Capon, MUSIC, direction-of-arrival estimation, signal processing, uniform linear arrays.
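A minimal MUSIC sketch for a ULA follows: form the sample covariance, take the noise subspace from its eigendecomposition, and scan steering vectors over angle. The element spacing, source angles, snapshot count, and noise level are illustrative simulation choices, not the paper's settings.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, scan=np.linspace(-90, 90, 361)):
    """MUSIC pseudo-spectrum for a ULA with element spacing d (in wavelengths)."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    w, V = np.linalg.eigh(R)                        # eigenvalues in ascending order
    En = V[:, : M - n_sources]                      # noise subspace
    p = []
    for theta in np.deg2rad(scan):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))  # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return scan, np.array(p)

# two sources at -10 and +20 degrees, 8-element ULA, 200 snapshots
rng = np.random.default_rng(0)
M, N, doas = 8, 200, np.deg2rad([-10.0, 20.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

angles, p = music_spectrum(X, n_sources=2)
print("estimated DoAs:", angles[np.argsort(p)[-2:]])  # grid peaks, well-separated sources
```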
10210 Case-Based Reasoning: A Hybrid Classification Model Improved with an Expert's Knowledge for High-Dimensional Problems
Authors: Bruno Trstenjak, Dzenana Donko
Abstract:
Data mining and classification of objects is a process of data analysis using various machine learning techniques, and it is used today in various fields of research. This paper presents a concept of a hybrid classification model improved with expert knowledge. The hybrid model's algorithm integrates several machine learning techniques (Information Gain, K-means, and Case-Based Reasoning) together with the expert's knowledge. The knowledge of experts is used to determine the importance of features. The paper presents the model algorithm and the results of a case study in which the emphasis was put on achieving the maximum classification accuracy without reducing the number of features.
Keywords: Case based reasoning, classification, expert's knowledge, hybrid model.
10209 Multiscale Analysis and Change Detection Based on a Contrario Approach
Authors: F.Katlane, M.S.Naceur, M.A.Loghmari
Abstract:
Automatic methods of detecting changes through satellite imaging are the object of growing interest, especially because of numerous applications linked to analysis of the Earth's surface or the environment (monitoring vegetation, updating maps, risk management, etc.). This work implemented spatial analysis techniques using images with different spatial and spectral resolutions on different dates. The work was based on the principle of control charts in order to set the upper and lower limits beyond which a change would be noted. Later, the a contrario approach was used, by testing different thresholds for which the difference calculated between two pixels was significant. Finally, labeled images were considered, giving a particularly low difference, which meant that the number of “false changes” could be estimated according to a given limit.
Keywords: Multi-scale, a contrario approach, significant thresholds, change detection.
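The control-chart step described above amounts to flagging pixel differences that fall outside mean ± k·sigma limits. A minimal sketch on a simulated image pair follows; the 3-sigma limit and the toy images are assumptions for illustration.

```python
import numpy as np

def change_map(img_t1, img_t2, k=3.0):
    """Flag pixels whose difference falls outside mean ± k·sigma control limits."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    mu, sigma = diff.mean(), diff.std()
    lower, upper = mu - k * sigma, mu + k * sigma   # control-chart limits
    return (diff < lower) | (diff > upper)

# toy pair of "satellite" images with one changed patch (hypothetical data)
rng = np.random.default_rng(0)
t1 = rng.normal(100, 5, (64, 64))
t2 = t1 + rng.normal(0, 2, (64, 64))
t2[20:30, 40:50] += 40                              # simulated land-cover change
mask = change_map(t1, t2)
print("changed pixels:", int(mask.sum()))
```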
10208 Comparison between Pushover Analysis Techniques and Validation of the Simplified Modal Pushover Analysis
Authors: N. F. Hanna, A. M. Haridy
Abstract:
One of the main drawbacks of the Modal Pushover Analysis (MPA) is the need to perform nonlinear time-history analysis, which complicates the analysis method and increases the analysis time. A simplified version of the MPA has been proposed based on the concept of the inelastic deformation ratio. Furthermore, the effect of the higher modes of vibration is considered by assuming linearly elastic responses, which enables the use of standard elastic response spectrum analysis. In this study, the simplified MPA (SMPA) method is applied to determine the target global drift and the inter-story drifts of a steel frame building. The effect of the higher vibration modes is considered within the framework of the SMPA. A comprehensive survey of the inelastic deformation ratio is presented. After that, a suitable expression from the literature is selected for the inelastic deformation ratio and then implemented in the SMPA. The estimated seismic demands using the SMPA, such as the target drift, base shear, and inter-story drifts, are compared with the seismic responses determined by applying the standard MPA. The accuracy of the estimated seismic demands is validated by comparison with the results obtained by nonlinear time-history analysis using real earthquake records.
Keywords: Modal analysis, pushover analysis, seismic performance, target displacement.
10207 Construction of Large Scale UAVs Using Homebuilt Composite Techniques
Authors: Brian J. Kozak, Joshua D. Shipman, Peng Hao Wang, Blake Shipp
Abstract:
The unmanned aerial system (UAS) industry is growing at a rapid pace. This growth has increased the demand for low-cost, custom-made, and high-strength unmanned aerial vehicles (UAV). Most of the growth is in the area of 25 kg to 200 kg vehicles. Vehicles of this size are beyond the size and scope of the simple wood and fabric designs commonly found in hobbyist aircraft. These high-end vehicles require stronger materials to complete their mission. Traditional aircraft construction materials such as aluminum are difficult to use without machining or advanced computer-controlled tooling. However, by using general aviation composite aircraft homebuilding techniques and materials, a large-scale UAV can be constructed cheaply and easily. Furthermore, these techniques could be used to easily manufacture custom composite shapes and airfoils that would be cost-prohibitive when using metals. These homebuilt aircraft techniques are being demonstrated by the researchers in the construction of a 75 kg aircraft.
Keywords: Composite aircraft, homebuilding, unmanned aerial system, unmanned aerial vehicles.