Search results for: robust ranking technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3727

2947 Introduction of the Fluid-Structure Coupling into the Force Analysis Technique

Authors: Océane Grosset, Charles Pézerat, Jean-Hugh Thomas, Frédéric Ablitzer

Abstract:

This paper presents a method to take the fluid-structure coupling into account in an inverse method, the Force Analysis Technique (FAT). The FAT method, also called the RIFF method (Filtered Windowed Inverse Resolution), identifies the force distribution from the local vibration field. In order to identify only the external force applied on a structure, it is necessary to quantify the fluid-structure coupling, especially in naval applications, where the fluid is heavy. The method can be decomposed into two parts: the first identifies the fluid-structure coupling, and the second introduces it into the FAT method to reconstruct the external force. Results of simulations on a plate coupled with a cavity filled with water are presented.

Keywords: Fluid-structure coupling, inverse methods, naval, vibrations.

2946 Performance Analysis of HSDPA Systems using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding

Authors: K. Anitha Sheela, J. Tarun Kumar

Abstract:

HSDPA is a feature introduced in the Release 5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. Moreover, the HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay as compared to the Release 99 DSCH. Until now, the HSDPA system has used turbo coding, which is the best coding technique for approaching the Shannon limit. However, the main drawbacks of turbo coding are its high decoding complexity and high latency, which make it unsuitable for some applications such as satellite communications, where the transmission distance itself introduces latency due to the limited speed of light. Hence, this paper proposes using LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity, although LDPC coding increases the encoding complexity. Though the transmitter complexity increases at the NodeB, the end user benefits in terms of receiver complexity and bit-error rate. In this paper, the LDPC encoder is implemented using a sparse parity-check matrix H to generate a codeword, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases with a small increase in Eb/No, which is not possible with turbo coding. The same BER was also achieved using fewer iterations, so the latency and receiver complexity are reduced for LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable and more robust data services. In other words, while realistic data rates are only a few Mbps, the actual quality and number of users achieved will improve significantly.
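
As a toy illustration of the sparse parity-check idea mentioned in the abstract, the sketch below builds a small (7,4) code in Python, encodes a message, and uses the syndrome to detect a flipped bit. The matrix and message are hypothetical examples; practical LDPC codes are far longer and sparser, and decoding would use belief propagation rather than a simple syndrome check.

```python
import numpy as np

# Toy parity-check matrix H = [P | I3]; real LDPC matrices are much larger
# and sparser, this only illustrates encoding and parity checking.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

P = H[:, :4]
G = np.hstack([np.eye(4, dtype=int), P.T])   # generator: G @ H.T = 0 (mod 2)

def encode(msg):
    """Systematic encoding over GF(2)."""
    return (msg @ G) % 2

def syndrome(word):
    """All-zero syndrome means every parity check is satisfied."""
    return (H @ word) % 2

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
print(cw, syndrome(cw))   # valid codeword -> zero syndrome
cw[2] ^= 1                # emulate a single channel error
print(syndrome(cw))       # non-zero syndrome flags the error
```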

Keywords: AMC, HSDPA, LDPC, WCDMA, 3GPP.

2945 Evolutionary Multi-objective Optimization for Positioning of Residential Houses

Authors: Ayman El Ansary, Mohamed Shalaby

Abstract:

The current study describes a multi-objective optimization technique for the positioning of houses in a residential neighborhood. The main task is the placement of residential houses in a favorable configuration satisfying a number of objectives. Solving the house layout problem is a challenging task. It requires an iterative approach to satisfy design requirements (e.g., energy efficiency, sky view, daylight, road network, visual privacy, and clear access to favorite views). These design requirements vary from one project to another based on location and client preferences. In the Gulf region, the most important socio-cultural factor is visual privacy in indoor space. Hence, most residential houses in this region are surrounded by high fences to provide privacy, which has a direct impact on other requirements (e.g., daylight and direction to favorite views). This investigation introduces a novel technique to optimally locate and orient residential buildings to satisfy a set of design requirements. The developed technique explores the search space for possible solutions. This study considers two-dimensional house planning problems; however, the technique can be extended to solve three-dimensional cases.

Keywords: Evolutionary optimization, Houses planning, Urban modeling, Daylight, Visual Privacy, Residential compounds.

2944 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, of which 2,600 clients are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR), and Support Vector Machines (SVM). For each method, different parameter settings were analyzed, and the best result of each technique was selected for comparison. Initially, the data were coded using thermometer coding (for numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter setting, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for temporarily defaulters). However, the best accuracy does not always indicate the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest false positive rate (0.07%). These trade-offs are discussed in light of the results, and an overview of the findings is given in the conclusion of the study.
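
As a small sketch of the thermometer coding step mentioned above, the helper below maps a numeric attribute to a cumulative bit pattern. The function name, bin count, and value range are hypothetical, since the paper does not give the coding parameters here.

```python
import numpy as np

def thermometer_encode(x, n_bins, lo, hi):
    """Encode a numeric value as a cumulative (thermometer) bit pattern."""
    edges = np.linspace(lo, hi, n_bins + 1)[1:-1]   # interior thresholds
    return (x >= edges).astype(int)

# Example: a ratio-type attribute in [0, 1] encoded with 5 bins.
print(thermometer_encode(0.63, n_bins=5, lo=0.0, hi=1.0))   # -> [1 1 1 0]
```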

Keywords: Artificial Neural Networks, ANNs, classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines.

2943 Study of Natural Patterns on Digital Image Correlation Using Simulation Method

Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish

Abstract:

Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in experimental mechanics. Compared with physical measuring devices such as strain gauges, which provide only very restricted coverage and are expensive to deploy widely, DIC provides full-field coverage with relatively high accuracy using an inexpensive and simple experimental setup. Studying the effect of natural patterns on DIC is important because preparing artificial patterns is a time-consuming and laborious process. The objective of this research is to study the effect of using images with natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. A key DIC parameter, the subset size, affects the processing and accuracy of DIC and can even cause it to fail. Regarding the image parameters (correlation coefficient), high similarity between two subsets can cause the DIC process to fail and make the result less accurate. Images of good and poor quality for DIC are presented and, more importantly, the approach provides a systematic way to evaluate the quality of an image with natural patterns before the measurement devices are installed.
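
The correlation criterion at the heart of subset matching in DIC can be sketched as a zero-normalised cross-correlation between a reference subset and a deformed subset, as below; this is a simplified stand-in for the full shape-function optimisation used by production DIC codes, and the subsets are randomly generated for illustration.

```python
import numpy as np

def zncc(subset_ref, subset_def):
    """Zero-normalised cross-correlation; values near 1 mean a good match."""
    f = subset_ref - subset_ref.mean()
    g = subset_def - subset_def.mean()
    return float((f * g).sum() / np.sqrt((f ** 2).sum() * (g ** 2).sum()))

rng = np.random.default_rng(0)
ref = rng.random((21, 21))                # a 21x21 subset of the reference image
print(zncc(ref, ref))                     # identical subsets -> 1.0
print(zncc(ref, rng.random((21, 21))))    # unrelated texture -> near 0
```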

Keywords: Digital image correlation (DIC), Deformation simulation, Natural pattern, Subset size.

2942 Adaptive Impedance Control for Unknown Non-Flat Environment

Authors: Norsinnira Zainul Azlan, Hiroshi Yamaura

Abstract:

This paper presents a new adaptive impedance control strategy, based on the Function Approximation Technique (FAT), to compensate for an unknown non-flat environment shape or a time-varying environment location. The target impedance in the force-controllable direction is modified by incorporating adaptive compensators, and the uncertainties are represented by FAT, allowing the update law to be derived easily. The force error feedback is utilized in the estimation, and accurate knowledge of the environment parameters is not required by the algorithm. It is shown mathematically that the stability of the controller is guaranteed based on Lyapunov theory. Simulation results are presented to demonstrate the validity of the proposed controller.

Keywords: Adaptive impedance control, Function Approximation Technique (FAT), impedance control, unknown environment position.

2941 The Variable Step-Size Gauss-Seidel Pseudo Affine Projection Algorithm

Authors: F. Albu, C. Paleologu

Abstract:

In this paper, a new pseudo affine projection (AP) algorithm based on Gauss-Seidel (GS) iterations is proposed for acoustic echo cancellation (AEC). It is shown that the algorithm is robust against near-end signal variations (including double-talk).
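
For context, the Gauss-Seidel iteration on which the pseudo affine projection algorithm relies can be sketched as follows for a generic, diagonally dominant linear system; the matrix and right-hand side are hypothetical, and the AEC-specific update equations are not reproduced here.

```python
import numpy as np

def gauss_seidel(A, b, n_sweeps=25):
    """Plain Gauss-Seidel sweeps for A x = b, avoiding an explicit inverse."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_sweeps):
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([9.0, 20.0, 22.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))   # the two agree closely
```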

Keywords: pseudo affine projection algorithm, acoustic echo cancellation, double-talk.

2940 Numerical Analysis of Cold-Formed Steel Shear Wall Panels Subjected to Cyclic Loading

Authors: H. Meddah, M. Berediaf-Bourahla, B. El-Djouzi, N. Bourahla

Abstract:

Shear walls made of cold-formed steel are used as lateral force resisting components in residential and low-rise commercial and industrial constructions. The seismic design analysis of such structures is often complex due to the slenderness of the members and the prevalence of instability. In this context, a simplified modeling technique for the panel is proposed using the finite element method. The approach idealizes the whole panel as a nonlinear shear link element, which reflects its shear behavior, connected to rigid body elements that transmit the forces to the end elements (studs) resisting the tension and the compression. The numerical model of the shear wall panel was subjected to cyclic loads in order to evaluate the seismic performance of the structure in terms of lateral displacement and energy dissipation capacity. In order to validate this model, the numerical results were compared with test results from the literature. This modeling technique is particularly useful for the design of cold-formed steel structures, where the shear forces in each panel and the axial forces in the studs can be obtained using spectrum analysis.

Keywords: Cold-formed steel, cyclic loading, modeling technique, nonlinear analysis, shear wall panel.

2939 Multimodal Biometric System Based on Near-Infra-Red Dorsal Hand Geometry and Fingerprints for Single and Whole Hands

Authors: Mohamed K. Shahin, Ahmed M. Badawi, Mohamed E. M. Rasmy

Abstract:

Prior research has shown that unimodal biometric systems suffer from several limitations, such as noisy data, intra-class variations, restricted degrees of freedom, non-universality, spoof attacks, and unacceptable error rates. For a biometric system to be more secure and to provide high accuracy, more than one form of biometrics is required. Hence, the need arises for multimodal biometrics using combinations of different biometric modalities. This paper introduces a multimodal biometric system (MMBS) based on the fusion of whole dorsal hand geometry and fingerprints, which acquires right and left (Rt/Lt) near-infra-red (NIR) dorsal hand geometry (HG) shape and Rt/Lt index and ring fingerprints (FP). A database of 100 volunteers was acquired using the designed prototype. The acquired images were of good quality for feature and pattern extraction in all modalities. HG features based on the anatomical landmarks of the hand shape were extracted. Robust and fast algorithms for FP minutia point feature extraction and matching were used. Feature vectors belonging to similar biometric traits were fused using feature fusion methodologies. Scores obtained from the different biometric trait matchers were fused using the min-max transformation-based score fusion technique. The final normalized scores were merged using the sum-of-scores method to obtain a single decision about personal identity based on multiple independent sources. The high individuality of the fused traits and the user acceptability of the designed system, along with its high experimental performance measures, show that this MMBS can be considered for medium-to-high security biometric identification purposes.
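
A minimal sketch of the min-max score normalisation and sum-of-scores fusion steps named above, with hypothetical matcher scores for three candidate identities; the actual matchers and decision thresholds of the designed system are not reproduced.

```python
def min_max_normalise(scores):
    """Map raw matcher scores to [0, 1] with the min-max rule."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

# Hypothetical raw scores from two matchers for three candidate identities.
hand_geometry = [34.0, 61.0, 48.0]
fingerprint   = [0.22, 0.91, 0.47]

fused = [hg + fp for hg, fp in zip(min_max_normalise(hand_geometry),
                                   min_max_normalise(fingerprint))]
print(fused, "accepted identity:", fused.index(max(fused)))
```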

Keywords: Unimodal, Multi-Modal, Biometric System, NIR Imaging, Dorsal Hand Geometry, Fingerprint, Whole Hands, Feature Extraction, Feature Fusion, Score Fusion

2938 Performance Evaluation of the OCDM/WDM Technique for Optical Packet Switches

Authors: V. Eramo, L. Piazzo, M. Listanti, A. Germoni, A. Cianfrani

Abstract:

The performance of the Optical Code Division Multiplexing/Wavelength Division Multiplexing (OCDM/WDM) technique for optical packet switches is investigated. The impact on performance of the impairments due to both multiple access interference and beat noise is studied. The packet loss probability due to output packet contentions is evaluated as a function of the main switch and traffic parameters when Gold coherent optical codes are adopted. The packet loss probability of the OCDM/WDM switch can reach 10⁻⁹ when M = 16 wavelengths, a Gold code of length L = 511, and only 24 wavelength converters are used in the switch.

Keywords: Optical code division multiplexing, bufferless optical packet switch, performance evaluation.

2937 Automatic Distance Compensation for Robust Voice-based Human-Computer Interaction

Authors: Randy Gomez, Keisuke Nakamura, Kazuhiro Nakadai

Abstract:

Distant-talking voice-based HCI systems suffer from performance degradation due to the mismatch between the acoustic speech (runtime) and the acoustic model (training). The mismatch is caused by the change in the power of the speech signal as observed at the microphones. This change is greatly influenced by the change in distance, which affects the speech dynamics inside the room before the signal reaches the microphones. Moreover, as the speech signal is reflected, its acoustical characteristics are also altered by the room properties. In general, power mismatch due to distance is a complex problem. This paper presents a novel approach to dealing with distance-induced mismatch by intelligently sensing instantaneous voice power variation and compensating the model parameters. First, the distant-talking speech signal is processed through microphone array processing, and the corresponding distance information is extracted. Distance-sensitive Gaussian Mixture Models (GMMs), pre-trained to capture both speech power and room properties, are used to predict the optimal distance of the speech source. Consequently, pre-computed statistical priors corresponding to the optimal distance are selected to correct the statistics of the generic model, which was frozen during training. Thus, the model combinatorics are post-conditioned to match the power of the instantaneous speech acoustics at runtime. This results in an improved likelihood of predicting the correct speech command at farther distances. We experiment using real data recorded inside two rooms. The experimental evaluation shows that voice recognition performance using our method is more robust to the change in distance than the conventional approach. In our experiment, under the most acoustically challenging environment (i.e., Room 2 at 2.5 meters), our method achieved a 24.2% improvement in recognition performance over the best-performing conventional method.

Keywords: Human Machine Interaction, Human Computer Interaction, Voice Recognition, Acoustic Model Compensation, Acoustic Speech Enhancement.

2936 Optical Limiting Characteristics of Core-Shell Nanoparticles

Authors: G. Vinitha, A. Ramalingam

Abstract:

TiO2 nanoparticles were synthesized by the hydrothermal method at 180 °C from a TiOSO4 aqueous solution with 1 m/l concentration. The obtained products were coated with silica by means of a seeded polymerization technique for a coating time of 1440 minutes to obtain a well-defined TiO2@SiO2 core-shell structure. The uncoated and coated nanoparticles were characterized using the X-ray diffraction technique (XRD) and Fourier transform infrared spectroscopy (FT-IR) to study their physico-chemical properties. Evidence from the XRD and FT-IR results shows that SiO2 is homogeneously coated on the surface of the titania particles. The FT-IR spectra show that an interaction exists between TiO2 and SiO2, resulting in the formation of Ti-O-Si chemical bonds at the interface of the TiO2 particles and the SiO2 coating layer. The nonlinear optical limiting properties of the TiO2 and TiO2@SiO2 nanoparticles dispersed in ethylene glycol were studied at 532 nm using 5 ns Nd:YAG laser pulses. Three-photon absorption is responsible for the optical limiting characteristics of these nanoparticles, and the optical nonlinearity is enhanced in the core-shell structures compared with their single counterparts. This effective three-photon-type absorption at this wavelength has potential application in fabricating optical limiting devices.

Keywords: hydrothermal method, optical limiting devices, seeded polymerization technique, three-photon type absorption

2935 Analyzing the Effect of Ambient Temperature and Loads Power Factor on Electric Generator Power Rating

Authors: Ahmed Elsebaay, Maged A. Abu Adma, Mahmoud Ramadan

Abstract:

This study presents a technique that clarifies the effect of ambient air temperature and load power factor deviating from standard values on electric generator power rating. The study introduces an optimized technique for selecting the correct electric generator power rating for a given application and operating-site ambient temperature. The de-rating factors due to these effects are calculated and applied to a generator so that its power rating is selected accurately, avoiding unsafe operation and preserving its lifetime. The information in this paper provides a simple, accurate, and general method for synchronous generator selection and eliminates common errors.

Keywords: Ambient temperature, de-rating factor, electric generator, power factor.

2934 Adaptive Dynamic Time Warping for Variable Structure Pattern Recognition

Authors: S. V. Yendiyarov

Abstract:

Pattern discovery from time series is of fundamental importance. In particular, when information about the structure of a pattern is incomplete, an algorithm that automatically discovers specific patterns or shapes in the time series data is necessary. Dynamic time warping is a technique that allows local flexibility in aligning time series. Because of this, it is widely used in many fields such as science, medicine, industry, and finance. However, a major problem with dynamic time warping is that it cannot handle structural changes in a pattern. This problem arises when the structure is influenced by noise, which is common in practice in almost every application. This paper addresses the problem by developing a novel technique called adaptive dynamic time warping.
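
For reference, the standard dynamic-programming form of dynamic time warping that the proposed adaptive variant builds on can be sketched as follows; this is the textbook formulation, not the adaptive algorithm itself.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW cost between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two sequences with the same shape but locally stretched in time.
print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 1, 1, 2, 3, 2, 1]))   # -> 0.0
```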

Keywords: Pattern recognition, optimal control, quadratic programming, dynamic programming, dynamic time warping, sintering control.

2933 Design of Multi-disease Diagnosis Processor using Hypernetworks Technique

Authors: Jae-Yeon Song, Seung-Yerl Lee, Kyu-Yeul Wang, Byung-Soo Kim, Sang-Seol Lee, Seong-Seob Shin, Jae-Young Choi, Chong Ho Lee, Jeahyun Park, Duck-Jin Chung

Abstract:

In this paper, we propose a disease diagnosis hardware architecture based on the hypernetworks technique. It can be used to diagnose three different diseases (SPECT heart, leukemia, and prostate cancer). Generally, disparate diseases each require a dedicated diagnosis hardware model. Using the similarities among the three disease diagnosis processors, we design a single diagnosis processor that can diagnose all three diseases. By combining three processors into one, the proposed architecture reduces hardware size without a decrease in accuracy.

Keywords: Diagnosis processor, Hypernetworks, Leukemia, Mask, Prostate cancer, SPECT Heart data

2932 Model Order Reduction of Linear Time Variant High Speed VLSI Interconnects using Frequency Shift Technique

Authors: J. V. R. Ravindra, M. B. Srinivas

Abstract:

Accurate modeling of high-speed RLC interconnects has become a necessity to address signal integrity issues in current VLSI design. To accurately model a dispersive system of interconnects at higher frequencies, a full-wave analysis is required. However, conventional circuit simulation of interconnects with full-wave models is extremely CPU expensive. We present an algorithm for reducing large VLSI circuits to much smaller ones with similar input-output behavior. A key feature of our method, called the Frequency Shift Technique, is that it is capable of reducing linear time-varying systems. This enables it to capture frequency-translation and sampling behavior, which is important in communication subsystems such as mixers, RF components, and switched-capacitor filters. Reduction is obtained by projecting the original system, described by linear differential equations, onto a lower dimension. Experiments carried out using the Cadence design simulator indicate that the proposed technique achieves a greater percentage reduction with less CPU time than other model order reduction techniques in the literature. We also present applications to RF circuit subsystems, obtaining size reductions and evaluation speedups of orders of magnitude with insignificant loss of accuracy.

Keywords: Model order reduction, RLC, crosstalk.

2931 Recovering the Boundary Data in the Two Dimensional Inverse Heat Conduction Problem Using the Ritz-Galerkin Method

Authors: Saeed Sarabadan, Kamal Rashedi

Abstract:

This article presents a numerical method to find the heat flux in an inhomogeneous inverse heat conduction problem with linear boundary conditions and an extra specification at the terminal. The method is based on applying the satisfier function along with the Ritz-Galerkin technique to reduce the approximate solution of the inverse problem to the solution of a system of algebraic equations. The instability of the problem is resolved by taking advantage of Landweber iterations as an admissible regularization strategy. In the computations, we obtain stable and low-cost results that demonstrate the efficiency of the technique.
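
A generic sketch of the Landweber iteration used as the regularisation strategy, applied to a small hypothetical least-squares problem; the satisfier-function and Ritz-Galerkin machinery of the paper is not shown.

```python
import numpy as np

def landweber(A, b, n_iter=200, omega=None):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k)."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # needs 0 < omega < 2/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * A.T @ (b - A @ x)
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
b = A @ np.array([1.0, -2.0]) + 1e-3              # mildly perturbed data
print(landweber(A, b))                            # approaches [1, -2]
```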

Keywords: Inverse problem, parabolic equations, heat equation, Ritz-Galerkin method, Landweber iterations.

2930 Evolutionary Techniques for Model Order Reduction of Large Scale Linear Systems

Authors: S. Panda, J. S. Yadav, N. P. Patidar, C. Ardil

Abstract:

Recently, the genetic algorithm (GA) and the particle swarm optimization (PSO) technique have attracted considerable attention among modern heuristic optimization techniques. The GA has been popular in academia and industry mainly because of its intuitiveness, ease of implementation, and ability to effectively solve highly nonlinear, mixed-integer optimization problems that are typical of complex engineering systems. The PSO technique is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. In this paper, both PSO and GA optimization are employed to find stable reduced order models of single-input single-output large-scale linear systems. Both techniques guarantee the stability of the reduced order model if the original high-order model is stable. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced order model for a unit step input. Both methods are illustrated through a numerical example from the literature, and the results are compared with a recently published conventional model reduction technique.
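
To make the objective concrete, the sketch below computes the Integral Squared Error between the unit-step responses of a full model and a candidate reduced model using SciPy; both transfer functions are hypothetical placeholders for the systems whose reduced-model parameters a GA or PSO would tune.

```python
import numpy as np
from scipy import signal

def ise(full, reduced, t_end=10.0, n=2000):
    """Integral Squared Error between unit-step responses of two LTI models."""
    t = np.linspace(0.0, t_end, n)
    _, y_full = signal.step(full, T=t)
    _, y_red = signal.step(reduced, T=t)
    return float(np.trapz((y_full - y_red) ** 2, t))

# Hypothetical 3rd-order plant and a 1st-order reduced candidate whose gain
# and pole would be the decision variables of the GA/PSO search.
G_full = signal.TransferFunction([8.0], [1.0, 6.0, 11.0, 6.0])
G_red = signal.TransferFunction([1.35], [1.0, 1.0])
print(ise(G_full, G_red))
```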

Keywords: Genetic Algorithm, Particle Swarm Optimization, Order Reduction, Stability, Transfer Function, Integral Squared Error.

2929 Some Aspects Regarding I. R. Absorbing Materials Based On Thin Alumina Films for Solar-Thermal Energy Conversion, Using X-Ray Diffraction Technique

Authors: Sorina Adriana Mitrea, Silvia Maria Hodorogea, Anca Duta, Luminita Isac, Elena Purghel, Mihaela Voinea

Abstract:

Solar energy is the most "available", ecological, and clean form of energy. This energy can be used in an active or passive mode. The active mode implies the transformation of solar energy into useful energy. Solar energy can be transformed into thermal energy using solar collectors. In these collectors, the active and most important element is the absorber, the material which absorbs the solar radiation and, at the same time, limits its reflection. The paper presents some aspects regarding the IR absorbing material, a type of cermet used as the absorber in solar collectors, characterized by the X-ray diffraction (XRD) technique.

Keywords: Alumina films, solar energy, X-ray diffraction.

2928 A Fault Analysis of Cracked-Rotor-to-Stator Rub and Unbalance by Vibration Analysis Technique

Authors: B. X. Tchomeni, A. A. Alugongo, L. M. Masu

Abstract:

An analytical 4-DOF nonlinear model of a de Laval rotor-stator system based on energy principles has been used theoretically and experimentally to investigate fault symptoms in a rotating system. The faults, namely rotor-stator rub, crack, and unbalance, are modeled as excitations on the rotor shaft. The Mayes steering function is used to simulate the breathing behaviour of the crack. The fault analysis technique is based on the waveform signal, orbits, and Fast Fourier Transform (FFT) derived from simulated and real measured signals. The simulated and experimental results show considerable mutual resemblance of the elliptic-shaped orbits and FFT over the same range of test data.

Keywords: Breathing crack, fault, FFT, nonlinear, orbit, rotor-stator rub, vibration analysis.

2927 Predicting Oil Content of Fresh Palm Fruit Using Transmission-Mode Ultrasonic Technique

Authors: Sutthawee Suwannarat, Thanate Khaorapapong, Mitchai Chongcheawchamnan

Abstract:

In this paper, an ultrasonic technique is proposed to predict the oil content of a fresh palm fruit. This is accomplished by measuring the attenuation in ultrasonic transmission mode. Several palm fruit samples with oil content known from Soxhlet extraction (ISO 9001:2008) were tested with our ultrasonic measurement. Amplitude attenuation data were collected for all palm samples. Feedforward neural networks (FNNs) are applied to predict the oil content of the samples. The Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) of the FNN model for predicting the oil content percentage are 7.6186 and 5.2287, respectively, with a correlation coefficient (R) of 0.9193.
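
For clarity, the two error measures reported above can be computed as in the short sketch below; the sample values are illustrative, not the paper's data.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    d = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(d ** 2)))

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    d = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.abs(d)))

# Illustrative oil-content percentages (not the paper's data).
measured  = [22.4, 30.1, 18.7, 25.0]
predicted = [20.9, 31.5, 17.2, 26.8]
print(rmse(measured, predicted), mae(measured, predicted))
```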

Keywords: Non-destructive, ultrasonic testing, oil content, fresh palm fruit, neural network.

2926 Hierarchical Clustering Algorithms in Data Mining

Authors: Z. Abdullah, A. R. Hamdan

Abstract:

Clustering is the process of grouping objects and data into clusters so that data objects within the same cluster are similar to one another. Clustering is one of the areas of data mining, and clustering algorithms can be classified into partition-based, hierarchical, density-based, and grid-based methods. In this paper, we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON, and BIRCH. The resulting state of the art of these algorithms will help in eliminating current problems as well as in deriving more robust and scalable clustering algorithms.

Keywords: Clustering, method, algorithm, hierarchical, survey.

2925 A New Approach of Fuzzy Methods for Evaluating of Hydrological Data

Authors: Nasser Shamskia, Seyyed Habib Rahmati, Hassan Haleh, Seyyedeh Hoda Rahmati

Abstract:

The main design criteria of most hydraulic constructions are essentially based on the runoff or discharge of water. Two of those important criteria are runoff and return period. Mostly, these measures are calculated or estimated from stochastic data. Another feature of hydrological data is their impreciseness. Therefore, in order to deal with uncertainty and impreciseness, a new fuzzy method of evaluating hydrological measures is developed based on Buckley's estimation method. The method introduces triangular fuzzy numbers for the different measures, in which both the uncertainty and impreciseness concepts are considered. In addition, since another important consideration in most hydrological studies is the comparison of a measure across different months or years, a new fuzzy method consistent with the special form of the proposed fuzzy numbers is also developed. Finally, to illustrate the methods more explicitly, the two algorithms are tested on a simple example and a real case study.
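
A minimal sketch of a triangular fuzzy number of the kind the method introduces, evaluated for a hypothetical discharge estimate; Buckley's estimation procedure and the ranking method themselves are not reproduced.

```python
def triangular_membership(x, a, b, c):
    """Membership of x in the triangular fuzzy number (a, b, c): a and c are
    the lower/upper supports and b is the most plausible value."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy estimate of a monthly discharge in cubic metres per second.
for q in (80, 100, 120, 140):
    print(q, round(triangular_membership(q, a=80, b=110, c=140), 3))
```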

Keywords: Fuzzy Discharge, Fuzzy estimation, Fuzzy ranking method, Hydrological data

2924 Comparative Study of Ant Colony and Genetic Algorithms for VLSI Circuit Partitioning

Authors: Sandeep Singh Gill, Rajeevan Chandel, Ashwani Chandel

Abstract:

This paper presents a comparative study of ant colony and genetic algorithms for VLSI circuit bi-partitioning. Ant colony optimization is an optimization method based on the behaviour of social insects [27], whereas the genetic algorithm is an evolutionary optimization technique based on the Darwinian theory of natural evolution and its concept of survival of the fittest [19]. Both methods are stochastic in nature and have been successfully applied to solve many non-polynomial hard problems. The results obtained show that genetic algorithms outperform the ant colony optimization technique when tested on the VLSI circuit bi-partitioning problem.
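
To make the objective concrete, the sketch below evaluates the cut size of a bi-partition on a toy netlist; the random-restart loop is only a stand-in for the GA and ACO searches compared in the paper, which minimise the same cost.

```python
import random

def cut_size(assignment, netlist):
    """Number of nets whose modules fall in both blocks of the bi-partition."""
    return sum(len({assignment[m] for m in net}) == 2 for net in netlist)

# Toy netlist: each net is the set of module indices it connects.
netlist = [{0, 1}, {1, 2, 3}, {3, 4}, {0, 4, 5}, {2, 5}]

random.seed(1)
best = None
for _ in range(200):                       # random restarts stand in for GA/ACO
    assignment = [random.randint(0, 1) for _ in range(6)]
    if len(set(assignment)) < 2:           # keep both blocks non-empty
        continue
    cost = cut_size(assignment, netlist)
    if best is None or cost < best[0]:
        best = (cost, assignment)
print(best)
```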

Keywords: Partitioning, genetic algorithm, ant colony optimization, non-polynomial hard, netlist, mutation.

2923 Kirchhoff’s Depth Migration over Heterogeneous Velocity Models with Ray Tracing Modeling Approach

Authors: Alok Kumar Routa, Priya Ranjan Mohanty

Abstract:

Complex seismic signatures, which are difficult to interpret, are generated due to the complexity of the subsurface. In the present study, an attempt has been made to model the complex subsurface using the ray tracing modeling technique. In addition, for the imaging of these geological features, Kirchhoff's prestack depth migration is applied to the synthetic common shot gather dataset. It is found that the Kirchhoff migration technique, combined with the ray tracing modeling concept, is flexible enough to image various complex geologies and gives satisfactory results with proper delineation of the reflectors at their respective true depth positions. The entire work has been carried out in the MATLAB environment.

Keywords: Kirchhoff’s migration, Prestack depth migration, Ray tracing modeling, Velocity model.

2922 Optimized Approach for Secure Data Sharing in Distributed Database

Authors: Ahmed Mateen, Zhu Qingsheng, Ahmad Bilal

Abstract:

In the current age of technology, information is the most precious asset of a company. Today, companies hold large amounts of data, and as the data grow, access to particular pieces of information becomes slower. Processing data quickly to shape it into useful information is the biggest issue. The major problems in distributed databases are the efficiency of data distribution and the response time of data distribution; the security of data distribution is also a big issue. For these problems, we propose a strategy that can maximize the efficiency of data distribution and also improve its response time. This technique gives better results for secure data distribution from multiple heterogeneous sources. The newly proposed technique enables companies to share data securely, efficiently, and quickly.

Keywords: ER-schema, electronic record, P2P framework, API, query formulation.

2921 Structural Integrity Management for Fixed Offshore Platforms in Malaysia

Authors: Narayanan Sambu Potty, Mohammad Kabir B. Mohd Akram

Abstract:

Structural Integrity Management (SIM) is important for the protection of offshore crew, the environment, business assets, and company and industry reputation. API RP 2A contains guidelines for the assessment of existing platforms, mostly for the Gulf of Mexico (GOM). The ISO 19902 SIM framework also does not specifically cater to Malaysia. There are about 200 platforms in Malaysia, with 90 exceeding their design life. Petronas Carigali Sdn Bhd (PCSB) uses the Asset Integrity Management System and the very subjective Risk Based Inspection program for these platforms. Petronas currently does not have a standalone Petronas Technical Standard PTS-SIM. This study proposes a recommended practice for the SIM process for offshore structures in Malaysia, drawing on studies by API and ISO and on local elements such as the number of platforms, types of facilities, age, and risk ranking. A case study on the SMG-A platform in Sabah shows missing or scattered platform data and a gap in the inspection history; the platform is to undergo a Level 3 underwater inspection in 2015.

Keywords: platform, assessment, integrity, risk based inspection.

2920 System Reduction Using Modified Pole Clustering and Modified Cauer Continued Fraction

Authors: Jay Singh, C. B. Vishwakarma, Kalyan Chatterjee

Abstract:

A mixed method combining the modified pole clustering technique and the modified Cauer continued fraction is proposed for reducing the order of large-scale dynamic systems. The denominator polynomial of the reduced order model is obtained using the modified pole clustering technique, while the coefficients of the numerator are obtained by the modified Cauer continued fraction. This method generates k reduced order models for kth-order reduction. The superiority of the proposed method is demonstrated through a numerical example taken from the literature and compared with a few existing order reduction methods.

Keywords: Modified Pole Clustering, Modified Cauer Continued Fraction, Order Reduction, Stability, Transfer Function.

2919 A Cooperative Weighted Discriminator Energy Detector Technique in Fading Environment

Authors: Muhammad R. Alrabeiah, Ibrahim S. Alnomay

Abstract:

The need in cognitive radio systems for a simple, fast, and independent technique to sense spectrum occupancy has led to the energy detection approach. The energy detector is known for its dependence on noise variation in the system, which is one of its major drawbacks. In this paper, we aim to improve its performance by utilizing weighted collaborative spectrum sensing, similar to the collaborative spectrum sensing methods introduced previously in the literature. These weighting methods give a greater improvement in collaborative spectrum sensing compared to the unweighted case. Two methods are proposed in this paper: the first depends on the channel status between each sensor and the primary user, while the second depends on the value of the energy measured at each sensor.
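
A generic sketch of the weighted decision combining described above, with hypothetical sensor energies and weights; how the weights are derived, from the sensor-to-primary-user channel status or from the measured energy itself, is exactly what distinguishes the two proposed methods.

```python
import numpy as np

def weighted_fusion(energies, weights, threshold):
    """Combine per-sensor energy statistics with normalised weights and
    compare the fused statistic against a decision threshold."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    statistic = float(np.dot(w, energies))
    return statistic > threshold, statistic

# Hypothetical energies from three cooperating sensors; the second sensor
# sees a stronger channel and therefore receives a larger weight.
print(weighted_fusion([1.1, 3.9, 1.6], weights=[0.5, 2.0, 0.8], threshold=2.0))
```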

Keywords: Cognitive radio, Spectrum sensing, Collaborative sensors, Weighted Decisions.

2918 3D Rendering of American Sign Language Finger-Spelling: A Comparative Study of Two Animation Techniques

Authors: Nicoletta Adamo-Villani

Abstract:

In this paper, we report a study aimed at determining the most effective animation technique for representing ASL (American Sign Language) finger-spelling. Specifically, in the study we compare two commonly used 3D computer animation methods (keyframe animation and motion capture) in order to ascertain which technique produces the most 'accurate', 'readable', and 'close to actual signing' (i.e., realistic) rendering of ASL finger-spelling. To accomplish this goal, we developed 20 animated clips of finger-spelled words and designed an experiment consisting of a web survey with rating questions. Seventy-one subjects aged 19-45 participated in the study. Results showed that recognition of the words was correlated with the method used to animate the signs. In particular, the keyframe technique produced the most accurate representation of the signs (i.e., participants were more likely to identify the words correctly in keyframed sequences than in motion-captured ones). Further, the findings showed that the animation method had an effect on the reported scores for readability and closeness to actual signing; the estimated marginal mean readability and closeness were greater for keyframed signs than for motion-captured signs. To our knowledge, this is the first study aimed at measuring and comparing the accuracy, readability, and realism of ASL animations produced with different techniques.

Keywords: 3D Animation, American Sign Language, Deaf Education, Motion Capture.
