Search results for: Brewster-Zidek technique.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3065

2735 Passivity Analysis of Stochastic Neural Networks With Multiple Time Delays

Authors: Biao Qin, Jin Huang, Jiaojiao Ren, Wei Kang

Abstract:

This paper deals with the problem of passivity analysis for stochastic neural networks with leakage, discrete and distributed delays. By using the delay-partitioning technique, the free-weighting-matrix method and stochastic analysis techniques, several sufficient conditions for the passivity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs), in which both the time delay and its time derivative are fully considered. A numerical example is given to show the usefulness and effectiveness of the obtained results.

Keywords: Passivity, Stochastic neural networks, Multiple time delays, Linear matrix inequalities (LMIs).

2734 Artificial Voltage-Controlled Capacitance and Inductance using Voltage-Controlled Transconductance

Authors: Mansour I. Abbadi, Abdel-Rahman M. Jaradat

Abstract:

In this paper, a technique is proposed to implement an artificial voltage-controlled capacitance or inductance which can replace the well-known varactor diode in many applications. The technique is based on injecting the current of a voltage-controlled current source into a fixed capacitor or inductor. Then, by controlling the transconductance of the current source with an external bias voltage, a voltage-controlled capacitive or inductive reactance is obtained. The proposed voltage-controlled reactance devices can be designed to work anywhere in the frequency spectrum. Practical circuits for the proposed voltage-controlled reactances are suggested and simulated.
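
As a rough illustration of the principle (our own notation and a simplifying assumption, not the paper's actual circuit): if the controlled source injects an extra current proportional to the current already drawn by the fixed capacitor, with the proportionality factor k(V_b) set by the bias-controlled transconductance, the terminal current becomes

\[
i_{\mathrm{in}} = C\frac{dv}{dt} + k(V_b)\,C\frac{dv}{dt}
\;\Longrightarrow\;
C_{\mathrm{eff}}(V_b) = \bigl(1 + k(V_b)\bigr)\,C ,
\]

and, dually, injecting a current proportional to the current of a fixed inductor scales the effective inductance to \(L_{\mathrm{eff}}(V_b) = L/\bigl(1 + k(V_b)\bigr)\).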

Keywords: voltage-controlled capacitance, voltage-controlled inductance, varactor diode, variable transconductance.

2733 Optimized Vector Quantization for Bayer Color Filter Array

Authors: M. Lakshmi, J. Senthil Kumar

Abstract:

To reduce cost, digital cameras use a single image sensor to capture color images. The Color Filter Array (CFA) in digital cameras permits only one of the three primary (red-green-blue) colors to be sensed at each pixel; the two missing components are interpolated through a method named demosaicking. The captured data are interpolated into a full color image and then compressed in applications, and color interpolation before compression leads to data redundancy. This paper proposes a new Vector Quantization (VQ) technique that constructs a VQ codebook with the Differential Evolution (DE) algorithm. The new technique is compared to the conventional Linde-Buzo-Gray (LBG) method.
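
For context, a minimal sketch of the conventional LBG baseline mentioned above is given here (Python/NumPy); the training-vector dimension, the biorthogonal-wavelet preprocessing and the Differential-Evolution codebook search of the paper are not reproduced, so this only shows the reference point the proposed technique is compared against.

```python
import numpy as np

def lbg_codebook(vectors, codebook_size, n_iter=20, eps=1e-3):
    """Minimal Linde-Buzo-Gray (LBG) codebook training sketch.

    vectors: (N, d) array of training vectors (e.g. flattened image blocks).
    A DE-optimized codebook would replace the centroid update below with a
    Differential Evolution search over codewords.
    """
    rng = np.random.default_rng(0)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(n_iter):
        # Nearest-codeword assignment (squared Euclidean distance).
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Centroid update; empty cells keep their previous codeword.
        new_cb = codebook.copy()
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):
                new_cb[k] = members.mean(axis=0)
        if np.abs(new_cb - codebook).max() < eps:
            break
        codebook = new_cb
    return codebook, labels
```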

Keywords: Color Filter Array (CFA), Biorthogonal Wavelet, Vector Quantization (VQ), Differential Evolution (DE).

2732 Harnessing Replication in Object Allocation

Authors: H. T. Barney, G. C. Low

Abstract:

The design of distributed systems involves the partitioning of the system into components or partitions and the allocation of these components to physical nodes. Techniques have been proposed for both the partitioning and the allocation process. However, these techniques suffer from a number of limitations. For instance, object replication has the potential to greatly improve the performance of an object-oriented distributed system, but it can be difficult to use effectively, and there are few techniques that support the developer in harnessing object replication. This paper presents a methodological technique that helps developers decide how objects should be allocated in order to improve performance in a distributed system that supports replication. The performance of the proposed technique is demonstrated and tested on an example system.

Keywords: Allocation, Distributed Systems, Replication.

2731 Multilevel Arnoldi-Tikhonov Regularization Methods for Large-Scale Linear Ill-Posed Systems

Authors: Yiqin Lin, Liang Bao

Abstract:

This paper is devoted to the numerical solution of large-scale linear ill-posed systems. A multilevel regularization method is proposed. This method is based on a synthesis of the Arnoldi-Tikhonov regularization technique and the multilevel technique. We show that if the Arnoldi-Tikhonov method is a regularization method, then the multilevel method is also a regularization one. Numerical experiments presented in this paper illustrate the effectiveness of the proposed method.
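
For reference, the standard Arnoldi-Tikhonov formulation underlying the method (stated here in our notation, which need not match the paper's) is: after m steps of the Arnoldi process applied to A with starting vector b, one has \(AV_m = V_{m+1}\bar{H}_m\) with orthonormal \(V_{m+1}\) and an \((m+1)\times m\) upper Hessenberg matrix \(\bar{H}_m\), and the regularized iterate is sought as \(x_m = V_m y_m\) with

\[
y_m = \arg\min_{y \in \mathbb{R}^m} \;\bigl\| \bar{H}_m y - \|b\|_2 e_1 \bigr\|_2^2 + \lambda \|y\|_2^2 ,
\]

where the parameter \(\lambda\) can be chosen by the discrepancy principle; the multilevel construction layered on top of this projected problem is the paper's contribution and is not summarized here.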

Keywords: Discrete ill-posed problem, Tikhonov regularization, discrepancy principle, Arnoldi process, multilevel method.

2730 Image Segmentation Using the K-means Algorithm for Texture Features

Authors: Wan-Ting Lin, Chuen-Horng Lin, Tsung-Ho Wu, Yung-Kuan Chan

Abstract:

This study segments objects using the K-means algorithm on texture features. First, color images are transformed into gray-level images, and a novel technique is used to extract texture features from the image. Objects and background are then differentiated by clustering these features with the K-means algorithm. Finally, a new object segmentation algorithm based on a morphological technique is proposed. The experiments cover the segmentation of both single and multiple objects, and the region of an object can be accurately segmented out. The results can support image retrieval and the analysis of object features.
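
A hedged sketch of the overall pipeline (gray-level image, a simple texture feature, K-means clustering, morphological clean-up) is shown below in Python; the local mean/standard-deviation feature and the 3x3 opening are only stand-ins for the paper's own feature extraction and morphological technique.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def segment_by_texture(gray, n_clusters=2, win=7):
    """K-means segmentation on simple per-pixel texture features."""
    gray = gray.astype(float)
    local_mean = ndimage.uniform_filter(gray, size=win)
    local_sqmean = ndimage.uniform_filter(gray ** 2, size=win)
    local_std = np.sqrt(np.maximum(local_sqmean - local_mean ** 2, 0.0))

    feats = np.stack([local_mean.ravel(), local_std.ravel()], axis=1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(feats).reshape(gray.shape)

    # Morphological clean-up of one cluster taken as the object mask; the
    # cluster/object correspondence is arbitrary and must be chosen (e.g.
    # by mean feature value) in a real application.
    mask = ndimage.binary_opening(labels == 1, structure=np.ones((3, 3)))
    return labels, mask
```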

Keywords: K-means, multiple objects, segmentation, texture features.

2729 A Modified Run Length Coding Technique for Test Data Compression Based on Multi-Level Selective Huffman Coding

Authors: C. Kalamani, K. Paramasivam

Abstract:

Test data compression is an efficient method for reducing the test application cost. The problem of reducing test data has been addressed by researchers in three different aspects: test data compression, Built-In Self-Test (BIST) and test set compaction. The latter two methods are capable of enhancing fault coverage at the cost of hardware overhead. The drawback of the conventional methods is that they can reduce test storage and test power, but when the test data contain redundant run lengths, no additional compression is applied. This paper presents a modified Run Length Coding (RLC) technique combined with a Multilevel Selective Huffman Coding (MLSHC) technique to reduce test data volume, test pattern delivery time and power dissipation in scan test applications: where a redundant run length is encountered, the preceding run symbol is replaced with a tiny codeword. Experimental results show that the presented method not only improves the test data compression but also reduces the overall test data volume compared to recent schemes. Experiments for the six largest ISCAS-98 benchmarks show that our method outperforms most known techniques.
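
To make the run-length stage concrete, here is a toy sketch (ours, not the paper's modified coder) of how a scan-test bit stream is turned into run-length symbols; a (multilevel selective) Huffman coder would then assign the shortest codewords to the most frequent run lengths.

```python
def run_lengths(bits):
    """Split a test bit stream into runs of 0s, each terminated by a 1."""
    runs, count = [], 0
    for b in bits:
        if b == 0:
            count += 1
        else:
            runs.append(count)   # a run of `count` zeros followed by a 1
            count = 0
    if count:
        runs.append(count)       # trailing run with no terminating 1
    return runs

# '0001' '001' '1' '00001'  ->  run-length symbols 3, 2, 0, 4
print(run_lengths([0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1]))  # [3, 2, 0, 4]
```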

Keywords: Modified run length coding, multilevel selective Huffman coding, built-in-self-test modified selective Huffman coding, automatic test equipment.

2728 Market Acceptance of Irradiated Food in the City of Piracicaba, Brazil

Authors: Vanessa de Cillos Silva, Fabrício José Piacente, Sônia Maria De Stefano Piedade, Valter Arthur

Abstract:

Increasing concern about the safety and hygiene of food consumption motivates the study of food preservation. Food irradiation is a technique used for preservation, but many consumers associate this technique with dangers such as environmental contamination and the development of diseases. This research evaluated the acceptance of irradiated products by the consumer market in the city of Piracicaba/SP, Brazil. The methodology adopted was the application of a questionnaire in the city's supermarkets. After the application, the data were tabulated and analyzed. It was observed that the majority of interviewees would not eat irradiated food. Unfamiliarity and questions about the safety of irradiated food were the main causes of their rejection.

Keywords: Irradiation, market acceptance, questionnaire, storage.

2727 Testing Loaded Programs Using Fault Injection Technique

Authors: S. Manaseer, F. A. Masooud, A. A. Sharieh

Abstract:

Fault tolerance is critical in many of today's large computer systems. This paper focuses on improving fault tolerance through testing. Moreover, it concentrates on memory faults: how to access the editable part of a process memory space and how this part is affected. A special Software Fault Injection Technique (SFIT) is proposed for this purpose. This is done by sequentially scanning the memory of the target process and trying to edit the maximum number of bytes inside that memory. The technique was implemented and tested on a group of programs in software packages such as jet-audio, Notepad, Microsoft Word, Microsoft Excel, and Microsoft Outlook. The results from the test sample processes indicate that the size of the scanned area depends on several factors: process size, process type, and the virtual memory size of the machine under test. The results show that increasing the process size will increase the scanned memory space. They also show that input-output processes have a larger scanned area than other processes. Increasing the virtual memory size will also affect the size of the scanned area, but only up to a certain limit.
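
As a rough sketch of the scan-and-edit idea (not the paper's SFIT tool, which targets Windows applications such as Notepad and Word), the following Python fragment enumerates the writable part of a process's memory map and overwrites a single byte in it; it uses the Linux /proc interface purely for brevity and requires ptrace permission on the target process.

```python
import re

def scan_writable_regions(pid):
    """Return (start, size) of writable regions in the target's memory map."""
    regions = []
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            # e.g. "7f2a4c000000-7f2a4c021000 rw-p 00000000 00:00 0 [heap]"
            m = re.match(r"([0-9a-f]+)-([0-9a-f]+)\s+(\S+)", line)
            if m and "w" in m.group(3):
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                regions.append((start, end - start))
    return regions

def inject_byte(pid, address, value):
    """Overwrite one byte in the target process (the fault-injection step)."""
    with open(f"/proc/{pid}/mem", "r+b") as mem:
        mem.seek(address)
        mem.write(bytes([value]))
```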

Keywords: Complex software systems, Error detection, Fault tolerance, Injection and testing methodology, Memory faults, Process and virtual memory.

2726 Growth and Characterization of L-Asparagine (LAS) Crystal Admixture of Paranitrophenol (PNP): A NLO Material

Authors: Grace Sahaya Sheba, P. Omegala Priyakumari, M. Gunasekaran

Abstract:

L-asparagine admixture paranitrophenol (LAPNP) single crystals were grown successfully by the solution method with the slow evaporation technique at room temperature. Crystals of size 12 mm × 5 mm × 3 mm have been obtained in 15 days. The grown crystals were brown in color and transparent. The solubility of the grown samples has been found out at various temperatures. The lattice parameters of the grown crystals were determined by the X-ray diffraction technique. The reflection planes of the sample were confirmed by the powder X-ray diffraction study, and the diffraction peaks were indexed. Fourier transform infrared (FTIR) studies were used to confirm the presence of various functional groups in the crystals. The UV-visible absorption spectrum was recorded to study the optical transparency of the grown crystal. The nonlinear optical (NLO) property of the grown crystal was confirmed by the Kurtz-Perry powder technique, and a study of its second harmonic generation efficiency in comparison with potassium dihydrogen phosphate (KDP) has been made. The mechanical strength of the crystal was estimated by the Vickers hardness test. The grown crystals were subjected to thermogravimetric and differential thermal analysis (TG/DTA). The dielectric behavior of the sample was also studied.

Keywords: Characterization, Microhardness, Non-linear optical materials, Solution growth, Spectroscopy, XRD.

2725 Symbolic Analysis of Large Circuits Using Discrete Wavelet Transform

Authors: Ali Al-Ataby, Fawzi Al-Naima

Abstract:

Symbolic Circuit Analysis (SCA) is a technique used to generate the symbolic expression of a network. It has become a well-established technique in circuit analysis and design. The symbolic expression of networks offers an excellent way to perform frequency response analysis, sensitivity computation, stability measurements, performance optimization, and fault diagnosis. Many approaches have been proposed in the area of SCA, offering different features and capabilities. Numerical interpolation methods are very common in this context, especially those using the Fast Fourier Transform (FFT). The aim of this paper is to present a method for SCA that uses the Wavelet Transform (WT) as a mathematical tool to generate the symbolic expression for large circuits while minimizing the analysis time by reducing the number of computations.

Keywords: Numerical Interpolation, Sparse Matrices, Symbolic Analysis, Wavelet Transform.

2724 Balancing of Quad Tree using Point Pattern Analysis

Authors: Amitava Chakraborty, Sudip Kumar De, Ranjan Dasgupta

Abstract:

The point quadtree is one of the most common data organizations for dealing with spatial data and can be used to increase the efficiency of searching point features. As the efficiency of the searching technique depends on the height of the tree, arbitrary insertion of point features may make the tree unbalanced and lead to longer search times. This paper attempts to design an algorithm that builds a nearly balanced point quadtree. A point pattern analysis technique has been applied for this purpose, which shows a significant enhancement of performance; the results are also included in the paper for the sake of completeness.
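
For orientation, a minimal point-quadtree insertion sketch is given below (Python). The tree height is decided entirely by the insertion order; the point pattern analysis step of the paper would pre-order the points so that the resulting tree is nearly balanced, and that ordering step is not reproduced here.

```python
class PQNode:
    """Point-quadtree node: the stored point splits the plane into
    four quadrants (NE, NW, SW, SE) relative to itself."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.child = [None, None, None, None]   # NE, NW, SW, SE

def quadrant(node, x, y):
    if x >= node.x:
        return 0 if y >= node.y else 3          # NE or SE
    return 1 if y >= node.y else 2              # NW or SW

def insert(root, x, y):
    if root is None:
        return PQNode(x, y)
    q = quadrant(root, x, y)
    root.child[q] = insert(root.child[q], x, y)
    return root

def height(node):
    return 0 if node is None else 1 + max(height(c) for c in node.child)
```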

Keywords: Algorithm, Height balanced tree, Point pattern analysis, Point quad tree.

2723 Mining Educational Data to Support Students’ Major Selection

Authors: Kunyanuth Kularbphettong, Cholticha Tongsiri

Abstract:

This paper aims to create a model to help students choose a specialization track when majoring in computer science at Suan Sunandha Rajabhat University. The objective of this research is to develop a recommendation system using data mining techniques to analyze knowledge and derive decision rules. Such relationships can be used to demonstrate the reasonableness of a student's track choice as well as to support the decision, and the system is verified by experts in the field. The sample consists of computer science students who used the system and answered a satisfaction questionnaire. The system is found to be satisfactory by both experts and students.

Keywords: Data mining technique, the decision support system, knowledge and decision rules.

2722 Generating Frequent Patterns through Intersection between Transactions

Authors: M. Jamali, F. Taghiyareh

Abstract:

The problem of frequent itemset mining is considered in this paper. A new technique is proposed to generate frequent patterns in large databases without time-consuming candidate generation. The technique focuses on transactions instead of itemsets: it takes the intersection of one transaction with the other transactions and computes the maximum set of items shared between them, instead of creating candidate itemsets and computing their frequencies. Experiments on real-life transaction data show that significant efficiency gains are obtained when generating association rules.
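
A hedged, pairwise sketch of the intersection idea is given below in Python; the paper's actual algorithm and its pruning details may differ, but it shows how shared itemsets arise from transactions directly, without candidate generation.

```python
from itertools import combinations

def frequent_by_intersection(transactions, min_support):
    """Frequent itemsets from pairwise transaction intersections."""
    tsets = [frozenset(t) for t in transactions]
    # Every non-empty pairwise intersection is the largest itemset the two
    # transactions share, and therefore a frequent-itemset candidate.
    candidates = {s for a, b in combinations(tsets, 2) if (s := a & b)}
    result = {}
    for c in candidates:
        support = sum(c <= t for t in tsets) / len(tsets)
        if support >= min_support:
            result[c] = support
    return result

print(frequent_by_intersection(
    [{"a", "b", "c"}, {"a", "c"}, {"a", "b"}, {"b", "c"}], 0.5))
```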

Keywords: Association rules, data mining, frequent patterns, shared itemset.

2721 A Technique for Reachability Graph Generation for the Petri Net Models of Parallel Processes

Authors: Farooq Ahmad, Hejiao Huang, Xiaolong Wang

Abstract:

Reachability graph (RG) generation suffers from the problem of exponential space and time complexity. To alleviate the more critical problem of time complexity, this paper presents a new approach to RG generation for Petri net (PN) models of parallel processes. Independent RGs for each parallel process in the PN structure are generated in parallel, and the cross-product of these RGs forms the exhaustive state space from which the RG of the given parallel system is determined. The complexity analysis of the presented algorithm shows a significant decrease in the time complexity of RG generation. The proposed technique is applicable to parallel programs having multiple threads with synchronization.
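
A small sketch of the composition step is shown below (Python): given one reachability graph per parallel process, the cross-product of their state sets is the exhaustive state space, and interleaved transitions are those in which exactly one component moves. The synchronization constraints of the Petri net, which the paper uses to restrict this space, are omitted here.

```python
from itertools import product

def combined_state_space(component_rgs):
    """component_rgs: one dict {state: set_of_successor_states} per process."""
    states = list(product(*[rg.keys() for rg in component_rgs]))
    edges = set()
    for gstate in states:
        for i, rg in enumerate(component_rgs):
            for succ in rg[gstate[i]]:
                edges.add((gstate, gstate[:i] + (succ,) + gstate[i + 1:]))
    return states, edges

# Two toy processes with two states each -> 4 global states, 8 interleavings.
rg_a = {"p0": {"p1"}, "p1": {"p0"}}
rg_b = {"q0": {"q1"}, "q1": {"q0"}}
states, edges = combined_state_space([rg_a, rg_b])
print(len(states), len(edges))   # 4 8
```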

Keywords: Parallel processes, Petri net, reachability graph, time complexity.

2720 A Real-Time Signal Processing Technique for MIDI Generation

Authors: Farshad Arvin, Shyamala Doraisamy

Abstract:

This paper presents a new hardware interface using a microcontroller which processes audio music signals into standard MIDI data. A technique for extracting note parameters from music signals is described. An algorithm to convert the voice samples for real-time processing without complex calculations is proposed. A high-frequency microcontroller is deployed as the main processor to execute the outlined algorithm. The MIDI data generated are transmitted using the EIA-232 protocol. Analysis of the generated data shows the feasibility of using microcontrollers for a real-time MIDI generation hardware interface.
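
Once the fundamental frequency of a sample window has been estimated, the note-parameter extraction reduces to the standard pitch-to-MIDI mapping (A4 = 440 Hz = note 69), sketched here in Python for context; the microcontroller-side sampling and EIA-232 framing are not shown.

```python
import math

def freq_to_midi(f_hz, a4=440.0):
    """Map a fundamental frequency in Hz to the nearest MIDI note number."""
    return int(round(69 + 12 * math.log2(f_hz / a4)))

print(freq_to_midi(261.63))   # middle C -> 60
```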

Keywords: Signal processing, MIDI, Microcontroller, EIA-232.

2719 Motion Detection Techniques Using Optical Flow

Authors: A. A. Shafie, Fadhlan Hafiz, M. H. Ali

Abstract:

Motion detection is very important in image processing. One way of detecting motion is using optical flow. Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components; a second constraint is needed. The method used for finding the optical flow in this project assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. This technique is later used in developing software for motion detection which can carry out four types of motion detection. The motion detection software presented in this project can also highlight the motion region, measure the motion level and count the number of objects. Many objects, such as vehicles and humans, can be recognized from video streams by applying the optical flow technique.
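
A hedged sketch of using a dense optical-flow field for motion detection is given below (Python/OpenCV). The method described above relies on a global smoothness constraint on the flow (Horn-Schunck style); OpenCV's Farneback estimator is used here only as a readily available dense-flow stand-in.

```python
import cv2
import numpy as np

def motion_mask(prev_gray, curr_gray, mag_thresh=1.0):
    """Dense flow between two 8-bit gray frames, thresholded into a motion mask."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)           # per-pixel flow magnitude
    mask = (mag > mag_thresh).astype(np.uint8)   # highlighted motion region
    motion_level = float(mag.mean())             # crude overall motion level
    return mask, motion_level
```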

Keywords: Background modeling, Motion detection, Optical flow, Velocity smoothness constant, motion trajectories.

2718 Chinese Event Detection Technique Based on Dependency Parsing and Rule Matching

Authors: Weitao Lin

Abstract:

To quickly extract adequate information from large-scale unstructured text data, this paper studies the representation of events in Chinese scenarios and performs the regularized abstraction. It proposes a Chinese event detection technique based on dependency parsing and rule matching. The method first performs dependency parsing on the original utterance, then performs pattern matching at the word or phrase granularity based on the results of dependent syntactic analysis, filters out the utterances with prominent non-event characteristics, and obtains the final results. The experimental results show the effectiveness of the method.

Keywords: Natural Language Processing, Chinese event detection, rules matching, dependency parsing.

2717 Vehicle Velocity Estimation for Traffic Surveillance System

Authors: H. A. Rahim, U. U. Sheikh, R. B. Ahmad, A. S. M. Zain

Abstract:

This paper describes an algorithm to estimate real-time vehicle velocity using an image processing technique from known camera calibration parameters. The presented algorithm involves several main steps. First, the moving object is extracted by utilizing a frame differencing technique. Second, an object tracking method is applied, and the speed is estimated based on the displacement of the object's centroid. Several assumptions are listed to simplify the transformation from the 3D real world to 2D images. The results obtained from the experiment have been compared to the estimated ground truth. The experiment shows that the proposed algorithm achieves a velocity estimation accuracy of about ±1.7 km/h.
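
A simplified sketch of the pipeline (frame differencing, centroid displacement, conversion to km/h) is shown below in Python/OpenCV; a single metres-per-pixel constant stands in for the full camera-calibration transform described above.

```python
import cv2
import numpy as np

def blob_centroid(prev_gray, curr_gray, thresh=25):
    """Moving-object mask via frame differencing, then its centroid."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def speed_kmh(f0, f1, f2, fps, metres_per_pixel):
    """Speed from the centroid displacement between two successive frame pairs."""
    c01, c12 = blob_centroid(f0, f1), blob_centroid(f1, f2)
    if c01 is None or c12 is None:
        return None
    pixels_per_frame = float(np.linalg.norm(c12 - c01))
    return pixels_per_frame * metres_per_pixel * fps * 3.6   # m/s -> km/h
```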

Keywords: camera calibration, object tracking, velocity estimation, video image processing

2716 A GPU Based Texture Mapping Technique for 3D Models Using Multi-View Images

Authors: In Lee, Kyung-Kyu Kang, Jaewoon Lee, Dongho Kim

Abstract:

Previous algorithms for 3D model texture generation and mapping from multi-view images have issues in texture chart generation, namely self-intersection and concentration of the texture in texture space. They may also suffer from problems due to occluded areas, such as the inner parts of thighs. In this paper we propose a texture mapping technique for 3D models using multi-view images on the GPU. We perform texture mapping per pixel directly in the GPU fragment shader, without generating a texture map, and we resolve the occluded areas using the 3D model's depth information. Our method needs more calculation on the GPU than previous works, but it shows real-time performance, and the previously mentioned problems do not occur.

Keywords: Texture Mapping, Multi-view Images, Camera Calibration, GPU Shader.

2715 Decision Support System for Solving Multi-Objective Routing Problem

Authors: Ismail El Gayar, Ossama Ismail, Yousri El Gamal

Abstract:

This paper presents a technique to solve one of the transportation problems we face in real life, the Bus Scheduling Problem. Many countries use buses in schools, companies and travel offices, for example, to transfer multiple passengers from many places to a specific place and vice versa. This transferring process can cost time and money, so we build a decision support system that can solve this problem. In this paper, a genetic algorithm combined with a shortest-path technique is used to generate a solution that is competitive with other well-known techniques. A comparison between our solution and other solutions for this problem is also presented.

Keywords: Bus scheduling problem, decision support system, genetic algorithm, operation planning, shortest path, transportation.

2714 Control of Pendulum on a Cart with State Dependent Riccati Equations

Authors: N. M. Singh, Jayant Dubey, Ghanshyam Laddha

Abstract:

The State Dependent Riccati Equation (SDRE) approach is a modification of the well-studied LQR method. It has the capability of being applied to the control of nonlinear systems. In this paper the technique has been applied to control the single inverted pendulum (SIP), which represents a rich class of nonlinear underactuated systems. SIP modeling is based on the Euler-Lagrange equations. A procedure is developed for the judicious selection of weighting parameters and constraint handling. The controller designed by the SDRE technique here gives better results than existing controllers designed by energy-based techniques.
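
For context, the standard SDRE construction referred to above (our notation) factors the dynamics into a state-dependent linear form and solves the algebraic Riccati equation pointwise along the trajectory:

\[
\dot{x} = A(x)\,x + B(x)\,u, \qquad
A^{T}(x)P(x) + P(x)A(x) - P(x)B(x)R^{-1}B^{T}(x)P(x) + Q = 0,
\]
\[
u = -R^{-1}B^{T}(x)P(x)\,x ,
\]

where Q and R are the weighting matrices whose selection procedure, together with constraint handling for the single inverted pendulum, is what the paper develops.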

Keywords: State Dependent Riccati Equation (SDRE), Single Inverted Pendulum (SIP), Linear Quadratic Regulator (LQR)

2713 STM Spectroscopy of Alloyed Nanocrystal Composite CdSxSe1-X

Authors: T. Abdallah, K. Easawi, A. Khalid, S. Negm, H. Talaat

Abstract:

Nanocrystal (NC) alloyed composites CdSxSe1-x (x = 0 to 1) have been prepared using the chemical solution deposition technique. The energy band gaps of these alloyed nanocrystals, of approximately the same size, have been determined by the scanning tunneling spectroscopy (STS) technique at room temperature. The values of the energy band gap obtained directly using STS are compared to those measured by optical spectroscopy. Increasing the molar fraction x from 0 to 1 causes a clearly observed increase in the band gap of the alloyed composite nanocrystals. Vegard's law was applied to calculate the parameters of the effective mass approximation (EMA) model, and the dimensions obtained were compared to the values measured by STM. The good agreement of the calculated and measured values is a direct result of applying Vegard's law in the nanocomposites.
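
For reference, Vegard's law linearly interpolates an alloy parameter P between the end members, \(P(\mathrm{CdS}_x\mathrm{Se}_{1-x}) = x\,P(\mathrm{CdS}) + (1-x)\,P(\mathrm{CdSe})\); for band gaps a bowing correction \(E_g(x) = x\,E_g(\mathrm{CdS}) + (1-x)\,E_g(\mathrm{CdSe}) - b\,x(1-x)\) is also commonly used, although the abstract does not state whether a bowing parameter was included here.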

Keywords: Alloy semiconductor nanocrystals, STM.

2712 Introduction of the Fluid-Structure Coupling into the Force Analysis Technique

Authors: Océane Grosset, Charles Pézerat, Jean-Hugh Thomas, Frédéric Ablitzer

Abstract:

This paper presents a method to take into account the fluid-structure coupling in an inverse method, the Force Analysis Technique (FAT). The FAT method, also called the RIFF method (Filtered Windowed Inverse Resolution), allows the force distribution to be identified from the local vibration field. In order to identify only the external force applied on a structure, it is necessary to quantify the fluid-structure coupling, especially in naval applications, where the fluid is heavy. The method can be decomposed into two parts: the first identifies the fluid-structure coupling, and the second introduces it into the FAT method to reconstruct the external force. Results of simulations on a plate coupled with a cavity filled with water are presented.

Keywords: Fluid-structure coupling, inverse methods, naval, vibrations.

2711 Evolutionary Multi-objective Optimization for Positioning of Residential Houses

Authors: Ayman El Ansary, Mohamed Shalaby

Abstract:

The current study describes a multi-objective optimization technique for the positioning of houses in a residential neighborhood. The main task is the placement of residential houses in a favorable configuration satisfying a number of objectives. Solving the house layout problem is a challenging task. It requires an iterative approach to satisfy design requirements (e.g. energy efficiency, skyview, daylight, roads network, visual privacy, and clear access to favorite views). These design requirements vary from one project to another based on location and client preferences. In the Gulf region, the most important socio-cultural factor is visual privacy in indoor space. Hence, most of the residential houses in this region are surrounded by high fences to provide privacy, which has a direct impact on other requirements (e.g. daylight and direction to favorite views). This investigation introduces a novel technique to optimally locate and orient residential buildings to satisfy a set of design requirements. The developed technique explores the search space for possible solutions. This study considers two-dimensional house planning problems; however, it can be extended to solve three-dimensional cases.

Keywords: Evolutionary optimization, Houses planning, Urban modeling, Daylight, Visual Privacy, Residential compounds.

2710 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different Machine Learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters and 1,281 are temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and finally Support Vector Machines (SVM). For each method, different parameters were analyzed, and the best result of each technique was used for the comparison. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, on the classification of temporarily defaulters, this technique, in terms of false positives, was surpassed by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed considering the results found, and an overview of what was presented is shown in the conclusion of this study.
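
A minimal sketch of such a one-against-all comparison in scikit-learn is given below; the data here are synthetic stand-ins (the bank's 5,432-client data set is obviously not reproduced), the thermometer/dummy coding step is omitted, and the RBF-network variant (ANN-RBF) has no direct scikit-learn equivalent, so only LR, MLP and SVM appear.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the 15-attribute, 3-class credit data set.
X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)

models = {
    "LR":      LogisticRegression(max_iter=1000),
    "ANN-MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000),
    "SVM":     SVC(kernel="rbf"),
}
for name, clf in models.items():
    scores = cross_val_score(OneVsRestClassifier(clf), X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```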

Keywords: Artificial Neural Networks, ANNs, classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines.

2709 Study of Natural Patterns on Digital Image Correlation Using Simulation Method

Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish

Abstract:

Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in experimental mechanics. Compared with physical measuring devices such as strain gauges, which only provide very restricted coverage and are expensive to deploy widely, the DIC technique provides full-field coverage with relatively high accuracy using an inexpensive and simple experimental setup. It is very important to study the effect of natural patterns on the DIC technique because the preparation of artificial patterns is a time-consuming and tedious process. The objective of this research is to study the effect of using images having natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. A parameter used in DIC (the subset size) affects the processing and accuracy of DIC and can even cause DIC to fail. Regarding the image parameters (correlation coefficient), a higher similarity between two subsets can lead the DIC process to fail and make the result less accurate. Images of good and bad quality for DIC methods are presented and, more importantly, a systematic way is provided to evaluate the quality of images with natural patterns before the measurement devices are installed.
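
For reference, the correlation criterion alluded to above is typically a zero-normalized cross-correlation between a reference subset f and a deformed subset g,

\[
C = \frac{\sum_i \bigl(f_i - \bar{f}\bigr)\bigl(g_i - \bar{g}\bigr)}
         {\sqrt{\sum_i \bigl(f_i - \bar{f}\bigr)^2}\,\sqrt{\sum_i \bigl(g_i - \bar{g}\bigr)^2}} ,
\]

(one common choice; the paper's exact criterion is not stated). When a natural pattern makes two candidate subsets give nearly equal values of C, the match becomes ambiguous, which is the failure mode the abstract refers to and what the subset size must be chosen to avoid.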

Keywords: Digital image correlation (DIC), Deformation simulation, Natural pattern, Subset size.

2708 Adaptive Impedance Control for Unknown Non-Flat Environment

Authors: Norsinnira Zainul Azlan, Hiroshi Yamaura

Abstract:

This paper presents a new adaptive impedance control strategy, based on the Function Approximation Technique (FAT), to compensate for an unknown non-flat environment shape or a time-varying environment location. The target impedance in the force-controllable direction is modified by incorporating adaptive compensators, and the uncertainties are represented by FAT, allowing the update law to be derived easily. The force error feedback is utilized in the estimation, and accurate knowledge of the environment parameters is not required by the algorithm. It is shown mathematically that the stability of the controller is guaranteed based on Lyapunov theory. Simulation results are presented to demonstrate the validity of the proposed controller.

Keywords: Adaptive impedance control, Function Approximation Technique (FAT), impedance control, unknown environment position.

2707 Numerical Analysis of Cold-Formed Steel Shear Wall Panels Subjected to Cyclic Loading

Authors: H. Meddah, M. Berediaf-Bourahla, B. El-Djouzi, N. Bourahla

Abstract:

Shear walls made of cold-formed steel are used as lateral force resisting components in residential and low-rise commercial and industrial constructions. The seismic design analysis of such structures is often complex due to the slenderness of the members and the prevalence of instability. In this context, a simplified modeling technique across the panel is proposed using the finite element method. The approach is based on idealizing the whole panel by a nonlinear shear link element, which reflects its shear behavior, connected to rigid body elements which transmit the forces to the end elements (studs) that resist the tension and the compression. The numerical model of the shear wall panel was subjected to cyclic loads in order to evaluate the seismic performance of the structure in terms of lateral displacement and energy dissipation capacity. In order to validate this model, the numerical results were compared with test results from the literature. This modeling technique is particularly useful for the design of cold-formed steel structures, where the shear forces in each panel and the axial forces in the studs can be obtained using spectrum analysis.

Keywords: Cold-formed steel, cyclic loading, modeling technique, nonlinear analysis, shear wall panel.

2706 Performance Evaluation of the OCDM/WDM Technique for Optical Packet Switches

Authors: V. Eramo, L. Piazzo, M. Listanti, A. Germoni, A. Cianfrani

Abstract:

The performance of the Optical Code Division Multiplexing/Wavelength Division Multiplexing (OCDM/WDM) technique for Optical Packet Switches is investigated. The impact on performance of the impairments due to both Multiple Access Interference and beat noise is studied. The Packet Loss Probability due to output packet contentions is evaluated as a function of the main switch and traffic parameters when Gold coherent optical codes are adopted. The Packet Loss Probability of the OCDM/WDM switch can reach 10^-9 when M = 16 wavelengths, Gold codes of length L = 511, and only 24 wavelength converters are used in the switch.

Keywords: Optical code division multiplexing, bufferless optical packet switch, performance evaluation.
