Search results for: panel vector error correction model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8945

7505 Parameters Extraction for Pseudomorphic HEMTs Using Genetic Algorithms

Authors: Mazhar B. Tayel, Amr H. Yassin

Abstract:

A small-signal model and the extraction of its parameters for a pseudomorphic high electron mobility transistor (PHEMT) are presented. Both the extrinsic and intrinsic circuit elements of the small-signal model are determined using a genetic algorithm (GA) as a stochastic global search and optimization tool. The parameter extraction of the small-signal model is performed on a 200-μm gate width AlGaAs/InGaAs PHEMT. The equivalent circuit elements of the proposed 18-element model are determined directly from the measured S-parameters. The GA is used to extract the parameters of the proposed small-signal model from 0.5 GHz up to 18 GHz.
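
A minimal sketch of the GA-based extraction idea, assuming numpy and a toy series-R/shunt-C one-port in place of the paper's 18-element PHEMT equivalent circuit; the "measured" S11 data are synthetic and all component values are hypothetical. A simple real-coded genetic algorithm searches for the parameter values that minimize the error against the measured S-parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
Z0 = 50.0
freqs = np.linspace(0.5e9, 18e9, 50)             # 0.5 - 18 GHz sweep, as in the paper

def s11(params, f):
    """S11 of a toy series R-C one-port (stand-in for the 18-element model)."""
    R, C = params
    Z = R + 1.0 / (1j * 2 * np.pi * f * C)
    return (Z - Z0) / (Z + Z0)

true_params = np.array([12.0, 0.3e-12])          # hypothetical device: 12 ohm, 0.3 pF
measured = s11(true_params, freqs)               # plays the role of measured S-parameters

def error(p):
    return np.mean(np.abs(s11(p, freqs) - measured) ** 2)

bounds = np.array([[1.0, 0.05e-12], [100.0, 2.0e-12]])   # [lows], [highs] for (R, C)
pop = rng.uniform(bounds[0], bounds[1], size=(60, 2))

for gen in range(300):
    err = np.array([error(ind) for ind in pop])
    # tournament selection: each parent is the best of 3 random individuals
    cand = rng.integers(0, len(pop), size=(len(pop), 3))
    parents = pop[cand[np.arange(len(pop)), np.argmin(err[cand], axis=1)]]
    # blend crossover between pairs of parents
    alpha = rng.uniform(size=(len(pop), 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]
    # Gaussian mutation, clipped to the search bounds
    children += rng.normal(scale=0.02, size=children.shape) * (bounds[1] - bounds[0])
    children = np.clip(children, bounds[0], bounds[1])
    children[0] = pop[np.argmin(err)]            # elitism: keep the best individual
    pop = children

best = pop[np.argmin([error(ind) for ind in pop])]
print("estimated R = %.2f ohm, C = %.3f pF" % (best[0], best[1] * 1e12))
```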

Keywords: PHEMT, Genetic Algorithms, small signal modeling, optimization.

7504 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions with a wavelet transform is one approach to reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since a recorded earthquake strong ground motion is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out the non-effective frequencies of the strong ground motion. The filtering process may be repeated several times, although each repetition introduces additional approximation error. In this paper, the strong ground motion is filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sampled shear and moment frames is then carried out. The error associated with each wavelet is computed by comparing the dynamic responses of the sampled structures with the exact responses, which are obtained by dynamic analysis of the structures using the non-filtered strong ground motion.
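
A minimal sketch of the filtering step described above, assuming PyWavelets is available and using a synthetic acceleration record in place of the Northridge ground motion: the record is decomposed with a discrete wavelet, the detail (high-frequency) coefficients are discarded, and the signal is reconstructed from the approximation. The printed quantity is only the relative change of the record itself; in the paper the error is measured on the structural response.

```python
import numpy as np
import pywt

dt = 0.02                                        # sample interval of the record [s]
t = np.arange(0, 40, dt)
# synthetic "strong ground motion": a low-frequency component plus high-frequency noise
accel = (0.3 * np.sin(2 * np.pi * 1.2 * t)
         + 0.05 * np.random.default_rng(1).normal(size=t.size))

def filter_record(signal, wavelet="db4", level=1):
    """One pass of DWT filtering: keep the approximation, zero the detail bands."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[1:] = [np.zeros_like(d) for d in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: signal.size]

for name in ["haar", "db4", "sym5", "coif3"]:    # compare several wavelets
    filtered = filter_record(accel, wavelet=name)
    change = np.linalg.norm(filtered - accel) / np.linalg.norm(accel)
    print(f"{name:6s}  relative change introduced by filtering: {change:.3f}")
```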

Keywords: Wavelet transform, computational error, computational duration, strong ground motion data.

7503 New Adaptive Linear Discriminant Analysis for Face Recognition with SVM

Authors: Mehdi Ghayoumi

Abstract:

We have applied a new accelerated algorithm for linear discriminant analysis (LDA) to face recognition with a support vector machine. The new algorithm has the advantage of optimal selection of the step size. The gradient descent method and the new algorithm have been implemented in software and evaluated on the Yale face database B. The eigenfaces obtained from these approaches have been used to train a KNN classifier. The recognition rate of the new algorithm is compared with that of gradient descent.
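
A rough sketch of the overall pipeline (discriminant projection followed by classification), assuming scikit-learn and using synthetic data as a stand-in for the Yale B face images; the paper's accelerated step-size selection is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# synthetic stand-in for vectorized face images (Yale B data would be loaded here instead)
X, y = make_classification(n_samples=600, n_features=200, n_informative=30,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# project onto the (n_classes - 1)-dimensional discriminant subspace
lda = LinearDiscriminantAnalysis(n_components=4).fit(X_tr, y_tr)
Z_tr, Z_te = lda.transform(X_tr), lda.transform(X_te)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=3)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(Z_tr, y_tr)
    print(name, "recognition rate: %.3f" % clf.score(Z_te, y_te))
```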

Keywords: LDA, adaptive, SVM, face recognition.

7502 Proposal of Optimality Evaluation for Quantum Secure Communication Protocols by Taking the Average of the Main Protocol Parameters: Efficiency, Security and Practicality

Authors: Georgi Bebrov, Rozalina Dimova

Abstract:

In the field of quantum secure communication, there is no evaluation that characterizes quantum secure communication (QSC) protocols in a complete, general manner. The current paper addresses the lack of such an evaluation for QSC protocols by introducing an optimality evaluation, which is expressed as the average over the three main parameters of QSC protocols: efficiency, security, and practicality. For the efficiency evaluation, the common expression of this parameter is used, which incorporates all the classical and quantum resources (bits and qubits) utilized for transferring a certain amount of information (bits) in a secure manner. By using a criteria-based approach (whether or not certain criteria are met), an expression for the practicality evaluation is presented, which accounts for the complexity of the practical realization of a QSC protocol. Based on the error rates that the common quantum attacks (measure-and-resend, intercept-and-resend, probe, and entanglement-swapping attacks) induce, the security evaluation of a QSC protocol is proposed as the minimum function taken over the error rates of the mentioned quantum attacks. For the sake of clarity, an example is presented to show how the optimality is calculated.
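
A small illustration of the proposed averaging, using made-up numbers: security is taken as the minimum over the error rates induced by the listed attacks, and the optimality is the average of efficiency, security, and practicality. The numeric values below are hypothetical and only show the arithmetic, not any particular protocol.

```python
def security(attack_error_rates):
    """Security evaluation: minimum over the error rates induced by the common attacks."""
    return min(attack_error_rates.values())

def optimality(efficiency, security_value, practicality):
    """Optimality: the average of the three main QSC protocol parameters."""
    return (efficiency + security_value + practicality) / 3.0

# hypothetical numbers for one protocol (all quantities normalized to [0, 1])
attacks = {"measure-and-resend": 0.25, "intercept-and-resend": 0.25,
           "probe": 0.17, "entanglement-swapping": 0.50}
sec = security(attacks)
print("optimality =",
      round(optimality(efficiency=0.4, security_value=sec, practicality=0.8), 3))
```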

Keywords: Quantum cryptography, quantum secure communication, quantum secure direct communication security, quantum secure direct communication efficiency, quantum secure direct communication practicality.

7501 Pectoral Muscles Suppression in Digital Mammograms Using Hybridization of Soft Computing Methods

Authors: I. Laurence Aroquiaraj, K. Thangavel

Abstract:

Breast region segmentation is an essential prerequisite in the computerized analysis of mammograms. It aims at separating the breast tissue from the background of the mammogram and includes two independent segmentations. The first segments the background region, which usually contains annotations, labels and frames, from the whole breast region, while the second removes the pectoral muscle portion (present in Medio-Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper we propose a hybridization of Connected Component Labeling (CCL), fuzzy, and straight-line methods. Our proposed methods worked well for separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested using the 322 mammographic images of the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using the Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE) and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight-line methods produced more than 96% of the curve segmentations rated adequate or better. In addition, a comparison with similar approaches from the state of the art is given, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach.

Keywords: X-ray Mammography, CCL, Fuzzy, Straight line.

7500 Kinetic Modeling of Transesterification of Triacetin Using Synthesized Ion Exchange Resin (SIERs)

Authors: Hafizuddin W. Yussof, Syamsutajri S. Bahri, Adam P. Harvey

Abstract:

Strong anion exchange resins with QN+OH- functional groups have the potential to be developed and employed as heterogeneous catalysts for transesterification, as they are chemically stable to leaching of the functional group. Nine different SIERs (SIER1-9) with QN+OH- were prepared by suspension polymerization of vinylbenzyl chloride-divinylbenzene (VBC-DVB) copolymers in the presence of n-heptane (a pore-forming agent). The amine group was successfully grafted onto the polymeric resin beads through functionalization with trimethylamine. These SIERs were then used as catalysts for the transesterification of triacetin with methanol. A set of differential equations representing the Langmuir-Hinshelwood-Hougen-Watson (LHHW) and Eley-Rideal (ER) models for the transesterification reaction was developed, and both kinetic models were fitted to the experimental data. Overall, the synthesized ion exchange resin-catalyzed reaction was better described by the Eley-Rideal model than by the LHHW model, with sums of squared errors (SSE) of 0.742 and 0.996, respectively.
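
A minimal sketch of fitting a kinetic model to concentration-time data by minimizing the sum of squared errors, assuming scipy; the rate law below is a simplified Eley-Rideal-type expression and the concentration data are synthetic, standing in for the paper's full LHHW/ER formulations and experiments.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# synthetic triacetin concentration-time data (mol/L), stand-in for experimental points
t_data = np.array([0, 10, 20, 40, 60, 90, 120])                 # minutes
c_data = np.array([0.50, 0.41, 0.34, 0.25, 0.19, 0.13, 0.10])

c_meoh = 3.0                                     # excess methanol, assumed roughly constant

def rate(t, c, k, K):
    """Simplified ER-type rate: adsorbed methanol reacting with triacetin from the bulk."""
    return [-k * K * c_meoh * c[0] / (1.0 + K * c_meoh)]

def sse(params):
    k, K = params
    sol = solve_ivp(rate, (0, t_data[-1]), [c_data[0]], t_eval=t_data, args=(k, K))
    return np.sum((sol.y[0] - c_data) ** 2)       # sum of squared errors, as in the paper

res = minimize(sse, x0=[0.05, 1.0], bounds=[(1e-4, 1.0), (1e-3, 50.0)])
print("fitted (k, K):", res.x, " SSE:", res.fun)
```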

Keywords: Anion exchange resin, Eley-Rideal, Langmuir-Hinshelwood-Hougen-Watson, transesterification.

7499 The Relationship between Business-model Innovation and Firm Value: A Dynamic Perspective

Authors: Yung C. Ho, Hui C. Fang, Ming J. Hsieh

Abstract:

Although consistently innovative business models can give companies a competitive advantage, longitudinal empirical research that can reflect dynamic business-model changes has yet to prove a definitive connection. This study consequently employs a dynamic perspective in conjunction with innovation theory to examine the relationship between types of business-model innovation and firm value. It examines various types of business-model innovation in high-end and low-end technology industries, using HTC and the 7-Eleven chain stores as cases with research periods of 14 years and 32 years, respectively. The empirical results suggest that adopting radical business-model innovation in addition to expanding into new target markets can successfully lead to a competitive advantage. Sustained advanced technological competences and service/product innovation are the key success factors in high-end and low-end technology industry business models, respectively. In summary, business-model innovation can yield higher market value and financial value in high-end technology industries than in low-end ones.

Keywords: Business-model, Dynamic Perspective, Firm Value, Innovation

7498 Manual Testing of Web Software Systems Supported by Direct Guidance of the Tester Based On Design Model

Authors: Karel Frajtak, Miroslav Bures, Ivan Jelinek

Abstract:

Software testing is an important stage of the software development cycle. The current testing process involves a tester and electronic documents with test case scenarios. In this paper we focus on a new approach to the testing process using automated test case generation and tester guidance through the system, based on a model of the system. Test case generation and model-based testing are not possible without a proper system model. We aim to provide better feedback from the testing process, thus eliminating unnecessary paperwork.
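
A toy sketch of the test-generation idea, not the authors' tool: pages and transitions of a hypothetical web application are modeled as a directed graph, and candidate test case scenarios are produced by enumerating action paths from the start page. All page and action names are illustrative.

```python
from collections import deque

# hypothetical design model of a small web application: page -> {action: next page}
model = {
    "login":     {"submit_valid": "dashboard", "submit_invalid": "login"},
    "dashboard": {"open_profile": "profile", "logout": "login"},
    "profile":   {"save": "dashboard", "logout": "login"},
}

def generate_test_cases(model, start="login", max_steps=3):
    """Enumerate action sequences up to max_steps as candidate test case scenarios."""
    cases, queue = [], deque([(start, [])])
    while queue:
        page, path = queue.popleft()
        if path:
            cases.append(path)
        if len(path) < max_steps:
            for action, target in model.get(page, {}).items():
                queue.append((target, path + [f"{page} --{action}--> {target}"]))
    return cases

for case in generate_test_cases(model):
    print(" ; ".join(case))
```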

Keywords: Model based testing, test automation, test generating, tester support.

7497 An Extension of Multi-Layer Perceptron Based on Layer-Topology

Authors: Jānis Zuters

Abstract:

There are many extensions of the classic multi-layer perceptron (MLP) model. A notable number of them have been designed to hasten the learning process without considering the quality of generalization. The paper proposes a new MLP extension based on exploiting the topology of the input layer of the network. Experimental results show the extended model to improve generalization capability in certain cases. The new model requires additional computational resources compared to the classic model; nevertheless, the loss in efficiency is not regarded as significant.

Keywords: Learning algorithm, multi-layer perceptron, topology.

7496 Impedance of an Encircling Coil due to a Cylindrical Tube with Varying Properties

Authors: Valentina Koliskina

Abstract:

The change in impedance of an encircling coil is obtained in the present paper for the case where the electric conductivity and magnetic permeability of a metal cylindrical tube depend on the radial coordinate. The system of equations for the vector potential is solved by means of the Fourier cosine transform. The solution is expressed in terms of an improper integral containing modified Bessel functions of complex order.

Keywords: Eddy currents, magnetic permeability, Bessel functions.

7495 Geometric and Material Nonlinear Analysis of Reinforced Concrete Structure Considering Soil-Structure Interaction

Authors: Mohamed M. El-Gendy, Ibrahim A. El-Arabi, Rafik W. Abdel-Missih, Omar A. Kandil

Abstract:

In the present research, a finite element model is presented to study the geometric and material nonlinear behavior of reinforced concrete plane frames considering soil-structure interaction. The nonlinear behaviors of concrete and reinforcing steel are considered both in compression and in tension up to failure. The model also takes into account the number, diameter, and distribution of rebars along every cross section. Soil behavior is taken into consideration using four different models, namely linear and nonlinear Winkler models, and linear and nonlinear continuum models. A computer program (NARC) was specially developed in order to perform the analysis. The results achieved by the present model show good agreement with both theoretical and experimental results published in the literature. The nonlinear behavior of a rectangular frame resting on soft soil up to failure, analyzed using the proposed model, is presented for demonstration.

Keywords: Nonlinear analysis, geometric nonlinearity, material nonlinearity, reinforced concrete, finite element method, soil-structure interaction, Winkler's soil model, continuum soil model.

7494 A Methodological Test to Study the Concrete Workability with the Fractal Model

Authors: F. Achouri, K. Chouicha

Abstract:

The main parameters affecting workability are the water content, the particle size, and the total surface area of the grains, since the mixing water first wets the surface of the grains and then fills the voids between the grains to form entrapped water; the remaining quantity of water is called free water. The aim of this study is to develop a fractal approach to the relationship between the concrete formulation parameters and workability. To develop this approach, a series of concretes taken from the literature was investigated by varying formulation parameters such as G/S, the quantity of cement C and the quantity of water W. Two further models, the water layer thickness model and the paste layer thickness model, are also considered in order to judge their relevance. The results are as follows: the water layer thickness model is considered relevant when there is a variation in the water quantity, while the paste layer thickness model is only applicable if the paste is considered to be made with grains up to Dmax = 2.85, the value from which the model becomes stable.

Keywords: Concrete, fractal method, paste layer thickness, water layer thickness, workability.

7493 Predicting Extrusion Process Parameters Using Neural Networks

Authors: Sachin Man Bajimaya, SangChul Park, Gi-Nam Wang

Abstract:

The objective of this paper is to estimate realistic principal extrusion process parameters by means of an artificial neural network. Conventionally, finite element analysis is used to derive process parameters. However, the finite element analysis of the extrusion model does not consider the manufacturing process constraints in its modeling. Therefore, the process parameters obtained through such an analysis remain highly theoretical. Alternatively, process development in industrial extrusion is to a great extent based on trial and error and often involves full-size experiments, which are both expensive and time-consuming. The artificial neural network-based estimation of the extrusion process parameters prior to plant execution helps to make the actual extrusion operation more efficient because more realistic parameters may be obtained. Thus, it bridges the gap between simulation and the real manufacturing execution system. In this work, a suitable neural network is designed and trained using an appropriate learning algorithm; the trained network is then used to predict the manufacturing process parameters.
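
A minimal sketch of the prediction step, assuming scikit-learn and synthetic training pairs in place of the paper's FEA/experimental data: a small feed-forward network maps hypothetical job descriptors (extrusion ratio, billet temperature, ram speed) to a made-up "ram pressure" target. Names and numbers are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# hypothetical inputs: [extrusion ratio, billet temperature (C), ram speed (mm/s)]
X = rng.uniform([10, 350, 1], [60, 500, 15], size=(300, 3))
# made-up target "ram pressure" with some nonlinearity and noise, standing in for FEA output
y = 2.0 * X[:, 0] - 0.05 * X[:, 1] + 8.0 * np.log(X[:, 2]) + rng.normal(0, 2, 300)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
net.fit(X[:250], y[:250])                         # train on the first 250 samples
print("held-out R^2:", round(net.score(X[250:], y[250:]), 3))
print("predicted pressure for a new job:", net.predict([[35.0, 420.0, 6.0]]))
```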

Keywords: Artificial Neural Network (ANN), Indirect Extrusion, Finite Element Analysis, MES.

7492 Improved Processing Speed for Text Watermarking Algorithm in Color Images

Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari

Abstract:

Copyright protection and ownership proof of digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and ownership judgment of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing models and is therefore adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for attackers. Experiments resulted in an embedding speed of more than double that of the other systems considered (such as the least significant bit method and separate color code methods), and a fairly acceptable level of peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.
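
A small sketch of two standard ingredients mentioned above, the RGB-to-YIQ conversion and the PSNR measure, applied to a random image; the paper's actual random text-embedding and encryption steps are not reproduced, and the perturbation below is only a toy stand-in for the embedded text.

```python
import numpy as np

# standard NTSC RGB -> YIQ conversion matrix
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):
    """Convert an HxWx3 RGB image (floats in [0, 1]) to the YIQ model."""
    return img @ RGB2YIQ.T

def psnr(original, distorted, peak=1.0):
    mse = np.mean((original - distorted) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                     # stand-in for a cover image
yiq = rgb_to_yiq(img)

# toy "embedding": add a small perturbation to the Y channel where text bits would go
marked = yiq.copy()
marked[..., 0] += rng.choice([0.0, 0.004], size=(64, 64))
print("PSNR of the toy embedding: %.1f dB" % psnr(yiq, marked))
```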

Keywords: Steganography, watermarking, private keys, time complexity measurements.

7491 Development of a Catchment Water Quality Model for Continuous Simulations of Pollutants Build-up and Wash-off

Authors: Iqbal Hossain, Monzur Imteaz, Shirley Gato-Trinidad, Abdallah Shanableh

Abstract:

Estimation of runoff water quality parameters is required to determine appropriate water quality management options. Various models are used to estimate runoff water quality parameters; however, most provide event-based estimates for specific sites. The work presented in this paper describes the development of a model that continuously simulates the accumulation and wash-off of water quality pollutants in a catchment. The model allows estimation of pollutant build-up during dry periods and pollutant wash-off during storm events. It was developed by integrating two individual models: a rainfall-runoff model and a catchment water quality model. The rainfall-runoff model is based on the time-area runoff estimation method. The model allows users to estimate the time of concentration using a range of established methods, and to estimate the continuing runoff losses using any of the available estimation methods (i.e., constant, linearly varying or exponentially varying). Pollutant build-up in a catchment is represented by one of three pre-defined functions: power, exponential, or saturation. Similarly, pollutant wash-off is represented by one of three functions: power, rating-curve, or exponential. The developed runoff water quality model was set up to simulate the build-up and wash-off of total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The application of the model was demonstrated using available runoff and TSS field data from road and roof surfaces in the Gold Coast, Australia. The model provided an excellent representation of the field data, demonstrating the simplicity yet effectiveness of the proposed model.
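
A short sketch of the three build-up and three wash-off forms named above, with illustrative coefficient values; the actual calibrated coefficients for the Gold Coast data are not reproduced, and all defaults below are assumptions for the example only.

```python
import numpy as np

# pollutant build-up B(t) [kg/ha] after t dry days, in three pre-defined forms
def buildup_power(t, a=1.0, b=0.5):               return a * t ** b
def buildup_exponential(t, bmax=5.0, k=0.3):      return bmax * (1.0 - np.exp(-k * t))
def buildup_saturation(t, bmax=5.0, khalf=2.0):   return bmax * t / (khalf + t)

# wash-off [kg/ha] of the available load B for a runoff depth q [mm], in three forms
def washoff_power(B, q, c=0.2, n=1.5):            return B * min(1.0, c * q ** n)
def washoff_exponential(B, q, k=0.4):             return B * (1.0 - np.exp(-k * q))
def washoff_rating_curve(q, c=0.8, n=1.2):        return c * q ** n  # independent of the stored load

dry_days, runoff = 6.0, 4.0
B = buildup_saturation(dry_days)
print("TSS build-up after %.0f dry days: %.2f kg/ha" % (dry_days, B))
print("washed off in the storm event:    %.2f kg/ha" % washoff_exponential(B, runoff))
```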

Keywords: Catchment, continuous pollutants build-up, pollutants wash-off, runoff, runoff water quality model.

7490 Approaches to Determining Optimal Asset Structure for a Commercial Bank

Authors: Svetlana Saksonova

Abstract:

Every commercial bank optimises its asset portfolio depending on the profitability of assets and chosen or imposed constraints. This paper proposes and applies a stylized model for optimising banks' asset and liability structure, reflecting the profitability of different asset categories and their risks, as well as the costs associated with different liability categories and reserve requirements. The level of detail for asset and liability categories is chosen to create a suitably parsimonious model and to include the most important categories. It is shown that the most appropriate optimisation criterion for the model is the maximisation of the ratio of net interest income to assets, subject to several constraints. Some constraints are accounting identities or dictated by legislative requirements; others vary depending on the market objectives of a particular bank. The model predicts a variable amount of assets allocated to loan provision.
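
A highly stylized sketch of the optimisation idea, assuming scipy and made-up yields, costs and constraint levels: with the balance-sheet size fixed, maximising net interest income also maximises the ratio of net interest income to assets, so the asset mix can be found with a linear program subject to a reserve requirement and a liquidity floor. This is an illustration of the approach, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# asset categories: [cash reserves, securities, loans]; funding side taken as given
yields = np.array([0.000, 0.030, 0.075])       # hypothetical asset yields
total_assets = 100.0                           # fixed balance-sheet size
funding_cost = 2.1                             # hypothetical total cost of liabilities

A_eq = [[1.0, 1.0, 1.0]]                       # assets must sum to the balance-sheet size
b_eq = [total_assets]
A_ub = [[-1.0,  0.0, 0.0],                     # reserves >= 8 (reserve requirement)
        [-1.0, -1.0, 0.0]]                     # reserves + securities >= 25 (liquidity floor)
b_ub = [-8.0, -25.0]

# maximise yields @ x  <=>  minimise -yields @ x
res = linprog(-yields, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
nii = yields @ res.x - funding_cost
print("optimal mix (cash, securities, loans):", np.round(res.x, 1))
print("net interest income / assets: %.4f" % (nii / total_assets))
```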

Keywords: asset structure, commercial bank, model, optimisation

7489 Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation

Authors: Gautam Sarkar, Anjan Rakshit, Amitava Chatterjee, Kesab Bhattacharya

Abstract:

The present paper describes the development of a low-cost, highly accurate low-capacitance measurement system that can be used over a range of 0-400 pF with a resolution of 1 pF. The range of capacitance may be easily altered by a simple resistance or capacitance variation of the measurement circuit. The capacitance measurement system uses the quad two-input NAND Schmitt trigger circuit CD4093B, with hysteresis, for the measurement, and is integrated with a PIC 18F2550 microcontroller for data acquisition. The microcontroller interacts with software developed on the PC side through a USB architecture, and an attractive graphical user interface (GUI) is provided on the PC to give the user a real-time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, that effectively compensates lead capacitances, a notorious source of error in low capacitance measurements. The hysteresis provided in the Schmitt-trigger circuits enables reliable operation of the system by greatly minimizing the possibility of false triggering due to stray interference, usually regarded as another source of significant error. Real-life testing of the proposed system showed that it produces highly accurate capacitance measurements when compared to cutting-edge, high-end digital capacitance meters.

Keywords: Capacitance measurement, NAND Schmitt trigger, microcontroller, GUI, lead compensation, hysteresis.

7488 Robotic End-Effector Impedance Control without Expensive Torque/Force Sensor

Authors: Shiuh-Jer Huang, Yu-Chi Liu, Su-Hai Hsiang

Abstract:

A novel low-cost impedance control structure is proposed for monitoring the contact force between the end-effector and the environment without installing an expensive force/torque sensor. Theoretically, the end-effector contact force can be estimated from the superposition of the joint control torques, since there is a nonlinear matrix mapping between the joint motor control inputs and the end-effector force/torque vector. The new force control structure is implemented based on this estimated mapping matrix. First, the robot end-effector is moved to specified positions; then the force controller is actuated based on the Hall-sensor current feedback of each joint motor. The model-free fuzzy sliding mode control (FSMC) strategy is employed to design the position and force controllers, respectively. All the hardware circuits and software control programs are implemented on an Altera Nios II embedded development kit to constitute an embedded control structure for a retrofitted Mitsubishi 5-DOF robot. Experimental results show that the PI and FSMC force control algorithms can achieve a reasonable contact force monitoring objective with this hardware control structure.
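
A small numeric illustration of the torque-to-force mapping idea for a hypothetical planar two-link arm: in the static case the joint torques and the end-effector force are related by tau = J^T F, so a contact-force estimate can be recovered from the joint torques (obtained in the paper from motor-current feedback) via the pseudo-inverse of J^T. The link lengths, angles and force below are made up; the paper's actual nonlinear mapping for the 5-DOF robot is not reproduced.

```python
import numpy as np

def jacobian_2link(q, l1=0.4, l2=0.3):
    """Planar 2-link arm Jacobian mapping joint rates to end-effector (x, y) velocity."""
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.6, 0.9])                    # current joint angles [rad]
F_true = np.array([5.0, -2.0])              # actual contact force [N], unknown in practice
tau = jacobian_2link(q).T @ F_true          # joint torques, estimated from motor currents

# estimate the contact force back from the torques: F = (J^T)^+ tau
F_est = np.linalg.pinv(jacobian_2link(q).T) @ tau
print("estimated contact force [N]:", np.round(F_est, 3))
```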

Keywords: Robot, impedance control, fuzzy sliding mode control, contact force estimator.

7487 Dynamic Analyses for Passenger Volume of Domestic Airline and High Speed Rail

Authors: Shih-Ching Lo

Abstract:

The discrete choice model is the most widely used methodology for studying travelers' mode choice and demand. However, calibrating a discrete choice model requires a large amount of questionnaire survey data. In this study, an aggregate model is proposed instead. The historical passenger volumes for high speed rail and domestic civil aviation are employed to calibrate and validate the model. Different models are compared in order to propose the best one. The results show that systems of equations forecast better than a single equation does, and that models with the external variable, oil price, perform better than models based on a closed-system assumption.

Keywords: forecasting, passenger volume, dynamic competition model, external variable, oil price

7486 An Erosion-based Modeling of Abrasive Waterjet Turning

Authors: I. Zohourkari, M. Zohoor

Abstract:

In this paper, an erosion-based model for the abrasive waterjet (AWJ) turning process is presented. Using a modified Hashish erosion model, the volume of material removed by abrasive particles impacting the surface of the rotating cylindrical specimen is estimated, and the radius reduction at each rotation is calculated. Unlike previous works, the proposed model considers the continuous change in local impact angle due to the change in workpiece diameter, the axial traverse rate of the jet, and the abrasive particle roundness and density. The accuracy of the proposed model is examined by experimental tests under various traverse rates. The final diameters estimated by the proposed model are in good agreement with the experiments.

Keywords: Abrasive, Erosion, impact, Particle, Waterjet, Turning.

7485 A Dual Model for Efficiency Evaluation Considering Time Lag Effect

Authors: Yan Shuang Zhang, Taehan Lee, Byung Ho Jeong

Abstract:

A DEA model can generally evaluate performance using multiple inputs and outputs for the same period. However, it is sometimes hard to avoid a production lead time phenomenon, for example in long-term projects or marketing activities. A few models have been suggested to capture this time lag issue in the context of DEA. This paper develops a dual-MPO model to deal with the time lag effect in evaluating efficiency. A numerical example is also given to show that the proposed model can be used to obtain the efficiency and reference sets of inefficient DMUs, and to obtain the projected target values of the input attributes that inefficient DMUs need in order to become efficient.
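
For background, a minimal sketch of the standard input-oriented CCR envelopment model solved with scipy's linprog on made-up data; the paper's dual-MPO time-lag formulation itself is not reproduced here, and the inputs/outputs below are purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# made-up data: 5 DMUs, 2 inputs, 1 output
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])   # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                            # outputs
n = len(X)

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o: minimise theta over (theta, lambda)."""
    c = np.r_[1.0, np.zeros(n)]                               # objective: minimise theta
    # input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(-1, 1), X.T]
    # output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(n):
    print(f"DMU {o + 1}: efficiency = {ccr_efficiency(o):.3f}")
```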

Keywords: DEA, efficiency, time lag, dual problem.

7484 A Bayesian Network Reliability Modeling for FlexRay Systems

Authors: Kuen-Long Leu, Yung-Yuan Chen, Chin-Long Wey, Jwu-E Chen, Chung-Hsien Hsu

Abstract:

The increasing importance of FlexRay systems in the automotive domain continues to inspire related research. One primary issue in this research is verifying the reliability of FlexRay systems, either from the protocol aspect or from the system design aspect. However, research rarely discusses the effect of network topology on system reliability. In this paper, we illustrate how to model the reliability of FlexRay systems with various network topologies using a well-known probabilistic reasoning technology, the Bayesian network. In this illustration, we especially investigate the effectiveness of the error containment built into the star topology and of the fault-tolerant midpoint synchronization algorithm adopted in the FlexRay communication protocol. Through a FlexRay steer-by-wire case study, the influence of different topologies on the failure probability of the FlexRay steer-by-wire system is demonstrated. The notable value of this research is to show that Bayesian network inference is a powerful and feasible method for the reliability assessment of FlexRay systems.

Keywords: Bayesian Network, FlexRay, fault tolerance, network topology, reliability.

7483 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and Support Vector Machines (SVM). For each method, different parameter settings were analyzed, and the best result of each technique was then compared. Initially the data were coded using thermometer coding (for numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always indicate the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications. All these details are discussed in light of the results found, and an overview is given in the conclusion of this study.
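
A condensed sketch of a one-against-all comparison of this kind, assuming scikit-learn and a synthetic three-class data set standing in for the bank's records; an RBF-kernel SVM is used below in place of the ANN-RBF, which scikit-learn does not provide directly, and all sizes and parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# synthetic stand-in for the 5,432 clients x 15 attributes with 3 classes
X, y = make_classification(n_samples=2000, n_features=15, n_informative=8,
                           n_classes=3, weights=[0.48, 0.29, 0.23], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "ANN-MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "LR":      LogisticRegression(max_iter=2000),
    "SVM":     SVC(kernel="rbf"),               # RBF-kernel SVM as a stand-in for ANN-RBF
}
for name, clf in models.items():
    ovr = make_pipeline(StandardScaler(), OneVsRestClassifier(clf))
    ovr.fit(X_tr, y_tr)
    y_hat = ovr.predict(X_te)
    print(name, "accuracy: %.3f" % np.mean(y_hat == y_te))
    print(confusion_matrix(y_te, y_hat))        # basis for the per-class TP/FP/TN/FN rates
```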

Keywords: Artificial Neural Networks, ANNs, classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines.

7482 A Simulation Model for Bid Price Decision Making

Authors: R. Sammoura

Abstract:

In Lebanon, public construction projects are awarded to the contractor submitting the lowest bid price in a competitive bidding process. The contractor has to make a strategic decision in choosing the bid price that will offer a satisfactory profit with a high probability of winning. A simulation model for bid price decision making based on lowest-bid evaluation is developed. The model, built using the Crystal Ball decision-engineering software, considers two main factors affecting the bidding process: the number of qualified bidders and the size of the project. The validity of the model is tested on twelve separate projects. The study also shows how to use the model to conduct risk analysis and to help a specific contractor decide on a bid price, with an associated certainty level, in a scientific manner.
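
A simple Monte Carlo sketch of the lowest-bid decision problem, written in Python rather than Crystal Ball and using made-up distributions: competitors' bids are simulated around the estimated cost, and for each candidate markup the probability of winning and the expected profit are estimated. The cost, bidder count and competitor-bid distribution are assumptions for the example only.

```python
import numpy as np

rng = np.random.default_rng(0)
cost = 1_000_000.0            # contractor's estimated project cost (hypothetical)
n_bidders = 5                 # number of qualified competitors
n_sims = 100_000

# assumed competitor behaviour: each bids the cost times a factor centred on a 10% markup
competitor_bids = cost * rng.normal(1.10, 0.06, size=(n_sims, n_bidders))
lowest_competitor = competitor_bids.min(axis=1)

print(" markup   P(win)   expected profit")
for markup in [0.02, 0.05, 0.08, 0.10, 0.12, 0.15]:
    my_bid = cost * (1 + markup)
    win = my_bid < lowest_competitor                  # lowest bid wins the contract
    expected_profit = np.mean(win * (my_bid - cost))
    print(f"  {markup:4.0%}   {win.mean():6.3f}   {expected_profit:12,.0f}")
```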

Keywords: Bid price, Competition, Decision making, Simulation.

7481 Analysis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises

Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov

Abstract:

For an optimal unbiased mean-square filter operating in the presence of anomalous noises in the observation memory channel, we prove that the filter is insensitive to inaccurate knowledge of the anomalous noise intensity matrix and that it is equivalent to a truncated filter constructed only from the non-anomalous components of the observation vector.

Keywords: Mathematical expectation, filtration, anomalous noise, memory.

7480 Dynamical Transmission Model of Chikungunya in Thailand

Authors: P. Pongsumpun

Abstract:

One of the important tropical diseases is chikungunya. This disease is transmitted between humans by an insect-borne virus of the genus Alphavirus. It occurs in Africa, Asia and the Indian subcontinent. In Thailand, the incidence of this disease is increasing every year. In this study, the transmission of the disease is studied through dynamical model analysis.

Keywords: Chikungunya, dynamical model, endemic region, Routh-Hurwitz criteria.

7479 Rice cDNA Encoding PROLM is Capable of Rescuing Salt Sensitive Yeast Phenotypes G19 and Axt3K from Salt Stress

Authors: Prasad Senadheera, Younousse Saidi, Frans JM Maathuis

Abstract:

A rice seed expression (cDNA) library in the Lambda ZAP II® phage, constructed from developing grain 10-20 days after flowering, was transformed into yeast for functional complementation assays in three salt-sensitive yeast mutants: S. cerevisiae strains CY162, G19 and Axt3K. G19 and Axt3K cells transformed with the pYES vector carrying cDNA inserts showed enhanced tolerance compared to those carrying the empty pYES vector. Sequencing of the cDNA inserts revealed that they encode a putative protein with sequence homology to the rice putative protein PROLM24 (Os06g31070), a prolamin precursor. Expression of this cDNA did not affect yeast growth in the absence of salt. The Axt3K and G19 strains expressing PROLM24 were able to grow at up to 400 mM and 600 mM NaCl, respectively. Similarly, the Axt3K mutant expressing PROLM24 showed a comparatively higher growth rate in medium with excess LiCl (50 mM). The observation that expression of PROLM24 rescued the salt-sensitive phenotypes of G19 and Axt3K indicates the existence of a regulatory system that ameliorates the effect of salt stress in the transformed yeast mutants. However, the exact function of the cDNA sequence, which shows partial sequence homology to yeast UTR1, is not clear. Although UTR1 is involved in ferrous uptake and iron homeostasis in yeast cells, there is no evidence of its role in Na+ homeostasis. The absence of transmembrane regions in the Os06g31070 protein indicates that salt tolerance is achieved not through direct functional complementation of the mutant genes but through an alternative mechanism.

Keywords: Rice seed expression, salt stress, prolamin, salinity tolerance, Oryza sativa

7478 Frequency-Variation Based Method for Parameter Estimation of Transistor Amplifier

Authors: Akash Rathee, Harish Parthasarathy

Abstract:

In this paper, a frequency-variation based method is proposed for transistor parameter estimation in a common-emitter transistor amplifier circuit. We design an algorithm to estimate the transistor parameters based on noisy measurements of the output voltage when the input voltage is a sine wave of variable frequency and constant amplitude. The common-emitter amplifier circuit is modelled using the transistor Ebers-Moll equations, and the perturbation technique is used to separate the linear and nonlinear parts of the Ebers-Moll equations. This model of the amplifier is used to determine the amplitude of the output sinusoid as a function of the frequency and the parameter vector. Then, applying the proposed method to the frequency components, the transistor parameters are estimated. Compared to the conventional time-domain least squares method, the proposed method requires much less data storage and results in more accurate parameter estimation, as it exploits information in the time and frequency domains simultaneously. The proposed method can be utilized for parameter estimation of an analog device over its operating range of frequencies, as it uses output-signal data collected at different frequencies.
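
A toy illustration of the frequency-variation idea, assuming scipy and substituting a first-order low-pass stage for the paper's Ebers-Moll-based amplitude expression: the output amplitude is recorded at several input frequencies and the circuit parameter is recovered by least squares from the amplitude-versus-frequency data. The corner frequency and noise level are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def gain(freq, fc):
    """Amplitude response of a first-order low-pass stage with corner frequency fc."""
    return 1.0 / np.sqrt(1.0 + (freq / fc) ** 2)

freqs = np.logspace(2, 6, 25)                 # sweep the input sine from 100 Hz to 1 MHz
true_fc = 12e3
measured = gain(freqs, true_fc) + rng.normal(0, 0.01, freqs.size)   # noisy output amplitudes

(fc_est,), cov = curve_fit(gain, freqs, measured, p0=[1e3])
print("estimated corner frequency: %.0f Hz (true %.0f Hz)" % (fc_est, true_fc))
```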

Keywords: Perturbation Technique, Parameter estimation, frequency-variation based method.

7477 A Hidden Markov Model for Modeling Pavement Deterioration under Incomplete Monitoring Data

Authors: Nam Lethanh, Bryan T. Adey

Abstract:

In this paper, the potential use of an exponential hidden Markov model to model a hidden pavement deterioration process, i.e. one that is not directly measurable, is investigated. It is assumed that the evolution of the physical condition, which is the hidden process, and the evolution of the values of pavement distress indicators can be adequately described using discrete condition states and modeled as Markov processes. It is also assumed that condition data can be collected by visual inspections over time and represented continuously using an exponential distribution. The advantage of using such a model in the decision-making process is illustrated through an empirical study using real-world data.
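
A compact sketch of the forward-filtering step for a model of this kind, written directly in numpy: three hidden condition states evolve under a Markov transition matrix, and the observed distress indicator in each state is exponentially distributed with a state-specific mean. All transition probabilities, means and inspection values are illustrative, not the paper's estimates.

```python
import numpy as np

# hidden condition states: 0 = good, 1 = fair, 2 = poor (deterioration only moves forward)
P = np.array([[0.80, 0.15, 0.05],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])            # transition matrix per inspection interval
mean_distress = np.array([2.0, 6.0, 15.0])    # mean observed distress value in each state
pi0 = np.array([1.0, 0.0, 0.0])               # a new pavement starts in the good state

def exp_pdf(x, mean):
    return np.exp(-x / mean) / mean

def forward_filter(observations):
    """Forward algorithm: P(state_t | data up to t) and the data log-likelihood."""
    belief, loglik = pi0.copy(), 0.0
    for x in observations:
        belief = belief @ P                   # predict one inspection interval ahead
        belief *= exp_pdf(x, mean_distress)   # weight by the exponential emission density
        norm = belief.sum()
        loglik += np.log(norm)
        belief /= norm
    return belief, loglik

obs = [1.8, 2.5, 5.1, 7.4, 12.0]              # inspection data over five intervals
posterior, ll = forward_filter(obs)
print("filtered state probabilities:", np.round(posterior, 3),
      " log-likelihood:", round(ll, 2))
```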

Keywords: Deterioration modeling, Exponential distribution, Hidden Markov model, Pavement management

7476 Combined Fuzzy and Predictive Controller for Unity Power Factor Converter

Authors: Abdelhalim Kessal

Abstract:

This paper presents the design of a combined control scheme for a single-phase power factor correction (PFC) converter. The proposed control strategy consists of two parts: the first regulates the outer loop (the DC output voltage), and the second governs the input current of the converter in order to achieve a sinusoidal waveform in phase with the grid voltage. Two kinds of regulators are used: a fuzzy controller for the outer loop and a predictive controller for the inner loop. The controllers are verified and discussed through simulation on the MATLAB/Simulink platform, and an experimental confirmation is also provided. The results show high dynamic performance under various parameter changes.

Keywords: Boost converter, harmonic distortion, Fuzzy, prediction, unity power factor.
