Search results for: multi-factorial error modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3184

1954 Multi-Agent Systems Applied in the Modeling and Simulation of Biological Problems: A Case Study in Protein Folding

Authors: Pedro Pablo González Pérez, Hiram I. Beltrán, Arturo Rojo-Domínguez, Máximo Eduardo Sánchez Gutiérrez

Abstract:

The multi-agent system approach has proven to be an effective and appropriate abstraction level for constructing whole models of a diversity of biological problems, integrating aspects found in both "micro" and "macro" approaches to modeling this type of phenomenon. Taking these considerations into account, this paper presents the key computational characteristics gathered into a novel bioinformatics framework built upon a multi-agent architecture. The version of the tool presented herein allows studying and exploring complex problems belonging principally to structural biology, such as protein folding. The bioinformatics framework is used as a virtual laboratory to explore a minimalist model of protein folding as a test case. To show the laboratory concept of the platform as well as its flexibility and adaptability, we studied the folding of two particular sequences, one of 45-mer and another of 64-mer, both described by an HP model (only hydrophobic and polar residues) on a coarse-grained 2D square lattice. As discussed in this work, these two sequences were chosen as stress tests for the platform, in order to determine which tools had to be created or improved to meet the computational and analytical demands of a given difficult sequence. The underlying philosophy is that the continued study of sequences itself reveals features to be added to the platform, improving its efficiency over time, as demonstrated herein.
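
As an illustration of the minimalist HP lattice model referred to above (not part of the authors' framework), the following minimal sketch scores a chain conformation on a 2D square lattice by counting contacts between non-bonded hydrophobic residues; the sequence and coordinates are hypothetical.

```python
# Minimal HP-model energy on a 2D square lattice (illustrative sketch only).
# A conformation is a list of (x, y) lattice sites, one per residue, in chain order.
def hp_energy(sequence: str, coords: list) -> int:
    """Energy = -1 for every contact between non-consecutive H residues."""
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):            # skip chain neighbours
            if sequence[i] == "H" and sequence[j] == "H":
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:                          # adjacent lattice sites
                    energy -= 1
    return energy

# Hypothetical 6-mer folded into a U-shaped conformation:
print(hp_energy("HPHPPH", [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]))  # -1
```

A folding search, such as one carried out by the agents in such a platform, would then look for self-avoiding conformations minimizing this energy.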

Keywords: multi-agent systems, blackboard-based agent architecture, bioinformatics framework, virtual laboratory, protein folding.

1953 An Accurate Method for Phylogeny Tree Reconstruction Based on a Modified Wild Dog Algorithm

Authors: Essam Al Daoud

Abstract:

This study solves a phylogeny problem using modified wild dog pack optimization. The least-squares error is used as the cost function to be minimized. Therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes were selected and collected from the National Center for Biotechnology Information (NCBI) databanks (i.e., 16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa, and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than the other implemented methods.
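
Under the usual least-squares formulation mentioned above, the cost of a candidate tree is the squared difference between the observed distance matrix and the pairwise path-length distances implied by the tree; a minimal sketch with hypothetical 3-taxon matrices follows.

```python
import numpy as np

def least_squares_error(observed: np.ndarray, tree_distances: np.ndarray) -> float:
    """Sum of squared differences over the upper triangle of two distance matrices."""
    i, j = np.triu_indices_from(observed, k=1)
    return float(np.sum((observed[i, j] - tree_distances[i, j]) ** 2))

# Hypothetical example: observed pairwise distances vs. distances implied
# by a candidate tree topology with given branch lengths.
D_obs  = np.array([[0.00, 0.30, 0.45],
                   [0.30, 0.00, 0.50],
                   [0.45, 0.50, 0.00]])
D_tree = np.array([[0.00, 0.28, 0.47],
                   [0.28, 0.00, 0.49],
                   [0.47, 0.49, 0.00]])
print(least_squares_error(D_obs, D_tree))  # cost the optimizer would minimize over trees
```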

Keywords: Least squares, neighbor joining, phylogenetic tree, wild dog pack.

1952 Comparison of the Existing Methods in Determination of the Characteristic Polynomial

Authors: Mohammad Saleh Tavazoei, Mohammad Haeri

Abstract:

This paper presents a comparison among methods for determining the characteristic polynomial coefficients. First, the resultant systems from the methods are compared on the basis of frequency criteria such as the closed-loop bandwidth and the gain and phase margins. Then the step responses of the resultant systems are compared on the basis of transient behavior criteria, including overshoot, rise time, settling time, and error (via the IAE, ITAE, ISE, and ITSE integral indices). The relative stability of the systems is also compared. Finally, the best choices with regard to these diverse criteria are presented.
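
The transient-error indices cited above have standard integral definitions (IAE = ∫|e|dt, ITAE = ∫t|e|dt, ISE = ∫e²dt, ITSE = ∫te²dt); the sketch below approximates them numerically from a sampled error signal and is illustrative only, not tied to the paper's systems.

```python
import numpy as np

def error_indices(t: np.ndarray, e: np.ndarray) -> dict:
    """IAE, ITAE, ISE, ITSE by trapezoidal integration of the error signal e(t)."""
    return {
        "IAE":  np.trapz(np.abs(e), t),
        "ITAE": np.trapz(t * np.abs(e), t),
        "ISE":  np.trapz(e ** 2, t),
        "ITSE": np.trapz(t * e ** 2, t),
    }

# Hypothetical step-response error: a decaying oscillation about the set point.
t = np.linspace(0.0, 10.0, 1001)
e = np.exp(-0.8 * t) * np.cos(3.0 * t)
print(error_indices(t, e))
```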

Keywords: Characteristic Polynomial, Transient Response, Filters, Stability.

1951 Numerical Simulation of Free Surface Water Wave for the Flow around NACA 0012 Hydrofoil and Wigley Hull Using VOF Method

Authors: Saadia Adjali, Omar Imine, Mohammed Aounallah, Mustapha Belkadi

Abstract:

Steady three-dimensional and two-dimensional free surface waves generated by moving bodies are presented. The flow problem to be simulated is rich in complexity and poses many modeling challenges because of the existence of breaking waves around the ship hull and because of the interaction of the two-phase flow with the turbulent boundary layer. The results of several simulations are reported. The first study was performed for the NACA0012 hydrofoil with different meshes; this section is analyzed at h/c = 1.0345 in 2D. In the second simulation, a mathematically defined Wigley hull form is used to investigate the application of a commercial CFD code to the prediction of the total resistance and its components from the tangential and normal forces on the hull wetted surface. The computed resistance and wave profiles are used to estimate the total resistance coefficient for the Wigley hull advancing in calm water under steady conditions. The commercial CFD software FLUENT version 12 is used for the computations in the present study. The computational grid is generated using the code GAMBIT 2.3.26. The k-ω SST (shear stress transport) model is used for turbulence modeling, and the volume of fluid technique is employed to simulate the free-surface motion. The second-order upwind scheme is used for discretizing the convection terms in the momentum transport equations, and the modified HRIC scheme is used for the VOF discretization. The results obtained compare well with the experimental data.
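
For readers checking the Wigley-hull results, the total resistance coefficient quoted above is normally formed by non-dimensionalising the computed resistance force; the sketch below uses hypothetical values for speed, wetted surface, and drag, not results from the paper.

```python
def total_resistance_coefficient(R_total: float, rho: float, U: float, S: float) -> float:
    """C_T = R / (0.5 * rho * U^2 * S), the standard non-dimensional form."""
    return R_total / (0.5 * rho * U ** 2 * S)

# Hypothetical model-scale values: fresh water, 2 m/s speed, 1.5 m^2 wetted surface, 12 N drag.
print(total_resistance_coefficient(R_total=12.0, rho=998.0, U=2.0, S=1.5))
```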

Keywords: Free surface flows, Breaking waves, Boundary layer, Wigley hull, Volume of fluid.

1949 Application of Artificial Neural Networks for Temperature Forecasting

Authors: Mohsen Hayati, Zahra Mohebi

Abstract:

In this paper, the application of neural networks to the design of short-term temperature forecasting (STTF) systems for Kermanshah city, in western Iran, is explored. An important neural network architecture, the multi-layer perceptron (MLP), is used to model the STTF system. The MLP was trained and tested using ten years (1996-2006) of meteorological data. The results show that the MLP network achieves the minimum forecasting error and can be considered a good method for modeling STTF systems.
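
A minimal sketch of an MLP-based short-term forecaster in the spirit described above, written with scikit-learn on synthetic data; the lag features, network size, and data are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic daily temperatures: seasonal cycle plus noise (placeholder for real station data).
rng = np.random.default_rng(0)
days = np.arange(3650)
temps = 15 + 10 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 2, days.size)

# Features: the previous `lag` days' temperatures; target: the next day's temperature.
lag = 3
X = np.array([temps[i:i + lag] for i in range(temps.size - lag)])
y = temps[lag:]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("MAE (deg C):", mean_absolute_error(y_test, mlp.predict(X_test)))
```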

Keywords: Artificial neural networks, Forecasting, Weather, Multi-layer perceptron.

1948 Application of a Dual Satellite Geolocation System on Locating Sweeping Interference

Authors: M. H. Chan

Abstract:

This paper describes an application of a dual satellite geolocation (DSG) system to identifying and locating the unknown source of uplink sweeping interference. The geolocation system integrates joint time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurements with an ephemeris correction technique, and it successfully demonstrated high accuracy in locating the interference source. The factors affecting the location error are also discussed.
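
For reference, the joint TDOA/FDOA measurements such a dual-satellite system relies on are commonly written, in a simplified form with a stationary ground emitter at x, satellites at s1 and s2 with velocities v1 and v2, uplink frequency f0, and propagation speed c, as

```latex
\Delta t = \frac{\lVert \mathbf{x}-\mathbf{s}_1 \rVert - \lVert \mathbf{x}-\mathbf{s}_2 \rVert}{c},
\qquad
\Delta f = \frac{f_0}{c}\left(
\frac{\mathbf{v}_1\cdot(\mathbf{x}-\mathbf{s}_1)}{\lVert \mathbf{x}-\mathbf{s}_1 \rVert}
-\frac{\mathbf{v}_2\cdot(\mathbf{x}-\mathbf{s}_2)}{\lVert \mathbf{x}-\mathbf{s}_2 \rVert}\right),
```

each measurement constraining the emitter to a curve on the Earth's surface whose intersection gives the location estimate; the ephemeris correction mentioned above refines the assumed satellite positions and velocities.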

Keywords: Dual satellite geolocation system, DSG, geolocation, TDOA/FDOA, sweeping interference.

1947 Influence of Dilution and Lean-premixed on Mild Combustion in an Industrial Burner

Authors: Sh. Khalilarya, H. Oryani, S. Jafarmadar, H. Khatamnezhad, A. Nemati

Abstract:

Understanding how and where NOx formation occurs in an industrial burner is very important for the efficient and clean operation of utility burners. The importance of this problem also stems from its relation to the pollutants produced by the burners widely used in gas turbines in thermal power plants and in the glass and steel industries. In this article, a numerical model of an industrial burner operating in MILD combustion is validated with experimental data. The influence of air flow rate and air temperature on combustor temperature profiles and NOx production is then investigated. In addition, this study reports the effects of fuel and air dilution (with the inert gases H2O, CO2, and N2), and of lean premixing of the fuel, on the temperature profiles and NOx emissions. The conservation equations of mass, momentum, and energy, and the transport equations of species concentrations, together with turbulence, combustion, radiation, and NO modeling equations, were solved to obtain the temperature and NO distributions inside the burner. The results show that dilution reduces the temperature and NOx emission, suppresses flame propagation inside the furnace, and makes the flame invisible. Dilution with H2O decreases NOx further than dilution with N2 or CO2. As the lean-premixing level is raised, the local burner temperature and NOx production decrease, because premixing prevents local "hot spots" within the combustor volume that can lead to significant NOx formation. Lean premixing of fuel with air also supplies more air to the reaction zone than is actually needed to burn the fuel, which further limits NOx formation.

Keywords: Mild combustion, Flameless, Numerical simulation, Burner, CFD.

1946 Turing Pattern in the Oregonator Revisited

Authors: Elragig Aiman, Dreiwi Hanan, Townley Stuart, Elmabrook Idriss

Abstract:

In this paper, we reconsider the analysis of the Oregonator model. We highlight an error in this analysis which leads to an incorrect depiction of the parameter region in which diffusion-driven instability is possible. We believe that the cause of the oversight is the complexity of stability analyses based on eigenvalues and the dependence on parameters of the matrix minors appearing in stability calculations. We regenerate the parameter space where Turing patterns can be seen, and we use the common Lyapunov function (CLF) approach, which is numerically reliable, to further confirm the dependence of the results on the magnitudes of the diffusion coefficients.
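
For context, the diffusion-driven (Turing) instability conditions that such an analysis checks, for a two-species reaction-diffusion system u_t = D_u ∇²u + f(u, v), v_t = D_v ∇²v + g(u, v) with Jacobian entries f_u, f_v, g_u, g_v evaluated at the homogeneous steady state, are standardly written as

```latex
f_u + g_v < 0, \qquad f_u g_v - f_v g_u > 0, \qquad
D_v f_u + D_u g_v > 0, \qquad
\left(D_v f_u + D_u g_v\right)^2 > 4\, D_u D_v \left(f_u g_v - f_v g_u\right).
```

The first two conditions require stability of the kinetics alone; the last two allow the instability to appear only once unequal diffusion is introduced.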

Keywords: Diffusion driven instability, common Lyapunov function (CLF), Turing pattern, positive-definite matrix.

1945 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be challenging for reservoir engineers due to the various diagenetic processes that cause a variety of properties throughout the reservoir. A good estimate of reservoir heterogeneity, defined as the variation in rock properties with location in a reservoir or formation, helps in modeling the reservoir and thus offers a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe this heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties because of the similarities of the environments in which different beds were deposited. To illustrate the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), which are reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum from that point of view. In this paper, we correlate the statistics-based Lorenz method with a petroleum concept, i.e., the Kozeny-Carman equation, and derive the straight-line Lorenz plot for a homogeneous system. Finally, we apply the two methods to a heterogeneous field in southern Iran and discuss each separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
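
As a pointer to how the Lorenz coefficient mentioned above is usually computed (not the paper's implementation): layers are ordered by decreasing k/φ, cumulative flow capacity (kh) is plotted against cumulative storage capacity (φh), and L is twice the area between that curve and the unit diagonal. The layer data in the sketch below are hypothetical.

```python
import numpy as np

def lorenz_coefficient(k: np.ndarray, phi: np.ndarray, h: np.ndarray) -> float:
    """Lorenz coefficient L in [0, 1]; L = 0 for a perfectly homogeneous reservoir."""
    order = np.argsort(k / phi)[::-1]                      # best-flowing layers first
    flow = np.concatenate(([0.0], np.cumsum(k[order] * h[order])))
    storage = np.concatenate(([0.0], np.cumsum(phi[order] * h[order])))
    flow /= flow[-1]
    storage /= storage[-1]
    area_under_curve = np.trapz(flow, storage)
    return 2.0 * (area_under_curve - 0.5)                  # area between curve and diagonal

# Hypothetical five-layer column: permeability (mD), porosity (fraction), thickness (m).
k   = np.array([250.0, 80.0, 15.0, 500.0, 40.0])
phi = np.array([0.22, 0.18, 0.10, 0.25, 0.15])
h   = np.array([2.0, 3.0, 1.5, 1.0, 2.5])
print(round(lorenz_coefficient(k, phi, h), 3))
```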

Keywords: Carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L).

1944 A Low-Voltage Current-Mode Wheatstone Bridge using CMOS Transistors

Authors: Ebrahim Farshidi

Abstract:

This paper presents a new circuit arrangement for a current-mode Wheatstone bridge that is suitable for low-voltage integrated circuit implementation. Compared to other proposed circuits, this circuit features a significant reduction in the number of elements, a low supply voltage (1 V), and low power consumption (<350 uW). In addition, the circuit has a favorable nonlinearity error (<0.35%), operates with multiple sensors, and works from a single supply voltage. The circuit employs MOSFET transistors, so it can be used in standard CMOS fabrication. Simulation results in HSPICE show the high performance of the circuit and confirm the validity of the proposed design technique.

Keywords: Wheatstone bridge, current-mode, low-voltage, MOS.

1943 Speech Data Compression using Vector Quantization

Authors: H. B. Kekre, Tanuja K. Sarode

Abstract:

Transforms are mostly used for speech data compression; such algorithms are lossy. They are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE, and FCG. The results table shows the computational complexity of these three algorithms. We also introduce a new performance parameter, the Average Fractional Change in Speech Sample (AFCSS). Our FCG algorithm gives far better performance than the others in terms of mean absolute error, AFCSS, and complexity.

Keywords: Vector Quantization, Data Compression, Encoding, Speech coding.

1942 Rotor Bearing System Analysis Using the Transfer Matrix Method with Thickness Assumption of Disk and Bearing

Authors: Omid Ghasemalizadeh, Mohammad Reza Mirzaee, Hossein Sadeghi, Mohammad Taghi Ahmadian

Abstract:

There are many different ways to find the natural frequencies of a rotating system. One of the most effective methods, used because of its precision, is the transfer matrix method. With this method the entire continuous system is subdivided and the corresponding differential equation can be stated in matrix form. To analyze the shaft considered in this paper, the rotor is divided into several elements along the shaft, each with its own mass and moment of inertia, which makes it possible to define the transfer matrix. Choosing a larger number of elements enlarges the matrix and, as a result, yields more accurate answers. In this paper the dynamics of a rotor-bearing system is analyzed, considering the gyroscopic effect. To increase the accuracy of the modeling, the thickness of the disk and bearings is also taken into account, which leads to a more complicated matrix to be solved. Including these parameters changes the results considerably; these differences are shown in the results. As noted above, defining the transfer matrix to obtain the natural frequencies of the system requires introducing such elements. For the boundary conditions of these elements, the bearings at the ends of the shaft are modeled as equivalent springs and dampers for the discretized system, while a continuous model is used for the shaft. With these considerations and the transfer matrix method, accurate results are obtained from the calculations. The results show that increasing the bearing thickness decreases the vibration amplitude, while the stiffness of the shaft and the natural frequencies of the system increase. Consequently, ignoring the influence of bearing and disk thicknesses yields unrealistic answers.

Keywords: Rotor System, Disk and Bearing Thickness, Transfer Matrix, Amplitude.

1941 Constraints on IRS Control: An Alternative Approach to Tax Gap Analysis

Authors: J. T. Manhire

Abstract:

A tax authority wants to take actions it knows will foster the greatest degree of voluntary taxpayer compliance to reduce the “tax gap.” This paper suggests that even if a tax authority could attain a state of complete knowledge, there are constraints on whether and to what extent such actions would result in reducing the macro-level tax gap. These limits are not merely a consequence of finite agency resources. They are inherent in the system itself. To show that this is one possible interpretation of the tax gap data, the paper formulates known results in a different way by analyzing tax compliance as a population with a single covariate. This leads to a standard use of the logistic map to analyze the dynamics of non-compliance growth or decay over a sequence of periods. This formulation gives the same results as the tax gap studies performed over the past fifty years in the U.S. given the published margins of error. Limitations and recommendations for future work are discussed, along with some implications for tax policy.
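
For readers unfamiliar with the logistic map the abstract leans on, the sketch below iterates x_{n+1} = r·x_n·(1 − x_n) for a hypothetical non-compliance fraction; the parameter values are illustrative and are not calibrated to any tax-gap data.

```python
def logistic_map(r: float, x0: float, periods: int) -> list:
    """Iterate x_{n+1} = r * x_n * (1 - x_n), with x in [0, 1]."""
    xs = [x0]
    for _ in range(periods):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Illustrative run: a non-compliance share of 15% evolving over 10 periods.
print([round(x, 3) for x in logistic_map(r=2.8, x0=0.15, periods=10)])
```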

Keywords: Tax law, tax compliance, tax gap, income tax.

1940 Traffic Violation Detection System based on RFID

Authors: S. Hajeb, M. Javadi, S. M. Hashemi, P. Parvizi

Abstract:

Road traffic accidents are a major cause of disability and death throughout the world. The control of intelligent vehicles in order to reduce human error and ease congestion cannot be accomplished solely with human resources. The present article introduces an intelligent control system based on RFID technology. With the help of RFID technology, vehicles are connected to computerized systems, intelligent light poles, and other hardware available along the way. In this project, the intelligent control system is capable of tracking all vehicles, crisis management and control, traffic guidance, and recording driving offences along the highway.

Keywords: RFID, Intelligent highway, Traffic violation

1939 Part of Speech Tagging Using Statistical Approach for Nepali Text

Authors: Archit Yajnik

Abstract:

Part-of-speech tagging has always been a challenging task in natural language processing. This article presents POS tagging for Nepali text using a Hidden Markov Model and the Viterbi algorithm. From the annotated Nepali corpus, training and testing data sets are randomly separated, and both methods are applied to them. The Viterbi algorithm is found to be computationally faster and more accurate than HMM; an accuracy of 95.43% is achieved using the Viterbi algorithm. An error analysis of where the mismatches took place is discussed in detail.
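
A compact sketch of Viterbi decoding for an HMM tagger of the kind described, using a tiny hypothetical two-tag model; a real tagger would work in log space with smoothed probabilities estimated from the annotated corpus.

```python
def viterbi(words, tags, start_p, trans_p, emit_p):
    """Return the most probable tag sequence for `words` under an HMM."""
    V = [{t: (start_p[t] * emit_p[t].get(words[0], 1e-9), [t]) for t in tags}]
    for w in words[1:]:
        layer = {}
        for t in tags:
            prob, path = max(
                (V[-1][prev][0] * trans_p[prev][t] * emit_p[t].get(w, 1e-9),
                 V[-1][prev][1] + [t])
                for prev in tags
            )
            layer[t] = (prob, path)
        V.append(layer)
    return max(V[-1].values())[1]

# Hypothetical NOUN/VERB model over a toy (romanised) sentence.
tags  = ["NOUN", "VERB"]
start = {"NOUN": 0.7, "VERB": 0.3}
trans = {"NOUN": {"NOUN": 0.3, "VERB": 0.7}, "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit  = {"NOUN": {"ram": 0.6, "ghara": 0.4}, "VERB": {"gayo": 0.9, "ram": 0.1}}
print(viterbi(["ram", "ghara", "gayo"], tags, start, trans, emit))  # ['NOUN', 'NOUN', 'VERB']
```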

Keywords: Hidden Markov model, Viterbi algorithm, POS tagging, natural language processing.

1938 Advanced Convolutional Neural Network Paradigms: Comparison of VGG16 with ResNet50 in Crime Detection

Authors: Taiwo M. Akinmuyisitan, John Cosmas

Abstract:

This paper practically demonstrates the theories and concepts of an advanced convolutional neural network in the design and development of a scalable artificial intelligence model for the detection of criminal masterminds. The technique uses machine vision algorithms to compute the facial characteristics of suspects and classify faces as criminal or non-criminal. The paper then compares the error and accuracy of two popular pre-trained convolutional networks, VGG16 and ResNet50. The results show that VGG16 is probably more efficient than ResNet50 for the dataset we used.
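
A minimal sketch (not the authors' code) of comparing VGG16 and ResNet50 as frozen feature extractors for a binary face classification task, using the Keras applications API; the head architecture and training data are assumptions for illustration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, ResNet50

def build_classifier(backbone_cls):
    """Wrap a pre-trained ImageNet backbone with a small binary-classification head."""
    base = backbone_cls(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                           # keep the pre-trained weights frozen
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),       # criminal vs. non-criminal face
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

vgg_model    = build_classifier(VGG16)
resnet_model = build_classifier(ResNet50)
# Both models would then be trained on the same labelled face dataset and their
# validation error compared, e.g. model.fit(train_ds, validation_data=val_ds, epochs=10).
```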

Keywords: Artificial intelligence, convolutional neural networks, ResNet50, VGG16.

1937 A New Approach to Predicting Physical Biometrics from Behavioural Biometrics

Authors: Raid R. O. Al-Nima, S. S. Dlay, W. L. Woo

Abstract:

A relationship between face and signature biometrics is established in this paper. A new approach is developed to predict faces from signatures by using artificial intelligence. A multilayer perceptron (MLP) neural network is used to generate face details from features extracted from signatures; here the face is the physical biometric and the signature is the behavioural biometric. The new method establishes a relationship between the two biometrics and regenerates a visible face image from the signature features. Furthermore, the performance efficiency of our new technique is demonstrated in terms of minimum error rates compared to published work.

Keywords: Behavioural biometric, Face biometric, Neural network, Physical biometric, Signature biometric.

1936 Integrated Method for Detection of Unknown Steganographic Content

Authors: Magdalena Pejas

Abstract:

This article presents an integrated method for detecting steganographic content embedded by new, unknown programs. The method is based on data mining and aggregated hypothesis testing. The article contains the theoretical basics used to deploy the proposed detection system and a description of the improvements proposed to the basic system idea. The main experimental results and implementation details are then collected and described. Finally, example test results are presented.

Keywords: Steganography, steganalysis, data embedding, data mining, feature extraction, knowledge base, system learning, hypothesis testing, error estimation, black box program, file structure.

1935 Cooperative CDD Scheme Based On Adaptive Modulation in Wireless Communication System

Authors: Seung-Jun Yu, Hwan-Jun Choi, Hyoung-Kyu Song

Abstract:

Among spatial diversity schemes, orthogonal space-time block codes (OSTBC) and cyclic delay diversity (CDD) have been widely studied for cooperative wireless relaying systems. However, conventional OSTBC and CDD cannot cope with changes in the number of relays, owing to low throughput or poor error performance. In this paper, we propose a cooperative cyclic delay diversity (CDD) scheme that uses hierarchical modulation at the source and adaptive modulation based on a cyclic redundancy check (CRC) code at the relays.

Keywords: Adaptive modulation, Cooperative communication, CDD, OSTBC.

1934 Electrodermal Activity Measurement Using Constant Current AC Source

Authors: Cristian Chacha, David Asiain, Jesús Ponce de León, José Ramón Beltrán

Abstract:

This work explores and characterizes the behavior of the AFE AD5941 in impedance measurement using an embedded algorithm that allows the use of a constant current AC source. The main aim of this research is to improve the accuracy of impedance measurements for their application in EDA-focused wearable devices. Through comprehensive study and characterization, it has been observed that employing a measurement sequence with a constant current source produces results with increased dispersion but higher accuracy and a more linear behavior with respect to error. As a result, this approach leads to a more accurate system for impedance measurement.

Keywords: Electrodermal Activity, constant current AC source, wearable, precision, accuracy, impedance.

1933 Visual-Graphical Methods for Exploring Longitudinal Data

Authors: H. W. Ker

Abstract:

Longitudinal data typically exhibit changes over time, nonlinear growth patterns, between-subjects variability, and within-subject errors showing heteroscedasticity and dependence. Their exploration is more complicated than that of cross-sectional data. The purpose of this paper is to organize and integrate various visual-graphical techniques for exploring longitudinal data. By applying the proposed methods, investigators can answer research questions that include characterizing or describing growth patterns at both the group and individual level, identifying the time points where important changes occur and identifying unusual subjects, selecting suitable statistical models, and suggesting possible within-error variance structures.

Keywords: Data exploration, exploratory analysis, HLMs/LMEs, longitudinal data, visual-graphical methods.

1932 Development Partitioning Intervalwise Block Method for Solving Ordinary Differential Equations

Authors: K. H. Khairul Anuar, K. I. Othman, F. Ishak, Z. B. Ibrahim, Z. Majid

Abstract:

Our aim in this paper is to solve ordinary differential equations (ODEs) using the Partitioning Block Intervalwise (PBI) technique. The PBI technique is based on the Block Adams Method and the Backward Differentiation Formula (BDF). The Block Adams Method uses only simple iteration, while BDF requires Newton-like iteration involving the Jacobian matrix of the ODEs, which consumes a considerable amount of computational effort. Therefore, PBI is developed in order to reduce the cost of iteration within an acceptable maximum error.

Keywords: Adams Block Method, BDF, Ordinary Differential Equations, Partitioning Block Intervalwise.

1931 Septic B-Spline Collocation Method for Numerical Solution of the Kuramoto-Sivashinsky Equation

Authors: M. Zarebnia, R. Parvaz

Abstract:

In this paper the Kuramoto-Sivashinsky equation is solved numerically by a collocation method. The solution is approximated as a linear combination of septic B-spline functions. Applying the von Neumann stability analysis technique, we show that the method is unconditionally stable. The method is applied to some test examples, and the numerical results are compared with the exact solutions. The global relative error and the L∞ error in the solutions show the computational efficiency of the method.
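
For reference, the one-dimensional Kuramoto-Sivashinsky equation being solved is commonly written as

```latex
u_t + u\,u_x + u_{xx} + u_{xxxx} = 0, \qquad x \in [a, b],\; t > 0,
```

with the collocation method approximating u(x, t) as a linear combination of septic B-splines, as stated in the abstract, and enforcing the equation at the collocation points.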

Keywords: Kuramoto-Sivashinsky equation, Septic B-spline, Collocation method, Finite difference.

1930 Efficient Electromagnetic Modeling of Dual-Gate Transistor with Iterative Method Using Auxiliary Sources

Authors: Z. Harouni, L. Osman, M. Yeddes, A. Gharsallah, H. Baudrand

Abstract:

In this paper, an efficient wave concept iterative process (WCIP) with auxiliary sources is presented for the full-wave investigation of an active microwave structure in microstrip technology. Good agreement between the experimental and simulation results is observed.

Keywords: WCIP, Dual-Gate Transistor, Auxiliary source.

1929 Accurate Crosstalk Analysis for RLC On-Chip VLSI Interconnect

Authors: Susmita Sahoo, Madhumanti Datta, Rajib Kar

Abstract:

This work proposes an accurate crosstalk noise estimation method in the presence of multiple RLC lines for use in design automation tools. The method correctly models the loading effects of non-switching aggressors and aggressor tree branches using the resistive shielding effect and realistic exponential input waveforms. Noise peak and width expressions have been derived. The results obtained are in good agreement with SPICE results and show that the average error for the noise peak is 4.7% and for the width is 6.15%, while allowing a very fast analysis.

Keywords: Crosstalk, distributed RLC segments, On-Chip interconnect, output response, VLSI, noise peak, noise width.

1928 The Flashbulb Memory of the Positive and Negative Events: Wenchuan Earthquake and Acceptance to College

Authors: Aiping Liu, Xiaoping Ying, Jing Luo

Abstract:

Fifty-three college students answered questions regarding the circumstances in which they first heard the news of the Wenchuan earthquake or the news of their acceptance to college, events that took place approximately one year earlier, and answered the questions again two years later. The number of details recalled about their circumstances was high for both events and did not decline two years later. However, consistency in the reported details over the two years was low. Participants were more likely to construct central information (e.g., Where were you?) than peripheral information (e.g., What were you wearing?), and their confidence in the central information was higher than in the peripheral information, which indicates that they constructed more when they were more confident.

Keywords: flashbulb memory, consistency, reconstructive error, confidence

1927 A Review of In-Orbit Observations of Radiation-Induced Effects in Commercial Memories Onboard Alsat-1

Authors: Y. Bentoutou, A. M. Si Mohammed

Abstract:

This paper presents a review of an 8-year study on radiation effects in commercial memory devices operating within the main on-board computer system OBC386 of the Algerian microsatellite Alsat-1. A statistical analysis of single-event upset (SEU) and multiple-bit upset (MBU) activity in these commercial memories shows that the typical SEU rate at Alsat-1's orbit is 4.04 × 10⁻⁷ SEU/bit/day, where 98.6% of these SEUs cause single-bit errors, 1.22% cause double-byte errors, and the remaining SEUs result in multiple-bit and severe errors.
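
As a back-of-the-envelope reminder of how such an SEU rate is formed from in-orbit logs, the sketch below divides an upset count by the memory size and observation time; the numbers are hypothetical, not the Alsat-1 figures.

```python
def seu_rate(upsets: int, memory_bits: int, days: float) -> float:
    """Observed single-event-upset rate in SEU/bit/day."""
    return upsets / (memory_bits * days)

# Hypothetical example: 77 upsets logged in a 16 Mbit memory over 8 years.
rate = seu_rate(upsets=77, memory_bits=16 * 2 ** 20, days=8 * 365.25)
print(f"{rate:.2e} SEU/bit/day")
```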

Keywords: Radiation effects, error detection and correction, satellite computer, small satellite mission.

1926 A Proposed Trust Model for the Semantic Web

Authors: Hoda Waguih

Abstract:

A serious problem on the WWW is finding reliable information. Not everything found on the Web is true, and the Semantic Web does not change that in any way. The problem will be even more crucial for the Semantic Web, where agents will be integrating and using information from multiple sources. Thus, if an incorrect premise is used due to a single faulty source, then any conclusions drawn may be in error. Statements published on the Semantic Web therefore have to be seen as claims rather than facts, and there should be a way to decide which among many possibly inconsistent sources is most reliable. In this work, we propose a trust model for the Semantic Web. The proposed model is inspired by the use of trust in human society. Trust is a type of social knowledge and encodes evaluations of which agents can be taken as reliable sources of information or services. Our proposed model allows agents to decide which among different sources of information to trust and thus act rationally on the Semantic Web.

Keywords: Semantic Web, Trust, Web of Trust, WWW.

1925 An Optimized Design of Non-uniform Filterbank

Authors: Ram Kumar Soni, Alok Jain, Rajiv Saxena

Abstract:

The tree-structured approach to non-uniform filterbanks (NUFB) is normally used for perfect reconstruction (PR). PR is not always feasible due to certain limitations, i.e., constraints in selecting design parameters and design complexity, and the output is sometimes severely affected by aliasing error if the necessary and sufficient conditions for PR are not satisfied perfectly. Therefore, researchers have shown a general interest in near-perfect reconstruction (NPR). In this proposed work, an optimized tree-structure technique is used for the design of an NPR non-uniform filterbank. Window functions of the Blackman family are used to design the prototype FIR filter. A single-variable linear optimization is used to minimize the amplitude distortion. The main feature of the proposed design is its simplicity combined with the linear phase property.

Keywords: Tree structure, NUFB, QMF, NPR.
