Search results for: time complexity.
7029 An Approach for Reducing the Computational Complexity of LAMSTAR Intrusion Detection System using Principal Component Analysis
Authors: V. Venkatachalam, S. Selvan
Abstract:
The security of computer networks plays a strategic role in modern computer systems. Intrusion Detection Systems (IDS) act as the 'second line of defense', placed inside a protected network and looking for known or potential threats in network traffic and/or audit data recorded by hosts. We developed an Intrusion Detection System using a LAMSTAR neural network to learn patterns of normal and intrusive activities and to classify observed system activities, and compared the performance of the LAMSTAR IDS with other classification techniques using five classes of KDDCup99 data. The LAMSTAR IDS gives better performance, but at the cost of high computational complexity, training time, and testing time, when compared to other classification techniques (Binary Tree classifier, RBF classifier, Gaussian Mixture classifier). We further reduced the computational complexity of the LAMSTAR IDS by reducing the dimension of the data using principal component analysis, which in turn reduces the training and testing time with almost the same performance.
Keywords: Binary Tree Classifier, Gaussian Mixture, Intrusion Detection System, LAMSTAR, Radial Basis Function.
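A minimal sketch of the dimensionality-reduction step described above, assuming scikit-learn; the random matrix is an illustrative stand-in for the preprocessed KDDCup99 records (41 features), and the 95%-variance threshold is an assumption, not the authors' exact setting:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in for preprocessed KDDCup99 feature vectors (41 features).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 41))

# Standardize, then project onto the leading principal components.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95)             # keep 95% of the variance
X_reduced = pca.fit_transform(X_std)

print(X_reduced.shape)  # fewer columns -> lower training/testing cost
```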
7028 Production Line Layout Planning Based on Complexity Measurement
Authors: Guoliang Fan, Aiping Li, Nan Xie, Liyun Xu, Xuemei Liu
Abstract:
Mass customization production increases the difficulty of production line layout planning. The material distribution process for a variety of parts is very complex, which greatly increases the cost of material handling and logistics. In response to this problem, this paper presents an approach to production line layout planning based on complexity measurement. First, by analyzing the influencing factors of equipment layout, a complexity model of the production line is established using information entropy theory. Then, the cost of part logistics is derived considering the different varieties of parts. Furthermore, an optimization function with two objectives, the lowest cost and the least configuration complexity, is built. Finally, the validity of the function is verified in a case study. The results show that the proposed approach can find the layout scheme with the lowest logistics cost and the least complexity. Optimized production line layout planning can effectively improve production efficiency and equipment utilization at the lowest cost and complexity.
Keywords: Production line, layout planning, complexity measurement, optimization, mass customization.
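The abstract does not give the entropy model itself; as a hedged illustration, the sketch below computes the Shannon information entropy H = -Σ p_i log2 p_i of a layout's part-flow distribution, the standard quantity behind information-entropy complexity measures (the flow counts are made-up example data):

```python
import numpy as np

def layout_complexity(flow_counts):
    """Shannon entropy (bits) of the part-flow distribution between stations."""
    p = np.asarray(flow_counts, dtype=float)
    p = p[p > 0] / p.sum()                  # normalize to probabilities
    return float(-(p * np.log2(p)).sum())

# Example: material flows between station pairs for two candidate layouts.
print(layout_complexity([40, 40, 10, 10]))  # more even flows -> higher entropy
print(layout_complexity([85, 5, 5, 5]))     # concentrated flows -> lower entropy
```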
7027 Estimating Shortest Circuit Path Length Complexity
Authors: Azam Beg, P. W. Chandana Prasad, S. M. N. A. Senenayake
Abstract:
When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for a larger number of Monte Carlo data points. The neural model shows strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. Therefore, the model can be considered a method of predicting path length complexities; this is expected to lead to minimum time complexity of very large-scale integrated circuits and related computer-aided design tools that use binary decision diagrams.
Keywords: Monte Carlo circuit simulation data, binary decision diagrams, neural network modeling, shortest path length estimation.
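A minimal sketch of the kind of regression model described, assuming scikit-learn; the synthetic training pairs (variable count and minterm count versus a noisy path-length trend) are placeholders, not the paper's ISCAS data or fitted relationship:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Placeholder features: (number of variables, number of minterms).
X = rng.uniform([5, 10], [50, 5000], size=(200, 2))
# Placeholder target: a noisy path-length trend (illustrative only).
y = 0.8 * X[:, 0] + 0.001 * X[:, 1] + rng.normal(scale=0.5, size=200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,),
                                   max_iter=5000, random_state=1))
model.fit(X, y)
rms = float(np.sqrt(np.mean((model.predict(X) - y) ** 2)))
print(f"training RMS error: {rms:.3f}")
```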
7026 Complexity Analysis of Some Known Graph Coloring Instances
Authors: Jeffrey L. Duffany
Abstract:
Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs where the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size, and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
Keywords: graph coloring, complexity, algorithm.
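As a hedged illustration of two of the quantities mentioned, the sketch below computes the square of an adjacency matrix and the edge density for a small made-up graph; the paper's mobility and block-diagonal-zero measures are not reproduced here:

```python
import numpy as np

# Adjacency matrix of a small example graph (a 5-cycle).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]])

A2 = A @ A                           # entry (i, j) counts length-2 walks i -> j
n = A.shape[0]
density = A.sum() / (n * (n - 1))    # fraction of possible directed edges present

print(A2)
print(f"density = {density:.2f}")
```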
7025 Feature Selection Approaches with Missing Values Handling for Data Mining - A Case Study of Heart Failure Dataset
Authors: N. Poolsawad, C. Kambhampati, J. G. F. Cleland
Abstract:
In this paper, we investigate the characteristics of a clinical dataset with respect to feature selection and classification measurements that deal with the missing values problem, and we propose appropriate techniques to achieve the aim of the activity; this research aims to find features that have a high effect on mortality and mortality time frame. We quantify the complexity of the clinical dataset. According to the complexity of the dataset, we propose a data mining process to cope with its complexity: missing values, high dimensionality, and the prediction problem, using methods of missing value replacement, feature selection, and classification. The experimental results will be extended to develop a prediction model for cardiology.
Keywords: feature selection, missing values, classification, clinical dataset, heart failure.
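A minimal sketch of such a pipeline, assuming scikit-learn; the random matrix stands in for the clinical records, and mean imputation, univariate selection, and a random forest are generic choices, not necessarily the methods compared in the paper:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))
X[rng.random(X.shape) < 0.1] = np.nan      # inject 10% missing values
y = rng.integers(0, 2, size=300)           # placeholder mortality label

clf = make_pipeline(
    SimpleImputer(strategy="mean"),         # missing value replacement
    SelectKBest(f_classif, k=8),            # feature selection
    RandomForestClassifier(random_state=2)  # classification
)
clf.fit(X, y)
print(clf.score(X, y))
```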
7024 Selective Intra Prediction Mode Decision for H.264/AVC Encoders
Authors: Jun Sung Park, Hyo Jung Song
Abstract:
H.264/AVC offers considerably higher coding efficiency than other compression standards such as MPEG-2, but its computational complexity is significantly increased. In this paper, we propose selective mode decision schemes for fast intra prediction mode selection. The objective is to reduce the computational complexity of the H.264/AVC encoder without significant rate-distortion performance degradation. In our proposed schemes, the intra prediction complexity is reduced by limiting the luma and chroma prediction modes using the directional information of the 16×16 prediction mode. Experimental results are presented to show that the proposed schemes reduce the complexity by up to 78% while maintaining similar PSNR quality with about a 1.46% bit rate increase on average.
Keywords: Video encoding, H.264, Intra prediction.
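As a hedged illustration of the mode-limiting idea (the abstract does not give the authors' exact mapping), the sketch below restricts the nine 4×4 luma modes to those roughly aligned with the direction of the best 16×16 mode; the candidate subsets are illustrative assumptions:

```python
# H.264 intra 16x16 modes: 0=vertical, 1=horizontal, 2=DC, 3=plane.
# Illustrative subsets of the nine 4x4 modes (0..8): each 16x16 direction
# keeps only roughly aligned 4x4 directions plus DC (assumed mapping).
CANDIDATES_4x4 = {
    0: [0, 2, 5, 7],      # vertical-ish modes + DC
    1: [1, 2, 6, 8],      # horizontal-ish modes + DC
    2: [0, 1, 2],         # DC: keep only the cheapest directions
    3: [2, 3, 4, 5, 6],   # plane: keep diagonal-ish modes + DC
}

def modes_to_test(best_16x16_mode):
    """Return the reduced 4x4 candidate list instead of all nine modes."""
    return CANDIDATES_4x4[best_16x16_mode]

print(modes_to_test(0))   # fewer rate-distortion evaluations per 4x4 block
```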
7023 A New H.264-Based Rate Control Algorithm for Stereoscopic Video Coding
Authors: Yi Liao, Wencheng Yang, Gangyi Jiang
Abstract:
Based on an investigation of the impact of the complexity of stereoscopic frame pairs on stereoscopic video coding and transmission, a new rate control algorithm is presented. The proposed rate control algorithm is performed on three levels: stereoscopic group of pictures (SGOP) level, stereoscopic frame (SFrame) level, and frame level. A temporal-spatial frame complexity model is first established; in the bit allocation stage, the frame complexity, position significance, and reference property between the left and right frames are taken into account. Meanwhile, the target buffer is set according to the frame complexity. Experimental results show that the proposed method can efficiently control the bitrates and outperforms the fixed quantization parameter method from the rate-distortion perspective, with an average PSNR gain between rate-distortion curves (BDPSNR) of 0.21 dB.
Keywords: Stereoscopic video coding, rate control, stereoscopic group of pictures, complexity of stereoscopic frame pairs.
7022 Urdu Nastaleeq Optical Character Recognition
Authors: Zaheer Ahmad, Jehanzeb Khan Orakzai, Inam Shamsher, Awais Adnan
Abstract:
This paper discusses the characteristics of the Urdu script, Urdu Nastaleeq, and a simple but novel and robust technique to recognize printed Urdu script without a lexicon. Urdu, belonging to the Arabic script family, is cursive and complex in nature; the main complexity of Urdu compound/connected text is not its connections but the forms/shapes its characters assume when placed at the initial, middle, or final position of a word. The character recognition technique presented here uses this inherent complexity of the Urdu script to solve the problem. A word is scanned and analyzed for its level of complexity; the point where the level of complexity changes is marked as a character boundary, segmented, and fed to a neural network. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on average.
Keywords: Cursive Script, OCR, Urdu.
7021 Dynamic Data Partition Algorithm for a Parallel H.264 Encoder
Authors: Juntae Kim, Jaeyoung Park, Kyoungkun Lee, Jong Tae Kim
Abstract:
The H.264/AVC standard is a highly efficient video codec providing high-quality video at low bit rates. As it employs advanced techniques, its computational complexity has increased. This complexity is the major problem in implementing a real-time encoder and decoder. Parallelism is one approach to address it and can be exploited on multi-core systems. We analyze macroblock-level parallelism, which ensures the same bit rate with high concurrency across processors. In order to reduce the encoding time, a dynamic data partition based on macroblock regions is proposed. This data partition has advantages in load balancing and data communication overhead. Using the data partition, the encoder obtains more than a 3.59x speed-up on a four-processor system. This work can be applied to other multimedia processing applications.
Keywords: H.264/AVC, video coding, thread-level parallelism, OpenMP, multimedia.
7020 Performance Complexity Measurement of Tightening Equipment Based on Kolmogorov Entropy
Authors: Guoliang Fan, Aiping Li, Xuemei Liu, Liyun Xu
Abstract:
The performance of tightening equipment declines over the course of operation in a manufacturing system. The main manifestations are the increasing randomness and degree of discretization of the tightening performance. To evaluate the degradation tendency of the tightening performance accurately, a complexity measurement approach based on Kolmogorov entropy is presented. First, the states of the performance index are divided in order to calibrate the degree of discretization. Then, the complexity measurement model based on Kolmogorov entropy is built. The model describes the performance degradation tendency of tightening equipment quantitatively. Finally, a case study verifies the efficiency and validity of the approach. The results show that the presented complexity measurement can effectively evaluate the degradation tendency of tightening equipment. It can provide a theoretical basis for preventive maintenance and life prediction of equipment.
Keywords: Complexity measurement, Kolmogorov entropy, manufacturing system, performance evaluation, tightening equipment.
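The abstract does not give the model's equations; as a hedged stand-in (Shannon entropy of discretized states rather than a full Kolmogorov entropy estimate), the sketch below discretizes a performance index such as torque into states and measures how spread out the state distribution has become, the kind of quantity such a measure grows with as behavior becomes more random (signal and bin count are illustrative):

```python
import numpy as np

def state_entropy(signal, n_states=8):
    """Discretize a performance signal into states and return the
    Shannon entropy of the state distribution (in bits)."""
    edges = np.linspace(signal.min(), signal.max(), n_states + 1)
    states = np.clip(np.digitize(signal, edges) - 1, 0, n_states - 1)
    counts = np.bincount(states, minlength=n_states)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
healthy = 100 + 0.5 * rng.normal(size=2000)   # tight torque distribution
degraded = 100 + 3.0 * rng.normal(size=2000)  # more scattered torque
print(state_entropy(healthy), state_entropy(degraded))
```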
7019 Identifying Chaotic Architecture: Origins of Nonlinear Design Theory
Authors: Mohammadsadegh Zanganehfar
Abstract:
With the emergence of modern architecture, an aggressive desire for new design theories appeared in the works of architects and critics. The discourse of complexity and volumetric composition became an important and controversial issue in the discipline of architecture, discussed from a general point of view in Robert Venturi's 1966 book "Complexity and Contradiction in Architecture". This paper attempts to identify chaos theory as a scientific model of complexity and its relation to architectural design theory by conducting a qualitative analysis and a multidisciplinary critical approach through architecture and basic science resources. Accordingly, we identify chaotic architecture as the correlation between chaos theory and the discipline of architecture, and as an independent nonlinear design theory with specific characteristics and properties.
Keywords: Architecture complexity, chaos theory, fractals, nonlinear dynamic systems, nonlinear ontology.
7018 Recursive Wiener-Khintchine Theorem
Authors: Khalid M. Aamir, Mohammad A. Maud
Abstract:
The Power Spectral Density (PSD) computed by taking the Fourier transform of the auto-correlation function (the Wiener-Khintchine theorem) gives better results for noisy data than the periodogram approach. However, the computational complexity of the Wiener-Khintchine approach is higher than that of the periodogram approach. For the computation of the short-time Fourier transform (STFT), this problem becomes even more prominent, since the PSD must be computed after every shift of the analysis window. In this paper, a recursive version of the Wiener-Khintchine theorem is derived using the sliding DFT approach for computing the STFT. The computational complexity of the proposed recursive Wiener-Khintchine algorithm, for a window size of N, is O(N).
Keywords: Power Spectral Density (PSD), Wiener-Khintchine Theorem, Periodogram, Short Time Fourier Transform (STFT), The Sliding DFT.
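A minimal sketch of the sliding-DFT recursion that underlies the approach, assuming NumPy: each one-sample shift updates all N bins via X_k ← (X_k − x_old + x_new)·e^{j2πk/N}, which costs O(N) per shift; the paper's auto-correlation/PSD step on top of this is not reproduced here.

```python
import numpy as np

def sliding_dft(x, N):
    """Yield the N-point DFT of each length-N window of x, updating
    every bin recursively in O(N) per shift instead of O(N log N)."""
    twiddle = np.exp(2j * np.pi * np.arange(N) / N)
    X = np.fft.fft(x[:N])                 # initialize on the first window
    yield X.copy()
    for t in range(len(x) - N):
        # Slide by one sample: drop x[t], append x[t+N], rotate all phases.
        X = (X - x[t] + x[t + N]) * twiddle
        yield X.copy()

rng = np.random.default_rng(4)
x = rng.normal(size=64)
spectra = list(sliding_dft(x, N=16))
# Check the recursion against a direct FFT of the last window.
print(np.allclose(spectra[-1], np.fft.fft(x[-16:])))  # True
```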
7017 Towards a Simulation Model to Ensure the Availability of Machines in Maintenance Activities
Authors: Maryam Gallab, Hafida Bouloiz, Youness Chater, Mohamed Tkiouat
Abstract:
The aim of this paper is to present a model based on multi-agent systems in order to manage maintenance activities and to ensure the reliability and availability of machines with just the required resources (operators, tools). The value of simulation is that it copes with the complexity of the system and produces results without expense or wasted time. An implementation of the model is carried out on the AnyLogic platform to display the defined performance indicators.
Keywords: Maintenance, complexity, simulation, multi-agent systems, AnyLogic platform.
7016 Bit Model Based Key Management Scheme for Secure Group Communication
Authors: R. Varalakshmi
Abstract:
Over the last decade, researchers have focused their interest on multicast group key management frameworks. The central research challenge is secure and efficient group key distribution. The present paper is based on a bit-model-based secure multicast group key distribution scheme using the most popular absolute encoder output type code, named Gray code. The focus is twofold. The first part deals with the reduction of computational complexity, which is achieved in our scheme by performing fewer multiplication operations during the key updating process. To optimize the number of multiplication operations, an O(1)-time algorithm to multiply two N-bit binary numbers, which could be used on an N x N bit-model reconfigurable mesh, is used in this proposed work. The second part aims at reducing the amount of information stored in the group center and group members while performing the update operation on the key content. A comparative analysis illustrating the performance of various key distribution schemes is presented in this paper, and it has been observed that the proposed algorithm reduces the computation and storage complexity significantly. Our proposed algorithm is suitable for high-performance computing environments.
Keywords: Multicast Group key distribution, Bit model, Integer Multiplications, reconfigurable mesh, optimal algorithm, Gray Code, Computation Complexity, Storage Complexity.
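A minimal sketch of the binary/Gray code conversions behind such a scheme; these conversions are standard, while the key-update protocol itself is not reproduced here:

```python
def to_gray(n: int) -> int:
    """Binary -> reflected Gray code: consecutive values differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Inverse mapping: XOR-fold the higher bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [to_gray(i) for i in range(8)]
print([format(c, "03b") for c in codes])  # one-bit changes between neighbors
assert all(from_gray(to_gray(i)) == i for i in range(256))
```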
7015 Soft Real-Time Fuzzy Task Scheduling for Multiprocessor Systems
Authors: Mahdi Hamzeh, Sied Mehdi Fakhraie, Caro Lucas
Abstract:
All practical real-time scheduling algorithms in multiprocessor systems present a trade-off between computational complexity and performance. In real-time systems, tasks have to be performed correctly and on time. Finding a minimal schedule in multiprocessor systems with real-time constraints is known to be NP-hard. Although some optimal algorithms have been employed in uniprocessor systems, they fail when applied to multiprocessor systems. Practical scheduling algorithms in real-time systems do not have deterministic response times. Deterministic timing behavior is an important parameter for system robustness analysis. The intrinsic uncertainty in dynamic real-time systems increases the difficulty of the scheduling problem. To alleviate these difficulties, we have proposed a fuzzy scheduling approach to arrange real-time periodic and non-periodic tasks in multiprocessor systems. Static and dynamic optimal scheduling algorithms fail under non-critical overload. In contrast, our approach balances the task loads of the processors successfully while providing starvation prevention and fairness, so that higher-priority tasks have a higher running probability. A simulation is conducted to evaluate the performance of the proposed approach. Experimental results show that the proposed fuzzy scheduler creates feasible schedules for homogeneous and heterogeneous tasks. It also considers task priorities, which leads to higher system utilization and lower deadline miss times. According to the results, it performs very close to the optimal schedule of uniprocessor systems.
Keywords: Computational complexity, Deadline, Feasible scheduling, Fuzzy scheduling, Priority, Real-time multiprocessor systems, Robustness, System utilization.
7014 Computing the Loop Bound in Iterative Data Flow Graphs Using Natural Token Flow
Authors: Ali Shatnawi
Abstract:
Signal processing applications that are iterative in nature are best represented by data flow graphs (DFGs). In these applications, the maximum sampling frequency depends on the topology of the DFG, the cyclic dependencies in particular. The determination of the iteration bound, which is the reciprocal of the maximum sampling frequency, is critical in the process of hardware implementation of signal processing applications. In this paper, a novel technique to compute the iteration bound is proposed. This technique differs from all previously proposed techniques in that it is based on the natural flow of tokens into the DFG rather than the topology of the graph. The proposed algorithm has a lower run-time complexity than all known algorithms. The performance of the proposed algorithm is illustrated through an analytical analysis of the time complexity, as well as through simulation of some benchmark problems.
Keywords: Data flow graph, Iteration period bound, Rate-optimal scheduling, Recursive DSP algorithms.
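For context, a minimal sketch of the classic loop-based definition the iteration bound satisfies (the maximum, over directed cycles, of total node computation time divided by the number of delay elements); this is the textbook reference computation, not the paper's token-flow technique, and it assumes the networkx package:

```python
import networkx as nx

def iteration_bound(G):
    """Max over directed cycles of (sum of node times) / (sum of delays).
    Enumerating cycles is exponential in general; fine for small DFGs."""
    best = 0.0
    for cycle in nx.simple_cycles(G):
        t = sum(G.nodes[v]["time"] for v in cycle)
        edges = list(zip(cycle, cycle[1:] + cycle[:1]))
        w = sum(G.edges[e]["delays"] for e in edges)
        if w > 0:
            best = max(best, t / w)
    return best

# Example DFG: two operations in a feedback loop with one delay element.
G = nx.DiGraph()
G.add_node("A", time=2)
G.add_node("B", time=4)
G.add_edge("A", "B", delays=1)
G.add_edge("B", "A", delays=0)
print(iteration_bound(G))  # (2 + 4) / 1 = 6.0
```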
7013 Comparative Study of Complexity in Streetscape Composition
Authors: Ahmed Mansouri, Naoji Matsumoto
Abstract:
This research is a comparative study of complexity, as a multidimensional concept, in the context of streetscape composition in Algeria and Japan. Eighty streetscape visual arrays were collected and then presented to 20 participants with different cultural backgrounds, in order to be categorized and classified according to their degree of complexity. Three analysis methods were used in this research: cluster analysis, a ranking method, and Hayashi's Quantification Method III. The results showed that complexity, disorder, irregularity, and disorganization are often conflicting concepts in the urban context. Algerian daytime streetscapes appear balanced, ordered, and regular, while Japanese daytime streetscapes appear unbalanced, regular, and vivid. Variety, richness, and irregularity with some aspects of order and organization seem to characterize Algerian night streetscapes. Japanese night streetscapes seem to be more related to balance, regularity, order, and organization, with some aspects of confusion and ambiguity. Complexity mainly characterized Algerian avenues with green infrastructure. For Japanese participants, Japanese traditional night streetscapes were complex, and for foreigners, Algerian and Japanese avenue nightscapes were the most complex visual arrays.
Keywords: Streetscape, Nightscape, Complexity, Visual Array, Affordance, Cluster Analysis, Hayashi Quantification Method.
7012 An Investigation on Efficient Spreading Codes for Transmitter Based Techniques to Mitigate MAI and ISI in TDD/CDMA Downlink
Authors: Abhijit Mitra, C. Ardil
Abstract:
We investigate efficient spreading codes for transmitter-based techniques in code division multiple access (CDMA) systems. The channel is assumed known at the transmitter, which is usual in a time division duplex (TDD) system, where the channel is assumed to be the same on the uplink and downlink. For such a TDD/CDMA system, both bitwise and blockwise multiuser transmission schemes are considered, where complexity is transferred to the transmitter side so that the receiver has minimum complexity. Different spreading codes are considered at the transmitter to spread the signal efficiently over the entire spectrum. The bit error rate (BER) curves portray the efficiency of the codes in the presence of multiple access interference (MAI) as well as inter-symbol interference (ISI).
Keywords: Code division multiple access, time division duplex, transmitter technique, precoding, pre-rake, rake, spreading code.
7011 Time Synchronization between the eNBs in E-UTRAN under the Asymmetric IP Network
Abstract:
In this paper, we present a method for time synchronization between two eNodeBs (eNBs) in an E-UTRAN (Evolved Universal Terrestrial Radio Access Network). The two eNBs cooperate in the so-called inter-eNB CA (Carrier Aggregation) case and are connected via an asymmetric IP network. We solve the problem by using broadcast signals generated in E-UTRAN as synchronization signals. The results show that time synchronization with the proposed method is possible with an error significantly less than 1 ms, which is sufficient considering that the transmission time interval is 1 ms in E-UTRAN. This makes this method (with its low complexity) more suitable than the Network Time Protocol (NTP) for mobile applications with generated broadcast signals where time synchronization over an asymmetric network is required.
Keywords: E-UTRAN, IP scheduled throughput, initial burst delay, synchronization, NTP, delay, asymmetric network.
7010 Complexity Reduction Approach with Jacobi Iterative Method for Solving Composite Trapezoidal Algebraic Equations
Authors: Mohana Sundaram Muthuvalu, Jumat Sulaiman
Abstract:
In this paper, the application of a complexity reduction approach based on half- and quarter-sweep iteration concepts with the Jacobi iterative method for solving composite trapezoidal (CT) algebraic equations is discussed. The performance of the methods for CT algebraic equations is studied comparatively through their application to solving linear Fredholm integral equations of the second kind. Furthermore, a computational complexity analysis and numerical results for three test problems are included in order to verify the performance of the methods.
Keywords: Complexity reduction approach, Composite trapezoidal scheme, Jacobi method, Linear Fredholm integral equations.
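A minimal sketch of the plain (full-sweep) Jacobi iteration the paper builds on, assuming NumPy; the half- and quarter-sweep variants, which update only every second or fourth point of the mesh, are not reproduced here:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Solve Ax = b by Jacobi iteration: x_new = D^{-1} (b - (A - D) x)."""
    D = np.diag(A)
    R = A - np.diagflat(D)              # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant example system (guarantees convergence).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b), np.linalg.solve(A, b))  # the two should agree
```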
7009 A Low Complexity Frequency Offset Estimation for MB-OFDM based UWB Systems
Authors: Wang Xue, Liu Dan, Liu Ying, Wang Molin, Qian Zhihong
Abstract:
A low-complexity, high-accuracy frequency offset estimator for multi-band orthogonal frequency division multiplexing (MB-OFDM) based ultra-wideband systems is presented, accounting for different carrier frequency offsets, different channel frequency responses, and different preamble patterns in different bands. Utilizing a half-cycle Constant Amplitude Zero Auto Correlation (CAZAC) sequence as the preamble sequence, an estimator with a semi-cross contrast scheme between two successive OFDM symbols is proposed. The CRLB and the complexity of the proposed algorithm are derived. Compared to the reference estimators, the proposed method achieves significantly lower complexity (about 50%) for all preamble patterns of MB-OFDM systems, while the derived CRLBs confirm good performance.
Keywords: CAZAC, Frequency Offset, Semi-cross Contrast, MB-OFDM, UWB.
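As a hedged illustration (the paper's semi-cross contrast scheme is not specified in the abstract), the sketch below shows the generic delay-and-correlate CFO estimator on a preamble made of two identical CAZAC halves: the phase of the correlation between the halves reveals the frequency offset.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 128
L = N // 2
# Zadoff-Chu sequence (a CAZAC sequence) used as one preamble half.
zc = np.exp(-1j * np.pi * np.arange(L) ** 2 / L)
s = np.concatenate([zc, zc])            # preamble: two identical halves

eps_true = 0.137                        # CFO in units of subcarrier spacing
n = np.arange(N)
r = s * np.exp(2j * np.pi * eps_true * n / N)   # received, noiseless here

P = np.vdot(r[:L], r[L:])               # sum of conj(r[n]) * r[n + L]
eps_hat = np.angle(P) * N / (2 * np.pi * L)
print(eps_true, eps_hat)                # estimates agree for |eps| < 1
```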
7008 Data-driven Multiscale Tsallis Complexity: Application to EEG Analysis
Authors: Young-Seok Choi
Abstract:
This work proposes a data-driven, multiscale quantitative measure to reveal the underlying complexity of the electroencephalogram (EEG), applied to a rodent model of hypoxic-ischemic brain injury and recovery. Motivated by the fact that real EEG recordings are nonlinear and non-stationary across frequencies or scales, an approach more suitable than conventional single-scale tools is needed for analyzing EEG data. Here, we present a new framework of complexity measures that considers changing dynamics over multiple oscillatory scales. The proposed multiscale complexity is obtained by calculating entropies of the probability distributions of the intrinsic mode functions extracted by empirical mode decomposition (EMD) of the EEG. To quantify EEG recordings from a rat model of hypoxic-ischemic brain injury following cardiac arrest, the multiscale version of Tsallis entropy is examined. To validate the proposed complexity measure, actual EEG recordings from rats (n=9) experiencing 7 min of cardiac arrest followed by resuscitation were analyzed. Experimental results demonstrate that the use of multiscale Tsallis entropy leads to better discrimination of injury levels and improved correlation with the neurological deficit evaluation 72 hours after cardiac arrest, suggesting an effective metric for use as a prognostic tool.
Keywords: Electroencephalogram (EEG), multiscale complexity, empirical mode decomposition, Tsallis entropy.
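A minimal sketch of the Tsallis entropy S_q = (1 − Σ p_i^q)/(q − 1) applied to a signal's amplitude histogram, assuming NumPy; the EMD stage and the rat EEG data are not reproduced here, so the inputs are placeholder signals:

```python
import numpy as np

def tsallis_entropy(signal, q=2.0, bins=32):
    """Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1) of the amplitude
    distribution of a signal (q -> 1 recovers Shannon entropy)."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

rng = np.random.default_rng(6)
# Placeholder "burst-suppressed" trace: mostly flat with occasional bursts.
suppressed = np.where(rng.random(4000) < 0.9, 0.05, 1.0) * rng.normal(size=4000)
baseline = rng.normal(size=4000)      # placeholder for a healthy-baseline trace
print(tsallis_entropy(suppressed), tsallis_entropy(baseline))
```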
7007 Bandwidth Efficient Diversity Scheme Using STTC Concatenated With STBC: MIMO Systems
Authors: Sameru Sharma, Sanjay Sharma, Derick Engles
Abstract:
Multiple-input multiple-output (MIMO) systems are widely used to improve the quality and reliability of wireless transmission and to increase spectral efficiency. However, in MIMO systems, multiple copies of the data are received after experiencing various channel effects. The complexity limitations of conventional decoding techniques, which grow with the number of antennas, are examined. Accordingly, we propose a modified sphere decoder (MSD-1) algorithm with lower complexity, giving rise to a system with high spectral efficiency. With the aim of increasing signal diversity, we apply a rotated quadrature amplitude modulation (QAM) constellation in multi-dimensional space. Finally, we propose a new architecture involving a space-time trellis code (STTC) concatenated with a space-time block code (STBC), using MSD-1 at the receiver, to improve system performance. The system gains have been verified under channel state information (CSI) errors.
Keywords: Channel State Information, Diversity, Multi-Antenna, Rotated Constellation, Space Time Codes.
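A minimal sketch of the classic two-antenna Alamouti STBC encoding, the standard G2 block code; the paper's STTC stage, rotated constellation, and MSD-1 decoder are beyond an abstract-level sketch:

```python
import numpy as np

def alamouti_encode(symbols):
    """Map a symbol stream onto two antennas, two symbols per block:
    slot 1: antenna 1 sends s1, antenna 2 sends s2
    slot 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1)"""
    s = np.asarray(symbols).reshape(-1, 2)
    blocks = [[[s1, s2],
               [-np.conj(s2), np.conj(s1)]] for s1, s2 in s]
    return np.array(blocks)   # shape: (n_blocks, 2 time slots, 2 antennas)

qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))  # example QPSK symbols
print(alamouti_encode(qpsk).shape)
```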
7006 A Holistic Workflow Modeling Method for Business Process Redesign
Authors: Heejung Lee
Abstract:
In a highly competitive environment, it becomes more important to shorten the whole business process while delivering, or even enhancing, the business value to customers and suppliers. Although workflow management systems receive much attention for their capacity to practically support business process enactment, effective workflow modeling methods remain challenging, and the high degree of process complexity makes it more difficult to achieve short lead times. This paper presents a workflow structuring method, conceived in a holistic way, that can reduce process complexity using activity needs and formal concept analysis, which eventually enhances key performance measures such as quality, delivery, and cost in the business process.
Keywords: Workflow management, reengineering, formal concept analysis.
7005 A Family of Minimal Residual Based Algorithm for Adaptive Filtering
Authors: Noor Atinah Ahmad
Abstract:
The Minimal Residual (MR) method is modified for adaptive filtering applications. Three forms of MR-based algorithm are presented: i) the low-complexity SPCG, ii) MREDSI, and iii) MREDSII. The low-complexity SPCG is a reduced-complexity version of a previously proposed SPCG algorithm. The approximations introduced reduce the algorithm to an LMS-type algorithm but maintain the superior convergence of the SPCG algorithm. Both MREDSI and MREDSII are MR-based methods with a Euclidean direction of search. The choice of Euclidean directions is shown via simulation to give better misadjustment than their gradient-search counterparts.
Keywords: Adaptive filtering, Adaptive least squares, Minimal residual method.
7004 Low Complexity Multi Mode Interleaver Core for WiMAX with Support for Convolutional Interleaving
Authors: Rizwan Asghar, Dake Liu
Abstract:
A hardware-efficient, multi-mode, reconfigurable architecture of an interleaver/de-interleaver for multiple standards, such as DVB, WiMAX, and WLAN, is presented. Interleavers consume a large part of the silicon area when implemented using conventional methods, as they use memories to store permutation patterns. In addition, different types of interleavers in different standards cannot share hardware due to their different construction methodologies. The novelty of the work presented in this paper is threefold: 1) vital types of interleavers, including the convolutional interleaver, are mapped onto a single architecture with the flexibility to change the interleaver size; 2) the hardware complexity for channel interleaving in WiMAX is reduced by using a 2-D realization of the interleaver functions; and 3) silicon cost overheads are reduced by avoiding the use of small memories. The proposed architecture occupies 0.18 mm² of silicon area in a 0.12 μm process and can operate at a frequency of 140 MHz. The reduced complexity helps minimize memory utilization and at the same time provides strong support for on-the-fly computation of permutation patterns.
Keywords: Hardware interleaver implementation, WiMAX, DVB, block interleaver, convolutional interleaver, hardware multiplexing.
7003 Robust Numerical Scheme for Pricing American Options under Jump Diffusion Models
Authors: Salah Alrabeei, Mohammad Yousuf
Abstract:
The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. However, most option pricing models have no analytical solution. Furthermore, not all numerical methods are efficient for solving these models, because the models have non-smooth payoffs or discontinuous derivatives at the exercise price. In this paper, we price the American option under jump diffusion models by using efficient time-dependent numerical methods. Several techniques are integrated to overcome the computational complexity. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial fraction decomposition technique is applied to rational approximation schemes to overcome the complexity of inverting polynomials of matrices. The proposed method is easy to implement in serial or parallel versions. Numerical results are presented to prove the accuracy and efficiency of the proposed method.
Keywords: Integral differential equations, American options, jump-diffusion model, rational approximation.
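A minimal sketch of the FFT trick referred to above, assuming NumPy/SciPy: a Toeplitz matrix-vector product (the kind of dense operator the jump integral produces) is embedded in a circulant matrix and applied via FFT in O(M log M) instead of O(M²); the Toeplitz entries here are random placeholders, not an actual jump kernel:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """Compute T @ x for the Toeplitz matrix with first column c and first
    row r, via circulant embedding + FFT: O(M log M) instead of O(M^2)."""
    M = len(x)
    v = np.concatenate([c, r[1:][::-1]])        # first column of the circulant
    x_pad = np.concatenate([x, np.zeros(M - 1)])
    y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(x_pad))
    return y[:M].real

rng = np.random.default_rng(7)
M = 512
c, r = rng.normal(size=M), rng.normal(size=M)
r[0] = c[0]                                     # consistent corner element
x = rng.normal(size=M)
print(np.allclose(toeplitz_matvec(c, r, x), toeplitz(c, r) @ x))  # True
```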
7002 Evaluating Sinusoidal Functions by a Low Complexity Cubic Spline Interpolator with Error Optimization
Authors: Abhijit Mitra, Harpreet Singh Dhillon
Abstract:
We present a novel scheme to evaluate sinusoidal functions with low complexity and high precision using cubic spline interpolation. To this end, two different approaches are proposed to find the interpolating polynomial of sin(x) within the range [-π, π]. The first deals with only a single data point, while the other uses two, to keep the realization cost as low as possible. An approximation error optimization technique for cubic spline interpolation is introduced next and is shown to increase the interpolator accuracy without increasing the complexity of the associated hardware. Architectures for the proposed approaches are also developed, which exhibit implementation flexibility with low power requirements.
Keywords: Arithmetic, spline interpolator, hardware design, error analysis, optimization methods.
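A minimal software sketch of the underlying idea, assuming SciPy (the paper's hardware realization and error optimization are not reproduced): fit a cubic spline to sin(x) on [-π, π] and check the maximum approximation error.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Fit a cubic spline to sin(x) on a coarse knot grid over [-pi, pi].
knots = np.linspace(-np.pi, np.pi, 9)
spline = CubicSpline(knots, np.sin(knots))

# Evaluate the approximation error on a fine grid.
x = np.linspace(-np.pi, np.pi, 10_001)
max_err = np.max(np.abs(spline(x) - np.sin(x)))
print(f"max |error| with 9 knots: {max_err:.2e}")  # on the order of 1e-3
```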
7001 Multi-view Description of Real-Time Systems' Architecture
Authors: A. Bessam, M. T. Kimour
Abstract:
Real-time embedded systems should benefit from component-based software engineering to handle complexity and address dependability. In these systems, applications should not only be logically correct but also behave within time windows. However, among current component-based software engineering approaches, few component models handle time properties in a manner that allows efficient analysis and checking at the architectural level. In this paper, we present a meta-model for component-based software description that integrates timing issues. To achieve a complete functional model of software components, our meta-model focuses on four functional aspects: interface, static behavior, dynamic behavior, and interaction protocol. With each aspect we have explicitly associated a time model. Such a time model can be used to check a component's design against certain properties and to compute the timing properties of component assemblies.
Keywords: Real-time systems, Software architecture, software component, dependability, time properties, ADL, metamodeling.
7000 Fuzzy Based Problem-Solution Data Structure as a Data Oriented Model for ABS Controlling
Authors: Ahmad Habibizad Navin, Mehdi Naghian Fesharaki, Mohamad Teshnelab, Ehsan Shahamatnia
Abstract:
Anti-lock braking systems (ABS), installed on vehicles for safe and effective braking, are high-order, nonlinear, and time-variant. Using fuzzy logic controllers increases the efficiency of such systems but imposes a high computational complexity as well. The main concept introduced by this paper is reducing the computational complexity of fuzzy controllers by deploying a problem-solution data structure. Unlike conventional methods that are based on calculations, this approach is based on data-oriented modeling.
Keywords: ABS, Fuzzy controller, PSDS, Time-Memory tradeoff, Data oriented modeling.
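A minimal sketch of the time-memory tradeoff described above, with a hypothetical control law standing in for the real fuzzy inference: controller outputs are precomputed over a quantized input grid and stored, so the online step is an O(1) table lookup instead of fuzzy inference (the quantization and the control function are illustrative assumptions):

```python
import numpy as np

def fuzzy_brake_pressure(slip):
    """Stand-in for an expensive fuzzy inference step (illustrative only)."""
    return float(np.clip(1.0 - 5.0 * abs(slip - 0.2), 0.0, 1.0))

# Offline: precompute the problem -> solution mapping on a quantized grid.
GRID = np.linspace(0.0, 1.0, 1001)                  # slip quantized to 0.001
TABLE = np.array([fuzzy_brake_pressure(s) for s in GRID])

def controller_output(slip):
    """Online step: O(1) table lookup instead of fuzzy inference."""
    idx = int(round(np.clip(slip, 0.0, 1.0) * 1000))
    return TABLE[idx]

print(controller_output(0.18))   # matches the precomputed fuzzy output
```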