Search results for: time complexity measurements.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7691


7661 A Complexity Measure for Java Bean based Software Components

Authors: Sandeep Khimta, Parvinder S. Sandhu, Amanpreet Singh Brar

Abstract:

Traditional software product and process metrics are neither suitable nor sufficient for measuring the complexity of software components, which is ultimately necessary for quality and productivity improvement in organizations adopting CBSE. Researchers have proposed a wide range of complexity metrics for software systems; however, these metrics are restricted to module-oriented and object-oriented systems and are not sufficient for components and component-based systems. This study proposes to measure the complexity of JavaBean software components as a reflection of their quality, so that a component can be adapted accordingly to make it more reusable. The proposed metric involves only the design issues of the component and does not consider packaging and deployment complexity. In this way, the complexity of software components can be kept within certain limits, which in turn helps enhance quality and productivity.

Keywords: JavaBean Components, Complexity, Metrics, Validation.

7660 An Approach to Capture, Evaluate and Handle Complexity of Engineering Change Occurrences in New Product Development

Authors: Mohammad Rostami Mehr, Seyed Arya Mir Rashed, Arndt Lueder, Magdalena Mißler-Behr

Abstract:

This paper advances the conception that complex problems do not necessarily need similarly complex solutions; a simple solution based on established methods can be a sufficient way of dealing with complexity. To verify this conception, the paper focuses on the field of change management as part of the new product development process in the automotive sector. In complexity management, dealing with increasing complexity is essential, yet often only rigid, inflexible processes that are not designed to handle complexity are available. The basic methodology of this paper can be divided into four main sections: 1) analyzing the complexity of change management, 2) reviewing the literature to identify potential solutions and methods, 3) capturing and implementing the expertise of change management experts from an automobile manufacturing company, and 4) systematically comparing the identified methods from the literature and connecting them with the defined requirements of change management complexity in order to develop a solution. As a practical outcome, this paper provides a method to capture the complexity of engineering changes (EC) and to include it within the EC evaluation process, following case-related process guidance to cope with the complexity. Furthermore, this approach supports the conception that dealing with complexity is possible using rather simple and established methods by combining them into a powerful tool.

Keywords: complexity management, new product development, engineering change management, flexibility

7659 An Efficient Multi Join Algorithm Utilizing a Lattice of Double Indices

Authors: Hanan A. M. Abd Alla, Lilac A. E. Al-Safadi

Abstract:

In this paper, a novel multi-join algorithm for joining multiple relations is introduced. The algorithm is based on a hash-based join of two relations that produces a double index. This is done by scanning the two relations once; but instead of moving records into buckets, a double index is built, which eliminates the collisions that a conventional hash algorithm can produce. The double index is divided into join buckets of similar categories from the two relations. The algorithm then joins buckets with similar keys to produce joined buckets, leading finally to a complete join index of the two relations without actually joining the relations themselves. The time complexity required to build the join index of two categories is O(m log m), where m is the size of each category, for a total time complexity of O(n log m) over all buckets. The join index is used to materialize the joined relation if required; otherwise, it is used along with the join indices of other relations to build a lattice for multi-join operations with minimal I/O requirements. The lattice of join indices can be fitted into main memory to reduce the time complexity of the multi-join algorithm.
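
As a rough Python sketch of the bucket/join-index idea, the following pairs the positions of matching records without materializing tuples; the paper's double-index structure and lattice are not reproduced here.

```python
from collections import defaultdict

def build_join_index(r, s, key_r, key_s):
    """Build a join index (pairs of record positions) for relations r and s
    without materializing the joined relation. Each relation is scanned once
    and record positions are grouped per key value ("buckets"); buckets with
    equal keys are then paired."""
    index_r = defaultdict(list)   # key -> positions in r
    index_s = defaultdict(list)   # key -> positions in s
    for i, rec in enumerate(r):
        index_r[rec[key_r]].append(i)
    for j, rec in enumerate(s):
        index_s[rec[key_s]].append(j)
    # Join buckets with matching keys to produce the join index.
    return [(i, j)
            for k in index_r.keys() & index_s.keys()
            for i in index_r[k]
            for j in index_s[k]]

# Example: only positions are paired; the relations themselves stay untouched.
r = [{"id": 1, "dept": "A"}, {"id": 2, "dept": "B"}]
s = [{"dept": "A", "name": "x"}, {"dept": "A", "name": "y"}]
print(build_join_index(r, s, "dept", "dept"))  # [(0, 0), (0, 1)]
```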

Keywords: Multi join, Relation, Lattice, Join indices.

7658 A Complexity-Based Approach in Image Compression using Neural Networks

Authors: Hadi Veisi, Mansour Jamzad

Abstract:

In this paper we present an adaptive method for image compression that is based on the complexity level of the image. The basic compressor/de-compressor structure of this method is a multilayer perceptron artificial neural network. In the adaptive approach, different back-propagation artificial neural networks are used as compressor and de-compressor; this is done by dividing the image into blocks, computing the complexity of each block, and then selecting one network for each block according to its complexity value. Three complexity measures, called Entropy, Activity and Pattern-based, are used to determine the level of complexity in image blocks, and their ability in complexity estimation is evaluated and compared. In training and evaluation, each image block is assigned to a network based on its complexity value. Best-SNR is an alternative way of selecting the compressor network for image blocks in the evaluation phase: it chooses the trained network that yields the best SNR in compressing the input image block. In our evaluations, the best results are obtained when overlapping of blocks is allowed and the compressor network is chosen based on Best-SNR. In this case, the results demonstrate the superiority of this method compared with previous similar works and JPEG standard coding.
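
A minimal sketch of the entropy measure and complexity-based network selection follows; the threshold values and the three-level split are illustrative placeholders, not the paper's.

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy (bits/pixel) of an 8-bit image block,
    one of the complexity measures named in the abstract."""
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pick_network(block, thresholds=(2.0, 5.0)):
    """Route a block to a low/medium/high-complexity compressor network.
    The threshold values here are illustrative, not the paper's."""
    h = block_entropy(block)
    return "low" if h < thresholds[0] else "medium" if h < thresholds[1] else "high"

rng = np.random.default_rng(0)
flat = np.full((8, 8), 128, dtype=np.uint8)           # low entropy
noisy = rng.integers(0, 256, (8, 8), dtype=np.uint8)  # high entropy
print(pick_network(flat), pick_network(noisy))
```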

Keywords: Adaptive image compression, Image complexity, Multi-layer perceptron neural network, JPEG Standard, PSNR.

7657 Investigating the Transformer Operating Conditions for Evaluating the Dielectric Response

Authors: Jalal M. Abdallah

Abstract:

This paper presents an experimental investigation of transformer dielectric response and of the water content of the solid insulation. The dielectric response was measured using the hybrid Frequency Dielectric Spectroscopy and Polarization Current (FDS & PC) method. The water content in paper is calculated from the water content in oil and the obtained equilibrium curves. Reference measurements were performed under equilibrium conditions for the water content in the oil and paper of the transformer at several stable temperatures (25, 50, 60 and 70°C), providing a baseline against which insulation behavior under non-equilibrium conditions can be evaluated. Further measurements were performed while simulating the normal working modes of transformer operation at the same temperatures as the equilibrium references. The results show that when the transformer is much warmer than its surroundings, its temperature begins to fall immediately after it is disconnected from the network, and this temperature drop influences the insulation condition during the measuring process. Besides the oil temperature near the sensors, the temperature uniformity inside the transformer, which can be altered by a large load change before the measurement, also influences the result. The investigations showed a very strong influence of the time elapsed between disconnecting the transformer and beginning the measurements. Finally, online monitoring of the water content in paper, based on online monitoring of the water content in oil and the obtained equilibrium curves, was demonstrated: the measurements were performed continuously for about 50 days, without any disconnection, in a prepared adiabatic room.

Keywords: Conductivity, Moisture, Temperature, Oil-paper insulation, Online monitoring, Water content in oil.

7656 Handling Complexity of a Complex System Design: Paradigm, Formalism and Transformations

Authors: Hycham Aboutaleb, Bruno Monsuez

Abstract:

Current systems complexity has reached a degree that requires addressing conception and design issues while taking into account environmental, operational, social, legal and financial aspects. Therefore, one of the main challenges is the way complex systems are specified and designed. The exponentially growing effort, cost and time investment of complex systems in the modeling phase emphasize the need for a paradigm, a framework and an environment to handle system model complexity. For that, it is necessary to understand the expectations of the human user of the model and his limits. This paper presents a generic framework for designing complex systems, highlights the requirements a system model needs to fulfill to meet human user expectations, and suggests a graph-based formalism for modeling complex systems. Finally, a set of transformations are defined to handle the model complexity.

Keywords: Higraph-based, formalism, system engineering paradigm, modeling requirements, graph-based transformations.

7655 Artificial Neural Networks for Classifying Magnetic Measurements in Tokamak Reactors

Authors: A. Greco, N. Mammone, F. C. Morabito, M. Versaci

Abstract:

This paper is mainly concerned with the application of a novel technique of data interpretation to the characterization and classification of measurements of plasma columns in Tokamak reactors for nuclear fusion applications. The proposed method exploits several concepts derived from soft computing theory. In particular, Artificial Neural Networks have been exploited to classify magnetic variables useful to determine the shape and position of the plasma with a reduced computational complexity. The proposed technique is used to analyze simulated databases of plasma equilibria based on the ITER geometry configuration. As well as demonstrating the successful recovery of scalar equilibrium parameters, we show that the technique can yield practical advantages compared with earlier methods.
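
For illustration, here is a minimal classifier of the kind described, sketched with scikit-learn on stand-in data; the simulated ITER equilibrium database is not available here, so the probe count and class labels are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in data: rows of magnetic sensor readings, labels = plasma
# shape/position classes. Real inputs would come from simulated ITER
# equilibrium databases.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 24))     # 24 hypothetical magnetic probes
y = rng.integers(0, 4, size=500)   # 4 hypothetical equilibrium classes

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1)
clf.fit(X, y)
print(clf.predict(X[:5]))
```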

Keywords: Tokamak, sensors, artificial neural network.

7654 An Approach for Reducing the Computational Complexity of LAMSTAR Intrusion Detection System using Principal Component Analysis

Authors: V. Venkatachalam, S. Selvan

Abstract:

The security of computer networks plays a strategic role in modern computer systems. Intrusion Detection Systems (IDS) act as the 'second line of defense' placed inside a protected network, looking for known or potential threats in network traffic and/or audit data recorded by hosts. We developed an Intrusion Detection System using the LAMSTAR neural network to learn patterns of normal and intrusive activities and to classify observed system activities, and compared the performance of the LAMSTAR IDS with other classification techniques using the five classes of KDDCup99 data. The LAMSTAR IDS gives better performance at the cost of high computational complexity, training time and testing time when compared with other classification techniques (binary tree classifier, RBF classifier, Gaussian mixture classifier). We further reduced the computational complexity of the LAMSTAR IDS by reducing the dimension of the data using principal component analysis, which in turn reduces the training and testing time with almost the same performance.
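
A minimal sketch of the dimensionality-reduction step, assuming scikit-learn and stand-in data in place of the 41 KDDCup99 features; the LAMSTAR network itself is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the 41 KDDCup99 features; the real dataset is not loaded here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 41))

# Keep enough principal components to explain 95% of the variance, then
# train the classifier (LAMSTAR in the paper) on the reduced data.
pca = PCA(n_components=0.95, svd_solver="full").fit(X)
X_reduced = pca.transform(X)
print(X.shape, "->", X_reduced.shape)
```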

Keywords: Binary Tree Classifier, Gaussian Mixture, Intrusion Detection System, LAMSTAR, Radial Basis Function.

7653 Estimating Shortest Circuit Path Length Complexity

Authors: Azam Beg, P. W. Chandana Prasad, S.M.N.A Senenayake

Abstract:

When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for larger numbers of Monte Carlo data points. The neural model shows a strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. Therefore, the model can be considered a method of predicting path length complexities; this is expected to lead to minimum time complexity of very large-scale integrated circuitries and related computer-aided design tools that use binary decision diagrams.

Keywords: Monte Carlo circuit simulation data, binary decision diagrams, neural network modeling, shortest path length estimation

7652 Underlying Cognitive Complexity Measure Computation with Combinatorial Rules

Authors: Benjapol Auprasert, Yachai Limpiyakorn

Abstract:

Measuring the complexity of software has been an insoluble problem in software engineering. Complexity measures can be used to predict critical information about the testability, reliability, and maintainability of software systems from automatic analysis of the source code. During the past few years, many complexity measures have been invented based on the emerging Cognitive Informatics discipline. These software complexity measures, including cognitive functional size, are based on the total cognitive weights of basic control structures such as loops and branches. This paper shows that the existing calculation method can produce, in different ways, results that are algebraically equivalent. However, analysis of the combinatorial meaning of this calculation method reveals a significant flaw of the measure, which also explains why it does not satisfy Weyuker's properties. Based on the findings, improvement directions, such as fusion of measures and a cumulative variable counting scheme, are suggested to enhance the effectiveness of cognitive complexity measures.
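
A sketch of the underlying calculation, using the commonly published cognitive weights of basic control structures; the paper's combinatorial counting rules and proposed refinements are not reproduced.

```python
# Cognitive weights of basic control structures (BCS) as commonly used in
# cognitive functional size measures; the paper's exact table may differ.
BCS_WEIGHTS = {"sequence": 1, "if": 2, "case": 3, "for": 3, "while": 3,
               "call": 2, "recursion": 3, "parallel": 4, "interrupt": 4}

def cognitive_weight(node):
    """Total cognitive weight of a control-structure tree: weights of
    sequentially composed structures add, nested structures multiply.
    A node is (kind, [children])."""
    kind, children = node
    w = BCS_WEIGHTS.get(kind, 1)
    inner = sum(cognitive_weight(c) for c in children) if children else 1
    return w * inner

# A while loop containing an if and a sequence: 3 * (2 + 1) = 9
prog = ("while", [("if", []), ("sequence", [])])
print(cognitive_weight(prog))
```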

Keywords: Cognitive Complexity Measure, Cognitive Weight of Basic Control Structure, Counting Rules, Cumulative Variable Counting Scheme.

7651 Complexity Leadership and Knowledge Management in Higher Education

Authors: Prabhakar Venugopal Gantasala

Abstract:

Complex environments triggered by globalization have necessitated new paradigms of leadership – Complexity Leadership – encompassing the multiple roles that leaders need to take on. The success of higher education institutions depends on how well leaders can provide adaptive, administrative and enabling leadership. Complexity Leadership seems all the more relevant for institutions that are knowledge-driven and thrive on knowledge creation, knowledge storage and retrieval, knowledge sharing and knowledge application. Discussed in this paper are the elements of globalization and the opportunities and challenges brought forth by it. The Complexity Leadership paradigm in a knowledge-based economy and the need for such a paradigm shift in higher education institutions are presented. Further, the paper discusses the support the leader requires in a knowledge-driven economy through knowledge management initiatives.

Keywords: Globalization, Complexity Leadership, Knowledge Management.

7650 Production Line Layout Planning Based on Complexity Measurement

Authors: Guoliang Fan, Aiping Li, Nan Xie, Liyun Xu, Xuemei Liu

Abstract:

Mass customization increases the difficulty of production line layout planning. The material distribution process for a variety of parts is very complex, which greatly increases the cost of material handling and logistics. In response to this problem, this paper presents an approach to production line layout planning based on complexity measurement. Firstly, by analyzing the factors influencing equipment layout, a complexity model of the production line is established using information entropy theory. Then, the cost of part logistics is derived considering the variety of parts. Furthermore, an optimization function with two objectives, lowest cost and least configuration complexity, is built. Finally, the validity of the function is verified in a case study. The results show that the proposed approach can find the layout scheme with the lowest logistics cost and the least complexity. Optimized production line layout planning can effectively improve production efficiency and equipment utilization at the lowest cost and complexity.
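
A toy sketch of the two pieces named above, an entropy-based complexity term and a scalarized two-objective comparison; the weights, the missing normalization of the two objectives, and all numbers are illustrative.

```python
import numpy as np

def layout_entropy(route_probs):
    """Information-entropy complexity of material routing: route_probs are
    the probabilities that a part takes each distribution route."""
    p = np.asarray(route_probs, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def layout_score(logistics_cost, complexity, w_cost=0.5, w_cx=0.5):
    """Scalarized two-objective score for a candidate layout: lower is
    better. Real use would normalize cost and complexity to one scale."""
    return w_cost * logistics_cost + w_cx * complexity

candidates = {"A": (120.0, layout_entropy([0.7, 0.2, 0.1])),
              "B": (100.0, layout_entropy([0.25, 0.25, 0.25, 0.25]))}
best = min(candidates, key=lambda k: layout_score(*candidates[k]))
print(best)
```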

Keywords: Production line, layout planning, complexity measurement, optimization, mass customization.

7649 Complexity Analysis of Some Known Graph Coloring Instances

Authors: Jeffrey L. Duffany

Abstract:

Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs where the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
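
For concreteness, a small sketch of the density measure and of the adjacency-matrix square on which the solution method operates; mobility and the other paper-specific measures are not reproduced.

```python
import numpy as np

def density(adj):
    """Edge density of an undirected graph from its 0/1 adjacency matrix:
    the fraction of possible (ordered) vertex pairs that are edges."""
    n = adj.shape[0]
    return adj.sum() / (n * (n - 1))

# Square of the adjacency matrix: entry (i, j) counts walks of length 2,
# the structure the abstract's solution method operates on.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]])
print(density(adj))
print(adj @ adj)
```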

Keywords: graph coloring, complexity, algorithm.

7648 Daily Site Risks Associated with Construction Projects and On-spot Corrective Measurements: Case Study of Revamping Projects in Kuwait Oil Company Fields Area

Authors: Yousef S. Al-Othman

Abstract:

The growth and expansion of industrial facilities comes in proportion to the market's increasing demand for products and services. Furthermore, raw material producers such as oil companies usually undergo massive revamping projects to maintain a synchronized supply. These revamping projects are usually delivered through challenging construction projects associated with daily site risks related to the construction process. Hence, a case study of these risks and the corresponding on-spot corrective measurements has been made on a number of construction project contractors at Kuwait Oil Company (KOC) to derive the benefits and overall effectiveness of on-spot corrective measurements during the construction phase of a project, and to show how these help in avoiding major incidents and ensuring a smooth, cost-effective and on-time delivery of the project. The findings of this case study add value to the overall risk management process by minimizing the daily site risks that may affect the project lead time, resulting in an undisturbed on-site construction process.

Keywords: Oil and gas, risk management, construction projects, project lead time.

7647 A FE-Based Scheme for Computing Wave Interaction with Nonlinear Damage and Generation of Harmonics in Layered Composite Structures

Authors: R. K. Apalowo, D. Chronopoulos

Abstract:

A Finite Element (FE) based scheme is presented for quantifying guided wave interaction with Localised Nonlinear Structural Damage (LNSD) within structures of arbitrary layering and geometric complexity. The through-thickness mode-shape of the structure is obtained through a wave and finite element method. This is applied in a time domain FE simulation in order to generate time harmonic excitation for a specific wave mode. Interaction of the wave with LNSD within the system is computed through an element activation and deactivation iteration. The scheme is validated against experimental measurements and a WFE-FE methodology for calculating wave interaction with damage. Case studies for guided wave interaction with crack and delamination are presented to verify the robustness of the proposed method in classifying and identifying damage.

Keywords: Layered Structures, nonlinear ultrasound, wave interaction with nonlinear damage, wave finite element, finite element.

7646 A Methodology for Reducing the BGP Convergence Time

Authors: Eatedal A. Alabdulkreem, Hamed S. Al-Raweshidy, Maysam F. Abbod

Abstract:

Border Gateway Protocol (BGP) is the standard routing protocol between the various autonomous systems (AS) in the internet. In the event of failure, a considerable delay in BGP convergence has been shown by empirical measurements. During the convergence time, BGP repeatedly advertises new routes to some destination and withdraws old ones until it reaches a stable state. It has been found that the KEEPALIVE message timer and the HOLD time are two parameters affecting the convergence speed. This paper aims to find the optimum values for the KEEPALIVE timer and the HOLD time that maximally reduce the convergence time without increasing the traffic. The optimal value for the KEEPALIVE message timer found in this paper is 30 seconds instead of 60 seconds, and the optimal value for the HOLD time is 90 seconds instead of 180 seconds.
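
A back-of-the-envelope sketch of why the timers matter: failure detection is bounded by the hold time, which is conventionally set to three times the keepalive interval (matching both the 60/180 defaults and the 30/90 values proposed here).

```python
# BGP declares a session down when no KEEPALIVE arrives within the hold
# time. A rough worst-case detection delay under the default and the
# proposed timer values:
for keepalive, hold in [(60, 180), (30, 90)]:
    # Worst case: the failure occurs just after a keepalive was received,
    # so the peer waits out almost the full hold time.
    print(f"keepalive={keepalive}s hold={hold}s "
          f"-> worst-case detection ~{hold}s")
```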

Keywords: BGP, Convergence Time, HOLD time, Keep alive.

7645 Statically Fused Unbiased Converted Measurements Kalman Filter

Authors: Zhengkun Guo, Yanbin Li, Wenqing Wang, Bo Zou

Abstract:

Active radar and sonar systems often report Doppler measurements in addition to position measurements such as range and bearing. A tracker can perform better by making full use of the Doppler measurements. However, due to the high nonlinearity of the Doppler measurements with respect to the target state in Cartesian coordinates, those measurements are not always fully exploited. This paper mainly focuses on dealing with the Doppler measurements as well as the position measurements in polar coordinates. The Statically Fused Converted Position and Doppler Measurements Kalman Filter (SF-CMKF) with additive debiased measurement conversion has been presented previously. However, the exact compensations for the bias of the measurement conversion are multiplicative and depend on the statistics of the cosine of the angle measurement errors. As a result, the consistency and performance of the SF-CMKF may be suboptimal in large angle-error situations. In this paper, the multiplicative unbiased position and Doppler measurement conversions for two-dimensional (polar-to-Cartesian) tracking are derived, and the SF-CMKF is improved by using those conversions. Monte Carlo simulations are presented to demonstrate the statistical consistency of the multiplicative unbiased conversion and the superior performance of the modified SF-CMKF (SF-UCMKF).
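
A minimal sketch of the multiplicative unbiased conversion for the position part, with a Monte Carlo check of unbiasedness; the Doppler conversion and the converted covariances are omitted.

```python
import numpy as np

def unbiased_polar_to_cartesian(r, theta, sigma_theta):
    """Multiplicative unbiased polar-to-Cartesian conversion: for Gaussian
    bearing noise, E[cos(theta_m)] = cos(theta) * exp(-sigma^2 / 2), so
    dividing by that factor removes the conversion bias."""
    lam = np.exp(-sigma_theta**2 / 2.0)
    return r * np.cos(theta) / lam, r * np.sin(theta) / lam

# Monte Carlo check of unbiasedness at a fairly large bearing error.
rng = np.random.default_rng(0)
r_true, th_true, sig = 1000.0, np.pi / 4, np.deg2rad(5.0)
th_m = th_true + sig * rng.normal(size=200_000)
x, y = unbiased_polar_to_cartesian(r_true, th_m, sig)
print(x.mean(), y.mean())                           # approx r*cos, r*sin
print(r_true * np.cos(th_true), r_true * np.sin(th_true))
```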

Keywords: Measurement conversion, Doppler, Kalman filter, estimation, tracking.

7644 Selective Intra Prediction Mode Decision for H.264/AVC Encoders

Authors: Jun Sung Park, Hyo Jung Song

Abstract:

H.264/AVC offers a considerable improvement in coding efficiency compared to other compression standards such as MPEG-2, but its computational complexity is increased significantly. In this paper, we propose selective mode decision schemes for fast intra prediction mode selection. The objective is to reduce the computational complexity of the H.264/AVC encoder without significant rate-distortion performance degradation. In our proposed schemes, the intra prediction complexity is reduced by limiting the luma and chroma prediction modes using the directional information of the 16×16 prediction mode. Experimental results are presented to show that the proposed schemes reduce the complexity by up to 78% while maintaining similar PSNR quality, with an average bit rate increase of about 1.46%.
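
An illustrative sketch of directional mode pruning; the H.264 mode numbering is standard, but the candidate subsets below are placeholders, not the paper's tables.

```python
# H.264 intra 16x16 prediction modes: 0=vertical, 1=horizontal, 2=DC, 3=plane.
# Intra 4x4 has nine modes (0=vertical ... 8=horizontal-up). Directional
# pruning: only test the 4x4 modes whose direction agrees with the best
# 16x16 mode, instead of exhaustively testing all nine.
CANDIDATE_4X4 = {
    0: [0, 2, 5, 7],    # vertical 16x16 -> vertical-ish 4x4 modes + DC
    1: [1, 2, 6, 8],    # horizontal -> horizontal-ish + DC
    2: [0, 1, 2],       # DC -> a small neutral set
    3: [2, 3, 4, 5, 6]  # plane -> diagonal-ish modes + DC
}

def modes_to_test(best_16x16_mode):
    return CANDIDATE_4X4[best_16x16_mode]

print(modes_to_test(0))
```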

Keywords: Video encoding, H.264, Intra prediction.

7643 Dynamic Data Partition Algorithm for a Parallel H.264 Encoder

Authors: Juntae Kim, Jaeyoung Park, Kyoungkun Lee, Jong Tae Kim

Abstract:

The H.264/AVC standard is a highly efficient video codec providing high-quality video at low bit-rates. Because it employs advanced techniques, its computational complexity has increased, and this complexity is the major problem in implementing a real-time encoder and decoder. Parallelism is one approach, and it can be realized on a multi-core system. We analyze macroblock-level parallelism, which ensures the same bit rate with high processor concurrency. In order to reduce the encoding time, a dynamic data partition based on macroblock regions is proposed. The data partition has advantages in load balancing and data-communication overhead. Using the data partition, the encoder obtains more than a 3.59x speed-up on a four-processor system. This work can be applied to other multimedia processing applications.
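
A minimal sketch of a dynamic partition with a shared work queue; the region size and the stand-in "encoding" are placeholders, and macroblock prediction dependencies are ignored.

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue, Empty

def encode_region(region):
    """Stand-in for encoding a region (a run of macroblocks)."""
    return sum(region)

def parallel_encode(macroblocks, n_workers=4, chunk=4):
    """Dynamic data partition: regions are pulled from a shared queue as
    workers become free, so faster workers take more regions and the
    load stays balanced."""
    q = Queue()
    for i in range(0, len(macroblocks), chunk):
        q.put(macroblocks[i:i + chunk])

    def worker():
        done = []
        while True:
            try:
                region = q.get_nowait()
            except Empty:
                return done
            done.append(encode_region(region))

    with ThreadPoolExecutor(n_workers) as pool:
        futures = [pool.submit(worker) for _ in range(n_workers)]
        return [f.result() for f in futures]

print(parallel_encode(list(range(32))))
```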

Keywords: H.264/AVC, video coding, thread-level parallelism, OpenMP, multimedia

7642 Identification of Regulatory Mechanism of Orthostatic Response

Authors: E. Hlavacova, J. Chrenova, Z. Rausova, M. Vlcek, A. Penesova, L. Dedik

Abstract:

En bloc modeling describes all phases of the orthostatic test with a single mathematical model, which allows a complex parametric view of the orthostatic response. This work presents the implementation of such a model for processing measurements of systolic and diastolic blood pressure and heart rate performed on volunteers during an orthostatic test. The original model hypothesis, that every postural change corresponds to only one stressor, did not comply with the measured time profiles of the physiological circulation factors. The identification results support the hypothesis that the second postural change of the orthostatic test causes induced stressors, revealing a physiological regulation mechanism. The effect is most pronounced in the heart rate and diastolic blood pressure time profiles and weakest in the systolic blood pressure measurements. The presented study gives a new view of the orthostatic test, with impact on clinical practice.

Keywords: En bloc modeling, physiological circulatory factor, postural change, stressor

7641 Artificial Neural Networks and Multi-Class Support Vector Machines for Classifying Magnetic Measurements in Tokamak Reactors

Authors: A. Greco, N. Mammone, F. C. Morabito, M. Versaci

Abstract:

This paper is mainly concerned with the application of a novel technique of data interpretation for classifying measurements of plasma columns in Tokamak reactors for nuclear fusion applications. The proposed method exploits several concepts derived from soft computing theory. In particular, Artificial Neural Networks and Multi-Class Support Vector Machines have been exploited to classify magnetic variables useful to determine the shape and position of the plasma with a reduced computational complexity. The proposed technique is used to analyze simulated databases of plasma equilibria based on the ITER geometry configuration. As well as demonstrating the successful recovery of scalar equilibrium parameters, we show that the technique can yield practical advantages compared with earlier methods.
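
A companion sketch of the multi-class SVM part with scikit-learn, on the same kind of stand-in data as the ANN sketch above; the simulated equilibrium inputs and class labels are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in magnetic measurements and equilibrium classes; real data
# would come from simulated ITER equilibria.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 24))
y = rng.integers(0, 4, size=500)

# SVC handles the multi-class case via one-vs-one decomposition.
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
print(clf.predict(X[:5]))
```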

Keywords: Tokamak, Classification, Artificial Neural Network, Support Vector Machines.

7640 A New H.264-Based Rate Control Algorithm for Stereoscopic Video Coding

Authors: Yi Liao, Wencheng Yang, Gangyi Jiang

Abstract:

Based on an investigation of the impact of the complexity of stereoscopic frame pairs on stereoscopic video coding and transmission, a new rate control algorithm is presented. The proposed rate control algorithm operates on three levels: the stereoscopic group of pictures (SGOP) level, the stereoscopic frame (SFrame) level and the frame level. A temporal-spatial frame complexity model is first established; in the bit-allocation stage, the frame complexity, position significance and reference property between the left and right frames are taken into account. Meanwhile, the target buffer is set according to the frame complexity. Experimental results show that the proposed method can efficiently control the bit rates and outperforms the fixed quantization parameter method from the rate-distortion perspective, with an average PSNR gain between rate-distortion curves (BDPSNR) of 0.21 dB.
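
A minimal sketch of complexity-proportional bit allocation within an SGOP; the complexity values are placeholders, and the position/reference weighting is folded into them for simplicity.

```python
def allocate_bits(total_bits, complexities):
    """Complexity-proportional bit allocation: each frame's target is its
    share of the total complexity within the stereoscopic GOP."""
    c_sum = sum(complexities)
    return [total_bits * c / c_sum for c in complexities]

# Four frames of one SGOP with unequal temporal-spatial complexity.
print(allocate_bits(100_000, [3.0, 1.0, 2.0, 2.0]))
```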

Keywords: Stereoscopic video coding, rate control, stereoscopic group of pictures, complexity of stereoscopic frame pairs.

7639 Urdu Nastaleeq Optical Character Recognition

Authors: Zaheer Ahmad, Jehanzeb Khan Orakzai, Inam Shamsher, Awais Adnan

Abstract:

This paper discusses the characteristics of the Urdu script, Urdu Nastaleeq, and a simple but novel and robust technique to recognize printed Urdu script without a lexicon. Urdu, a member of the Arabic script family, is cursive and complex in nature; the main complexity of Urdu compound/connected text is not its connections but the forms/shapes characters take when placed at the initial, middle or final position of a word. The character recognition technique presented here uses the inherent complexity of the Urdu script to solve the problem. A word is scanned and analyzed for its level of complexity; the point where the level of complexity changes is marked for a character, segmented, and fed to a neural network. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on average.
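
One plausible reading of the complexity scan, sketched on a toy binarized word image; the paper's exact complexity measure is not specified in the abstract, so the per-column transition count below is an assumption.

```python
import numpy as np

def complexity_profile(word_img):
    """Per-column 'complexity' of a binarized word image: the number of
    ink-to-background transitions in each column (an assumed stand-in for
    the complexity level the paper scans for)."""
    img = (np.asarray(word_img) > 0).astype(int)
    return np.abs(np.diff(img, axis=0)).sum(axis=0)

def segmentation_points(profile):
    """Mark the columns where the complexity level changes."""
    return [i + 1 for i in range(len(profile) - 1) if profile[i + 1] != profile[i]]

word = np.zeros((8, 12), dtype=int)
word[3, 0:6] = 1   # one flat stroke: lower complexity
word[2, 6:9] = 1   # two stacked strokes: higher complexity
word[5, 6:9] = 1
prof = complexity_profile(word)
print(prof, segmentation_points(prof))
```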

Keywords: Cursive Script, OCR, Urdu.

7638 Recursive Wiener-Khintchine Theorem

Authors: Khalid M. Aamir, Mohammad A. Maud

Abstract:

Power Spectral Density (PSD) computed by taking the Fourier transform of the auto-correlation function (Wiener-Khintchine theorem) gives better results for noisy data than the periodogram approach. However, the computational complexity of the Wiener-Khintchine approach is higher than that of the periodogram approach. For the computation of the short time Fourier transform (STFT), this problem becomes even more prominent, because the PSD must be computed after every shift of the window under analysis. In this paper, a recursive version of the Wiener-Khintchine theorem is derived using the sliding DFT approach meant for computation of the STFT. The computational complexity of the proposed recursive Wiener-Khintchine algorithm, for a window size of N, is O(N).
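
A minimal sketch of the O(N) sliding-DFT update that underlies the recursion; the autocorrelation-domain derivation itself is not reproduced here.

```python
import numpy as np

def sliding_dft_update(X, x_old, x_new, N):
    """One O(N) sliding-DFT step: when the analysis window slides by one
    sample, every DFT bin is updated from the previous bins instead of
    recomputing an N-point transform:
        X_k(n+1) = (X_k(n) + x_new - x_old) * exp(2j*pi*k/N)."""
    k = np.arange(N)
    return (X + (x_new - x_old)) * np.exp(2j * np.pi * k / N)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
N = 64
X = np.fft.fft(x[:N])
X = sliding_dft_update(X, x[0], x[N], N)      # window now covers x[1:N+1]
print(np.allclose(X, np.fft.fft(x[1:N + 1])))  # True up to round-off
```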

Keywords: Power Spectral Density (PSD), Wiener-Khintchine Theorem, Periodogram, Short Time Fourier Transform (STFT), Sliding DFT.

7637 Performance Complexity Measurement of Tightening Equipment Based on Kolmogorov Entropy

Authors: Guoliang Fan, Aiping Li, Xuemei Liu, Liyun Xu

Abstract:

The performance of tightening equipment declines over the course of its operation in a manufacturing system. The main manifestations are increasing randomness and an increasing degree of discretization of the tightening performance. To evaluate the degradation tendency of the tightening performance accurately, a complexity measurement approach based on Kolmogorov entropy is presented. First, the states of the performance index are partitioned to calibrate the degree of discretization. Then a complexity measurement model based on Kolmogorov entropy is built; the model describes the performance degradation tendency of tightening equipment quantitatively. Finally, a case study is used to verify the efficiency and validity of the approach. The results show that the presented complexity measurement can effectively evaluate the degradation tendency of tightening equipment and can provide a theoretical basis for preventive maintenance and equipment life prediction.

Keywords: Complexity measurement, Kolmogorov entropy, manufacturing system, performance evaluation, tightening equipment.

7636 Identifying Chaotic Architecture: Origins of Nonlinear Design Theory

Authors: Mohammadsadegh Zanganehfar

Abstract:

With the emergence of modern architecture, an aggressive desire for new design theories appeared in the works of architects and critics. The discourse of complexity and volumetric composition became an important and controversial issue in the discipline of architecture, discussed from a general point of view in Robert Venturi's 1966 book "Complexity and Contradiction in Architecture". This paper attempts to identify chaos theory as a scientific model of complexity and its relation to architectural design theory by conducting a qualitative analysis and a multidisciplinary critical approach through architecture and basic science resources. Accordingly, we identify chaotic architecture as the correlation between chaos theory and the discipline of architecture, and as an independent nonlinear design theory with specific characteristics and properties.

Keywords: Architecture complexity, chaos theory, fractals, nonlinear dynamic systems, nonlinear ontology.

7635 Soft Real-Time Fuzzy Task Scheduling for Multiprocessor Systems

Authors: Mahdi Hamzeh, Sied Mehdi Fakhraie, Caro Lucas

Abstract:

All practical real-time scheduling algorithms for multiprocessor systems present a trade-off between computational complexity and performance. In real-time systems, tasks have to be performed correctly and on time. Finding a minimal schedule in multiprocessor systems with real-time constraints is known to be NP-hard. Although optimal algorithms exist for uni-processor systems, they fail when applied to multiprocessor systems. Moreover, practical scheduling algorithms for real-time systems do not have deterministic response times, while deterministic timing behavior is an important parameter for system robustness analysis. The intrinsic uncertainty in dynamic real-time systems increases the difficulty of the scheduling problem. To alleviate these difficulties, we propose a fuzzy scheduling approach to arrange real-time periodic and non-periodic tasks in multiprocessor systems. Static and dynamic optimal scheduling algorithms fail under non-critical overload. In contrast, our approach balances the task loads of the processors successfully while providing starvation prevention and fairness, so that higher-priority tasks have a higher running probability. A simulation is conducted to evaluate the performance of the proposed approach. Experimental results show that the proposed fuzzy scheduler creates feasible schedules for homogeneous and heterogeneous tasks. It also considers task priorities, which leads to higher system utilization and lower deadline miss times. According to the results, it performs very close to the optimal schedule of uni-processor systems.
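
A toy sketch of a fuzzy priority computation of the general kind described; the membership shapes and the single rule are illustrative, not the paper's controller for periodic/non-periodic tasks.

```python
def triangular(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_priority(laxity, utilization):
    """Toy fuzzy priority for a task: 'urgent' when laxity is small,
    damped when the processor is already highly utilized."""
    urgent = triangular(laxity, -1.0, 0.0, 10.0)   # small laxity -> urgent
    busy = triangular(utilization, 0.5, 1.0, 1.5)  # high load -> busy
    # Rule: priority follows urgency, reduced if the processor is busy.
    return max(0.0, urgent * (1.0 - 0.5 * busy))

print(fuzzy_priority(laxity=2.0, utilization=0.6))  # tight task: high priority
print(fuzzy_priority(laxity=8.0, utilization=0.6))  # slack task: low priority
```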

Keywords: Computational complexity, Deadline, Feasible scheduling, Fuzzy scheduling, Priority, Real-time multiprocessor systems, Robustness, System utilization.

7634 Towards a Simulation Model to Ensure the Availability of Machines in Maintenance Activities

Authors: Maryam Gallab, Hafida Bouloiz, Youness Chater, Mohamed Tkiouat

Abstract:

The aim of this paper is to present a model based on multi-agent systems to manage maintenance activities and to ensure the reliability and availability of machines using just the required resources (operators, tools). The interest of simulation is to cope with the complexity of the system and to obtain results without cost or wasted time. An implementation of the model is carried out on the AnyLogic platform to display the defined performance indicators.

Keywords: Maintenance, complexity, simulation, multi-agent systems, AnyLogic platform.

7633 Time Synchronization between the eNBs in E-UTRAN under the Asymmetric IP Network

Authors: M. Kollar, A. Zieba

Abstract:

In this paper, we present a method for time synchronization between two eNodeBs (eNBs) in an E-UTRAN (Evolved Universal Terrestrial Radio Access Network). The two eNBs cooperate in a so-called inter-eNB CA (Carrier Aggregation) case and are connected via an asymmetric IP network. We solve the problem by using broadcast signals generated in E-UTRAN as synchronization signals. The results show that time synchronization with the proposed method is possible with an error significantly below 1 ms, which is sufficient considering that the transmission time interval in E-UTRAN is 1 ms. This makes the method, which has low complexity, more suitable than the Network Time Protocol (NTP) for mobile applications with generated broadcast signals where time synchronization over an asymmetric network is required.
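
For contrast, a sketch of the classic NTP offset estimate and the error it incurs on an asymmetric path, which is the weakness the broadcast-based method avoids.

```python
def ntp_offset(t0, t1, t2, t3):
    """Classic NTP offset estimate from a request/response exchange
    (t0: client send, t1: server receive, t2: server send, t3: client
    receive). It assumes equal forward and return path delays; with
    asymmetric delays the estimate is off by half the asymmetry."""
    return ((t1 - t0) + (t2 - t3)) / 2.0

# True clock offset 0; forward delay 10 ms, return delay 2 ms.
t0, fwd, ret, proc = 0.000, 0.010, 0.002, 0.001
t1 = t0 + fwd
t2 = t1 + proc
t3 = t2 + ret
print(ntp_offset(t0, t1, t2, t3))  # 0.004 s error = (fwd - ret) / 2
```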

Keywords: E-UTRAN, IP scheduled throughput, initial burst delay, synchronization, NTP, delay, asymmetric network.

7632 Computing the Loop Bound in Iterative Data Flow Graphs Using Natural Token Flow

Authors: Ali Shatnawi

Abstract:

Signal processing applications that are iterative in nature are best represented by data flow graphs (DFG). In these applications, the maximum sampling frequency is dependent on the topology of the DFG, the cyclic dependencies in particular. The determination of the iteration bound, which is the reciprocal of the maximum sampling frequency, is critical in the process of hardware implementation of signal processing applications. In this paper, a novel technique to compute the iteration bound is proposed. This technique differs from all previously proposed techniques in that it is based on the natural flow of tokens into the DFG rather than the topology of the graph. The proposed algorithm has lower run-time complexity than all known algorithms. The performance of the proposed algorithm is illustrated through analysis of the time complexity, as well as through simulation of some benchmark problems.
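
For reference, a sketch of the classical cycle-based definition of the iteration bound, computed here by brute-force cycle enumeration with networkx; the paper's token-flow algorithm computes the same quantity without enumerating cycles.

```python
import networkx as nx

def iteration_bound(dfg):
    """Iteration bound of an iterative DFG: the maximum over all directed
    cycles of (total node computation time) / (total edge delays)."""
    best = 0.0
    for cycle in nx.simple_cycles(dfg):
        t = sum(dfg.nodes[v]["time"] for v in cycle)
        d = sum(dfg.edges[cycle[i], cycle[(i + 1) % len(cycle)]]["delay"]
                for i in range(len(cycle)))
        if d > 0:
            best = max(best, t / d)
    return best

g = nx.DiGraph()
g.add_node("A", time=2)
g.add_node("B", time=4)
g.add_edge("A", "B", delay=1)   # one register (delay) on each edge
g.add_edge("B", "A", delay=1)
print(iteration_bound(g))       # (2 + 4) / (1 + 1) = 3.0
```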

Keywords: Data flow graph, Iteration period bound, Rate-optimal scheduling, Recursive DSP algorithms.
