Search results for: time complexity measurements.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7681

7531 Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem

Authors: G. M. Karthik, Ramachandra V. Pujeri

Abstract:

The Generalized Center String (GCS) problem generalizes the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the longest center string must be found even though the motif positions within the sequences are not known in advance in any particular biological gene process. GCS can be solved by frequent pattern-mining techniques and is known to be fixed-parameter tractable with respect to the input sequence length and symbol set size. The efficient Bpriori algorithms can solve GCS with reasonable time/space complexity; the Bpriori 2 and Bpriori 3-2 algorithms handle center strings of any length and report the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint Based Frequent Pattern mining (CBFP) technique that integrates the ideas of constraint based mining and FP-tree mining. CBFP mining solves the GCS problem not only for center strings of any length but also for the positions of all their mutated copies in the input sequences. It constructs a trie-like FP-tree to represent the mutated copies of center strings of any length, with constraints that restrain the growth of the consensus tree. The complexity analysis for the CBFP mining technique and the Bpriori algorithm is carried out for both the worst case and the average case, and the algorithm's correctness is compared against the Bpriori algorithm using artificial data.

Keywords: Constraint Based Mining, FP tree, Data mining, GCS problem, CBFP mining technique.
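
As a rough illustration of the candidate explosion the abstract alludes to, the sketch below checks center-string candidates by brute force (toy DNA alphabet and parameters assumed; the Bpriori/CBFP pruning that makes the search tractable is not shown):

```python
from itertools import product

def is_center(candidate, seqs, d):
    """True if every sequence contains a substring within Hamming distance d."""
    L = len(candidate)
    return all(
        any(sum(a != b for a, b in zip(candidate, s[i:i + L])) <= d
            for i in range(len(s) - L + 1))
        for s in seqs)

def gcs_bruteforce(seqs, L, d, alphabet="ACGT"):
    # exponential in L: exactly the candidate explosion that Bpriori/CBFP prune
    return [c for c in (''.join(p) for p in product(alphabet, repeat=L))
            if is_center(c, seqs, d)]

print(gcs_bruteforce(["ACGTAC", "TTACGT", "ACGGGT"], L=4, d=1))
```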

7530 Software Reliability Prediction Model Analysis

Authors: L. Mirtskhulava, M. Khunjgurua, N. Lomineishvili, K. Bakuria

Abstract:

Software reliability prediction makes it possible to estimate the software failure rate at any point during system test, and a software reliability prediction model provides a technique for improving reliability. Software reliability is an important factor in estimating overall system reliability, which depends on the individual component reliabilities; it differs from hardware reliability in that it reflects design perfection. The main cause of software reliability problems is the high complexity of software, and various approaches can be used to improve reliability. In this article we focus on a software reliability model that assumes time redundancy, whose value (the number of repeated transmissions of basic blocks) can serve as an optimization parameter. The mathematical model allows not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of the paper is to find the time distribution function (DF) of the transmission of an instruction sequence consisting of a random number of basic blocks. The system software is considered unreliable, with exponentially distributed times between adjacent failures.

Keywords: Exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability.
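
A minimal simulation sketch of the model's key quantity, under the simplified rule that an exponentially distributed failure during a basic block forces one retransmission of that block (the block count is fixed here for brevity, though the paper treats it as random; all parameter values are illustrative):

```python
import numpy as np

def transmission_df(n_blocks, t_block, lam, taus, trials=20000):
    """Empirical DF of the total transmission time of n_blocks basic blocks when a
    failure (exponential, rate lam) during a block forces its retransmission."""
    rng = np.random.default_rng(0)
    totals = np.empty(trials)
    for i in range(trials):
        t = 0.0
        for _ in range(n_blocks):
            while rng.exponential(1.0 / lam) < t_block:  # failure hits the block
                t += t_block                             # time redundancy: resend
            t += t_block                                 # successful transmission
        totals[i] = t
    return np.array([(totals <= tau).mean() for tau in taus])

print(transmission_df(5, t_block=1.0, lam=0.2, taus=[5, 6, 8, 10]))
```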

7529 An Automated Test Setup for the Characterization of Antenna in CATR

Authors: Faisal Amin, Abdul Mueed, Xu Jiadong

Abstract:

This paper describes the development of fully automated measurement software for antenna radiation pattern measurements in a Compact Antenna Test Range (CATR). The CATR covers a frequency range of 2-40 GHz, and the measurement hardware comprises a network analyzer for transmitting and receiving the microwave signal and a positioner controller to control the motion of the Styrofoam column. The measurement process consists of calibrating the CATR with a Standard Gain Horn (SGH) antenna, followed by gain-versus-angle measurement of the Antenna Under Test (AUT). The software is designed to control a variety of microwave transmitters/receivers and two-axis positioner controllers through the standard General Purpose Interface Bus (GPIB); new network analyzers can be added with a slight modification of the hardware control module. Time-domain gating is implemented to remove unwanted signals and obtain the isolated response of the AUT, and the gated response of the AUT is compared with the calibration data in the frequency domain to obtain the desired results. Data acquisition and processing are implemented in Agilent VEE and Matlab. A variety of experimental measurements with SGH antennas were performed to validate the accuracy of the software. A comparison with existing commercial software packages is presented, and the measured results agree to within 0.2 dB.

Keywords: Antenna measurement, calibration, time-domain gating, VNA, Positioner controller
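
Time-domain gating of the kind described can be sketched in a few lines: transform the measured frequency response to the time domain, zero everything outside the gate, and transform back (a plain rectangular gate is used here for brevity; practical gates are tapered):

```python
import numpy as np

def time_gate(s21, df, t_start, t_stop):
    """Gate a sampled frequency response: IFFT -> zero outside [t_start, t_stop] -> FFT.
    df is the frequency step of the sweep."""
    h = np.fft.ifft(s21)                    # impulse response
    t = np.fft.fftfreq(len(s21), d=df)      # time axis of a frequency-sampled signal
    return np.fft.fft(np.where((t >= t_start) & (t <= t_stop), h, 0))
```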

7528 A Comparison of Adaline and MLP Neural Network based Predictors in SIR Estimation in Mobile DS/CDMA Systems

Authors: Nahid Ardalani, Ahmadreza Khoogar, H. Roohi

Abstract:

In this paper we compare the response of linear and nonlinear neural-network-based prediction schemes for predicting the received Signal-to-Interference power Ratio (SIR) in Direct Sequence Code Division Multiple Access (DS/CDMA) systems. The nonlinear predictor is a Multilayer Perceptron (MLP) and the linear predictor is an Adaptive Linear (Adaline) predictor. We address the problem of complexity by using the Minimum Mean Squared Error (MMSE) principle to select the optimal predictors. The optimized Adaline predictor is compared to the optimized MLP using noisy Rayleigh fading signals with a 1.8 GHz carrier frequency in an urban environment. The results show that the Adaline predictor estimates SIR with the same error as the MLP at user velocities of 5 km/h and 60 km/h, but as the velocity increases to 120 km/h the mean squared error of the MLP becomes twice that of the Adaline predictor. This makes the Adaline predictor, with its lower complexity, more suitable than the MLP for closed-loop power control, where efficient and accurate identification of the time-varying inverse dynamics of the multipath fading channel is required.

Keywords: Power control, neural networks, DS/CDMA mobile communication systems.
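
An Adaline one-step predictor of the kind compared here is essentially an LMS-adapted FIR filter; a minimal sketch (the filter order and step size are assumed values, and the input is taken to be a normalized SIR series):

```python
import numpy as np

def adaline_predict(x, order=4, mu=0.05):
    """One-step-ahead Adaline (LMS) predictor; x is assumed normalized."""
    w = np.zeros(order)
    pred = np.zeros_like(x)
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]         # regressor: the most recent samples
        pred[n] = w @ u
        w += mu * (x[n] - pred[n]) * u   # LMS update on the prediction error
    return pred

t = np.arange(2000)
sir = np.sin(0.05 * t) + 0.1 * np.random.default_rng(1).standard_normal(2000)
p = adaline_predict(sir)                 # compare p[n] against sir[n]
```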

7527 Structure of the Working Time of Nurses in Emergency Departments in Polish Hospitals

Authors: Jadwiga Klukow, Anna Ksykiewicz-Dorota

Abstract:

An analysis of the distribution of nurses' working time constitutes vital information for management in planning employment. The objective of the study was to analyze the distribution of nurses' working time in an emergency department. The study was conducted in the emergency department of a teaching hospital in Lublin, in southeast Poland. The catalogue of activities performed by nurses was compiled by means of continuous observation, and the identified activities were classified into four groups: direct care, indirect care, coordination of work in the department, and personal activities. The distribution of nurses' working time was determined by work sampling (Tippett) at random intervals. The research project was approved by the Research Ethics Committee of the Medical University of Lublin (Protocol 0254/113/2010). On average, nurses spent 31% of their working time on direct care, 47% on indirect care, 12% on coordinating work in the department and 10% on personal activities. The most frequently performed direct care tasks were diagnostic activities (29.23%) and treatment-related activities (27.69%). The study has provided information on the complexity of the performed activities and the utilization of nurses' working time. Enhancing the effectiveness of nursing actions requires a strategy for improved management of the time nurses spend at work; increasing the involvement of auxiliary staff and optimizing communication processes within the team may reduce the time devoted to indirect care for the benefit of direct care.

Keywords: Emergency nurses, nursing care, workload, work sampling.
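
For readers unfamiliar with work sampling, the precision of such proportion estimates follows the binomial standard error; a small worked example (the observation count n is an assumption, as the abstract does not report it):

```python
import math

def ws_std_error(p, n):
    """Standard error of a work-sampling proportion: p observed over n random observations."""
    return math.sqrt(p * (1 - p) / n)

p = 0.31                                   # e.g. the 31% direct-care share
print(p, "+/-", 1.96 * ws_std_error(p, 2000))  # 95% half-width ~0.02 for n = 2000
```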

7526 Promoting Complex Systems Learning through the use of Computer Modeling

Authors: Kamel Hashem, David Mioduser

Abstract:

This paper describes part of a project on Learning-by-Modeling (LbM). Studying complex systems is increasingly important in teaching and learning many science domains, yet many features of complex systems make it difficult for students to develop deep understanding. Previous research indicates that involvement with modeling scientific phenomena and complex systems can play a powerful role in science learning. Some researchers dispute this view, arguing that models and modeling do not contribute to understanding complexity concepts because they increase the cognitive load on students. This study investigates the effect of different modes of involvement in exploring scientific phenomena with computer simulation tools on students' mental models, from the perspective of structure, behavior and function. Quantitative and qualitative methods are used to report on 121 freshman students who engaged in participatory simulations of complex phenomena exhibiting emergent, self-organized and decentralized patterns. Results show that LbM plays a major role in students' formation of complexity concepts.

Keywords: Complexity, Educational technology, Learning by modeling, Mental models

7525 Evaluating Complexity – Ethical Challenges in Computational Design Processes

Authors: J. Partanen

Abstract:

Complexity as a theoretical background has made it easier to understand and explain the features and dynamic behavior of various complex systems. As this common theoretical background has confirmed, borrowing terminology for design from the natural sciences has helped to control and understand urban complexity. Phenomena like self-organization, evolution and adaptation are appropriate for describing the formerly inaccessible characteristics of the complex environment in unpredictable bottom-up systems, and increased computing capacity has been a key element in capturing the chaotic nature of these systems. A paradigm shift in urban planning and architectural design has forced us to give up the illusion of total control in the urban environment, and consequently to seek novel methods for steering development. New methods using dynamic modeling have offered a real option for a more thorough understanding of complexity and urban processes. At best, new approaches may renew design processes so that we get a better grip on the complex world via more flexible processes, support urban environmental diversity, and respond to our needs beyond basic welfare by liberating ourselves from standardized minimalism. A complex system and its features are as such beyond human ethics: self-organization or evolution is neither good nor bad, and their mechanisms are by nature devoid of reason. They are common in urban dynamics, in natural processes and in cities alike. They are features of a complex system, and they cannot be prevented; yet their dynamics can be studied and supported. The paradigm of complexity and the new design approaches have been criticized for a lack of humanity and morality, but the ethical implications of scientific or computational design processes have not been much discussed. It is important to distinguish the (unexciting) ethics of the theory and tools from the ethics of computer-aided processes based on ethical decisions. Urban planning and architecture cannot be based on the survival of the fittest; however, the natural dynamics of the system cannot be impeded on the grounds of being "non-human". In this paper the ethical challenges of using dynamic models are contemplated in light of a few examples of new architecture, dynamic urban models and the literature. It is suggested that ethical challenges in computational design processes could be reframed under the concepts of responsibility and transparency.

Keywords: urban planning, architecture, dynamic modeling, ethics, complexity theory.

7524 Aerodynamic Interaction between Two Speed Skaters Measured in a Closed Wind Tunnel

Authors: Ola Elfmark, Lars M. Bardal, Luca Oggiano, Håvard Myklebust

Abstract:

Team pursuit is a relatively new event in international long track speed skating. For a single speed skater, aerodynamic drag accounts for up to 80% of the braking force, so reducing the drag can greatly improve performance. In a team pursuit the interactions between athletes in near proximity are also essential, but they are not well studied. In this study, systematic measurements of the aerodynamic drag, body posture and relative positioning of speed skaters were performed in the low speed wind tunnel at the Norwegian University of Science and Technology, in order to investigate the aerodynamic interaction between two speed skaters. Drag measurements of static speed skaters drafting, leading and side-by-side, and dynamic drag measurements in synchronized and unsynchronized movement at different distances, were performed. The projected frontal area was measured for all postures and movements, and a blockage correction was applied, as the blockage ratio ranged from 5% to 15% in the different setups. The static drag measurements were performed on two test subjects in two different postures, a low posture and a high posture, and at two distances between the test subjects, 1.5T and 3T, where T is the length of the torso (T = 0.63 m). A drag reduction was observed for the drafting test subject at all distances and configurations, ranging from 11.4% to 39%. The drag of the leading test subject was only influenced at -1.5T, with a largest drag reduction of 5.6%. An increase in drag was seen for all side-by-side measurements; the largest increase observed was 25.7%, at the closest distance between the test subjects, and the smallest was 2.7%, with about 0.7 m between the test subjects. A clear aerodynamic interaction between the test subjects and their postures was observed for most static measurements, with results corresponding well to recent studies. For the dynamic measurements, the leading test subject had a drag reduction of 3% even at -3T. The drafting test subject showed a drag reduction of 15% when in synchronized motion with the leading test subject at 4.5T. The maximal drag reductions for the leading and the drafting test subject were observed when skating as close as possible in sync, at 8.5% and 25.7% respectively. This study emphasizes the importance of keeping a synchronized movement by showing that the maximal gains for the leading and drafting test subjects dropped to 3.2% and 3.3% respectively when the skaters were in opposite phase. Individual differences in technique also appear to influence the drag of the other test subject.

Keywords: Aerodynamic interaction, drag cycle, drag force, frontal area, speed skating.
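
To put the drag percentages in context, a back-of-the-envelope drag force calculation (all values below are illustrative assumptions, not measurements from the paper):

```python
# drag force D = 0.5 * rho * v^2 * Cd * A
rho, v = 1.25, 14.0          # assumed air density (kg/m^3) and skating speed (m/s)
cda = 0.26                   # assumed drag area Cd*A for a low posture (m^2)
drag = 0.5 * rho * v**2 * cda
print(drag)                  # ~32 N on the leader
print(drag * (1 - 0.257))    # drafting in sync: the reported 25.7% reduction
```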

7523 MIMCA: A Modelling and Simulation Approach in Support of the Design and Construction of Manufacturing Control Systems Using Modular Petri net

Authors: S. Ariffin, K. Hasnan, R.H. Weston

Abstract:

A new generation of manufacturing machine control, the so-called MIMCA (modular and integrated machine control architecture), capable of handling much increased complexity in manufacturing control systems, is presented. The requirement for more flexible and effective control systems for manufacturing machine systems is investigated and dimensioned, which highlights a need for improved means of coordinating and monitoring production machinery and the equipment used to transport material. MIMCA, which supports simulation based on machine modeling, was conceived by the authors to address these issues. Essentially, MIMCA comprises an organized unification of selected architectural frameworks and modeling methods, including NIST-RCS, UMC and Colored Timed Petri nets (CTPN). The unification supports the design and construction of hierarchical and distributed machine control, realizing the concurrent operation of reusable and distributed machine control components, the ability to handle growing complexity, and the requirements of real-time control systems. MIMCA thus enables mapping between 'what a machine should do' and 'how the machine does it' in a well-defined but flexible way, designed to facilitate reconfiguration of machine systems.

Keywords: Machine control, architectures, Petri nets, modularity, modeling, simulation.
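
As background to the CTPN-based modeling, a minimal place/transition net with its firing rule can be sketched as follows (an uncolored, untimed toy net; CTPNs additionally attach data and firing delays to tokens):

```python
# a minimal place/transition net: a transition fires if all input places hold tokens
marking = {"idle": 1, "part_ready": 1, "busy": 0}
transitions = {
    "start_cycle": ({"idle": 1, "part_ready": 1}, {"busy": 1}),
    "end_cycle":   ({"busy": 1},                  {"idle": 1}),
}

def fire(name):
    pre, post = transitions[name]
    if all(marking[p] >= w for p, w in pre.items()):   # enabled?
        for p, w in pre.items():
            marking[p] -= w                            # consume input tokens
        for p, w in post.items():
            marking[p] += w                            # produce output tokens
        return True
    return False

fire("start_cycle")   # marking -> {'idle': 0, 'part_ready': 0, 'busy': 1}
```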

7522 Short Time Identification of Feed Drive Systems using Nonlinear Least Squares Method

Authors: M.G.A. Nassef, Linghan Li, C. Schenck, B. Kuhfuss

Abstract:

Design and modeling of nonlinear systems require knowledge of all the internally acting parameters and effects. An empirical alternative is to identify the system's transfer function from input and output data as a black-box model. This paper presents a procedure using a least squares algorithm for identifying the coefficients of a feed drive system in the time domain, using a reduced model based on windowed input and output data. The command and response of the axis are first measured over the first 4 ms, and least squares is then applied to estimate the transfer function coefficients for this displacement segment. From the identified coefficients, the subsequent command response segments are predicted. The obtained results reveal a considerable potential of the least squares method to identify the system's time-based coefficients and to predict the command response accurately, as compared with measurements.

Keywords: feed drive systems, least squares algorithm, online parameter identification, short time window
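
A sketch of the windowed least squares identification idea, using a discrete ARX model as a stand-in for the paper's reduced model (model orders and the window length are assumptions):

```python
import numpy as np

def identify_arx(u, y, na=2, nb=2):
    """Least squares fit of y[k] = -a1 y[k-1] - ... + b1 u[k-1] + ... over a window."""
    n = max(na, nb)
    Phi = np.array([np.r_[-y[k - na:k][::-1], u[k - nb:k][::-1]]
                    for k in range(n, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]          # a (denominator), b (numerator)

def predict(a, b, u, y_init):
    """Simulate the identified model to estimate the following response segment."""
    y = list(y_init)
    for k in range(len(y_init), len(u)):
        y.append(-np.dot(a, y[-1:-len(a) - 1:-1])
                 + np.dot(b, u[k - len(b):k][::-1]))
    return np.array(y)
```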

7521 Scalable Systolic Multiplier over Binary Extension Fields Based on Two-Level Karatsuba Decomposition

Authors: Chiou-Yng Lee, Wen-Yo Lee, Chieh-Tsai Wu, Cheng-Chen Yang

Abstract:

Shifted polynomial basis (SPB) is a variation of the polynomial basis representation. SPB has potential for efficient bit-level and digit-level implementations of multiplication over binary extension fields with subquadratic space complexity. For efficient implementation of pairing computation with large finite fields, this paper presents a new SPB multiplication algorithm based on Karatsuba schemes and uses it to derive a novel scalable multiplier architecture. Analytical results show that the proposed multiplier provides a trade-off between space and time complexities. The proposed multiplier is modular, regular and suitable for very large scale integration (VLSI) implementation. It involves less area complexity than multipliers based on traditional decomposition methods, and is therefore more suitable for efficient hardware implementation of pairing-based cryptography and elliptic curve cryptography (ECC) in constraint-driven applications.

Keywords: Digit-serial systolic multiplier, elliptic curve cryptography (ECC), Karatsuba algorithm (KA), shifted polynomial basis (SPB), pairing computation.
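
One level of the Karatsuba decomposition over GF(2)[x], applied recursively, can be sketched as follows (polynomials packed into Python integers, with bit i holding the coefficient of x^i; the paper's SPB representation and systolic mapping are not shown):

```python
def gf2_mul(a, b, n):
    """Karatsuba multiplication in GF(2)[x]; n bounds the bit length of both operands."""
    if n <= 8:                       # schoolbook (shift-and-xor) base case
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r
    h = n // 2
    a0, a1 = a & ((1 << h) - 1), a >> h      # a = a1*x^h + a0
    b0, b1 = b & ((1 << h) - 1), b >> h
    p0 = gf2_mul(a0, b0, h)
    p1 = gf2_mul(a1, b1, n - h)
    pm = gf2_mul(a0 ^ a1, b0 ^ b1, n - h)    # three half-size products, not four
    return p0 ^ ((pm ^ p0 ^ p1) << h) ^ (p1 << (2 * h))

assert gf2_mul(0b101, 0b11, 4) == 0b1111     # (x^2+1)(x+1) = x^3+x^2+x+1 over GF(2)
```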

7520 Design and Characteristics of New Test Facility for Flat Plate Boundary Layer Research

Authors: N. Patten, T. M. Young, P. Griffin

Abstract:

Preliminary results for a new flat plate test facility are presented here in the form of Computational Fluid Dynamics (CFD), flow visualisation, pressure measurements and thermal anemometry. The results from the CFD and flow visualisation show the effectiveness of the plate design, with the trailing edge flap anchoring the stagnation point on the working surface and reducing the extent of the leading edge separation. The flow visualisation demonstrates the two-dimensionality of the flow at the location where the thermal anemometry measurements are obtained. Measurements of the boundary layer mean velocity profiles compare favourably with the Blasius solution, thereby allowing comparison of future measurements with the wealth of data available on zero-pressure-gradient Blasius flows. Results for the skin friction, boundary layer thickness, friction velocity and wall shear stress are shown to agree well with Blasius theory, with a maximum experimental deviation from theory of 5%. Two turbulence-generating grids have been designed and characterized, and it is shown that the turbulence decay downstream of both grids agrees with established correlations and that the turbulence depends little on the freestream velocity.

Keywords: CFD, Flow Visualisation, Thermal Anemometry, Turbulence Grids.
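
The Blasius reference profile used for comparison can be reproduced numerically by shooting on f''(0) for the similarity equation f''' + 0.5 f f'' = 0 with f(0) = f'(0) = 0 and f'(inf) -> 1; a minimal sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius(eta, y):                 # y = [f, f', f'']
    return [y[1], y[2], -0.5 * y[0] * y[2]]

def shoot(fpp0):                     # residual of the far-field condition f'(inf) = 1
    sol = solve_ivp(blasius, [0, 10], [0.0, 0.0, fpp0], rtol=1e-8)
    return sol.y[1, -1] - 1.0

fpp0 = brentq(shoot, 0.1, 1.0)       # converges to the classical value ~0.332
eta = np.linspace(0, 8, 200)
u_over_U = solve_ivp(blasius, [0, 8], [0.0, 0.0, fpp0],
                     t_eval=eta, rtol=1e-8).y[1]   # mean velocity profile u/U = f'(eta)
```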

7519 Optimal Solution of Constraint Satisfaction Problems

Authors: Jeffrey L. Duffany

Abstract:

An optimal solution to a large number of constraint satisfaction problems can be found using the technique of substitution and elimination of variables, analogous to the technique used to solve systems of equations. A decision function f(A) = max(A^2) is used to determine which variables to eliminate. The algorithm can be expressed in six lines and is remarkable in both its simplicity and its ability to find an optimal solution; however, it is inefficient in that it needs to square the updated A matrix after each variable elimination. To overcome this inefficiency the algorithm is analyzed, and it is shown that the A matrix only needs to be squared once, at the first step of the algorithm, and then incrementally updated in subsequent steps, resulting in a significant improvement and an algorithm complexity of O(n^3).

Keywords: Algorithm, complexity, constraint, NP-complete.
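
The incremental update can be sketched as a rank-one downdate: since (A^2)[i, j] = sum over m of A[i, m] A[m, j], deleting variable k only removes the m = k terms. This is a generic sketch of the idea, not the paper's exact bookkeeping; the use of the row-maximum of A^2 to pick k is one plausible reading of the decision function:

```python
import numpy as np

def eliminate(A, S, k):
    """Drop variable k from A while downdating S = A @ A instead of re-squaring."""
    S = S - np.outer(A[:, k], A[k, :])          # remove the m = k contribution
    keep = np.flatnonzero(np.arange(len(A)) != k)
    return A[np.ix_(keep, keep)], S[np.ix_(keep, keep)]

A = (np.random.default_rng(0).random((6, 6)) < 0.4).astype(float)
S = A @ A                                        # squared once, up front
A2, S2 = eliminate(A, S, k=int(S.max(axis=1).argmax()))
assert np.allclose(S2, A2 @ A2)                  # downdate matches re-squaring
```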

7518 Physical Verification Flow on Multiple Foundries

Authors: R. Abdul Wahab, R. Mohd Fuad Tengku Aziz, N. Othman, S. Saleh, N. Razali, M. Al Baqir Zinal Abidin, M. Hanif Md Nasir

Abstract:

This paper discusses how we optimized the physical verification flow in our IC design department, which must handle rule decks from multiple foundries. Our ultimate goal is faster time to tape-out and the avoidance of schedule delays. Physical verification runtimes and memory usage have drastically increased with the growing number of design rules, design complexity, and the size of the chips to be verified. To manage design violations, we use a number of solutions to reduce the number of violations that physical verification engineers need to check. The most important functions in physical verification are DRC (design rule check), LVS (layout vs. schematic), and XRC (extraction). Since we tape out designs to multiple foundries, we need a flow that improves the overall turnaround time and the ease of use of the physical verification process. The demand for fast turnaround is all the more critical because the physical design is the last stage before sending the layout to the foundries.

Keywords: Physical verification, DRC, LVS, XRC, flow, foundry, runset.

7517 Simple Agents Benefit Only from Simple Brains

Authors: Valeri A. Makarov, Nazareth P. Castellanos, Manuel G. Velarde

Abstract:

In order to answer the general question, "What does a simple agent with a limited lifetime require to construct a useful representation of the environment?", we propose a robot platform comprising the simplest probabilistic sensory and motor layers. We then use the platform as a test bed for evaluating the navigational capabilities of the robot with different "brains". We claim that protocognitive behavior is not a consequence of highly sophisticated sensory-motor organs but instead emerges through an increment of internal complexity and reutilization of minimal sensory information. We show that the most fundamental robot element, the short-time memory, is essential in obstacle avoidance; however, in the simplest condition of no obstacles the straightforward memoryless robot is usually superior. We also demonstrate how low-level action planning, involving essentially nonlinear dynamics, provides a considerable gain in robot performance by dynamically changing the robot's strategy. Still, for a very short lifetime the brainless robot is superior. Accordingly, we suggest that small organisms (or agents) with short lifetimes do not require complex brains and can even benefit from simple brain-like (reflex) structures. To some extent this may mean that the controlling blocks of modern robots are too complicated compared to their lifetimes and mechanical abilities.

Keywords: Neural network, probabilistic control, robot navigation.

7516 Estimation of Component Reusability through Reusability Metrics

Authors: Aditya Pratap Singh, Pradeep Tomar

Abstract:

Software reusability is an essential characteristic of Component-Based Software (CBS), and component reusability is an important asset for the effective reuse of components in CBS. The attributes of reusability proposed by various researchers are studied, and four of them are identified as potential factors affecting reusability. This paper proposes a metric for estimating the reusability of black-box software components, along with metrics for interface complexity, understandability, customizability and reliability. An experiment estimating reusability is performed through a case study on a sample web application using a real-world component.

Keywords: Component-based software, component reusability, customizability, interface complexity, reliability, understandability.
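
A reusability estimate of this general shape can be sketched as a weighted combination of the four factor scores (the weights, the scoring scale and the function name are hypothetical; the paper's actual metric definitions are not reproduced in the abstract):

```python
def reusability(interface_complexity, understandability, customizability, reliability,
                weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical combination: all factor scores normalized to [0, 1].
    Interface complexity counts against reuse, so it is inverted."""
    factors = (1 - interface_complexity, understandability,
               customizability, reliability)
    return sum(w * f for w, f in zip(weights, factors))

print(reusability(0.3, 0.8, 0.6, 0.9))   # -> 0.75 on the assumed scale
```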

7515 Hardness Variations as Affected by Bar Diameter of AISI 4140 Steel

Authors: Hamad K. Al-Khalid, Ayman M. Alaskari, Samy E. Oraby

Abstract:

The hardness of widely used structural steels is of vital importance, since it may help in determining many mechanical properties of a material under load. To obtain reliable information for design, the homogeneity of properties should be validated. In the current study, the hardness variation across different diameters of the same AISI 4140 bar is investigated. Measurements were taken on the two faces of the stock at eight equally spaced sectors and fifteen layers. Statistical and graphical analyses are performed to assess the distribution of hardness measurements over the specified area. The hardness measurements showed some degree of dispersion, within about ±10% of the nominal value provided by the manufacturer. The hardness was found to follow a slightly decreasing trend as the diameter is reduced. However, an opposite behavior is noticed with respect to the sequence of sectors, indicating a non-uniform distribution over the same area, whether on the same face or at the corresponding sector on the other face (cross section) of the same bar.

Keywords: Hardness, hardness variation, AISI 4140 steel, bar diameter, statistical analysis.

7514 Measures and Influence of a Baw Filter on Digital Radio-Communications Signals

Authors: A. Diet, M. Villegas, G. Baudoin

Abstract:

This work concerns measurements of the S-parameters of a Bulk Acoustic Wave (BAW) emission filter and compares them with simulated prototypes. Using HP-ADS, a co-simulation of the filters' characteristics in a digital radio-communication chain is performed. Four modulation schemes are studied in order to illustrate the impact of the spectral occupation of the modulated signal. Results of the simulations and co-simulations are given in terms of error vector measurements, which are useful for a general sensitivity analysis of 3rd/4th generation emitters (wideband QAM and OFDM signals).

Keywords: RF architectures, BAW filters.

7513 Parametric Design as an Approach to Respond to Complexity

Authors: Sepideh Jabbari Behnam, Zahrasadat Saide Zarabadi

Abstract:

A city is a texture intertwined from the relationships of different components united into a whole, so designing and planning this complex whole is not an easy matter. Considering that a city is a complex system with countless components and communications, flexible layouts that can respond to the unpredictable character of the city, which results from its complexity, are indispensable. The parametric design approach, as a new approach, can produce flexible and transformable layouts at any stage of design. This study aims to introduce parametric design as a modern approach for responding to complex urban issues, using descriptive and analytical methods. The paper first introduces complex systems and then briefly characterizes them. Flexibility of design and layout is another matter that should be considered in responding to and simulating complex urban systems, and it is discussed here as well. In this regard, after describing the parametric approach as a flexible approach and an appropriate tool for responding to features such as limited predictability, reciprocating nature, complex communications, sensitivity to initial conditions, and hierarchy, the paper introduces parametric design.

Keywords: Complexity theory, complex system, flexibility, parametric design.

7512 Unscented Transformation for Estimating the Lyapunov Exponents of Chaotic Time Series Corrupted by Random Noise

Authors: K. Kamalanand, P. Mannar Jawahar

Abstract:

Many systems in the natural world exhibit chaos or non-linear behavior, the complexity of which is so great that they appear to be random. Identification of chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals which are corrupted by a random noise. In this work, a method for estimation of Lyapunov exponents from noisy time series using unscented transformation is proposed. The proposed methodology was validated using time series obtained from known chaotic maps. In this paper, the objective of the work, the proposed methodology and validation results are discussed in detail.

Keywords: Lyapunov exponents, unscented transformation, chaos theory, neural networks.
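
For a known chaotic map, the largest Lyapunov exponent used in such validation can be computed directly from the average log-derivative along the orbit; a minimal sketch on the logistic map (the unscented-transformation step for noisy data is not shown):

```python
import numpy as np

r, x = 4.0, 0.3
orbit = []
for _ in range(10_000):
    x = r * x * (1.0 - x)            # logistic map x -> r x (1 - x)
    orbit.append(x)
orbit = np.array(orbit[100:])        # discard the transient
lyap = np.log(np.abs(r * (1.0 - 2.0 * orbit))).mean()   # mean log |f'(x)|
print(lyap)                          # ~ln 2 = 0.693 for the fully chaotic r = 4
```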

7511 Multi Switched Split Vector Quantization of Narrowband Speech Signals

Authors: M. Satya Sai Ram, P. Siddaiah, M. Madhavi Latha

Abstract:

Vector quantization is a powerful tool for speech coding applications. This paper deals with LPC coding of speech signals using a new technique called Multi Switched Split Vector Quantization (MSSVQ), a hybrid of multistage, switched, and split vector quantization techniques. The spectral distortion performance, computational complexity and memory requirements of MSSVQ are compared to those of split vector quantization (SVQ), multistage vector quantization (MSVQ) and switched split vector quantization (SSVQ). The results show that MSSVQ has better spectral distortion performance, lower computational complexity and lower memory requirements than all of the above product-code vector quantization techniques. Computational complexity is measured in floating point operations (flops), and memory requirements are measured in floats.

Keywords: Linear predictive coding, multi stage vector quantization, switched split vector quantization, split vector quantization, Line Spectral Frequencies (LSF).
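
The split VQ building block that the hybrid scheme builds on can be sketched in a few lines: split each vector in half, train a codebook per half, and quantize the halves independently (toy training data; the codebook sizes are assumed):

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

rng = np.random.default_rng(0)
train = rng.random((2000, 10))               # stand-in for 10-dim LSF vectors
halves = np.split(train, 2, axis=1)          # split VQ: two 5-dim sub-vectors
books = [kmeans(h, 64)[0] for h in halves]   # one 6-bit codebook per half

def split_vq(v):
    """Quantize each half against its own codebook; return indices and reconstruction."""
    idx = [int(vq(h.reshape(1, -1), cb)[0][0])
           for h, cb in zip(np.split(v, 2), books)]
    return idx, np.concatenate([cb[i] for cb, i in zip(books, idx)])
```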

7510 A New Analytical Approach to Reconstruct Residual Stresses Due to Turning Process

Authors: G.H. Farrahi, S.A. Faghidian, D.J. Smith

Abstract:

Due to turning operations, a thin layer with high tensile residual stresses can be found on the component surface, which can dangerously affect the fatigue performance of the component. In this paper an analytical approach is presented to reconstruct the residual stress field from a limited, incomplete set of measurements. The Airy stress function is used as the primary unknown to directly solve the equilibrium equations while satisfying the boundary conditions. The new method has the flexibility to impose the physical conditions that govern the behavior of residual stress, so as to achieve a meaningful, complete stress field. The analysis is also coupled to a least squares approximation and a regularization method to provide stability of the inverse problem. The power of the new method is demonstrated by analyzing experimental measurements, achieving good agreement between the model prediction and the results obtained from residual stress measurement.

Keywords: Residual stress, Limited measurements, Inverse problems, Turning process.
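
The least-squares-plus-regularization step can be sketched generically as Tikhonov-stabilized inversion, with G taken to be the (assumed) linear map from stress-function coefficients to the measured stresses:

```python
import numpy as np

def regularized_lsq(G, d, alpha):
    """Minimize ||G m - d||^2 + alpha^2 ||m||^2 via one augmented least squares solve."""
    n = G.shape[1]
    A = np.vstack([G, alpha * np.eye(n)])      # stack the damping rows under G
    rhs = np.concatenate([d, np.zeros(n)])
    return np.linalg.lstsq(A, rhs, rcond=None)[0]
```

The regularization weight alpha trades fidelity to the sparse measurements against smoothness of the reconstructed field, which is what stabilizes the inverse problem.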

7509 Preserved Relative Differences between Regions of Different Thermal Scans

Authors: Tahir Majeed, Michael Handschuh, René Meier

Abstract:

Rheumatoid arthritis patients have swelling and pain in the joints of the hand, and the regions where a patient feels pain also show increased body temperature. Thermal cameras can be used to detect the rise in temperature of the affected regions. To monitor the progression of rheumatoid arthritis, patients must visit the clinic regularly for scanning and examination; after scanning and evaluation, the dosage of the medicine is adjusted accordingly. To monitor disease progression over time, the correlation between images from different visits must be established. It has been observed that thermal measurements do not remain the same over time, or even within a single scanning sequence, when low-cost thermal cameras are used; in some situations, temperatures can vary by as much as 2°C within the same scanning sequence. In this paper it is shown that although the absolute temperature varies over time, the relative differences between regions remain similar. Results computed over four scanning sequences are presented.

Keywords: Relative thermal difference, rheumatoid arthritis, thermal imaging, thermal sensors.
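
The paper's invariant can be checked with a few lines of array code: compute per-region mean temperatures and reference them to one region, so that a global offset drifting between scans cancels (the region definitions here are hypothetical):

```python
import numpy as np

def relative_profile(scan, regions):
    """Per-region mean temperature referenced to the first region, cancelling
    the global offset that drifts between (and within) scanning sequences."""
    means = np.array([scan[r].mean() for r in regions])
    return means - means[0]

# regions are (row-slice, column-slice) pairs over the thermal image, e.g. joint
# areas marked by hand; profiles from different visits can then be compared directly
```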

7508 New Hybrid Method to Correct for Wind Tunnel Wall- and Support Interference On-line

Authors: B. J. C. Horsten, L. L. M. Veldhuis

Abstract:

Because support interference corrections are not properly understood, engineers mostly rely on expensive dummy measurements or CFD calculations. This paper presents a method based on uncorrected wind tunnel measurements and fast calculation techniques (a hybrid method) to calculate wall interference, support interference and residual interference (when, e.g., a support member closely approaches the wind tunnel walls) for any type of wind tunnel and support configuration. The method provides a simple formula for calculating the interference gradient. This gradient is based on the uncorrected measurements and a successive calculation of the slopes of the interference-free aerodynamic coefficients. For the latter purpose a new vortex-lattice routine is developed that corrects the slopes for viscous effects. A test case measurement on a wing proves the value of the hybrid method, as trends and orders of magnitude of the interference are correctly determined.

Keywords: Hybrid method, support interference, wall interference, wind tunnel corrections.

7507 Enhanced Shell Sorting Algorithm

Authors: Basit Shahzad, Muhammad Tanvir Afzal

Abstract:

Many algorithms are available for sorting unordered elements, the most important being Bubble sort, Heap sort, Insertion sort and Shell sort, each with its own pros and cons. Shell sort, an enhanced version of insertion sort, reduces the number of swaps of the elements being sorted, minimizing complexity and time compared to insertion sort; it improves the efficiency of insertion sort by quickly shifting values toward their destination. Average sort time is O(n^1.25), while worst-case time is O(n^1.5). The algorithm performs a series of iterations, each swapping elements of the array in such a way that in the last iteration, when the value of h is one, the number of swaps is reduced. Donald L. Shell invented a formula to calculate the value of h. This work identifies an improvement to the conventional Shell sort algorithm: the "Enhanced Shell Sort algorithm" improves the way the value of h is calculated. It has been observed that by applying this algorithm, the number of swaps can be reduced by up to 60 percent compared to the existing algorithm, and in some cases the enhancement was found to be faster than the existing algorithms.

Keywords: Algorithm, Computation, Shell, Sorting.
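
For reference, conventional Shell sort with Shell's original gap sequence h = n/2, n/4, ..., 1 is sketched below; the paper's enhancement replaces the gap formula, which the abstract does not specify:

```python
def shell_sort(a):
    """Shell sort with Shell's original gaps (gapped insertion sort per pass)."""
    n, h = len(a), len(a) // 2
    while h > 0:
        for i in range(h, n):
            key, j = a[i], i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]      # shift values quickly toward their destination
                j -= h
            a[j] = key
        h //= 2                      # the gap formula is what the paper enhances
    return a

print(shell_sort([5, 2, 9, 1, 7, 3]))   # [1, 2, 3, 5, 7, 9]
```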

7506 CoSP2P: A Component-Based Service Model for Peer-to-Peer Systems

Authors: Candido Alcaide, Manuel Díaz, Luis Llopis, Antonio Marquez, Bartolome Rubio, Enrique Soler

Abstract:

The increasing complexity of software development based on peer-to-peer networks makes the creation of new frameworks necessary in order to simplify the developer's task. Additionally, some applications, e.g. fire detection or security alarms, may impose real-time constraints, and a high-level definition of these features eases application development. In this paper, a service model based on a component model with real-time features is proposed. The high-level model abstracts developers from implementation tasks such as discovery, communication, security and real-time requirements. The model is oriented toward deploying services on small mobile devices, such as sensors, mobile phones and PDAs, where computation is lightweight. Services can be composed with one another by means of the port concept to form complex ad-hoc systems, and their implementation is carried out using a component language called UM-RTCOM. To illustrate the proposals, a fire detection application is described.

Keywords: Peer-to-peer, mobile systems, real-time, service-oriented architecture.

7505 Applying Kinect on the Development of a Customized 3D Mannequin

Authors: Shih-Wen Hsiao, Rong-Qi Chen

Abstract:

In the field of fashion design, the 3D mannequin is an assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems; it is therefore critical to develop a 3D mannequin module that meets the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. Ergonomic measurements of objective human features are attained in real time with the Kinect depth camera, and mesh morphing is then implemented by transforming the locations of control points on the model according to those ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are revised for accuracy and smoothness, a complete human shape is reconstructed by the ICP algorithm together with image processing methods, and the objective human features can be recognized, analyzed and measured. Furthermore, the ergonomic measurements can be applied to shape morphing of the 3D mannequin, which is divided by feature curves. Because subdivision yields a standardized and customer-oriented 3D mannequin, this research can be applied to fashion design and to the presentation and display of 3D virtual clothes. To examine the practicality of the proposed structure, a 3D mannequin system was implemented in Java, and experiments confirm its practicability.

Keywords: 3D mannequin, Kinect scanner, iterative closest point, shape morphing, subdivision.
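
One iteration of the ICP alignment mentioned above, matching nearest neighbours and then solving the best-fit rigid transform by SVD (Kabsch), can be sketched as follows:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One ICP iteration on (n, 3) point arrays: nearest-neighbour matching,
    then the best-fit rotation/translation applied to the source points."""
    q = dst[cKDTree(dst).query(src)[1]]            # matched target points
    pc, qc = src - src.mean(0), q - q.mean(0)      # centre both clouds
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = q.mean(0) - R @ src.mean(0)
    return src @ R.T + t                           # iterate until convergence
```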

7504 Coding based Synchronization Algorithm for Secondary Synchronization Channel in WCDMA

Authors: Deng Liao, Dongyu Qiu, Ahmed K. Elhakeem

Abstract:

A new code synchronization algorithm is proposed in this paper for the secondary cell-search stage in wideband CDMA systems. Rather than using the Cyclically Permutable (CP) code in the Secondary Synchronization Channel (S-SCH) to simultaneously determine the frame boundary and scrambling code group, the new synchronization algorithm implements the same function with less system complexity and a shorter Mean Acquisition Time (MAT). The Secondary Synchronization Code (SSC) is redesigned by splitting it into two sub-sequences, and the scrambling code group information is treated as data bits protected by simple time-diversity BCH coding for further reliability. This avoids involved and time-costly Reed-Solomon (RS) code computations and comparisons. Analysis and simulation results show that the Synchronization Error Rate (SER) yielded by the new algorithm in Rayleigh fading channels is close to that of the conventional algorithm in the standard. The new synchronization algorithm reduces system complexity, shortens the average cell-search time and can be implemented in the slot-based cell-search pipeline. By exploiting antenna diversity and pipelining the correlation processes, the algorithm also lends itself to multiple-antenna systems.

Keywords: WCDMA cell-search, synchronization algorithm, secondary synchronization channel, antenna diversity.

7503 Hardware Implementation of Local Binary Pattern Based Two-Bit Transform Motion Estimation

Authors: Seda Yavuz, Anıl Çelebi, Aysun Taşyapı Çelebi, Oğuzhan Urhan

Abstract:

Demand for devices capable of real-time video transmission is ever-increasing, and high-resolution video has made efficient compression techniques an essential component of capturing and transmitting video data. Motion estimation has a critical role in encoding raw video, and various motion estimation methods have been introduced to compress video efficiently. Motion estimation methods based on low bit-depth representations simplify the computation of the matching criterion and thus provide a small hardware footprint. In this paper, a hardware implementation of a two-bit-transform-based low-complexity motion estimation method using the local binary pattern approach is proposed. Image frames are represented at two-bit depth instead of full depth by using the local binary pattern as the binarization approach, and the binarization part of the hardware architecture is explained in detail. Experimental results demonstrate the differences between the proposed hardware architecture and the architectures of well-known low-complexity motion estimation methods in terms of important aspects such as resource utilization and energy and power consumption.

Keywords: Binarization, hardware architecture, local binary pattern, motion estimation, two-bit transform.
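
A two-bit-transform matching criterion can be sketched as below, with mean/standard-deviation thresholds standing in for the paper's LBP-derived bit planes (the thresholds here are an assumption):

```python
import numpy as np

def two_bit_planes(block):
    """Two bit planes from assumed thresholds (mean, mean + std); the paper
    derives its planes from a local binary pattern instead."""
    mu, sigma = block.mean(), block.std()
    return block > mu, block > mu + sigma

def nnmp(cur, ref):
    """Number of non-matching points: XOR per bit plane, OR across planes.
    Replaces the full-depth SAD as the block-matching cost."""
    c1, c2 = two_bit_planes(cur)
    r1, r2 = two_bit_planes(ref)
    return int(np.count_nonzero((c1 ^ r1) | (c2 ^ r2)))
```

Because the cost reduces to XOR/OR/popcount on bit planes, it maps onto far smaller hardware than a full-depth sum of absolute differences, which is the point of low bit-depth motion estimation.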

7502 Some Characteristics of Systolic Arrays

Authors: Halil Snopce, Ilir Spahiu

Abstract:

This paper investigates a possible optimization of some linear algebra problems that can be solved by parallel processing using special arrays called systolic arrays, designed here with the help of special types of transformations. We show the characteristics of these arrays, focusing on the advantages they offer for the parallel computation of matrix products, with a particular approach to designing a systolic array for matrix multiplication. Multiplication of large matrices requires a lot of computational time, with complexity O(n^3), and many algorithms (both sequential and parallel) have been developed to minimize the calculation time; systolic arrays are well suited to this purpose. We show that using an appropriate transformation leads to more optimal arrays for calculations of this type.

Keywords: Data dependences, matrix multiplication, systolic array, transformation matrix.
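
The data flow of a classical n x n systolic array for matrix multiplication can be simulated cycle by cycle; a sketch of the textbook wavefront array (not the paper's transformed design):

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-accurate simulation of an n x n systolic array: row i of A enters from
    the left delayed by i cycles, column j of B from the top delayed by j cycles;
    each PE multiply-accumulates and forwards its operands one step per cycle."""
    n = A.shape[0]
    C = np.zeros((n, n))
    a_reg, b_reg = np.zeros((n, n)), np.zeros((n, n))
    for t in range(3 * n - 2):                # enough cycles to drain the array
        for i in reversed(range(n)):          # reverse order: read last cycle's regs
            for j in reversed(range(n)):
                a = a_reg[i, j - 1] if j else (A[i, t - i] if 0 <= t - i < n else 0.0)
                b = b_reg[i - 1, j] if i else (B[t - j, j] if 0 <= t - j < n else 0.0)
                C[i, j] += a * b
                a_reg[i, j], b_reg[i, j] = a, b
    return C

M = np.arange(9.0).reshape(3, 3)
assert np.allclose(systolic_matmul(M, M.T), M @ M.T)
```

The skewed injection makes A[i, k] and B[k, j] meet at processing element (i, j) at cycle i + j + k, so every PE accumulates exactly its inner product while data pulses through the grid.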
