Search results for: Adaptive System
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8831

8441 Development of Sustainable Building Environmental Model (SBEM) in Hong Kong

Authors: Kwok W. Mui, Ling T. Wong, F. Xiao, Chin T. Cheung, Ho C. Yu

Abstract:

This study addresses a concept of the Sustainable Building Environmental Model (SBEM) developed to optimize energy consumption in air conditioning and ventilation (ACV) systems without any deterioration of indoor environmental quality (IEQ). The SBEM incorporates two main components: an adaptive comfort temperature control module (ACT) and a new carbon dioxide demand control module (nDCV). These two modules take an innovative approach to maintaining IEQ satisfaction at optimum energy consumption and provide a rational basis for effective control. A total of 2133 sets of measurements of indoor air temperature (Ta), relative humidity (Rh) and carbon dioxide concentration (CO2) were collected in Hong Kong offices to investigate the potential of integrating the SBEM. A simulation was used to evaluate the dynamic performance of the energy and air conditioning system with the SBEM integrated in an air-conditioned building; it gives a clear picture of the control strategies and allows the controllers to be pre-tuned before use in real systems. In simulation, the integrated SBEM saved up to 12.3% of overall electricity consumption while keeping the average carbon dioxide concentration within 1000 ppm and occupant dissatisfaction within 20%.
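
As a rough illustration of the two control ideas named above (not the authors' implementation, whose control laws are not given in the abstract), the sketch below pairs a generic proportional CO2 demand-control rule with an ASHRAE-style adaptive comfort setpoint; all gains, limits and the setpoint relation are assumed values.

```python
# Illustrative sketch only: a generic demand-controlled ventilation rule and an
# ASHRAE-style adaptive comfort setpoint, used here as stand-ins for the paper's
# ACT and nDCV modules (their exact control laws are not given in the abstract).

def adaptive_comfort_setpoint(t_outdoor_c: float) -> float:
    """Adaptive comfort temperature (ASHRAE 55-style relation, assumed form)."""
    return 0.31 * t_outdoor_c + 17.8

def demand_controlled_ventilation(co2_ppm: float,
                                  current_flow_m3s: float,
                                  setpoint_ppm: float = 1000.0,
                                  gain: float = 1e-4,
                                  min_flow: float = 0.05,
                                  max_flow: float = 2.0) -> float:
    """Proportional CO2 demand control: raise fresh-air flow when CO2 exceeds
    the setpoint, lower it (to save fan/cooling energy) when below."""
    error = co2_ppm - setpoint_ppm
    new_flow = current_flow_m3s + gain * error
    return max(min_flow, min(max_flow, new_flow))

if __name__ == "__main__":
    flow = 0.5
    for co2 in (650, 900, 1100, 1250, 980):          # example sensor readings
        flow = demand_controlled_ventilation(co2, flow)
        print(f"CO2={co2} ppm -> fresh-air flow {flow:.3f} m3/s, "
              f"comfort setpoint {adaptive_comfort_setpoint(28.0):.1f} degC")
```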

Keywords: Sustainable building environmental model (SBEM), adaptive comfort temperature (ACT), new demand control ventilation (nDCV), energy saving.

8440 Optimizing Dialogue Strategy Learning Using Learning Automata

Authors: G. Kumaravelan, R. Sivakumar

Abstract:

Modeling the behavior of dialogue management in the design of a spoken dialogue system using statistical methodologies is currently a growing research area. This paper presents work on developing an adaptive learning approach to optimize dialogue strategy. At the core of our system is a method that formalizes dialogue management as sequential decision making under uncertainty whose underlying probabilistic structure is a Markov chain. Researchers have mostly focused on model-free algorithms for automating the design of dialogue management using machine learning techniques such as reinforcement learning, but model-free algorithms face a dilemma in balancing exploration against exploitation. Hence we present a model-based online policy learning algorithm using interconnected learning automata for optimizing dialogue strategy. The proposed algorithm derives an optimal policy that prescribes which action should be taken in each state of the conversation so as to maximize the expected total reward for attaining the goal, and it incorporates good exploration and exploitation in its updates to improve the naturalness of human-computer interaction. We test the proposed approach using the PARADISE evaluation framework on a railway information access task.
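
For readers unfamiliar with learning automata, the following minimal sketch shows a single linear reward-inaction (L_R-I) automaton of the kind such interconnected schemes are built from; the action set, reward signal and learning rate are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a single linear reward-inaction (L_R-I) learning automaton,
# the kind of unit that can be interconnected per dialogue state; the reward
# signal, action set and toy environment here are illustrative assumptions.
import random

class LearningAutomaton:
    def __init__(self, n_actions: int, learning_rate: float = 0.1):
        self.p = [1.0 / n_actions] * n_actions   # action probability vector
        self.a = learning_rate

    def choose(self) -> int:
        return random.choices(range(len(self.p)), weights=self.p)[0]

    def update(self, action: int, reward: float) -> None:
        """L_R-I rule: reinforce the chosen action on a favorable response
        (reward > 0), leave the probabilities unchanged on an unfavorable one."""
        if reward > 0:
            for j in range(len(self.p)):
                if j == action:
                    self.p[j] += self.a * (1.0 - self.p[j])
                else:
                    self.p[j] *= (1.0 - self.a)

if __name__ == "__main__":
    la = LearningAutomaton(n_actions=3)
    for _ in range(500):                       # toy environment: action 2 is best
        act = la.choose()
        la.update(act, reward=1.0 if act == 2 and random.random() < 0.8 else 0.0)
    print([round(p, 3) for p in la.p])
```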

Keywords: Dialogue management, Learning automata, Reinforcement learning, Spoken dialogue system

8439 Automatic Authentication of Handwritten Documents via Low Density Pixel Measurements

Authors: Abhijit Mitra, Pranab Kumar Banerjee, C. Ardil

Abstract:

We introduce an effective approach for automatic offline authentication of handwritten samples where the forgeries are skillfully done, i.e., the true and forged samples appear almost alike. Subtle details of the temporal information used in online verification are not available offline and are also hard to recover robustly. Thus spatial dynamic information such as pen-tip pressure characteristics is considered, with emphasis on the extraction of low density pixels. These points result from the ballistic rhythm of a genuine signature, which a forgery, however skillful, always lacks. Ten effective features, including these low density points and the density ratio, are proposed to distinguish a true sample from a forgery. An adaptive decision criterion is also derived for better verification judgements.
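
A toy sketch of the two named feature ideas, the low-density pixel count and the density ratio, might look as follows; the grayscale intensity cut-offs are assumed for illustration and are not the paper's calibrated values.

```python
# Toy sketch: count "low density" (faint, low-pressure) pixels in a grayscale
# signature scan and form a density ratio against the full ink trace.
# The intensity cut-offs below are illustrative assumptions.
import numpy as np

def low_density_features(gray: np.ndarray, ink_thresh=200, faint_range=(120, 200)):
    """gray: 2-D uint8 image, 0 = black ink, 255 = white paper."""
    ink = gray < ink_thresh                                      # all stroke pixels
    faint = (gray >= faint_range[0]) & (gray < faint_range[1])   # low-pressure pixels
    n_ink = max(int(ink.sum()), 1)
    return {"low_density_count": int(faint.sum()),
            "density_ratio": float(faint.sum()) / n_ink}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.full((100, 300), 255, dtype=np.uint8)
    img[40:60, 30:270] = rng.integers(40, 210, size=(20, 240))   # fake stroke band
    print(low_density_features(img))
```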

Keywords: Handwritten document verification, Skilled forgeries, Low density pixels, Adaptive decision boundary.

8438 A New Adaptive Approach for Histogram-Based Mouth Segmentation

Authors: Axel Panning, Robert Niese, Ayoub Al-Hamadi, Bernd Michaelis

Abstract:

The segmentation of mouth and lips is a fundamental problem in facial image analysis. In this paper we propose a method for lip segmentation based on the rg-color histogram. Statistical analysis shows that the rg color space is optimal for a purely color-based segmentation. Initially, a rough adaptive threshold selects a histogram region that assures that all pixels in that region are skin pixels. Based on these pixels we build a Gaussian model that represents the skin pixel distribution and is used to obtain a refined, optimal threshold. We do not incorporate shape or edge information. In experiments we show the performance of our lip pixel segmentation method compared to the ground truth of our dataset and to a conventional watershed algorithm.
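
A rough Python sketch of the described pipeline (rg chromaticity, a conservative initial skin threshold, then a Gaussian refinement) could look like this; the initial threshold box and the Mahalanobis cut-off are illustrative assumptions, not the paper's values.

```python
# Rough sketch of the rg-histogram idea: convert to rg chromaticity, take a
# conservative "certainly skin" threshold, fit a Gaussian to those pixels, then
# re-threshold on Mahalanobis distance. Threshold values are illustrative.
import numpy as np

def rg_chromaticity(img_rgb: np.ndarray) -> np.ndarray:
    s = img_rgb.astype(np.float64).sum(axis=2) + 1e-9
    return np.dstack([img_rgb[..., 0] / s, img_rgb[..., 1] / s])

def segment_skin(img_rgb: np.ndarray, rough_box=((0.35, 0.55), (0.25, 0.40)),
                 maha_thresh: float = 2.5) -> np.ndarray:
    rg = rg_chromaticity(img_rgb)
    (rmin, rmax), (gmin, gmax) = rough_box            # rough "assured skin" region
    rough = ((rg[..., 0] > rmin) & (rg[..., 0] < rmax) &
             (rg[..., 1] > gmin) & (rg[..., 1] < gmax))
    samples = rg[rough].reshape(-1, 2)
    if len(samples) < 10:
        return rough                                   # not enough pixels to refine
    mu = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples.T) + 1e-9 * np.eye(2))
    diff = rg.reshape(-1, 2) - mu
    maha = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared Mahalanobis
    return (maha < maha_thresh ** 2).reshape(rg.shape[:2])

if __name__ == "__main__":
    demo = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
    print(segment_skin(demo).mean())                   # fraction of pixels kept
```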

Keywords: Feature extraction, Segmentation, Image processing, Application

8437 Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences

Authors: Yuan-Hsiang Chang, Pin-Chi Lin, Li-Der Jeng

Abstract:

Advances in image and video processing techniques have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during interaction, a Kalman filter was used to retain a complete trajectory for each human object. Finally, motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of the trajectories related to the speed of the moving objects. On a database of 60 video sequences, our system achieved classification accuracies of 80% for interaction events and 95% for non-interaction events. In summary, we have explored a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated into an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting).
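
The occlusion-handling step can be illustrated with a minimal constant-velocity Kalman filter over object centroids, along the lines below; the noise settings are assumed values, not the paper's.

```python
# Minimal constant-velocity Kalman filter of the kind used to keep a trajectory
# through occlusions; state is [x, y, vx, vy]. Noise settings are illustrative.
import numpy as np

class ConstantVelocityKalman:
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)       # state estimate
        self.P = np.eye(4) * 10.0                                 # state covariance
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                                    # process noise
        self.R = np.eye(2) * r                                    # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]          # predicted position (used during occlusion)

    def update(self, zx, zy):
        z = np.array([zx, zy], dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                  # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

if __name__ == "__main__":
    kf = ConstantVelocityKalman(0, 0)
    for t in range(1, 6):
        kf.predict()
        print(kf.update(t + np.random.randn() * 0.1, 2 * t))      # noisy centroid
```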

Keywords: Motion detection, motion tracking, trajectory analysis, video surveillance.

8436 A Model for Bidding Markup Decision Making Based on Agent Learning

Authors: W. Hou, X. Shan, X. Ye

Abstract:

Bidding is a very important business function for finding latent contractors of construction projects, and the bid markup is one of the most important decisions a bidder makes to gain a reasonable profit. Since the bidding system is a complex adaptive system, a bidding agent needs a learning process to acquire more valuable knowledge for a bid, especially from past public bidding information. In this paper, we propose an iterative agent learning model for bidders to make markup decisions. A classifier for public bidding information named PIBS is developed to make full use of historical data for classifying new bidding information. A simulation and experimental study is performed to show the validity of the proposed classifier. Some factors that affect the validity of PIBS are also analyzed at the end of this work.

Keywords: bidding markup, decision making, agent learning, information similarity.

8435 Modeling of Single-Particle Impact in Abrasive Water Jet Machining

Authors: S. Y. Ahmadi-Brooghani, H. Hassanzadeh, P. Kahhal

Abstract:

This work presents a study of abrasive water jet (AWJ) machining. An explicit finite element analysis (FEA) of a single abrasive particle impact on stainless steel 1.4304 (AISI 304) is conducted. The abrasive water jet machining is modeled with the FEA software ABAQUS/CAE. Crater shapes from the FEM simulation results were compared with previous experimental and FEM work by means of crater sphericity. The influence of impact angle and particle velocity was observed. An adaptive mesh domain is used to model the impact zone. Results are in good agreement with those obtained from the experiments and FEM simulations. The crater's depth is also obtained for different impact angles and abrasive particle velocities.

Keywords: Abrasive water jet machining, Adaptive mesh control, Explicit finite element analysis, Single-particle impact.

8434 Empirical Mode Decomposition Based Multiscale Analysis of Physiological Signal

Authors: Young-Seok Choi

Abstract:

We present a refined multiscale Shannon entropy for analyzing the electroencephalogram (EEG), which reflects the underlying dynamics of the EEG over multiple scales. The rationale behind this method is that neurological signals such as the EEG possess distinct dynamics over different spectral modes. To deal with the nonlinear and nonstationary nature of the EEG, the recently developed empirical mode decomposition (EMD) is incorporated, allowing a decomposition of the EEG into its inherent spectral components, referred to as intrinsic mode functions (IMFs). Calculating the Shannon entropy of the IMFs in a time-dependent manner and summing them over adaptive multiple scales yields an adaptive subscale entropy measure of the EEG. Simulation and experimental results show that the proposed entropy properly reveals the dynamical changes over multiple scales.
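
A sketch of the general pipeline, decompose with EMD and then sum windowed Shannon entropies across the IMF scales, is given below; it assumes the PyEMD package and a particular histogram-based entropy estimate, neither of which is necessarily the authors' exact choice.

```python
# Sketch of the general pipeline (not the paper's exact entropy definition):
# decompose a signal into IMFs with EMD, then compute a windowed Shannon entropy
# for each IMF and sum across the IMF scales. Assumes the PyEMD package
# (pip install EMD-signal) is available.
import numpy as np
from PyEMD import EMD

def shannon_entropy(window: np.ndarray, bins: int = 16) -> float:
    hist, _ = np.histogram(window, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def subscale_entropy(signal: np.ndarray, win: int = 128, step: int = 64):
    imfs = EMD()(signal)                       # intrinsic mode functions (IMFs)
    starts = range(0, len(signal) - win + 1, step)
    # entropy per time window, summed over the IMF scales
    return [sum(shannon_entropy(imf[s:s + win]) for imf in imfs) for s in starts]

if __name__ == "__main__":
    t = np.linspace(0, 4, 1024)
    eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t) \
               + 0.2 * np.random.randn(t.size)
    print(subscale_entropy(eeg_like)[:5])
```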

Keywords: EEG, subscale entropy, Empirical mode decomposition, Intrinsic mode function.

8433 Fitness Action Recognition Based on MediaPipe

Authors: Zixuan Xu, Yichun Lou, Yang Song, Zihuai Lin

Abstract:

MediaPipe is an open-source machine learning computer vision framework that can be ported to multiple platforms, which makes it easier to use for recognizing human activity. Many human recognition systems have been built on this framework, but the fundamental issue remains the recognition of human behavior and posture. In this paper, two methods are proposed to recognize human gestures based on MediaPipe: the first uses the Adaptive Boosting algorithm to recognize a series of fitness gestures, and the second uses the Fast Dynamic Time Warping algorithm to recognize 413 continuous fitness actions. Both methods are also applicable to other human posture and movement recognition tasks.
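
The Adaptive Boosting stage can be sketched with scikit-learn as below, assuming pose landmark vectors (33 MediaPipe pose landmarks, three coordinates each) have already been extracted and labeled; the random placeholder data and class names are illustrative only.

```python
# Illustrative sketch of the AdaBoost stage only: classify fitness poses from
# flattened MediaPipe landmark coordinates (33 pose landmarks x 3 values = 99
# features per frame). The feature extraction and labels are assumed to be
# prepared beforehand; random data stands in for them here.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

N_FRAMES, N_FEATURES = 600, 33 * 3            # MediaPipe Pose gives 33 landmarks

# Placeholder data: replace with landmark vectors exported from MediaPipe Pose.
X = np.random.rand(N_FRAMES, N_FEATURES)
y = np.random.randint(0, 4, size=N_FRAMES)    # e.g. squat / lunge / plank / rest

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy on placeholder data:", clf.score(X_te, y_te))
```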

Keywords: Computer Vision, MediaPipe, Adaptive Boosting, Fast Dynamic Time Warping.

8432 Application of a SubIval Numerical Solver for Fractional Circuits

Authors: Marcin Sowa

Abstract:

The paper discusses the subinterval-based numerical method for fractional derivative computations, now referred to by its acronym SubIval. The basis of the method is briefly recalled, its applicability in time-stepping solvers is discussed, and the possibility of implementing a solver with an adaptive time step size is also mentioned. The solver is tested on a transient circuit example. To assess its accuracy, the results are compared with those obtained by means of a semi-analytical method called gcdAlpha. The adaptive time step solver applying SubIval proves to be very accurate, as the results are very close to the reference solution. The solver is currently able to solve FDEs (fractional differential equations) with various derivative orders for each equation and any type of source time function.
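
SubIval itself is not reproduced here, but the classic Grünwald-Letnikov approximation below illustrates the kind of fractional-derivative evaluation such a solver has to perform at every time step.

```python
# For context only: SubIval is not reproduced here. This is the classic
# Grünwald-Letnikov approximation of a fractional derivative of order alpha,
# illustrating what a fractional-derivative solver has to evaluate.
import numpy as np

def gl_fractional_derivative(f: np.ndarray, alpha: float, h: float) -> np.ndarray:
    """Grünwald-Letnikov estimate of D^alpha f on a uniform grid with step h."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                      # w_k = w_{k-1} * (1 - (alpha+1)/k)
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    d = np.zeros(n)
    for i in range(n):
        d[i] = np.dot(w[:i + 1], f[i::-1]) / h ** alpha   # sum_k w_k f[i-k]
    return d

if __name__ == "__main__":
    t = np.linspace(0, 1, 101)
    print(gl_fractional_derivative(t, alpha=0.5, h=t[1] - t[0])[-1])
    # D^0.5 of f(t)=t at t=1 is 2/sqrt(pi) ~= 1.128, for a quick sanity check
```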

Keywords: Numerical method, SubIval, fractional calculus, numerical solver, circuit analysis.

8431 Efficient CT Image Volume Rendering for Diagnosis

Authors: HaeNa Lee, Sun K. Yoo

Abstract:

Volume rendering is widely used in medical CT image visualization. Applying 3D image visualization to diagnostic applications can require accurate volume rendering with high resolution. Interpolation is important in medical image processing applications such as image compression or volume resampling. However, it can distort the original image data through edge blurring or blocking effects when image enhancement procedures are applied. In this paper, we propose an adaptive tension control method that exploits gradient information to achieve high-resolution medical image enhancement in volume visualization, keeping the restored images as similar to the original images as possible. The experimental results show that the proposed adaptive tension control improves image quality.

Keywords: Tension control, Interpolation, Ray-casting, Medical imaging analysis.

8430 System Identification with General Dynamic Neural Networks and Network Pruning

Authors: Christian Endisch, Christoph Hackl, Dierk Schröder

Abstract:

This paper presents an exact pruning algorithm with an adaptive pruning interval for general dynamic neural networks (GDNN). GDNNs are artificial neural networks with internal dynamics: all layers have feedback connections with time delays to the same and to all other layers. The structure of the plant is unknown, so the identification process is started with a larger network architecture than necessary. During parameter optimization with the Levenberg-Marquardt (LM) algorithm, irrelevant weights of the dynamic neural network are deleted in order to find as simple a model of the plant as possible. The weights to be pruned are found by direct evaluation of the training data within a sliding time window. The influence of pruning on the identification system depends on the network architecture at pruning time and on the selected weight to be deleted. As the architecture of the model changes drastically during the identification and pruning process, it is suggested to adapt the pruning interval online. Two system identification examples show the architecture selection ability of the proposed pruning approach.

Keywords: System identification, dynamic neural network, recurrent neural network, GDNN, optimization, Levenberg-Marquardt, real-time recurrent learning, network pruning, quasi-online learning.

8429 ANFIS Approach for Locating Faults in Underground Cables

Authors: Magdy B. Eteiba, Wael Ismael Wahba, Shimaa Barakat

Abstract:

This paper presents a fault identification, classification and fault location estimation method based on the Discrete Wavelet Transform and the Adaptive Network Fuzzy Inference System (ANFIS) for medium-voltage cables in the distribution system.

Different faults and locations are simulated with ATP/EMTP, and then selected features of the wavelet-transformed signals are used as inputs for training the ANFIS. An accurate fault classifier and locator algorithm was then designed, trained and tested using current samples only. The results obtained from the ANFIS output were compared with the real output, and the percentage error between them was found to be less than three percent. Hence, it can be concluded that the proposed technique offers high accuracy in both fault classification and fault location.
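
The wavelet feature-extraction step (not the ANFIS training itself) can be sketched as follows; the wavelet family, decomposition level and energy features are illustrative choices rather than the paper's selected features.

```python
# Sketch of the wavelet feature-extraction step only: decompose a fault current
# signal with a discrete wavelet transform and take the energy of each detail
# level as a feature vector for a classifier/locator. Illustrative choices only.
import numpy as np
import pywt

def wavelet_energy_features(current: np.ndarray, wavelet="db4", level=4):
    coeffs = pywt.wavedec(current, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail-band energies

if __name__ == "__main__":
    t = np.linspace(0, 0.1, 2000)
    fault_current = np.sin(2 * np.pi * 50 * t)
    fault_current[1000:] += 5 * np.exp(-200 * (t[1000:] - t[1000]))  # fault transient
    print(wavelet_energy_features(fault_current))
```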

Keywords: ANFIS, Fault location, Underground Cable, Wavelet Transform.

8428 Design of FIR Filter for Water Level Detection

Authors: Sakol Udomsiri, Masahiro Iwahashi

Abstract:

This paper proposes a new spatial FIR filter design to automatically detect the water level from video signals of various river surroundings. The approach applies frame addition and a horizontal edge detector to distinguish the water region from the land region. The variance of each line of a filtered video frame is used as a feature value, and the water level is recognized as the boundary line between the land region and the water region. An edge detection filter essentially demarcates two distinctly different regions; however, conventional filters do not adapt automatically to the varying lighting conditions of river scenery. An optimized filter is therefore proposed so that the system becomes robust to changes in lighting condition. The reliability of the proposed system with the optimized filter is confirmed by the accuracy of its water level detection.
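
A rough numpy sketch of the described idea, frame addition, a horizontal edge detector and per-line variance as the feature, is shown below; the simple fixed gradient kernel stands in for the paper's optimized FIR filter.

```python
# Rough sketch: average ("add") several frames, apply a horizontal-edge
# (vertical gradient) filter, use per-row variance as the feature, and take the
# strongest variance transition as the water line. The fixed kernel below is a
# stand-in for the paper's optimized FIR filter.
import numpy as np

def detect_water_line(frames: np.ndarray) -> int:
    """frames: (n_frames, height, width) grayscale stack; returns a row index."""
    avg = frames.mean(axis=0)                          # frame addition / averaging
    horiz_edges = np.abs(np.diff(avg, axis=0))         # emphasises horizontal edges
    row_var = horiz_edges.var(axis=1)                  # one feature value per line
    # water line = row where the feature changes most between land and water
    return int(np.argmax(np.abs(np.diff(row_var)))) + 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(0.5, 0.02, size=(10, 120, 160))
    frames[:, 70:, :] += rng.normal(0.0, 0.2, size=(10, 50, 160))  # "water" texture
    print("estimated water line at row", detect_water_line(frames))
```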

Keywords: water level, video, filter, detection.

8427 FPGA Implementation of Adaptive Clock Recovery for TDMoIP Systems

Authors: Semih Demir, Anil Celebi

Abstract:

Circuit-switched networks, widely used until the end of the 20th century, have been transformed into packet-switched networks. Time Division Multiplexing over Internet Protocol (TDMoIP) is a system that enables Time Division Multiplexing (TDM) traffic to be carried over packet-switched networks (PSN). In TDMoIP systems, devices that send TDM data to the PSN and receive it from the network must operate at the same clock frequency. This study aims to implement the clock synchronization process in Field Programmable Gate Array (FPGA) chips using timing information attached to the packets received from the PSN. The designed hardware is verified using datasets obtained for different carrier types and by comparing the results with the software model. Field tests are also performed using a real-time TDMoIP system.

Keywords: Clock recovery on TDMoIP, FPGA, MATLAB reference model, clock synchronization.

8426 An AK-Chart for the Non-Normal Data

Authors: Chia-Hau Liu, Tai-Yue Wang

Abstract:

Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold, or may be difficult to verify, because in practice not all measurements from manufacturing processes are normally distributed. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism based on integrating a one-class classification method with an adaptive technique. The adaptive technique is used to improve the sensitivity of one-class classification to small shifts in statistical process control. In addition, this design provides an easy way to allocate the type I error, so it is easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control chart.

Keywords: Multivariate control chart, statistical process control, one-class classification method.

8425 Modeling and Analysis of Adaptive Buffer Sharing Scheme for Consecutive Packet Loss Reduction in Broadband Networks

Authors: Sakshi Kausha, R.K Sharma

Abstract:

High-speed networks provide real-time variable bit rate services with diversified traffic flow characteristics and quality requirements. Variable bit rate traffic has stringent delay and packet loss requirements, and the burstiness of the correlated traffic makes dynamic buffer management highly desirable to satisfy the Quality of Service (QoS) requirements. This paper presents an algorithm for optimizing an adaptive buffer allocation scheme based on consecutive packet loss in the data stream and on the buffer occupancy level. The buffer is designed to partition the input traffic into different priority classes and to control the thresholds dynamically based on the behavior of the input traffic. The algorithm admits an input packet into the buffer only if the occupancy level is less than the threshold for that packet's priority, and the threshold is varied at runtime based on the packet loss behavior. The simulation is run for two priority classes of the input traffic, real-time and non-real-time. The simulation results show that Adaptive Partial Buffer Sharing (ADPBS) performs better than Static Partial Buffer Sharing (SPBS) and a First In First Out (FIFO) queue under the same traffic conditions.
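
A simplified sketch of the admission rule, admit a packet only while occupancy is below its priority's threshold and nudge the threshold after consecutive losses, is given below; the adaptation step and the loss limit are assumed values, not the paper's tuning.

```python
# Simplified sketch of adaptive partial buffer sharing (ADPBS): a packet of a
# given priority is admitted only while buffer occupancy is below that
# priority's threshold, and thresholds are nudged when consecutive losses of a
# class are observed. The adaptation step and loss limit are illustrative.
class AdaptiveBufferSharing:
    def __init__(self, capacity=100, thresholds=None, step=2, loss_limit=3):
        self.capacity = capacity
        self.thresholds = thresholds or {"realtime": 100, "non-realtime": 70}
        self.queue = []
        self.consecutive_losses = {cls: 0 for cls in self.thresholds}
        self.step, self.loss_limit = step, loss_limit

    def enqueue(self, packet_id, priority) -> bool:
        if len(self.queue) < self.thresholds[priority]:
            self.queue.append((packet_id, priority))
            self.consecutive_losses[priority] = 0
            return True
        self.consecutive_losses[priority] += 1
        if self.consecutive_losses[priority] >= self.loss_limit:
            # too many back-to-back losses: relax this class's threshold a little
            self.thresholds[priority] = min(self.capacity,
                                            self.thresholds[priority] + self.step)
            self.consecutive_losses[priority] = 0
        return False

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None

if __name__ == "__main__":
    buf = AdaptiveBufferSharing(capacity=10, thresholds={"realtime": 10,
                                                         "non-realtime": 6})
    for i in range(15):
        buf.enqueue(i, "non-realtime")
    print(len(buf.queue), buf.thresholds)
```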

Keywords: Buffer Management, Consecutive packet loss, Quality-of-Service, Priority-based packet discarding, Partial buffer sharing.

8424 Design of QFT-Based Self-Tuning Deadbeat Controller

Authors: H. Mansor, S. B. Mohd Noor

Abstract:

This paper presents a design method for a self-tuning Quantitative Feedback Theory (QFT) controller using an improved deadbeat control algorithm. QFT is a technique for achieving robust control with pre-defined specifications, whereas deadbeat is an algorithm that brings the output to steady state in a minimum number of steps. Nevertheless, there are usually large peaks in the deadbeat response; by integrating QFT specifications into the deadbeat algorithm, these large peaks can be tolerated. In addition, combining QFT with an adaptive element produces a robust controller with wider coverage of uncertainty. By combining the QFT-based deadbeat algorithm with an adaptive element, a superior controller, called the self-tuning QFT-based deadbeat controller, can be achieved, and an output response that is fast, robust and adaptive is expected. Using a grain dryer plant model as a pilot case study, the performance of the proposed method has been evaluated and analyzed. The grain drying process is very complex, with highly nonlinear behaviour and long delays, and it is affected by environmental changes and disturbances. Performance comparisons have been made between the proposed self-tuning QFT-based deadbeat controller and standard QFT and standard deadbeat controllers. The test results prove the efficiency of the self-tuning QFT-based deadbeat controller: its parameters are updated online, and it shows a smaller percentage of overshoot and a shorter settling time, especially when there are variations in the plant.

Keywords: Deadbeat control, quantitative feedback theory (QFT), robust control, self-tuning control.

8423 Variable Regularization Parameter Normalized Least Mean Square Adaptive Filter

Authors: Young-Seok Choi

Abstract:

We present a normalized LMS (NLMS) algorithm with robust regularization. Unlike conventional NLMS with a fixed regularization parameter, the proposed approach dynamically updates the regularization parameter. By exploiting a gradient descent direction, we derive a computationally efficient and robust update scheme for the regularization parameter. In simulations, we demonstrate that the proposed algorithm outperforms conventional NLMS algorithms in terms of convergence rate and misadjustment error.
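
One plausible form of such an update (not necessarily the authors' exact derivation) adapts the regularization parameter with a gradient-descent step computed from consecutive errors and regressors, as sketched below; all step sizes and limits are assumed values.

```python
# NLMS system identification with a regularization parameter eps that is itself
# adapted by a gradient-descent-style rule. This is one plausible variant for
# illustration, not necessarily the authors' exact update.
import numpy as np

def vr_nlms(x, d, order=8, mu=0.5, rho=1e-3, eps0=1e-1, eps_min=1e-6):
    w = np.zeros(order)
    eps = eps0
    x_prev, e_prev, norm_prev = np.zeros(order), 0.0, 1.0
    errors = []
    for n in range(order, len(x)):
        xn = x[n - order + 1:n + 1][::-1]          # current input regressor
        e = d[n] - w @ xn
        # gradient of e(n)^2 w.r.t. eps, propagated through the previous update
        grad = 2.0 * mu * e * e_prev * (xn @ x_prev) / (eps + norm_prev) ** 2
        eps = max(eps_min, eps - rho * grad)       # gradient-descent step on eps
        norm = xn @ xn
        w = w + mu * e * xn / (eps + norm)         # regularized NLMS update
        x_prev, e_prev, norm_prev = xn, e, norm
        errors.append(e ** 2)
    return w, errors

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    h = rng.standard_normal(8)                     # unknown system
    x = rng.standard_normal(5000)
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w, err = vr_nlms(x, d)
    print("final MSE:", np.mean(err[-500:]))
```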

Keywords: Regularization, normalized LMS, system identification, robustness.

8422 Adaptive Gaussian Mixture Model for Skin Color Segmentation

Authors: Reza Hassanpour, Asadollah Shahbahrami, Stephan Wong

Abstract:

Skin color based tracking techniques often assume a static skin color model obtained either from an offline set of library images or from the first few frames of a video stream. These models can perform poorly in the presence of changing lighting or imaging conditions. We propose an adaptive skin color model based on the Gaussian mixture model to handle changing conditions. Initial estimates of the number and weights of the skin color clusters are obtained using a modified form of the general expectation-maximization algorithm. The model adapts to changes in imaging conditions and refines its parameters dynamically using spatial and temporal constraints. Experimental results show that the method can be used to track hand and face regions effectively.
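
The adaptive GMM idea can be sketched with scikit-learn as below: fit on initial skin samples, then periodically refit on pixels the current model still accepts; the component count, likelihood threshold and refit rule are illustrative assumptions rather than the paper's modified EM procedure.

```python
# Sketch of the adaptive Gaussian mixture idea using scikit-learn: fit a GMM to
# skin-colored pixels from initial frames, then periodically refit on pixels the
# current model accepts, so the model follows lighting changes. Thresholds and
# component count are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

class AdaptiveSkinGMM:
    def __init__(self, n_components=3, log_lik_thresh=-15.0):
        self.gmm = GaussianMixture(n_components=n_components, covariance_type="full")
        self.thresh = log_lik_thresh

    def fit(self, skin_pixels_rgb: np.ndarray):
        self.gmm.fit(skin_pixels_rgb.reshape(-1, 3))

    def segment(self, frame_rgb: np.ndarray) -> np.ndarray:
        scores = self.gmm.score_samples(frame_rgb.reshape(-1, 3))
        return (scores > self.thresh).reshape(frame_rgb.shape[:2])

    def adapt(self, frame_rgb: np.ndarray):
        """Refit on pixels the current model still accepts (temporal constraint)."""
        accepted = frame_rgb[self.segment(frame_rgb)]
        if len(accepted) > 100:                 # only adapt with enough evidence
            self.gmm.fit(accepted.reshape(-1, 3))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    skin = rng.normal([200, 150, 130], 10, size=(500, 3))   # toy skin samples
    model = AdaptiveSkinGMM()
    model.fit(skin)
    frame = rng.integers(0, 255, size=(48, 48, 3)).astype(float)
    frame[5:25, 5:25] = rng.normal([195, 148, 128], 8, size=(20, 20, 3))
    model.adapt(frame)
    print(model.segment(frame)[5:25, 5:25].mean())           # mostly skin-labelled
```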

Keywords: Face detection, Segmentation, Tracking, Gaussian Mixture Model, Adaptation.

8421 Simulation of Online Communities Using MAS Social and Spatial Organisations

Authors: Maya Rupert, Salima Hassas, Carlos Li, John Sherwood

Abstract:

Online communities are an example of socially aware, self-organising, complex adaptive computing systems. The multi-agent systems (MAS) paradigm coordinated by self-organisation mechanisms has been used as an effective way to simulate and model such systems. In this paper, we propose a model for simulating an online health community using a situated multi-agent system approach, governed by the co-evolution of the social and spatial organisations of the agents.

Keywords: multi-agent systems, organizations, online communities.

8420 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely the regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. They are therefore useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number average molecular weight and gravimetric average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.

Keywords: Adaptive sampling, batch bulk methyl methacrylate polymerization, large margin nearest neighbor regression, machine learning.

8419 Advanced Travel Information System in Heterogeneous Networks

Authors: Hsu-Yung Cheng, Victor Gau, Chih-Wei Huang, Jenq-Neng Hwang, Chih-Chang Yu

Abstract:

In order to achieve better road utilization and traffic efficiency, there is an urgent need for a travel information delivery mechanism to assist drivers in making better decisions in emerging intelligent transportation system applications. In this paper, we propose a relayed multicast scheme under heterogeneous networks for this purpose. In the proposed system, travel information consisting of summarized traffic conditions, important events, real-time traffic videos, and local information service contents is formed into layers and multicast through an integration of WiMAX infrastructure and Vehicular Ad hoc Networks (VANET). With the support of adaptive modulation and coding in WiMAX, the radio resources can be optimally allocated when performing multicast so as to dynamically adjust the number of data layers received by the users. In addition to the multicast supported by WiMAX, a knowledge propagation and information relay scheme based on VANET is designed. The experimental results validate the feasibility and effectiveness of the proposed scheme.

Keywords: Intelligent Transportation Systems, Relayed Multicast, WiMAX, Vehicular Ad hoc Networks (VANET).

8418 Post-Compression Consideration in Video Watermarking for Wireless Communication

Authors: Chuen-Ching Wang, Yao-Tang Chang, Yu-Chang Hsu

Abstract:

A simple but effective digital watermarking scheme utilizing a context adaptive variable length coding (CAVLC) method is presented for wireless communication systems. In the proposed approach, the watermark bits are embedded in the final non-zero quantized coefficient of each DCT block, thereby yielding a potential reduction in the length of the coded block. As a result, the watermarking scheme not only provides the means to check the authenticity and integrity of the video stream, but also improves the compression ratio and therefore reduces both the transmission time and the storage space requirements of the coded video sequence. The results confirm that the proposed scheme enables the detection of malicious tampering attacks and reduces the size of the coded H.264 file. The scheme is therefore feasible for wireless video applications such as 3G systems.
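
A toy sketch of the embedding rule, hiding one bit in the parity of the last non-zero quantized coefficient of a block, is shown below; real H.264/CAVLC bitstream handling is omitted and plain integer blocks stand in for the quantized DCT blocks.

```python
# Toy sketch of the embedding rule described: hide one watermark bit in the last
# non-zero quantized coefficient of each transform block by forcing its parity.
# Real H.264/CAVLC bitstream handling is omitted; plain 4x4 integer blocks stand
# in for the quantized DCT blocks.
import numpy as np

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    """block: 4x4 int array of quantized coefficients (zig-zag or raster order)."""
    out = block.copy().ravel()
    nz = np.nonzero(out)[0]
    if nz.size == 0:
        return out.reshape(block.shape)        # nothing to embed in
    i = nz[-1]                                 # index of the last non-zero coeff
    if (abs(out[i]) % 2) != bit:
        # increase the magnitude by one so the parity carries the bit and the
        # coefficient stays non-zero
        out[i] += 1 if out[i] > 0 else -1
    return out.reshape(block.shape)

def extract_bit(block: np.ndarray) -> int:
    nz = np.nonzero(block.ravel())[0]
    return int(abs(block.ravel()[nz[-1]]) % 2) if nz.size else 0

if __name__ == "__main__":
    blk = np.array([[12, 5, 0, 0], [3, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])
    for b in (0, 1):
        print(b, extract_bit(embed_bit(blk, b)))
```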

Keywords: 3G, wireless communication, CAVLC, digital watermarking, motion compensation.

8417 Wavelet-Based Despeckling of Synthetic Aperture Radar Images Using Adaptive and Mean Filters

Authors: Syed Musharaf Ali, Muhammad Younus Javed, Naveed Sarfraz Khattak

Abstract:

In this paper we introduce a new wavelet-based algorithm for speckle reduction of synthetic aperture radar images, which uses a combination of the undecimated wavelet transform, a Wiener filter (an adaptive filter) and a mean filter. Furthermore, instead of using existing thresholding techniques such as SURE shrinkage, Bayesian shrinkage, universal thresholding, normal thresholding, visu thresholding, and soft and hard thresholding, we use brute-force thresholding, which runs the whole algorithm for each candidate threshold value, saves each result in an array, and finally selects the threshold value that gives the best results. It is therefore slow compared to existing thresholding techniques, but it gives the best results under the given speckle reduction algorithm.

Keywords: Brute force thresholding, directional smoothing, direction dependent mask, undecimated wavelet transformation.

8416 Coordinated Multi-Point Scheme Based On Channel State Information in MIMO-OFDM System

Authors: Su-Hyun Jung, Chang-Bin Ha, Hyoung-Kyu Song

Abstract:

Recently, increasing the quality of experience (QoE) has become an important issue. Since performance degradation at the cell edge severely reduces the QoE, several techniques are defined in the LTE/LTE-A standards to remove inter-cell interference (ICI). However, the conventional techniques have a disadvantage: there is a trade-off between resource allocation and reliable communication. The proposed scheme reduces the ICI more efficiently by using channel state information (CSI) intelligently, and it is shown that it can reduce the ICI with fewer resources.

Keywords: Adaptive beam forming, CoMP, LTE-A, ICI reduction.

8415 Virtual Learning Environments in Spanish Traditional Universities

Authors: Leire Urcola, Amaia Altuzarra

Abstract:

This communication is intended to offer some points for reflection on the importance of implementing blended learning in traditional universities, particularly in the Spanish university system. In this respect, we believe that virtual environments are likely to meet some of the needs raised by the Bologna agreement, maintaining the quality of teaching while taking advantage of the functionalities that virtual learning platforms offer. We are aware that an open, constructivist approach to learning in universities is a complex process that faces significant technological, administrative and human barriers. Therefore, in order to put such plans into practice in our universities, it is necessary to analyze the state of the art of some indicators relating to the use of ICT, with special attention to virtual teaching and learning, so that we can identify the main obstacles and design adaptive strategies for their full integration into the education system. Finally, we present major initiatives launched within the European and state frameworks for the effective implementation of new virtual environments in higher education.

Keywords: Blended learning, e-Learning, ICT, Virtual Learning Environments.

8414 A Diffusion Least-Mean Square Algorithm for Distributed Estimation over Sensor Networks

Authors: Amir Rastegarnia, Mohammad Ali Tinati, Azam Khalili

Abstract:

In this paper we consider the issue of distributed adaptive estimation over sensor networks. To deal with a more realistic scenario, a different observation noise variance is assumed for each sensor in the network. To handle these differing noise variances, the proposed method is divided into two phases: (I) estimating each sensor's observation noise variance, and (II) using the estimated variances to obtain the desired parameter. Our proposed algorithm is based on a diffusion least mean square (LMS) implementation with a linear combiner model, in which the step-size parameter and the coefficients of the linear combiner are adjusted according to the estimated observation noise variances. As the simulation results show, the proposed algorithm considerably improves on the diffusion LMS algorithm given in the literature.
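
A compact sketch of the adapt-then-combine diffusion LMS idea is given below, with step sizes and combiner weights scaled by each node's noise variance (here assumed known, whereas the abstract's first phase estimates them); the inverse-variance weighting and ring topology are illustrative choices.

```python
# Compact sketch of diffusion LMS (adapt-then-combine) over a small network in
# which each node's step size and combiner weights are scaled by its observation
# noise variance. The variances are assumed known here (the paper estimates
# them); inverse-variance weighting is an illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
N_NODES, ORDER, N_ITER = 6, 4, 3000
w_true = rng.standard_normal(ORDER)
noise_var = rng.uniform(0.01, 0.5, N_NODES)        # different variance per sensor
# ring topology: each node cooperates with itself and its two neighbours
neighbors = [[(k - 1) % N_NODES, k, (k + 1) % N_NODES] for k in range(N_NODES)]

W = np.zeros((N_NODES, ORDER))                     # local estimates
for _ in range(N_ITER):
    X = rng.standard_normal((N_NODES, ORDER))
    d = X @ w_true + rng.standard_normal(N_NODES) * np.sqrt(noise_var)
    # adapt step: noisier sensors take smaller steps
    mu = 0.05 / (1.0 + noise_var)
    psi = W + (mu * (d - np.einsum('ki,ki->k', X, W)))[:, None] * X
    # combine step: inverse-variance weights over each neighbourhood
    W_new = np.zeros_like(W)
    for k, nbrs in enumerate(neighbors):
        a = 1.0 / noise_var[nbrs]
        a /= a.sum()
        W_new[k] = a @ psi[nbrs]
    W = W_new

print("mean estimation error:", np.mean(np.linalg.norm(W - w_true, axis=1)))
```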

Keywords: Adaptive filter, distributed estimation, sensor network, diffusion.

8413 Noise Removal from Surface Respiratory EMG Signal

Authors: Slim Yacoub, Kosai Raoof

Abstract:

The aim of this study was to remove the two principal noise sources that disturb the surface electromyography signal of the diaphragm: the electrocardiogram (ECG) artefact and the power line interference artefact. The proposed algorithm is based on a new Least Mean Square (LMS) Widrow adaptive structure. Such structures require a reference signal that is correlated with the noise contaminating the signal. The noise references are extracted as follows: for the power line interference, a reference is mathematically constructed from two cosine functions, a 50 Hz (fundamental) component and a 150 Hz (harmonic) component; for the ECG artefact, a matching pursuit technique combined with an LMS structure is used for the estimation. Both removal procedures are attained without the use of supplementary electrodes. These filtering techniques are validated on real recordings of the surface diaphragm electromyography signal, and the performance of the proposed methods is compared with previously published results.
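
The power-line cancellation stage can be sketched as a Widrow-style LMS noise canceller with references built at 50 Hz and 150 Hz, as below; sine components are added alongside the cosines so the single-tap canceller can match the unknown phase, and the amplitudes, phases and step size are illustrative (the matching-pursuit ECG stage is not shown).

```python
# Sketch of the power-line cancellation stage only: a Widrow-style LMS adaptive
# noise canceller with a reference built from 50 Hz and 150 Hz components.
# Sine columns are added so the canceller can match the unknown phase; all
# amplitudes, phases and the step size are illustrative.
import numpy as np

def lms_cancel_powerline(emg: np.ndarray, fs: float, mu: float = 0.01):
    n = np.arange(len(emg))
    # reference components at 50 Hz and 150 Hz (cosine and sine for phase match)
    ref = np.column_stack([np.cos(2 * np.pi * 50 * n / fs),
                           np.cos(2 * np.pi * 150 * n / fs),
                           np.sin(2 * np.pi * 50 * n / fs),
                           np.sin(2 * np.pi * 150 * n / fs)])
    w = np.zeros(ref.shape[1])
    clean = np.empty_like(emg)
    for i in range(len(emg)):
        y = w @ ref[i]                       # estimated interference sample
        e = emg[i] - y                       # canceller output = cleaned EMG
        w = w + 2 * mu * e * ref[i]          # Widrow-Hoff LMS weight update
        clean[i] = e
    return clean

if __name__ == "__main__":
    fs = 1000.0
    t = np.arange(0, 2, 1 / fs)
    emg = 0.3 * np.random.randn(t.size)                       # broadband EMG-like
    interference = 1.0 * np.cos(2 * np.pi * 50 * t + 0.7) \
                   + 0.4 * np.cos(2 * np.pi * 150 * t + 1.2)
    out = lms_cancel_powerline(emg + interference, fs)
    print("power before:", np.var(emg + interference), "after:", np.var(out))
```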

Keywords: Surface EMG, Adaptive, Matching Pursuit, Powerline interference.

8412 Image Magnification Using Adaptive Interpolation by Pixel Level Data-Dependent Geometrical Shapes

Authors: Muhammad Sajjad, Naveed Khattak, Noman Jafri

Abstract:

The world has entered the 21st century. Computer graphics and digital camera technology are prevalent, and high-resolution displays and printers are available; therefore, high-resolution images are needed to produce high-quality display images and high-quality prints. However, since high-resolution images are not usually provided, there is a need to magnify the original images. One common difficulty in previous magnification techniques is preserving details, i.e. edges, while at the same time smoothing the data so as not to introduce spurious artefacts; a definitive solution to this is still an open issue. In this paper, an image magnification method using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed that takes into account information about edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside the interpolation region while preserving the sharp luminance variations and smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly outperforms nearest neighbour (NN), bilinear (BL) and bicubic (BC) interpolation, while the quantitative results are competitive and consistent with NN, BL, BC and the others.

Keywords: Adaptive, digital image processing, image magnification, interpolation, geometrical shapes, qualitative & quantitative analysis.
