Search results for: Dual BCH Codes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 526


346 Fragile Watermarking for Color Images Using Thresholding Technique

Authors: Kuo-Cheng Liu

Abstract:

In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. The thresholding technique is incorporated into the scheme so that the tampered region of the color image can be recovered with high quality while the proofing result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information. The feature information used for recovery is computed by the thresholding technique. In the proofing process, we propose a dual-option parity check method to prove the validity of image blocks. In the recovery process, the feature information of each block embedded into the color image is rebuilt for high-quality recovery. The simulation results show that the proposed watermarking scheme can effectively detect the tampered region with a high detection rate and can recover the tampered region with high quality.
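
The dual-option parity check itself is specific to the paper and is not detailed in the abstract; purely as an illustration of the general idea, the sketch below embeds two parity bits per 8x8 block into reserved least-significant bits and flags blocks whose recomputed parity no longer matches. The block size, the checkerboard masks and the reserved pixel positions are assumptions, and detection is probabilistic per block, as with any parity check.

import numpy as np

def block_parity(blk, mask):
    # Parity of the upper 7 bits of the pixels selected by the mask (LSBs are ignored
    # so that embedding into LSBs does not change the parity itself).
    msb = (blk >> 1) & 0x7F
    return int(msb[mask].sum() % 2)

def embed_parity(image, block=8):
    """Embed two parity bits per block into the LSBs of two reserved pixels."""
    img = image.copy()
    h, w = img.shape
    mask_a = np.indices((block, block)).sum(axis=0) % 2 == 0   # checkerboard A
    mask_b = ~mask_a                                           # checkerboard B
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = img[y:y+block, x:x+block]
            pa, pb = block_parity(blk, mask_a), block_parity(blk, mask_b)
            blk[0, 0] = (blk[0, 0] & 0xFE) | pa    # reserved LSB carries parity A
            blk[0, 1] = (blk[0, 1] & 0xFE) | pb    # reserved LSB carries parity B
    return img

def detect_tamper(image, block=8):
    """Return a boolean map flagging blocks whose embedded parity no longer matches."""
    h, w = image.shape
    mask_a = np.indices((block, block)).sum(axis=0) % 2 == 0
    mask_b = ~mask_a
    flagged = np.zeros((h // block, w // block), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = image[y:y+block, x:x+block]
            pa, pb = block_parity(blk, mask_a), block_parity(blk, mask_b)
            flagged[y // block, x // block] = (blk[0, 0] & 1) != pa or (blk[0, 1] & 1) != pb
    return flagged

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
wm = embed_parity(img)
wm[16:48, 16:48] = 255                       # overwrite 16 blocks to simulate tampering
print(detect_tamper(wm).sum(), "of 16 overwritten blocks flagged")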

Keywords: thresholding technique, tamper proofing, tamper recovery

345 Dual-Link Hierarchical Cluster-Based Interconnect Architecture for 3D Network on Chip

Authors: Guang Sun, Yong Li, Yuanyuan Zhang, Shijun Lin, Li Su, Depeng Jin, Lieguang Zeng

Abstract:

Network on Chip (NoC) has emerged as a promising on-chip communication infrastructure. Three-Dimensional Integrated Circuits (3D ICs) provide short interconnection lengths between layers and interconnect scalability in the third dimension, which can further improve the performance of NoC. Therefore, in this paper, a hierarchical cluster-based interconnect architecture is merged with the 3D IC. This interconnect architecture significantly reduces the number of long wires. Since this architecture has only approximately a quarter of the routers of a 3D mesh-based architecture, the average number of hops is smaller, which leads to lower latency and higher throughput. Moreover, the smaller number of routers decreases the area overhead. Meanwhile, dual links are inserted at communication bottlenecks to improve the performance of the NoC. Simulation results confirm our theoretical analysis and show the advantages of the proposed architecture in latency, throughput and area when compared with the 3D mesh-based architecture.

Keywords: Network on Chip (NoC), interconnect architecture, performance, area, Three-Dimensional Integrated Circuit (3D IC).

344 Investigations on the Seismic Performance of Hot-Finished Hollow Steel Sections

Authors: Paola Pannuzzo, Tak-Ming Chan

Abstract:

In seismic applications, hollow steel sections show, beyond undeniable esthetical appeal, promising structural advantages since, unlike open section counterparts, they are not susceptible to weak-axis and lateral-torsional buckling. In particular, hot-finished hollow steel sections have homogeneous material properties and favorable ductility but have been underutilized for cyclic bending. The main reason is that the parameters affecting their hysteretic behavior are not yet well understood and, consequently, are not well exploited in existing codes of practice. Therefore, experimental investigations have been conducted on a wide range of hot-finished rectangular hollow section beams with the aim of providing basic knowledge for evaluating their seismic performance. The section geometry (width-to-thickness and depth-to-thickness ratios) and the type of loading (monotonic and cyclic) have been chosen as the key parameters to investigate the cyclic effect on the rotational capacity and to highlight the differences between monotonic and cyclic load conditions. The test results provide information on the parameters that affect the cyclic performance of hot-finished hollow steel beams and can be used to assess the design provisions stipulated in the current seismic codes of practice.

Keywords: Hot-finished steel, hollow sections, cyclic tests, bending.

343 Image Transmission: A Case Study on Combined Scheme of LDPC-STBC in Asynchronous Cooperative MIMO Systems

Authors: Shan Ding, Lijia Zhang, Hongming Xu

Abstract:

This paper presents a novel scheme that reduces the error rate and improves the transmission performance of asynchronous cooperative MIMO systems. A case study of image transmission is used to demonstrate the efficiency of the scheme. The linear dispersion structure is employed to accommodate the cooperative wireless communication network with a dynamic topology, as well as to achieve higher throughput than conventional space-time codes based on orthogonal designs. An LDPC encoder free of girth-4 cycles and an STBC encoder with guard intervals are introduced. The experimental results show that the combined LDPC-STBC coder with guard intervals provides good error correction and BER performance in asynchronous cooperative communication. In the image transmission case study, the image quality obtained with the combined scheme is much better than that obtained without it in the asynchronous cooperative MIMO systems.
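
The particular linear dispersion STBC and guard-interval arrangement used by the authors are not given in the abstract, and the LDPC outer code is omitted here; the sketch below uses the classical two-antenna Alamouti code with a zero guard interval appended after each space-time block, only to illustrate the transmit-side structure. The QPSK mapping, block length and guard length are assumptions.

import numpy as np

def qpsk_mod(bits):
    """Gray-mapped QPSK: pairs of bits -> unit-energy complex symbols."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def alamouti_with_guard(symbols, guard_len=2):
    """Encode symbol pairs with the 2x2 Alamouti block and append a zero guard
    interval after each block (one row per transmit antenna)."""
    s = symbols.reshape(-1, 2)
    tx1, tx2 = [], []
    for s1, s2 in s:
        # time slot 1: antennas send (s1, s2); slot 2: (-s2*, s1*)
        tx1.extend([s1, -np.conj(s2)] + [0.0] * guard_len)
        tx2.extend([s2, np.conj(s1)] + [0.0] * guard_len)
    return np.array([tx1, tx2])

bits = np.random.randint(0, 2, 16)
frames = alamouti_with_guard(qpsk_mod(bits), guard_len=2)
print(frames.shape)   # (2 antennas, 4 slots per block x 4 blocks) = (2, 16)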

Keywords: Cooperative MIMO, image transmission, linear dispersion codes, Low-Density Parity-Check (LDPC)

342 Designing of Multi-Agent Rescue Robot: Development and Basic Experiments of Master-Slave Type Rescue Robots

Authors: J. Lin, T. C. Kuo, C. -Y. Gau, K. C. Liu, Y. J. Huang, J. D. Yu, Y. W. Lin

Abstract:

A multi-agent robot for disaster response at a calamity scene is proposed in this paper. The proposed group of rescue robots can perform cooperative reconnaissance and surveillance to achieve a given rescue mission. The multi-agent rescue robot consists of one master set and three slave units. The rescue robot system is intended to explore harmful environments that humans cannot reach, such as a building infected with a virus or a factory with hazardous liquid in its effluent. As a dual-set system, the master set connects with the slave units via Bluetooth and a communication network and sends information back to a computer and monitor wirelessly, so rescuers can be informed of real-time information from the calamity area. Furthermore, each slave robot is able to avoid obstacles using ultrasonic sensors and records distance and location with the aid of a compass. The master robot can integrate the information from all devices to increase the efficiency of exploring and surveying an unknown area.

Keywords: Designing of multi-agent rescue robot, development and basic experiments of master-slave type rescue robots.

341 Capacity of Overloaded DS-CDMA System on Rayleigh Fading Channel with Timing Error

Authors: Preetam Kumar

Abstract:

The number of users supported in a DS-CDMA cellular system is typically less than the spreading factor (N), and the system is then said to be underloaded. Overloading is a technique to accommodate more users than the spreading factor N. In the O/O overloading scheme, the first set of codes is assigned to N synchronous users and the second set is assigned to the additional synchronous users. An iterative multistage soft-decision interference cancellation (SDIC) receiver is used to remove the high level of interference between the two sets. Performance is evaluated in terms of the maximum number of acceptable users such that the system performance is only slightly degraded compared to the single-user performance at a specified BER. In this paper, the capacity of the CDMA-based O/O overloading scheme is evaluated with the SDIC receiver. It is observed that the O/O scheme using orthogonal Gold codes provides 25% channel overloading (N=64) for a synchronous DS-CDMA system on an AWGN channel in the uplink at a BER of 1e-5. For a Rayleigh-faded channel, the critical capacity is 40% at a BER of 5e-5, assuming synchronous users. In practical systems, however, perfect chip timing is very difficult to maintain in the uplink. We show that the overloading performance reduces to 11% for a timing synchronization error of 0.02Tc at a BER of 1e-5.

Keywords: DS-CDMA, Interference Cancellation, Multiuser Detection, Orthogonal codes, Overloading.

340 Analysis and Design of Dual-Polarization Antennas for Wireless Communication Systems

Authors: Vladimir Veremey

Abstract:

The paper describes the design and simulation of dual-polarization antennas that use the resonance and radiating properties of the H00 mode of metal open waveguides. The proposed antennas are formed by two orthogonal slots in a finite conducting ground plane. The slots are backed by metal screens connected to the ground plane forming open waveguides. It has been shown that the antenna designs can be efficiently used in mm-wave bands. The antenna single mode operational bandwidth is higher than 10%. The antenna designs are very simple and low-cost. They allow flush installation and can be efficiently used in various communication and remote sensing devices on fast moving carriers. Mutual coupling between antennas of the proposed design is very low. Thus, multiple antenna structures with proposed antennas can be efficiently employed in multi-band and in multiple-input-multiple-output (MIMO) systems.

Keywords: Antenna, antenna arrays, multiple-input-multiple-output, MIMO, millimeter wave bands, slot antenna, flush installation, directivity, open waveguide, conformal antennas.

339 Alignment of Emission Gamma Ray Sources with NaI(Tl) Scintillation Detectors by Two Laser Beams to Pre-Operation using Alternating Minimization Technique

Authors: Abbas Ali Mahmood Karwi

Abstract:

Accurate timing alignment and stability are important to maximize the true counts and minimize the random counts in positron emission tomography. Signals output from the detectors must therefore be aligned with the two isotopes before operation and fed into four pulse-processing units, each of which can accept up to eight inputs. The dual-source computed tomography setup consists of two units on the left for the 15 detector signals of the Cs-137 isotope and two units on the right for the 15 detector signals of the Co-60 isotope. The gamma spectrum consists of either a single photo peak or multiple photo peaks. This allows energy-discrimination electronic hardware associated with the data acquisition system to acquire photon count data at a specific energy, even if detectors with poor energy resolution are used. It also helps to avoid counting Compton-scatter events, especially when a single discrete gamma photo peak is emitted by the source, as in the case of Cs-137. In this study, the polyenergetic version of the alternating minimization algorithm is applied to the dual-energy gamma computed tomography problem.

Keywords: Alignment, Spectrum, Laser, Detectors, Image

338 Analysis of Message Authentication in Turbo Coded Halftoned Images using EXIT Charts

Authors: Andhe Dharani, P. S. Satyanarayana, Andhe Pallavi

Abstract:

Considering payload, reliability, security and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest the transmission of halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, Turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks. The Turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image, and then present the method of embedding a block of data (called the secret) into this halftoned image during the turbo encoding process. The small modifications required at the turbo decoder end to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the Turbo-based stego system are analyzed. Using some entropy-based cryptanalytic techniques, we show that the strength of our Turbo-based stego system approaches that of one-time pads (OTPs).
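
The halftoning method actually used is not stated in the abstract; as a stand-in, the sketch below applies standard Floyd-Steinberg error diffusion to turn an 8-bit grayscale image into the kind of binary image that would then be passed to the turbo encoder.

import numpy as np

def floyd_steinberg(gray):
    """Error-diffusion halftoning: returns a binary (0/1) image from 8-bit grayscale."""
    img = gray.astype(float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = 1 if new else 0
            err = old - new
            # diffuse the quantization error to unprocessed neighbours
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.tile(np.linspace(0, 255, 64), (64, 1))   # horizontal gradient test image
bits = floyd_steinberg(gray)                        # binary payload for the turbo encoder
print(bits.shape, bits.mean())                      # mean tracks the average gray level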

Keywords: Halftoning, Turbo codes, security, operational lifetime, Turbo based stego system.

337 Comparison between Turbo Code and Convolutional Product Code (CPC) for Mobile WiMAX

Authors: Ahmed Ebian, Mona Shokair, Kamal Awadalla

Abstract:

Mobile WiMAX is a broadband wireless solution that enables the convergence of mobile and fixed broadband networks through a common wide-area broadband radio access technology and a flexible network architecture. It adopts Orthogonal Frequency Division Multiple Access (OFDMA) for improved multi-path performance in Non-Line-Of-Sight (NLOS) environments. Scalable OFDMA (SOFDMA) is introduced in IEEE 802.16e [1]. The WiMAX system can use one of several types of channel coding, but the mandatory channel coding scheme is based on binary non-recursive Convolutional Coding (CC). There are several optional channel coding schemes such as block turbo codes, convolutional turbo codes, and low-density parity-check (LDPC) codes. In this paper, a comparison between the performance of WiMAX using turbo code and using convolutional product code (CPC) [2] is made, and a combination of the two is also examined. The CPC gives good results at different SNR values compared to both the turbo system and the combination of the two. For example, at a BER of 10^-2 for 128 subcarriers, the SNR improvement is approximately 3 dB over the turbo code and approximately 2 dB over the combination. Several results are obtained for different modulation schemes (16-QAM and 64-QAM) and different numbers of subcarriers (128 and 512).

Keywords: Turbo Code, Convolutional Product Code (CPC).

336 A PN Sequence Generator based on Residue Arithmetic for Multi-User DS-CDMA Applications

Authors: Chithra R, Pallab Maji, Sarat Kumar Patra, Girija Sankar Rath

Abstract:

The successful use of CDMA technology is based on the construction of large families of encoding sequences with good correlation properties. This paper discusses PN sequence generation based on residue arithmetic, with an effort to improve the performance of the existing interference-limited CDMA technology for mobile cellular systems. Spreading codes based on the residue number system proposed earlier did not consider external interference, multipath propagation, the Doppler effect, etc. In the literature, the use of residue arithmetic in DS-CDMA was restricted to encoding an already spread sequence, where the spreading itself is done by existing techniques. The novelty of this paper is the use of the residue number system in the generation of the PN sequences that spread the message signal. The significance of the cross-correlation factor in alleviating multiple-access interference is also discussed. The RNS-based PN sequence has superior performance to most of the existing codes that are widely used in DS-CDMA applications. Simulation results suggest that the performance of the proposed system is superior to that of many existing systems.

Keywords: Direct-Sequence Code Division Multiple Access (DS-CDMA), Multiple-Access Interference (MAI), PN Sequence, Residue Number System (RNS).

335 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem

Authors: Thuraya M. Qaradaghi, Newroz N. Abdulrazaq

Abstract:

The McEliece cryptosystem is an asymmetric type of cryptography based on error-correcting codes. The classical McEliece scheme used an irreducible binary Goppa code, which is considered unbreakable until now, especially with the parameters [1024, 524, 101], but it suffers from a large public key matrix, which makes it difficult to use practically. In this work, irreducible and separable Goppa codes have been introduced. The irreducible and separable Goppa codes used have flexible parameters and dynamic error vectors. A comparison between separable and irreducible Goppa codes in the McEliece cryptosystem has been made. For the encryption stage, to get a better basis for comparison, two types of tests have been chosen: in the first, the random message is kept constant while the parameters of the Goppa code are changed, whereas in the second, the parameters of the Goppa code are constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity check matrix for the separable type is higher than for the irreducible McEliece cryptosystem, which is expected because an extra parity check matrix for g2(z) must be calculated in the decryption process for the separable type, while the time needed to execute the error locator in the decryption stage for the separable type is better than for the irreducible type. The proposed implementation has been done in Visual Studio C#.

Keywords: McEliece cryptosystem, Goppa code, separable, irreducible.

334 Effect of Transmission Codes on Hybrid SC/MRC Diversity Reception MQAM system over Rayleigh Fading Channels

Authors: J.S. Ubhi, M.S. Patterh, T.S. Kamal

Abstract:

In this paper, the effect of transmission codes on the performance of coherent square M-ary quadrature amplitude modulation (CSMQAM) under hybrid selection/maximal-ratio combining (H-S/MRC) diversity is analysed. The fading channels are modeled as frequency non-selective, slow, independent and identically distributed Rayleigh fading channels corrupted by additive white Gaussian noise (AWGN). The results for coded MQAM are computed numerically for the case of the (24,12) extended Golay code and compared with uncoded MQAM under H-S/MRC diversity by plotting error probabilities versus average signal-to-noise ratio (SNR) for various values of L and N, in order to examine the improvement in the performance of the digital communication system as the number of selected diversity branches is increased. The results for no diversity, conventional SC and Lth-order MRC schemes are also plotted for comparison. The closed-form analytical results derived in this paper are sufficiently simple and can therefore be computed numerically without any approximations. The analytical results presented in this paper are expected to provide useful information for the design and analysis of digital communication systems over wireless fading channels.
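
As a rough numerical companion to the analysis, the sketch below simulates H-S/MRC over i.i.d. Rayleigh branches by selecting the N strongest of L branches and maximal-ratio combining them; BPSK is used instead of MQAM for simplicity, the Golay coding is omitted, and the SNR value and branch counts are arbitrary.

import numpy as np
from scipy.special import erfc

def hsmrc_ber_bpsk(L, N, snr_db, trials=200_000, rng=np.random.default_rng(1)):
    """Monte-Carlo BER of BPSK with hybrid selection/MRC over i.i.d. Rayleigh branches."""
    snr = 10 ** (snr_db / 10)
    # per-branch instantaneous SNR is exponential with mean `snr` under Rayleigh fading
    gamma = rng.exponential(snr, size=(trials, L))
    # keep the N strongest branches and add their SNRs (MRC of the selected set)
    gamma_sel = np.sort(gamma, axis=1)[:, -N:].sum(axis=1)
    # conditional BPSK error probability Q(sqrt(2*gamma)) = 0.5*erfc(sqrt(gamma)), averaged over fading
    return 0.5 * erfc(np.sqrt(gamma_sel)).mean()

for L, N in [(4, 1), (4, 2), (4, 4)]:
    print(f"L={L}, N={N}: BER approx {hsmrc_ber_bpsk(L, N, snr_db=5):.2e}")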

Keywords: Error probability, diversity reception, Rayleigh fading channels, wireless digital communications.

333 Effects of Dual Inoculation of Azotobacter and Mycorrhiza with Nitrogen and Phosphorus Fertilizer Rates on Grain Yield and Some of Characteristics of Spring Safflower

Authors: M. Mirzakhani, M. R. Ardakani, A. Aeene Band, A. H. Shirani Rad, F. Rejali

Abstract:

In order to evaluate the effects of dual inoculation with Azotobacter and Mycorrhiza under different nitrogen and phosphorus levels on the yield and yield components of spring safflower, this study was carried out in the field of Farahan University in Markazi province in 2007. A factorial experiment in a randomized complete block design with three replications was used, combining inoculation with Azotobacter (with and without inoculation) and Mycorrhiza (with and without inoculation) with nitrogen and phosphorus levels [F0 = N0 + P0 (kg.ha-1), F1 = N50 + P25 (kg.ha-1), F2 = N100 + P50 (kg.ha-1) and F3 = N150 + P75 (kg.ha-1)] on spring safflower (cultivar IL-111). Characteristics such as harvest index, hectolitre weight, root dry weight, seed yield, mycorrhizal root colonization and number of days to maturity were assessed. Results indicated that treatment A0M1F3, with a grain yield of 1556 kg.ha-1, and treatment A0M1F0, with a grain yield of 918 kg.ha-1, were significantly superior to the other treatments, and inoculation of the seeds at planting with Azotobacter and Mycorrhiza increased grain yield by about 5.38 percent. Safflower seeds can thus be easily inoculated with Azotobacter and Mycorrhiza at sowing time. The purpose of this research was to study and evaluate the role of biological fixation of N and P in providing plant nutrition.

Keywords: Spring safflower, grain yield, inoculation, Azotobacter and Mycorrhiza.

332 Influence of Deficient Materials on the Reliability of Reinforced Concrete Members

Authors: Sami W. Tabsh

Abstract:

The strength of reinforced concrete depends on the member dimensions and material properties. The properties of concrete and steel materials are not constant but random variables. The variability of concrete strength is due to batching errors, variations in mixing, cement quality uncertainties, differences in the degree of compaction and disparity in curing. Similarly, the variability of steel strength is attributed to the manufacturing process, rolling conditions, characteristics of the base material, uncertainties in chemical composition, and the microstructure-property relationships. To account for such uncertainties, codes of practice for reinforced concrete design impose resistance factors to ensure structural reliability over the useful life of the structure. In this investigation, the effects of reductions in concrete and reinforcing steel strengths from the nominal values, beyond those accounted for in the structural design codes, on the structural reliability are assessed. The considered limit states are flexure, shear and axial compression based on the ACI 318-11 structural concrete building code. Structural safety is measured in terms of a reliability index. Probabilistic resistance and load models are compiled from the available literature. The study showed that there is a wide variation in the reliability index for reinforced concrete members designed for flexure, shear or axial compression, especially when the live-to-dead load ratio is low. Furthermore, variations in concrete strength have a minor effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a severe effect on the reliability of columns in axial compression. On the other hand, changes in steel yield strength have a great effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a mild effect on the reliability of columns in axial compression. Based on the outcome, it can be concluded that the reliability of beams is sensitive to changes in the yield strength of the steel reinforcement, whereas the reliability of columns is sensitive to variations in the concrete strength. Since the embedded target reliability in structural design codes results in lower structural safety in beams than in columns, large reductions in material strengths compromise the structural safety of beams much more than they affect columns.
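
To make the reliability-index metric concrete, the sketch below estimates beta for a single limit state g = R - (D + L) both by Monte Carlo sampling and by a first-order second-moment approximation; the bias factors, coefficients of variation and load ratio are illustrative assumptions, not the statistical models compiled in the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Illustrative lognormal resistance and normal dead/live loads (nominal units, assumed values).
Rn = 1.0                              # nominal flexural resistance
R_mean, R_cov = 1.10 * Rn, 0.12       # assumed bias factor and COV of resistance
D_mean, D_cov = 0.45, 0.10            # dead load
L_mean, L_cov = 0.35, 0.25            # live load

n = 1_000_000
R = rng.lognormal(np.log(R_mean / np.sqrt(1 + R_cov**2)),
                  np.sqrt(np.log(1 + R_cov**2)), n)
D = rng.normal(D_mean, D_mean * D_cov, n)
L = rng.normal(L_mean, L_mean * L_cov, n)

pf = np.mean(R - (D + L) < 0)                 # probability of failure
beta_mc = -norm.ppf(pf)                       # reliability index from Monte Carlo
# First-order second-moment approximation on g = R - D - L
g_mean = R_mean - D_mean - L_mean
g_std = np.sqrt((R_mean * R_cov)**2 + (D_mean * D_cov)**2 + (L_mean * L_cov)**2)
print(f"beta (Monte Carlo) = {beta_mc:.2f},  beta (FOSM) = {g_mean / g_std:.2f}")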

Keywords: Code, flexure, limit states, random variables, reinforced concrete, reliability, reliability index, shear, structural safety.

331 Novel Dual Stage Membrane Bioreactor for the Continuous Remediation of Electroplating Wastewater

Authors: B. A. Q. Santos, S. K. O. Ntwampe, G. Muchatibaya

Abstract:

In this study, the designed dual-stage membrane bioreactor (MBR) system was conceptualized for the treatment of cyanide and heavy metals in electroplating wastewater. The design consisted of a primary treatment stage to reduce the impact of fluctuations and a secondary treatment stage to remove the residual cyanide and heavy metal contaminants in the wastewater under alkaline pH conditions. The primary treatment stage contained hydrolyzed Citrus sinensis (C. sinensis) pomace and the secondary treatment stage contained active Aspergillus awamori (A. awamori) biomass, supplemented solely with C. sinensis pomace extract from the hydrolysis process. Average degradation efficiencies of 76.37%, 95.37%, 93.26% and 94.76%, and of 99.55%, 99.91%, 99.92% and 99.92%, were observed for total cyanide (T-CN) and for the sorption of nickel (Ni), zinc (Zn) and copper (Cu) after the first and second treatment stages, respectively. Furthermore, degradation of the cyanide conversion by-products formate (HCOO-) and ammonium (NH4+) was 99.81% and 99.75%, respectively, after the second treatment stage. After the first, second and third regeneration cycles of the C. sinensis pomace in the first treatment stage, the Ni, Zn and Cu removal achieved was 99.13%, 99.12% and 99.04% (first regeneration cycle), 98.94%, 98.92% and 98.41% (second regeneration cycle) and 98.46%, 98.44% and 97.91% (third regeneration cycle), respectively. There was a relatively insignificant standard deviation in all the measured parameters of the system, which indicated the reproducibility of the remediation efficiency of this continuous system.

Keywords: Aspergillus awamori, Citrus sinensis pomace, electroplating wastewater remediation, membrane bioreactor.

330 A Framework for Improving Trade Contractors’ Productivity Tracking Methods

Authors: Sophia Hayes, Kenny L. Liang, Sahil Sharma, Austin Shema, Mahmoud Bader, Mohamed Elbarkouky

Abstract:

Despite being one of the most significant economic contributors of the country, Canada’s construction industry is lagging behind other sectors when it comes to labor productivity improvements. The construction industry is very collaborative, as a general contractor will hire trade contractors to perform most of a project’s work; this means that low productivity from one contractor can have a domino effect on the shared success of a project. To address this issue and encourage trade contractors to improve their productivity tracking methods, an investigative study was done on the productivity views and tracking methods of various trade contractors. Additionally, an in-depth review was done on four standard tracking methods used in the construction industry: cost codes, benchmarking, the job productivity measurement (JPM) standard, and WorkFace Planning (WFP). The four tracking methods were used as a baseline for comparing the trade contractors’ responses, determining gaps within their current tracking methods, and making improvement recommendations. Fifteen interviews were conducted with different trades to analyze how contractors value productivity. The results of these analyses indicated that there are gaps within the construction industry when it comes to understanding the purpose and value of productivity tracking. The trade contractors also shared their current productivity tracking systems, which were then compared to the four standard tracking methods used in the construction industry. Gaps were identified in their various tracking methods and, using a framework, recommendations were made, based on the type of trade, on how to improve how they track productivity.

Keywords: Trade contractors’ productivity, productivity tracking, cost codes, benchmarking, job productivity measurement (JPM), WorkFace Planning (WFP).

329 Moving Area Filter to Detect Object in Video Sequence from Moving Platform

Authors: Sallama Athab, Hala Bahjat

Abstract:

Detecting objects in a video sequence is a challenging task for identifying and tracking moving objects. Background removal is considered a basic step in moving-object detection tasks. Dual static cameras placed at the front and rear of a moving platform gather the information used to detect objects. The background changes with the speed and direction of the moving platform, so distinguishing moving objects becomes complicated. In this paper, we propose a framework that allows moving objects to be detected dynamically under a variety of platform speeds and directions. The object detection technique is built on two levels: the first level applies background removal and edge detection to generate moving areas; the second level applies the Moving Areas Filter (MAF) and then calculates a Correlation Score (CS) for each adjusted moving area. Moving areas with close CS values are merged and marked as a moving object. Experimental results are prepared on real scenes acquired by the dual static cameras without overlap in the scene. The results show good accuracy in detecting objects compared with optical flow and Mixture Module Gaussian (MMG) approaches, and an accuracy ratio is produced to measure how accurately moving objects are detected.
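
The exact Moving Areas Filter and Correlation Score are not defined in the abstract; the sketch below shows a generic version of the two-level idea, thresholding the difference between consecutive frames to obtain candidate moving areas and then scoring two candidate areas with normalized cross-correlation so that areas with close scores could be merged. The threshold and the correlation definition are assumptions.

import numpy as np

def moving_areas(prev, curr, thresh=25):
    """Level 1: frame differencing -> binary mask of moving pixels."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def correlation_score(patch_a, patch_b):
    """Level 2: normalized cross-correlation between two equally sized moving areas."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(3)
prev = rng.integers(0, 50, (120, 160)).astype(np.uint8)
curr = prev.copy()
curr[40:60, 50:70] += 100                    # an object appears between frames
curr[40:60, 90:110] = curr[40:60, 50:70]     # the same pattern appears in a second area

mask = moving_areas(prev, curr)
score = correlation_score(curr[40:60, 50:70], curr[40:60, 90:110])
print(mask.sum(), "moving pixels; correlation score between the two areas =", round(score, 2))
# identical patterns give a score of 1.0, so the two areas would be merged as one object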

Keywords: Background Removal, Correlation, Mixture Module Gaussian, Moving Platform, Object Detection.

328 Scale Time Offset Robust Modulation (STORM) in a Code Division Multiaccess Environment

Authors: David M. Jenkins Jr.

Abstract:

Scale Time Offset Robust Modulation (STORM) [1]–[3] is a high-bandwidth waveform design that adds time-scale to embedded reference modulations using only time-delay [4]. In an environment where each user has a specific delay and scale, identification of the user with the highest signal power, and of that user's phase, is facilitated by the STORM processor. Both of these parameters are required in an efficient multiuser detection algorithm. In this paper, the STORM modulation approach is evaluated with a direct sequence spread quadrature phase shift keying (DS-QPSK) system. A misconception about STORM time-scale modulation is that a fine temporal resolution is required at the receiver. STORM is applied to a QPSK code division multiaccess (CDMA) system by modifying the spreading codes. Specifically, the in-phase code uses a typical spreading code, and the quadrature code uses a time-delayed and time-scaled version of the in-phase code. Consequently, the same temporal resolution is required in the receiver before and after the application of STORM. In this paper, the bit error performance of STORM in a synchronous CDMA system is evaluated and compared to theory, and the bit error performance of STORM incorporated in a single-user WCDMA downlink is presented to demonstrate the applicability of STORM in a modern communication system.

Keywords: Pseudonoise coded communication, Cyclic codes, Code division multiaccess

327 Unbalanced Distribution Optimal Power Flow to Minimize Losses with Distributed Photovoltaic Plants

Authors: Malinwo Estone Ayikpa

Abstract:

Electric power systems are expected to operate with minimum losses and with voltages meeting international standards. This is generally made possible by control actions provided by automatic voltage regulators, capacitors and transformers with on-load tap changers (OLTC). With the development of photovoltaic (PV) system technology, the integration of PV systems into distribution networks has increased over the last years to the extent of replacing the above-mentioned techniques. The conventional analysis and simulation tools used for electrical networks are no longer able to take into account the control actions necessary for studying the impact of distributed PV generation. This paper presents an unbalanced optimal power flow (OPF) model that minimizes losses by combining the active power generation and the reactive power control of single-phase and three-phase PV systems. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. The unbalanced OPF is formulated with current balance equations and solved by a primal-dual interior point method. Several simulation cases have been carried out varying the size and location of the PV systems, and the results give a detailed view of the impact of distributed PV generation on distribution systems.
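
The OPF model lets each PV inverter generate or absorb reactive power within its apparent-power rating and power-factor setting; the small helper below evaluates only that constraint: for an inverter rated S producing active power P, the admissible reactive range is plus/minus sqrt(S^2 - P^2), optionally tightened by a minimum power-factor limit. The numerical values are placeholders.

import math

def reactive_limits(s_rating_kva, p_kw, min_power_factor=None):
    """Reactive power range (kvar) available from an inverter of rating S at active output P."""
    p = min(abs(p_kw), s_rating_kva)
    q_cap = math.sqrt(s_rating_kva**2 - p**2)             # capability-circle limit
    if min_power_factor is not None and p > 0:
        q_pf = p * math.tan(math.acos(min_power_factor))  # power-factor limit
        q_cap = min(q_cap, q_pf)
    return -q_cap, q_cap

print(reactive_limits(10.0, 8.0))                          # full capability circle: (-6.0, 6.0)
print(reactive_limits(10.0, 8.0, min_power_factor=0.95))   # tightened by the 0.95 pf limit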

Keywords: Distribution system, losses, photovoltaic generation, primal-dual interior point method, reactive power control.

326 Embodiment Design of an Azimuth-Altitude Solar Tracker

Authors: M. Culman, O. Lengerke

Abstract:

To provide an efficient solar generation system, the embodiment design of a two-axis solar tracker for an array of photovoltaic (PV) panels destined to supply the power demand of off-the-grid areas was developed. Photovoltaic cells have high costs in relation to their low efficiency, and while a lot of research and investment has been made to increase their efficiency by a few points, there is a profitable solution that increases annual power production by 30-40%: two-axis solar trackers. A solar tracker is a device that keeps a supported load perpendicular to the sun during daylight. Mounted on solar trackers, the solar panels remain perpendicular to the incoming sunlight throughout the day and the seasons, so the maximum amount of energy is produced. A preliminary study justified why the generation of solar energy through photovoltaic panels mounted on dual-axis structures is an attractive solution to bring electricity to remote off-the-grid areas. The result of this work is the embodiment design of an azimuth-altitude solar tracker to guide an array of photovoltaic panels, based on a specific design methodology. The designed solar tracker is mounted on a pedestal that uses two slewing drives, with a nominal torque of 1950 Nm, to move a solar array of 12 PV panels providing 3720 W.
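
An azimuth-altitude tracker ultimately needs the sun's elevation and azimuth as functions of time and site latitude; the sketch below computes them with the common declination/hour-angle approximation (equation of time and atmospheric refraction ignored), which is a generic formula rather than the control algorithm of the designed tracker. The example site and times are assumptions.

import math

def sun_position(day_of_year, solar_hour, latitude_deg):
    """Approximate solar elevation and azimuth (degrees) from declination and hour angle."""
    lat = math.radians(latitude_deg)
    decl = math.radians(23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0)))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))          # 15 degrees per hour
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    el = math.asin(sin_el)
    cos_az = (math.sin(decl) - math.sin(el) * math.sin(lat)) / (math.cos(el) * math.cos(lat))
    az = math.acos(max(-1.0, min(1.0, cos_az)))                    # measured clockwise from north
    if hour_angle > 0:                                             # afternoon: mirror to the west
        az = 2 * math.pi - az
    return math.degrees(el), math.degrees(az)

# Example: solar noon and mid-afternoon on day 172 at an assumed 7 deg N off-grid site
print(sun_position(172, 12.0, 7.0))
print(sun_position(172, 15.0, 7.0))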

Keywords: Azimuth-altitude sun tracker, dual-axis solar tracker, photovoltaic system, solar energy, stand-alone power system.

325 Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences

Authors: Yuan-Hsiang Chang, Pin-Chi Lin, Li-Der Jeng

Abstract:

Advance in techniques of image and video processing has enabled the development of intelligent video surveillance systems. This study was aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. The adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during the interaction, the Kalman filter was used to retain a complete trajectory for each human object. Finally, the motion trajectory analysis was developed to distinguish between the interaction and non-interaction events based on derivatives of trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system could achieve the classification accuracy of 80% in interaction events and 95% in non-interaction events, respectively. In summary, we have explored the idea to investigate a system for the automatic classification of events for interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated in an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatch, fighting, etc.). 
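
The occlusion handling relies on a Kalman filter that keeps predicting an object's position when detections drop out; the sketch below is a minimal constant-velocity Kalman filter for a single object in image coordinates, with process and measurement noise levels chosen arbitrarily rather than taken from the paper.

import numpy as np

dt = 1.0                                   # one frame
F = np.array([[1, 0, dt, 0],               # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only the centroid (x, y) is measured
Q = np.eye(4) * 0.01                       # process noise (assumed)
R = np.eye(2) * 4.0                        # measurement noise (assumed)

x = np.array([0.0, 0.0, 2.0, 1.0])         # initial state
P = np.eye(4)

def kf_step(x, P, z=None):
    """One predict/update cycle; pass z=None when the object is occluded."""
    x, P = F @ x, F @ P @ F.T + Q          # predict
    if z is not None:                      # update only when a detection exists
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

track = []
for t in range(10):
    z = None if t in (4, 5) else np.array([2.0 * t, 1.0 * t]) + np.random.randn(2)
    x, P = kf_step(x, P, z)
    track.append(x[:2].copy())
print(np.round(track, 1))                  # the trajectory is retained through the occlusion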

Keywords: Motion detection, motion tracking, trajectory analysis, video surveillance.

324 Design and Performance Analysis of One Dimensional Zero Cross-Correlation Coding Technique for a Fixed Wavelength Hopping SAC-OCDMA

Authors: Satyasen Panda, Urmila Bhanja

Abstract:

This paper presents a SAC-OCDMA code with the zero cross-correlation property, named the New Zero Cross Correlation (NZCC) code, to minimize Multiple Access Interference (MAI); it is found to be more scalable than the other existing SAC-OCDMA codes. The NZCC code is constructed from an address segment and a data segment. In this work, the proposed NZCC code is implemented in an optical system using the OptiSystem software for the spectral amplitude coded optical code-division multiple-access (SAC-OCDMA) scheme. The main contribution of the proposed NZCC code is its zero cross-correlation, which reduces both the MAI and PIIN noises. The proposed NZCC code offers minimum cross-correlation, flexibility in selecting the code parameters and support for a large number of users, combined with a high data rate and longer fiber length. Simulation results reveal that the optical code-division multiple-access system based on the proposed NZCC code accommodates the maximum number of simultaneous users with higher data-rate transmission, lower Bit Error Rates (BER) and longer travelling distance without any signal quality degradation, as compared to the former existing SAC-OCDMA codes.
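
The construction of the proposed NZCC code (address plus data segments) is not reproduced in the abstract; the sketch below only illustrates what the zero-cross-correlation property means, using the simplest possible ZCC family in which each user owns a disjoint set of chip positions, and verifying numerically that all pairwise cross-correlations are exactly zero.

import numpy as np

def trivial_zcc(num_users, weight):
    """Each user lights `weight` chips that no other user uses -> zero cross-correlation."""
    code_len = num_users * weight
    C = np.zeros((num_users, code_len), dtype=int)
    for k in range(num_users):
        C[k, k * weight:(k + 1) * weight] = 1
    return C

C = trivial_zcc(num_users=4, weight=3)
cross = C @ C.T            # Gram matrix: diagonal = code weight, off-diagonal = cross-correlations
print(C)
print("max cross-correlation:", cross[~np.eye(len(C), dtype=bool)].max())   # 0 for a ZCC family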

Keywords: Cross Correlation, Optical Code Division Multiple Access, Spectral Amplitude Coding Optical Code Division Multiple Access, Multiple Access Interference, Phase Induced Intensity Noise, New Zero Cross Correlation code.

323 The Coexistence of Dual Form of Malnutrition among Portuguese Institutionalized Elderly People

Authors: C. Caçador, M. J. Reis Lima, J. Oliveira, M. J. Veiga, M. Teixeira Veríssimo, F. Ramos, M. C. Castilho, E. Teixeira-Lemos

Abstract:

In the present study, we evaluated the nutritional status of 214 institutionalized elderly residents of both genders, aged 65 years and older, living in 11 care homes located in the district of Viseu (center of Portugal). The evaluation was based on anthropometric measurements and the Mini Nutritional Assessment (MNA) score.

The mean age of the subjects was 82.3 ± 6.1 years. Most of the elderly residents were female (72.0%). The majority had 4 years of formal education (51.9%) and were widowed (74.3%) or married (14.0%).

Men presented a mean age of 81.2±8.5 years, a weight of 69.3±14.5 kg and a BMI of 25.33±6.5 kg/m2. In women, the mean age was 84.5±8.2 years, the weight 61.2±14.7 kg and the BMI 27.43±5.6 kg/m2.

The evaluation of the nutritional status using the MNA score showed that 24.0% of the residents were at risk of undernutrition and 76.0% were well nourished.

According to BMI, there was a high prevalence of obese (24.8%) and overweight (33.2%) residents, while 7.5% were considered underweight.

We also found that, according to their waist circumference (WC) measurements, 88.3% of the residents were at risk for cardiovascular disease (CVD) and 64.0% of them presented a very high risk for CVD (WC ≥ 88 cm for women and WC ≥ 102 cm for men).

The present study revealed the coexistence of a dual form of malnutrition (undernourished and overweight) among the institutionalized Portuguese concomitantly with an excess of abdominal adiposity. The high prevalence of residents at high risk for CVD should not be overlooked.

Given the vulnerability of the group of institutionalized elderly, our study highlights the importance of the classification of nutritional status based on both instruments: the BMI and the MNA.

Keywords: Nutritional status, MNA, BMI, elderly.

322 Developing Laser Spot Position Determination and PRF Code Detection with Quadrant Detector

Authors: Mohamed Fathy Heweage, Xiao Wen, Ayman Mokhtar, Ahmed Eldamarawy

Abstract:

In this paper, we are interested in the modeling, simulation, and measurement of the laser spot position with a quadrant detector. We enhance the detection and tracking of a semi-active laser weapon decoding system based on a microcontroller. The system receives the reflected pulse through the quadrant detector and processes the laser pulses through a processing circuit, with a microcontroller decoding the laser pulses reflected by the target. The seeker accuracy is enhanced by the decoding system, the laser detection time based on the number of received pulses is reduced, and a gate is used to limit the laser pulse width. The model is implemented based on the Pulse Repetition Frequency (PRF) technique with two microcontroller units (MCU). MCU1 generates laser pulses with different codes, and MCU2 decodes the laser code and locks the system onto the specific code. The codes are selected with the two selector switches. The system is implemented and tested in the Proteus ISIS software. The full position determination circuit with the detector was implemented, and a general system for spot position determination was realized with the laser PRF for the incident radiation and a mechanical system for adjusting the setup at different angles. The system test results show that the system can detect the laser code with only three received pulses based on the narrow gate signal, and good agreement between simulated and measured system performance is obtained.
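
The decoder locks onto a specific PRF code after only three received pulses by checking that successive pulse intervals fall inside a narrow gate; the sketch below reproduces that timing logic in software with made-up PRF values and gate width. The hardware gate and the ATmega firmware are not modeled, and the candidate code set is hypothetical.

def decode_prf(pulse_times_ms, candidate_prfs_hz, gate_ms=0.2, pulses_to_lock=3):
    """Return the candidate PRF whose period matches the measured intervals, or None."""
    intervals = [t2 - t1 for t1, t2 in zip(pulse_times_ms, pulse_times_ms[1:])]
    for prf in candidate_prfs_hz:
        period = 1000.0 / prf                               # expected interval in ms
        hits = sum(abs(dt - period) <= gate_ms for dt in intervals)
        if hits >= pulses_to_lock - 1:                      # 3 pulses -> 2 matching intervals
            return prf
    return None

# Pulses reflected from the target at roughly 10 Hz (100 ms apart) with small jitter.
received = [0.0, 100.05, 199.97, 300.02]
print(decode_prf(received, candidate_prfs_hz=[8.0, 10.0, 12.5]))   # locks onto 10.0 Hz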

Keywords: 4-quadrant detector, pulse code detection, laser guided weapons, pulse repetition frequency, ATmega 32 microcontrollers.

321 Determining the Maximum Lateral Displacement Due to Severe Earthquakes without Using Nonlinear Analysis

Authors: Mussa Mahmoudi

Abstract:

For seismic design, it is important to estimate the maximum lateral displacement (inelastic displacement) of structures due to severe earthquakes, for several reasons. Seismic design provisions estimate the maximum roof and storey drifts occurring in major earthquakes by amplifying the drifts obtained from an elastic analysis under the seismic design load with a coefficient named the "displacement amplification factor", which is greater than one. This coefficient depends on various parameters, such as the ductility and overstrength factors. The present research aims to evaluate the value of the displacement amplification factor in seismic design codes and then proposes a value for estimating the maximum lateral structural displacement due to severe earthquakes without using nonlinear analysis. Since, in seismic codes, the displacement amplification is related to the "force reduction factor", this relation has been adopted in the current study. Two methodologies are applied to evaluate the value of the displacement amplification factor and its relation with the force reduction factor. In the first methodology, which is applied to all structures, the ratio of the displacement amplification and force reduction factors is determined directly, whereas in the second methodology, which is applicable only to R/C moment-resisting frames, the ratio is obtained by calculating both factors separately. The results of the two methodologies are similar and estimate the ratio of the two factors to be between 1 and 1.2. The results indicate that the ratio of the displacement amplification factor to the force reduction factor differs from those proposed by seismic provisions such as NEHRP, IBC and the Iranian seismic code (Standard No. 2800).
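
In displacement-based checks of this kind, the inelastic storey drift is obtained by scaling the elastic drift from the reduced-force analysis by the displacement amplification factor; the short sketch below evaluates that scaling and the Cd/R ratio for a couple of illustrative NEHRP/ASCE-style factor pairs, which are examples rather than the values derived in the paper.

def inelastic_drift(elastic_drift, cd, importance_factor=1.0):
    """Amplify the elastic drift from the reduced-force analysis by Cd (NEHRP/ASCE-style)."""
    return cd * elastic_drift / importance_factor

cases = {                                    # illustrative (Cd, R) pairs
    "special RC moment frame": (5.5, 8.0),
    "ordinary RC moment frame": (2.5, 3.0),
}
elastic_drift = 0.004                        # storey drift ratio from the elastic analysis
for name, (cd, r) in cases.items():
    print(f"{name}: Cd/R = {cd / r:.2f}, "
          f"estimated inelastic drift = {inelastic_drift(elastic_drift, cd):.4f}")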

Keywords: Displacement amplification factor, Ductility factor, Force reduction factor, Maximum lateral displacement.

320 Simulation of Static Frequency Converter for Synchronous Machine Operation and Investigation of Shaft Voltage

Authors: Arun Kumar Datta, M. A. Ansari, N. R. Mondal, B. V. Raghavaiah, Manisha Dubey, Shailendra Jain

Abstract:

This study is carried out to understand the effects of a static frequency converter (SFC) on a large machine. The SFC has the feature of four-quadrant operation, by virtue of which it can be used to run a synchronous machine either as a motor or as an alternator. This dual-mode operation allows a single machine to start and run as a motor and then be converted to an alternator whenever required. One such dual-purpose machine is studied here. This machine is installed in a laboratory that carries out short-circuit tests on high-power electrical equipment. The SFC connected to this machine is described in detail in this paper. The same SFC has been modeled with the MATLAB/Simulink software. The data applied to this virtual model are the actual parameters of the SFC and the synchronous machine. After running the model, the simulated machine voltage and current waveforms are validated against the real measurements. These waveforms are processed with the Fast Fourier Transform (FFT), which reveals that the waveforms are not sinusoidal but contain a number of harmonics. These harmonics are the major cause of the generated shaft voltage. It is known that the bearings of an electrical machine are vulnerable to the current flow caused by shaft voltage. A general discussion of the causes of shaft voltage in the context of this machine is presented in this paper.
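
The harmonic content that drives the shaft voltage is extracted from the recorded waveforms with an FFT; the snippet below shows that processing step on a synthetic voltage containing 5th and 7th harmonics, which is only a stand-in for the measured SFC waveforms (sampling rate, fundamental frequency and harmonic amplitudes are assumptions).

import numpy as np

fs, f0, cycles = 10_000.0, 50.0, 20                 # sampling rate, fundamental, record length
t = np.arange(0, cycles / f0, 1 / fs)
# Synthetic machine voltage: fundamental plus 5th and 7th harmonics, typical of converter-fed machines.
v = (1.00 * np.sin(2 * np.pi * f0 * t)
     + 0.12 * np.sin(2 * np.pi * 5 * f0 * t)
     + 0.08 * np.sin(2 * np.pi * 7 * f0 * t))

spectrum = np.fft.rfft(v) / (len(v) / 2)            # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(v), 1 / fs)
for h in (1, 5, 7, 11):                             # the 11th is absent and should read ~0
    amp = np.abs(spectrum[np.argmin(np.abs(freqs - h * f0))])
    print(f"harmonic {h:2d} ({h * f0:5.0f} Hz): amplitude = {amp:.3f}")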

Keywords: Alternators, AC-DC power conversion, capacitive coupling, electric discharge machining, frequency converter, Fourier transforms, inductive coupling, simulation, Shaft voltage, synchronous machines, static excitation, thyristor.

319 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material

Authors: S. Boria

Abstract:

In recent years, the crashworthiness of an automotive body structure can be improved from the very beginning of the design stage thanks to the development of specific optimization tools. It is well known that finite element codes can help the designer to investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods that are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method appears to be excellent in accuracy, robustness and efficiency compared to the others when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work in order to improve the structural optimization of a bumper for a racing car made of composite material and subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters subjected to some design constraints are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
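
The DOE/Kriging/optimizer loop can be mimicked in a few lines; the sketch below fits a Gaussian-process (Kriging) surrogate to a handful of samples of a one-dimensional toy objective and then maximizes the surrogate on a fine grid, standing in for the LS-OPT/LS-DYNA coupling. The toy objective, kernel and sample count are placeholders, not the bumper model.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_objective(x):
    """Toy stand-in for a crash simulation returning specific energy absorption."""
    return np.sin(3 * x) * (1 - x) + 0.5 * x

rng = np.random.default_rng(4)
X_doe = rng.uniform(0.0, 1.0, size=(8, 1))          # design of experiments (8 "simulations")
y_doe = expensive_objective(X_doe.ravel())

kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.2),
                                   normalize_y=True)
kriging.fit(X_doe, y_doe)

X_grid = np.linspace(0.0, 1.0, 501).reshape(-1, 1)
y_pred, y_std = kriging.predict(X_grid, return_std=True)   # y_std could drive adaptive sampling
best = X_grid[np.argmax(y_pred)][0]
print(f"surrogate optimum at x = {best:.3f}, "
      f"true objective there = {expensive_objective(best):.3f}")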

Keywords: Composite material, crashworthiness, finite element analysis, optimization.

318 Beam Coding with Orthogonal Complementary Golay Codes for Signal to Noise Ratio Improvement in Ultrasound Mammography

Authors: Y. Kumru, K. Enhos, H. Köymen

Abstract:

In this paper, we report experimental results on using complementary Golay coded signals at 7.5 MHz to detect breast microcalcifications of 50 µm size. Simulations using complementary Golay coded signals show perfect consistency with the experimental results, confirming the improved signal-to-noise ratio for complementary Golay coded signals. To improve the success in detecting the microcalcifications, orthogonal complementary Golay sequences with minimal cross-correlation, and hence minimum interference, are used as coded signals and compared to a tone burst pulse of equal energy in terms of resolution under weak signal conditions. The measurements are conducted using an experimental ultrasound research scanner, the Digital Phased Array System (DiPhAS) having 256 channels, and a phased array transducer with 7.5 MHz center frequency, and the results obtained through experiments are validated by the Field-II simulation software. In addition, to investigate the superiority of coded signals in terms of resolution, a multipurpose tissue-equivalent phantom containing a series of monofilament nylon targets, 240 µm in diameter, and cyst-like objects with attenuation of 0.5 dB/[MHz x cm] is used in the experiments. We obtained ultrasound images of the monofilament nylon targets for the evaluation of resolution. Simulation and experimental results show that it is possible to differentiate closely positioned small targets with increased success by using coded excitation under very weak signal conditions.
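
The property exploited here is that the aperiodic autocorrelations of a Golay complementary pair sum to a delta function, so the two receptions can be added to raise the signal-to-noise ratio without range sidelobes; the snippet below builds a length-16 pair with the standard recursive construction and verifies that property numerically. This is a generic pair, not the specific orthogonal set used with the DiPhAS scanner.

import numpy as np

def golay_pair(length):
    """Recursive construction of a binary (+1/-1) Golay complementary pair."""
    a, b = np.array([1.0]), np.array([1.0])
    while len(a) < length:
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_autocorr(x):
    return np.correlate(x, x, mode="full")

a, b = golay_pair(16)
total = aperiodic_autocorr(a) + aperiodic_autocorr(b)
print(total.astype(int))     # 2N at zero lag, exactly zero at every other lag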

Keywords: Coded excitation, complementary Golay codes, DiPhAS, medical ultrasound.

317 Complex Wavelet Transform Based Image Denoising and Zooming Under the LMMSE Framework

Authors: T. P. Athira, Gibin Chacko George

Abstract:

This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as estimating the noiseless and missing samples under the same framework of optimal estimation. Initially, the DT-CWT is used to decompose an input low-resolution noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square estimation (LMMSE) based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates of it along different directions and then fuse those directional estimates for a more accurate denoised LR image. The estimation parameters calculated in the denoising process can be readily used to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and the interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method can reduce many noise-caused interpolation artifacts and preserve the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many of the existing denoising and interpolation methods.
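
The core LMMSE step, estimating a noiseless coefficient from its noisy observation given local signal and noise variances, can be written in one line; the toy sketch below applies it sample by sample to a noisy 1-D signal, using a local window to estimate the signal variance. The window size and noise level are assumptions, and the directional fusion and DT-CWT stages are not reproduced.

import numpy as np

def lmmse_denoise(noisy, noise_var, window=9):
    """Per-sample LMMSE shrinkage: x_hat = mu + s2 / (s2 + n2) * (y - mu)."""
    pad = window // 2
    padded = np.pad(noisy, pad, mode="reflect")
    out = np.empty_like(noisy, dtype=float)
    for i in range(len(noisy)):
        local = padded[i:i + window]
        mu = local.mean()
        sig2 = max(local.var() - noise_var, 0.0)    # estimated noiseless signal variance
        out[i] = mu + sig2 / (sig2 + noise_var) * (noisy[i] - mu)
    return out

rng = np.random.default_rng(5)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noise_var = 0.04
noisy = clean + rng.normal(0, np.sqrt(noise_var), clean.size)
denoised = lmmse_denoise(noisy, noise_var)
print(f"MSE noisy = {np.mean((noisy - clean)**2):.4f}, "
      f"MSE LMMSE = {np.mean((denoised - clean)**2):.4f}")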

Keywords: Dual-tree complex wavelet transform (DT-CWT), denoising, interpolation, optimal estimation, super resolution.
