Search results for: Detection Rate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4119

3279 Decoder Design for a New Single Error Correcting/Double Error Detecting Code

Authors: M. T. Anwar, P. K. Lala, P. Thenappan

Abstract:

This paper presents the decoder design for the single error correcting and double error detecting code proposed by the authors in an earlier paper. The speed of error detection and correction of a code is largely dependent upon the associated encoder and decoder circuits. The complexity and the speed of such circuits are determined by the number of 1s in the parity check matrix (PCM). The number of 1s in the parity check matrix for the code proposed by the authors is fewer than in any currently known single error correcting/double error detecting code. This results in simplified encoding and decoding circuitry for error detection and correction.
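
A minimal sketch of the syndrome-based SEC/DED decoding flow that such a decoder implements in hardware, written in Python with a small illustrative Hsiao-style parity check matrix (distinct, odd-weight columns); the authors' actual PCM with fewer 1s is not reproduced here.

    import numpy as np

    # Illustrative 4x8 Hsiao-style parity check matrix (distinct, odd-weight columns).
    # This is NOT the authors' matrix; it only demonstrates the decoding logic.
    H = np.array([[1, 1, 1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 1, 0, 0],
                  [1, 0, 1, 1, 0, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 0, 1]])

    def encode(data4):
        """Append 4 check bits so that H @ codeword = 0 (mod 2)."""
        p = H[:, :4] @ data4 % 2
        return np.concatenate([data4, p])

    def decode(received):
        """Zero syndrome -> clean; syndrome equal to a column -> flip that bit;
        any other nonzero syndrome -> uncorrectable double error."""
        s = H @ received % 2
        if not s.any():
            return received, "no error"
        matches = np.where((H.T == s).all(axis=1))[0]
        if matches.size == 1:
            corrected = received.copy()
            corrected[matches[0]] ^= 1
            return corrected, "single error corrected"
        return received, "double error detected"

    c = encode(np.array([1, 0, 1, 1]))
    c[2] ^= 1                      # inject a single-bit error
    print(decode(c)[1])            # -> single error corrected
    c[5] ^= 1                      # a second error on the corrupted word
    print(decode(c)[1])            # -> double error detected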

Keywords: Decoder, Hsiao code, Parity Check Matrix, Syndrome Pattern.

3278 Autonomously Determining the Parameters for SVDD with RBF Kernel from a One-Class Training Set

Authors: Andreas Theissler, Ian Dear

Abstract:

The one-class support vector machine “support vector data description” (SVDD) is an ideal approach for anomaly or outlier detection. However, for the applicability of SVDD in real-world applications, ease of use is crucial. The results of SVDD are largely determined by the choice of the regularisation parameter C and the kernel parameter of the widely used RBF kernel. While for two-class SVMs the parameters can be tuned using cross-validation based on the confusion matrix, for a one-class SVM this is not possible, because only true positives and false negatives can occur during training. This paper proposes an approach to find the optimal set of parameters for SVDD solely based on a training set from one class and without any user parameterisation. Results on artificial and real data sets are presented, underpinning the usefulness of the approach.
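
As an illustration of tuning from one-class data only, the sketch below scans RBF kernel and regularisation settings of scikit-learn's OneClassSVM (closely related to SVDD with an RBF kernel) and keeps the model whose training acceptance rate is closest to a target; this selection criterion is a simple stand-in, not the criterion proposed in the paper.

    import numpy as np
    from sklearn.svm import OneClassSVM

    # Hypothetical self-tuning loop (NOT the paper's criterion): scan RBF gamma and
    # nu settings and keep the model whose training acceptance rate is closest to a
    # target, using nothing but the one-class training set itself.
    def tune_one_class(X, target_acceptance=0.95):
        best_model, best_gap = None, np.inf
        for gamma in np.logspace(-3, 2, 12):
            for nu in (0.01, 0.05, 0.1):
                model = OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(X)
                acceptance = np.mean(model.predict(X) == 1)   # fraction kept as inliers
                gap = abs(acceptance - target_acceptance)
                if gap < best_gap:
                    best_model, best_gap = model, gap
        return best_model

    X_train = np.random.randn(200, 2)        # stand-in for the one-class training data
    detector = tune_one_class(X_train)
    print(detector.get_params()["gamma"], detector.get_params()["nu"])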

Keywords: Support vector data description, anomaly detection, one-class classification, parameter tuning.

3277 Structural Damage Detection via Incomplete Modal Data Using Output Data Only

Authors: Ahmed Noor Al-Qayyim, Barlas Ozden Caglayan

Abstract:

Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on obtaining efficient tools to detect damage in structures at an early stage. In the past decades, a subject that has received considerable attention in the literature is damage detection as determined by variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique. The technique detects the damage location for an incomplete structural system using output data only. The method indicates the damage based on free vibration test data by using the 'Two Points Condensation (TPC)' technique. This method creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion using the measured test data. The current stiffness matrices are compared with the original (undamaged) stiffness matrices, and large percentage changes in the matrix coefficients indicate the location of the damage. The TPC technique is applied to experimental data from a simply supported steel beam model structure after inducing a thickness change in one element, where two cases are considered. The method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency shows that this technique can also be used for large structures.
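
The final comparison step, flagging large percentage changes between the undamaged and current condensed stiffness matrices, can be illustrated in a few lines of NumPy; the matrices below are synthetic examples, not values from the TPC formulation or the beam test.

    import numpy as np

    # Synthetic 2x2 condensed stiffness matrices, only to illustrate the final
    # comparison step; these are not values from the TPC formulation or the beam test.
    def damage_indicator(K_original, K_current):
        """Percentage change of each stiffness coefficient; large values flag damage."""
        return 100.0 * np.abs(K_current - K_original) / np.abs(K_original)

    K_undamaged = np.array([[ 2.0e6, -1.0e6],
                            [-1.0e6,  2.0e6]])
    K_damaged   = np.array([[ 1.6e6, -0.9e6],
                            [-0.9e6,  2.0e6]])
    print(damage_indicator(K_undamaged, K_damaged))   # ~20% change points at the damaged DOF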

Keywords: Damage detection, two points–condensation, structural health monitoring, signals processing, optimization.

3276 Structural Health Monitoring of Offshore Structures Using Wireless Sensor Networking under Operational and Environmental Variability

Authors: Srinivasan Chandrasekaran, Thailammai Chithambaram, Shihas A. Khader

Abstract:

Early-stage damage detection in offshore structures requires continuous structural health monitoring, and over a large area the positioning of the sensors also plays an important role in efficient damage detection. Determining the dynamic behavior of offshore structures requires dense deployment of sensors. Wired Structural Health Monitoring (SHM) systems are highly expensive and always need larger installation space. Wireless sensor networks can enhance an SHM system through deployment of a scalable sensor network that consumes less space. This paper presents the results of a wireless sensor network based structural health monitoring method applied to a scaled experimental model of an offshore structure that underwent wave loading. The method determines the serviceability of the offshore structure subjected to various environmental loads. Wired and wireless sensors were installed in the model, and the response of the scaled BLSRP model under wave loading was recorded. The wireless system discussed in this study is a Raspberry Pi board with an ARM v6 processor, programmed to transmit the data acquired by the sensors to a server using a Wi-Fi adapter; the data are then hosted on a webpage. The data acquired from the wireless and wired SHM systems were compared, and the design of the wireless system was verified.
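
A minimal sketch of the kind of transmitter loop such a Raspberry Pi node might run is shown below, assuming an HTTP endpoint on the server; the URL, node name, and read_accelerometer() stub are placeholders, not the authors' acquisition code.

    import time
    import requests   # third-party HTTP client available on the Raspberry Pi

    SERVER_URL = "http://example.com/shm/upload"   # placeholder endpoint, not the authors' server

    def read_accelerometer():
        """Stub for the real sensor/ADC driver on the node."""
        return 0.0

    while True:
        sample = {"node": "deck-1", "t": time.time(), "accel_g": read_accelerometer()}
        try:
            requests.post(SERVER_URL, json=sample, timeout=2)   # push one reading over Wi-Fi
        except requests.RequestException:
            pass                                                # drop the sample if the link is down
        time.sleep(0.01)                                        # ~100 Hz sampling loop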

Keywords: Condition assessment, damage detection, structural health monitoring, structural response, wireless sensor network.

3275 Demand and Supply Chain Simulation in Telecommunication Industry by Multi-Rate Expert Systems

Authors: Andrus Pedai, Igor Astrov

Abstract:

In the modern telecommunications industry, demand & supply chain management (DSCM) needs reliable design and versatile tools to control the material flow. The objective of efficient DSCM is to reduce inventory, lead times and related costs in order to assure reliable and on-time deliveries from manufacturing units to customers. In this paper, a multi-rate expert system based methodology is proposed for developing simulation tools that enable optimal DSCM for a multi-region, high-volume, and high-complexity manufacturing environment.

Keywords: Demand & supply chain management, expert systems, inventory control, multi-rate control, performance metrics.

3274 Perceptual Framework for a Modern Left-Turn Collision Warning System

Authors: E. Dabbour, S. M. Easa

Abstract:

Most of the collision warning systems currently available in the automotive market are mainly designed to warn against imminent rear-end and lane-changing collisions. No collision warning system is commercially available to warn against imminent turning collisions at intersections, especially left-turn collisions, when a driver attempts to make a left turn at either a signalized or non-signalized intersection, conflicting with the path of other approaching vehicles traveling in the opposite-direction traffic stream. One of the major factors that lead to left-turn collisions is human error and misjudgment by the driver of the turning vehicle when perceiving the speed and acceleration of other vehicles traveling in the opposite-direction traffic stream; therefore, using a properly designed collision warning system will likely reduce, or even eliminate, this type of collision by reducing human error. This paper introduces a perceptual framework for a proposed collision warning system that can detect imminent left-turn collisions at intersections. The system utilizes a commercially available detection sensor (either a radar sensor or a laser detector) to detect approaching vehicles traveling in the opposite-direction traffic stream and calculate their speeds and acceleration rates to estimate the time-to-collision, and compares that time to the time required for the turning vehicle to clear the intersection. When calculating the time required for the turning vehicle to clear the intersection, consideration is given to the perception-reaction time of the driver of the turning vehicle, which is the time required by the driver to perceive the message given by the warning system and react to it by engaging the throttle. A regression model was developed to estimate perception-reaction time based on the age and gender of the driver of the host vehicle. The desired acceleration rate selected by the driver of the turning vehicle when making the left-turn movement is another human factor considered by the system. Another regression model was developed to estimate the acceleration rate selected by the driver of the turning vehicle based on the driver's age and gender as well as on the location and speed of the nearest approaching vehicle, along with the maximum acceleration rate provided by the mechanical characteristics of the turning vehicle. By comparing the time-to-collision with the time required for the turning vehicle to clear the intersection, the system displays a message to the driver of the turning vehicle when departure is safe. An application example is provided to illustrate the logic algorithm of the proposed system.
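
The core decision rule, comparing the approaching vehicle's time-to-collision with the turning vehicle's time to clear the intersection, can be sketched as follows under constant-acceleration kinematics; the numeric values and the simple clearance model are illustrative and stand in for the paper's regression-based estimates.

    import math

    # Illustrative decision rule under constant-acceleration kinematics; the numbers
    # and the simple clearance model stand in for the paper's regression estimates.
    def time_to_collision(gap_m, speed_mps, accel_mps2):
        """Time for the approaching vehicle to cover the gap to the conflict point."""
        if abs(accel_mps2) < 1e-6:
            return gap_m / speed_mps
        return (-speed_mps + math.sqrt(speed_mps**2 + 2.0 * accel_mps2 * gap_m)) / accel_mps2

    def time_to_clear(clear_dist_m, turn_accel_mps2, reaction_s):
        """Perception-reaction time plus time to accelerate from rest across the conflict zone."""
        return reaction_s + math.sqrt(2.0 * clear_dist_m / turn_accel_mps2)

    ttc = time_to_collision(gap_m=60.0, speed_mps=15.0, accel_mps2=0.5)
    t_clear = time_to_clear(clear_dist_m=12.0, turn_accel_mps2=2.0, reaction_s=1.2)
    print("safe to depart" if ttc > t_clear else "wait")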

Keywords: Collision warning systems, intelligent transportation systems, vehicle safety.

3273 Global Security Using Human Face Understanding under Vision Ubiquitous Architecture System

Authors: A. Jalal, S. Kim

Abstract:

Different methods based on biometric algorithms are presented for eigenface detection, including face recognition, identification, and verification. The theme of this research is to manage the critical processing stages (accuracy, speed, security, and monitoring) of face activities with the flexibility of searching and editing the secure authorized database. In this paper we implement different techniques, such as eigenface vector reduction using texture and shape vectors for complexity removal, while a density matching score with Face Boundary Fixation (FBF) extracts the most likely characteristics from the media processing contents. We examine the development and performance efficiency of the database by applying our algorithms in both recognition and detection. Our results show encouraging gains in accuracy and security, with better achievement than a number of previous approaches in all of the above processes.

Keywords: Ubiquitous architecture, verification, identification, recognition.

3272 Oxidation of Amitriptyline by Bromamine-T in Acidic Buffer Medium: A Kinetic and Mechanistic Approach

Authors: Chandrashekar, R. T. Radhika, B. M. Venkatesha, S. Ananda, Shivalingegowda, T. S. Shashikumar, H. Ramachandra

Abstract:

The kinetics of the oxidation of amitriptyline (AT) by sodium N-bromotoluene sulphonamide (CH3C6H4SO2NBrNa) has been studied in an acidic buffer medium of pH 1.2 at 303 K. The oxidation reaction of AT was followed spectrophotometrically at the maximum wavelength, 410 nm. The reaction rate shows a first-order dependence each on the concentration of AT and on the concentration of sodium N-bromotoluene sulphonamide. The reaction also shows an inverse fractional-order dependence on HCl concentration at both low and high concentrations. The dielectric constant of the solvent shows a negative effect on the rate of reaction. The addition of halide ions and the reduction product of BAT have no significant effect on the rate. The rate is unchanged with variation in the ionic strength (NaClO4) of the medium. Addition of the reaction mixture to aqueous acrylamide solution did not initiate polymerization, indicating the absence of free radical species. The stoichiometry of the reaction was found to be 1:1, and the oxidation product of AT was identified. Michaelis-Menten type kinetics has been proposed. CH3C6H4SO2NHBr has been assumed to be the reactive oxidizing species. Thermodynamic parameters were computed by studying the reactions at different temperatures. A mechanism consistent with the observed kinetics is presented.

Keywords: Amitriptyline, bromamine-T, kinetics, oxidation.

3271 Wavelet Entropy Based Algorithm for Fault Detection and Classification in FACTS Compensated Transmission Line

Authors: Amany M. El-Zonkoly, Hussein Desouki

Abstract:

Distance protection of transmission lines that include advanced flexible AC transmission system (FACTS) devices has been a very challenging task. The FACTS devices of interest in this paper are static synchronous series compensators (SSSC) and the unified power flow controller (UPFC). In this paper, a new algorithm is proposed to detect and classify the fault and identify the fault position in a transmission line with respect to a FACTS device placed at the midpoint of the transmission line. Discrete wavelet transformation and wavelet entropy calculations are used to analyze the current and voltage signals of the compensated transmission line during the fault. The proposed algorithm is very simple and accurate in fault detection and classification. A variety of fault cases and simulation results are introduced to show the effectiveness of the algorithm.
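
The wavelet-entropy feature at the heart of such an algorithm can be computed in a few lines; the sketch below uses the PyWavelets package with an arbitrary db4 wavelet, four decomposition levels, and a synthetic fault transient, none of which are taken from the paper.

    import numpy as np
    import pywt   # PyWavelets

    # Wavelet-entropy feature of a signal window; db4 and four levels are arbitrary
    # illustrative choices, not the settings used in the paper.
    def wavelet_entropy(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        p = energies / energies.sum()
        return float(-np.sum(p * np.log(p + 1e-12)))

    fs = 20_000.0
    t = np.arange(0, 0.1, 1.0 / fs)
    healthy = np.sin(2 * np.pi * 50 * t)
    faulty = healthy + 0.5 * np.sin(2 * np.pi * 750 * t) * (t > 0.05)   # synthetic fault transient
    print(wavelet_entropy(healthy), wavelet_entropy(faulty))            # entropy rises for the faulted window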

Keywords: Entropy calculation, FACTS, SSSC, UPFC, wavelet transform.

3270 An Advanced Stereo Vision Based Obstacle Detection with a Robust Shadow Removal Technique

Authors: Saeid Fazli, Hajar Mohammadi D., Payman Moallem

Abstract:

This paper presents a robust method to detect obstacles in stereo images using a shadow removal technique and color information. Stereo vision based obstacle detection is an algorithm that aims to detect and compute obstacle depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first phase detects obstacles and removes shadows, the second is matching, and the last phase is depth computation. In the first phase, we propose a robust method for detecting obstacles in stereo images using a shadow removal technique based on color information in HSI space. We use Normalized Cross Correlation (NCC) matching with a 5 × 5 window, prepare an empty matching table τ, and start growing disparity components by drawing a seed s from the seed set S, which is computed using a Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2,17]. A fast stereo matching algorithm is proposed that visits only a small fraction of the disparity space in order to find a semi-dense disparity map; it works by growing from a small set of correspondence seeds. The obstacles identified in phase one, which appear in the disparity map of phase two, enter the third phase of depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
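
The NCC matching primitive mentioned above is sketched below in NumPy for a single pixel and a 5 × 5 window, using a synthetic shifted image pair; the seed-growing scheme and the HSI-space shadow removal are not reproduced.

    import numpy as np

    # 5x5 normalized cross correlation (NCC) and a single-pixel disparity search on a
    # synthetic shifted pair; seed growing and HSI-space shadow removal are omitted.
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12))

    def best_disparity(left, right, row, col, max_disp=32, w=2):
        """Return the disparity with the highest NCC score for pixel (row, col)."""
        ref = left[row - w:row + w + 1, col - w:col + w + 1]
        scores = [ncc(ref, right[row - w:row + w + 1, col - d - w:col - d + w + 1])
                  for d in range(min(max_disp, col - w) + 1)]
        return int(np.argmax(scores))

    left = np.random.rand(64, 64)
    right = np.roll(left, -5, axis=1)            # synthetic 5-pixel horizontal shift
    print(best_disparity(left, right, 32, 40))   # -> 5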

Keywords: Obstacle detection, stereo vision, shadow removal, color, stereo matching.

3269 High Performance Liquid Chromatography Determination of Urinary Hippuric Acid and Benzoic Acid as Indices for Glue Sniffer Urine

Authors: Abdul Rahim Yacob, Mohamad Raizul Zinalibdin

Abstract:

A simple method for the simultaneous determination of hippuric acid and benzoic acid in urine using reversed-phase high performance liquid chromatography is described. Chromatography was performed on a Nova-Pak C18 (3.9 x 150 mm) column with a mobile phase of a mixed solution of methanol: water: acetic acid (20:80:0.2) and UV detection at 254 nm. The calibration curve was linear within the concentration range of 0.125 to 6.0 mg/ml for hippuric acid and benzoic acid. The recovery, accuracy and coefficient of variation were 104.54%, 0.2% and 0.2% respectively for hippuric acid, and 98.48%, 1.25% and 0.60% respectively for benzoic acid. The detection limit of the method was 0.01 ng/l for hippuric acid and 0.06 ng/l for benzoic acid. The method has been applied to the analysis of urine samples from suspected toluene abusers or glue sniffers among secondary school students in Johor Bahru.
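
A linear calibration of this kind, detector response versus standard concentration followed by read-back of an unknown, can be reproduced with a short least-squares fit; the peak areas below are invented for illustration, and only the 0.125-6.0 mg/ml range comes from the abstract.

    import numpy as np

    # Linear calibration and read-back; the peak areas are invented for illustration,
    # only the 0.125-6.0 mg/ml range comes from the abstract.
    conc = np.array([0.125, 0.5, 1.0, 2.0, 4.0, 6.0])          # standards, mg/ml
    area = np.array([10.2, 41.0, 80.5, 162.0, 321.0, 484.0])   # detector response, arbitrary units

    slope, intercept = np.polyfit(conc, area, 1)
    r = np.corrcoef(conc, area)[0, 1]
    print(f"area = {slope:.1f} * conc + {intercept:.1f}   (r^2 = {r ** 2:.4f})")

    unknown_area = 120.0
    print("estimated concentration:", (unknown_area - intercept) / slope, "mg/ml")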

Keywords: Glue sniffer, High Performance Liquid Chromatography, Hippuric Acid, Toluene, Urine.

3268 Design of Electromagnetic Drive Module for Micro-gyroscope

Authors: Nan-Chyuan Tsai, Jiun-Sheng Liou, Chih-Che Lin, Tuan Li

Abstract:

For micro-gyroscopes, the angular rate detection components have to oscillate forwards and backwards alternately. An innovative design of a micro-electromagnetic drive module is proposed to make a Π-type disc rotate reciprocally and efficiently within a certain angular interval. Twelve electromagnetic poles enclosing the thin disc are designed to provide the magnetic drive power. An isotropic etching technique is employed to fabricate the high-aspect-ratio trench, so that the contact angle of the wire against the trench can be increased and the potential defects of cavities and pores within the wire can be prevented. On the other hand, the Π-type thin disc is designed to undergo pitch motion when an angular excitation, in addition to spin, is exerted on the gyroscope. The efficacy of the micro-magnetic drive module is verified by the commercial software Ansoft Maxwell. In comparison with conventional planar windings in micro-scale systems, the magnetic drive force is increased by 150%.

Keywords: Micro-gyroscope, micro-electromagnetic, micro actuator.

3267 Real Time Detection, Tracking and Recognition of Medication Intake

Authors: H. H. Huynh, J. Meunier, J. Sequeira, M. Daniel

Abstract:

In this paper, the detection and tracking of the face, mouth, hands and medication bottles in the context of medication intake monitoring with a camera is presented. This is aimed at recognizing medication intake for the elderly in their home setting to avoid inappropriate use. Background subtraction is used to isolate moving objects, and then skin and bottle segmentations are done in the RGB normalized color space. We use a minimum displacement distance criterion to track skin color regions and the R/G ratio to detect the mouth. The color-labeled medication bottles are simply tracked based on the color space distance to their mean color vector. For the recognition of medication intake, we propose a three-level hierarchical approach, which uses activity patterns to recognize the normal medication intake activity. The proposed method was tested with three persons, with different medication intake scenarios, and gave an overall precision of over 98%.
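
The first stage of the pipeline, background subtraction followed by a skin mask in normalized RGB, is sketched below with OpenCV; the video filename and the chromaticity thresholds are placeholders, not the values used by the authors.

    import cv2

    # First-stage sketch: background subtraction plus a skin mask in normalized RGB.
    # The video filename and the chromaticity thresholds are placeholders.
    cap = cv2.VideoCapture("medication.avi")
    backsub = cv2.createBackgroundSubtractorMOG2()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        moving = backsub.apply(frame)                       # foreground (moving-object) mask
        b, g, r = cv2.split(frame.astype("float32") + 1.0)
        s = b + g + r
        rn, gn = r / s, g / s                               # normalized RGB chromaticities
        skin = ((rn > 0.36) & (rn < 0.46) & (gn > 0.28) & (gn < 0.36)).astype("uint8") * 255
        skin_moving = cv2.bitwise_and(skin, moving)
        cv2.imshow("moving skin regions", skin_moving)
        if cv2.waitKey(1) == 27:                            # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()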

Keywords: Activity recognition, background subtraction, tracking, medication intake, video surveillance

3266 Fragile Watermarking for Color Images Using Thresholding Technique

Authors: Kuo-Cheng Liu

Abstract:

In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. A thresholding technique is incorporated into the scheme such that the tampered region of the color image can be recovered with high quality while the proofing result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information. The feature information for recovery is computed by the thresholding technique. In the proofing process, we propose a dual-option parity check method to prove the validity of image blocks. In the recovery process, the feature information of each block embedded into the color image is rebuilt for high quality recovery. The simulation results show that the proposed watermarking scheme can effectively detect the tampered region with a high detection rate and can recover the tampered region with high quality.

Keywords: Thresholding technique, tamper proofing, tamper recovery.

3265 Multi-Rate Exact Discretization based on Diagonalization of a Linear System - A Multiple-Real-Eigenvalue Case

Authors: T. Sakamoto, N. Hori

Abstract:

A multi-rate discrete-time model, whose response agrees exactly with that of a continuous-time original at all sampling instants for any sampling periods, is developed for a linear system, which is assumed to have multiple real eigenvalues. The sampling rates can be chosen arbitrarily and individually, so that their ratios can even be irrational. The state space model is obtained as a combination of a linear diagonal state equation and a nonlinear output equation. Unlike the usual lifted model, the order of the proposed model is the same as the number of sampling rates, which is less than or equal to the order of the original continuous-time system. The method is based on a nonlinear variable transformation, which can be considered as a generalization of linear similarity transformation, which cannot be applied to systems with multiple eigenvalues in general. An example and its simulation result show that the proposed multi-rate model gives exact responses at all sampling instants.
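
For reference, the single-rate case that the paper generalizes is the exact discretization x((k+1)T) = expm(A·T) x(kT); the short sketch below evaluates this with SciPy's matrix exponential for a system with a repeated real eigenvalue and two sampling periods whose ratio is irrational. It illustrates only the baseline idea, not the proposed multi-rate diagonal model.

    import numpy as np
    from scipy.linalg import expm

    # Baseline single-rate exact discretization x((k+1)T) = expm(A*T) x(kT) for a
    # system with a repeated real eigenvalue; the paper's multi-rate diagonal model
    # and nonlinear output equation are not implemented here.
    A = np.array([[-1.0,  1.0],
                  [ 0.0, -1.0]])          # double eigenvalue at -1 (non-diagonalizable)
    x0 = np.array([1.0, 1.0])

    for T in (0.1, 0.1 * np.sqrt(2.0)):   # two periods with an irrational ratio
        Ad = expm(A * T)                  # exact state-transition matrix over one period
        x = x0.copy()
        for _ in range(5):
            x = Ad @ x                    # equals the continuous solution at t = 5T
        print(T, x)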

Keywords: Multi-rate discretization, linear systems, triangularization, similarity transformation, diagonalization, exponential transformation, multiple eigenvalues

3264 Analysis of a WDM System for Tanzania

Authors: Shaban Pazi, Chris Chatwin, Rupert Young, Philip Birch

Abstract:

Internet infrastructure in most places of the world has been supported by the advancement of optical fiber technology, most notably the wavelength division multiplexing (WDM) system. Optical technology by means of WDM systems has revolutionized long distance data transport and has resulted in high data capacity, cost reductions, extremely low bit error rates, and operational simplification of the overall Internet infrastructure. This paper analyses and compares the system impairments that occur at data transmission rates of 2.5 Gb/s and 10 Gb/s per wavelength channel in our proposed optical WDM system for Internet infrastructure in Tanzania. The results show that the data transmission rate of 2.5 Gb/s suffers lower system impairments than the rate of 10 Gb/s per wavelength channel and achieves sufficient system performance to provide a good Internet access service.

Keywords: Internet infrastructure, WDM system, standard single mode fibers, system impairments.

3263 Fault Detection of Broken Rotor Bars Using Stator Current Spectrum for the Direct Torque Control Induction Motor

Authors: Ridha Kechida, Arezki Menacer, Abdelhamid Benakcha

Abstract:

The numerous qualities of squirrel cage induction machines encourage their use in industry. However, various faults can occur, such as stator short-circuits and rotor failures. In this paper, we use a technique based on the spectral analysis of the stator current in order to detect a fault in the machine: broken rotor bars. The effect of the number of broken bars has been highlighted, with the machine considered under Direct Torque Control (DTC). The key to fault detection is the development of a simplified dynamic model of a squirrel cage induction motor taking the broken-bar fault into account, together with spectral analysis of the stator current (FFT).
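
A broken-bar fault classically appears as sidebands at (1 ± 2s)f around the supply frequency in the stator current spectrum; the NumPy sketch below builds a synthetic current with such sidebands and locates them with an FFT. The signal, slip, and amplitudes are illustrative, not results from the DTC simulation in the paper.

    import numpy as np

    # Synthetic stator current with (1 ± 2s)f broken-bar sidebands around the supply
    # frequency; amplitudes and slip are illustrative, not simulation results.
    fs, f_supply, slip = 10_000.0, 50.0, 0.03
    t = np.arange(0, 2.0, 1.0 / fs)
    current = np.sin(2 * np.pi * f_supply * t)
    current += 0.05 * np.sin(2 * np.pi * (1 - 2 * slip) * f_supply * t)   # lower sideband
    current += 0.05 * np.sin(2 * np.pi * (1 + 2 * slip) * f_supply * t)   # upper sideband

    spectrum = np.abs(np.fft.rfft(current * np.hanning(t.size)))
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    band = (freqs > 40) & (freqs < 60)
    for f, a in zip(freqs[band], spectrum[band]):
        if a > 0.01 * spectrum.max():
            print(f"{f:.2f} Hz  amplitude {a:.0f}")   # supply line plus the two sidebands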

Keywords: Rotor faults, diagnosis, induction motor, DTC, stator current spectrum.

3262 Segmental and Subsegmental Lung Vessel Segmentation in CTA Images

Authors: H. Özkan

Abstract:

In this paper, a novel and fast algorithm for segmental and subsegmental lung vessel segmentation is introduced using Computed Tomography Angiography (CTA) images. This process is quite important, especially for the detection of pulmonary embolism, lung nodules, and interstitial lung disease. The applied method is realized in five steps. In the first step, lung segmentation is achieved. In the second, the images are thresholded and the differences between the images are detected. In the third, the left and right lungs are combined with the differences obtained in the second step, and the Exact Lung Image (ELI) is obtained. In the fourth, the image thresholded for vessels is combined with the ELI. Lastly, identification and segmentation of the segmental and subsegmental lung vessels are carried out using the image obtained in the fourth step. The performance of the applied method is found to be satisfactory by radiologists, and it provides medically sufficient results for surgery.

Keywords: Computed tomography angiography (CTA), Computer aided detection (CAD), Lung segmentation, Lung vessel segmentation

3261 Delay and Packet Loss Analysis for Handovers between MANETs and NEMO Networks

Authors: Jirawat Thaenthong, Steven Gordon

Abstract:

MANEMO is the integration of Network Mobility (NEMO) and Mobile Ad Hoc Networks (MANET). A MANEMO node has an interface to both a MANET and a NEMO network, and therefore should choose the optimal interface for packet delivery; however, such a handover between interfaces will introduce packet loss. We define the steps necessary for a MANEMO handover, using Mobile IP and NEMO to signal the new binding to the relevant Home Agent(s). The handover steps aim to minimize packet loss by avoiding waiting for Duplicate Address Detection and Neighbour Unreachability Detection. We present expressions for handover delay and packet loss, and then use numerical examples to evaluate a MANEMO handover. The analysis shows how the packet loss depends on the level of nesting within NEMO, the delay between Home Agents and the load on the MANET, and hence can be used for developing optimal MANEMO handover algorithms.

Keywords: IP mobility, handover, MANET, network mobility

3260 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.

Keywords: Computer vision, deep learning, object detection, semiconductor.

3259 A New Preconditioned AOR Method for Z-matrices

Authors: Guangbin Wang, Ning Zhang, Fuping Tan

Abstract:

In this paper, we present a preconditioned AOR-type iterative method for solving the linear systems Ax = b, where A is a Z-matrix, and give some comparison theorems to show that the rate of convergence of the preconditioned AOR-type iterative method is faster than that of the AOR-type iterative method.
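
For readers unfamiliar with the baseline being accelerated, a plain (unpreconditioned) AOR(r, w) sweep for Ax = b with the splitting A = D - L - U is sketched below on a small Z-matrix example; the paper's preconditioner and comparison theorems are not implemented here.

    import numpy as np

    # Plain AOR(r, w) iteration for Ax = b with the splitting A = D - L - U;
    # the paper's preconditioner is not applied here.
    def aor(A, b, r=0.5, w=1.0, iters=200):
        D = np.diag(np.diag(A))
        L = -np.tril(A, -1)
        U = -np.triu(A, 1)
        M = D - r * L
        N = (1 - w) * D + (w - r) * L + w * U
        x = np.zeros_like(b)
        for _ in range(iters):
            x = np.linalg.solve(M, N @ x + w * b)
        return x

    A = np.array([[ 4.0, -1.0, -1.0],
                  [-1.0,  4.0, -1.0],
                  [-1.0, -1.0,  4.0]])       # a simple Z-matrix (non-positive off-diagonal entries)
    b = np.array([2.0, 2.0, 2.0])
    print(aor(A, b), np.linalg.solve(A, b))  # the iterate approaches the exact solution [1, 1, 1]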

Keywords: Z-matrix, AOR-type iterative method, precondition, comparison.

3258 Analysis of Linear Equalizers for Cooperative Multi-User MIMO Based Reporting System

Authors: S. Hariharan, P. Muthuchidambaranathan

Abstract:

In this paper, we consider a multi-user multiple input multiple output (MU-MIMO) based cooperative reporting system for a cognitive radio network. In the reporting network, the secondary users forward the primary user data to a common fusion center (FC). The FC is equipped with linear equalizers and an energy detector to make the decision about the spectrum. The primary user data are considered to be a digital video broadcasting - terrestrial (DVB-T) signal. The sensing channel and the reporting channel are assumed to be additive white Gaussian noise and independent identically distributed Rayleigh fading, respectively. We analyze the detection probability of the MU-MIMO system with linear equalizers and arrive at a closed-form expression for the average detection probability. The system performance is also investigated under various MIMO scenarios through Monte Carlo simulations.

Keywords: Cooperative MU-MIMO, DVB-T, Linear Equalizers.

3257 DHT-LMS Algorithm for Sensorineural Loss Patients

Authors: Sunitha S. L., V. Udayashankara

Abstract:

Hearing impairment is the number one chronic disability affecting many people in the world. Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for sensorineural loss patients. Several investigations of speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal hearing subjects. This paper describes a Discrete Hartley Transform power normalized Least Mean Square algorithm (DHT-LMS) to improve the SNR and to reduce the convergence time of the Least Mean Square (LMS) algorithm for sensorineural loss patients. The DHT transforms n real numbers to n real numbers, and has the convenient property of being its own inverse. It can be effectively used for noise cancellation with less convergence time. The simulated results show superior characteristics, improving the SNR by at least 9 dB for an input SNR of zero dB and giving a faster convergence rate (eigenvalue ratio 12) compared to the time domain method and DFT-LMS.
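
A rough transform-domain LMS noise canceller in this spirit is sketched below: the DHT of each reference block is obtained from the FFT as Re(X) - Im(X), and the weights receive a per-bin power-normalized LMS update. The block length, step size, and normalization rule are assumptions for illustration, not the authors' DHT-LMS design.

    import numpy as np

    # Transform-domain LMS noise canceller: the DHT of each reference block is taken
    # as Re(FFT) - Im(FFT) and the weights get a per-bin power-normalized update.
    # Block length, step size, and normalization are illustrative assumptions.
    def dht(x):
        X = np.fft.fft(x)
        return X.real - X.imag

    def dht_lms(noisy, reference, n_taps=32, mu=0.05):
        w = np.zeros(n_taps)
        power = np.ones(n_taps)
        out = np.zeros_like(noisy)
        for i in range(n_taps, len(noisy)):
            u = dht(reference[i - n_taps + 1:i + 1])   # transform-domain input block
            power = 0.9 * power + 0.1 * u ** 2         # running per-bin power estimate
            e = noisy[i] - w @ u                       # error = enhanced-signal estimate
            w += mu * e * u / (power + 1e-6)           # normalized weight update
            out[i] = e
        return out

    fs = 8000
    t = np.arange(fs) / fs
    speech = np.sin(2 * np.pi * 440 * t)               # toy stand-in for a speech signal
    noise = np.random.randn(fs)
    noisy = speech + 0.5 * noise
    enhanced = dht_lms(noisy, noise)
    print(np.var(noisy - speech), np.var(enhanced - speech))   # residual error drops after adaptation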

Keywords: Hearing Impairment, DHT-LMS, Convergence rate, SNR improvement.

3256 Detection of Oxidative Stress Induced by Mobile Phone Radiation in Tissues of Mice using 8-Oxo-7, 8-Dihydro-2'-Deoxyguanosine as a Biomarker

Authors: Ahmad M. Khalil, Ahmad M. Alshamali, Marwan H. Gagaa

Abstract:

We investigated oxidative DNA damage caused by radio frequency radiation using 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxodG) generated in mouse tissues after exposure to a 900 MHz mobile phone radio frequency in three independent experiments. The RF was generated by a Global System for Mobile Communication (GSM) signal generator. The radio frequency field was adjusted to 25 V/m. The whole body specific absorption rate (SAR) was 1.0 W/kg. Animals were exposed to this field for 30 min daily for 30 days. At 24 h post-exposure, blood serum, brain and spleen were removed and DNA was isolated. An enzyme-linked immunosorbent assay (ELISA) was used to measure the 8-oxodG concentration. All animals survived the whole experimental period. The body weight of the animals did not change significantly by the end of the experiment. No statistically significant differences were observed in the levels of oxidative stress. Our results are not in favor of the hypothesis that 900 MHz RF induces oxidative damage.

Keywords: Mice, Mobile phone radiation, oxidative stress, 8-oxo-7, 8-dihydro-2'-deoxyguanosine

3255 Diagnostic Evaluation of Urinary Angiogenin (ANG) and Clusterin (CLU) as Biomarker for Bladder Cancer

Authors: Marwa I. Shabayek, Ola A. Said, Hanan A. Attaia, Heba A. Awida

Abstract:

Bladder carcinoma is an important worldwide health problem. Both cystoscopy and urine cytology, used in detecting bladder cancer, suffer from drawbacks: cystoscopy is an invasive method, and urine cytology shows low sensitivity in low grade tumors. This study validates easier and less time-consuming techniques by evaluating the value of the combined use of angiogenin and clusterin in comparison and in combination with voided urine cytology in the detection of bladder cancer. The study includes malignant (bladder cancer patients, n = 50), benign (n = 20) and healthy (n = 20) groups. The studied groups were subjected to cystoscopic examination, detection of bilharzial antibodies, urine cytology, and estimation of urinary angiogenin and clusterin by ELISA. The overall sensitivity and specificity were 66% and 75% for angiogenin, 70% and 82.5% for clusterin, and 46% and 80% for voided urine cytology. The combined sensitivity of angiogenin and clusterin with urine cytology increased from 82% to 88%.

Keywords: Angiogenin, Bladder Cancer, Clusterin, Cytology.

3254 A New Approach to Design an Efficient CIC Decimator Using Signed Digit Arithmetic

Authors: Vishal Awasthi, Krishna Raj

Abstract:

Any digital processing performed on a signal with a larger Nyquist interval requires more computation than signal processing performed on a smaller Nyquist interval. Sampling rate alteration generates unwanted effects in the system, such as spectral aliasing and spectral imaging, during signal processing. A multirate, multistage implementation of a digital filter can result in significant computational savings compared with a single-rate filter designed for sample rate conversion. In this paper, we present an efficient cascaded integrator comb (CIC) decimation filter that performs fast downsampling using a signed digit adder algorithm, with compensation of the frequency droop that arises due to the aliasing effect during the decimation process. The proposed compensated CIC decimation filter structure with a hybrid signed digit (HSD) fast adder provides improved performance, increasing downsampling speed by 65.15% compared with a ripple carry adder (RCA) and reducing area and power by 57.5% and 0.01% respectively compared with signed digit (SD) adder algorithms.
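
A behavioral model of the basic N-stage CIC decimator (integrators at the input rate, comb stages after the rate change) is sketched below in NumPy; the droop compensator and the signed-digit adder hardware discussed in the paper are not modeled, and R, N, M are arbitrary example values.

    import numpy as np

    # Behavioral N-stage CIC decimator: integrators at the input rate, decimation by R,
    # then comb (differentiator) stages with delay M; droop compensation and the
    # signed-digit adder hardware are not modeled. R, N, M are example values.
    def cic_decimate(x, R=8, N=3, M=1):
        y = np.asarray(x, dtype=np.int64)
        for _ in range(N):                       # N cascaded integrators
            y = np.cumsum(y)
        y = y[::R]                               # rate change
        for _ in range(N):                       # N cascaded combs
            y = y - np.concatenate([np.zeros(M, dtype=np.int64), y[:-M]])
        return y / float((R * M) ** N)           # normalize by the DC gain (R*M)^N

    n = np.arange(4096)
    x = np.round(1000 * np.sin(2 * np.pi * 0.01 * n)).astype(np.int64)
    y = cic_decimate(x)
    print(len(x), len(y), y[50:55])              # 4096 -> 512 samples; the low-frequency tone survives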

Keywords: Sampling rate conversion, Multirate Filtering, Compensation Theory, Decimation filter, CIC filter, Redundant signed digit arithmetic, Fast adders.

3253 Construct the Fur Input Mixed Model with Activity-Based Benefit Assessment Approach of Leather Industry

Authors: M. F. Wu, F. T. Cheng

Abstract:

The leather industry is the most important traditional industry, having provided leather products to the world for thousands of years. The fierce global competitive environment and the common awareness of global carbon reduction have caused livestock supply quantities to fall, so salt and wet blue leather material is reduced and its price skyrockets significantly. Exchange rate fluctuations have led to decreasing sales revenue due to differences in export exchange rates, compressing the overall profitability of the leather industry. This paper applies an activity-based benefit assessment approach to build a fitting fur input mixed model, where the fur is wet blue, concerned with four key factors: the output rate of wet blue, the unit cost of wet blue, the yield rate, and the grade level of wet blue, in order to achieve a low-cost strategy under a given unit price of the leather product for the company. The research findings indicate that applying this model may improve the input cost structure, decrease the number of leather product inventories, and raise the competitive advantage of the enterprise in the future.

Keywords: Activity-Based Benefit Assessment Approach, Input mixed, Output Rate, Wet Blue.

3252 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. A bi-temporal change detection method is unable to indicate the continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral image data over a long period of time, assuming there is no considerable change during that time period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for estimating pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect a changed pixel from the probabilistic estimated model of the corresponding pixel. A changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. The first and foremost challenge is to obtain a sufficiently large number of datasets for multivariate distribution modelling; a large number of images are always discarded due to cloud coverage. Due to imperfect modelling there will be a high probability of false alarm. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
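
A compact pixel-wise sketch of this idea is given below: each pixel's multispectral history is summarized by a mean and covariance, and a later acquisition is scored by its squared Mahalanobis distance, which is the GLRT statistic for a mean shift under a Gaussian model when the estimated covariance is treated as known. The data are synthetic, the 8-neighborhood handling is omitted, and the threshold is hand-picked rather than derived from a target false-alarm rate.

    import numpy as np

    # Pixel-wise Gaussian sketch: fit mean/covariance per pixel from a change-free
    # image stack, then score a new acquisition by its squared Mahalanobis distance.
    # Data are synthetic; 8-neighborhood handling is omitted; threshold is hand-picked.
    def fit_pixel_models(stack):
        """stack: (T, H, W, B) multispectral series assumed change-free."""
        mean = stack.mean(axis=0)
        centered = stack - mean
        cov = np.einsum("thwi,thwj->hwij", centered, centered) / (stack.shape[0] - 1)
        return mean, cov

    def change_score(new_image, mean, cov):
        d = new_image - mean
        inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[-1]))   # regularized batched inverse
        return np.einsum("hwi,hwij,hwj->hw", d, inv, d)           # squared Mahalanobis distance

    T, H, W, B = 40, 64, 64, 4
    history = np.random.normal(0.2, 0.02, size=(T, H, W, B))      # synthetic stable series
    new = history[0].copy()
    new[20:30, 20:30] += 0.2                                      # simulated new construction
    score = change_score(new, *fit_pixel_models(history))
    changed = score > 100.0
    print(changed[25, 25], changed[5, 5])                         # altered block flagged; background (almost surely) not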

Keywords: Co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection.

3251 Sub-Image Detection Using Fast Neural Processors and Image Decomposition

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

In this paper, an approach to reduce the computation steps required by fast neural networks for the searching process is presented. The principle of the divide and conquer strategy is applied through image decomposition. Each image is divided into small sub-images, and then each one is tested separately using a fast neural network. The operation of fast neural networks is based on applying cross correlation in the frequency domain between the input image and the weights of the hidden neurons. Compared to conventional and fast neural networks, experimental results show that a speed-up ratio is achieved when applying this technique to locate human faces automatically in cluttered scenes. Furthermore, faster face detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of fast neural networks. In contrast to using only fast neural networks, the speed-up ratio increases with the size of the input image when using fast neural networks and image decomposition.
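
The core frequency-domain operation described above, cross correlation of the input image with a template (standing in here for the hidden-neuron weights), can be reproduced with plain NumPy FFTs; this sketch locates a planted patch and is not the authors' fast neural network implementation.

    import numpy as np

    # Frequency-domain cross correlation of an image with a template (standing in for
    # the hidden-neuron weights); plain NumPy FFTs, not the authors' fast neural network.
    def cross_correlate_fft(image, template):
        H, W = image.shape
        F_img = np.fft.fft2(image)
        F_tpl = np.fft.fft2(template, s=(H, W))            # zero-pad the template to image size
        return np.real(np.fft.ifft2(F_img * np.conj(F_tpl)))

    image = np.random.rand(256, 256)
    template = image[100:120, 60:80].copy()                # plant a known 20x20 patch
    scores = cross_correlate_fft(image - image.mean(), template - template.mean())
    peak = np.unravel_index(np.argmax(scores), scores.shape)
    print(peak)                                            # -> (100, 60), where the patch was taken from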

Keywords: Fast neural networks, 2D-FFT, cross correlation, image decomposition, parallel processing.

3250 Comparison between Torsional Ultrasonic Assisted Drilling and Conventional Drilling of Bone: An in vitro Study

Authors: Nikoo Soleimani

Abstract:

Background: Reducing torque during bone drilling is one of the effective factors in reaching an optimal drilling process. Methods: Fifteen bovine femurs were drilled in vitro with a drill bit with a diameter of 4 mm using two methods, torsional ultrasonic assisted drilling (T-UAD) and conventional drilling (CD), and the effects of changing the feed rate and rotational speed on the torque were compared for both methods. Results: There was no significant difference in the thrust force measured with the two methods, due to the direction of the vibrations. Results showed that using the T-UAD method for bone drilling at feed rates of 0.16, 0.24 and 0.32 mm/rev led, for all rotational speeds, to a decrease of at least 16.3% in torque compared to the CD method. Further, using T-UAD at rotational speeds of 355~1000 rpm with various feed rates resulted in a torque reduction of 16.3~50.5% compared to the CD method. Conclusions: Reducing the feed rate and increasing the rotational speed, except for the rotational speed of 500 rpm at a feed rate of 0.32 mm/rev, generally resulted in torque reduction in both methods. However, T-UAD is a more effective and desirable option for bone drilling considering its significant torque reduction.

Keywords: Torsional ultrasonic assisted drilling, torque, bone drilling, rotational speed, feed rate.
