Search results for: high noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20169

19929 Far-Field Noise Prediction of Tandem Cylinders Using Incompressible Large Eddy Simulation

Authors: Jesus Ruano, Francesc Xavier Trias, Asensi Oliva

Abstract:

A three-dimensional incompressible Large Eddy Simulation (LES) is performed to compute the hydrodynamic field around a pair of tandem cylinders. Symmetry-preserving schemes will be used during this simulation in conjunction with the Finite Volume Method (FVM) to obtain the hydrodynamic field around the selected geometry. A set of results consisting of pressure, velocity, and combinations of the two will be stored at different surfaces near the cylinders as the input for the second part of the study. A post-processing of the obtained results, based on the Ffowcs-Williams and Hawkings (FWH) equation with a Fourier transform of the acoustic sources, will be used to compute noise at several probes located far away from the region where the hydrodynamics are computed. Directivities as well as spectral profiles of the obtained acoustic field will be analyzed.
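
To make the post-processing step concrete, the sketch below propagates stored surface source spectra to a far-field probe through the free-space Green's function. It is a monopole-only toy version of the frequency-domain approach, not the full FWH surface integral (which also carries dipole loading terms); the sampling rate, panel positions, probe location and source time series are invented for illustration.

```python
import numpy as np

# Toy frequency-domain propagation of surface noise sources to a far-field
# probe. Monopole terms only -- NOT the full FWH surface integral.

c0 = 340.0                      # speed of sound [m/s]
fs = 10_000                     # sampling rate of the stored LES data [Hz]
n_t, n_src = 4096, 64           # time samples, surface panels

rng = np.random.default_rng(0)
src_xyz = rng.uniform(-0.1, 0.1, (n_src, 3))   # panel centres near cylinders
q = rng.standard_normal((n_t, n_src))          # stored source time series
probe = np.array([50.0, 0.0, 0.0])             # far-field probe

# Fourier transform of the acoustic sources (one spectrum per panel)
Q = np.fft.rfft(q, axis=0)
freqs = np.fft.rfftfreq(n_t, d=1.0 / fs)
k = 2 * np.pi * freqs / c0                     # acoustic wavenumber

# Free-space Green's function G = exp(-i k r) / (4 pi r), summed over panels
r = np.linalg.norm(probe - src_xyz, axis=1)          # (n_src,)
G = np.exp(-1j * np.outer(k, r)) / (4 * np.pi * r)   # (n_freq, n_src)
p_hat = (Q * G).sum(axis=1)                          # far-field spectrum

# Sound pressure level spectrum at the probe (re 20 uPa)
spl = 20 * np.log10(np.maximum(np.abs(p_hat), 1e-12) / 20e-6)
print(freqs[np.argmax(spl[1:]) + 1], "Hz peak")
```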

Keywords: far-field noise, Ffowcs-Williams and Hawkings, finite volume method, large eddy simulation, long-span bodies

Procedia PDF Downloads 342
19928 Large-Eddy Simulations for Aeronautical Systems

Authors: R. R. Mankbadi

Abstract:

There are several technologically important flow situations in which there is a need to control the outcome of the fluid flow. These include flow separation, drag, noise, as well as particulate separation, to list only a few. One possible approach is passive control, in which the design geometry is changed. An alternative approach is Active Flow Control (AFC) technology, in which an actuator is embedded in the flow field to change the outcome. Examples of AFC are pulsed jets, synthetic jets, plasma actuators, heating, cooling, etc. This work presents an overview of the development of this field. Some examples include airfoil noise suppression: Large-Eddy Simulation (LES) is used to simulate the effect of a synthetic jet actuator on controlling the far-field sound of a transitional airfoil. The results show considerable suppression of the noise if the synthetic jet is operated at suitable frequencies. Mixing enhancement and suppression: results will be presented to show that imposing acoustic excitations at the nozzle exit can lead to enhancement or reduction of the jet plume mixing. For vertical takeoff of aircraft and for space launch, we will present results on the effects of water injection on reducing noise and on protecting the structure and payload from fatigue damage. Other applications include airfoil-gust interaction and propulsion system optimization.

Keywords: aeroacoustics, flow control, aerodynamics, large eddy simulations

Procedia PDF Downloads 263
19927 Nonuniformity Correction Technique in Infrared Video Using Feedback Recursive Least Square Algorithm

Authors: Flavio O. Torres, Maria J. Castilla, Rodrigo A. Augsburger, Pedro I. Cachana, Katherine S. Reyes

Abstract:

In this paper, we present a scene-based nonuniformity correction method using a modified recursive least square algorithm with a feedback system on the updates. The feedback is designed to remove the impulsive noise contamination produced by the recursive least square algorithm, by monitoring the output of the proposed algorithm. The key advantage of the method is its capacity to estimate the detector parameters and then compensate for impulsive noise contamination on a frame-by-frame basis. We define the algorithm and present several experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published recursive least square-based methods. We show that the proposed method removes impulsive noise contamination from the corrected images.
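
As an illustration of the scene-based idea, the following sketch runs a per-pixel recursive least squares (RLS) update of a gain/offset correction, using a local spatial mean as the desired signal (as in classic scene-based NUC) and a simple clamp on the error as a stand-in for the paper's feedback step. The frame data and all parameters are synthetic assumptions, not the authors' algorithm.

```python
import numpy as np

# Per-pixel model: corrected = g*raw + o, with (g, o) estimated by RLS.

def local_mean(frame, k=5):
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

lam = 0.99                                  # RLS forgetting factor
H, W, n_frames = 64, 64, 50
rng = np.random.default_rng(1)
gain_true = 1 + 0.1 * rng.standard_normal((H, W))
offset_true = 5 * rng.standard_normal((H, W))

g = np.ones((H, W)); o = np.zeros((H, W))   # correction parameters
P = np.tile(np.eye(2) * 100, (H, W, 1, 1))  # per-pixel RLS covariance

for _ in range(n_frames):
    scene = rng.uniform(50, 200) * np.ones((H, W))       # varying flat scene
    raw = gain_true * scene + offset_true                # detector output
    corrected = g * raw + o
    desired = local_mean(corrected)
    err = np.clip(desired - corrected, -10, 10)          # feedback clamp
    x = np.stack([raw, np.ones_like(raw)], axis=-1)      # regressor [raw, 1]
    Px = np.einsum("hwij,hwj->hwi", P, x)
    k_gain = Px / (lam + np.einsum("hwi,hwi->hw", x, Px))[..., None]
    g += k_gain[..., 0] * err
    o += k_gain[..., 1] * err
    P = (P - np.einsum("hwi,hwj->hwij", k_gain, Px)) / lam

print("residual gain spread:", np.std(g * gain_true))
```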

Keywords: infrared focal plane arrays, infrared imaging, least mean square, nonuniformity correction

Procedia PDF Downloads 116
19926 An Energy Detection-Based Algorithm for Cooperative Spectrum Sensing in Rayleigh Fading Channel

Authors: H. Bakhshi, E. Khayyamian

Abstract:

Cognitive radios have been recognized as one of the most promising technologies for dealing with the scarcity of the radio spectrum. In cognitive radio systems, secondary users are allowed to utilize the frequency bands of primary users when the bands are idle. Hence, how to accurately detect the idle frequency bands has attracted many researchers' interest. Detection performance is sensitive to noise power and gain fluctuation. Since the signal-to-noise ratios (SNRs) between the primary user and the secondary users are not the same and change over time, SNR and noise power estimation is essential. In this paper, we present a cooperative spectrum sensing algorithm using SNR estimation to improve detection performance in realistic conditions.
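
A minimal single-band version of the underlying energy detector might look like the sketch below, with the detection threshold set from the noise power (assumed known here, whereas its estimation is the paper's focus) for a target false-alarm probability, and an OR-rule fusion over users with Rayleigh-faded SNRs. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
N, n_users, pfa = 1000, 5, 0.01
noise_var = 1.0

def sense(signal_present, snr_db):
    snr = 10 ** (snr_db / 10)
    noise = np.sqrt(noise_var) * rng.standard_normal(N)
    x = noise + (np.sqrt(snr * noise_var) * rng.standard_normal(N)
                 if signal_present else 0.0)
    energy = np.sum(x ** 2) / N
    # For large N the normalized energy under H0 is approximately
    # Normal(sigma^2, 2*sigma^4/N), which fixes the threshold for a given Pfa.
    thresh = noise_var * (1 + norm.ppf(1 - pfa) * np.sqrt(2 / N))
    return energy > thresh

# Cooperative decision: OR fusion over users whose instantaneous SNRs are
# drawn from Rayleigh fading (exponentially distributed power, mean SNR 3 dB).
trials = 2000
detections = 0
for _ in range(trials):
    snrs = rng.exponential(scale=2.0, size=n_users)
    detections += any(sense(True, 10 * np.log10(s)) for s in snrs)
print("cooperative detection probability:", detections / trials)
```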

Keywords: cognitive radio, cooperative spectrum sensing, energy detection, SNR estimation, spectrum sensing, Rayleigh fading channel

Procedia PDF Downloads 425
19925 A Novel Search Pattern for Motion Estimation in High Efficiency Video Coding

Authors: Phong Nguyen, Phap Nguyen, Thang Nguyen

Abstract:

High Efficiency Video Coding (HEVC), or the H.265 standard, fulfills the demand for high-resolution video storage and transmission, since it achieves a high compression ratio. However, it requires a huge amount of computation. Since the Motion Estimation (ME) block accounts for about 80% of the computational load of HEVC, much research has aimed at reducing the computation cost. In this paper, we propose a new algorithm to lower the number of Motion Estimation search points. The number of computed points in the search pattern drops from 77 for the diamond pattern and 81 for the square pattern to only 31. Meanwhile, the Peak Signal-to-Noise Ratio (PSNR) and bit rate are almost equal to those of the conventional patterns. The motion estimation time of the new algorithm is reduced by 68.23% and 65.83% compared to the recommended diamond and square search patterns, respectively.
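
For readers unfamiliar with search patterns, the sketch below implements a plain small-diamond block-matching search, the family of methods the proposed 31-point pattern belongs to; the paper's actual pattern is not specified in the abstract, so this is a generic stand-in on synthetic frames.

```python
import numpy as np

# Small-diamond search: evaluate SAD at the centre and its 4 diamond
# neighbours, move to the best candidate, stop when the centre wins.

def sad(block, ref, y, x):
    h, w = block.shape
    if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
        return np.inf
    return np.abs(block - ref[y:y + h, x:x + w]).sum()

def diamond_search(block, ref, y0, x0, max_steps=32):
    y, x = y0, x0
    for _ in range(max_steps):
        candidates = [(y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
        best = min(candidates, key=lambda p: sad(block, ref, *p))
        if best == (y, x):            # centre is the minimum: converged
            break
        y, x = best
    return (y - y0, x - x0)           # motion vector

rng = np.random.default_rng(3)
ref = rng.integers(0, 255, (64, 64)).astype(float)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))    # shift content by (2, -3)
mv = diamond_search(cur[16:24, 16:24], ref, 16, 16)
print("estimated motion vector:", mv)             # expect (-2, 3)
```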

Keywords: motion estimation, wide diamond, search pattern, H.265, test zone search, HM software

Procedia PDF Downloads 567
19924 Threshold Sand Detection Limits for Acoustic Monitors in Multiphase Flow

Authors: Vinod Ponnagandla, Brenton McLaury, Siamack Shirazi

Abstract:

Sand production can lead to deposition of particles or erosion. Low production rates resulting in deposition can partially clog systems and cause under-deposit corrosion. Commercially available nonintrusive acoustic sand detectors are attractive as they claim to detect sand production. Acoustic sand detectors are used during oil and gas production; however, operators often do not know the threshold detection limits of these devices. It is imperative to know the detection limits to appropriately plan for cleaning of separation equipment or to examine the risk of erosion. These monitors are based on detecting the acoustic signature of sand as the particles impact the pipe walls. The objective of this work is to determine threshold detection limits for commercially available acoustic sand monitors. The minimum threshold sand concentration that can be detected in a pipe is determined as a function of flowing gas and liquid velocities. A large-scale flow loop with a 4-inch test section is utilized. Commercially available sand monitors (ClampOn and Roxar) are evaluated for different flow regimes, sand sizes, and pipe orientations (vertical and horizontal). The manufacturers recommend that the monitors be placed on a bend to maximize the number of particle impacts, so results are shown for monitors placed at 45- and 90-degree positions on a bend. Acoustic sand monitors that clamp to the outside of the pipe are passive and listen for solid particle impact noise. The threshold sand rate is calculated by eliminating the background noise created by the flow of gas and liquid in the pipe for the various flow regimes generated in the horizontal and vertical test sections. The average sand sizes examined are 150 and 300 microns. For stratified and bubbly flows, the threshold sand rates are much higher than for the other flow regimes investigated, such as slug and annular flow. However, the background noise generated by the slug flow regime is very high and causes high uncertainty in the detection limits. The threshold sand rates for annular flow and dry gas conditions are the lowest because of the high gas velocities. The effects of monitor placement around elbows in vertical and horizontal pipes are also examined for 150-micron sand. The results show that the threshold sand rates detected in the vertical orientation are generally lower for all the flow regimes investigated.

Keywords: acoustic monitor, sand, multiphase flow, threshold

Procedia PDF Downloads 373
19923 Active Surface Tracking Algorithm for All-Fiber Common-Path Fourier-Domain Optical Coherence Tomography

Authors: Bang Young Kim, Sang Hoon Park, Chul Gyu Song

Abstract:

A conventional optical coherence tomography (OCT) system has a limited imaging depth of 1-2 mm and suffers from unwanted noise such as speckle noise. The motorized-stage-based OCT system, using a common-path Fourier-domain optical coherence tomography (CP-FD-OCT) configuration, provides enhanced imaging depth and less noise, allowing us to overcome these limitations. Using this OCT system, OCT images were obtained from an onion, and their subsurface structure was observed. As a result, the images obtained using the developed motorized-stage-based system showed greater imaging depth than the conventional system, owing to its real-time, accurate depth tracking. Consequently, the developed CP-FD-OCT system and algorithms have good potential for the further development of endoscopic OCT for microsurgery.

Keywords: common-path OCT, FD-OCT, OCT, tracking algorithm

Procedia PDF Downloads 354
19922 Methodology of Preliminary Design and Performance of an Axial-Flow Fan through CFD

Authors: Ramiro Gustavo Ramirez Camacho, Waldir De Oliveira, Eraldo Cruz Dos Santos, Edna Raimunda Da Silva, Tania Marie Arispe Angulo, Carlos Eduardo Alves Da Costa, Tânia Cristina Alves Dos Reis

Abstract:

This paper presents a preliminary design methodology for an axial fan based on lifting-wing theory and the potential vortex hypothesis. The literature combines acoustic studies and engineering expertise to model a fan with low noise. Axial fans with inadequate intake geometry often suffer from poor flow conditions at the entrance, ranging from spatially asymmetric velocity profiles to swirl fluctuating with respect to time; this produces random forces acting on the blades, which generate broadband gust noise that in most cases triggers the tonal noise. The analysis of the axial-flow fan will be conducted by solving the Navier-Stokes equations with turbulence models in steady and transient (RANS - URANS) 3-D simulations, in order to find an efficient aerodynamic design with low noise that is suitable for industrial installation. Therefore, the process will require the use of computational optimization methods, aerodynamic design methodologies, and numerical methods such as CFD (Computational Fluid Dynamics). The objective is to develop a methodology for axial fan construction, providing the design of the blade geometry and evaluating the aerodynamic performance.

Keywords: axial fan design, CFD, preliminary design, optimization

Procedia PDF Downloads 354
19921 Evaluation and Analysis of Light Emitting Diode Distribution in an Indoor Visible Light Communication

Authors: Olawale J. Olaluyi, Ayodele S. Oluwole, O. Akinsanmi, Johnson O. Adeogo

Abstract:

Visible light communication (VLC) is considered a cutting-edge technology for data transmission and illumination, since it uses less energy than radio frequency (RF) technology and offers large bandwidth, an extended lifespan, and high security. An irregular distribution of the small base stations, or LED arrays, in a room causes obscured areas and minima in the signal-to-noise ratio (SNR) and received power. In order to maximize the received power distribution and the SNR at the center of the room for an indoor VLC system, the researchers offer an innovative model for the placement of eight LED arrays in this work. We have investigated the arrangement of the LED array distribution with regard to received power in order to fill the open space in the center of the room. The suggested LED array distribution saved 36.2% of the transmitted power, according to the simulation findings. Aside from that, the entire room was evenly covered, leading to an increase in both received power and SNR.
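
The received-power maps behind such placement studies are usually computed from the standard Lambertian line-of-sight model; a sketch with assumed room, LED and photodiode parameters (four LEDs rather than the paper's eight) follows.

```python
import numpy as np

# Lambertian LOS channel gain: H = (m+1)*A/(2*pi*d^2) * cos^m(phi) * cos(psi)

half_angle = np.radians(60)                  # LED semi-angle at half power
m = -np.log(2) / np.log(np.cos(half_angle))  # Lambertian order
A_pd, fov = 1e-4, np.radians(70)             # detector area [m^2], field of view
P_led = 1.0                                  # optical power per LED [W]

room, h = 5.0, 2.5                           # room size [m], LED-to-desk height [m]
leds = [(x, y) for x in (1.25, 3.75) for y in (1.25, 3.75)]  # 4 ceiling LEDs

xx, yy = np.meshgrid(np.linspace(0, room, 100), np.linspace(0, room, 100))
P_rx = np.zeros_like(xx)
for lx, ly in leds:
    d = np.sqrt((xx - lx) ** 2 + (yy - ly) ** 2 + h ** 2)
    cos_phi = h / d                          # emission angle = incidence angle here
    gain = (m + 1) / (2 * np.pi * d ** 2) * cos_phi ** m * A_pd * cos_phi
    gain[np.arccos(cos_phi) > fov] = 0.0     # outside receiver field of view
    P_rx += P_led * gain

print("received power range: %.2e .. %.2e W" % (P_rx.min(), P_rx.max()))
```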

Keywords: visible light communication (VLC), light emitting diodes (LED), optical power distribution, signal-to-noise ratio (SNR)

Procedia PDF Downloads 48
19920 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus Using U-Net Trained with Finite-Difference Time-Domain Simulation

Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang

Abstract:

Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue displacement in response to mechanical waves. The estimated metrics on tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive, due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of the training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by Finite-Difference Time-Domain (FDTD) simulation are used for network training, and U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training has the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects were simulated, and their corresponding wave images were generated. The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error, or RMSE, divided by the averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the shear modulus estimated by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than the 78.20%±1.11% of LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method's performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method showed much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
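
The relative-error metric used for the comparison is straightforward to reproduce; a sketch on mock shear-modulus maps, with the network output replaced by a perturbed copy of the truth, is given below.

```python
import numpy as np

# Relative error = RMSE between true and estimated shear-modulus maps,
# divided by the average true shear modulus. Maps are synthetic stand-ins.

def relative_error(mu_true, mu_est):
    rmse = np.sqrt(np.mean((mu_true - mu_est) ** 2))
    return rmse / mu_true.mean()

rng = np.random.default_rng(4)
mu_true = rng.uniform(1.0, 15.0, (64, 64))               # shear modulus [kPa]
mu_est = mu_true + 0.3 * rng.standard_normal((64, 64))   # mock U-Net output
print("relative error: %.2f%%" % (100 * relative_error(mu_true, mu_est)))
```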

Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation

Procedia PDF Downloads 32
19919 Small Target Recognition Based on Trajectory Information

Authors: Saad Alkentar, Abdulkareem Assalem

Abstract:

Recognizing small targets has always posed a significant challenge in image analysis. Over long distances, the image signal-to-noise ratio tends to be low, limiting the amount of useful information available to detection systems. Consequently, visual target recognition becomes an intricate task to tackle. In this study, we introduce a Track Before Detect (TBD) approach that leverages target trajectory information (coordinates) to effectively distinguish between noise and potential targets. By reframing the problem as a multivariate time series classification, we have achieved remarkable results. Specifically, our TBD method achieves an impressive 97% accuracy in separating target signals from noise within a mere half-second time span (consisting of 10 data points). Furthermore, when classifying the identified targets into our predefined categories—airplane, drone, and bird—we achieve an outstanding classification accuracy of 96% over a more extended period of 1.5 seconds (comprising 30 data points).
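
One way to cast the described window of trajectory points as multivariate time-series classification is sketched below, with synthetic near-linear target tracks versus uncorrelated noise tracks, and an off-the-shelf random forest as an assumed classifier; the paper's actual model and features are not given in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# A window of 10 (x, y) trajectory points is flattened and classified as
# noise vs. target. Synthetic tracks stand in for real sensor data.

rng = np.random.default_rng(5)
n, steps = 500, 10

def make_track(is_target):
    if is_target:                       # near-linear motion + jitter
        v = rng.uniform(-1, 1, 2)
        pts = np.cumsum(np.tile(v, (steps, 1)), axis=0)
        pts += 0.1 * rng.standard_normal((steps, 2))
    else:                               # pure noise: uncorrelated positions
        pts = rng.standard_normal((steps, 2))
    return pts.ravel()

X = np.array([make_track(i % 2 == 0) for i in range(2 * n)])
y = np.array([i % 2 == 0 for i in range(2 * n)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:800], y[:800])
print("held-out accuracy:", clf.score(X[800:], y[800:]))
```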

Keywords: small targets, drones, trajectory information, TBD, multivariate time series

Procedia PDF Downloads 17
19918 Bit Error Rate (BER) Performance of Coherent Homodyne BPSK-OCDMA Network for Multimedia Applications

Authors: Morsy Ahmed Morsy Ismail

Abstract:

In this paper, the structure of a coherent homodyne receiver for the Binary Phase Shift Keying (BPSK) Optical Code Division Multiple Access (OCDMA) network is introduced, based on the Multi-Length Weighted Modified Prime Code (ML-WMPC), for multimedia applications. The Bit Error Rate (BER) of this homodyne detection is evaluated as a function of the number of active users and the signal-to-noise ratio for different code lengths, according to the multimedia application, such as audio, voice, and video. In addition, a Mach-Zehnder interferometer is used as an external phase modulator in the homodyne detection. Furthermore, the Multiple Access Interference (MAI) and the receiver noise in a shot-noise-limited regime are taken into consideration in the BER calculations.
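
For reference, the ideal single-user coherent BPSK error rate is BER = 0.5·erfc(√(Eb/N0)); the sketch below checks this against a Monte-Carlo simulation. The paper's BER further accounts for MAI and shot-noise-limited receiver noise, which this toy omits.

```python
import numpy as np
from scipy.special import erfc

# Monte-Carlo check of the textbook coherent BPSK bound.
rng = np.random.default_rng(6)
for snr_db in (0, 4, 8):
    snr = 10 ** (snr_db / 10)                      # snr = Eb/N0
    bits = rng.integers(0, 2, 200_000)
    symbols = 2 * bits - 1                         # BPSK: {0,1} -> {-1,+1}
    # noise variance N0/2 with Eb = 1  =>  sigma^2 = 1/(2*snr)
    rx = symbols + rng.standard_normal(bits.size) * np.sqrt(1 / (2 * snr))
    ber_sim = np.mean((rx > 0) != (bits == 1))
    ber_theory = 0.5 * erfc(np.sqrt(snr))
    print(f"{snr_db} dB: simulated {ber_sim:.4f}, theory {ber_theory:.4f}")
```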

Keywords: OCDMA networks, bit error rate, multiple access interference, binary phase-shift keying, multimedia

Procedia PDF Downloads 144
19917 Using Squeezed Vacuum States to Enhance the Sensitivity of Ground Based Gravitational Wave Interferometers beyond the Standard Quantum Limit

Authors: Giacomo Ciani

Abstract:

This paper reviews the impact of quantum noise on modern gravitational wave interferometers and explains how squeezed vacuum states are used to push the noise below the standard quantum limit. With the first detection of gravitational waves from a pair of colliding black holes in September 2015 and subsequent detections including that of gravitational waves from a pair of colliding neutron stars, the ground-based interferometric gravitational wave observatories LIGO and VIRGO have opened the era of gravitational-wave and multi-messenger astronomy. Improving the sensitivity of the detectors is of paramount importance to increase the number and quality of the detections, fully exploiting this new information channel about the universe. Although still in the commissioning phase and not at nominal sensitivity, these interferometers are designed to be ultimately limited by a combination of shot noise and quantum radiation pressure noise, which define an envelope known as the standard quantum limit. Despite the name, this limit can be beaten with the use of advanced quantum measurement techniques, with the use of squeezed vacuum states being currently the most mature and promising. Different strategies for implementation of the technology in the large-scale detectors, in both their frequency-independent and frequency-dependent variations, are presented, together with an analysis of the main technological issues and expected sensitivity gain.

Keywords: gravitational waves, interferometers, squeezed vacuum, standard quantum limit

Procedia PDF Downloads 127
19916 Far-Field Acoustic Prediction of a Supersonic Expanding Jet Using Large Eddy Simulation

Authors: Jesus Ruano, Asensi Oliva

Abstract:

The hydrodynamic field generated by a jet expansion is computed via three-dimensional compressible Large Eddy Simulation (LES). The Finite Volume Method (FVM) will be the discretization used during this simulation, together with hybrid schemes based on Kinetic Energy Preserving (KEP) schemes and upwind Godunov-based schemes with instability detectors. Velocity and pressure fields will be stored at different surfaces near the jet, but far enough to enclose all the fluctuations, in order to use them as input for the acoustic solver. The acoustic field is obtained in the far-field region at several locations by means of a hybrid method based on the Ffowcs-Williams and Hawkings (FWH) equation. This equation will be formulated in the spectral domain, via a Fourier transform of the acoustic sources, which are modeled from the results of the initial simulation. The obtained results will allow the study of the broadband noise generated as well as the sound directivities.

Keywords: far-field noise, Ffowcs-Williams and Hawkings, finite volume method, large eddy simulation, jet noise

Procedia PDF Downloads 274
19915 An Experimental Investigation of the Cognitive Noise Influence on the Bistable Visual Perception

Authors: Alexander E. Hramov, Vadim V. Grubov, Alexey A. Koronovskii, Maria K. Kurovskaya, Anastasija E. Runnova

Abstract:

The perception of visual signals in the brain was among the first issues discussed in terms of multistability, which has been introduced to provide mechanisms for information processing in biological neural systems. In this work, the influence of cognitive noise on the visual perception of multistable pictures has been investigated. The study includes an experiment with the bistable Necker cube illusion and the theoretical background explaining the obtained experimental results. In our experiments, Necker cubes with different wireframe contrast were demonstrated repeatedly to different people, and the probability of the choice of one of the cube's projections was calculated for each picture. The Necker cube was placed in the middle of a computer screen as black lines on a white background. The contrast of the three middle lines centered in the left middle corner was used as the control parameter. Between two successive demonstrations of Necker cubes, another picture was shown to distract attention and to make the perception of the next Necker cube more independent from the previous one. Eleven subjects, male and female, aged 20 through 45, were studied. The choice of the Necker cube projection was detected with the electroencephalograph recorder Encephalan-EEGR-19/26 (Medicom MTD). To treat the experimental results, we carried out a theoretical consideration using the simplest double-well potential model with the presence of noise, which leads to the Fokker-Planck equation for the probability density of the stochastic process. For the first time, an analytical solution for the probability of the selection of one of the Necker cube projections for different values of wireframe contrast has been obtained. Furthermore, using the results of the experimental measurements and the method of least squares, we have calculated the value of the parameter corresponding to the cognitive noise of the person being studied. The range of cognitive noise parameter values for the studied subjects turned out to be [0.08; 0.55]. It should be noted that the experimental results have good reproducibility; the same person studied repeatedly on another day produces very similar data with very close levels of cognitive noise. We found an excellent agreement between the analytically deduced probability and the results obtained in the experiment. A good qualitative agreement between theoretical and experimental results indicates that even such a simple model allows simulating brain cognitive dynamics and estimating an important cognitive characteristic of the brain, such as brain noise.
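
The double-well picture can be illustrated with a short Langevin simulation: a state variable hops between two wells (the two cube interpretations) under noise of intensity D, with a bias term playing the role of wireframe contrast. The paper instead derives the occupation probability analytically from the stationary Fokker-Planck solution; all values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n_steps = 1e-3, 200_000
asymmetry, D = 0.2, 0.5            # contrast-like bias, cognitive noise level

# V(x) = x^4/4 - x^2/2 - asymmetry*x, so V'(x) = x^3 - x - asymmetry
x, time_right = 0.0, 0
for _ in range(n_steps):           # Euler-Maruyama integration
    x += -(x ** 3 - x - asymmetry) * dt \
         + np.sqrt(2 * D * dt) * rng.standard_normal()
    time_right += x > 0

# Fraction of time spent in the right-hand well ~ probability of reporting
# that projection.
print("P(right interpretation) ~", time_right / n_steps)
```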

Keywords: bistability, brain, noise, perception, stochastic processes

Procedia PDF Downloads 422
19914 Noise Mitigation Techniques to Minimize Electromagnetic Interference/Electrostatic Discharge Effects for the Lunar Mission Spacecraft

Authors: Vabya Kumar Pandit, Mudit Mittal, N. Prahlad Rao, Ramnath Babu

Abstract:

TeamIndus is the only Indian team competing for the Google Lunar XPRIZE (GLXP). The GLXP is a global competition challenging private entities to soft-land a rover on the moon, travel a minimum of 500 meters, and transmit high-definition images and videos to Earth. Towards this goal, the TeamIndus strategy is to design and develop a lunar lander that will deliver a rover onto the surface of the moon to accomplish the GLXP mission objectives. This paper showcases the various system-level noise control techniques adopted by the Electrical Distribution System (EDS) to achieve the required Electromagnetic Compatibility (EMC) of the spacecraft. The design guidelines followed to control Electromagnetic Interference through proper electronic package design, grounding, shielding, filtering, and cable routing within the stipulated mass budget are explained. The paper also deals with the challenges of achieving electromagnetic cleanliness in the presence of various Commercial Off-The-Shelf (COTS) and in-house developed components. The methods of minimizing Electrostatic Discharge (ESD) by identifying the potential noise sources and the areas susceptible to charge accumulation, and the methodology to prevent arcing inside the spacecraft, are explained. The paper then provides the EMC requirements matrix derived from the mission requirements to meet the overall electromagnetic compatibility of the spacecraft.

Keywords: electromagnetic compatibility, electrostatic discharge, electrical distribution systems, grounding schemes, light weight harnessing

Procedia PDF Downloads 270
19913 Performance Evaluation of Various Segmentation Techniques on MRI of Brain Tissue

Authors: U.V. Suryawanshi, S.S. Chowhan, U.V. Kulkarni

Abstract:

Accuracy of segmentation methods is of great importance in brain image analysis. Tissue classification in Magnetic Resonance brain images (MRI) is an important issue in the analysis of several brain dementias. This paper portrays the performance of segmentation techniques that are used on brain MRI. A large variety of algorithms for segmentation of brain MRI has been developed. The objective of this paper is to perform a segmentation process on MR images of the human brain using Fuzzy c-means (FCM), Kernel-based Fuzzy c-means clustering (KFCM), Spatial Fuzzy c-means (SFCM), and Improved Fuzzy c-means (IFCM). The review covers imaging modalities, MRI, and methods for noise reduction and segmentation. All methods are applied to MRI brain images degraded by salt-and-pepper noise; the results demonstrate that the IFCM algorithm is more robust to noise than the standard FCM algorithm. We conclude with a discussion on the trend of future research in brain segmentation and changing norms in IFCM for better results.
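
As a baseline for the four compared methods, a minimal fuzzy c-means on 1-D intensities is sketched below; KFCM, SFCM and IFCM modify the distance term or add kernel/spatial information on top of this scheme. The intensity data are synthetic.

```python
import numpy as np

# Standard alternating FCM updates: weighted centroids from memberships,
# then memberships u_ik proportional to d_ik^(-2/(m-1)).

def fcm(x, c=3, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)        # memberships (N, c)
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)           # weighted centroids
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(8)
# three "tissue" intensity clusters, e.g. CSF / grey matter / white matter
x = np.concatenate([rng.normal(mu, 5, 300) for mu in (40, 110, 180)])
centers, u = fcm(x)
print("estimated cluster centers:", np.sort(centers))
```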

Keywords: image segmentation, preprocessing, MRI, FCM, KFCM, SFCM, IFCM

Procedia PDF Downloads 297
19912 Correlation between Speech Emotion Recognition Deep Learning Models and Noises

Authors: Leah Lee

Abstract:

This paper examines the correlation between deep learning models and emotions under noise, to see whether or not noise masks emotions. The deep learning models used are plain convolutional neural networks (CNN), auto-encoder, long short-term memory (LSTM), and Visual Geometry Group-16 (VGG-16). The emotion datasets used are the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), Toronto Emotional Speech Set (TESS), and Surrey Audio-Visual Expressed Emotion (SAVEE). To make the data four times bigger, the audio files are augmented with offset, stretch, and pitch transformations. From the augmented datasets, five different features are extracted as inputs to the models. There are eight different emotions to be classified. The noise variations are white noise, dog barking, and cough sounds, and the variation in the signal-to-noise ratio (SNR) is 0, 20, and 40 dB. In summary, per deep learning model, nine different sets with noise and SNR variations, plus the augmented audio files without any noise, are used in the experiment. To compare the results of the deep learning models, the accuracy and receiver operating characteristic (ROC) are checked.
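
The noise conditions above come down to mixing a noise clip into each utterance at a prescribed SNR; a sketch of that preprocessing step on mock arrays (real experiments would load WAV files) is shown below.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Loop/trim the noise to the speech length, then scale it so that
    # 10*log10(P_speech / P_noise) equals the requested SNR.
    reps = int(np.ceil(speech.size / noise.size))
    noise = np.tile(noise, reps)[: speech.size]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(9)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # 1 s mock speech
noise = rng.standard_normal(5000)                            # mock noise clip
for snr in (0, 20, 40):
    mixed = mix_at_snr(speech, noise, snr)
    got = 10 * np.log10(np.mean(speech**2) / np.mean((mixed - speech)**2))
    print(f"target {snr} dB -> achieved {got:.1f} dB")
```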

Keywords: auto-encoder, convolutional neural networks, long short-term memory, speech emotion recognition, visual geometry group-16

Procedia PDF Downloads 41
19911 Analysis of the Acoustic Performance of Vertical Internal Seals with PET Wool as per NBR 15.575-4 in the Green Towers Building-DF

Authors: Lucas Aerre, Wallesson Faria, Roberto Pimentel, Juliana Santos

Abstract:

Noise is an extremely disturbing and irritating element in the lives of people and organizations; its consequences are strongly connected with human health as well as with financial and economic aspects. In order to improve the efficiency of buildings in Brazil in general, a performance standard, NBR 15.575, was created, under which all buildings are viewed in a more systemic and particular way while following the requirements of the standard. The acoustic performance present in these buildings is one such requirement. Based on this, the present work was elaborated with the objective of evaluating, through acoustic measurements, the acoustic performance of vertical internal seals under the incidence of airborne noise in a building in the city of Brasilia-DF. A short theoretical basis is given, and the measurement procedures are then described through the control method established by the standard, with the results evaluated according to its parameters. The measurement performed between rooms of the same unit presented a standardized sound pressure level difference (DnT,w) equal to 40 dB, thus being classified within the minimum performance required by the standard in question.

Keywords: airborne noise, performance standard, soundproofing, vertical seal

Procedia PDF Downloads 269
19910 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUID) and has considerable advantages over electrocardiography (ECG). Extracting the Magnetocardiography (MCG) signal, which is buried in noise, is difficult and is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve it. However, the traditional TV regularization method tends to cause a step effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement of this method is mainly divided into three parts. First, high-order TV is applied to reduce the step effect, with the corresponding second-derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
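
The classical first-order version of this scheme is compact enough to sketch: minimize 0.5·||y − x||² + λ·||Dx||₁ by majorization-minimization, iterating x = y − Dᵀz with z solved from (diag(|Dx|)/λ + DDᵀ)z = Dy. The paper's improvement swaps D for a second-derivative matrix and makes the constraint adaptive near detected peaks; the sketch below shows only the plain first-order case on a synthetic pulse train.

```python
import numpy as np

rng = np.random.default_rng(10)
n, lam, eps = 500, 2.0, 1e-10
t = np.arange(n)
clean = np.exp(-0.5 * ((t[:, None] - [100, 250, 400]) / 8.0) ** 2).sum(axis=1)
y = clean + 0.2 * rng.standard_normal(n)     # "MCG-like" pulses in noise

D = np.diff(np.eye(n), axis=0)               # (n-1, n) first-difference matrix
DDt = D @ D.T
Dy = D @ y

x = y.copy()
for _ in range(50):                          # MM iterations
    Lam = np.diag(np.abs(D @ x) + eps)       # majorizer built at current x
    z = np.linalg.solve(Lam / lam + DDt, Dy)
    x = y - D.T @ z

print("noise std before/after: %.3f / %.3f"
      % (np.std(y - clean), np.std(x - clean)))
```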

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 123
19909 Video Compression Using Contourlet Transform

Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan

Abstract:

Video compression is used for channels with limited bandwidth and for storage devices with limited capacity. One of the most popular approaches in video compression is the usage of different transforms. The discrete cosine transform is one of the video compression methods, but it has problems such as blocking, noise artifacts, and high distortion, which adversely affect the compression ratio. The wavelet transform is another approach that balances compression and quality better than cosine transforms, but its ability to represent curve curvature is limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the newly proposed method, we used the contourlet transform for video image compression. The contourlet transform can preserve details of the image better than the previous transforms because it is multi-scale and oriented, and it can represent discontinuities such as edges. In this approach we lose less data than in previous approaches. The contourlet transform captures discrete-space structure and is well suited to representing two-dimensional smooth images; it produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that for the majority of the images, the mean square error and peak signal-to-noise ratio of the new contourlet-based method are improved compared to the wavelet transform, but for most of the images, these parameters are better for the cosine transform than for the contourlet-based method.

Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform

Procedia PDF Downloads 414
19908 The High Precision of Magnetic Detection with Microwave Modulation in Solid Spin Assembly of NV Centres in Diamond

Authors: Zongmin Ma, Shaowen Zhang, Yueping Fu, Jun Tang, Yunbo Shi, Jun Liu

Abstract:

Solid-state quantum sensors are attracting wide interest because of their high sensitivity at room temperature. In particular, the spin properties of nitrogen-vacancy (NV) color centres in diamond make them outstanding sensors of magnetic fields, electric fields, and temperature under ambient conditions. Much of the work on NV magnetic sensing has aimed at achieving the smallest volume and high sensitivity of NV-ensemble-based magnetometry, using micro-cavities, the light-trapping diamond waveguide (LTDW), and nano-cantilevers combined with MEMS (Micro-Electro-Mechanical System) techniques. Recently, a method using frequency-modulated microwaves with continuous optical excitation has been proposed to achieve a high sensitivity of 6 μT/√Hz using individual NV centres at the nanoscale. In this research, we built an experiment to measure static magnetic fields through continuous-wave optical excitation with frequency-modulated microwaves, under continuous illumination with green pump light at 532 nm, using a bulk diamond sample with a high density of NV centres (1 ppm). The output of the confocal microscope was collected by an objective (NA = 0.7) and detected by a high-sensitivity photodetector. We designed a microstrip antenna for uniform and efficient excitation, well coupled to the spin ensembles at 2.87 GHz, the zero-field splitting of the NV centres. The photodetector signal was sent to a lock-in amplifier (LIA), with the modulation reference generated from the microwave source through an IQ mixer. The detected signal is received by the photodetector, and the reference signal enters the lock-in amplifier to realize the open-loop detection of the NV atomic magnetometer. We can plot ODMR spectra under continuous-wave (CW) microwaves. Due to the high sensitivity of the lock-in amplifier, the minimum detectable voltage can be measured, and the minimum detectable frequency shift can be obtained from this minimum voltage and the slope. The magnetic field sensitivity can be derived from η = δB·√T, corresponding to a 10 nT minimum detectable shift in the magnetic field. Further, frequency analysis of the noise in the system indicates that at 10 Hz the sensitivity is less than 10 nT/√Hz.

Keywords: nitrogen-vacancy (NV) centers, frequency-modulated microwaves, magnetic field sensitivity, noise density

Procedia PDF Downloads 411
19907 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations

Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh

Abstract:

Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and whole-body scans of 15 clinical patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR), and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and tumor SUV were measured from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced an approximately equal noise level to OSEM at an increased acquisition duration (5 min/bp). For the β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62%, compared with OSEM at a 5 min/bp duration. In both phantom and clinical data, an increase in the β-value translates into a decrease in SUV. The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, due to the noise reduction being greater than the SUV reduction at the highest β-value. In comparing BSREM with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked above OSEM in all qualitative features. Conclusions: The BSREM algorithm using more iterations leads to better quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten the acquisition time.

Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy

Procedia PDF Downloads 69
19906 Subjective versus Objective Assessment for Magnetic Resonance (MR) Images

Authors: Heshalini Rajagopal, Li Sze Chow, Raveendran Paramesran

Abstract:

Magnetic Resonance Imaging (MRI) is one of the most important medical imaging modalities. Subjective assessment of image quality is regarded as the gold standard for evaluating MR images. In this study, a database of 210 MR images, containing ten reference images and 200 distorted images, is presented. The reference images were distorted with four types of distortions: Rician noise, Gaussian white noise, Gaussian blur, and DCT compression. The 210 images were assessed by ten subjects. The subjective scores are presented as Difference Mean Opinion Scores (DMOS). The DMOS values were compared with four FR-IQA metrics. We used the Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) to validate the DMOS values. The high PLCC and SROCC values show that the DMOS values are close to the objective FR-IQA metrics.
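
Computing the two validation statistics is a one-liner each with scipy; the sketch below uses mock DMOS and metric values in place of the study's 200 scored images.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(11)
dmos = rng.uniform(20, 80, 200)                       # subjective scores
metric = 100 - dmos + 5 * rng.standard_normal(200)    # mock objective metric

plcc, _ = pearsonr(dmos, metric)                      # linear correlation
srocc, _ = spearmanr(dmos, metric)                    # rank-order correlation
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```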

Keywords: magnetic resonance (MR) images, difference mean opinion score (DMOS), full reference image quality assessment (FR-IQA)

Procedia PDF Downloads 430
19905 Analysis of Noise Environment and Acoustics Material in Residential Building

Authors: Heruanda Alviana Giska Barabah, Hilda Rasnia Hapsari

Abstract:

Acoustic phenomena create conditions of acoustic interpretation that describe the characteristics of an environment. In urban areas, the tendency towards heterogeneous and simultaneous human activity forms a soundscape that is different from other regions; one characteristic of urban areas that shapes the soundscape is the presence of vertical housing, or residential buildings. Activities both within the building and in the surrounding environment produce a soundscape with certain characteristics. The acoustic comfort of residential buildings becomes an important aspect, and this demand leads building features to become more diverse. Initial steps in mapping the acoustic conditions in a soundscape are important, as this is the method to determine uncomfortable conditions. Noise generated by road traffic, railways, and planes is an important consideration, especially for urban people; therefore the proper design of the building becomes very important as an effort to provide appropriate acoustic comfort. In this paper, the authors developed noise mapping at the location of a residential building. Mapping was done by taking measurement points referring to the noise sources. The mapping results became the basis for modelling the acoustic waves interacting with the building model. Material selection was done based on a literature study and modelling simulation using Insul, considering the absorption coefficient and Sound Transmission Class. The analysis of acoustic rays uses the ray tracing method in the COMSOL simulation software, which can show the movement of acoustic rays and their interaction with a boundary. The results of this study can be used to consider boundary materials in residential buildings as well as to improve the acoustic quality in the acoustic zones that are formed.

Keywords: residential building, noise, absorption coefficient, sound transmission class, ray tracing

Procedia PDF Downloads 227
19904 Low-Noise Amplifier Design for Improvement of Communication Range for Wake-Up Receiver Based Wireless Sensor Network Application

Authors: Ilef Ketata, Mohamed Khalil Baazaoui, Robert Fromm, Ahmad Fakhfakh, Faouzi Derbel

Abstract:

The integration of wireless communication, e.g., in real- or quasi-real-time applications, involves many challenges, such as energy consumption, communication range, latency, quality of service, and reliability. To minimize latency without increasing energy consumption, wake-up receiver (WuRx) nodes have been introduced in recent works. Low-noise amplifiers (LNAs) are introduced to improve WuRx sensitivity but increase the supply current severely. Different WuRx approaches exist, with always-on, power-gated, or duty-cycled receiver designs. This paper presents a comparative study for improving the communication range and decreasing the energy consumption of wireless sensor nodes.
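
How a sensitivity improvement from an LNA translates into range can be estimated with a log-distance path-loss model; the sketch below uses assumed link-budget numbers, not measurements from the paper.

```python
# Log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0).
# Inverting it gives the maximum distance the link budget allows.

def max_range(tx_power_dbm, sensitivity_dbm, pl_d0=40.0, d0=1.0, n=2.7):
    budget = tx_power_dbm - sensitivity_dbm          # allowable path loss [dB]
    return d0 * 10 ** ((budget - pl_d0) / (10 * n))  # max distance [m]

tx = 0.0                                             # 0 dBm transmitter
for sens in (-50, -60, -70):                         # WuRx sensitivity [dBm]
    print(f"sensitivity {sens} dBm -> range ~ {max_range(tx, sens):.0f} m")
```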

Keywords: wireless sensor network, wake-up receiver, duty-cycled, low-noise amplifier, envelope detector, range study

Procedia PDF Downloads 80
19903 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of the window size in each direction takes only a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 images in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than with existing techniques.
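
The additive aggregation trick is easiest to see in 1-D: the per-window sums needed for a least-squares slope and its residual variance merge pairwise, so each doubling of the window costs one pass, and each point reports the slope from the scale with minimum residual variance. A sketch under these assumptions (the paper works on 2-D DEMs with 2x2 blocks) follows.

```python
import numpy as np

def window_stats(sx, sy, sxx, sxy, syy, n):
    # least-squares slope and residual variance from accumulated sums
    denom = n * sxx - sx ** 2
    slope = (n * sxy - sx * sy) / denom
    intercept = (sy - slope * sx) / n
    resid = syy - slope * sxy - intercept * sy      # sum of squared residuals
    return slope, resid / n

rng = np.random.default_rng(12)
n = 1024
x = np.arange(n, dtype=float)
z = 0.05 * x + rng.standard_normal(n)               # "elevation": trend + noise

# per-point initial sums (window size 1)
sums = np.stack([x, z, x * x, x * z, z * z, np.ones(n)])

best_slope = np.zeros(n); best_var = np.full(n, np.inf); size = 1
while sums.shape[1] >= 2:
    sums = sums[:, 0::2] + sums[:, 1::2]            # merge adjacent windows
    size *= 2
    if size < 4:
        continue                                    # need >2 points for variance
    slope, var = window_stats(*sums)
    upd = np.repeat(var, size)[:n] < best_var
    best_slope[upd] = np.repeat(slope, size)[:n][upd]
    best_var[upd] = np.repeat(var, size)[:n][upd]

print("mean selected slope: %.3f (true 0.05)" % best_slope.mean())
```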

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 100
19902 A Three-Modal Authentication Method for Industrial Robots

Authors: Luo Jiaoyang, Yu Hongyang

Abstract:

In this paper, we explore a method that can be used in the working scene of intelligent industrial robots to confirm the identity of operators, ensuring that the robot executes instructions in a sufficiently safe environment. This approach uses three information modalities: visible light, depth, and sound. We explored a variety of fusion modes for the three modalities and finally used a joint feature learning method to improve the performance of the model in the presence of noise compared with the single-modal case; even at the maximum noise level in the experiment, it maintains an accuracy rate of more than 90%.

Keywords: multimodal, Kinect, machine learning, distance image

Procedia PDF Downloads 53
19901 Performance Evaluation of a Very High-Resolution Satellite Telescope

Authors: Walid A. Attia, Taher M. Bazan, Fawzy Eltohamy, Mahmoud Fathy

Abstract:

System performance evaluation is an essential stage in the design of high-resolution satellite telescopes prior to the development process. In this paper, a system performance evaluation of a very high-resolution satellite telescope is investigated. The evaluated system has a Korsch optical scheme design. This design has been discussed in another paper with respect to the three-mirror anastigmat (TMA) scheme design, and the former configuration showed better results. The investigated system is based on the Korsch optical design integrated with a time-delay and integration charge-coupled device (TDI-CCD) sensor to achieve a ground sampling distance (GSD) of 25 cm. The key performance metrics considered are the spatial resolution, the signal-to-noise ratio (SNR), and the total modulation transfer function (MTF) of the system. In addition, the national image interpretability rating scale (NIIRS) metric is assessed to predict the image quality according to the modified general image quality equation (GIQE). Based on the orbital, optical, and detector parameters, the estimated GSD is found to be 25 cm. The SNR has been analyzed at different illumination conditions of target albedos and sun and sensor angles. The system MTF has been computed including diffraction, aberration, optical manufacturing, smear, and detector sampling as the main contributors in evaluating the MTF. Finally, the system performance evaluation results show that the computed MTF value is around 0.08 at the Nyquist frequency, the SNR value is 130 for an albedo of 0.2 at nadir viewing angles, and the predicted NIIRS is on the order of 6.5, which implies very good system image quality.
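
For orientation, one published form of the image quality equation, GIQE-4, is evaluated below with illustrative inputs; the paper uses a modified GIQE, so neither the coefficients nor the resulting NIIRS should be read as reproducing its 6.5 figure.

```python
import math

# GIQE-4: NIIRS = 10.251 - a*log10(GSD_in) + b*log10(RER)
#                 - 0.656*H - 0.344*(G/SNR), with GSD in inches.

def giqe4(gsd_m, rer, h=1.0, g=1.0, snr=130.0):
    gsd_in = gsd_m / 0.0254
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * (g / snr))

# 25 cm GSD as in the evaluated telescope; the RER value is an assumption,
# since the abstract reports MTF at Nyquist rather than RER.
print("NIIRS ~ %.1f" % giqe4(0.25, rer=0.5))
```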

Keywords: modulation transfer function, national image interpretability rating scale, signal to noise ratio, satellite telescope performance evaluation

Procedia PDF Downloads 352
19900 External Noise Distillation in Quantum Holography with Undetected Light

Authors: Sebastian Töpfer, Jorge Fuenzalida, Marta Gilaberte Basset, Juan P. Torres, Markus Gräfe

Abstract:

This work presents an experimental and theoretical study of the noise resilience of quantum holography with undetected photons. Quantum imaging has become an important research topic in recent years, after its first publication in 2014. Following this research, advances towards different spectral ranges in detection and different optical geometries have been made. In particular, interest in the field of near-infrared to mid-infrared measurements has developed because of the unique characteristic that allows a sample to be probed with photons of a different wavelength than the photons arriving at the detector. This promising effect can be used for medical applications, to measure in the so-called molecular fingerprint region while using broadly available detectors for the visible spectral range. Further advances in quantum imaging methods have been made through new measurement and detection schemes. One of these is quantum holography with undetected light. It combines digital phase-shifting holography with quantum imaging to extend the obtainable sample information, by measuring not only the object transmission but also its influence on the phase shift experienced by the transmitted light. This work presents extended research on the quantum holography with undetected light scheme regarding the influence of external noise. It is shown experimentally and theoretically that the sample information can still be retrieved at noise levels 250 times higher than the signal level, because the information is carried by the interferometric pattern. A detailed theoretical explanation is also provided.

Keywords: distillation, quantum holography, quantum imaging, quantum metrology

Procedia PDF Downloads 36