Search results for: noise attenuation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1334

764 Seismic Hazard Study and Strong Ground Motion in Southwest Alborz, Iran

Authors: Fereshteh Pourmohammad, Mehdi Zare

Abstract:

The city of Karaj, with a population of 2.2 million (est. 2022), is located in the southwest of the Alborz Mountain Belt in northern Iran. The region is known to be a highly active seismic zone. This study focuses on geological and seismological analyses within a radius of 200 km from the center of Karaj. Five seismic zones and seven linear seismic sources were identified, and the maximum magnitude was calculated for each seismic zone. Since the seismicity catalog is incomplete, a parametric-historic algorithm together with the Kijko and Sellevoll (1992) method was used to calculate the seismicity parameters, and the return periods and recurrence probabilities of earthquake magnitudes in each zone were obtained for the 475-year return period. According to the calculations, the highest and lowest earthquake magnitudes of 7.6 and 6.2 were obtained in Zones 1 and 4, respectively. This result is new and extremely important from the viewpoint of earthquake risk in a densely populated city. Using attenuation relationships, the maximum strong horizontal ground motion was calculated as 0.42g for the 475-year return period and 0.70g for the 2475-year return period, and the maximum strong vertical ground motion as 0.25g for the 475-year return period and 0.44g for the 2475-year return period. These acceleration levels are new and are about 25% higher than the values presented in the Iranian building code.
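For context, the link between a return period and the probability of exceedance over a design life follows from the standard Poisson occurrence assumption; the short sketch below is illustrative only (not the authors' hazard code) and reproduces the familiar correspondence of a 475-year return period with roughly a 10% chance of exceedance in 50 years.

```python
# Probability of exceedance over a design life for a given return period,
# using the standard Poisson assumption for earthquake occurrence.
import math

def exceedance_probability(return_period_years, design_life_years):
    return 1.0 - math.exp(-design_life_years / return_period_years)

for T in (475, 2475):
    p = exceedance_probability(T, 50)
    print(f"{T}-year return period -> {p:.1%} probability of exceedance in 50 years")
```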

Keywords: seismic zones, ground motion, return period, hazard analysis

Procedia PDF Downloads 95
763 Dynamic Behavior of Brain Tissue under Transient Loading

Authors: Y. J. Zhou, G. Lu

Abstract:

In this paper, an analytical study is made of the dynamic behavior of human brain tissue under transient loading. In this analytical model, the Mooney-Rivlin constitutive law is coupled with visco-elastic constitutive equations to take into account both the nonlinear and time-dependent mechanical behavior of brain tissue. Five ordinary differential equations relating the five main parameters (radial stress, circumferential stress, radial strain, circumferential strain, and particle velocity) are obtained by using the method of characteristics to transform five partial differential equations (two continuity equations, one equation of motion, and two constitutive equations). Analytical expressions for the attenuation properties of spherical waves in brain tissue are derived. Numerical results are obtained from the five ordinary differential equations. The mechanical responses (particle velocity and stress) of the brain are compared at different radii (5, 6, 10, 15 and 25 mm) under four different input conditions. The results illustrate that the type of loading curve of the particle velocity significantly influences the stress in brain tissue. Understanding this influence of the input loading curves can be used to reduce potential injury to the brain under head impact by designing protective structures that control the loading curve type.

Keywords: analytical method, mechanical responses, spherical wave propagation, traumatic brain injury

Procedia PDF Downloads 265
762 Construction of Large-Scale Biological Networks from Microarrays

Authors: Fadhl Alakwaa

Abstract:

One of the central goals of systems biology is understanding gene-gene interactions. Hence, gene regulatory networks (GRN) need to be constructed for understanding disease ontology and for reducing the cost of drug development. To construct gene regulatory networks from gene expression data, we need to overcome many challenges such as data denoising and dimensionality reduction. In this paper, we develop an integrated system to reduce the data dimension and remove the noise. The network generated by our system was validated via available interaction databases and was compared to previous methods. The results demonstrate the performance of the proposed method.
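As an illustration of the kind of pipeline described (not the authors' actual system), the sketch below combines a simple SVD-based denoising step with a correlation-threshold network built from a placeholder expression matrix; the matrix sizes and threshold are assumptions for the example only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder microarray matrix: 200 genes x 40 samples (log-expression)
expr = rng.normal(size=(200, 40))

# Simple denoising by truncated SVD: keep the leading components and
# reconstruct, discarding low-variance directions treated as noise.
u, s, vt = np.linalg.svd(expr - expr.mean(axis=1, keepdims=True), full_matrices=False)
k = 10
denoised = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

# Co-expression network: connect gene pairs whose absolute Pearson
# correlation exceeds a threshold.
corr = np.corrcoef(denoised)
adjacency = (np.abs(corr) > 0.8) & ~np.eye(len(corr), dtype=bool)
print("edges:", adjacency.sum() // 2)
```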

Keywords: gene regulatory network, biclustering, denoising, system biology

Procedia PDF Downloads 231
761 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique

Authors: Ahmet Karagoz, Irfan Karagoz

Abstract:

Synthetic Aperture Radar (SAR) is a radar mechanism that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, day or night. In this study, SAR images of military vehicles with different azimuth and depression angles are pre-processed in the first stage. The main purpose here is to reduce the high speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filter are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from other regions containing unnecessary information. The target region is segmented by setting the brightest 20% of pixels to 255 and all other pixels to 0. In addition, a segmentation comparison is performed using appropriate parameters of the statistical region merging algorithm. In the feature extraction step, the feature vectors belonging to the vehicles are obtained by using Gabor filters with different orientation, frequency and angle values. A number of Gabor filters are created by changing the orientation, frequency and angle parameters of the Gabor filters to extract important features of the images that form the distinctive parts. Finally, the images are classified by the sparse representation method. In this study, l₁-norm analysis of the sparse representation is used. A joint database of the feature vectors generated from the target images of the military vehicle types is assembled and transformed into matrix form. To classify the vehicles in a similar way, the test image of each vehicle is converted to vector form and l₁-norm analysis of the sparse representation method is applied using the existing database matrix. As a result, correct recognition is performed by matching the target images of military vehicles with the test images by means of the sparse representation method. A classification accuracy of 97% is obtained for SAR images of different military vehicle types.
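The segmentation rule described above (brightest 20% of pixels set to 255, the rest to 0) can be expressed directly with a percentile threshold; the sketch below is a minimal illustration on synthetic speckle-like data, not the authors' implementation.

```python
import numpy as np

def segment_brightest(image, keep_fraction=0.20):
    """Binarize a despeckled SAR image: the brightest `keep_fraction` of
    pixels are set to 255 (target region), all others to 0."""
    threshold = np.percentile(image, 100 * (1 - keep_fraction))
    return np.where(image >= threshold, 255, 0).astype(np.uint8)

# Example on a synthetic image with a bright patch standing in for the vehicle
rng = np.random.default_rng(0)
img = rng.gamma(shape=1.0, scale=30.0, size=(128, 128))   # speckle-like background
img[40:60, 50:80] += 150.0                                # bright target region
mask = segment_brightest(img)
print("target pixels:", int((mask == 255).sum()))
```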

Keywords: automatic target recognition, sparse representation, image classification, SAR images

Procedia PDF Downloads 360
760 Global Stability of Nonlinear Itô Equations and N. V. Azbelev's W-Method

Authors: Arcady Ponosov, Ramazan Kadiev

Abstract:

The work studies the global moment stability of solutions of systems of nonlinear differential Itô equations with delays. A modified regularization method (W-method) for the analysis of various types of stability of such systems, based on the choice of the auxiliary equations and applications of the theory of positive invertible matrices, is proposed and justified. Development of this method for deterministic functional differential equations is due to N.V. Azbelev and his students. Sufficient conditions for the moment stability of solutions in terms of the coefficients for sufficiently general as well as specific classes of Itô equations are given.

Keywords: asymptotic stability, delay equations, operator methods, stochastic noise

Procedia PDF Downloads 220
759 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to this type of flow is investigated, at various Reynolds numbers corresponding to different flow regimes. Reports of this measuring technique being used in separated flows are scarce in the literature. Besides, most of the studies evaluating the Reynolds number effect in separated flows rely on numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow, in a recirculating laboratory flume, at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out, for the measured variables, using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. Besides, the errors obtained in the uncertainty analysis were, in general, relatively low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and a good agreement was found. The ADV technique proved to be able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
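The Vectrino noise-level check described above rests on the power spectral density of the stream-wise velocity; a minimal sketch of such an estimate using Welch's method is given below, with an assumed sampling rate and a synthetic velocity record standing in for the ADV data.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 200.0                      # assumed Vectrino sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
# Placeholder stream-wise velocity record: mean flow + low-frequency
# turbulence-like fluctuation + white instrument noise
u = 0.3 + 0.02 * np.sin(2 * np.pi * 2.0 * t) + 0.01 * rng.standard_normal(t.size)

f, psd = welch(u - u.mean(), fs=fs, nperseg=2048)
noise_floor = psd[f > 50].mean()    # flat high-frequency region ~ Doppler noise level
print(f"estimated noise floor: {noise_floor:.2e} (m/s)^2/Hz")
```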

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 141
758 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Also, investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, improved the convergence rate and system performance. The system also uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
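A minimal sketch of the GMM-based speaker-modeling stage is given below; it assumes placeholder MFCC frames and uses scikit-learn's EM fitting (with k-means initialization standing in for LBG), so it illustrates the idea rather than the authors' MATLAB implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder 13-dimensional MFCC frames per speaker; in practice these come
# from the front end (VAD + MFCC extraction) described in the paper.
train_mfcc = {spk: rng.normal(spk, 1.0, size=(500, 13)) for spk in range(3)}

# One GMM per speaker, fitted with EM.
models = {spk: GaussianMixture(n_components=8, covariance_type="diag",
                               max_iter=200, random_state=0).fit(feats)
          for spk, feats in train_mfcc.items()}

test_frames = rng.normal(1, 1.0, size=(200, 13))   # frames from an unknown speaker
scores = {spk: gmm.score(test_frames) for spk, gmm in models.items()}
print("identified speaker:", max(scores, key=scores.get))
```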

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 306
757 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach

Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz

Abstract:

Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm labor is defined as the beginning of regular uterine contractions, dilation and cervical effacement between 23 and 36 gestation weeks. To the authors' best knowledge, the factors that determine the beginning of birth are not yet completely defined. In particular, the influence of sleep habits on preterm births is weakly studied. The aim of this study is to develop a model to predict the factors affecting premature delivery in pregnancy, based on the above potential risk factors, including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained by a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal and sleep habits). The study was approved by the Research Ethics Committee of the Principado of Asturias (Spain). An observational, retrospective and descriptive study was performed with 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term deliveries. The chi-square test was applied for qualitative variables and the t-test for quantitative variables. Statistically significant differences (p < 0.05) between preterm and term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes and interruptions during the night. In addition to the statistical analysis, machine learning methods were tested to look for a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest and tree bag models were analysed using the caret R package. Cross-validation with 10 folds and parameter tuning to optimize the methods were applied. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with accuracy 0.91, sensitivity 0.93, specificity 0.89 and precision 0.91. Some well-known preterm birth factors were identified: cervix dilation, maternal BMI, premature rupture of membranes and nuchal translucency analysis in the first trimester. The model also identifies other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Monday to Friday, or a change of sleeping habits reflected in the number of hours, in the depth of sleep or in the lighting of the room. IF dilation <= 2.95 AND usage of electronic devices before sleeping from Monday to Friday = YES AND change of sleeping habits = YES THEN preterm is one of the predicting rules obtained by C5.0. In this work a model for predicting preterm births is developed. It is based on machine learning together with noise reduction techniques. The method maximizing the performance is the one selected. This model shows the influence of variables related to sleep habits on preterm prediction.
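The authors used C5.0 and related tree models in R's caret package; as a rough analogue, the sketch below runs a decision tree with 10-fold cross-validation on placeholder data in Python, purely to illustrate the evaluation protocol, not the study's actual features or results.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
# Placeholder design matrix: maternal, fetal and sleep-habit variables,
# with a binary preterm label; the real study uses 481 births.
X = rng.normal(size=(481, 20))
y = (rng.random(481) < 0.2).astype(int)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```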

Keywords: machine learning, noise reduction, preterm birth, sleep habit

Procedia PDF Downloads 143
756 Effect of Nanoscale Bismuth Oxide on Radiation Shielding and Interaction Characteristics of Polyvinyl Alcohol-Based Polymer for Medical Apron Design

Authors: E. O. Echeweozo

Abstract:

This study evaluated the radiation shielding and interaction characteristics of polyvinyl alcohol (PVA) polymer separately doped with 10% and 20% nanoscale Bi₂O₃ for medical apron design and for shielding special electronic installations. The prepared samples were characterized by scanning electron microscopy (SEM) and energy dispersive spectrometry (EDS). The EDS results showed that carbon (C), oxygen (O), and bismuth (Bi) were the predominant elements present in the prepared samples. The SEM results displayed surface irregularities due to a special bonding matrix between PVA and Bi₂O₃. The mass attenuation coefficient (MAC), effective atomic number (Zeff), half value layer (HVL), mean free path (MFP), fast neutron removal cross-section (ΣR), total mass stopping power (TSP), and photon range (R) of the prepared polymer composites (PV-1Bi and PV-2Bi) were evaluated with the XCOM and PHITS computer programs. Results showed that the MAC of the prepared polymer samples was significantly higher than that of some recently developed composites at 0.662 MeV and 1.25 MeV gamma energy. Therefore, polyvinyl alcohol (PVA) polymer doped with Bi₂O₃ should be deployed in medical apron design and in shielding special electronic installations where flexibility and high adhesion ability are crucial.
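For reference, the half value layer and mean free path follow directly from the linear attenuation coefficient μ = MAC × density; the sketch below uses hypothetical values for the composite, not results from the paper.

```python
import numpy as np

# Hypothetical values for illustration only: MAC in cm^2/g and density in g/cm^3
mac = 0.085          # mass attenuation coefficient at a given photon energy
density = 1.35       # composite density

mu = mac * density               # linear attenuation coefficient, 1/cm
hvl = np.log(2) / mu             # half value layer, cm
mfp = 1.0 / mu                   # mean free path, cm
transmission = np.exp(-mu * 0.5) # fraction transmitted through 0.5 cm

print(f"HVL = {hvl:.2f} cm, MFP = {mfp:.2f} cm, T(0.5 cm) = {transmission:.3f}")
```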

Keywords: polyvinyl alcohol (PVA), polymer composite, gamma-rays, charged particles

Procedia PDF Downloads 13
755 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences; two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, and the method still produces ~93% accuracy at 0 dB SNR.
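A minimal matching-pursuit routine of the kind used for the atomic decomposition is sketched below; the dictionary here is a random unit-norm matrix standing in for the learned Gabor-like atoms, so it illustrates the greedy selection step only.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=20):
    """Greedy matching pursuit: iteratively pick the dictionary atom most
    correlated with the residual and record its index and weight."""
    residual = signal.astype(float).copy()
    weights = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual          # atoms are unit-norm columns
        k = int(np.argmax(np.abs(correlations)))
        weights[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return weights, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 512))
D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms
x = D[:, [10, 99, 300]] @ np.array([2.0, -1.5, 0.8])    # sparse synthetic signal
w, r = matching_pursuit(x, D, n_atoms=10)
print("largest-weight atoms:", np.argsort(-np.abs(w))[:3])
```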

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 287
754 Distribution of Current Emerging Contaminants in South African Surface and Groundwater

Authors: Jou-An Chen, Julio Castillo, Errol Duncan Cason, Gabre Kemp, Leana Esterhuizen, Angel Valverde Portal, Esta Van Heerden

Abstract:

Emerging contaminants (EC) such as pharmaceutical and personal care products have been accumulating for years in water bodies all over the world. However, very little is known about the occurrence, levels, and effects of ECs in South African water resources. This study provides an initial assessment of the distribution of eight ECs (acetaminophen, atrazine, terbuthylazine, carbamazepine, phenytoin, sulfamethoxazole, nevirapine and fluconazole) in fifteen water sources from the Free State and Eastern Cape provinces of South Africa. Overall, the physicochemical conditions were different in surface and groundwater samples, with concentrations of several elements such as B, Ca, Mg, Na, NO3, and TDS being statistically higher in groundwater. In contrast, EC levels, quantified in ng/mL using LC/MS/ESI, were much lower in groundwater samples. The ECs with higher contamination levels were carbamazepine, sulfamethoxazole, nevirapine, and terbuthylazine, while the most widespread were sulfamethoxazole and fluconazole, detected in all surface and groundwater samples. Fecal coliform and E. coli tests indicated that surface water was more contaminated than groundwater. Microbial communities, assessed using NGS, were dominated by the phyla Proteobacteria and Bacteroidetes in both surface and groundwater. Actinobacteria, Planctomycetes, and Cyanobacteria were more dominant in surface water, while Verrucomicrobia were overrepresented in groundwater. In conclusion, EC contamination is closely associated with human activities (human wastes). The microbial diversity identified suggests possible biodegradation processes.

Keywords: emerging contaminants, EC, personal care products, pharmaceuticals, natural attenuation process

Procedia PDF Downloads 211
753 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before its transmission over wireless channels. The High Efficiency Video Codec (HEVC, also called H.265) is the latest state-of-the-art video coding standard, developed by the joint effort of the ITU-T and ISO/IEC teams. HEVC targets high-resolution videos such as 4K or 8K that can fulfil recent demands for video services. The compression ratio achieved by HEVC is about twice that of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra- and inter-prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video decreases drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimized code rate of the FEC such that the quality of the reconstructed video is maximized over wireless channels.
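As a simple illustration of the transmission chain (without HEVC or FEC), the sketch below simulates Gray-mapped 4-QAM (QPSK) over an AWGN channel and estimates the bit error rate at a chosen SNR; the bit count and SNR are example values only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000
snr_db = 5.0                           # Es/N0 in dB

bits = rng.integers(0, 2, n_bits)
# Gray-mapped 4-QAM (QPSK): one bit on I, one bit on Q, unit average symbol energy
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

snr_lin = 10 ** (snr_db / 10)
noise_std = np.sqrt(1 / (2 * snr_lin))  # per real dimension
rx = symbols + noise_std * (rng.standard_normal(symbols.size)
                            + 1j * rng.standard_normal(symbols.size))

bits_hat = np.empty(n_bits, dtype=int)
bits_hat[0::2] = (rx.real > 0).astype(int)   # hard-decision demodulation
bits_hat[1::2] = (rx.imag > 0).astype(int)
ber = np.mean(bits != bits_hat)
print(f"BER at Es/N0 = {snr_db} dB: {ber:.4e}")
```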

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 146
752 Sigma-Delta A/D Converters: A Case Study

Authors: Thiago Brito Bezerra, Mauro Lopes de Freitas, Waldir Sabino da Silva Júnior

Abstract:

Sigma-Delta A/D converters have been proposed as a practical option for A/D conversion at high rates because of their simplicity and robustness to circuit imperfections, and also because traditional converters are more difficult to implement in VLSI technology. Conventional conversion methods need precise analog components in their filters and conversion circuits and are more vulnerable to noise and interference. This paper aims to analyze the architecture, function and application of Sigma-Delta analog-to-digital (A/D) converters to overcome these difficulties, showing some simulations using Simulink and Multisim.
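A first-order sigma-delta modulator can be captured in a few lines: an integrator accumulates the difference between the input and the fed-back 1-bit quantizer output, and the oversampled bitstream reproduces the input level after averaging or low-pass filtering. The sketch below is a behavioural illustration only, not the Simulink/Multisim models used in the paper.

```python
import numpy as np

def first_order_sigma_delta(x):
    """First-order sigma-delta modulator: integrate the error between the
    input and the fed-back 1-bit quantizer output."""
    integrator = 0.0
    feedback = 0.0
    bitstream = np.empty_like(x)
    for n, sample in enumerate(x):
        integrator += sample - feedback                # accumulate quantization error
        feedback = 1.0 if integrator >= 0 else -1.0    # 1-bit quantizer
        bitstream[n] = feedback
    return bitstream

# A constant input: after averaging, the 1-bit stream reproduces the input level
x = np.full(4096, 0.3)
bits = first_order_sigma_delta(x)
print("input level 0.3, bitstream average:", bits.mean())
```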

Keywords: analysis, oversampling modulator, A/D converters, sigma-delta

Procedia PDF Downloads 322
751 Study of Adaptive Filtering Algorithms and the Equalization of Radio Mobile Channel

Authors: Said Elkassimi, Said Safi, B. Manaut

Abstract:

This paper presents a study of three algorithms: channel equalization with the ZF and MMSE criteria, applied to the Bran A channel, and the adaptive filtering algorithms LMS and RLS used to estimate the parameters of the equalizer filter, i.e., to track the channel estimate, reflect the temporal variations of the channel, and reduce the error in the transmitted signal. The performance of the equalizer with the ZF and MMSE criteria is evaluated in the noise-free case, together with a comparison of the performance of the LMS and RLS algorithms.
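A minimal LMS update of the kind used to adapt the equalizer taps is sketched below; the step size, filter length, channel and signals are placeholders, not the Bran A configuration studied in the paper.

```python
import numpy as np

def lms_equalizer(x, d, n_taps=8, mu=0.01):
    """LMS adaptive filter: adapt the tap weights so the filter output on the
    received signal x tracks the desired (training) signal d."""
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    e = np.zeros(len(d))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        y[n] = w @ u
        e[n] = d[n] - y[n]
        w += mu * e[n] * u                  # stochastic-gradient tap update
    return w, e

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=5000)            # transmitted training symbols
x = np.convolve(s, [1.0, 0.4, 0.2])[:5000]        # simple dispersive channel (placeholder)
x += 0.05 * rng.standard_normal(5000)             # channel noise
w, e = lms_equalizer(x, s, n_taps=8, mu=0.01)
print("final mean-squared error:", np.mean(e[-500:] ** 2))
```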

Keywords: adaptive filtering, equalizer, LMS, RLS, Bran A, Proakis (B), MMSE, ZF

Procedia PDF Downloads 309
750 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented from the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by making modifications to these early models or from physical principles. Overall, these models have had varying levels of accuracy, but, in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, recent research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
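The authors implement stepwise regression in R; as a language-neutral illustration of the same idea, the sketch below performs greedy forward selection by AIC using ordinary least squares on placeholder data (the predictors and response are synthetic, not the wind-tunnel PSD data).

```python
import numpy as np

def aic(y, X):
    """AIC of an ordinary least-squares fit with an intercept."""
    Xc = np.column_stack([np.ones(len(y)), X]) if X.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    rss = np.sum((y - Xc @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * Xc.shape[1]

def forward_stepwise(y, X):
    """Greedy forward selection: repeatedly add the predictor that most reduces AIC."""
    selected, remaining = [], list(range(X.shape[1]))
    best = aic(y, X[:, []])
    improved = True
    while improved and remaining:
        improved = False
        score, j = min((aic(y, X[:, selected + [j]]), j) for j in remaining)
        if score < best:
            best, improved = score, True
            selected.append(j)
            remaining.remove(j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                         # candidate predictors (placeholder)
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.5 * rng.standard_normal(200)
print("selected predictors:", forward_stepwise(y, X))
```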

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 133
749 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children

Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura

Abstract:

Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. The diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided sigmatism diagnosis tools. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic sibilant pronunciation evaluation. The study covers the analysis of various speech signal measures, including features proposed in the literature for the description of normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM) and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with the selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed for the comparison of sigmatism and normative pronunciation. Statistically significant differences between children divided into these two groups are obtained for most of the proposed features at p < 0.05. All spectral moments and fricative formants appear to be distinctive between pathology and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools. The correspondences found between phoneme feature values and an expert evaluation of pronunciation correctness encourage the involvement of speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
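The group comparison described above maps directly onto scipy's Mann-Whitney U test; the sketch below uses synthetic feature values for the two groups purely to show the call and the significance check, not the study's measured features.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Placeholder feature values (e.g., one spectral moment) for the two groups;
# the real study uses measures extracted from /ʃ/ and /ʒ/ segments.
normative = rng.normal(5.0, 1.0, 120)
pathological = rng.normal(5.6, 1.2, 104)

stat, p = mannwhitneyu(normative, pathological, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```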

Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification

Procedia PDF Downloads 297
748 Inverse Scattering of Two-Dimensional Objects Using an Enhancement Method

Authors: A.R. Eskandari, M.R. Eskandari

Abstract:

A 2D complete identification algorithm for dielectric and multiple objects immersed in air is presented. The employed technique consists of initially retrieving the shape and position of the scattering object using a linear sampling method and then determining the electric permittivity and conductivity of the scatterer using adjoint sensitivity analysis. This inversion algorithm results in high computational speed and efficiency, and it can be generalized for any scatterer structure. Also, this method is robust with respect to noise. The numerical results clearly show that this hybrid approach provides accurate reconstructions of various objects.

Keywords: inverse scattering, microwave imaging, two-dimensional objects, Linear Sampling Method (LSM)

Procedia PDF Downloads 381
747 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI

Authors: Rutej R. Mehta, Michael A. Chappell

Abstract:

Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects; the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT – and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5s-1.3s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error with fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25s even when dispersion effects were ignored. Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion with dispersion levels similar to those that have been observed in literature.
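For reference, the white-paper (Alsop et al., 2015) single-compartment quantification that the study evaluates can be written compactly as below; the function and the numerical inputs are hypothetical example values used only to show the calculation, not data or code from this work.

```python
import numpy as np

def wp_cbf(delta_m, m0, pld, tau, t1b=1.65, alpha=0.85, lam=0.9):
    """Single-compartment white-paper quantification for PCASL (parameter
    values as recommended in Alsop et al., 2015); returns CBF in ml/100g/min."""
    num = 6000.0 * lam * delta_m * np.exp(pld / t1b)
    den = 2.0 * alpha * t1b * m0 * (1.0 - np.exp(-tau / t1b))
    return num / den

# Hypothetical example values: perfusion-weighted difference signal, M0 image
# value, post-labelling delay and label duration in seconds.
print(f"CBF = {wp_cbf(delta_m=8.0, m0=1000.0, pld=1.8, tau=1.8):.1f} ml/100g/min")
```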

Keywords: arterial spin labelling, dispersion, MRI, perfusion

Procedia PDF Downloads 366
746 Affordable and Environmental Friendly Small Commuter Aircraft Improving European Mobility

Authors: Diego Giuseppe Romano, Gianvito Apuleo, Jiri Duda

Abstract:

Mobility is one of the most important societal needs for amusement, business activities and health. Thus, transport needs are continuously increasing, with the consequent increase in traffic congestion and pollution. The aeronautic effort aims at smarter use of infrastructures and at introducing greener concepts. A possible solution to address the abovementioned topics is the development of the Small Air Transport (SAT) system, able to guarantee operability from today's underused airfields in an affordable and green way, helping meanwhile to reduce travel time, too. In the framework of Horizon 2020, the EU (European Union) has funded the Clean Sky 2 SAT TA (Transverse Activity) initiative to address market innovations able to reduce SAT operational cost and environmental impact, while ensuring good levels of operational safety. Nowadays, most of the key technologies to improve passenger comfort and to reduce community noise, DOC (Direct Operating Costs) and pilot workload for SAT have reached an intermediate level of maturity, TRL (Technology Readiness Level) 3/4. Thus, the key technologies must be developed, validated and integrated on dedicated ground and flying aircraft demonstrators to reach higher TRL levels (5/6). Particularly, SAT TA focuses on the integration at aircraft level of the following technologies [1]: 1) low-cost composite wing box and engine nacelle using OoA (Out of Autoclave) technology, LRI (Liquid Resin Infusion) and advanced automation processes; 2) innovative high-lift devices, allowing aircraft operations from short airfields (< 800 m); 3) affordable small-aircraft manufacturing of metallic fuselage using FSW (Friction Stir Welding) and LMD (Laser Metal Deposition); 4) affordable fly-by-wire architecture for small aircraft (CS23 certification rules); 5) more-electric systems replacing pneumatic and hydraulic systems (high-voltage EPGDS -Electrical Power Generation and Distribution System-, hybrid de-ice system, landing gear and brakes); 6) advanced avionics for small aircraft, reducing pilot workload; 7) advanced cabin comfort with new interior materials and more comfortable seats; 8) new generation of turboprop engine with reduced fuel consumption, emissions, noise and maintenance costs for 19-seat aircraft; 9) alternative diesel engine for 9-seat commuter aircraft. To address the abovementioned market innovations, two different platforms have been designed: the Reference and the Green aircraft. The Reference aircraft is a virtual aircraft designed considering 2014 technologies with an existing engine assuring the requested take-off power; the Green aircraft is designed integrating the technologies addressed in Clean Sky 2. Preliminary integration of the proposed technologies shows an encouraging reduction of emissions and operational costs of small aircraft: about 20% CO2 reduction, about 24% NOx reduction, about 10 dB(A) noise reduction at the measurement point and about 25% DOC reduction. A detailed description of the performed studies, analyses and validations for each technology, as well as the expected benefit at aircraft level, is reported in the present paper.

Keywords: affordable, European, green, mobility, technologies development, travel time reduction

Procedia PDF Downloads 95
745 Bioremediation Potential in Recalcitrant Areas of PCE in Alluvial Fan Deposits

Authors: J. Herrero, D. Puigserver, I. Nijenhuis, K. Kuntze, J. M. Carmona

Abstract:

In the transition zone between aquifers and basal aquitards, perchloroethene (PCE) pools are more recalcitrant than those elsewhere in the aquifer. Although biodegradation of chloroethenes occurs in this zone, it is a slow process and a remediation strategy is needed. The aim of this study is to demonstrate that a combined strategy of biostimulation and in situ chemical reduction (ISCR) is more efficient than the two strategies applied separately. Four different microcosm experiments with sediment and groundwater from a selected field site, where an aged pool exists at the bottom of a transition zone, were designed under i) natural conditions, ii) biostimulation with lactic acid, iii) ISCR with zero-valent iron (ZVI), and iv) a combined strategy with lactic acid and ZVI. Biotic and abiotic dehalogenation, terminal electron acceptor processes and the evolution of microbial communities were determined for each experiment. The main results were: i) reductive dehalogenation of PCE pools occurs under sulfate-reducing conditions; ii) biostimulation with lactic acid supports more pronounced reductive dehalogenation of PCE and trichloroethene (TCE), but results in an accumulation of 1,2-cis-dichloroethene (cDCE); iii) ISCR with ZVI produces a sustained dehalogenation of PCE and its metabolites; iv) the combined strategy of biostimulation and ISCR results in a fast dehalogenation of PCE and TCE and a sustained dehalogenation of cDCE. These findings suggest that biostimulation and ISCR with ZVI are the most suitable strategies for a complete reductive dehalogenation of PCE pools in the transition zone and, further, to enable the dissolution of dense non-aqueous phase liquids.

Keywords: aged PCE-pool, anaerobic microcosm experiment, biostimulation, in situ chemical reduction, natural attenuation

Procedia PDF Downloads 191
744 A Horn Antenna Loaded with FSS of Crossed Dipoles

Authors: Ibrahim Mostafa El-Mongy, Abdelmegid Allam

Abstract:

In this article, an analysis and investigation of the effect of loading a horn antenna with a finite-size frequency selective surface (FSS) of crossed dipoles is presented. The FSS is fabricated on Rogers RO4350 (lossy) with relative permittivity 3.33, thickness 1.524 mm and loss tangent 0.004. Basically, it is applied for filtering and for minimizing interference and noise in the desired band. The filtering is carried out using a finite FSS of crossed dipoles with overall dimensions of 98×58 mm². The filtering is shown by limiting the transmission bandwidth from 4 GHz (8–12 GHz) to 0.25 GHz (10.75–11 GHz). The antenna is simulated using CST MWS and measured using a network analyzer. There is good agreement between the simulated and measured results.

Keywords: antenna, filtenna, frequency selective surface (FSS), horn

Procedia PDF Downloads 454
743 Limits of Phase Modulated Frequency Shifted Holographic Vibrometry at Low Amplitudes of Vibrations

Authors: Pavel Psota, Vít Lédl, Jan Václavík, Roman Doleček, Pavel Mokrý, Petr Vojtíšek

Abstract:

This paper presents advanced time average digital holography by means of frequency shift and phase modulation. This technique can measure amplitudes of vibrations at ultimate dynamic range while the amplitude distribution evaluation is done independently in every pixel. The main focus of the paper is to gain insight into behavior of the method at low amplitudes of vibrations. In order to reach that, a set of experiments was performed. Results of the experiments together with novel noise suppression show the limit of the method to be below 0.1 nm.

Keywords: acousto-optical modulator, digital holography, low amplitudes, vibrometry

Procedia PDF Downloads 407
742 Effects of Hierarchy on Poisson’s Ratio and Phononic Bandgaps of Two-Dimensional Honeycomb Structures

Authors: Davood Mousanezhad, Ashkan Vaziri

Abstract:

As a traditional cellular structure, hexagonal honeycombs are known for their high strength-to-weight ratio. Here, we introduce a class of fractal-appearing hierarchical metamaterials by replacing the vertices of the original non-hierarchical hexagonal grid with smaller hexagons and iterating this process to achieve higher levels of hierarchy. It has been recently shown that the isotropic in-plane Young's modulus of this hierarchical structure at small deformations becomes 25 times greater than its regular counterpart with the same mass. At large deformations, we find that hierarchy-dependent elastic buckling introduced at relatively early stages of deformation decreases the value of Poisson's ratio as the structure is compressed uniaxially leading to auxeticity (i.e., negative Poisson's ratio) in subsequent stages of deformation. We also show that the topological hierarchical architecture and instability-induced pattern transformations of the structure under compression can be effectively used to tune the propagation of elastic waves within the structure. We find that the hierarchy tends to shift the existing phononic bandgaps (defined as frequency ranges of strong wave attenuation) to lower frequencies while opening up new bandgaps. Deformation is also demonstrated as another mechanism for opening more bandgaps in hierarchical structures. The results provide new insights into the role of structural organization and hierarchy in regulating mechanical properties of materials at both the static and dynamic regimes.

Keywords: cellular structures, honeycombs, hierarchical structures, metamaterials, multifunctional structures, phononic crystals, auxetic structures

Procedia PDF Downloads 345
741 Analysis and Experimental Research on the Influence of Lubricating Oil on the Transmission Efficiency of New Energy Vehicle Gearbox

Authors: Chen Yong, Bi Wangyang, Zang Libin, Li Jinkai, Cheng Xiaowei, Liu Jinmin, Yu Miao

Abstract:

New energy vehicle power transmission systems continue to develop in the direction of high torque, high speed, and high efficiency. The cooling and lubrication of the motor and the transmission system are integrated, and new requirements are placed on the lubricants for the transmission system. The effects of traditional lubricants and special lubricants for new energy vehicles on transmission efficiency were studied through experiments and simulation methods. A mathematical model of the transmission efficiency of the lubricating oil in the gearbox was established. The power loss of each part was analyzed according to the working conditions. The influence of speed and of the characteristics of different lubricating oil products on the oil churning power loss was discussed, taking into account the minimum oil film thickness required for the gearbox service life. The accuracy of the calculation results was verified by the transmission efficiency test conducted on the two-motor integrated test bench. The results show that the efficiency first increases and then decreases with increasing speed, and decreases with increasing kinematic viscosity of the lubricant. An increase in kinematic viscosity amplifies the transmission power loss caused by high speed. Special lubricants for new energy vehicles show less attenuation of transmission efficiency in the range above mid-speed. The research results provide a theoretical basis and guidance for the evaluation and selection of gearbox lubricants for new energy vehicles in terms of transmission efficiency.

Keywords: new energy vehicles, lubricants, transmission efficiency, kinematic viscosity, test and simulation

Procedia PDF Downloads 128
740 Procedural Protocol for Dual Energy Computed Tomography (DECT) Inversion

Authors: Rezvan Ravanfar Haghighi, S. Chatterjee, Pratik Kumar, V. C. Vani, Priya Jagia, Sanjiv Sharma, Susama Rani Mandal, R. Lakshmy

Abstract:

Dual energy computed tomography (DECT) aims at noting the HU(V) values for the sample at two different voltages V = V1, V2 and thus obtaining the electron density (ρe) and effective atomic number (Zeff) of the substance. In the present paper, we aim to obtain a numerical algorithm by which (ρe, Zeff) can be obtained from the HU(100) and HU(140) data, where V = 100, 140 kVp. The idea is to use this inversion method to characterize and distinguish between lipid and fibrous coronary artery plaques. With the idea to develop the inversion algorithm for low-Zeff materials, as is the case with non-calcified coronary artery plaque, we prepare aqueous samples whose calculated values of (ρe, Zeff) lie in the range (2.65×10²³ ≤ ρe ≤ 3.64×10²³ per cc) and (6.80 ≤ Zeff ≤ 8.90). We fill the phantom with these known samples and experimentally determine HU(100) and HU(140) for the same pixels. Knowing that the HU(V) values are related to the attenuation coefficient of the system, we present an algorithm by which (ρe, Zeff) is calibrated with respect to (HU(100), HU(140)). The calibration is done with a known set of 20 samples; its accuracy is checked with a different set of 23 known samples. We find that the calibration gives ρe with an accuracy of ±4%, while Zeff is found within ±1% of the actual value, the confidence being 95%. In this inversion method, (ρe, Zeff) of the scanned sample can be found by eliminating the effects of the CT machine and also by ensuring that the determination of the two unknowns (ρe, Zeff) does not interfere with each other. It is found that this algorithm can be used for the prediction of the chemical characteristics (ρe, Zeff) of unknown scanned materials with a 95% confidence level, by inversion of the DECT data.
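As a schematic illustration of such a calibration (the paper's actual algorithm may well be nonlinear and differ from this), the sketch below fits a linear map from (HU(100), HU(140)) to (ρe, Zeff) on synthetic calibration samples and then inverts new measurements; all numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical calibration set: measured (HU100, HU140) pairs with the known
# (rho_e, Z_eff) of the aqueous samples; 20 samples as in the paper.
hu = rng.normal(size=(20, 2)) * 30 + np.array([40.0, 35.0])
rho_e = 2.9e23 + 1.0e21 * hu[:, 0] + 5.0e20 * hu[:, 1] + 2.0e21 * rng.standard_normal(20)
z_eff = 7.0 + 0.02 * hu[:, 0] - 0.01 * hu[:, 1] + 0.05 * rng.standard_normal(20)

# Linear calibration with an intercept: [1, HU100, HU140] -> (rho_e, Z_eff).
A = np.column_stack([np.ones(20), hu])
coef, *_ = np.linalg.lstsq(A, np.column_stack([rho_e, z_eff]), rcond=None)

def invert(hu100, hu140):
    """Apply the fitted calibration to a new (HU100, HU140) measurement."""
    return np.array([1.0, hu100, hu140]) @ coef   # returns (rho_e, Z_eff)

print(invert(45.0, 38.0))
```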

Keywords: chemical composition, dual-energy computed tomography, inversion algorithm

Procedia PDF Downloads 436
739 Seismic Microzoning and Resonant Map for Urban Planning

Authors: F. Tahiri, F. Grajçevci

Abstract:

Cities are coping with permanent demands to extend their residential and economic capacity. New urban zones are sometimes induced to be developed in more vulnerable environments. This study is aimed at identifying and mitigating seismic hazards at the urban planning stage for new settlements, including existing urban environments which initially did not consider the seismic hazard. Seismic microzoning studies the amplification/attenuation of seismic excitations from the bedrock to the ground surface. The modification of the seismic excitation is governed by the site-specific ground conditions, presented at the ground surface as mean values of the ratio of maximum accelerations at the surface versus the acceleration of the subsoil media, expressed as dynamic amplification factors (DAF). These values are used to create maps with isolines of DAF and then the seismic microzoning with the expected maximum mean surface acceleration as the product of the DAF and the maximum acceleration at bedrock. The development of the resonant map combines the information obtained from the seismic microzoning regarding the expected predominant ground periods of seismic excitation with the periods of vibration of designed/built structures. This information shall be used as an indispensable tool in the early stages of urban planning to determine the most optimal zones for construction, the construction materials, structural systems, range of building heights, etc., so that resonance of the soil media with the built structures is avoided. The information could also be used for the assessment of seismic risk and the vulnerability/damageability of existing urban environments.

Keywords: vulnerable environment, mitigation, seismic microzoning, resonant map, urban planning

Procedia PDF Downloads 509
738 A Rapid Code Acquisition Scheme in OOC-Based CDMA Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

We propose a code acquisition scheme called improved multiple-shift (IMS) for optical code division multiple access systems, where the optical orthogonal code is used instead of the pseudo noise code. Although the IMS algorithm has a similar process to that of the conventional MS algorithm, it has a better code acquisition performance than the conventional MS algorithm. We analyze the code acquisition performance of the IMS algorithm and compare the code acquisition performances of the MS and the IMS algorithms in single-user and multi-user environments.

Keywords: code acquisition, optical CDMA, optical orthogonal code, serial algorithm

Procedia PDF Downloads 534
737 A Radiofrequency Based Navigation Method for Cooperative Robotic Communities in Surface Exploration Missions

Authors: Francisco J. García-de-Quirós, Gianmarco Radice

Abstract:

When considering small robots working in a cooperative community for Moon surface exploration, navigation and inter-node communication aspects become a critical issue for mission success. For this approach to succeed, it is necessary, however, to deploy the required infrastructure for the robotic community to achieve efficient self-localization as well as relative positioning and communications between nodes. In this paper, an exploration mission concept in which two cooperative robotic systems co-exist is presented. This paradigm hinges on a community of reference agents that provide support in terms of communication and navigation to a second agent community tasked with exploration goals. The work focuses on the role of the agent community in charge of the overall support and, more specifically, on the positioning and navigation methods implemented in RF microwave bands, which are combined with the communication services. An analysis of the different methods for range and position calculation is presented, as well as the main limiting factors for precision and resolution, such as phase and frequency noise in RF reference carriers and drift mechanisms such as thermal drift and random walk. The effects of carrier frequency instability due to phase noise are categorized in different contributing bands, and the impact of these spectrum regions is considered both in terms of the absolute position and the relative speed. A mission scenario is finally proposed, and key metrics in terms of mass and power consumption for the required payload hardware are also assessed. For this purpose, an application case involving an RF communication network in the UHF band is described, coexisting with the communications network used by the agents to communicate both within the exploring community and with the mission support agents. The proposed approach implements a substantial improvement in planetary navigation, since it provides self-localization capabilities for robotic agents characterized by very low mass, volume and power budgets, thus enabling precise navigation capabilities for agents of reduced dimensions. Furthermore, a common and shared localization radiofrequency infrastructure enables new interaction mechanisms such as the spatial arrangement of agents over the area of interest for distributed sensing.

Keywords: cooperative robotics, localization, robot navigation, surface exploration

Procedia PDF Downloads 288
736 The High Precision of Magnetic Detection with Microwave Modulation in Solid Spin Assembly of NV Centres in Diamond

Authors: Zongmin Ma, Shaowen Zhang, Yueping Fu, Jun Tang, Yunbo Shi, Jun Liu

Abstract:

Solid-state quantum sensors are attracting wide interest because of their high sensitivity at room temperature. In particular, the spin properties of nitrogen-vacancy (NV) color centres in diamond make them outstanding sensors of magnetic fields, electric fields and temperature under ambient conditions. Much of the work on NV magnetic sensing has aimed at achieving the smallest volume and high sensitivity of NV ensemble-based magnetometry, using micro-cavities, the light-trapping diamond waveguide (LTDW), and nano-cantilevers combined with MEMS (Micro-Electro-Mechanical System) techniques. Recently, a method using frequency-modulated microwaves with continuous optical excitation has been proposed to achieve a high sensitivity of 6 μT/√Hz using individual NV centres at the nanoscale. In this research, we built an experiment to measure a static magnetic field by the frequency-modulated microwave method under continuous illumination with green pump light at 532 nm, using a bulk diamond sample with a high density of NV centers (1 ppm). The output of the confocal microscope was collected by an objective (NA = 0.7) and detected by a high-sensitivity photodetector. We designed a microstrip antenna for uniform and efficient microwave excitation, which couples well with the spin ensembles at the 2.87 GHz zero-field splitting of the NV centers. The photodetector output was sent to a lock-in amplifier (LIA), with the modulated reference signal generated by the microwave source through an IQ mixer. The detected signal is received by the photodetector, and the reference signal enters the lock-in amplifier to realize the open-loop detection of the NV atomic magnetometer. We can plot ODMR spectra under continuous-wave (CW) microwave excitation. Due to the high sensitivity of the lock-in amplifier, the minimum detectable voltage can be measured, and the minimum detectable frequency shift can be obtained from this minimum voltage and the slope of the ODMR curve. The magnetic field sensitivity can be derived from η = δB·√T, which corresponds to a 10 nT minimum detectable shift in the magnetic field. Further, frequency analysis of the noise in the system indicates that at 10 Hz the sensitivity is less than 10 nT/√Hz.

Keywords: nitrogen-vacancy (NV) centers, frequency-modulated microwaves, magnetic field sensitivity, noise density

Procedia PDF Downloads 434
735 Sinapic Acid Attenuation of Cyclophosphamide-Induced Liver Toxicity in Mice by Modulating Oxidative Stress, NF-κB, and Caspase-3

Authors: Shiva Rezaei, Seyed Jalal Hosseinimehr, Abbasali Karimpour Malekshah, Mansooreh Mirzaei, Fereshteh Talebpour Amiri, Mehryar Zargari

Abstract:

Objective(s): Cyclophosphamide (CP), as an antineoplastic drug, is widely used in cancer patients, and liver toxicity is one of its complications. Sinapic acid (SA), as a natural phenylpropanoid, has antioxidant, anti-inflammatory, and anti-cancer properties. Materials and Methods: The purpose of the current study was to determine the protective effect of SA versus CP-induced liver toxicity. In this research, BALB/c mice were treated with SA (5 and 10 mg/kg) orally for one week, and CP (200 mg/kg) was injected on day 3 of the study. Oxidative stress markers, serum liver-specific enzymes, histopathological features, caspase-3, and nuclear factor kappa-B cells were then checked. Results: CP induced hepatotoxicity in mice and showed structural changes in liver tissue. CP significantly increased liver enzymes and lipid peroxidation and decreased glutathione. The immunoreactivity of caspase-3 and nuclear factor kappa-B cells was significantly increased. Administration of SA significantly maintained histochemical parameters and liver function enzymes in mice treated with CP. Immunohistochemical examination showed SA reduced apoptosis and inflammation. Conclusion: The data confirmed that SA with anti-apoptotic, anti-oxidative, and anti-inflammatory activities was able to preserve CP-induced liver injury in mice.

Keywords: apoptosis, cyclophosphamide, liver injury, inflammation, oxidative stress, sinapic acid

Procedia PDF Downloads 51