Search results for: PQ signal filtering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1933

1093 0.13-μm CMOS Vector Modulator for Wireless Backhaul System

Authors: J. S. Kim, N. P. Hong

Abstract:

In this paper, a CMOS vector modulator designed for a wireless backhaul system based on 802.11ac is presented. A polyphase filter and sign-select switches yield two orthogonal signal paths. Two variable gain amplifiers with a strongly reduced phase shift of only ±5° are used to weight these paths. The modulator has a phase control range of 360° and a gain range of -10 dB to 10 dB. The current drawn from a 1.2 V supply amounts to 20.4 mA. Using a 0.13 μm technology, the chip die area amounts to 1.47 × 0.75 mm².

Keywords: CMOS, phase shifter, backhaul, 802.11ac

Procedia PDF Downloads 383
1092 Application of Compressed Sensing Method for Compression of Quantum Data

Authors: M. Kowalski, M. Życzkowski, M. Karol

Abstract:

Current quantum key distribution (QKD) systems offer low bit rates, on the order of single megahertz. Compared to conventional optical fiber links with multi-GHz bit rates, the parameters of recent QKD systems are significantly lower. In this article, we present the concept of applying the compressed sensing method to the compression of quantum information. The compression methodology, the signal reconstruction method, and initial results on improving the throughput of a quantum information link are presented.
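
The abstract does not specify the reconstruction algorithm, so the following is only a minimal, generic compressed-sensing sketch: a sparse signal is sampled with a random measurement matrix and recovered with orthogonal matching pursuit from scikit-learn. All names and dimensions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Illustrative dimensions (assumptions, not from the paper)
n, m, k = 256, 64, 8          # signal length, number of measurements, sparsity

rng = np.random.default_rng(0)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                      # compressed measurements (m << n)

# Sparse reconstruction by orthogonal matching pursuit
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
omp.fit(Phi, y)
x_hat = omp.coef_

print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```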

Keywords: quantum key distribution systems, fiber optic system, compressed sensing

Procedia PDF Downloads 690
1091 Integrated Geotechnical and Geophysical Investigation of a Proposed Construction Site at Mowe, Southwestern Nigeria

Authors: Kayode Festus Oyedele, Sunday Oladele, Adaora Chibundu Nduka

Abstract:

The subsurface of a proposed site for building development in Mowe, Nigeria, was investigated using the Standard Penetration Test (SPT) and Cone Penetrometer Test (CPT), supplemented with Horizontal Electrical Profiling (HEP), with the aim of evaluating the suitability of the strata as foundation materials. Four SPT and CPT soundings were carried out using a 10-tonne hammer. HEP utilizing the Wenner array was performed with inter-electrode spacings of 10–60 m along four traverses coincident with the SPT and CPT locations. The HEP data were processed using DIPRO software, and textural filtering of the resulting resistivity sections was implemented to enable delineation of hidden layers. Sandy lateritic clay, silty lateritic clay, clay, clayey sand and sand horizons were delineated. The SPT "N" values defined very soft to soft sandy lateritic clay (<4), stiff silty lateritic clay (7–12), very stiff silty clay (12–15), clayey sand (15–20) and sand (27–37). Sandy lateritic clay (5–40 kg/cm²) and silty lateritic clay (25–65 kg/cm²) were defined from the CPT response. Sandy lateritic clay (220–750 Ωm), clay (<50 Ωm) and sand (415–5359 Ωm) were delineated from the resistivity sections, with two thin layers of silty lateritic clay and clayey sand defined in the texturally filtered resistivity sections. This study concluded that the presence of thick, incompetent clayey materials (18 m) beneath the study area makes it unsuitable for a shallow foundation. A deep foundation involving piling through the clayey layers to the competent sand at 20 m depth was recommended.

Keywords: cone penetrometer, foundation, lithologic texture, resistivity section, standard penetration test

Procedia PDF Downloads 263
1090 Enhanced Near-Infrared Upconversion Emission Based Lateral Flow Immunoassay for Background-Free Detection of Avian Influenza Viruses

Authors: Jaeyoung Kim, Heeju Lee, Huijin Jung, Heesoo Pyo, Seungki Kim, Joonseok Lee

Abstract:

Avian influenza viruses (AIV) are the primary cause of highly contagious respiratory diseases caused by type A influenza viruses of the Orthomyxoviridae family. AIV are categorized on the basis of their surface glycoproteins, such as hemagglutinin and neuraminidase. Certain H5 and H7 subtypes of AIV have evolved into highly pathogenic avian influenza (HPAI) viruses, which have caused considerable economic loss to the poultry industry and led to severe public health crises. Several commercial kits have been developed for on-site detection of AIV. However, the sensitivity of these methods is too low to detect low virus concentrations in clinical samples and opaque stool samples. Here, we introduce a background-free near-infrared (NIR)-to-NIR upconversion nanoparticle-based lateral flow immunoassay (NNLFA) platform that yields a sensor detecting AIV within 20 minutes. Ca²⁺ ions in the shell were used as a heterogeneous dopant to enhance the NIR-to-NIR upconversion photoluminescence (PL) emission without inducing significant changes in the morphology and size of the UCNPs. In a mixture of opaque stool samples and gold nanoparticles (GNPs), which are components of commercial AIV LFA, the background signal of the stool samples masks the absorption peak of the GNPs. However, UCNPs dispersed in the stool samples still show strong emission centered at 800 nm when excited at 980 nm, which enables the NNLFA platform to detect a 10-times lower viral load than a commercial GNP-based AIV LFA. The detection limits of the NNLFA for low pathogenic avian influenza (LPAI) H5N2 and HPAI H5N6 viruses were 10² EID₅₀/mL and 10³.⁵ EID₅₀/mL, respectively. Moreover, when opaque brown-colored samples were used as the target analytes, a strong NIR emission signal from the test line in the NNLFA confirmed the presence of AIV, whereas the commercial AIV LFA detected AIV only with difficulty. Therefore, we propose that this rapid and background-free NNLFA platform has the potential to detect AIV in the field, which could effectively prevent the spread of these viruses at an early stage.

Keywords: avian influenza viruses, lateral flow immunoassay, on-site detection, upconversion nanoparticles

Procedia PDF Downloads 163
1089 Successful Rehabilitation of Recalcitrant Knee Pain Due to Anterior Cruciate Ligament Injury Masked by Extensive Skin Graft: A Case Report

Authors: Geum Yeon Sim, Tyler Pigott, Julio Vasquez

Abstract:

A 38-year-old obese female with no apparent past medical history presented with left knee pain. Six months earlier, she had sustained a left knee dislocation in a motor vehicle accident, which was managed with a skin graft over the left lower extremity without any reconstructive surgery. She developed persistent pain and stiffness in her left knee that worsened with walking and stair climbing. Examination revealed a healed, extensive skin graft over the left lower extremity, including the left knee. Palpation showed moderate tenderness along the superior border of the patella, exquisite tenderness over the MCL, and mild tenderness on the tibial tuberosity. Sensation, reflexes, and strength in her lower extremities were normal. Active and passive range of motion of her left knee was limited in flexion. Instability was noted on valgus stress testing of the left knee. Left knee magnetic resonance imaging showed high-grade (grade 2-3) injury of the proximal superficial fibers of the MCL and diffuse thickening and signal abnormality of the cruciate ligaments, as well as edema-like subchondral marrow signal change in the anterolateral aspect of the lateral femoral condyle weight-bearing surface. There was also notable extensive scarring and edema of the skin, subcutaneous soft tissues, and musculature surrounding the knee. The patient was managed with left knee immobilization for five months, which was complicated by limited knee flexion. Physical therapy consisting of quadriceps, hamstring, and gastrocnemius stretching and strengthening, range-of-motion exercises, scar/soft-tissue mobilization, and gait training was given, with marked improvement in pain and range of motion. The patient experienced a further reduction in pain as well as an improvement in function with home exercises consisting of continued strengthening and stretching.

Keywords: ligamentous injury, trauma, rehabilitation, knee pain

Procedia PDF Downloads 106
1088 Development of a Laboratory Laser-Produced Plasma “Water Window” X-Ray Source for Radiobiology Experiments

Authors: Daniel Adjei, Mesfin Getachew Ayele, Przemyslaw Wachulak, Andrzej Bartnik, Luděk Vyšín, Henryk Fiedorowicz, Inam Ul Ahad, Lukasz Wegrzynski, Anna Wiechecka, Janusz Lekki, Wojciech M. Kwiatek

Abstract:

Laser-produced plasma light sources emitting high-intensity pulses of X-rays and delivering high doses are useful for understanding the mechanisms of high-dose effects on biological samples. In this study, a desktop laser plasma soft X-ray source, developed for radiobiology research, is presented. The source is based on a double-stream gas puff target, irradiated with a commercial Nd:YAG laser (EKSPLA), which generates laser pulses of 4 ns duration and energy up to 800 mJ at a 10 Hz repetition rate. The source has been optimized for maximum emission in the "water window" wavelength range from 2.3 nm to 4.4 nm by using pure gases (argon, nitrogen and krypton) and spectral filtering. Results of the source characterization measurements and dosimetry of the produced soft X-ray radiation are shown and discussed. The high brightness of the laser-produced plasma soft X-ray source and the low penetration depth of the produced X-ray radiation in biological specimens allow a high dose rate to be delivered to the specimen: over 28 Gy per shot, corresponding to 280 Gy/s at the maximum repetition rate of the laser system. The source has a unique capability for irradiation of cells with a high pulse dose both in vacuum and in a helium environment. Demonstration of the source's ability to induce DNA double- and single-strand breaks will be discussed.

Keywords: laser-produced plasma, soft X-rays, radiobiology experiments, dosimetry

Procedia PDF Downloads 585
1087 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers

Authors: Shreyas Srinivas Rangan, Jurgis Porins

Abstract:

Internet traffic has increased exponentially due to users' high demand for data rates, and the constantly growing metro and access networks are focused on improving the maximum transmission distance of long-reach optical networks. One of the common methods to improve the maximum transmission distance of long-reach optical networks at the component level is to use broadband optical amplifiers. The Erbium Doped Fiber Amplifier (EDFA) provides high amplification with a low noise figure, but due to its characteristics, its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative noise figure values can be achieved. To obtain such results, high-powered pumping sources are required. Operating Raman amplifiers with such high-powered optical sources may cause fire hazards and may damage the optical system. In this paper, we implement a hybrid optical amplifier configuration. An EDFA and a Raman amplifier are used in this hybrid setup to combine the advantages of both and improve the reach of the system. Using this setup, we analyze the maximum transmission distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the Bit Error Rate (BER). This hybrid amplifier configuration is implemented in a Wavelength Division Multiplexing (WDM) system targeting a BER of 10⁻⁹ with the NRZ modulation format, and the gain uniformity, signal-to-noise ratio (SNR), pumping source efficiency, and optical signal gain efficiency of the amplifier are studied in a mathematical modelling environment. Numerical simulations were implemented in the RSoft OptSim simulation software, based on the nonlinear Schrödinger equation solved with the split-step Fourier method, and the Monte Carlo method was used for estimating the BER.
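
The simulations cited above rely on RSoft OptSim, which is not shown here; the snippet below is only a minimal sketch of the symmetric split-step Fourier method for the scalar nonlinear Schrödinger equation, with illustrative SMF parameters (β₂, γ, α) that are assumptions rather than values from the paper.

```python
import numpy as np

def ssfm(A0, dt, length_km, beta2=-21.7e-27, gamma=1.3e-3, alpha_db_km=0.2, steps=2000):
    """Propagate a field envelope A0(t) over an SMF span with the split-step Fourier method.

    beta2 [s^2/m], gamma [1/(W*m)], alpha_db_km [dB/km] are illustrative SMF values.
    """
    n = A0.size
    dz = length_km * 1e3 / steps                        # step size [m]
    alpha = alpha_db_km / (10 / np.log(10)) / 1e3       # attenuation [1/m]
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)         # angular frequency grid
    # Linear operator (dispersion + loss) applied over half a step in the frequency domain
    half_lin = np.exp((1j * beta2 / 2 * omega**2 - alpha / 2) * dz / 2)
    A = A0.astype(complex)
    for _ in range(steps):
        A = np.fft.ifft(half_lin * np.fft.fft(A))       # half linear step
        A *= np.exp(1j * gamma * np.abs(A)**2 * dz)     # full nonlinear step
        A = np.fft.ifft(half_lin * np.fft.fft(A))       # half linear step
    return A

# Example: a 10 ps Gaussian pulse propagated over 80 km of fiber
dt = 0.1e-12
t = np.arange(-2048, 2048) * dt
A0 = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (10e-12)**2))
A_out = ssfm(A0, dt, length_km=80)
```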

Keywords: Raman amplifier, erbium doped fibre amplifier, bit error rate, hybrid optical amplifiers

Procedia PDF Downloads 67
1086 Noise Mitigation Techniques to Minimize Electromagnetic Interference/Electrostatic Discharge Effects for the Lunar Mission Spacecraft

Authors: Vabya Kumar Pandit, Mudit Mittal, N. Prahlad Rao, Ramnath Babu

Abstract:

TeamIndus is the only Indian team competing for the Google Lunar XPRIZE (GLXP). The GLXP is a global competition to challenge private entities to soft-land a rover on the moon, travel a minimum of 500 meters, and transmit high-definition images and videos to Earth. Towards this goal, the TeamIndus strategy is to design and develop a lunar lander that will deliver a rover onto the surface of the moon to accomplish the GLXP mission objectives. This paper showcases the various system-level noise control techniques adopted by the Electrical Distribution System (EDS) to achieve the required Electromagnetic Compatibility (EMC) of the spacecraft. The design guidelines followed to control Electromagnetic Interference by proper electronic package design, grounding, shielding, filtering, and cable routing within the stipulated mass budget are explained. The paper also deals with the challenges of achieving Electromagnetic Cleanliness in the presence of various Commercial Off-The-Shelf (COTS) and in-house developed components. The methods of minimizing Electrostatic Discharge (ESD) by identifying potential noise sources and areas susceptible to charge accumulation, and the methodology to prevent arcing inside the spacecraft, are explained. The paper then provides the EMC requirements matrix derived from the mission requirements to meet the overall electromagnetic compatibility of the spacecraft.

Keywords: electromagnetic compatibility, electrostatic discharge, electrical distribution systems, grounding schemes, light weight harnessing

Procedia PDF Downloads 291
1085 Double Functionalization of Magnetic Colloids with Electroactive Molecules and Antibody for Platelet Detection and Separation

Authors: Feixiong Chen, Naoufel Haddour, Marie Frenea-Robin, Yves Mérieux, Yann Chevolot, Virginie Monnier

Abstract:

Neonatal thrombopenia occurs when the mother generates antibodies against her baby's platelet antigens. It is particularly critical for newborns because it can cause coagulation troubles leading to intracranial hemorrhage. In this case, diagnosis must be done quickly to allow platelet transfusion immediately after birth. Before transfusion, platelet antigens must be tested carefully to avoid rejection. The majority of cases (95%) are caused by antibodies directed against Human Platelet Antigen 1a (HPA-1a) or 5b (HPA-5b). The common method for platelet antigen detection is polymerase chain reaction, allowing identification of the gene sequence. However, it is expensive, time-consuming and requires a significant blood volume, which is not suitable for newborns. We propose to develop a point-of-care device based on magnetic colloids doubly functionalized with 1) antibodies specific to platelet antigens and 2) highly sensitive electroactive molecules, in order to be detected by an electrochemical microsensor. These magnetic colloids will be used first to isolate platelets from other blood components, then to specifically capture platelets bearing HPA-1a and HPA-5b antigens, and finally to attract them close to the sensor working electrode for an improved electrochemical signal. The expected advantages are an assay time shorter than 20 min starting from a blood volume smaller than 100 µL. Our functionalization procedure based on amine dendrimers and NHS-ester modification of initial carboxyl colloids will be presented. Functionalization efficiency was evaluated by colorimetric titration of surface chemical groups, zeta potential measurements, infrared spectroscopy, fluorescence scanning and cyclic voltammetry. Our results showed that electroactive molecules and antibodies can be immobilized successfully onto magnetic colloids. Application of a magnetic field onto the working electrode increased the detected electrochemical signal. Magnetic colloids were able to capture specific purified antigens extracted from platelets.

Keywords: magnetic nanoparticles, electroactive molecules, antibody, platelet

Procedia PDF Downloads 269
1084 Identification of Spam Keywords Using Hierarchical Category in C2C E-Commerce

Authors: Shao Bo Cheng, Yong-Jin Han, Se Young Park, Seong-Bae Park

Abstract:

Consumer-to-Consumer (C2C) E-commerce has been growing at a very high speed in recent years. Since identical or nearly identical kinds of products compete with one another through keyword search in C2C e-commerce, some sellers describe their products with spam keywords that are popular but not related to their products. Though such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead the consumers and waste their time. This problem has been reported in many commercial services like eBay and Taobao, but there has been little research to solve it. As a solution to this problem, this paper proposes a method to classify whether keywords of a product are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is observed commonly in specifications of products which are the same or the same kind as the given product. This is because the hierarchical category of a product is, in general, determined precisely by the seller of the product, and so is the specification of the product. Since higher layers of the hierarchical category represent more general kinds of products, a reliability degree is determined differently for each layer. Hence, reliability degrees from different layers of the hierarchical category become features for keywords, and they are used together with specification-only features for classification of the keywords. A Support Vector Machine is adopted as the basic classifier using these features, since it is powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated with a gold-standard dataset from Yi-han-wang, a Chinese C2C e-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which shows that spam keywords are effectively identified using the hierarchical category in C2C e-commerce.
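
The paper's feature set (per-layer reliability degrees plus specification features) and dataset are not available here, so the snippet below is only a schematic sketch of the classification step: an SVM trained on a feature matrix whose first columns stand in for hypothetical per-layer reliability degrees. Feature names and values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical feature matrix: one row per (product, keyword) pair.
# Columns 0-2: reliability degrees computed at three category layers
# (leaf, middle, top); remaining columns: specification-based features.
rng = np.random.default_rng(0)
X = rng.random((500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] < 0.6).astype(int)   # 1 = spam keyword (toy labeling rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0)     # basic SVM classifier
clf.fit(X_tr, y_tr)
print("F1-measure:", f1_score(y_te, clf.predict(X_te)))
```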

Keywords: spam keyword, e-commerce, keyword features, spam filtering

Procedia PDF Downloads 293
1083 Implementation of a Monostatic Microwave Imaging System Using a UWB Vivaldi Antenna

Authors: Babatunde Olatujoye, Binbin Yang

Abstract:

Microwave imaging is a portable, noninvasive, and non-ionizing imaging technique that employs low-power microwave signals to reveal objects in the microwave frequency range. This technique has immense potential for adoption in commercial and scientific applications such as security scanning, material characterization, and nondestructive testing. This work presents a monostatic microwave imaging setup using an Ultra-Wideband (UWB), low-cost, miniaturized Vivaldi antenna with a bandwidth of 1–6 GHz. The backscattered signals (S-parameters) of the Vivaldi antenna used for scanning targets were measured in the lab using a VNA. An automated two-dimensional (2-D) scanner was employed for the 2-D movement of the transceiver to collect the measured scattering data from different positions. The targets consist of four metallic objects, each with a distinct shape. A similar setup was also simulated in Ansys HFSS. A high-resolution Back Propagation Algorithm (BPA) was applied to both the simulated and experimental backscattered signals. The BPA utilizes the phase and amplitude information recorded over a two-dimensional aperture of 50 cm × 50 cm with a discrete step size of 2 cm to reconstruct a focused image of the targets. The adoption of the BPA was demonstrated by coherently resolving and reconstructing reflection signals from conventional time-of-flight profiles. For both the simulation and experimental data, the BPA accurately reconstructed a high-resolution 2-D image of the targets in terms of shape and location. An improvement of the BPA, in terms of target resolution, was achieved by applying filtering in the frequency domain.
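
The paper does not give its BPA implementation details; the following is only a minimal sketch of a frequency-domain back-projection for a monostatic scan, where the measured S-parameters are assumed to be stored as a (positions × frequencies) array. Grid sizes, frequencies, standoff distance, and variable names are illustrative assumptions.

```python
import numpy as np

def back_projection(s_params, ant_xy, freqs, img_x, img_y, z_target=0.3):
    """Coherently back-project monostatic S-parameters onto an image grid.

    s_params : complex array, shape (n_positions, n_freqs)
    ant_xy   : antenna positions in meters, shape (n_positions, 2)
    freqs    : measurement frequencies in Hz
    img_x/y  : 1-D arrays defining the image grid (meters)
    z_target : assumed standoff distance between aperture and target plane
    """
    c = 3e8
    k = 2 * np.pi * freqs / c                 # wavenumbers
    image = np.zeros((img_y.size, img_x.size), dtype=complex)
    for (ax, ay), s in zip(ant_xy, s_params):
        # distance from this antenna position to every pixel
        dx = img_x[None, :] - ax
        dy = img_y[:, None] - ay
        r = np.sqrt(dx**2 + dy**2 + z_target**2)
        # compensate the two-way propagation phase and sum coherently over frequency
        image += (s[None, None, :] * np.exp(2j * k[None, None, :] * r[:, :, None])).sum(axis=-1)
    return np.abs(image)

# Example grid: 50 cm x 50 cm aperture scanned in 2 cm steps, 1-6 GHz sweep;
# measured s_params and ant_xy would be supplied from the VNA scan.
xs = np.arange(-0.25, 0.26, 0.02)
freqs = np.linspace(1e9, 6e9, 101)
```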

Keywords: back propagation, microwave imaging, monostatic, Vivaldi antenna, ultra wideband

Procedia PDF Downloads 16
1082 Creating Risk Maps on the Spatiotemporal Occurrence of Agricultural Insecticides in Sub-Saharan Africa

Authors: Chantal Hendriks, Harry Gibson, Anna Trett, Penny Hancock, Catherine Moyes

Abstract:

The use of modern inputs for crop protection, such as insecticides, is strongly underestimated in Sub-Saharan Africa. Several studies measured toxic concentrations of insecticides in fruits, vegetables and fish that were cultivated in Sub-Saharan Africa. The use of agricultural insecticides has an impact on human and environmental health, but it also has the potential to affect insecticide resistance in malaria-transmitting mosquitoes. To analyse associations between historic use of agricultural insecticides and the distribution of insecticide resistance through space and time, the use and environmental fate of agricultural insecticides need to be mapped over the same time period. However, data on the use and environmental fate of agricultural insecticides in Africa are limited, and therefore risk maps on the spatiotemporal occurrence of agricultural insecticides are created using environmental data. Environmental data on crop density and crop type were used to select the areas that most likely receive insecticides. These areas were verified by a literature review and expert knowledge. Pesticide fate models were compared to select the most dominant processes that are involved in the environmental fate of insecticides and that can be mapped at a continental scale. The selected processes include surface runoff, erosion, infiltration, volatilization and the storing and filtering capacity of soils. The processes indicate the risk of insecticide accumulation in soil, water, sediment and air. A compilation of all available data for traces of insecticides in the environment was used to validate the maps. The risk maps can result in space- and time-specific measures that reduce the risk of insecticide exposure to non-target organisms.

Keywords: crop protection, pesticide fate, tropics, insecticide resistance

Procedia PDF Downloads 140
1081 Acoustic Emission for Investigation of Processes Occurring at Hydrogenation of Metallic Titanium

Authors: Anatoly A. Kuznetsov, Pavel G. Berezhko, Sergey M. Kunavin, Eugeny V. Zhilkin, Maxim V. Tsarev, Vyacheslav V. Yaroshenko, Valery V. Mokrushin, Olga Y. Yunchina, Sergey A. Mityashin

Abstract:

Acoustic emission is the short-time propagation of elastic waves generated as a result of a rapid release of energy from sources localized inside a material. In particular, the acoustic emission phenomenon lies in the generation of acoustic waves resulting from the rearrangement of the material's internal structure. This phenomenon is observed during various physicochemical transformations, in particular those accompanying the hydrogenation of metals or intermetallic compounds, which makes it possible to study the parameters of these transformations by recording and analyzing the acoustic signals. It is known that during the interaction of metals or intermetallics with hydrogen, the most intense acoustic signals are generated by cracking or crumbling of an initially compact sample as a result of the change of the material's crystal structure under hydrogenation. This work is dedicated to the study of the changes occurring in metallic titanium samples during their interaction with hydrogen, accompanied by acoustic emission signals. In this work the subjects for investigation were specimens of metallic titanium in two different initial forms: titanium sponge and fine titanium powder made from this sponge. The kinetics of the interaction of these materials with hydrogen, the acoustic emission signals accompanying the hydrogenation processes and the structure of the materials before and after hydrogenation were investigated. It was determined that in both cases the interaction of metallic titanium with hydrogen is accompanied by high-amplitude acoustic emission signals generated on reaching a certain value of the atomic ratio [H]/[Ti] in the solid phase, because of metal cracking at the macro level. The typical sizes of the cracks are comparable with the particle sizes of the hydrogenated specimens. The reason for cracking is internal stress induced in a sample due to the increasing volume of the solid phase as a result of changes in the material's crystal lattice under hydrogenation. When the titanium powder is used, the atomic ratio [H]/[Ti] in the solid phase corresponding to the maximum amplitude of the acoustic emission signal is, as a rule, higher than when titanium sponge is used.

Keywords: acoustic emission signal, cracking, hydrogenation, titanium specimen

Procedia PDF Downloads 384
1080 Empowering Transformers for Evidence-Based Medicine

Authors: Jinan Fiaidhi, Hashmath Shaik

Abstract:

Breaking the barrier to practicing evidence-based medicine relies on effective methods for rapidly identifying relevant evidence from the body of biomedical literature. An important challenge confronted by medical practitioners is the long time needed to browse, filter, summarize and compile information from different medical resources. Deep learning can help solve this problem through automatic question answering (Q&A) and transformers. However, Q&A and transformer technologies are not trained to answer clinical queries that can be used for evidence-based practice, nor can they respond to structured clinical questioning protocols like PICO (Patient/Problem, Intervention, Comparison and Outcome). This article describes the use of deep learning techniques for Q&A, based on transformer models like BERT and GPT, to answer PICO clinical questions that can be used for evidence-based practice, extracted from sound medical research resources like PubMed. We are reporting acceptable clinical answers that are supported by findings from PubMed. Our transformer methods reach an acceptable state-of-the-art performance based on a two-stage bootstrapping process involving filtering relevant articles followed by identifying articles that support the requested outcome expressed by the PICO question. Moreover, we also report experiments that empower our bootstrapping techniques with patched attention to the most important keywords in the clinical case and the PICO question. Our bootstrapping patched with attention shows the relevance of the collected evidence based on entropy metrics.
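
The authors' BERT/GPT pipeline is not reproduced here; the snippet below is only a generic sketch of the first bootstrapping stage, filtering candidate PubMed abstracts by lexical relevance to a PICO question using TF-IDF cosine similarity. The abstracts, the question, and the cutoff are illustrative assumptions; a transformer-based ranker would replace this step in the actual method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical PICO question and candidate abstracts (placeholders, not real PubMed records)
pico_question = ("In adults with type 2 diabetes (P), does metformin (I) compared with "
                 "sulfonylureas (C) reduce cardiovascular events (O)?")
abstracts = [
    "Metformin therapy in type 2 diabetes and cardiovascular outcomes ...",
    "Sulfonylurea monotherapy and risk of myocardial infarction ...",
    "Topical treatments for atopic dermatitis in children ...",
]

# Stage 1: keep only abstracts lexically relevant to the PICO question
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform([pico_question] + abstracts)
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()

relevant = [a for a, s in zip(abstracts, scores) if s > 0.1]   # illustrative cutoff
print(scores, len(relevant))
```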

Keywords: automatic question answering, PICO questions, evidence-based medicine, generative models, LLM transformers

Procedia PDF Downloads 41
1079 Clouds Influence on Atmospheric Ozone from GOME-2 Satellite Measurements

Authors: S. M. Samkeyat Shohan

Abstract:

This study is mainly focused on the determination and analysis of the photolysis rate of atmospheric, specifically tropospheric, ozone as a function of cloud properties throughout the year 2007. The observational basis for ozone concentrations and cloud properties is the measurement data set of the Global Ozone Monitoring Experiment-2 (GOME-2) sensor on board the polar-orbiting Metop-A satellite. Two different spectral ranges are used; ozone total columns are calculated from the wavelength window 325–335 nm, while cloud properties, such as cloud top height (CTH) and cloud optical thickness (COT), are derived from the absorption band of molecular oxygen centered at 761 nm. Cloud fraction (CF) is derived from measurements in the ultraviolet, visible and near-infrared range of GOME-2. First, ozone concentrations above clouds are derived from ozone total columns, subtracting the contribution of stratospheric ozone and filtering those satellite measurements which have thin and low clouds. Then, the values of ozone photolysis derived from observations are compared with theoretically modeled results in the latitudinal belts 5°N–5°S and 20°N–20°S, as functions of CF and COT. In general, good agreement is found between the data and the model, confirming both the quality of the space-borne ozone and cloud properties and the modeling theory of the ozone photolysis rate. The discrepancies found can, however, amount to approximately 15%. Latitudinal seasonal changes of the ozone photolysis rate are found to be negatively correlated with changes in upper-tropospheric ozone concentrations only in the autumn and summer months within the northern and southern tropical belts, respectively. This fact points to the entangled roles of temperature and nitrogen oxides in ozone production, which are superimposed on photolysis alone induced by thick and high clouds in the tropics.

Keywords: cloud properties, photolysis rate, stratospheric ozone, tropospheric ozone

Procedia PDF Downloads 211
1078 Dynamic Simulation of a Hybrid Wind Farm with Wind Turbines and Distributed Compressed Air Energy Storage System

Authors: Eronini Iheanyi Umez-Eronini

Abstract:

Most studies and existing implementations of compressed air energy storage (CAES) coupled with a wind farm to overcome the intermittency and variability of wind power are based on bulk or centralized CAES plants. A dynamic model of a hybrid wind farm with wind turbines and distributed CAES, consisting of air storage tanks and compressor and expander trains at each wind turbine station, is developed and simulated in MATLAB. An ad hoc supervisory controller, in which the wind turbines are simply operated under classical power-optimizing region control while scheduling power production by the expanders and air storage by the compressors, including modulation of the compressor power levels within a control range, is used to regulate overall farm power production to track a minute-scale (3-minute sampling period) TSO absolute power reference signal over an eight-hour period. Simulation results for real wind data input, with a simple wake field model applied to a hybrid plant composed of ten 5-MW wind turbines in a row and ten compatibly sized and configured diabatic CAES stations, show that the plant controller is able to track the power demand signal within an error band on the order of the electrical power rating of a single expander. This performance suggests that much improved results should be anticipated when the global D-CAES control is combined with power regulation for the individual wind turbines using available approaches for wind farm active power control. For a standalone power plant fuel-to-electricity efficiency estimate of up to 60%, the round-trip electrical storage efficiency computed for the distributed CAES, wherein heat generated by the running compressors is utilized in the preheat stage of the running high-pressure expanders while fuel is introduced and combusted before the low-pressure expanders, was comparable to reported round-trip electrical storage efficiencies for bulk adiabatic CAES.

Keywords: hybrid wind farm, distributed CAES, diabatic CAES, active power control, dynamic modeling and simulation

Procedia PDF Downloads 82
1077 An Amended Method for Assessment of Hypertrophic Scars Viscoelastic Parameters

Authors: Iveta Bryjova

Abstract:

Recording of viscoelastic strain-vs-time curves with the aid of the suction method and a follow-up analysis, resulting in the evaluation of standard viscoelastic parameters, is a significant technique for non-invasive contact diagnostics of the mechanical properties of skin and assessment of its conditions, particularly in acute burns, hypertrophic scarring (the most common complication of burn trauma) and reconstructive surgery. For elimination of the skin thickness contribution, usable viscoelastic parameters deduced from the strain-vs-time curves are restricted to the relative ones (i.e., those expressed as a ratio of two dimensional parameters), like gross elasticity, net elasticity, biological elasticity or Qu's area parameters, in the literature and practice conventionally referred to as R2, R5, R6, R7, Q1, Q2, and Q3. With the exception of parameters R2 and Q1, the remaining ones substantially depend on the position of the inflection point separating the elastic linear and viscoelastic segments of the strain-vs-time curve. The standard algorithm implemented in commercially available devices relies heavily on the experimental fact that the inflection time comes about 0.1 sec after the suction switch-on/off, which depreciates the credibility of the parameters thus obtained. Although Qu's US 7,556,605 patent suggests a method for improving the precision of the inflection determination, there is still room for non-negligible improvement. In this contribution, a novel method of inflection point determination utilizing the advantageous properties of Savitzky–Golay filtering is presented. The method allows computation of the derivatives of the smoothed strain-vs-time curve, more exact location of the inflection point, and consequently more reliable values of the aforementioned viscoelastic parameters. An improved applicability of the five inflection-dependent relative viscoelastic parameters is demonstrated by recasting a former study under the new method, and by comparing its results with those provided by the methods that have been used so far.
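
The paper's exact procedure is not given in the abstract; the snippet below is only a minimal sketch of the idea it describes: smooth a strain-vs-time curve with a Savitzky–Golay filter, estimate its second derivative with the same filter, and take the point of maximum curvature magnitude as the boundary between the linear elastic rise and the viscoelastic creep. The synthetic curve, window length, and polynomial order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic strain-vs-time curve: linear elastic rise followed by viscoelastic creep
fs = 100.0                                   # sampling rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)
strain = np.where(t < 0.12, 2.5 * t, 0.3 + 0.15 * (1 - np.exp(-(t - 0.12) / 0.4)))
strain += 0.002 * np.random.default_rng(0).standard_normal(t.size)   # measurement noise

win, poly = 21, 3                            # filter window (samples) and polynomial order
smooth = savgol_filter(strain, win, poly)
# Second derivative estimated directly by the Savitzky-Golay filter (delta gives physical units)
d2 = savgol_filter(strain, win, poly, deriv=2, delta=1.0 / fs)

# Inflection estimate: where the curve bends most sharply from the elastic to the creep segment
idx = int(np.argmax(np.abs(d2)))
print("estimated inflection time [s]:", t[idx], "strain:", smooth[idx])
```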

Keywords: Savitzky–Golay filter, scarring, skin, viscoelasticity

Procedia PDF Downloads 302
1076 Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique

Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan

Abstract:

In this paper, a comparison of k-Nearest Neighbor (k-NN) algorithms for classifying the 3D EEG model in brain balancing is presented. The EEG signal recording was conducted on 51 healthy subjects. Development of the 3D EEG models involves pre-processing of raw EEG signals and construction of spectrogram images. Then, maximum PSD values were extracted as features from the model. There are three indices for the balanced brain: index 3, index 4 and index 5. There are significant differences in the EEG signals according to the brain balancing index (BBI). The alpha (α, 8–13 Hz) and beta (β, 13–30 Hz) bands were used as input signals for the classification model. The k-NN classification achieves 88.46% accuracy. These results prove that k-NN can be used to predict the brain balancing index.
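
The paper's 3D EEG features are maximum PSD values in the alpha and beta bands; since the actual data are not available, the snippet below is only an illustrative sketch of the classification step with scikit-learn's k-NN on a placeholder feature matrix. Feature dimensions, k, and labels are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: one row per subject/epoch,
# columns = maximum PSD values in the alpha and beta bands per channel.
rng = np.random.default_rng(1)
X = rng.random((51, 8))                       # 51 subjects, 8 PSD features (illustrative)
y = rng.integers(3, 6, size=51)               # brain balancing index: 3, 4 or 5

knn = KNeighborsClassifier(n_neighbors=5)     # k is an assumption, not from the paper
scores = cross_val_score(knn, X, y, cv=5)
print("cross-validated accuracy: %.2f%%" % (100 * scores.mean()))
```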

Keywords: power spectral density, 3D EEG model, brain balancing, kNN

Procedia PDF Downloads 484
1075 Real-Time Demonstration of Visible Light Communication Based on Frequency-Shift Keying Employing a Smartphone as the Receiver

Authors: Fumin Wang, Jiaqi Yin, Lajun Wang, Nan Chi

Abstract:

In this article, we demonstrate a visible light communication (VLC) system over an 8-meter free-space transmission link based on a commercial LED and a receiver connected to the audio interface of a smartphone. The signal is in the FSK modulation format. The successful experimental demonstration validates the feasibility of the proposed system for future wireless communication networks.

Keywords: visible light communication, smartphone communication, frequency shift keying, wireless communication

Procedia PDF Downloads 389
1074 Theoretical Analysis of Mechanical Vibration for Offshore Platform Structures

Authors: Saeed Asiri, Yousuf Z. AL-Zahrani

Abstract:

A new class of support structures, called periodic structures, is introduced in this paper as a viable means for isolating the vibration transmitted from the sea waves to offshore platform structures through their legs. A passive approach to reduce transmitted vibration generated by waves is presented. The approach utilizes the property of periodic structural components that creates stop and pass bands. The stop band regions can be tailored to correspond to regions of the frequency spectra that contain harmonics of the wave frequency, attenuating the response in those regions. A periodic structural component is composed of a repeating array of cells, which are themselves assemblies of elements. The elements may have differing material properties as well as geometric variations. For the purpose of this research, only geometric and material variations are considered, and each cell is assumed to be identical. A periodic leg is designed in order to reduce the vibration transmitted by sea waves. The effectiveness of the periodicity on the vibration levels of the platform is demonstrated theoretically. The theory governing the operation of this class of periodic structures is introduced using the transfer matrix method. The unique filtering characteristics of periodic structures are demonstrated as functions of their design parameters for structures with geometrical and material discontinuities. The propagation factor is determined using spectral finite element analysis, and the effect of the leg design is demonstrated by changing the ratio of step lengths and the interface area between the materials in order to obtain the propagation factor and the frequency response.
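
As a complement to the transfer matrix discussion, the snippet below is only a minimal sketch of the standard 1-D approach for longitudinal waves in a two-segment periodic rod: the unit-cell transfer matrix is the product of the segment matrices, and frequencies where |trace(T)/2| > 1 fall in stop bands. Material properties and segment lengths are illustrative assumptions, not the paper's leg design.

```python
import numpy as np

def segment_matrix(omega, L, E, rho, A):
    """Transfer matrix of a uniform rod segment relating (displacement, internal force)."""
    k = omega * np.sqrt(rho / E)          # longitudinal wavenumber
    kEA = k * E * A
    return np.array([[np.cos(k * L),         np.sin(k * L) / kEA],
                     [-kEA * np.sin(k * L),  np.cos(k * L)]])

# Illustrative two-material cell: a steel segment followed by a rubber-like segment
steel  = dict(E=210e9, rho=7800.0, A=0.05, L=1.0)
rubber = dict(E=0.1e9, rho=1100.0, A=0.05, L=0.5)

freqs = np.linspace(1.0, 500.0, 2000)          # Hz
half_trace = np.empty_like(freqs)
for i, f in enumerate(freqs):
    w = 2 * np.pi * f
    T = segment_matrix(w, steel["L"], steel["E"], steel["rho"], steel["A"]) @ \
        segment_matrix(w, rubber["L"], rubber["E"], rubber["rho"], rubber["A"])
    half_trace[i] = 0.5 * np.trace(T)

# Bloch condition: |trace(T)/2| <= 1 -> pass band, > 1 -> stop band
stop_band = np.abs(half_trace) > 1.0
print("fraction of the 1-500 Hz range inside stop bands:", stop_band.mean())
```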

Keywords: vibrations, periodic structures, offshore, platforms, transfer matrix method

Procedia PDF Downloads 289
1073 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low time resolution of low-Earth-orbit SAR and the need for SAR data with high time resolution, GEO (geosynchronous orbit) SAR is getting more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude. For moving marine vessels, the utility and efficacy of GEO SAR are still uncertain. This paper attempts to assess the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented GEO SAR simulator is a geometry-based radar imaging simulator, which focuses on geometrical quality rather than high radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and the SAR image. The simulation process is accomplished in the following four steps. (1) Reading the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting the information of the small primitives (triangles) that are visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth. The radiometric calculation of each primitive is carried out separately. Since this simulator only focuses on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling. Since the usual 'stop-and-go' model is not valid for GEO SAR, the range model has to be reconsidered. (4) Finally, generating the GEO SAR image with an improved range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given. GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 175
1072 Study on the Impact of Default Converter on the Quality of Energy Produced by DFIG Based Wind Turbine

Authors: N. Zerzouri, N. Benalia, N. Bensiali

Abstract:

This work is devoted to an analysis of the operation of a doubly fed induction generator (DFIG) integrated into a wind energy system. The power transfer between the stator and the network is carried out by acting on the rotor via a bidirectional converter. The analysis focuses on a fault in the converter due to an interruption of the control of a semiconductor. Simulation results obtained with the MATLAB/Simulink software illustrate the quality of the power generated under the fault condition.

Keywords: doubly fed induction generator (DFIG), wind energy, PWM inverter, modeling

Procedia PDF Downloads 316
1071 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit transforms time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space" where each populated T-F position contains an amplitude weight. The weight space vector along with the atomic dictionary represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences. Two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, and the method still produces ~93% accuracy at 0 dB SNR.
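
Since the matching pursuit dictionary and TIMIT data are not included here, the snippet below only sketches the final classification step described in the abstract: each sentence is summarized as a probability distribution over atomic indices, and a test sentence is assigned to the speaker whose training distribution is closest in Euclidean distance. The atomic indices shown are placeholders.

```python
import numpy as np

def index_probabilities(atom_indices, dict_size):
    """Turn the atomic indices selected by matching pursuit into a probability vector."""
    counts = np.bincount(atom_indices, minlength=dict_size)
    return counts / counts.sum()

dict_size = 64   # size of the learned atom dictionary (illustrative)

# Placeholder atomic indices per speaker/sentence (would come from matching pursuit)
rng = np.random.default_rng(0)
train = {
    "speaker_A": index_probabilities(rng.integers(0, 32, 200), dict_size),
    "speaker_B": index_probabilities(rng.integers(32, 64, 200), dict_size),
}
test_sentence = index_probabilities(rng.integers(0, 32, 180), dict_size)

# Nearest-template classification by Euclidean distance between probability vectors
distances = {spk: np.linalg.norm(test_sentence - p) for spk, p in train.items()}
print("predicted speaker:", min(distances, key=distances.get))
```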

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 288
1070 Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System

Authors: Rahman Ali, Muhammad Sajjad, Farkhund Iqbal, Muhammad Sadiq Hassan Zada, Mohammed Hussain

Abstract:

Emission of Carbon Dioxide (CO2) has adversely affected the environment. One of the major sources of CO2 emission is transportation. In the last few decades, the increase in the mobility of people using vehicles has enormously increased the emission of CO2 into the environment. To reduce CO2 emission, a sustainable transportation system is required, in which smart parking is one of the important measures that needs to be established. To contribute to the issue of reducing the amount of CO2 emission, this research proposes a smart parking system. A cloud-based solution is provided to drivers that automatically searches for and recommends the most preferred parking slots. To determine preferences of the parking areas, this methodology exploits a number of unique parking features, which ultimately results in the selection of a parking area that leads to the minimum level of CO2 emission from the current position of the vehicle. To realize the methodology, a scenario-based implementation is considered. During the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features and a list of parking areas with parking features are used with sorting, multi-level filtering, exploratory data analysis (EDA), the Analytical Hierarchy Process (AHP) and the weighted sum model (WSM) to rank the parking areas and recommend to drivers the top-k most preferred parking areas. In the EDA process, "2020testcar-2020-03-03", a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, results of the proposed system are compared with the conventional approach, which reveals that the proposed methodology outperforms the conventional one in reducing the emission of CO2 into the atmosphere.
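
The actual parking features and AHP-derived weights are not given in the abstract, so the snippet below only sketches the ranking step with a weighted sum model: criteria are normalized, multiplied by weights, and the parking areas with the highest scores are recommended. Criteria names, weights, and values are illustrative assumptions.

```python
import numpy as np

# Candidate parking areas with illustrative criteria:
# distance from the vehicle [km] (lower is better), free slots (higher is better),
# estimated extra CO2 to reach it [g] (lower is better).
parking = {
    "P1": [1.2, 30, 260],
    "P2": [0.5, 5, 110],
    "P3": [2.0, 55, 430],
    "P4": [0.8, 12, 170],
}
names = list(parking)
M = np.array([parking[p] for p in names], dtype=float)

benefit = np.array([False, True, False])     # which criteria are "higher is better"
weights = np.array([0.3, 0.2, 0.5])          # e.g. obtained from AHP (assumed values)

# Min-max normalization, flipping cost criteria so that 1 is always best
norm = (M - M.min(axis=0)) / (M.max(axis=0) - M.min(axis=0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

scores = norm @ weights                      # weighted sum model
top_k = [names[i] for i in np.argsort(scores)[::-1][:2]]
print("recommended parking areas:", top_k)
```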

Keywords: car parking, CO2, CO2 reduction, IoT, merge sort, number plate recognition, smart car parking

Procedia PDF Downloads 144
1069 Detection of Patient Roll-Over Using High-Sensitivity Pressure Sensors

Authors: Keita Nishio, Takashi Kaburagi, Yosuke Kurihara

Abstract:

Recent advances in medical technology have served to enhance average life expectancy. However, the total time for which patients are prescribed complete bedrest has also increased. Since patients who maintain a constant lying posture are prone to pressure ulcers (bedsores), the development of a system to detect patient roll-over becomes imperative. For this purpose, extant studies have proposed the use of cameras, and favorable results have been reported. Continuous on-camera monitoring, however, tends to violate patient privacy. We have proposed an unconstrained bio-signal measurement system that can detect body motion during sleep and does not violate patient privacy. Therefore, in this study, we propose a roll-over detection method based on the data obtained from this bio-signal measurement system. Signals recorded by the sensor were assumed to comprise respiration, pulse, body-motion, and noise components. Compared with the respiration and pulse components, the body-motion component during roll-over generates large vibrations. Thus, analysis of the body-motion component facilitates detection of the roll-over tendency. The large vibration associated with the roll-over motion has a great effect on the Root Mean Square (RMS) value of the time series of the body-motion component calculated over short 10 s segments. After calculation, the RMS value of each segment was compared to a threshold value set in advance. If the RMS value in any segment exceeded the threshold, the corresponding data were considered to indicate the occurrence of a roll-over. In order to validate the proposed method, we conducted an experiment. A bi-directional microphone was adopted as a high-sensitivity pressure sensor and was placed between the mattress and bed frame. Recorded signals passed through an analog band-pass filter (BPF) operating over the 0.16–16 Hz bandwidth. The BPF allowed the respiration, pulse, and body-motion components to pass whilst removing the noise component. The BPF output was A/D converted at a sampling frequency of 100 Hz, and the measurement time was 480 seconds. The numbers of subjects and recordings were 5 and 10, respectively. Subjects lay on a mattress in the supine position. During data measurement, subjects, upon the investigator's instruction, were asked to roll over into four different positions: supine to left lateral, left lateral to prone, prone to right lateral, and right lateral to supine. Recorded data were divided into 48 segments at 10 s intervals, and the corresponding RMS value for each segment was calculated. The system was evaluated by the agreement between the investigator's instruction and the detected segment. As a result, an accuracy of 100% was achieved. While reviewing the time series of recorded data, segments indicating roll-over tendencies were observed to demonstrate a large amplitude. However, clear differences between decubitus postures and the roll-over motion could not be confirmed. Extant research has a disadvantage in terms of patient privacy. The proposed study, however, demonstrates more precise detection of patient roll-over tendencies without violating their privacy. As a future prospect, decubitus estimation before and after roll-over could be attempted. Since clear differences between decubitus postures and the roll-over motion could not be confirmed in this study, future work could utilize the respiration and pulse components.
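
A minimal sketch of the segment-wise RMS thresholding described above is given below, assuming a 100 Hz band-pass-filtered recording of 480 s split into 48 non-overlapping 10 s segments; the threshold value and the synthetic signal are illustrative assumptions.

```python
import numpy as np

fs = 100                       # sampling frequency [Hz]
seg_len = 10 * fs              # 10 s segments -> 1000 samples each
threshold = 0.5                # RMS threshold set in advance (illustrative value)

# Synthetic 480 s body-motion component: low-level background with a
# large burst around t = 120 s standing in for a roll-over.
rng = np.random.default_rng(0)
signal = 0.05 * rng.standard_normal(480 * fs)
signal[120 * fs:123 * fs] += 2.0 * rng.standard_normal(3 * fs)

# Segment-wise RMS and threshold comparison
segments = signal.reshape(-1, seg_len)                 # 48 segments of 10 s
rms = np.sqrt(np.mean(segments**2, axis=1))
rollover_segments = np.flatnonzero(rms > threshold)

print("segments flagged as roll-over:", rollover_segments)   # expected: segment 12
```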

Keywords: bedsore, high-sensitivity pressure sensor, roll-over, unconstrained bio-signal measurement

Procedia PDF Downloads 120
1068 A Structured Mechanism for Identifying Political Influencers on Social Media Platforms: Top 10 Saudi Political Twitter Users

Authors: Ahmad Alsolami, Darren Mundy, Manuel Hernandez-Perez

Abstract:

Social media networks, such as Twitter, offer the perfect opportunity to either positively or negatively affect the political attitudes of large audiences. The existence of influential users who have developed a reputation for their knowledge and experience of specific topics is a major factor contributing to this impact. Therefore, knowledge of the mechanisms to identify influential users on social media is vital for understanding their effect on their audience. The concept of the influential user is related to the concept of 'opinion leaders', which indicates that ideas first flow from mass media to opinion leaders and then to the rest of the population. Hence, the objective of this research was to provide reliable and accurate structural mechanisms to identify influential users, which could be applied to different platforms, places, and subjects. Twitter was selected as the platform of interest, and Saudi Arabia as the context for the investigation. These were selected because Saudi Arabia has a large number of Twitter users, some of whom are considerably active in setting agendas and disseminating ideas. The study considered the scientific methods that have previously been used to identify public opinion leaders, utilizing metrics software on Twitter. The key findings propose multiple novel metrics to compare Twitter influencers, including the number of followers, social authority and the use of political hashtags, and four secondary filtering measures. Thus, using ratio and percentage calculations to classify the most influential users, Twitter accounts were filtered, analyzed and included. The structured approach is used as a mechanism to explore the top ten influencers on Twitter from the political domain in Saudi Arabia.

Keywords: Twitter, influencers, structured mechanism, Saudi Arabia

Procedia PDF Downloads 117
1067 Tip-Enhanced Raman Spectroscopy with Plasmonic Lens Focused Longitudinal Electric Field Excitation

Authors: Mingqian Zhang

Abstract:

Tip-enhanced Raman spectroscopy (TERS) is a scanning probe technique for the investigation of individual objects and structured surfaces that provides a wealth of enhanced spectral information with nanoscale spatial resolution and high detection sensitivity. It has become a powerful and promising chemical and physical information detection method at the nanometer scale. The TERS technique uses a sharp metallic tip regulated in the near field of a sample surface, which is illuminated with an incident beam meeting the wave-vector matching excitation conditions. The local electric field, and, consequently, the Raman scattering, from the sample in the vicinity of the tip apex are both greatly tip-enhanced owing to the excitation of localized surface plasmons and the lightning-rod effect. Typically, a TERS setup is composed of a scanning probe microscope, excitation and collection optical configurations, and a Raman spectrometer. In the illumination configuration, an objective lens or a parabolic mirror is always used as the most important component, in order to focus the incident beam on the tip apex for excitation. In this research, a novel TERS setup was built by introducing a plasmonic lens to the excitation optics as a focusing device. A plasmonic lens with symmetry-breaking semi-annular slits corrugated on a gold film was designed to generate concentrated sub-wavelength light spots with a strong longitudinal electric field. Compared to conventional far-field optical components, the designed plasmonic lens not only focuses an incident beam to a sub-wavelength light spot, but also realizes a strong z-component that dominates the electric field illumination, which is ideal for the excitation of the tip enhancement. Therefore, using a PL in the illumination configuration of TERS contributes to improving the detection sensitivity by both reducing the far-field background and effectively exciting the localized electric field enhancement. The FDTD method was employed to investigate the optical near-field distribution resulting from the light-nanostructure interaction, and the optical field distribution was characterized using a scattering-type scanning near-field optical microscope to demonstrate the focusing performance of the lens. The experimental result is in agreement with the theoretically calculated one. It verifies the focusing performance of the plasmonic lens. The optical field distribution shows a bright elliptic spot in the lens center and several arc-like side lobes on both sides. After the focusing performance was experimentally verified, the designed plasmonic lens was used as a focusing component in the excitation configuration of the TERS setup to concentrate incident energy and generate a longitudinal optical field. A collimated, linearly polarized laser beam, polarized along the x-axis, was incident on the plasmonic lens from the bottom glass side. The incident light focused by the plasmonic lens interacted with the silver-coated tip apex and enhanced the Raman signal of the sample locally. The scattered Raman signal was gathered by a parabolic mirror and detected with a Raman spectrometer. Then, the plasmonic-lens-based setup was employed to investigate carbon nanotubes, and a TERS experiment was performed. Experimental results indicate that the Raman signal is considerably enhanced, which proves that the novel TERS configuration is feasible and promising.

Keywords: longitudinal electric field, plasmonics, Raman spectroscopy, tip-enhancement

Procedia PDF Downloads 372
1066 Calpains; Insights Into the Pathogenesis of Heart Failure

Authors: Mohammadjavad Sotoudeheian

Abstract:

The prevalence of heart failure (HF), a global cardiovascular problem, is increasing gradually. A variety of molecular mechanisms contribute to HF. Proteins involved in cardiac contractility regulation, such as ion channels and calcium handling proteins, are altered. Additionally, epigenetic modifications and changes in gene expression can lead to altered cardiac function. Moreover, inflammation and oxidative stress contribute to HF. The progression of HF can be attributed to mitochondrial dysfunction that impairs energy production and increases apoptosis. Molecular mechanisms such as these contribute to the development of cardiomyocyte defects and HF and can be therapeutically targeted. The heart's contractile function is controlled by cardiomyocytes. Calpain and its related molecules, including Bax, VEGF, and AMPK, are among the proteins involved in regulating cardiomyocyte function. Bax facilitates and regulates cardiomyocyte apoptosis. Furthermore, cardiomyocyte survival, contractility, wound healing, and proliferation are all regulated by VEGF, which is produced by cardiomyocytes during inflammation and cytokine stress. Cardiomyocyte proliferation and survival are also influenced by AMPK, an enzyme that plays an active role in energy metabolism. They all play key roles in apoptosis, angiogenesis, hypertrophy, and metabolism during myocardial inflammation. The role of calpains has been linked to several molecular pathways. The calpain pathway plays an important role in signal transduction and apoptosis, as well as autophagy, endocytosis, and exocytosis. Cell death and survival are regulated by these calcium-dependent cysteine proteases, which cleave proteins; the resulting protein fragments can be used for various cellular functions. By cleaving adhesion and motility proteins, calpains also contribute to cell migration. HF may be brought about by calpain-mediated pathways. Many physiological processes, including signal transduction, cell death, and cell migration, are regulated by these molecular pathways. In the presence of calcium, calmodulin activates calpain. Calpains are stimulated by calcium, which increases matrix metalloproteinases (MMPs). In order to develop novel treatments for these diseases, we must understand how this pathway works. A variety of myocardial remodeling processes involve calpains, including remodeling of the extracellular matrix and hypertrophy of cardiomyocytes. Calpains also play a role in maintaining cardiac homeostasis through apoptosis and autophagy. The development of HF may be in part due to calpain-mediated pathways promoting cardiomyocyte death. Numerous studies have suggested the importance of the Ca²⁺-dependent protease calpain in cardiac physiology and pathology. Therefore, it is important to consider this pathway to develop and test therapeutic options in humans that target calpain in HF. Apoptosis, autophagy, endocytosis, exocytosis, signal transduction, and disease progression all involve calpain molecular pathways. Therefore, it is conceivable that calpain inhibitors might have therapeutic potential, as they have been investigated in preclinical models of several conditions in which the enzyme has been implicated. Ca²⁺-dependent calpain proteases contribute to adverse ventricular remodeling and HF in multiple experimental models. In this manuscript, we will discuss the important roles of the calpain molecular pathway in HF development.

Keywords: calpain, heart failure, autophagy, apoptosis, cardiomyocyte

Procedia PDF Downloads 66
1065 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children

Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura

Abstract:

Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. The diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided sigmatism diagnosis tools. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic sibilant pronunciation evaluation. The study covers analysis of various speech signal measures, including features proposed in the literature for the description of normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM) and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed for the comparison of sigmatism and normative pronunciation. Statistically significant differences are obtained in most of the proposed features between the children divided into these two groups at p < 0.05. All spectral moments and fricative formants appear to be distinctive between pathology and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools. Correspondences found between phoneme feature values and an expert evaluation of pronunciation correctness encourage the involvement of speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
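
The statistical comparison described above can be reproduced in outline with SciPy's Mann-Whitney U test; the snippet below is only an illustrative sketch on placeholder feature values (e.g., one spectral moment) for the normative and pathological groups, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Placeholder values of one acoustic feature (e.g. the first normalized spectral
# moment) for /ʃ/ segments from the two groups; real values would come from the corpus.
rng = np.random.default_rng(0)
normative = rng.normal(loc=5.2, scale=0.6, size=120)
pathology = rng.normal(loc=4.6, scale=0.7, size=104)

# Two-sided Mann-Whitney U test, as applied to each of the 51 features
stat, p_value = mannwhitneyu(normative, pathology, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.2e}, significant at 0.05: {p_value < 0.05}")
```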

Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification

Procedia PDF Downloads 299
1064 Epileptic Seizure Prediction by Exploiting Signal Transitions Phenomena

Authors: Mohammad Zavid Parvez, Manoranjan Paul

Abstract:

A seizure prediction method is proposed by extracting global features using phase correlation between adjacent epochs for detecting relative changes, and local features using fluctuation/deviation within an epoch for determining fine changes, of different EEG signals. A classifier and a regularization technique are applied for the reduction of false alarms and improvement of the overall prediction accuracy. The experiments show that the proposed method outperforms the state-of-the-art methods and provides high prediction accuracy (i.e., 97.70%) with a low false alarm rate using EEG signals from different brain locations in a benchmark data set.
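
The global feature mentioned above relies on phase correlation between adjacent EEG epochs; the snippet below is only a generic sketch of 1-D phase correlation via the normalized cross-power spectrum, applied to two placeholder epochs. It is not the authors' feature pipeline, and the signals and epoch length are assumptions.

```python
import numpy as np

def phase_correlation(epoch_a, epoch_b):
    """Return the phase-correlation sequence between two equally long 1-D epochs."""
    Fa, Fb = np.fft.fft(epoch_a), np.fft.fft(epoch_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12            # keep phase only (normalized cross-power spectrum)
    return np.real(np.fft.ifft(cross))

fs, epoch_len = 256, 256 * 5                  # 5 s epochs at 256 Hz (illustrative)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(epoch_len * 2)      # placeholder EEG trace covering two adjacent epochs

pc = phase_correlation(eeg[:epoch_len], eeg[epoch_len:])
peak_value, peak_shift = pc.max(), int(np.argmax(pc))
# The peak height/location summarizes the relative change between adjacent epochs
print("phase-correlation peak:", round(peak_value, 3), "at shift", peak_shift)
```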

Keywords: epilepsy, seizure, phase correlation, fluctuation, deviation

Procedia PDF Downloads 465