Search results for: weak target signals
3330 Robust Features for Impulsive Noisy Speech Recognition Using Relative Spectral Analysis
Authors: Hajer Rahali, Zied Hajaiej, Noureddine Ellouze
Abstract:
The goal of speech parameterization is to extract the relevant information about what is being spoken from the audio signal. In speech recognition systems, Mel-Frequency Cepstral Coefficients (MFCC) and Relative Spectral Mel-Frequency Cepstral Coefficients (RASTA-MFCC) are the two main techniques used. This paper presents some modifications to the original MFCC method. In our work, the effectiveness of the proposed changes to MFCC, called Modified Function Cepstral Coefficients (MODFCC), was tested and compared against the original MFCC and RASTA-MFCC features. Prosodic features such as jitter and shimmer are added to the baseline spectral features. The above-mentioned techniques were tested with impulsive signals under various noisy conditions using the AURORA databases.
Keywords: auditory filter, impulsive noise, MFCC, prosodic features, RASTA filter
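As a sketch of the filterbank arithmetic behind MFCC-style features (not the authors' MODFCC code; the mapping below is the standard 2595·log10(1 + f/700) mel formula):

```python
import math

def hz_to_mel(f_hz):
    """Standard mel-scale mapping used when building MFCC filterbanks."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse mapping, used to place triangular filter centers back in Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centers(f_low, f_high, n_filters):
    """Center frequencies (Hz) of n_filters triangular filters spaced
    uniformly on the mel scale between f_low and f_high."""
    m_low, m_high = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (m_high - m_low) / (n_filters + 1)
    return [mel_to_hz(m_low + step * (i + 1)) for i in range(n_filters)]
```

Uniform spacing in mel gives filters that are narrow at low frequencies and wide at high frequencies, mimicking auditory resolution.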
Procedia PDF Downloads 423
3329 A New Dual Forward Affine Projection Adaptive Algorithm for Speech Enhancement in Airplane Cockpits
Authors: Djendi Mohmaed
Abstract:
In this paper, we propose a dual adaptive algorithm based on the combination of the forward blind source separation (FBSS) structure and the affine projection algorithm (APA). The proposed algorithm combines the advantages of the source separation properties of the FBSS structure and the fast convergence characteristics of the APA. It needs two noisy observations to provide an enhanced speech signal, and this is done in a blind manner, without the need for any a priori information about the source signals. The proposed dual forward blind source separation affine projection algorithm, denoted DFAPA, is used for the first time in an airplane cockpit context to enhance communication to and from the airplane. Intensive experiments were carried out to evaluate the performance of the proposed DFAPA algorithm.
Keywords: adaptive algorithm, speech enhancement, system mismatch, SNR
Procedia PDF Downloads 133
3328 Compensation of Power Quality Disturbances Using DVR
Authors: R. Rezaeipour
Abstract:
One of the key aspects of power quality improvement in power systems is the mitigation of voltage sags/swells and flicker. Custom power devices are known as the best tools for voltage disturbance mitigation as well as reactive power compensation. The dynamic voltage restorer (DVR), the most efficient and effective modern custom power device, can provide the most commercial solution to several power quality problems in distribution networks. This paper deals with the analysis and simulation of a DVR based on instantaneous power theory, which provides fast detection of disturbance signals. The main purpose of this work is to mitigate three important disturbances: voltage sags, voltage swells, and flicker. Simulation of the proposed method was carried out on two sample systems using the MATLAB software environment, and the simulation results show that the proposed method is able to provide desirable power quality in the presence of a wide range of disturbances.
Keywords: DVR, power quality, voltage sags, voltage swells, flicker
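The sag/swell detection a DVR controller performs can be illustrated with a simple sliding-window RMS check (a minimal sketch with illustrative ±10% per-unit thresholds; the paper's actual detection uses instantaneous power theory, which is faster than RMS windowing):

```python
import math

# Illustrative per-unit thresholds; real DVR standards define these bands.
SAG_PU, SWELL_PU = 0.9, 1.1

def windowed_rms(samples, window):
    """RMS of the voltage waveform over a sliding window of samples."""
    out = []
    for i in range(len(samples) - window + 1):
        w = samples[i:i + window]
        out.append(math.sqrt(sum(s * s for s in w) / window))
    return out

def classify(rms_pu_values):
    """Label each windowed RMS value (in per-unit) as sag, swell, or normal."""
    return ["sag" if r < SAG_PU else "swell" if r > SWELL_PU else "normal"
            for r in rms_pu_values]
```

The DVR then injects the series voltage needed to restore the load bus to the normal band.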
Procedia PDF Downloads 344
3327 Structural Magnetic Properties of Multiferroic (BiFeO3)1−x(PbTiO3)x Ceramics
Authors: Mohammad Shariq, Davinder Kaur
Abstract:
A series of multiferroic (BiFeO3)1−x(PbTiO3)x [x = 0, 0.1, 0.2, 0.3, 0.4 and 0.5] solid solution ceramics was synthesised by the conventional solid-state reaction method. A well-crystalline phase was obtained at an optimized sintering temperature of 950°C for 2 hours. X-ray diffraction studies of these ceramics revealed the existence of a morphotropic phase boundary (MPB) region in this system, which exhibits co-existence of the rhombohedral and tetragonal phases with a large tetragonality (c/a ratio) in the tetragonal phase region. The average grain size of the samples was found to be between 1-1.5 µm. The M-H curve revealed BiFeO3 (BFO) to be antiferromagnetic, whereas induced weak ferromagnetism was observed for the (BiFeO3)1−x(PbTiO3)x composites with x = 0.1, 0.2, 0.3, 0.4 and 0.5 at a temperature of 5 K. The results evidenced the destruction of the space-modulated spin structure in the bulk materials via substituent effects, releasing a latent magnetization locked within the cycloid. Relative to unmodified BiFeO3, the modified BiFeO3-PbTiO3-based ceramics revealed enhancement in the electric-field-induced polarization.
Keywords: (BiFeO3)1−x(PbTiO3)x ceramic, multiferroic, SQUID, magnetic properties
Procedia PDF Downloads 345
3326 A Religious Book Translation by Pragmatic Approach: The Vajrachedika-Prajna-Paramita Sutra
Authors: Yoon-Cheol Park
Abstract:
This research examines the Chinese character-Korean language translation of the Vajrachedika-prajna-paramita sutra through a pragmatic approach. The background of this research is that, until now, no previous research has examined the translation of the Vajrachedika-prajna-paramita sutra from a pragmatic perspective. Even though it is composed of conversational structures between Buddha and his disciple, unlike other Buddhist sutras, most of its translations have pursued literal translation and have overlooked the pragmatic elements in it. Accordingly, it is meaningful to examine its messages through the speaker-hearer relation and the relation between speaker intention and utterance meaning. Practically, the Vajrachedika-prajna-paramita sutra includes pragmatic elements such as speech acts, presupposition, conversational implicature, the cooperative principle, and politeness. First, speech acts in the sutra text allow the translation to convey the obvious performative meanings of language into the target text. Presupposition in the dialogues is conveyed by paraphrasing or substituting abstruse language with easy expressions. Conversational implicature in utterances makes it possible to understand the meanings of holy words by relying on utterance contexts. In particular, relevance results in an increase of readability in the translation owing to previous utterance contexts. Finally, politeness in the target text is conveyed with natural stylistics through the honorific system of the Korean language. These elements mean that the pragmatic approach can function as a useful device in conveying holy words in a specific, practical, and direct way depending on utterance contexts. Therefore, we expect that taking a pragmatic approach in translating the Vajrachedika-prajna-paramita sutra will provide a theoretical foundation for seeking better translation methods than the literal translations of the past. It also implies that the translation of Buddhist sutras needs to convey messages by translation methods which take into account the characteristics of sutra texts like the Vajrachedika-prajna-paramita.
Keywords: Buddhist sutra, Chinese character-Korean language translation, pragmatic approach, utterance context
Procedia PDF Downloads 399
3325 Overview of Wireless Body Area Networks
Authors: Rashi Jain
Abstract:
Wireless Body Area Networks (WBANs) are an emerging interdisciplinary area where small sensors are placed on or within the human body. These sensors monitor the physiological activities and vital statistics of the body. The data from these sensors are aggregated and communicated to a remote doctor for immediate attention or to a database for records. On 6 Feb 2012, the IEEE 802.15.6 task group approved the standard for Body Area Network (BAN) technologies. The standard proposes the physical and MAC layers for WBANs. This work provides an introduction to WBANs and an overview of the physical and MAC layers of the standard. The physical layer specifications are covered, and a comparison of different protocols used at the MAC layer is drawn. An introduction to the network layer and the security aspects of WBANs is also made. WBANs suffer certain limitations, such as regulation of frequency bands, minimizing the effect of transmission and reception of electromagnetic signals on the human body, and maintaining energy efficiency, among others. This has slowed down their implementation.
Keywords: vehicular networks, sensors, MicroController 8085, LTE
Procedia PDF Downloads 259
3324 Investigating the Online Effect of Language on Gesture in Advanced Bilinguals of Two Structurally Different Languages in Comparison to L1 Native Speakers of L2 and Explores Whether Bilinguals Will Follow Target L2 Patterns in Speech and Co-speech
Authors: Armita Ghobadi, Samantha Emerson, Seyda Ozcaliskan
Abstract:
Being bilingual involves mastery of both speech and gesture patterns in a second language (L2). We know from earlier work in first language (L1) production contexts that speech and co-speech gesture form a tightly integrated system: co-speech gesture mirrors the patterns observed in speech, suggesting an online effect of language on the nonverbal representation of events in gesture during the act of speaking (i.e., "thinking for speaking"). Relatively less is known about the online effect of language on gesture in bilinguals speaking structurally different languages. The few existing studies, mostly with small sample sizes, suggest inconclusive findings: some show greater achievement of L2 patterns in gesture with more advanced L2 speech production, while others show preferences for L1 gesture patterns even in advanced bilinguals. In this study, we focus on advanced bilingual speakers of two structurally different languages (Spanish L1 with English L2) in comparison to L1 English speakers. We ask whether bilingual speakers will follow target L2 patterns not only in speech but also in gesture, or alternatively, follow L2 patterns in speech but resort to L1 patterns in gesture. We examined this question by studying speech and gestures produced by 23 advanced adult Spanish (L1)-English (L2) bilinguals (Mage=22; SD=7) and 23 monolingual English speakers (Mage=20; SD=2). Participants were shown 16 animated motion event scenes that included distinct manner and path components (e.g., "run over the bridge"). We recorded and transcribed all participant responses for speech and segmented them into sentence units that included at least one motion verb and its associated arguments. We also coded all gestures that accompanied each sentence unit. We focused on motion event descriptions as they show strong crosslinguistic differences in the packaging of motion elements in speech and co-speech gesture in first-language production contexts.
English speakers synthesize manner and path into a single clause or gesture (he runs over the bridge; running fingers forward), while Spanish speakers express each component separately (manner-only: el corre = he is running; circling arms next to the body conveying running; path-only: el cruza el puente = he crosses the bridge; tracing a finger forward conveying trajectory). We tallied all responses by group and packaging type, separately for speech and co-speech gesture. Our preliminary results (n=4/group) showed that productions in English L1 and Spanish L1 differed, with a greater preference for conflated packaging in L1 English and separated packaging in L1 Spanish, a pattern that was also largely evident in co-speech gesture. Bilinguals' production in L2 English, however, followed the patterns of the target language in speech, with a greater preference for conflated packaging, but not in gesture. Bilinguals used separated and conflated strategies in gesture at roughly similar rates in their L2 English, showing an effect of both L1 and L2 on co-speech gesture. Our results suggest that online production of L2 language has more limited effects on L2 gestures and that mastery of native-like patterns in L2 gesture might take longer than native-like L2 speech patterns.
Keywords: bilingualism, cross-linguistic variation, gesture, second language acquisition, thinking for speaking hypothesis
Procedia PDF Downloads 74
3323 An Investigation into Libyan Teachers’ Views of Children’s Emotional and Behavioral Difficulties
Authors: Abdelbasit Gadour
Abstract:
A great number of children in mainstream schools across Libya are currently living with emotional and behavioral difficulties. This study aims to explore teachers’ perceptions of children’s emotional and behavioral difficulties (EBD) and their attributions of the causes of EBD. The relevance of this area of study to current educational practice is illustrated by the fact that primary school teachers in Libya find classroom behavior problems one of the major difficulties they face. The information presented in this study was gathered from the 182 teachers who responded to the survey, of whom 27 were later interviewed. In general, teachers’ perceptions of EBD reflect personal experience, training, and attitudes. Teachers appear from this study to use words such as indifferent, frightened, withdrawn, aggressive, disobedient, hyperactive, less ambitious, lacking concentration, and academically weak to describe pupils with EBD. The implications of this study are envisaged as being extremely important in supporting teachers addressing children’s EBD and shedding light on the factors contributing to EBD for a successful teaching-learning process in Libyan primary schools.
Keywords: children, emotional and behavior difficulties, learning, teachers
Procedia PDF Downloads 142
3322 A Voice Signal Encryption Scheme Based on Chaotic Theory
Authors: Hailang Yang
Abstract:
To ensure the confidentiality and integrity of speech signals in communication transmission, this paper proposes a voice signal encryption scheme based on chaotic theory. Firstly, the scheme utilizes chaotic mapping to generate a key stream and then employs the key stream to perform bitwise exclusive OR (XOR) operations for encrypting the speech signal. Additionally, the scheme utilizes a chaotic hash function to generate a Message Authentication Code (MAC), which is appended to the encrypted data to verify the integrity of the data. Subsequently, we analyze the security performance and encryption efficiency of the scheme, comparing and optimizing it against existing solutions. Finally, experimental results demonstrate that the proposed scheme can resist common attacks, achieving high-quality encryption and speed.
Keywords: chaotic theory, XOR encryption, chaotic hash function, Message Authentication Code (MAC)
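The encrypt-then-MAC flow described above can be sketched as follows (an illustrative toy, not the paper's scheme: the logistic map stands in for the unspecified chaotic map, SHA-256 stands in for the chaotic hash function, and a raw logistic-map keystream is not cryptographically secure):

```python
import hashlib

def chaotic_keystream(x0, r, n):
    """Key stream bytes from logistic-map iterations x <- r*x*(1-x)."""
    x, ks = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 255.999) & 0xFF)  # quantize the state to one byte
    return bytes(ks)

def seal(plaintext, x0=0.37, r=3.99):
    """XOR-encrypt with the chaotic key stream, then append a 16-byte MAC."""
    ks = chaotic_keystream(x0, r, len(plaintext))
    ct = bytes(p ^ k for p, k in zip(plaintext, ks))
    return ct + hashlib.sha256(ct).digest()[:16]

def open_sealed(blob, x0=0.37, r=3.99):
    """Verify the MAC, then XOR again with the same key stream to decrypt."""
    ct, mac = blob[:-16], blob[-16:]
    if hashlib.sha256(ct).digest()[:16] != mac:
        raise ValueError("MAC check failed: data corrupted")
    ks = chaotic_keystream(x0, r, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))
```

Because XOR is its own inverse, decryption reuses the identical keystream; the MAC check runs before decryption, matching the integrity-verification step in the abstract.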
Procedia PDF Downloads 50
3321 Keratin Reconstruction: Evaluation of Green Peptides Technology on Hair Performance
Authors: R. Di Lorenzo, S. Laneri, A. Sacchi
Abstract:
Hair surface properties affect hair texture and shine, whereas the healthy state of the hair cortex influences the hair ends. Even if cosmetic treatments are intrinsically safe, they can have a potentially damaging action on the hair fibers. Loss of luster, frizz, split ends, and other hair problems are particularly prevalent among people who repeatedly alter the natural style of their hair or among people with intrinsically weak hair. Technological and scientific innovations in hair care thus become invaluable allies to preserve its natural well-being and shine. The study evaluated restorative keratin-like ingredients that improve the structural integrity of hair fibers, increase tensile strength, and improve hair manageability and moisturization. The hair shaft is composed of 65-95% keratin, which gives the hair resistance, elasticity, and plastic properties and also contributes to its waterproofing. Providing exogenous keratin is, therefore, a practical approach to protecting and nourishing the hair. By analyzing the amino acid composition of keratin, we find a high frequency of hydrophobic amino acids. This confirms the critical role of interactions, mainly hydrophobic, between cosmetic products and hair. The active ingredient analyzed comes from vegetable proteins through an enzymatic cut process that selects only oligo- and polypeptides (> 3500 kDa) rich in amino acids with apolar or sulfur-containing hydrocarbon side chains. These are the amino acids most expressed in the capillary keratin structure, which ensures the greatest possible compatibility with the target substrate. Given the biological variability of the sources, it is difficult to define a constant and reproducible molecular formula for the product; still, it consists of hydroxypropyltrimonium vegetable peptides with keratin-like performance.
20 natural hair tresses (30 cm in length and 0.50 g in weight) were treated with the investigated product (5% v/v aqueous solution) following a specific protocol and compared with non-treated (Control) and benchmark-keratin-treated tresses (Benchmark). Their brightness, moisture content, cortical and surface integrity, and tensile strength were evaluated and statistically compared. Keratin-like-treated hair tresses showed better results than the other two groups (Control and Benchmark). The product improves the surface with significant regularization of the cuticle closure, improves the filling of the cortex and the peri-medullar area, gives a highly organized and tidy structure, delivers a significant amount of sulfur onto the hair, provides more efficient moisturization and imbibition power, and increases hair brightness. The quaternized hydroxypropyltrimonium group added to the C-terminal end interacts with the negative charges that form on the hair after washing, when it is disheveled and tangled. These interactions anchor the product to the hair surface, keeping the cuticles adhered to the shaft. Their small size allows the peptides to penetrate and give body to the hair, together with a conditioning effect that gives an image of healthy hair. The results suggest that the product is a valid ally in numerous restructuring/conditioning, shaft-protection, and straightener/dryer-damage-prevention hair care products.
Keywords: conditioning, hair damage, hair, keratin, polarized light microscopy, scanning electron microscope, thermogravimetric analysis
Procedia PDF Downloads 123
3320 Investigation of Biogas from Slaughterhouse and Dairy Farm Waste
Authors: Saadelnour Abdueljabbar Adam
Abstract:
Wastes from slaughterhouses in most towns in Sudan are often poorly managed and sometimes discharged into adjoining streams due to poor implementation of standards, thus causing environmental and public health hazards; there is also a large amount of manure from dairy farms. This paper presents a solution for organic waste from cow dairy farms and slaughterhouses. We present the findings of an experimental investigation of biogas production in which cow manure, blood, and rumen content were mixed in three proportions: 72.3%, 61%, and 39% manure; 6%, 8.5%, and 22% blood; and 21.7%, 30.5%, and 39% rumen content by volume for bio-digesters 1, 2, and 3, respectively. This paper analyses the quantitative and qualitative composition of the biogas: gas content and the concentration of methane. The highest biogas output, 0.116 L/g dry matter from bio-digester 1, together with a high-quality biogas of 85% methane, was obtained from the mixture of 72.3% cow manure, 6% blood, and 21.7% rumen content, which is useful for combustion and energy production. Bio-digesters 2 and 3 gave 0.012 L/g dry matter and 0.013 L/g dry matter, respectively, with a weak concentration of methane (50%).
Keywords: anaerobic digestion, bio-digester, blood, cow manure, rumen content
Procedia PDF Downloads 567
3319 Association of the Time in Targeted Blood Glucose Range of 3.9–10 Mmol/L with the Mortality of Critically Ill Patients with or without Diabetes
Authors: Guo Yu, Haoming Ma, Peiru Zhou
Abstract:
BACKGROUND: In addition to hyperglycemia, hypoglycemia, and glycemic variability, a decrease in the time in the targeted blood glucose range (TIR) may be associated with an increased risk of death for critically ill patients. However, the relationship between the TIR and mortality may be influenced by the presence of diabetes and glycemic variability. METHODS: A total of 998 diabetic and non-diabetic patients with severe diseases in the ICU were selected for this retrospective analysis. The TIR is defined as the percentage of time spent in the target blood glucose range of 3.9–10.0 mmol/L within 24 hours. The relationship between the TIR and in-hospital death in diabetic and non-diabetic patients was analyzed, and the effect of glycemic variability was also analyzed. RESULTS: The binary logistic regression model showed that there was a significant association between the TIR as a continuous variable and the in-hospital death of severely ill non-diabetic patients (OR=0.991, P=0.015). As a classification variable, TIR≥70% was significantly associated with in-hospital death (OR=0.581, P=0.003). Specifically, TIR≥70% was a protective factor against the in-hospital death of severely ill non-diabetic patients. The TIR of severely ill diabetic patients was not significantly associated with in-hospital death; however, glycemic variability was significantly and independently associated with in-hospital death (OR=1.042, P=0.027). Binary logistic regression analysis of the comprehensive indices showed that for non-diabetic patients, the C3 index (low TIR & high CV) was a risk factor for increased mortality (OR=1.642, P<0.001). In addition, for diabetic patients, the C3 index was an independent risk factor for death (OR=1.994, P=0.008), and the C4 index (low TIR & low CV) was independently associated with increased survival.
CONCLUSIONS: The TIR of non-diabetic patients during ICU hospitalization was associated with in-hospital death even after adjusting for disease severity and glycemic variability. There was no significant association between the TIR and mortality of diabetic patients. However, for both diabetic and non-diabetic critically ill patients, the combined effect of high TIR and low CV was significantly associated with ICU mortality. Diabetic patients seem to have higher blood glucose fluctuations and can tolerate a large TIR range. Both diabetic and non-diabetic critically ill patients should maintain blood glucose levels within the target range to reduce mortality.
Keywords: severe disease, diabetes, blood glucose control, time in targeted blood glucose range, glycemic variability, mortality
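The two glycemic metrics the study combines, TIR and CV, reduce to simple arithmetic over the glucose series (a minimal sketch assuming equally spaced measurements; the 3.9–10.0 mmol/L band is the target range from the abstract):

```python
def time_in_range(glucose_mmol, low=3.9, high=10.0):
    """Percent of readings within the target band, assuming readings are
    equally spaced over the 24 h observation window."""
    hits = sum(1 for g in glucose_mmol if low <= g <= high)
    return 100.0 * hits / len(glucose_mmol)

def coefficient_of_variation(glucose_mmol):
    """Glycemic variability as CV (%) = SD / mean * 100 (population SD)."""
    n = len(glucose_mmol)
    mean = sum(glucose_mmol) / n
    var = sum((g - mean) ** 2 for g in glucose_mmol) / n
    return 100.0 * var ** 0.5 / mean
```

A patient series could then be binned into the abstract's C1-C4 composite indices by thresholding TIR (e.g., 70%) and CV.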
Procedia PDF Downloads 219
3318 Analysis of Matching Pursuit Features of EEG Signal for Mental Tasks Classification
Authors: Zin Mar Lwin
Abstract:
Brain-Computer Interface (BCI) systems have been developed for people who suffer from severe motor disabilities and find it challenging to communicate with their environment. BCI allows them to communicate in a non-muscular way. For communication between human and computer, BCI uses a type of signal called the electroencephalogram (EEG) signal, which is recorded from the human brain by means of electrodes. The EEG signal is an important information source for understanding brain processes in non-invasive BCI. To translate human thought, the acquired EEG signal needs to be classified accurately. This paper proposes a typical EEG signal classification system which experiments on a dataset from Purdue University. The Independent Component Analysis (ICA) method, via EEGLab tools, is used for removing artifacts caused by eye blinks. For feature extraction, the time and frequency features of the non-stationary EEG signals are extracted by the Matching Pursuit (MP) algorithm. The classification of one of five mental tasks is performed by a multi-class Support Vector Machine (SVM). For the SVMs, comparisons have been carried out for both 1-against-1 and 1-against-all methods.
Procedia PDF Downloads 276
3317 Comparison of Water Equivalent Ratio of Several Dosimetric Materials in Proton Therapy Using Monte Carlo Simulations and Experimental Data
Authors: M. R. Akbari , H. Yousefnia, E. Mirrezaei
Abstract:
Range uncertainties of protons are currently a topic of interest in proton therapy. Two of the parameters that are often used to specify proton range are the water equivalent thickness (WET) and the water equivalent ratio (WER). Since WER values for a specific material are nearly constant at different proton energies, the WER is a more useful parameter to compare. In this study, WER values were calculated for different proton energies in polymethyl methacrylate (PMMA), polystyrene (PS) and aluminum (Al) using the FLUKA and TRIM codes. The results were compared with analytical, experimental, and simulated SEICS code data obtained from the literature. In the FLUKA simulation, a cylindrical phantom, 1000 mm in height and 300 mm in diameter, filled with the studied materials, was simulated. A typical mono-energetic proton pencil beam in the wide range of incident energies usually applied in proton therapy (50 MeV to 225 MeV) impinges normally on the phantom. In order to obtain the WER values for the considered materials, cylindrical detectors, 1 mm in height and 20 mm in diameter, were also simulated along the beam trajectory in the phantom. In the TRIM calculations, the type of projectile, energy and angle of incidence, type of target material, and thickness should be defined. The mode of 'detailed calculation with full damage cascades' was selected for proton transport in the target material. The biggest difference in WER values between the codes was 3.19%, 1.9% and 0.67% for Al, PMMA and PS, respectively. In Al and PMMA, the biggest difference between each code and the experimental data was 1.08%, 1.26%, 2.55%, 0.94%, 0.77% and 0.95% for SEICS, FLUKA and SRIM, respectively. FLUKA and SEICS had the greatest agreement (≤0.77% difference in PMMA and ≤1.08% difference in Al, respectively) with the available experimental data in this study. It is concluded that the FLUKA and TRIM codes are capable of Bragg curve simulation and WER value calculation in the studied materials.
They can also predict the Bragg peak location and range of proton beams with acceptable accuracy.
Keywords: water equivalent ratio, dosimetric materials, proton therapy, Monte Carlo simulations
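The two range parameters compared in the abstract reduce to simple ratios (a minimal sketch; the range and thickness values in the usage below are illustrative placeholders, not the paper's simulated data):

```python
def water_equivalent_ratio(range_water_cm, range_material_cm):
    """WER = proton range in water / proton range in the material,
    evaluated at the same incident energy."""
    return range_water_cm / range_material_cm

def water_equivalent_thickness(slab_cm, wer):
    """WET of a slab: its physical thickness scaled by the material's WER."""
    return slab_cm * wer
```

Because the WER is nearly energy-independent for a given material, a single WER value lets a slab of that material be replaced by its water-equivalent thickness in range calculations.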
Procedia PDF Downloads 321
3316 An Improved Circulating Tumor Cells Analysis Method for Identifying Tumorous Blood Cells
Authors: Salvador Garcia Bernal, Chi Zheng, Keqi Zhang, Lei Mao
Abstract:
Circulating tumor cell (CTC) analysis is used to detect tumor cell metastases in blood samples of patients with cancer (lung, breast, etc.). Using an immunofluorescent method, a three-channel image (Red, Green, and Blue) is obtained. These sets of images usually exceed 11 x 30 M pixels in size. An aided tool is designed for cell image analysis to segment and identify the tumorous cells based on the three marker signals. Our method is cell-based (area and cell shape), considering the information in each channel and deciding whether each extracted cell is a valid CTC. The system also gives information about the number and size of tumor cells found in the sample. We present results on real-life samples, achieving acceptable performance in identifying CTCs in a short time.
Keywords: Circulating Tumor Cells (CTC), cell analysis, immunofluorescent, medical image analysis
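The cell-based accept/reject decision described above can be sketched as a rule over area, shape, and the three channel intensities (all thresholds and the marker-to-channel assignment here are hypothetical illustrations, not the paper's calibrated values):

```python
# Hypothetical size/shape bounds for a segmented cell, in pixels.
AREA_MIN, AREA_MAX = 50, 5000

def is_candidate_ctc(area_px, circularity, red, green, blue):
    """Cell-based decision: plausible size and roundness, plus the expected
    marker pattern (here assumed: tumor marker high in red, nucleus stain
    present in blue, leukocyte marker absent in green; intensities in 0-1)."""
    shape_ok = AREA_MIN <= area_px <= AREA_MAX and circularity >= 0.6
    marker_ok = red > 0.5 and blue > 0.5 and green < 0.2
    return shape_ok and marker_ok
```

In a full pipeline this predicate would run on every segmented region, and the surviving cells would be counted and sized as the abstract describes.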
Procedia PDF Downloads 211
3315 Speech Enhancement Using Kalman Filter in Communication
Authors: Eng. Alaa K. Satti Salih
Abstract:
In applications such as telecommunications, hands-free communication, and recording, which need at least one microphone, the signal is usually corrupted by noise and echo. An important application is speech enhancement, which removes unwanted noise and echo picked up by a microphone alongside the desired speech. Accordingly, the microphone signal has to be cleaned using digital signal processing (DSP) tools before it is played out, transmitted, or stored. Engineers have so far tried different approaches to improving the speech by recovering the desired speech signal from the noisy observations, especially in mobile communication. This paper therefore reconstructs a speech signal observed in additive background noise using the Kalman filter technique, estimating the parameters of the autoregressive (AR) process in a state-space model, with the output speech signal obtained in MATLAB. Accurate Kalman filter estimation enhances the speech and reduces the noise; the results are then compared and discussed between actual and estimated values, which produce the reconstructed signals.
Keywords: autoregressive process, Kalman filter, Matlab, noise speech
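The state-space filtering step described above can be sketched for the simplest AR(1) speech model (a minimal scalar Kalman filter; the paper estimates full AR parameters in MATLAB, which this illustrative Python sketch does not reproduce):

```python
def kalman_ar1(observations, a=0.95, q=1e-2, r=1.0):
    """Scalar Kalman filter for the AR(1) state-space model
        x_k = a * x_{k-1} + w_k   (process noise variance q)
        y_k = x_k + v_k           (measurement noise variance r)
    Returns the filtered state estimates, one per observation."""
    x, p = 0.0, 1.0  # initial state estimate and error covariance
    estimates = []
    for y in observations:
        # Predict step: propagate estimate and covariance through the model.
        x_pred = a * x
        p_pred = a * a * p + q
        # Update step: blend prediction with the noisy observation.
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p_pred
        estimates.append(x)
    return estimates
```

With `a = 1` the model degenerates to a random walk, and the filter converges toward a constant observed level, which is a convenient sanity check.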
Procedia PDF Downloads 343
3314 Mechanical Study Printed Circuit Boards Bonding for Jefferson Laboratory Detector
Authors: F. Noto, F. De Persio, V. Bellini, G. Costa, F. Mammoliti, F. Meddi, C. Sutera, G. M. Urcioli
Abstract:
One X plane and one Y plane of silicon microstrip detectors will constitute the front part of the Super Bigbite Spectrometer that is under construction and will be installed in the experimental Hall A of the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory), located in Newport News, Virginia, USA. Each plane will be made up of two nearly identical, 300 μm thick, 10 cm x 10.3 cm wide silicon microstrip detectors with 50 μm pitch, whose electronic signals will be transferred to the front-end electronics based on APV25 chips through C-shaped FR4 Printed Circuit Boards (PCB). A total of about 10000 strips are read out. This paper treats the optimization of the detector support structure and of the materials used, through a finite element simulation. A very important aspect of the study also covers the optimization of the bonding parameters between detector and electronics.
Keywords: FEM analysis, bonding, SBS tracker, mechanical structure
Procedia PDF Downloads 337
3313 Experimental Demonstration of Broadband Erbium-Doped Fiber Amplifier
Authors: Belloui Bouzid
Abstract:
In this paper, a broadband design of an erbium-doped fiber amplifier (EDFA) is demonstrated and proved experimentally. High and broad gain is achieved covering the C and L bands. The technique used combines, in one configuration, two double passes with a split-band structure for the amplification of two traveling signals, one for the C band and the other for the L band. This new topology investigates the trends of high gain and wide amplification at different levels of pump power, input wavelength, and input signal power. The presented paper explores the performance of EDFA gain using what can be called a double-pass, double-branch wideband amplification configuration. The obtained results show a high gain of 44.24 dB and a wide amplification range of 80 nm, respectively.
Keywords: erbium doped fiber amplifier, erbium doped fiber laser, optical amplification, fiber laser
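The reported 44.24 dB figure is the usual logarithmic gain ratio, which can be computed as follows (a minimal sketch; the power values in the test are illustrative, not the paper's measurements):

```python
import math

def gain_db(p_out_mw, p_in_mw):
    """Optical amplifier gain in dB from output/input powers (same units)."""
    return 10.0 * math.log10(p_out_mw / p_in_mw)
```

A gain of 44.24 dB thus corresponds to an output-to-input power ratio of about 10^4.424, i.e., roughly 26,500x.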
Procedia PDF Downloads 252
3312 Enhanced Poly Fluoroalkyl Substances Degradation in Complex Wastewater Using Modified Continuous Flow Nonthermal Plasma Reactor
Authors: Narasamma Nippatlapallia
Abstract:
Communities across the world are desperate to rid their environment of toxic per- and polyfluoroalkyl substances (PFAS), especially when these chemicals are in aqueous media. In the present study, two PFAS of different chain lengths (PFHxA (C6) and PFDA (C10)) were selected for degradation using a modified continuous flow nonthermal plasma (NTP) reactor. The results showed degradation efficiencies of 82.3% for PFHxA and 94.1% for PFDA. The defluorination efficiency was also evaluated: 28% and 34% for PFHxA and PFDA, respectively. The results clearly indicate that the structure of a PFAS has a great impact on its degradation efficiency. The effect of flow rate was studied: with an increase in flow rate beyond 2 mL/min, a decrease in the degradation efficiency of the targeted PFAS was noticed. PFDA degradation decreased from 85% to 42%, and PFHxA degradation from 64% to 32%, with an increase in flow rate from 2 to 5 mL/min. Similarly, with increasing flow rate, the percentage defluorination decreased for both the C10 and C6 compounds. This observation can be attributed mainly to the change in residence time (contact time). Real water/wastewater is a composition of various organic and inorganic ions that may affect the activity of oxidative species such as OH• radicals on the target pollutants. Therefore, it is important to consider radical-quenching chemicals to understand the efficiency of the reactor. In gas-liquid NTP discharge reactors, OH•, e⁻(aq), O•, O3, H2O2, and H• are often considered the reactive species for oxidation and reduction of pollutants. In this work, the role played by two distinct OH• scavengers, ethanol and glycerol, on PFAS percentage degradation and defluorination efficiency (i.e., fluorine removal) was studied. The addition of scavenging agents to the PFAS solution diminished the PFAS degradation to different extents depending on the target compound's molecular structure.
In comparison with the degradation of the PFAS-only solution, the addition of 1.25 M ethanol inhibited C10 and C6 degradation by 8% and 12%, respectively. This research was complemented by analysis of energy efficiency, production rate, specific yield, and fluoride and PFAS concentrations with respect to the optimum hydraulic retention time (HRT) of the continuous flow reactor.
Keywords: wastewater, PFAS, nonthermal plasma, mineralization, defluorination
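The degradation and defluorination percentages above follow directly from inlet/outlet concentrations and the fluorine content of each compound. A minimal sketch of how such efficiencies are typically computed (the function names are our own illustration; the fluorine counts per molecule are standard chemistry, 11 F atoms for PFHxA and 19 for PFDA):

```python
def degradation_efficiency(c0, ct):
    """Percent of the parent PFAS removed between inlet (c0) and outlet (ct)."""
    return 100.0 * (c0 - ct) / c0

# Fluorine atoms per molecule: PFHxA (C5F11COOH) has 11, PFDA (C9F19COOH) has 19.
F_ATOMS = {"PFHxA": 11, "PFDA": 19}

def defluorination_efficiency(fluoride_mol, pfas0_mol, compound):
    """Percent of organically bound fluorine recovered as free fluoride."""
    total_f = pfas0_mol * F_ATOMS[compound]
    return 100.0 * fluoride_mol / total_f

# Example: 82.3% PFHxA degradation corresponds to an outlet at 17.7% of the inlet.
print(round(degradation_efficiency(100.0, 17.7), 1))  # → 82.3
```

Note that a high degradation efficiency with a much lower defluorination efficiency, as reported here, implies partially fluorinated intermediates rather than full mineralization.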
Procedia PDF Downloads 28
3311 Determination of Non-CO2 Greenhouse Gas Emission in Electronics Industry
Authors: Bong Jae Lee, Jeong Il Lee, Hyo Su Kim
Abstract:
Both developed and developing countries adopted the decision to join the Paris Agreement to reduce greenhouse gas (GHG) emissions at the Conference of the Parties (COP) 21 meeting in Paris. As a result, developed and developing countries have to submit their Intended Nationally Determined Contributions (INDC) by 2020, and each country will be assessed on its performance in reducing GHG emissions. After that, they shall propose a reduction target higher than the previous one every five years. Therefore, an accurate method for calculating greenhouse gas emissions is essential as a rationale for implementing GHG reduction measures based on the reduction targets. Non-CO2 GHGs (CF4, NF3, N2O, SF6 and so on) are widely used in the fabrication processes of semiconductor manufacturing and the etching/deposition processes of display manufacturing. The Global Warming Potential (GWP) of these Non-CO2 gases is much higher than that of CO2, which means they have a greater effect on global warming than CO2. GHG calculation methods for the electronics industry are therefore provided by the Intergovernmental Panel on Climate Change (IPCC) and the U.S. Environmental Protection Agency (EPA), and they are discussed at ISO/TC 146 meetings. As discussed earlier, precision and accuracy in calculating Non-CO2 GHG emissions are becoming more important. Thus, this study aims to discuss the implications of the calculation methods by comparing those of the IPCC and the EPA. In conclusion, after analyzing the two methods, the EPA method is more detailed and also provides a calculation for N2O. In the case of the default emission factors, the IPCC provides more conservative results than the EPA; the IPCC factor was developed for calculating national GHG emissions, while the EPA factor was developed specifically for the U.S., which means it must have been developed to address the environmental issues of the US.
The semiconductor factory ‘A’ measured F-gas according to the EPA Destruction and Removal Efficiency (DRE) protocol and estimated its own DRE, and it was observed that its emission factor shows a higher DRE than the default DRE factors of the IPCC and EPA. Therefore, each country can improve its GHG emission calculation by developing its own emission factors (where possible) when reporting Nationally Determined Contributions (NDC). Acknowledgements: This work was supported by the Korea Evaluation Institute of Industrial Technology (No. 10053589).
Keywords: non-CO2 GHG, GHG emission, electronics industry, measuring method
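The IPCC and EPA methods compared above both rest on a mass balance of gas purchased, container heel, process utilization, and abatement (DRE). A sketch of that general Tier-2-style form (the parameterization follows our reading of the inventory guidance; all numeric values are placeholders, not the default factors of either method):

```python
def f_gas_emission_kg(consumed_kg, heel_frac, use_rate, abated_frac, dre):
    """Mass-balance emission estimate for one fluorinated gas:
    gas fed to the process, minus the fraction destroyed or transformed
    in the process (use rate), minus what the abatement system removes
    (its DRE) from the abated share of the exhaust."""
    fed = consumed_kg * (1.0 - heel_frac)          # gas actually leaving the cylinder
    unreacted = fed * (1.0 - use_rate)             # survives the plasma process
    return unreacted * (1.0 - abated_frac * dre)   # survives abatement

def to_co2e_tonnes(emission_kg, gwp):
    """GWP-weight the emission; the GWP value must come from the chosen inventory."""
    return emission_kg * gwp / 1000.0

# Placeholder numbers for illustration only:
e = f_gas_emission_kg(consumed_kg=100.0, heel_frac=0.10, use_rate=0.20,
                      abated_frac=0.90, dre=0.90)
print(round(e, 2))  # → 13.68
```

The sensitivity of the result to the DRE term is exactly why factory ‘A’ measuring its own DRE, as described above, changes the reported emission factor.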
Procedia PDF Downloads 286
3310 Zingiberaceous Plants as a Source of Anti-Bacterial Activity: Targeting Bacterial Cell Division Protein (FtsZ)
Authors: S. Reshma Reghu, Shiburaj Sugathan, T. G. Nandu, K. B. Ramesh Kumar, Mathew Dan
Abstract:
Bacterial diseases are considered to be among the most prevalent health hazards in the developing world, and many bacteria are becoming resistant to existing antibiotics, making treatment ineffective. Thus, it is necessary to find novel targets and develop new antibacterial drugs with a novel mechanism of action. The process of bacterial cell division is a novel and attractive target for new antibacterial drug discovery. FtsZ, a homolog of eukaryotic tubulin, is the major protein of the bacterial cell division machinery and is considered an important antibacterial drug target. Zingiberaceae, the ginger family, consists of aromatic herbs with creeping rhizomes, and many of these plants have antimicrobial properties. This study aimed to determine the antibacterial activity of selected Zingiberaceous plants by targeting the bacterial cell division protein FtsZ. Essential oils and methanol extracts of Amomum ghaticum, Alpinia galanga, Kaempferia galanga, K. rotunda, and Zingiber officinale were tested for antibacterial efficiency using the disc diffusion method against authentic bacterial strains obtained from MTCC (India). Essential oils isolated from A. galanga and Z. officinale were further assayed for FtsZ inhibition following a non-radioactive malachite green-phosphomolybdate assay using E. coli FtsZ protein obtained from Cytoskeleton Inc., USA. Z. officinale essential oil was found to possess FtsZ inhibitory activity. A molecular docking study was conducted with the known bioactive compounds of Z. officinale as ligands against an E. coli FtsZ protein homology model. Some of the major constituents of this plant, such as catechin, epicatechin, and gingerol, showed agreeable docking scores. The results of this study revealed that several chemical constituents of ginger plants can be utilised as a potential source of antibacterial activity, which warrants further investigation through drug discovery studies.
Keywords: antibacterial, FtsZ, zingiberaceae, docking
Procedia PDF Downloads 471
3309 Development of an Automatic Control System for ex vivo Heart Perfusion
Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala
Abstract:
Ex vivo Heart Perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling resuscitation and functional assessment of hearts donated from marginal donors, which were previously not accepted. EVHP parameters such as perfusion flow (PF) and perfusion pressure (PP) are crucial for optimal organ preservation. However, with the heart's constant physiological changes during EVHP, such as shifts in coronary vascular resistance, manual control of these parameters is imprecise and cumbersome for the operator. Additionally, low control precision and long adjustment times may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a Human-Machine Interface (HMI) and a Programmable Logic Controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects PF and PP data through flow probes and pressure transducers. It has two control modes: an RPM-flow mode and a pressure mode. The RPM-flow control mode is an open-loop system: it influences PF by providing and maintaining, at the centrifugal pump, the speed entered through the HMI, with a maximum error of 20 rpm. The pressure control mode is a closed-loop system in which the operator selects a target Mean Arterial Pressure (MAP) to control PP. The inputs to the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer. A PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1 mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000 and random RPM changes; 8 trials per range) tested in random order. The performance of the system's PID algorithm in pressure control was assessed during 10 EVHP experiments using porcine hearts.
Precision was examined by monitoring the steady-state pressure error throughout the perfusion period, and stabilizing speed was tested by performing two MAP adjustment changes (4 trials per change) of 15 and 20 mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system was significantly faster than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and reached the desired RPM with greater precision (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system had a median precision of ±1 mmHg error, and the median stabilizing times for MAP changes of 15 and 20 mmHg were 15 and 19.5 seconds, respectively. The novel PLC-based control system was 3 times faster with 60% less error than the EO for RPM-flow control. In pressure control mode, it demonstrated high precision and fast stabilizing speed. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability and a fast response time through a user-friendly interface. This design may provide a viable technique for the future development of novel heart preservation and assessment strategies during EVHP.
Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller
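The closed-loop pressure mode described above is a standard PID arrangement: the error between target and measured MAP drives the pump speed. A minimal discrete-time sketch of that loop (the gains, the first-order pump/vasculature model, and all numbers are illustrative assumptions, not the authors' PLC tuning):

```python
class PID:
    """Textbook discrete PID: u = kp*e + ki*integral(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: MAP rises with pump speed and decays through vascular resistance.
def simulate(target_map=60.0, dt=0.1, steps=1500):
    pid = PID(kp=5.0, ki=2.0, kd=0.0, dt=dt)
    pressure = 0.0
    for _ in range(steps):
        rpm = max(0.0, pid.step(target_map, pressure))  # pump cannot run backwards
        pressure += dt * (0.05 * rpm - 0.1 * pressure)  # first-order plant response
    return pressure

print(abs(simulate() - 60.0) < 1.0)  # settled inside the ±1 mmHg band
```

The integral term is what removes the steady-state error as coronary resistance drifts; the paper's ±1 mmHg figure corresponds to the settled band checked on the last line.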
Procedia PDF Downloads 172
3308 Design of a Phemt Buffer Amplifier in Mm-Wave Band around 60 GHz
Authors: Maryam Abata, Moulhime El Bekkali, Said Mazer, Catherine Algani, Mahmoud Mehdi
Abstract:
A major problem of most electronic systems operating in the millimeter-wave band is generating a signal with high purity and a stable carrier frequency. This problem is overcome by combining a signal from a low-frequency local oscillator (LO) with several stages of frequency multipliers. The use of frequency multipliers to create millimeter-wave signals is an attractive alternative to direct signal generation. However, the problem of isolating the local oscillator from the other stages is always present, and various mechanisms can disturb the oscillator's performance; thus, a buffer amplifier is often included at the oscillator output. In this paper, we present the study and design of a buffer amplifier in the mm-wave band using a 0.15 μm pHEMT from the UMS foundry. This amplifier will be used as part of a frequency quadrupler at 60 GHz.
Keywords: mm-wave band, local oscillator, frequency quadrupler, buffer amplifier
Procedia PDF Downloads 542
3307 A Bacterial Foraging Optimization Algorithm Applied to the Synthesis of Polyacrylamide Hydrogels
Authors: Florin Leon, Silvia Curteanu
Abstract:
The Bacterial Foraging Optimization (BFO) algorithm is inspired by the behavior of bacteria such as Escherichia coli or Myxococcus xanthus when searching for food, more precisely by their chemotaxis behavior. Bacteria perceive chemical gradients in the environment, such as nutrients, as well as other individual bacteria, and move toward or away from those signals. The application example considered as a case study consists of establishing the dependency between the reaction yield of polyacrylamide-based hydrogels and the working conditions, namely time, temperature, monomer, initiator, crosslinking agent and inclusion polymer concentrations, as well as the type of polymer added. This process is modeled with a neural network, which is included in an optimization procedure based on BFO. An experimental study of the BFO parameters is performed. The results show that the algorithm is quite robust and can obtain good results for diverse combinations of parameter values.
Keywords: bacterial foraging, hydrogels, modeling and optimization, neural networks
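The chemotaxis behavior at the heart of BFO alternates tumbles (pick a random new direction) with swims (repeat a move while it keeps improving). A minimal sketch of that loop on a toy cost function (a bare chemotaxis cycle only; the reproduction and elimination-dispersal phases of full BFO are omitted, and all parameters are illustrative, not the tuning used in the paper):

```python
import math
import random

def bfo_chemotaxis(cost, dim, n_bacteria=20, n_steps=60, step_size=0.1, swim_len=4):
    """Minimize `cost` using the tumble/swim chemotaxis loop of BFO."""
    bacteria = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(n_bacteria)]
    for _ in range(n_steps):
        for b in bacteria:
            # Tumble: pick a random unit direction.
            d = [random.gauss(0, 1) for _ in range(dim)]
            norm = math.sqrt(sum(x * x for x in d))
            d = [x / norm for x in d]
            # Swim: keep stepping along d while the cost improves.
            for _ in range(swim_len):
                trial = [b[i] + step_size * d[i] for i in range(dim)]
                if cost(trial) < cost(b):
                    b[:] = trial
                else:
                    break
    return min(bacteria, key=cost)

random.seed(1)
sphere = lambda x: sum(v * v for v in x)
best = bfo_chemotaxis(sphere, dim=2)
print(sphere(best))  # the swarm closes in on the optimum at the origin
```

In the paper's setting, `cost` would be replaced by the trained neural-network model of reaction yield (negated, since yield is maximized), with the working conditions as the search dimensions.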
Procedia PDF Downloads 151
3306 Psychological Factors Predicting Social Distance during the COVID-19 Pandemic: An Empirical Investigation
Authors: Calogero Lo Destro
Abstract:
Numerous nations around the world are facing exceptional challenges in employing measures to stop the spread of COVID-19. Following the recommendations of the World Health Organization, a series of preventive measures has been adopted. However, individuals must comply with these rules and recommendations for the measures to be effective. While COVID-19 was peaking, it seemed of crucial importance to analyze which psychosocial factors contribute to the acceptance of such preventive behavior, thus favoring the management of the worldwide COVID-19 health crisis. In particular, identifying the obstacles to and facilitators of adherence to social distancing has been considered crucial in containing the spread of the virus. Since the virus was first detected in China, Asian people could be considered a relevant outgroup targeted for exclusion. We also hypothesized that social distance could be influenced by characteristics of the target, such as smiling or coughing. A total of 260 participants took part in this research on a voluntary basis. They filled out a survey designed to explore a series of COVID-19 measures (such as exposure to the virus and fear of infection). We also assessed participants' state and trait anxiety. The dependent variable was social distance, based on a measure of seating distance designed ad hoc for the present work. Our hypothesis that participants would report greater distance in response to Asian targets was not confirmed. On the other hand, significantly lower distance was reported in response to smiling compared to coughing targets. Adopting a regression analysis model, we found that participants' social distance, in response to both coughing and smiling targets, was predicted by fear of infection and by the perception that COVID-19 could become a pandemic. Social distance in response to the coughing target was also significantly and positively predicted by age and state anxiety.
In summary, the present work sought to identify a set of psychological variables that may be predictive of social distancing.
Keywords: COVID-19, social distancing, health, preventive behaviors, risk of infection
Procedia PDF Downloads 122
3305 Plasmonic Biosensor for Early Detection of Environmental DNA (eDNA) Combined with Enzyme Amplification
Authors: Monisha Elumalai, Joana Guerreiro, Joana Carvalho, Marta Prado
Abstract:
The popularity of DNA biosensors has been increasing over the past few years. Traditional analytical techniques tend to require complex steps and expensive equipment, whereas DNA biosensors have the advantage of being simple, fast and economical. Additionally, combining DNA biosensors with nanomaterials offers the opportunity to improve the selectivity, sensitivity and overall performance of the devices. DNA biosensors are based on oligonucleotides as sensing elements. These oligonucleotides are highly specific to complementary DNA sequences, resulting in hybridization of the strands. DNA biosensors are an advantage not only in the clinical field but also in numerous research areas such as food analysis and environmental control. Zebra mussels (ZM), Dreissena polymorpha, are an invasive species responsible for enormous negative impacts on the environment and ecosystems. Generally, ZM are detected by observing adults or macroscopic larvae; however, at that stage it is too late to avoid their harmful effects. Therefore, there is a need to develop an analytical tool for the early detection of ZM. Here, we present a portable plasmonic biosensor for the detection of environmental DNA (eDNA) released into the environment by this invasive species. The plasmonic DNA biosensor uses gold nanoparticles as transducer elements, owing to their great optical properties and high sensitivity. The detection strategy is based on the immobilization of a short base-pair DNA sequence on the nanoparticle surface, followed by specific hybridization in the presence of a complementary DNA target. The hybridization events are tracked by the optical response provided by the nanospheres and their surrounding environment. The DNA sequences (synthetic target and probes) for detecting zebra mussels were designed using Geneious software in order to maximize specificity.
Moreover, enzyme amplification of the DNA might be used to increase the optical response. The gold nanospheres were synthesized and characterized by UV-visible spectrophotometry and transmission electron microscopy (TEM). The obtained nanospheres present a maximum localized surface plasmon resonance (LSPR) peak position around 519 nm and a diameter of 17 nm. The DNA probes, modified with a sulfur group at one end of the sequence, were then loaded onto the gold nanospheres at different ionic strengths and DNA probe concentrations. The optimal DNA probe loading will be selected based on the stability of the optical signal, followed by a hybridization study. The hybridization process leads to either nanoparticle dispersion or aggregation, depending on the presence or absence of the target DNA. Finally, this detection system will be integrated into an optical sensing platform. Considering that the developed device will be used in the field, it should fulfill the requirements of low cost and portability. Sensing devices based on specific DNA detection hold great potential and can be exploited for sensing applications in loco.
Keywords: ZM DNA, DNA probes, nicking enzyme, gold nanoparticles
Procedia PDF Downloads 243
3304 Quality Assurances for an On-Board Imaging System of a Linear Accelerator: Five Months Data Analysis
Authors: Liyun Chang, Cheng-Hsiang Tsai
Abstract:
To ensure that radiation is precisely delivered to the target in cancer patients, linear accelerators are equipped with a pretreatment on-board imaging system through which the patient setup is verified before each daily treatment. New-generation radiotherapy using beam-intensity modulation usually involves treatments with steep dose gradients and is claimed to achieve both a higher degree of dose conformation in the targets and a further reduction of toxicity in normal tissues. However, this benefit becomes counterproductive if the beam is delivered imprecisely. To avoid irradiating critical organs or normal tissues rather than the target, it is very important to carry out quality assurance (QA) of this on-board imaging system. The QA of the On-Board Imager® (OBI) system of one Varian Clinac-iX linear accelerator was performed through our procedures, modified from a relevant report and AAPM TG-142. Two image modalities of the OBI system were examined: 2D radiography and 3D cone-beam computed tomography (CBCT). Daily and monthly QA was executed for five months in the categories of safety, geometric accuracy and image quality. A marker phantom and a blade calibration plate were used for the QA of geometric accuracy, while the Leeds phantom and the Catphan 504 phantom were used for the QA of radiographic and CBCT image quality, respectively. The reference images were generated on a GE LightSpeed CT simulator with an ADAC Pinnacle treatment planning system. Finally, the image quality was analyzed via an OsiriX medical imaging system. For the geometric accuracy test, the average deviations of the OBI isocenter in each direction are less than 0.6 mm with uncertainties less than 0.2 mm, while all the other items show displacements of less than 1 mm. For radiographic image quality, the spatial resolution is 1.6 lp/cm with contrasts less than 2.2%.
The spatial resolution, low contrast, and HU homogeneity of CBCT are larger than 6 lp/cm, less than 1% and within 20 HU, respectively. All tests are within the criteria, except that the HU value of Teflon measured in full-fan mode exceeds the suggested value, which could be due to its inherently high HU value and needs to be rechecked. The OBI system in our facility was thus demonstrated to be reliable, with stable image quality. QA of the OBI system is truly necessary to achieve the best treatment for each patient.
Keywords: CBCT, image quality, quality assurance, OBI
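A QA cycle like the one above ultimately reduces to comparing each measured quantity against its tolerance and flagging the exceptions (such as the Teflon HU value). A minimal sketch of such a check (the tolerance table below is a hypothetical illustration loosely patterned on the limits quoted in this abstract, not the actual TG-142 tables):

```python
# Hypothetical tolerance table: each item's measured value must stay at or
# below the limit to pass.
TOLERANCES = {
    "isocenter_deviation_mm": 1.0,     # geometric accuracy
    "radiographic_contrast_pct": 2.2,  # radiographic low contrast
    "cbct_hu_homogeneity_hu": 20.0,    # CBCT HU uniformity
}

def qa_report(measured):
    """Return {item: 'PASS'/'FAIL'} for every tolerance that was measured."""
    return {
        item: "PASS" if measured[item] <= limit else "FAIL"
        for item, limit in TOLERANCES.items()
        if item in measured
    }

monthly = {"isocenter_deviation_mm": 0.6,
           "radiographic_contrast_pct": 2.1,
           "cbct_hu_homogeneity_hu": 35.0}
print(qa_report(monthly))
# → {'isocenter_deviation_mm': 'PASS', 'radiographic_contrast_pct': 'PASS',
#    'cbct_hu_homogeneity_hu': 'FAIL'}
```

Keeping the tolerances in one table, separate from the pass/fail logic, mirrors how monthly QA criteria are revised without touching the measurement workflow.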
Procedia PDF Downloads 297
3303 The Integration Process of Non-EU Citizens in Luxembourg: From an Empirical Approach Toward a Theoretical Model
Authors: Angela Odero, Chrysoula Karathanasi, Michèle Baumann
Abstract:
The integration of foreign communities has been a forefront issue in Luxembourg for some time now. The country's continued progress depends largely on the successful integration of immigrants. The aim of our study was to analyze the factors that intervene in the course of integration of non-EU citizens through the discourse of non-EU citizens residing in Luxembourg who have signed the Welcome and Integration Contract (CAI). The two-year contract offers integration services to assist foreigners in settling in the country. Semi-structured focus group discussions with 50 volunteers were held in English, French, Spanish, Serbo-Croatian or Chinese. Participants were asked to talk about their integration experiences. The discussions were recorded, transcribed, and analyzed with the help of NVivo 10, a qualitative analysis software. A systematic and reiterative analysis of decomposing and reconstituting was carried out through (1) the identification of predetermined categories (difficulties, challenges and integration needs), (2) initial coding, the grouping together of similar ideas, and (3) axial coding, the regrouping of items from the initial coding in new ways in order to create sub-categories and identify other core dimensions. Our results show that the intervening factors include language acquisition, professional career and socio-cultural activities or events. Each of these factors comprises different components whose weight shifts from person to person and from situation to situation. Connecting these three emergent factors are two elements essential to the success of an immigrant's integration: the role of time and deliberate effort from the immigrants, the community, and the formal institutions charged with helping immigrants integrate. We propose a theoretical model in which the factors described may be classified in terms of how they predispose, facilitate, and/or reinforce the process towards successful integration.
Measures currently in place propose one-size-fits-all programs, yet integrative measures that target the family unit, and those customized to target groups based on their needs, would work best.
Keywords: integration, integration services, non-EU citizens, qualitative analysis, third country nationals
Procedia PDF Downloads 304
3302 Comparative Analysis of Two Approaches to Joint Signal Detection, ToA and AoA Estimation in Multi-Element Antenna Arrays
Authors: Olesya Bolkhovskaya, Alexey Davydov, Alexander Maltsev
Abstract:
In this paper, two approaches to joint signal detection, time of arrival (ToA) and angle of arrival (AoA) estimation in a multi-element antenna array are investigated. Two scenarios were considered: in the first, the waveform of the useful signal is known a priori; in the second, the waveform of the desired signal is unknown. For the first scenario, antenna array signal processing based on multi-element matched filtering (MF), followed by a non-coherent detection scheme and maximum likelihood (ML) parameter estimation blocks, is exploited. For the second scenario, signal processing based on estimation of the covariance matrix of the antenna array elements, followed by eigenvector analysis and ML parameter estimation blocks, is applied. The performance characteristics of both signal processing schemes are thoroughly investigated and compared for different useful signal and noise parameters.
Keywords: antenna array, signal detection, ToA, AoA estimation
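For the known-waveform scenario, the matched-filter front end amounts to correlating the received samples with the template and taking the lag of the correlation peak as the ToA estimate. A single-channel sketch of that step (real-valued samples for simplicity; the detection threshold, the multi-element combining, and the ML refinement described above are omitted):

```python
def matched_filter_toa(rx, template):
    """Slide the template over the received samples; for a known waveform
    in white noise, the lag with the largest correlation is the
    maximum-likelihood ToA estimate (in samples)."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(len(rx) - len(template) + 1):
        corr = sum(rx[lag + i] * template[i] for i in range(len(template)))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

# A short template buried at sample 7 of an otherwise silent record:
template = [1.0, -1.0, 1.0, 1.0]
rx = [0.0] * 20
for i, s in enumerate(template):
    rx[7 + i] += s
lag, corr = matched_filter_toa(rx, template)
print(lag)  # → 7
```

In the array setting, one such correlator runs per element, and the per-element peak phases feed the AoA estimation stage.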
Procedia PDF Downloads 492
3301 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Moreover, due to the relatively simple recording of the electrocardiogram (ECG) signal, this signal is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers seeking the best method for detecting normal signals from abnormal ones. The data are from both genders, and the recording time varies from several seconds to several minutes. All data are also labeled normal or abnormal. Due to the limited accuracy and duration of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate different types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify normal signals from abnormal ones.
To evaluate the efficiency of the classifiers proposed in this paper, the area under the ROC curve (AUC) was used. The simulation results in the MATLAB environment showed that the AUCs of the MLP neural network and the SVM were 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal and patient signals yielded better performance. Today, research aims at quantitatively analyzing the linear and non-linear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the extent of these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some of its information remains hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
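The statistical and return-map features described above are computed directly from the R-R interval series. A minimal sketch of a few standard ones (the feature names follow common HRV conventions; the paper's exact feature set and the classifier stage are not reproduced here):

```python
import math

def hrv_features(rr_ms):
    """Time-domain and Poincaré (return-map) features of an R-R series in ms."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # Poincaré plot: each point is (RR[i], RR[i+1]). SD1 measures the spread
    # across the identity line (short-term variability), SD2 along it.
    sd1 = rmssd / math.sqrt(2)
    sd2 = math.sqrt(max(2 * sdnn ** 2 - sd1 ** 2, 0.0))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd,
            "sd1": sd1, "sd2": sd2}

feats = hrv_features([800, 810, 790, 820, 800])
print(round(feats["rmssd"], 2))  # → 21.21
```

SD1 and SD2 are the nonlinear, return-map counterparts of the linear statistics; mixing both kinds of features is exactly the "mixture of linear and non-linear features" that the title refers to.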
Procedia PDF Downloads 262