Search results for: blind signal separation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3122


2132 Closed Loop Traffic Control System Using PLC

Authors: Chinmay Shah

Abstract:

This project concerns the development of a closed-loop traffic light control system using a PLC (Programmable Logic Controller). The project is divided into two parts: hardware and software. The hardware part is a model of a four-way traffic light junction. Three indicator lamps (red, yellow, and green) are installed in each lane to represent the traffic light signal. This traffic control model is a replica of actuated traffic control: an actuated traffic control system is a closed-loop system that adjusts the timing of the indicator lamps according to the traffic volume in a particular lane. To make the system autonomous, three IR sensors are placed in each lane to sense the percentage of traffic present in that lane. The IR sensors and indicator lamps are connected to an LG PLC of the XGB series, which processes every signal coming from the inputs (IR sensors) in software and drives the outputs (indicator lamps). The default timing for the indicator lamps is 30 seconds per lane, but it is scaled by the percentage of traffic present: at roughly 30-35% traffic the green lamp stays on for 10 seconds, at 65-70% for 20 seconds, and at full 100% traffic for the full 30 seconds. The software that operates the LG PLC is the “XG 5000” programmer; using it, the ladder logic diagram is programmed to control the traffic lights based on the flow chart. At the end of this project, the traffic light system was successfully actuated by the PLC.
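The lane-timing rule described above can be sketched in plain Python (the actual controller is ladder logic written in XG 5000; the behavior below the 30% band is not stated in the abstract and is assumed here to fall back to the minimum slot):

```python
def green_time(traffic_pct):
    """Map the sensed traffic percentage of a lane to a green-lamp
    duration in seconds, following the thresholds in the abstract."""
    if traffic_pct >= 100:
        return 30  # full lane: full default cycle
    if traffic_pct >= 65:
        return 20  # 65-70% occupancy
    if traffic_pct >= 30:
        return 10  # 30-35% occupancy
    return 10      # assumption: lighter traffic gets the minimum slot
```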

Keywords: closed loop, IR sensor, PLC, light control system

Procedia PDF Downloads 571
2131 Variable vs. Fixed Window Width Code Correlation Reference Waveform Receivers for Multipath Mitigation in Global Navigation Satellite Systems with Binary Offset Carrier and Multiplexed Binary Offset Carrier Signals

Authors: Fahad Alhussein, Huaping Liu

Abstract:

This paper compares the multipath mitigation performance of code correlation reference waveform receivers with variable and fixed window width, for binary offset carrier (BOC) and multiplexed binary offset carrier (MBOC) signals typically used in global navigation satellite systems. In the variable window width method, the width is iteratively reduced until the distortion that multipath causes on the discriminator is eliminated. This distortion is measured as the Euclidean distance between the actual discriminator (obtained with the incoming signal) and the local discriminator (generated with a local copy of the signal). The variable window width has shown better performance than the fixed window width. In particular, the former yields zero error for all delays for the BOC and MBOC signals considered, while the latter gives rather large nonzero errors for small delays in all cases. Due to its computational simplicity, the variable window width method is well suited for implementation in low-cost receivers.
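The iterative width reduction can be illustrated with a short Python sketch that shrinks the window until the Euclidean distance between the two discriminators vanishes (the shrink factor, tolerance, and toy discriminator shapes below are illustrative assumptions, not taken from the paper):

```python
import math

def variable_window_width(actual_disc, local_disc, w0, shrink=0.9,
                          tol=1e-9, max_iter=200):
    """Shrink the correlation window width until the Euclidean distance
    between the actual discriminator (incoming, multipath-affected
    signal) and the local discriminator (clean local replica) is
    eliminated; both are passed as functions of the width."""
    w = w0
    for _ in range(max_iter):
        a, l = actual_disc(w), local_disc(w)
        dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, l)))
        if dist <= tol:
            break  # distortion eliminated: keep this width
        w *= shrink
    return w
```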

Keywords: correlation reference waveform receivers, binary offset carrier, multiplexed binary offset carrier, global navigation satellite systems

Procedia PDF Downloads 131
2130 Engineering the Topological Insulator Structures for Terahertz Detectors

Authors: M. Marchewka

Abstract:

The article is devoted to the possible optical transitions in double quantum well systems based on HgTe/HgCd(Mn)Te heterostructures. Such structures can find applications as detectors and sources of radiation in the terahertz range. A double quantum well (DQW) system consists of two QWs separated by a barrier that is transparent to electrons. Such systems look promising from the point of view of the additional degrees of freedom they offer. In the topological insulator regime, realized in HgTe QWs about 6.4 nm wide or in strained 3D HgTe films, topologically protected states appear at the interfaces/surfaces. Electrons in these edge states move along the interfaces/surfaces without backscattering due to time-reversal symmetry. The combination of these topological properties, which have already been verified experimentally, with the well-known properties of DQWs can be very interesting from the applications point of view, especially in the THz range. Importantly, at the present stage the technology makes it possible to create high-quality structures of this type, and intensive experimental and theoretical studies of their properties are already underway. The approach presented in this paper is based on the eight-band k·p model, in which additional terms related to structural inversion asymmetry, interface inversion asymmetry, the influence of the magnetic content, and uniaxial strain describe the full picture of a possible real structure. All of these terms, together with an external electric field, can be sources of symmetry breaking in the investigated materials. Using the eight-band k·p model, we investigated the electronic structure with and without a magnetic field from the point of view of application as a THz detector in a small magnetic field (below 2 T). We believe that such structures are a route to tunable topological insulators and multilayer topological insulators.
Using the one-dimensional electrons in the topologically protected interface states as fast, collision-free charge and signal carriers, the detection of the optical signal should be fast, which is very important in the high-resolution detection of signals in the THz range. The proposed engineering of the investigated structures is now one of the important steps on the way to obtaining structures with the predicted properties.

Keywords: topological insulator, THz spectroscopy, KP model, II-VI compounds

Procedia PDF Downloads 122
2129 Glycerol-Based Bio-Solvents for Organic Synthesis

Authors: Dorith Tavor, Adi Wolfson

Abstract:

In the past two decades, a variety of green solvents have been proposed, including water, ionic liquids, fluorous solvents, and supercritical fluids. However, their implementation in industrial processes is still limited due to their tedious and non-sustainable synthesis, a lack of experimental data and familiarity, as well as operational restrictions and high cost. Several years ago we presented, for the first time, the use of glycerol-based solvents as alternative sustainable reaction media in both catalytic and non-catalytic organic synthesis. Glycerol is the main by-product of the conversion of oils and fats in oleochemical production. Moreover, in the past decade its price has decreased substantially due to an increase in supply from the production and use of fatty acid derivatives in the food, cosmetics, and drug industries and in biofuel synthesis, i.e., biodiesel. The renewable origin, beneficial physicochemical properties, and reusability of glycerol-based solvents enabled improved product yield and selectivity as well as easy product separation and catalyst recycling. Furthermore, their high boiling point and polarity make them excellent candidates for non-conventional heating and mixing techniques such as ultrasound- and microwave-assisted reactions. Finally, in some reactions, such as catalytic transfer hydrogenation or transesterification, they can also serve simultaneously as both solvent and reactant. In our ongoing efforts to design a viable protocol that will facilitate the acceptance of glycerol and its derivatives as sustainable solvents, pure glycerol and glycerol triacetate (triacetin), as well as various glycerol-triacetin mixtures, were tested as sustainable solvents in several representative organic reactions, such as the nucleophilic substitution of benzyl chloride to benzyl acetate, the Suzuki-Miyaura cross-coupling of iodobenzene and phenylboronic acid, baker’s yeast reduction of ketones, and transfer hydrogenation of olefins.
It was found that reaction performance was affected by the glycerol-to-triacetin ratio, as the solubility of the substrates in the solvent determined the product yield; employing the optimal glycerol-to-triacetin ratio thus resulted in maximum product yield. In addition, using glycerol-based solvents enabled easy and successful separation of the products and recycling of the catalysts.

Keywords: glycerol, green chemistry, sustainability, catalysis

Procedia PDF Downloads 624
2128 Improved Signal-To-Noise Ratio by the 3D-Functionalization of Fully Zwitterionic Surface Coatings

Authors: Esther Van Andel, Stefanie C. Lange, Maarten M. J. Smulders, Han Zuilhof

Abstract:

False outcomes of diagnostic tests are a major concern in medical health care. To improve the reliability of surface-based diagnostic tests, it is of crucial importance to diminish background signals that arise from the non-specific binding of biomolecules, a process called fouling. The aim is to create surfaces that repel all biomolecules except the molecule of interest. This can be achieved by incorporating antifouling, protein-repellent coatings between the sensor surface and its recognition elements (e.g., antibodies, sugars, aptamers). Zwitterionic polymer brushes are considered excellent antifouling materials; however, to be able to bind the molecule of interest, the polymer brushes have to be functionalized, and so far this was only achieved at the expense of either antifouling or binding capacity. To overcome this limitation, we combined both features in one single monomer: a zwitterionic sulfobetaine, ensuring antifouling capability, equipped with a clickable azide moiety that allows for further functionalization. By copolymerizing this monomer together with a standard sulfobetaine, the number of azides (and with that the number of recognition elements) can be tuned depending on the application. First, the clickable azido-monomer was synthesized and characterized, followed by copolymerization to yield functionalizable antifouling brushes. The brushes were fully characterized using surface characterization techniques such as XPS, contact angle measurements, G-ATR-FTIR, and XRR. As a proof of principle, the brushes were subsequently functionalized with biotin via strain-promoted alkyne-azide click reactions, which yielded a fully zwitterionic, biotin-containing, 3D-functionalized coating. The sensing capacity was evaluated by reflectometry using avidin- and fibrinogen-containing protein solutions.
The surfaces showed excellent antifouling properties, as illustrated by the complete absence of non-specific fibrinogen binding, while at the same time clear responses were seen for the specific binding of avidin. A great increase in signal-to-noise ratio was observed, even when the amount of functional groups was lowered to 1%, compared to the traditional modification of sulfobetaine brushes, which relies on a 2D approach in which only the top layer can be functionalized. This study was performed on stoichiometric silicon nitride surfaces for future microring-resonator-based assays; however, the methodology can be transferred to other biosensor platforms, which are currently being investigated. The approach presented herein enables a highly efficient strategy for selective binding with retained antifouling properties for improved signal-to-noise ratios in binding assays. The number of recognition units can be adjusted to a specific need, e.g., depending on the size of the analyte to be bound, widening the scope of these functionalizable surface coatings.

Keywords: antifouling, signal-to-noise ratio, surface functionalization, zwitterionic polymer brushes

Procedia PDF Downloads 306
2127 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian

Abstract:

The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system that measures the deviations of the chamber temperature from set target values, feeds these deviations (which act as disturbances in the system) back as an input signal, and adjusts the operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate the operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in a Programmable Logic Controller (PLC). The developed control algorithm, with the chamber temperature as the feedback signal, was integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity in the range 222-273 ft/min and fuel feeding rate in the range 60-90 rpm on the chamber temperature.
The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and the fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error.
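One temperature-feedback correction step can be sketched in Python, assuming a simple proportional law (the paper implements the logic in PLC ladder functions and does not state gains; `kp` and the proportional form are illustrative, while the blower-rate clamp uses the 222-273 ft/min range quoted above):

```python
def feedback_step(temp_reading, setpoint, blower_rate, kp=0.5,
                  min_rate=222.0, max_rate=273.0):
    """One proportional feedback correction: the chamber temperature
    error drives the air blower rate toward the setpoint, clamped to
    the operating range quoted in the abstract (ft/min)."""
    error = setpoint - temp_reading          # deviation = disturbance signal
    new_rate = blower_rate + kp * error      # proportional correction
    return max(min_rate, min(max_rate, new_rate))
```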

Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature

Procedia PDF Downloads 129
2126 Optimal Sliding Mode Controller for Knee Flexion during Walking

Authors: Gabriel Sitler, Yousef Sardahi, Asad Salem

Abstract:

This paper presents an optimal and robust sliding mode controller (SMC) to regulate the position of the knee joint angle for patients suffering from knee injuries. The controller imitates the role of active orthoses that produce the joint torques required to overcome gravity and loading forces and regain natural human movements. To this end, a mathematical model of the shank, the lower part of the leg, is derived first and then used for the control system design and computer simulations. The design of the controller is carried out in optimal and multi-objective settings. Four objectives are considered: minimization of the control effort and tracking error; and maximization of the control signal smoothness and closed-loop system’s speed of response. Optimal solutions in terms of the Pareto set and its image, the Pareto front, are obtained. The results show that there are trade-offs among the design objectives and many optimal solutions from which the decision-maker can choose to implement. Also, computer simulations conducted at different points from the Pareto set and assuming knee squat movement demonstrate competing relationships among the design goals. In addition, the proposed control algorithm shows robustness in tracking a standard gait signal when accounting for uncertainty in the shank’s parameters.
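The Pareto set/front idea underlying the multi-objective design can be illustrated with a short Python sketch that keeps only non-dominated candidates (minimization in every objective is assumed; the objective values are toy numbers, not the paper's data):

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(a <= b for a, b in zip(q, p)) and
            any(a < b for a, b in zip(q, p)))

def pareto_front(points):
    """Keep the non-dominated candidates: the Pareto front, from which
    the decision-maker chooses a trade-off to implement."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```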

Keywords: optimal control, multi-objective optimization, sliding mode control, wearable knee exoskeletons

Procedia PDF Downloads 82
2125 Closing the Front Door of Child Protection: Rethinking Mandated Reporting

Authors: Miriam Itzkowitz, Katie Olson

Abstract:

Through an interdisciplinary and trauma-responsive lens, this article reviews the legal and social history of mandated reporting laws and family separation, examines the ethical conundrum of mandated reporting as it relates to evidence-based practice, and discusses alternatives to mandated reporting as a primary prevention strategy. Using existing and emerging data, the authors argue that mandated reporting as a universal strategy contributes to racial disproportionality in the child welfare system and that anti-racist practices should begin with an examination of our reliance on mandated reporting.

Keywords: child welfare, education, mandated reporting, racial disproportionality, trauma

Procedia PDF Downloads 353
2124 Fault Detection and Diagnosis of the Broken Bar Problem in Induction Motors Based on Wavelet Analysis and the EMD Method: Case Study of Mobarakeh Steel Company in Iran

Authors: M. Ahmadi, M. Kafil, H. Ebrahimi

Abstract:

Nowadays, induction motors play a significant role in industry. Condition monitoring (CM) of this equipment has gained remarkable importance in recent years due to huge production losses, substantial imposed costs, and increases in vulnerability, risk, and uncertainty levels. Motor current signature analysis (MCSA) is one of the most important techniques in CM and can be used for the detection of broken rotor bars. Signal processing methods such as the fast Fourier transform (FFT), the wavelet transform, and empirical mode decomposition (EMD) are used for analyzing the MCSA output data. In this study, these signal processing methods are used for broken bar detection in induction motors of the Mobarakeh Steel Company. Based on the wavelet transform, an index for fault detection, CF, is introduced: the ratio of the maximum to the mean of the wavelet transform coefficients. We find that, in the broken bar condition, the CF factor is greater than in the healthy condition. Based on the EMD method, the energy of the intrinsic mode functions (IMFs) is calculated, and we find that when motor bars become broken, the energy of the IMFs increases.
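The two fault indices can be sketched in a few lines of Python, reading the CF index as the ratio of the maximum to the mean of the absolute wavelet coefficients (an interpretation of the abstract's wording) and the IMF energy as a sum of squares:

```python
def crest_factor(coeffs):
    """CF fault index: ratio of the maximum to the mean of the absolute
    wavelet-transform coefficients; a broken-bar signature shows a
    larger CF than the healthy machine."""
    mags = [abs(c) for c in coeffs]
    return max(mags) / (sum(mags) / len(mags))

def imf_energy(imf):
    """Energy of an intrinsic mode function; rises when bars break."""
    return sum(x * x for x in imf)
```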

Keywords: broken bar, condition monitoring, diagnostics, empirical mode decomposition, fourier transform, wavelet transform

Procedia PDF Downloads 150
2123 Enhancement of X-Ray Image Intensity Using a Pixel Value Adjustment Technique

Authors: Yousif Mohamed Y. Abdallah, Razan Manofely, Rajab M. Ben Yousef

Abstract:

X-ray images are very popular as a first tool for diagnosis, and automating their analysis is important in order to assist physicians. In this practice, teeth segmentation from the radiographic images and feature extraction are essential steps. The main objective of this study was to investigate correction preprocessing of x-ray images using local adaptive filters, in order to evaluate the contrast enhancement pattern in different x-ray images (such as greyscale images) and to evaluate the use of a new nonlinear approach for contrast enhancement of soft tissues in x-ray images. The data were analyzed using a MATLAB program to enhance the contrast within the soft tissues and to assess the grey levels in both enhanced and unenhanced images as well as the noise variance. The main enhancement techniques used in this study were contrast enhancement filtering and deblurring of images using the blind deconvolution algorithm. The prominent constraints are, firstly, preservation of the image's overall look; secondly, preservation of the diagnostic content of the image; and thirdly, detection of small, low-contrast details in the diagnostic content of the image.

Keywords: enhancement, x-ray, pixel intensity values, MATLAB

Procedia PDF Downloads 485
2122 Cloud Design for Storing Large Amount of Data

Authors: M. Strémy, P. Závacký, P. Cuninka, M. Juhás

Abstract:

The main goal of this paper is to introduce our design of a private cloud for storing large amounts of data, especially pictures, and to provide a good technological backend for data analysis based on parallel processing and business intelligence. We tested hypervisors, cloud management tools, storage for all the data, and Hadoop to provide analysis of unstructured data. Providing high availability, virtual network management, logical separation of projects, and rapid deployment of physical servers to our environment was also needed.

Keywords: cloud, glusterfs, hadoop, juju, kvm, maas, openstack, virtualization

Procedia PDF Downloads 353
2121 Statistical Pattern Recognition for Biotechnological Process Characterization Based on High Resolution Mass Spectrometry

Authors: S. Fröhlich, M. Herold, M. Allmer

Abstract:

Early-stage quantitative analysis of host cell protein (HCP) variations is challenging yet necessary for comprehensive bioprocess development. High resolution mass spectrometry (HRMS) provides a high-end technology for accurate identification along with quantitative information. Here we describe a flexible HRMS assay platform to quantify HCPs relevant in microbial expression systems such as E. coli, in both upstream and downstream development, by means of MVDA tools. Cell pellets were lysed and proteins extracted; purified samples were not further treated before applying the SMART tryptic digest kit. Peptide separation was optimized using an RP-UHPLC separation platform. HRMS-MS/MS analysis was conducted on an Orbitrap Velos Elite applying CID. Quantification was performed label-free, taking into account ionization properties and physicochemical peptide similarities. Results were analyzed using SIEVE 2.0 (Thermo Fisher Scientific) and SIMCA (Umetrics AG). The developed HRMS platform was applied to an E. coli expression set with varying productivity and the corresponding downstream process. Selected HCPs were successfully quantified within the fmol range. Analyzing HCP networks based on pattern analysis facilitated low-level quantification and enhanced validity. This approach is of high relevance for high-throughput screening experiments during upstream development, e.g., for titer determination, dynamic HCP network analysis, or product characterization. Considering the downstream purification process, physicochemical clustering of the identified HCPs is relevant for adjusting buffer conditions accordingly. Overall, the technology provides an innovative approach to label-free MS-based quantification relying on statistical pattern analysis and comparison.
Absolute quantification based on physicochemical properties and a peptide similarity score provides a technological approach without the need for sophisticated sample preparation strategies and has therefore proven to be straightforward, sensitive, and highly reproducible in terms of product characterization.

Keywords: process analytical technology, mass spectrometry, process characterization, MVDA, pattern recognition

Procedia PDF Downloads 251
2120 Performance Comparison of Wideband Covariance Matrix Sparse Representation (W-CMSR) with Other Wideband DOA Estimation Methods

Authors: Sandeep Santosh, O. P. Sahu

Abstract:

In this paper, a performance comparison of the wideband covariance matrix sparse representation (W-CMSR) method with other existing wideband direction of arrival (DOA) estimation methods is made. W-CMSR relies less on a priori knowledge of the number of incident signals than ordinary subspace-based methods. Consider the perturbation-free covariance matrix of the wideband array output. The diagonal covariance elements are contaminated by an unknown noise variance. The covariance matrix of the array output is conjugate symmetric, i.e., its upper-right triangular elements can be represented by the lower-left triangular ones. As the main diagonal elements are contaminated by the unknown noise variance, they are skipped, and the lower-left triangular elements are aligned column by column to obtain a measurement vector. Simulation results for W-CMSR are compared with those of other wideband DOA estimation methods: the coherent signal subspace method (CSSM), Capon, l1-SVD, and JLZA-DOA. W-CMSR separates the two signals very clearly, while CSSM, Capon, l1-SVD, and JLZA-DOA fail to separate them clearly, and a number of pseudo peaks exist in the spectrum of l1-SVD.
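The construction of the measurement vector described above can be sketched directly: skip the (noise-contaminated) diagonal and stack the strictly lower-left triangular elements of the covariance matrix column by column:

```python
def measurement_vector(R):
    """Build the W-CMSR measurement vector: stack the strictly
    lower-triangular entries of the array covariance matrix R column
    by column, skipping the diagonal elements, which are contaminated
    by the unknown noise variance."""
    n = len(R)
    return [R[i][j] for j in range(n) for i in range(j + 1, n)]
```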

Keywords: W-CMSR, wideband direction of arrival (DOA), covariance matrix, electrical and computer engineering

Procedia PDF Downloads 471
2119 Precursor Signatures of a Few Major Earthquakes in Italy Using a Very Low Frequency Signal of 45.9 kHz

Authors: Keshav Prasad Kandel, Balaram Khadka, Karan Bhatta, Basu Dev Ghimire

Abstract:

Earthquakes remain a threatening disaster. Being able to predict earthquakes would certainly help prevent substantial loss of life and property. The Very Low Frequency/Low Frequency (VLF/LF) signal band (3-30 kHz), which is effectively reflected from the D-layer of the ionosphere, can perhaps be established as a tool to predict earthquakes. On May 20 and May 29, 2012, earthquakes of magnitude 6.1 and 5.8, respectively, struck Emilia-Romagna, Italy. Later, on August 24, 2016, an earthquake of magnitude 6.2 struck Central Italy (42.706° N, 13.223° E) at 1:36 UT. We present the results obtained from the US Navy VLF transmitter's NSY signal of 45.9 kHz, transmitted from Niscemi, in the province of Sicily, Italy, and received at the Kiel Longwave Monitor, Germany, for 2012 and 2016. We analyzed the terminator times, their individual differences, and the nighttime fluctuation counts. We also analyzed trends, dispersion, and nighttime fluctuations, which gave us possible precursors to these earthquakes. Since perturbations in VLF amplitude can also be due to various other factors, such as lightning, geomagnetic activity (storms, auroras, etc.), and solar activity (flares, UV flux, etc.), we filtered out the possible perturbations due to these agents to ensure that the perturbations seen in the VLF/LF amplitudes were precursors to earthquakes. Since our transmitter-receiver great circle path (TRGCP) runs north-south, the sunrise and sunset times at the transmitter and receiver locations match, making the VLF/LF path smoother; we therefore hoped to obtain more natural data. We found many clear anomalies (as precursors) in the terminator times 5 to 16 days before the earthquakes. Moreover, using the nighttime fluctuation method, we found clear anomalies 5 to 13 days prior to the main earthquakes. This correlates exactly with the findings of previous authors that ionospheric perturbations are seen a few days to one month before seismic activity.
In addition, we observed an unexpected decrease of the dispersion for certain anomalies where it was expected to increase, which does not fully support our findings. To resolve this problem, we devised a new parameter, the nighttime dispersion. On analysis, this parameter decreases significantly on days of nighttime anomalies, thereby supporting our precursors to a large extent.
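A nighttime fluctuation count can be sketched as follows, under the assumption that a sample is counted as anomalous when it departs from a reference baseline by more than k standard deviations (the abstract does not state the exact thresholding used; the baseline, sigma, and k here are illustrative):

```python
def fluctuation_count(night_amp, baseline, sigma, k=2.0):
    """Count nighttime samples whose VLF amplitude departs from the
    reference baseline by more than k standard deviations; an elevated
    count is treated as a possible seismo-ionospheric precursor."""
    return sum(1 for a in night_amp if abs(a - baseline) > k * sigma)
```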

Keywords: D-layer, TRGCP (Transmitter Receiver Great Circle Path), terminator times, VLF/LF

Procedia PDF Downloads 191
2118 Magnetic Survey for the Delineation of Concrete Pillars in Geotechnical Investigation for Site Characterization

Authors: Nuraddeen Usman, Khiruddin Abdullah, Mohd Nawawi, Amin Khalil Ismail

Abstract:

A magnetic survey is carried out in order to locate the remains of construction items, specifically concrete pillars. The conventional Euler deconvolution technique can perform the task, but it requires the use of a fixed structural index (SI), whereas the construction items are made of materials with different shapes that require different (unknown) SIs. A Euler deconvolution technique that estimates the background, the horizontal coordinates (x0 and y0), the depth, and the structural index (SI) simultaneously is prepared and used for this task. The synthetic model study carried out indicated that the new methodology can give a good estimate of location and does not depend on the magnetic latitude. For the field data, both the total magnetic field and gradiometer readings were collected simultaneously. The computed vertical derivatives and the gradiometer readings were compared and showed good correlation, signifying the effectiveness of the method. The filtering is carried out using an automated procedure, the analytic signal, and other traditional techniques. The clustered depth solutions coincided with high amplitudes of the analytic signal, and these are the possible target positions of the concrete pillars being sought. The targets under investigation are interpreted to be located at depths between 2.8 and 9.4 meters. A follow-up survey is recommended, as this marks the preliminary stage of the work.

Keywords: concrete pillar, magnetic survey, geotechnical investigation, Euler Deconvolution

Procedia PDF Downloads 258
2117 Signal Amplification Using Graphene Oxide in Label Free Biosensor for Pathogen Detection

Authors: Agampodi Promoda Perera, Yong Shin, Mi Kyoung Park

Abstract:

The successful detection of pathogenic bacteria in blood provides important information for early detection and diagnosis, and for the prevention and treatment of infectious diseases. Silicon microring resonators are refractive-index-based optical biosensors that provide highly sensitive, label-free, real-time, multiplexed detection of biomolecules. We demonstrate a technique using graphene oxide (GO) to enhance the signal output of the silicon microring optical sensor. The activated carboxylic groups in GO molecules bind directly to single-stranded DNA with an amino-modified 5’ end. This conjugation amplifies the shift in resonant wavelength in a real-time manner. We designed a 21 bp capture probe for the strain Staphylococcus aureus and a longer complementary target sequence of 70 bp. The mismatched target sequence we used was a 70 bp sequence from Streptococcus agalactiae. GO is added after the complementary binding of the probe and target; it conjugates to the unbound single-stranded segment of the target and increases the wavelength shift on the silicon microring resonator. Furthermore, our results show that GO could successfully differentiate the mismatched DNA sequences from the complementary DNA sequence. Therefore, the proposed concept could effectively enhance the sensitivity of pathogen detection sensors.

Keywords: label free biosensor, pathogenic bacteria, graphene oxide, diagnosis

Procedia PDF Downloads 468
2116 Nonlinear Analysis in Investigating the Complexity of Neurophysiological Data during Reflex Behavior

Authors: Juliana A. Knocikova

Abstract:

Methods of nonlinear signal analysis are based on the finding that random behavior can arise in deterministic nonlinear systems with only a few degrees of freedom. In dynamical systems, entropy is usually understood as a rate of information production. Changes in the temporal dynamics of physiological data indicate that the system is evolving in time, and thus indicate the level at which new signal patterns are generated. During the last decades, many algorithms were introduced to assess patterns of physiological responses to external stimuli. However, reflex responses are usually characterized by short periods of time, a characteristic that represents a great limitation for the usual methods of nonlinear analysis. To solve the problem of short recordings, the approximate entropy parameter has been introduced as a measure of system complexity. A low value of this parameter reflects regularity and predictability in the analyzed time series. Conversely, an increase of this parameter means unpredictability and random behavior, hence a higher system complexity. Reduced complexity of neurophysiological data has been observed repeatedly when analyzing electroneurogram and electromyogram activities during defence reflex responses. Quantitative phrenic neurogram changes are also evident during severe hypoxia, as well as during airway reflex episodes. In conclusion, the approximate entropy parameter serves as a convenient tool for the analysis of reflex behavior characterized by short-lasting time series.
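Approximate entropy (in the standard Pincus formulation, which the abstract appears to refer to; the embedding dimension `m` and tolerance `r` below are typical defaults, not the study's settings) can be computed in a few lines; a perfectly regular series yields a value near zero:

```python
import math

def approx_entropy(series, m=2, r=0.2):
    """Approximate entropy: low values reflect regular, predictable
    dynamics; higher values reflect more complex, unpredictable
    signals. Usable on the short recordings typical of reflex
    responses."""
    def phi(m):
        n = len(series) - m + 1
        templates = [series[i:i + m] for i in range(n)]
        log_counts = []
        for t1 in templates:
            # fraction of templates within tolerance r (Chebyshev distance)
            c = sum(1 for t2 in templates
                    if max(abs(a - b) for a, b in zip(t1, t2)) <= r)
            log_counts.append(math.log(c / n))
        return sum(log_counts) / n
    return phi(m) - phi(m + 1)
```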

Keywords: approximate entropy, neurophysiological data, nonlinear dynamics, reflex

Procedia PDF Downloads 300
2115 Implementation of Congestion Management Strategies on Arterial Roads: Case Study of Geelong

Authors: A. Das, L. Hitihamillage, S. Moridpour

Abstract:

Natural disasters are inevitable. Disasters such as floods, tsunamis, and tornadoes can be brutal, harsh, and devastating. In Australia, flooding is a major issue experienced in different parts of the country. In such a crisis, delays in evacuation can decide the life and death of the people living in those regions. Congestion management can become a mammoth task if no steps are taken before such situations arise. In the past, many strategies were utilised to manage congestion in such circumstances, such as converting road shoulders to extra lanes or changing the road geometry by adding more lanes. However, road expansion is no longer considered a viable option for resolving congestion problems. Authorities avoid this option for many reasons, such as a lack of financial support and land space; they tend instead to focus on optimising the resources they already possess and use traffic signals to overcome congestion problems. A traffic signal management strategy was considered a viable option to alleviate congestion problems in the City of Geelong, Victoria. An arterial road with signalised intersections is considered in this paper, and the traffic data required for modelling were collected from VicRoads. The traffic signalling software SIDRA was used to model the roads based on the information gathered from VicRoads. In this paper, various signal parameters are utilised to assess and improve the corridor performance to achieve the best possible Level of Service (LOS) for the arterial road.

Keywords: congestion, constraints, management, LOS

Procedia PDF Downloads 398
2114 The Effect of Hydrogen on the Magnetic Properties of ZnO: A Density Functional Tight Binding Study

Authors: M. A. Lahmer, K. Guergouri

Abstract:

The ferromagnetic properties of carbon-doped ZnO (ZnO:CO) and hydrogenated carbon-doped ZnO (ZnO:CO+H) are investigated using the density functional tight binding (DFTB) method. Our results reveal that CO-doped ZnO is a ferromagnetic material with a magnetic moment of 1.3 μB per carbon atom. The presence of hydrogen in the material, in the form of a CO-H complex, decreases the total magnetism of the material without suppressing ferromagnetism. However, the system in this case quickly becomes antiferromagnetic when the C-C separation distance is increased.

Keywords: ZnO, carbon, hydrogen, ferromagnetism, density functional tight binding

Procedia PDF Downloads 286
2113 A Study on the Improvement of Mobile Device Call Buzz Noise Caused by Audio Frequency Ground Bounce

Authors: Jangje Park, So Young Kim

Abstract:

The market demand for audio quality in mobile devices continues to increase, and the audible buzz noise generated in time-division communication is a chronic problem that runs against this demand. In time-division communication, the RF Power Amplifier (RF PA) is driven at the audio-frequency cycle, which affects the audio signal in various ways. In this paper, we measured the ground bounce noise generated by the peak current flowing through the ground network of the RF PA at the audio frequency and confirmed that this noise is the cause of the audible buzz noise during a call. In addition, a grounding method for the microphone device that can improve the buzz noise is proposed. Considering that the level of the audio signal generated by the microphone device is -38 dBV at 94 dB Sound Pressure Level (SPL), even a ground bounce noise of several hundred µV will fall within the range of audible noise if it is induced into the audio amplifier. Through the grounding method of the microphone device proposed in this paper, it was confirmed that the audible buzz noise power density at the RF PA driving frequency was improved by more than 5 dB under the conditions of the Printed Circuit Board (PCB) used in the experiment. A fundamental improvement method is thus presented for the buzz noise during a mobile phone call.
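The audibility argument can be checked with a quick calculation: converting the quoted -38 dBV microphone level to volts and comparing it with a ground bounce amplitude gives the signal-to-noise margin at the amplifier input. The 300 µV bounce value below is an assumed example within the "several hundred µV" range mentioned in the abstract:

```python
import math

def dbv_to_volts(dbv):
    # dBV is referenced to 1 V RMS
    return 10 ** (dbv / 20.0)

def margin_db(signal_v, noise_v):
    # How far the coupled noise sits below the microphone signal, in dB
    return 20.0 * math.log10(signal_v / noise_v)

mic_v = dbv_to_volts(-38.0)   # microphone output at 94 dB SPL (from the abstract)
bounce_v = 300e-6             # hypothetical ground bounce amplitude, 300 uV
margin = margin_db(mic_v, bounce_v)
print(round(mic_v * 1e3, 2), round(margin, 1))
```

The microphone signal is only about 12.6 mV, so a bounce of a few hundred µV sits roughly 30 dB below it, close enough to be audible as buzz once amplified, which is why even a few dB of grounding improvement matters.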

Keywords: audio frequency, buzz noise, ground bounce, microphone grounding

Procedia PDF Downloads 136
2112 Hypoxia Tolerance, Longevity and Cancer-Resistance in the Mole Rat Spalax – a Liver Transcriptomics Approach

Authors: Hanno Schmidt, Assaf Malik, Anne Bicker, Gesa Poetzsch, Aaron Avivi, Imad Shams, Thomas Hankeln

Abstract:

The blind subterranean mole rat Spalax shows a remarkable tolerance to hypoxia, cancer resistance, and longevity. Unravelling the genomic basis of these adaptations will be important for biomedical applications. RNA-Seq gene expression data were obtained from normoxic and hypoxic Spalax and rat liver tissue. Hypoxic Spalax broadly downregulates genes from major liver function pathways. This energy-saving response is likely a crucial adaptation to low oxygen levels. In contrast, the hypoxia-sensitive rat shows massive upregulation of energy metabolism genes. Candidate genes with plausible connections to the mole rat’s phenotype, such as key genes related to hypoxia tolerance, DNA damage repair, tumourigenesis, and ageing, are expressed at substantially higher levels in Spalax than in rat. Comparative liver transcriptomics highlights the importance of molecular adaptations at the gene regulatory level in Spalax and pinpoints a variety of starting points for subsequent functional studies.

Keywords: cancer, hypoxia, longevity, transcriptomics

Procedia PDF Downloads 157
2111 Clinical Training Simulation Experience of Medical Sector Students

Authors: Tahsien Mohamed Okasha

Abstract:

Simulation is one of the emerging educational strategies that depend on the creation of scenarios that imitate what could happen in real life. At the time of COVID, we faced big obstacles in medical education, especially in its clinical part and how to apply it, and simulation was the golden key. Simulation is a very important educational tool for medical sector students: it creates a safe, adaptable, quiet environment with a lower anxiety level in which students can practice and repeatedly rehearse their competencies. That impacts the level of practice and achievement and the way they act in real situations and experiences. A blind random sample of students from different specialties and colleges who had completed their training in an integrated environment was collected and tested, with responses graded from 1 to 5. The results revealed that 77% of the studied subjects agreed that dealing and interacting with different medical sector candidates in the same place was beneficial, 77% agreed that the simulations challenged their thinking and decision-making skills, 75% agreed that using high-fidelity manikins was helpful, and 76% agreed that working in a safe, prepared environment prepares them for realistic situations.

Keywords: simulation, clinical training, education, medical sector students

Procedia PDF Downloads 31
2110 Preparation of Silver and Silver-Gold, Universal and Repeatable, Surface Enhanced Raman Spectroscopy Platforms from SERSitive

Authors: Pawel Albrycht, Monika Ksiezopolska-Gocalska, Robert Holyst

Abstract:

Surface Enhanced Raman Spectroscopy (SERS) is a technique of growing importance, not only in purely scientific research related to analytical chemistry: it finds more and more applications in broadly understood testing (medical, forensic, pharmaceutical, food) and works well everywhere, on the one condition that the SERS substrates used give adequate enhancement, repeatability, and homogeneity of the SERS signal. This is a problem that has existed since the invention of the technique. Some laboratories use colloids of silver or gold nanoparticles as SERS amplifiers; others form rough silver or gold surfaces, but the results are generally either weak or unrepeatable. Furthermore, these structures are very often highly specific: they amplify the signal of only a small group of compounds. They work with some analytes, but only those used at the developer’s laboratory; research on different compounds requires completely new SERS substrates. This underlay our decision to develop universal substrates for SERS spectroscopy. Each compound has a different affinity for silver and for gold, which have the best SERS properties, and this affinity determines the signal obtained in the SERS spectrum. Our task was to create a platform that gives a characteristic 'fingerprint' for the largest possible number of compounds with very high repeatability, even at the expense of the enhancement factor (EF), since the possibility of repeating research results is of the utmost importance. The SERS substrates specified above are offered by the SERSitive company. The applied method is based on cyclic potentiodynamic electrodeposition of silver or silver-gold nanoparticles on the conductive surface of ITO-coated glass at a controlled reaction-solution temperature. Silver nanoparticles are supplied in the form of silver nitrate (AgNO₃, 10 mM), gold nanoparticles are derived from tetrachloroauric acid (10 mM), and sodium sulfite (Na₂SO₃, 5 mM) is used as the reductant. To limit and standardize the size of the SERS surface on which nanoparticles are deposited, photolithography is used: we protect the desired ITO-coated glass surface and then etch away the unprotected ITO layer, which prevents nanoparticles from settling at these sites. On the prepared surface, we carry out the process described above, obtaining a SERS surface with nanoparticles of 50-400 nm. The SERSitive platforms exhibit high sensitivity (EF = 10⁵-10⁶), homogeneity, and repeatability (70-80%).
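The quoted enhancement factor follows the standard substrate EF definition: per-molecule SERS intensity divided by per-molecule normal Raman intensity. The intensity and molecule counts below are hypothetical, chosen only to land in the reported 10⁵-10⁶ range:

```python
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    # Standard substrate EF: (I_SERS / N_SERS) / (I_ref / N_ref), i.e.
    # per-molecule SERS signal relative to per-molecule normal Raman signal.
    return (i_sers / n_sers) / (i_ref / n_ref)

# Hypothetical counts: 5e4 SERS counts from 1e8 adsorbed molecules versus
# 1e3 normal Raman counts from 1e12 molecules in the probed volume.
ef = enhancement_factor(i_sers=5e4, n_sers=1e8, i_ref=1e3, n_ref=1e12)
print(f"{ef:.1e}")
```

The definition makes the trade-off in the abstract explicit: a platform can sacrifice some raw EF and still be far more useful if the per-molecule enhancement is reproducible across the substrate.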

Keywords: electrodeposition, nanoparticles, Raman spectroscopy, SERS, SERSitive, SERS platforms, SERS substrates

Procedia PDF Downloads 155
2109 Frequent-Flyer Program: The Connection between Commercial Partners and Spin-off

Authors: Changmin Jiang

Abstract:

In this paper, we build a theoretical model to investigate the relationship between two recent trends in airline frequent-flyer programs (FFPs): the adoption of the “coalition” business model with other commercial partners, and the separation from airlines’ operations. We show that commercial partners benefit from teaming up with an FFP, and that while increasing the number of commercial partners increases the total profit, it reduces the average profit of the parties involved. Furthermore, we show that the number of commercial partners of an FFP is negatively related to the benefit of keeping the FFP in-house.

Keywords: frequent flyer program, coalition, commercial partners, spin-off

Procedia PDF Downloads 302
2108 Simultaneous Determination of Cefazolin and Cefotaxime in Urine by HPLC

Authors: Rafika Bibi, Khaled Khaladi, Hind Mokran, Mohamed Salah Boukhechem

Abstract:

A high performance liquid chromatographic method with ultraviolet detection at 264 nm was developed and validated for the quantitative determination and separation of cefazolin and cefotaxime in urine. The mobile phase consisted of acetonitrile and phosphate buffer at pH 4.2 (15:85, v/v) pumped through an ODB 250 × 4.6 mm, 5 µm column at a flow rate of 1 mL/min with a 20 µL injection loop. Under these conditions, validation showed that the method is linear in the range of 0.01 to 10 µg/mL with a good correlation coefficient (R > 0.9997); the retention times of cefotaxime and cefazolin were 9.0 and 10.1 min, respectively. The statistical evaluation of the method was examined by means of within-day (n = 6) and day-to-day (n = 5) assays and was found to be satisfactory, with high accuracy and precision.
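The linearity claim behind the quoted correlation coefficient comes from an ordinary least-squares calibration fit of detector response against concentration. A minimal sketch, with hypothetical peak areas standing in for the paper's actual calibration data:

```python
def linear_fit(x, y):
    # Ordinary least squares y = a*x + b with Pearson correlation coefficient r.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx
    b = my - a * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

# Hypothetical calibration: peak area vs concentration (ug/mL) across the
# 0.01-10 ug/mL range stated in the abstract.
conc = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0]
area = [0.8, 8.1, 40.2, 79.5, 401.0, 802.3]
slope, intercept, r = linear_fit(conc, area)
print(round(slope, 2), round(r, 5))
```

A method passes the linearity criterion when r over the working range exceeds the acceptance threshold (here, R > 0.9997).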

Keywords: cefazolin, cefotaxime, HPLC, bioscience, biochemistry, pharmaceutical

Procedia PDF Downloads 363
2107 Research of Concentratibility of Low Quality Bauxite Raw Materials

Authors: Nadezhda Nikolaeva, Tatyana Alexandrova, Alexandr Alexandrov

Abstract:

Processing of high-silicon bauxite by the traditional clinkering method involves high power consumption and capital investment, which makes the production of alumina from those ores non-competitive in terms of basic economic indicators. For these reasons, it is important to develop technological solutions that enable bauxites with various chemical and mineralogical structures to be processed efficiently with low thermal power consumption. The flow sheet of the studies on the washability of ores from the Timanskoe and Severo-Onezhskoe deposits is based on the flotation method.

Keywords: low-quality bauxite, resource-saving technology, optimization, aluminum, conditioning of composition, separation characteristics

Procedia PDF Downloads 290
2106 An Optimized Approach to Generate the Possible States of Football Tournaments Final Table

Authors: Mouslem Damkhi

Abstract:

This paper focuses on the possible states of a football tournament final table according to the number of participating teams. Each team holds a position in the table from which it is possible to determine the highest and lowest points for that team. This paper proposes an optimized search space, based on the minimum and maximum number of points that can be gained by each team, to produce and enumerate the possible states of a football tournament final table. The proposed search space minimizes the production of invalid states, which cannot occur during a football tournament. The generated states are filtered by a validity-checking algorithm, which seeks to reach a tournament graph based on each generated state. Thus, the algorithm provides a way to determine which values of a team’s wins, draws, and losses guarantee a particular table position. The paper also presents and discusses the experimental results of the approach on tournaments with up to eight teams. Compared with a blind search algorithm, our proposed approach reduces the generation of invalid states by up to 99.99%, which results in a considerable optimization in terms of execution time.
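The gap between a blind search space and the set of achievable tables can be illustrated for a small single round-robin: brute-force enumeration of every match outcome yields the valid final tables, while the blind candidate space (every non-increasing point tuple with per-team points up to 3(n-1)) is far larger. This is a sketch of the idea, not the authors' optimized algorithm:

```python
from itertools import combinations, product

def tables_by_enumeration(n):
    # Brute force: enumerate every outcome of each pairwise match (win/draw/loss,
    # 3/1/0 scoring) and collect the distinct sorted final point tables.
    games = list(combinations(range(n), 2))
    tables = set()
    for outcome in product(('home', 'draw', 'away'), repeat=len(games)):
        pts = [0] * n
        for (i, j), res in zip(games, outcome):
            if res == 'home':
                pts[i] += 3
            elif res == 'away':
                pts[j] += 3
            else:
                pts[i] += 1
                pts[j] += 1
        tables.add(tuple(sorted(pts, reverse=True)))
    return tables

def candidate_states(n):
    # Blind search space: every non-increasing tuple with each team holding
    # between 0 and 3*(n-1) points -- most of these can never occur.
    top = 3 * (n - 1)
    return {t for t in product(range(top + 1), repeat=n)
            if all(a >= b for a, b in zip(t, t[1:]))}

valid = tables_by_enumeration(3)
blind = candidate_states(3)
print(len(valid), len(blind))
```

Already for three teams only 7 of the 84 blind candidates are reachable, which is exactly the kind of waste the min/max-points bounds are designed to prune before validity checking.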

Keywords: combinatorics, enumeration, graph, tournament

Procedia PDF Downloads 122
2105 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos

Authors: Nassima Noufail, Sara Bouhali

Abstract:

In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; our goal is to find groups of frames based on similarity within the video. Applying k-means clustering to all the frames is time-consuming; therefore, we started by identifying transition frames, where the scene in the video changes significantly, and then applied K-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian; each filter extracts a set of features from the frames. The Gaussian filter blurs the image and omits the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change. We then used this vector of filter responses as the input to our k-means algorithm, whose output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map in which similar pixels are grouped. We then computed a cluster score indicating how near clusters are to each other and plotted a signal of frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change while semantically related events were happening in the scene. We marked breakpoints at which the root mean square level of the signal changes significantly; each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from part one, we randomly selected a 16-frame clip and extracted spatiotemporal features with a pre-trained convolutional 3D network (C3D). The final C3D output is a 512-dimensional feature vector, so we used principal component analysis (PCA) for dimensionality reduction. The final part is classification: the C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) for training, and we used the multi-class classifier to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.
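The breakpoint-marking step (detecting a significant change in the root mean square level of the clustering-score signal) can be sketched as follows. The window size, ratio threshold, and synthetic signal are assumptions for illustration, not the authors' settings:

```python
def rms(window):
    # Root mean square level of a window of signal samples
    return (sum(x * x for x in window) / len(window)) ** 0.5

def breakpoints(signal, window=8, ratio=2.0):
    # Mark indices where the RMS level changes significantly between adjacent
    # windows -- each mark indicates the beginning of a new video segment.
    marks = []
    for i in range(window, len(signal) - window, window):
        before = rms(signal[i - window:i])
        after = rms(signal[i:i + window])
        if before > 0 and (after / before > ratio or before / after > ratio):
            marks.append(i)
    return marks

# Synthetic clustering-score signal: a stable scene followed by a busier one
scene_a = [0.1] * 32
scene_b = [0.9] * 32
print(breakpoints(scene_a + scene_b))
```

On this toy signal the only mark lands at the frame where the level jumps, which is exactly the behavior the hypothesis predicts: the signal stays flat while semantically related events continue, and a breakpoint appears at a scene change.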

Keywords: video segmentation, action detection, classification, K-means, C3D

Procedia PDF Downloads 77
2104 Effects of Oral Resveratrol Supplementation on Inflammation and Quality of Life in Patients with Ulcerative Colitis

Authors: M. Samsami, A. Hekmatdoost, N. Ebrahimi Daryani, P. Rezanejad Asl

Abstract:

Ulcerative colitis (UC) is an inflammatory bowel disease in which immune and inflammatory factors are thought to play a role. Resveratrol is an antioxidant and anti-inflammatory compound. This study determined the effects of resveratrol on inflammatory factors in patients with ulcerative colitis. The study was a double-blind randomized clinical trial conducted on 50 patients with UC. Subjects received one capsule daily for 6 weeks of either resveratrol (500 mg) or a placebo. Inflammatory factors, anthropometric measures, and IBDQ-9 (Inflammatory Bowel Disease Questionnaire-9) scores were assessed at baseline and at the end of the study. STATA 12 software was used for data analysis. No significant differences were found in the background variables between the two groups at baseline. The results indicated that resveratrol supplementation for 6 weeks significantly decreased plasma levels of TNF-α and hs-CRP and the activity of NF-κB compared with the placebo group (p < 0.001), and significant differences remained after adjustment for vitamin C (p < 0.0001). The IBDQ-9 scores increased significantly in the resveratrol group over the placebo group (p < 0.001). The findings of this study show that resveratrol supplementation can be useful in patients with ulcerative colitis.

Keywords: IBD, inflammation, resveratrol, ulcerative colitis

Procedia PDF Downloads 411
2103 Economic Analysis of a Carbon Abatement Technology

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis Pagone Emmanuele, Agbadede Roupa, Allison Isaiah

Abstract:

Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959, and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emissions to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero-emission power plant. The advanced zero-emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process, which was first introduced in 1899, when Walter Hermann Nernst investigated electric currents between metals and solutions: he found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In a bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low-carbon cycle known as the advanced zero-emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP cycle has drawn a lot of attention because of its ability to capture ~100% of CO2, a cost reduction of about 30-50% compared with other carbon abatement technologies, a smaller efficiency penalty than its counterparts, and almost zero NOx emissions due to the very low nitrogen concentration in the working fluid. The advanced zero-emission power plant differs from a conventional gas turbine in that its combustor is substituted with the MCM reactor. The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air preheater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the higher temperature also facilitates oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to its inlet, where its temperature is increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat-exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air passing through it. The AZEP cycle was modelled in Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. Four layouts were considered: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout, AZEP 85% (85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine, AZEP 85% (85% CO2 capture). This paper discusses a Monte Carlo risk analysis of these four possible layouts of the AZEP cycle.
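A Monte Carlo risk analysis of a layout's economics can be sketched as repeated net present value (NPV) sampling under an uncertain cash-flow driver such as fuel price. All figures below (capital cost, annual net cash flow, discount rate, spread) are hypothetical placeholders, not the paper's data:

```python
import random

def npv(cash_flows, rate):
    # Net present value of a cash-flow series, year 0 first
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def monte_carlo_npv(capex, annual_net, years, rate, spread_sd, runs=10000, seed=1):
    # Sample a multiplicative spread on the annual net cash flow (e.g. driven
    # by fuel price) and return the mean NPV and the probability of a loss.
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        net = annual_net * rng.gauss(1.0, spread_sd)
        results.append(npv([-capex] + [net] * years, rate))
    mean = sum(results) / runs
    p_loss = sum(1 for v in results if v < 0) / runs
    return mean, p_loss

mean_npv, p_loss = monte_carlo_npv(
    capex=500.0, annual_net=70.0, years=20, rate=0.08, spread_sd=0.15)
print(round(mean_npv, 1), p_loss)
```

Running this per layout turns single-point cost estimates into a distribution, so layouts can be compared not only on expected NPV but also on the probability of a negative outcome.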

Keywords: gas turbine, global warming, greenhouse gas, fossil fuel power plants

Procedia PDF Downloads 397