Search results for: sweep signal
1139 An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System
Authors: Yu-ding Du, Qi-lian Bao, Nassim Bessaad, Lin Liu
Abstract:
The Global Navigation Satellite System (GNSS) is regarded as an effective approach for replacing the large number of track-side balises used in modern train localization systems. This paper describes a method based on the data fusion of a GNSS receiver sensor and an odometer sensor that can significantly improve positioning accuracy. A digital track map is needed as another sensor to project the two-dimensional GNSS position to a one-dimensional along-track distance, since the train’s position is constrained to the track. A model trained with a back-propagation (BP) neural network is used to estimate the trend of the positioning error, which is related to the specific location and the approximate processing of the digital track map. Because satellite signal failure can increase the GNSS positioning error under some conditions, a detection step for the GNSS signal is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of the train speed measurement. Finally, an Extended Kalman Filter (EKF) fuses the projected 1-D GNSS positioning data and the 1-D train speed data to estimate the position. Experimental results suggest that the proposed method performs well and reduces the positioning error notably.
Keywords: multi-sensor data fusion, train positioning, GNSS, odometer, digital track map, map matching, BP neural network, adaptive weighted fusion, Kalman filter
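As a rough illustration of the final fusion stage described above, the sketch below combines a projected 1-D GNSS position with a fused speed measurement in a position/speed state filter. With a linear observation model the EKF update reduces to the ordinary Kalman form shown here; the noise variances, time step and function name are placeholders, and the BP-network error correction and adaptive speed weighting are not shown.

```python
import numpy as np

def kalman_fuse(gnss_pos, fused_speed, dt=1.0,
                q_pos=0.5, q_vel=0.1, r_pos=25.0, r_vel=0.04):
    """1-D along-track fusion of projected GNSS position and train speed.

    gnss_pos    : array of projected 1-D GNSS positions [m]
    fused_speed : array of adaptively fused speed measurements [m/s]
    Noise variances (q_*, r_*) are hypothetical placeholders.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity model
    H = np.eye(2)                                # we observe position and speed
    Q = np.diag([q_pos, q_vel])                  # process noise
    R = np.diag([r_pos, r_vel])                  # measurement noise
    x = np.array([gnss_pos[0], fused_speed[0]])  # initial state
    P = np.eye(2) * 100.0

    estimates = []
    for z in zip(gnss_pos, fused_speed):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the current position/speed measurement
        y = np.asarray(z) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```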
Procedia PDF Downloads 252
1138 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet
Authors: Amir Moslemi, Amir movafeghi, Shahab Moradi
Abstract:
Noise is one of the most important challenging factors in medical images. Image denoising refers to the improvement of a digital medical image that has been corrupted by Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise and AWGN. Computed tomography (CT) images suffer from low quality due to noise. The quality of CT images depends directly on the absorbed dose to patients (ADP), in the sense that increasing the absorbed radiation enhances image quality. Therefore, reducing noise to enhance image quality without exposing patients to excess radiation is one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two directional two-dimensional (2D) transforms, Curvelet and Contourlet, and Discrete Wavelet Transform (DWT) thresholding with the BayesShrink and AdaptShrink methods, which were compared with each other. We also propose a new threshold in the wavelet domain that not only reduces noise but also retains edges; the proposed method preserves the significant modified coefficients, resulting in good visual quality. Evaluations were performed using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Keywords: computed tomography (CT), noise reduction, curve-let, contour-let, signal to noise peak-peak ratio (PSNR), structure similarity (Ssim), absorbed dose to patient (ADP)
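For reference, the BayesShrink rule cited above estimates the noise level from the finest diagonal detail subband and applies a per-subband soft threshold σ_n²/σ_x. A minimal PyWavelets sketch of that standard rule (not the new threshold proposed in the paper; the wavelet choice and decomposition level are assumptions) might be:

```python
import numpy as np
import pywt

def bayes_shrink_denoise(img, wavelet="db8", level=3):
    """Soft-threshold wavelet detail coefficients with the BayesShrink rule."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # noise sigma from the finest HH subband (robust median estimator)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    new_coeffs = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        denoised = []
        for c in (cH, cV, cD):
            sigma_y2 = np.mean(c ** 2)                   # subband variance
            sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 1e-12))
            thr = sigma_n ** 2 / sigma_x                 # BayesShrink threshold
            denoised.append(pywt.threshold(c, thr, mode="soft"))
        new_coeffs.append(tuple(denoised))
    return pywt.waverec2(new_coeffs, wavelet)
```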
Procedia PDF Downloads 441
1137 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data
Authors: M. Kharrat, G. Moreau, Z. Aboura
Abstract:
The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the applied loading. The AE technique is a well-established tool for discriminating between damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation set-up was deployed in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, Digital Image Correlation, Acoustic Emission (AE) and micro-tomography. In this study, a multi-variable AE data analysis approach was developed to discriminate between the different signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is used to construct a supervised classifier for automatic recognition of the AE signals. Several materials with different constituents were tested under various loading conditions in order to feed and enrich the learning database. The methodology presented in this work was useful to refine the damage threshold for the new-generation materials. The damage mechanisms around this threshold were highlighted, and the obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of loading, the identified classes are reproducible and only slightly perturbed. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition
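The unsupervised clustering step can be sketched with a generic AE feature matrix and k-means, as below; the paper does not name the specific clustering algorithm or feature set, so the algorithm, the example features and the silhouette-based choice of the number of classes are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_ae_features(features, k_range=range(2, 7)):
    """Unsupervised clustering of AE hits described by a feature matrix
    (e.g. amplitude, energy, counts, duration, peak frequency), one row per hit.
    The clustering algorithm and the way k is selected are illustrative only."""
    X = StandardScaler().fit_transform(features)
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)        # higher = better separated
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels
```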
Procedia PDF Downloads 155
1136 MRI R2* of Liver in an Animal Model
Authors: Chiung-Yun Chang, Po-Chou Chen, Jiun-Shiang Tzeng, Ka-Wai Mac, Chia-Chi Hsiao, Jo-Chi Jao
Abstract:
This study aimed to measure R2* relaxation rates in the liver of New Zealand White (NZW) rabbits. The R2* relaxation rate has been widely used in various hepatic diseases to assess iron overload by quantifying the iron content of the liver. The R2* relaxation rate is defined as the reciprocal of the T2* relaxation time and depends mainly on the composition of the tissue; different tissues have different R2* relaxation rates. The signal intensity decay in magnetic resonance imaging (MRI) may be characterized by R2* relaxation rates. In this study, a 1.5 T GE Signa HDxt whole-body MR scanner equipped with an 8-channel high-resolution knee coil was used to observe R2* values in NZW rabbit liver and muscle. Eight healthy NZW rabbits weighing 2-2.5 kg were recruited. After anesthesia using a Zoletil 50 and Rompun 2% mixture, the abdomen of the rabbit was landmarked at the center of the knee coil to perform a 3-plane localizer scan using a fast spoiled gradient echo (FSPGR) pulse sequence. Afterward, multi-planar fast gradient echo (MFGR) scans were performed with eight echo times (TEs) (2/4/6/8/10/12/14/16 ms) to acquire images for R2* calculations. Regions of interest (ROIs) in liver and muscle were measured using an Advantage workstation. Finally, R2* was obtained by a linear regression of ln(SI) on TE. The results showed that the longer the echo time, the smaller the signal intensity. The R2* values of liver and muscle were 44.8 ± 10.9 s⁻¹ and 37.4 ± 9.5 s⁻¹, respectively, implying that the iron concentration of liver is higher than that of muscle. In conclusion, R2* is correlated with the iron content in tissue. The correlations between R2* and iron content in NZW rabbits might be valuable for further exploration.
Keywords: liver, magnetic resonance imaging, muscle, R2* relaxation rate
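The fit described above follows from the mono-exponential decay SI(TE) = SI₀·exp(−R2*·TE), so ln(SI) is linear in TE with slope −R2*. A minimal sketch of the per-ROI regression is shown below; the echo times match the protocol above, but the signal intensities in the example are synthetic, not the study's data.

```python
import numpy as np

def fit_r2star(te_ms, signal):
    """Estimate R2* [1/s] by linear regression of ln(SI) on TE.

    te_ms  : echo times in milliseconds, e.g. [2, 4, ..., 16]
    signal : mean ROI signal intensity at each TE
    """
    te_s = np.asarray(te_ms, dtype=float) / 1000.0     # convert to seconds
    slope, intercept = np.polyfit(te_s, np.log(signal), 1)
    return -slope                                       # R2* = -slope of ln(SI) vs TE

# illustrative (made-up) ROI intensities for the eight echoes
te = [2, 4, 6, 8, 10, 12, 14, 16]
si = [914, 835, 763, 698, 638, 583, 533, 487]
print(fit_r2star(te, si))   # ~45 s^-1 for this synthetic decay
```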
Procedia PDF Downloads 436
1135 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of the components of the grid signal, such as phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to make the system comply with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. The output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind; therefore, in order to maintain a constant DC voltage at the VSC (Voltage Source Converter) input, MPPT control is required to track the maximum power point of the PV array. In this work, a variable-step-size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used at the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e. zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zone 1 and zone 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid so as to maintain the amplitude, phase and frequency parameters as well as to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters, such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory.
Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique
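A simplified sketch of the variable-step P&O logic outlined above is given below; the zone boundary on |dP/dV|, the two step sizes and the state handling are placeholders rather than the authors' tuned values, and zones 1 and 2 are lumped together since both use the larger step.

```python
def perturb_and_observe(v, i, state, zone0_limit=0.5,
                        step_fine=0.2, step_coarse=2.0):
    """One iteration of variable-step P&O MPPT.

    v, i  : present PV voltage and current samples
    state : dict with previous voltage 'v_prev', power 'p_prev' and the
            voltage reference 'v_ref' passed to the DC/DC boost converter
    """
    p = v * i
    dv, dp = v - state["v_prev"], p - state["p_prev"]
    state["v_prev"], state["p_prev"] = v, p
    if abs(dv) < 1e-6:
        return state["v_ref"]                  # no perturbation observed yet

    slope = dp / dv                            # local dP/dV estimate
    # zone 0 (near the MPP): fine step; zones 1 and 2 (far away): coarse step
    step = step_fine if abs(slope) < zone0_limit else step_coarse

    # classic P&O direction rule: keep going if power increased, else reverse
    state["v_ref"] += step if dp * dv > 0 else -step
    return state["v_ref"]
```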
Procedia PDF Downloads 594
1134 Assessment of the Possible Effects of Biological Control Agents of Lantana camara and Chromolaena odorata in Davao City, Mindanao, Philippines
Authors: Cristine P. Canlas, Crislene Mae L. Gever, Patricia Bea R. Rosialda, Ma. Nina Regina M. Quibod, Perry Archival C. Buenavente, Normandy M. Barbecho, Cynthia Adeline A. Layusa, Michael Day
Abstract:
Invasive plants have an impact on global biodiversity and ecosystem function, and their management is a complex and formidable task. Two of these invasive plant species, Lantana camara and Chromolaena odorata, are found in the Philippines. Lantana camara has the ability to suppress the growth of and outcompete neighboring plants. Chromolaena odorata causes serious agricultural and economic damage and creates fire hazards during the dry season. In addition, both species have been reported to poison livestock. One of the known global management strategies to control invasive plants is the introduction of biological control agents. These natural enemies reduce the population density and impacts of the invasive plants, helping to restore the natural balance in invaded areas. Through secondary data sources, interviews, and field validation (e.g. microhabitat searches, sweep netting, opportunistic sampling, photo-documentation), we investigated whether the biocontrol agents previously released by the Philippine Coconut Authority (PCA) at their Davao Research Center to control these invasive plants are still present and are affecting their respective host weeds. We confirm the presence of the biocontrol agent of L. camara, Uroplata girardi, which was introduced in 1985, and of Cecidochares connexa, a biocontrol agent of C. odorata released in 2003. Four other biocontrol agents were found to affect L. camara. Signs of damage (e.g. stem galls in C. odorata and leaf mines in L. camara) indicate that these biocontrol agents have successfully established outside their release site in Davao. Further investigation of the extent of the spread of these biocontrol agents in the Philippines and their damage to the two weeds will contribute to the management of invasive plant species in the country.
Keywords: invasive alien species, biological control agent, entomology, worst weeds
Procedia PDF Downloads 374
1133 The Formation of Mutual Understanding in Conversation: An Embodied Approach
Authors: Haruo Okabayashi
Abstract:
Mutual understanding in conversation is very important for human relations. This study investigates the mental function underlying the formation of mutual understanding between two people in conversation using the embodied approach. Forty people participated in this study and were divided into pairs randomly. Four conversation situations between the two (making/listening to fun or pleasant talk, making/listening to regrettable talk) were set for four minutes each, and the finger plethysmogram (200 Hz) of each participant was measured. As a result, the attractors of the participants who reported “I did not understand my partner” show a collapsed shape, which means the fluctuation of their rhythm is too small to match their partner’s rhythm, and their cross-correlation is low. The autonomic balance of both persons tends to resonate during conversation, and both LLEs tend to resonate, too. In human history, in order for human beings as weak mammals to survive, they may have needed to stay with others; that is, they have developed resonating characteristics, which is called self-organization. However, this resonant feature sometimes collapses, depending on the lifestyle the person has formed after birth. It is difficult for people who do not have a lifestyle of mutual gaze to resonate their biological signal waves with others’. These people show features such as anxiety, fatigue, and a tendency toward confusion. Mutual understanding is thought to be formed as a result of cooperation between the self-organization features of the persons who are talking and the lifestyle indicated by mutual gaze. Such an entanglement phenomenon is called a nonlinear relation. This research found that the formation of mutual understanding is expressed by the rhythm of a biological signal showing a nonlinear relationship.
Keywords: embodied approach, finger plethysmogram, mutual understanding, nonlinear phenomenon
Procedia PDF Downloads 266
1132 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng
Abstract:
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. Existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used, with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term-labor and 38 preterm-labor recordings were preprocessed with band-pass Butterworth filters of 0.08-4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for discriminating the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term labor and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term. The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term-labor and preterm-labor groups. Any future work on classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy being among the prime candidates. Even if these methods are not yet useful for clinical practice, they do provide the most promising indicators of preterm labor.
Keywords: electrohysterogram, feature, preterm labor, term labor
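A few of the classical features listed above, namely the root mean square, zero-crossing number and spectral frequencies, can be computed from a band-pass-filtered EHG segment roughly as sketched below; the sampling rate, filter order and Welch parameters are assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def ehg_features(x, fs=20.0):
    """Classical EHG features from one channel segment sampled at fs Hz."""
    # band-pass Butterworth filter, 0.08-4 Hz as in the study
    b, a = butter(4, [0.08, 4.0], btype="bandpass", fs=fs)
    xf = filtfilt(b, a, x)

    rms = np.sqrt(np.mean(xf ** 2))                 # root mean square
    zero_crossings = np.sum(xf[:-1] * xf[1:] < 0)   # zero-crossing number

    f, pxx = welch(xf, fs=fs, nperseg=min(len(xf), 1024))
    csum = np.cumsum(pxx)
    median_freq = f[np.searchsorted(csum, csum[-1] / 2)]  # splits power in half
    peak_freq = f[np.argmax(pxx)]
    mean_freq = np.sum(f * pxx) / np.sum(pxx)
    return dict(rms=rms, zc=zero_crossings, fpeak=peak_freq,
                fmean=mean_freq, fmed=median_freq)
```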
Procedia PDF Downloads 571
1131 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space
Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson
Abstract:
Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity and resulting MRI features of GBMs complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), we explored a multi-compartmental MRI signal equation that takes into account tissue compartments and their associated volumes, with input coming from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas were comprised of vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model for generating simulated T2/FLAIR MR images. Individual compartments' T2 values in the signal equation were either taken from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms - 9200 ms). T2 maps were calculated from the simulated images, and T2 values were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema. The ROI T2 values were compared to T2 values reported in the literature. The expanding extracellular space scheme had T2 values similar to the values calculated from the literature. The static extracellular space scheme had much lower T2 values, and no matter what T2 was associated with edema, the intensities did not come close to literature values. Expanding the extracellular space is therefore necessary to achieve simulated edema intensities commensurate with acquired MRIs.
Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling
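A generic form of the multi-compartmental signal equation referred to above is sketched below; the specific compartments (vasculature, normoxic/hypoxic/necrotic cells, edema), their volume fractions and T2 values belong to the paper's model and are not reproduced here, so the expression should be read as an illustrative template rather than the exact equation used.

```latex
S(\mathrm{TE}) \;=\; S_0 \sum_{i} v_i \, e^{-\mathrm{TE}/T_{2,i}},
\qquad \sum_{i} v_i = 1,
```

where $v_i$ and $T_{2,i}$ are the volume fraction and T2 of compartment $i$ in a voxel; an apparent T2 for an ROI then follows from a mono-exponential fit $S(\mathrm{TE}) \approx S_0' e^{-\mathrm{TE}/T_2^{\mathrm{app}}}$ to the simulated multi-echo intensities.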
Procedia PDF Downloads 235
1130 Leaf Image Processing: Review
Authors: T. Vijayashree, A. Gopal
Abstract:
The aim of the work is to classify and authenticate medicinal plant materials and herbs widely used in Indian herbal medicinal preparations. The quality and authenticity of these raw materials must be ensured for the preparation of herbal medicines. The raw materials have to be carefully screened, analyzed and documented because they can be mistaken for look-alike materials that do not have medicinal characteristics.
Keywords: authenticity, standardization, principal component analysis, imaging processing, signal processing
Procedia PDF Downloads 246
1129 Effect of Electropolymerization Method in the Charge Transfer Properties and Photoactivity of Polyaniline Photoelectrodes
Authors: Alberto Enrique Molina Lozano, María Teresa Cortés Montañez
Abstract:
Polyaniline (PANI) photoelectrodes were electrochemically synthesized through electrodeposition employing three techniques: chronoamperometry (CA), cyclic voltammetry (CV), and potential pulse (PP) methods. The substrate used for electrodeposition was fluorine-doped tin oxide (FTO) glass with dimensions of 2.5 cm x 1.3 cm. Subsequently, structural and optical characterization was conducted utilizing Fourier-transform infrared (FTIR) spectroscopy and UV-visible (UV-vis) spectroscopy, respectively. The FTIR analysis revealed variations in the molar ratio of benzenoid to quinonoid rings within the PANI polymer matrix, indicative of differing oxidation states arising from the distinct electropolymerization methodologies employed. In the optical characterization, differences in the energy band gap (Eg) values and in the positions of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) were observed, attributable to variations in doping levels and structural irregularities introduced during the electropolymerization procedures. To assess the charge transfer properties of the PANI photoelectrodes, electrochemical impedance spectroscopy (EIS) experiments were carried out in a 0.1 M sodium sulfate (Na₂SO₄) electrolyte. The results showed a substantial decrease in charge transfer resistance for the PANI coatings compared to uncoated substrates, with PANI obtained through cyclic voltammetry (CV) presenting the lowest charge transfer resistance, in contrast to PANI obtained via chronoamperometry (CA) and potential pulses (PP). Subsequently, the photoactive response of the PANI photoelectrodes was measured through linear sweep voltammetry (LSV) and chronoamperometry. The photoelectrochemical measurements revealed discernible photoactivity in all PANI-coated electrodes; however, PANI electropolymerized through CV displayed the highest photocurrent. Interestingly, PANI derived from chronoamperometry (CA) exhibited the most stable photocurrent over an extended time interval.
Keywords: PANI, photocurrent, photoresponse, charge separation, recombination
Procedia PDF Downloads 65
1128 Research on Low interfacial Tension Viscoelastic Fluid Oil Displacement System in Unconventional Reservoir
Authors: Long Long Chen, Xinwei Liao, Shanfa Tang, Shaojing Jiang, Ruijia Tang, Rui Wang, Shu Yun Feng, Si Yao Wang
Abstract:
Unconventional oil reservoirs are characterized by strong heterogeneity and poor injectability, and traditional chemical flooding technology is not effective in such reservoirs; in polymer flooding of heavy oil reservoirs, the produced fluid is difficult to handle and oil wells are easily blocked. Therefore, a viscoelastic fluid flooding system with good adaptability, low interfacial tension, and plugging and diverting capabilities was studied. The viscosity, viscoelasticity, surface/interfacial activity, wettability, emulsification, and oil displacement performance of the anionic Gemini surfactant flooding system were studied, and the adaptability of the system to the reservoir environment was evaluated. The oil displacement effect of the system in low-permeability and high-permeability (heavy oil) reservoirs was investigated, and the mechanism by which the system enhances water flooding recovery was discussed. The results show that the system has temperature resistance and viscosity-increasing performance (65 °C, 4.12 mPa·s), shear resistance and viscoelasticity; at a low concentration (0.5%), the oil-water interfacial tension can be reduced to an ultra-low level (10⁻³ mN/m); the system has good emulsifying ability for heavy oil, with easy demulsification (4.5 min), and good adaptability to reservoirs with high salinity (30,000 mg/L). Oil flooding experiments show that this system can increase the water flooding recovery of low-permeability homogeneous and heterogeneous cores by 13% and 15%, respectively, and can increase the water flooding recovery of high-permeability heavy oil reservoirs by 40%. The anionic Gemini surfactant flooding system studied in this paper is a viscoelastic fluid, has good emulsifying and oil-washing ability, can effectively improve sweep efficiency and reduce injection pressure, and has broad application prospects for enhancing oil recovery in unconventional reservoirs.
Keywords: oil displacement system, recovery factor, rheology, interfacial activity, environmental adaptability
Procedia PDF Downloads 124
1127 The Implantable MEMS Blood Pressure Sensor Model With Wireless Powering And Data Transmission
Authors: Vitaliy Petrov, Natalia Shusharina, Vitaliy Kasymov, Maksim Patrushev, Evgeny Bogdanov
Abstract:
The leading causes of death worldwide are ischemic heart disease and other cardiovascular illnesses, and their common symptom is high blood pressure. Long-term blood pressure monitoring is very important for prophylaxis, correct diagnosis and timely therapy. Non-invasive methods based on Korotkoff sounds cannot be applied frequently or for long periods. Implantable devices can combine long-term monitoring with high measurement accuracy. The main purpose of this work is to create a real-time monitoring system to decrease the death rate from cardiovascular diseases. Implantable electronic devices have begun to play an important role in medicine. They usually consist of a transmitter, a power supply (which can be wireless or a specially made battery) and a measurement circuit. Common problems in making implantable devices are short battery lifetime, large size and biocompatibility. In this work, blood pressure measurement is the focus because high blood pressure is one of the main symptoms of cardiovascular diseases. Our device consists of three parts: the implantable pressure sensor, an external transmitter, and an automated workstation in a hospital. The implantable pressure sensor could be based on piezoresistive or capacitive technologies; both have advantages and limitations. The developed circuit is based on a small capacitive sensor fabricated with microelectromechanical systems (MEMS) technology. The capacitive sensor can provide high sensitivity, low power consumption and minimal hysteresis compared to a piezoresistive sensor. For this device, an oscillator-based circuit was selected, in which the frequency depends on the sensor capacitance; hence, the pressure can be calculated from the capacitance. The external device (transmitter) is used for wireless charging and signal transmission. Some implant devices for these applications are passive: the external device sends a radio-wave signal to the internal LC circuit, receives the signal reflected from the implant, and from the change in frequency it is possible to calculate the change in capacitance and then the blood pressure. However, this method has some disadvantages, such as dependence on patient position and only static use. The developed implantable device does not have these disadvantages and sends blood pressure data to the external part in real time. The external device continuously sends the blood pressure information to a hospital cloud service for analysis by a physician. The doctor's automated workstation at the hospital also acts as a dashboard, which displays the current medical data of patients requiring attention and stores them in the cloud service. Usually, critical heart conditions occur a few hours before a heart attack, and the device is able to send an alarm signal to the hospital so that the medical service can act early. The system was tested with wireless charging and data transmission. These results can be used for the ASIC design of the MEMS pressure sensor.
Keywords: MEMS sensor, RF power, wireless data, oscillator-based circuit
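As a rough illustration of the frequency-to-pressure conversion described above, the sketch below assumes a generic RC relaxation oscillator with f = 1/(k·R·C) and a linear capacitance-pressure calibration; the constant k, the resistance and the calibration values are hypothetical placeholders, since the actual circuit topology and sensor characteristics are not specified in the abstract.

```python
def capacitance_from_frequency(f_hz, r_ohm=100e3, k=1.4):
    """Sensor capacitance from oscillator frequency, assuming an RC
    relaxation oscillator with f = 1 / (k * R * C); k and R are placeholders
    that depend on the actual circuit topology."""
    return 1.0 / (k * r_ohm * f_hz)

def pressure_from_capacitance(c_f, c0_f=10e-12, sens_f_per_mmhg=5e-15):
    """Pressure [mmHg] from capacitance via a linear calibration C = C0 + s * p.
    C0 and the sensitivity s are hypothetical calibration constants."""
    return (c_f - c0_f) / sens_f_per_mmhg

# example: a measured oscillator frequency is mapped to capacitance, then pressure
p = pressure_from_capacitance(capacitance_from_frequency(680e3))
```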
Procedia PDF Downloads 589
1126 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-stokes Raman Spectroscopy
Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai
Abstract:
Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimize enzymes and exploit their full potential when structure-function knowledge is insufficient. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedure such as transfection is needed. Its open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, current screening methods have limitations. A fluorescence-based assay needs an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the label's binding affinity and photostability. To acquire the natural activity of an enzyme, another method is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turnaround time is 30 minutes per sample by HPLC, which limits the enzyme improvement acquirable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is highly demanded. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a form of coherent Raman spectroscopy that can identify chemical components, label-free, from their inherent molecular vibrations. These characteristic vibrational signals are generated by the different vibrational modes of chemical bonds. With broadband CARS, the chemicals in one sample can be identified from their signals in a single broadband CARS spectrum. Moreover, CARS can magnify signal levels to several orders of magnitude greater than spontaneous Raman systems, and therefore has the potential to evaluate a chemical's concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and the oxidized form of nicotinamide adenine dinucleotide (NAD+) to acetaldehyde and the reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, which originates from the nicotinamide in NADH, was utilized to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH, and this CARS result was consistent with the result of a conventional method, UV-Vis. CARS is expected to find application in high-throughput enzyme screening and to realize more reliable enzyme improvement within a reasonable time.
Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy
Procedia PDF Downloads 141
1125 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction
Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal
Abstract:
The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed section is interesting for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. In this paper, it is proposed to estimate the reflection coefficients using an inverse method and reference transfer functions measured in the tunnel. This approach allows the uncertainties of the propagation model used in the inverse problem to be reduced. In order to reduce the boundary layer noise, a cleaning algorithm taking advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic signal and the boundary layer noise is presented. This approach allows the acoustic signal to be recovered even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.
Keywords: acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation
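The low-rank plus sparse structure mentioned above is the setting of robust principal component analysis; the sketch below shows a simplified, real-valued inexact-augmented-Lagrangian decomposition of a measured cross-spectral matrix into a low-rank acoustic part and a sparse noise part. It is an illustrative stand-in, not the authors' algorithm, and a complex-valued cross-spectrum matrix would require the complex SVD and magnitude shrinkage.

```python
import numpy as np

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M into a low-rank part L (acoustic contribution) and a
    sparse part S (boundary layer noise), solving
        min ||L||_* + lam * ||S||_1   s.t.   M = L + S
    with an inexact augmented Lagrangian. Simplified, real-valued sketch."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(M, 2)
    Y = M / max(norm_two, np.abs(M).max() / lam)   # dual variable init
    S = np.zeros_like(M)
    mu, rho = 1.25 / norm_two, 1.5

    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: elementwise soft thresholding
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # dual ascent on the constraint residual
        Z = M - L - S
        Y = Y + mu * Z
        mu *= rho
        if np.linalg.norm(Z, "fro") <= tol * np.linalg.norm(M, "fro"):
            break
    return L, S
```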
Procedia PDF Downloads 335
1124 Differential Expression of GABA and Its Signaling Components in Ulcerative Colitis and Irritable Bowel Syndrome Pathogenesis
Authors: Surbhi Aggarwal, Jaishree Paul
Abstract:
Background: A role for GABA has been implicated in autoimmune diseases like multiple sclerosis, type 1 diabetes and rheumatoid arthritis, where it modulates the immune response, but its role in gut inflammation has not been defined. Ulcerative colitis (UC) and diarrhoea-predominant irritable bowel syndrome (IBS-D) both involve inflammation of the gastrointestinal tract. UC is a chronic, relapsing and idiopathic inflammation of the gut. IBS is a common functional gastrointestinal disorder characterised by abdominal pain, discomfort and alternating bowel habits; mild inflammation is known to occur in IBS-D. Aim: The aim of this study was to investigate the role of GABA in UC as well as in IBS-D. Materials and methods: Blood and biopsy samples from UC, IBS-D and controls were collected. ELISA was used to measure the level of GABA in the serum of UC, IBS-D and controls. RT-PCR analysis was done to determine the GABAergic signaling system in colon biopsies of UC, IBS-D and controls, and to check the expression of proinflammatory cytokines. CurveExpert 1.4 and GraphPad Prism 6 software were used for data analysis. Statistical analysis was done by unpaired, two-tailed Student's t-test. All sets of data are represented as mean ± SEM. A probability level of p < 0.05 was considered statistically significant. Results and conclusion: A significantly decreased level of GABA and an altered GABAergic signaling system were detected in UC and IBS-D as compared to controls. Significantly increased expression of proinflammatory cytokines was also determined in UC and IBS-D as compared to controls. Hence, we conclude that an insufficient level of GABA in UC and IBS-D leads to overproduction of proinflammatory cytokines, which further contributes to inflammation. GABA may be a promising therapeutic target for the treatment of gut inflammation and other inflammatory diseases.
Keywords: diarrheal predominant irritable bowel syndrome, γ-aminobutyric acid (GABA), inflammation, ulcerative colitis
Procedia PDF Downloads 226
1123 Enhancing Embedded System Efficiency with Digital Signal Processing Cores
Authors: Anil H. Dhanawade, Akshay S., Harshal M. Lakesar
Abstract:
This paper presents a comprehensive analysis of the performance advantages offered by DSP (Digital Signal Processing) cores compared to traditional MCU (Microcontroller Unit) cores in the execution of various functions critical to real-time applications. The focus is on the integration of DSP functionalities, specifically in the context of motor control applications such as Field-Oriented Control (FOC), trigonometric calculations, back-EMF estimation, digital filtering, and high-resolution PWM generation. Through comparative analysis, it is demonstrated that DSP cores significantly enhance processing efficiency, achieving faster execution times for complex mathematical operations essential for precise torque and speed control. The study highlights the capabilities of DSP cores, including single-cycle Multiply-Accumulate (MAC) operations and optimized hardware for trigonometric functions, which collectively reduce latency and improve real-time performance. In contrast, MCU cores, while capable of performing similar tasks, typically exhibit longer execution times due to reliance on software-based solutions and lack of dedicated hardware acceleration. The findings underscore the critical role of DSP cores in applications requiring high-speed processing and low-latency response, making them indispensable in the automotive, industrial, and robotics sectors. This work serves as a reference for future developments in embedded systems, emphasizing the importance of architecture choice in achieving optimal performance in demanding computational tasks.
Keywords: CPU core, DSP, assembly code, motor control
Procedia PDF Downloads 16
1122 Inter-Annual Variations of Sea Surface Temperature in the Arabian Sea
Authors: K. S. Sreejith, C. Shaji
Abstract:
Though both the Arabian Sea and its counterpart, the Bay of Bengal, are forced primarily by the semi-annually reversing monsoons, the spatio-temporal variation of surface waters is much stronger in the Arabian Sea than in the Bay of Bengal. This study focuses on the inter-annual variability of Sea Surface Temperature (SST) in the Arabian Sea by analysing the ERSST dataset, which covers 152 years of SST (January 1854 to December 2002) based on the ICOADS in situ observations. To capture the dominant SST oscillations and to understand the inter-annual SST variations in various local regions of the Arabian Sea, wavelet analysis was performed on this long SST time series. This tool is advantageous over other signal analysis tools like Fourier analysis because it unfolds a time series (signal) in both the frequency and time domains. This technique makes it easier to determine dominant modes of variability and to explain how those modes vary in time. The analysis revealed that pentadal SST oscillations predominate in most of the analysed local regions of the Arabian Sea. From the time information of the wavelet analysis, it was interpreted that these large-amplitude cold and warm events occurred during the periods 1870-1890, 1890-1910, 1930-1950, 1980-1990 and 1990-2005. SST oscillations with peaks having periods of ~2-4 years were found to be significant in the central and eastern regions of the Arabian Sea, indicating that the inter-annual SST variation in the Indian Ocean is affected by El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events.
Keywords: Arabian Sea, ICOADS, inter-annual variation, pentadal oscillation, SST, wavelet analysis
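A minimal sketch of the wavelet step described above is given below, using a continuous Morlet transform of a monthly SST series; the wavelet choice, the scale range and the crude de-meaning (rather than a proper removal of the seasonal cycle and significance testing) are illustrative assumptions.

```python
import numpy as np
import pywt

def sst_wavelet_power(sst_monthly):
    """Continuous Morlet wavelet power of a monthly SST series.

    sst_monthly : 1-D array of monthly SST values [deg C]
    Returns periods in years and the wavelet power spectrum.
    """
    anomaly = sst_monthly - np.nanmean(sst_monthly)      # crude de-meaning
    dt = 1.0 / 12.0                                      # years per sample
    scales = np.arange(1, 256)
    coeffs, freqs = pywt.cwt(anomaly, scales, "morl", sampling_period=dt)
    power = np.abs(coeffs) ** 2                          # |W(scale, time)|^2
    periods = 1.0 / freqs                                # in years
    return periods, power
```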
Procedia PDF Downloads 276
1121 Sexual Orientation, Household Labour Division and the Motherhood Wage Penalty
Authors: Julia Hoefer Martí
Abstract:
While research has consistently found a significant motherhood wage penalty for heterosexual women, for homosexual women the evidence has appeared to suggest no effect, or possibly even a wage bonus. This paper presents a model of the household with a public good that requires both a monetary expense and a labour investment, and where the household budget is shared between partners. Lower-wage partners will do relatively more of the household labour while higher-wage partners will specialise in market labour, and the arrival of a child exacerbates this split, resulting in the lower-wage partner taking on even more of the household labour in relative terms. Employers take this gender-sexuality dyad as a signal of employees' commitment to the labour market after having a child, and use the information when setting wages after employees become parents. Given that women empirically earn lower wages than men, in a heterosexual couple the female partner will often do more of the household labour. However, as not every female partner has a lower wage, this results in an over-adjustment of wages that manifests as an unexplained motherhood wage penalty. On the other hand, in homosexual couples wage distributions are ex ante identical, and gender is no longer a useful signal to employers as to whether the partner is likely to specialise in household labour or market labour. The model is then tested using longitudinal data from the EU Statistics on Income and Living Conditions (EU-SILC) to investigate the hypothesis that women experience different wage effects of motherhood depending on their sexual orientation. While heterosexual women receive a significant motherhood wage penalty of 8-10%, homosexual mothers do not receive any significant wage bonus or penalty, consistent with the hypothesis presented above.
Keywords: discrimination, gender, motherhood, sexual orientation, labor economics
Procedia PDF Downloads 164
1120 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several attractive features, such as high temporal resolution, which can reach 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without intensity change, so the sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data are removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a time stamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To overcome the difficulties caused by the data format differences, most prior art builds frame data and feeds them to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even with such frames, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, polarity information alone is not rich enough. In this context, we propose to use the timestamp information as the data representation fed to deep learning. Concretely, we first build frame data divided by a certain time period, then assign an intensity value according to the timestamp in each frame; for example, a high value is given to a recent signal. We expect this data representation to capture the features of moving objects in particular, because the timestamp encodes movement direction and speed. Using this proposed method, we made our own dataset with a DVS fixed on a parked car to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a non-dynamic situation. For comparison, we reproduced a state-of-the-art method as a benchmark, which makes frames in the same way and feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
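A rough sketch of the timestamp-based frame construction described above is given below; the windowing, the linear recency mapping and the per-pixel maximum are illustrative choices rather than the authors' exact representation.

```python
import numpy as np

def events_to_timestamp_frame(events, height, width, t_start, t_end):
    """Build one frame from DVS events inside [t_start, t_end).

    events : iterable of (x, y, polarity, timestamp)
    Pixel value encodes recency: the most recent event in the window maps
    to 1.0, the oldest to ~0.0 (a sketch of the representation, not the
    authors' exact mapping).
    """
    frame = np.zeros((height, width), dtype=np.float32)
    span = max(t_end - t_start, 1e-9)
    for x, y, _pol, ts in events:
        if t_start <= ts < t_end:
            value = (ts - t_start) / span        # newer events -> higher value
            frame[y, x] = max(frame[y, x], value)
    return frame
```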
Procedia PDF Downloads 97
1119 Intelligent Indoor Localization Using WLAN Fingerprinting
Authors: Gideon C. Joseph
Abstract:
The ability to localize mobile devices is quite important, as some applications may require location information of these devices to operate or to deliver better services to the users. Although there are several ways of acquiring location data of mobile devices, the WLAN fingerprinting approach has been considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative measure of the radio-frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is of major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile's RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is to be modelled, so that subsequent locations of mobile devices can be predicted using the developed model. Describing mathematical relationships between the RSSI measurements and localization parameters is one option for modelling the problem, but the complexity of such an approach is a serious drawback. In contrast, we propose an intelligent system that can learn the mapping from such RSSI measurements to the localization parameters to be predicted. The system is capable of improving its performance as more experiential knowledge is acquired. The most appealing aspect of using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted form two different tasks: the longitude and latitude of mobile devices are real values (a regression problem), while the floor and building are integer-valued or categorical (a classification problem). This research work presents artificial neural network-based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain the performance of the trained systems in terms of Mean Absolute Error (MAE) and error rates for the regression and classification tasks, respectively.
Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression
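A compact sketch of the two learning tasks described above is shown below, using scikit-learn multilayer perceptrons; the network architectures, hyperparameters and the fingerprint database layout are illustrative assumptions, not the networks actually trained in the work.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier
from sklearn.preprocessing import StandardScaler

def train_fingerprint_models(rssi, lon_lat, floor, building):
    """Train regression (longitude/latitude) and classification (floor,
    building) models on RSSI fingerprints. One row of rssi per fingerprint."""
    scaler = StandardScaler().fit(rssi)
    X = scaler.transform(rssi)

    reg = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500,
                       random_state=0).fit(X, lon_lat)          # real-valued targets
    floor_clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500,
                              random_state=0).fit(X, floor)     # categorical target
    bldg_clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500,
                             random_state=0).fit(X, building)   # categorical target
    return scaler, reg, floor_clf, bldg_clf

def mean_absolute_error_2d(y_true, y_pred):
    """MAE over longitude/latitude predictions."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
```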
Procedia PDF Downloads 347
1118 MnO₂-Carbon Nanotubes Catalyst for Enhanced Oxygen Reduction Reaction in Polymer Electrolyte Membrane Fuel Cell
Authors: Abidullah, Basharat Hussain, Jong Seok Kim
Abstract:
A polymer electrolyte membrane fuel cell (PEMFC) is an electrochemical cell that uses the oxygen reduction reaction to produce electrical energy. Platinum (Pt) metal has been used as the catalyst since its inception, but its expense is the major obstacle to the commercialization of fuel cells. Herein, a non-precious group metal (NPGM) catalyst is employed instead of Pt to reduce the cost of PEMFCs. Manganese dioxide-impregnated carbon nanotubes (MnO₂-CNT composite) form a catalyst with excellent electrochemical properties and offer a better alternative to platinum-based PEMFCs. The catalyst is synthesized by impregnating the transition metal onto high-surface-area carbonaceous CNTs by hydrothermal synthesis techniques. To enhance the catalytic activity and increase the volumetric current density, the sample was pyrolyzed at 800 °C under a nitrogen atmosphere. During pyrolysis, nitrogen was doped into the CNT framework. The material was then treated with acid to remove unreacted metals and to add oxygen functional groups to the CNT framework. This process improves the catalytic activity of the manganese-based catalyst. The catalyst has been characterized by scanning electron microscopy (SEM) and X-ray diffraction (XRD), and the catalytic activity has been examined by rotating disk electrode (RDE) experiments. The catalyst was robust enough to withstand a harsh alkaline environment under the experimental conditions and had high electrocatalytic activity for the oxygen reduction reaction (ORR). Linear sweep voltammetry (LSV) shows an excellent current density of -4.0 mA/cm² and an overpotential of -0.3 V vs. the standard calomel electrode (SCE) in 0.1 M KOH electrolyte. RDE measurements were conducted at 400, 800, 1200, and 1600 rpm. The catalyst exhibited higher methanol tolerance and long-term durability with respect to commercial Pt/C. The results for MnO₂-CNT suggest that this low-cost catalyst could supplant the expensive Pt/C catalyst in fuel cells.
Keywords: carbon nanotubes, methanol fuel cell, oxygen reduction reaction, MnO₂-CNTs
Procedia PDF Downloads 125
1117 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers
Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal
Abstract:
Background: We aim to develop an integrated device comprising a single-probe EEG and CCD-based motion sensors for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) for the assessment of attention during a given structured task (painting three segments of a circle using three different colors, namely red, green and blue), the CCD sensor captures the movement pattern of the subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e. whether symptoms are better explained by another condition). Methods: We have used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the subjects (3-5-year-old pre-schoolers). During the painting of three segments of a circle using three distinct colors (red, green, and blue), the absolute power of the delta and beta EEG waves from the subjects was found to be correlated with the relaxation and attention/cognitive load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., moving the variously colored balls from one table to another. We have used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD, and we have also compared our scale with clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we found a significant correlation between the objective assessment of the ADHD subjects and the clinician's conventional evaluation. Conclusion: MAHD, the integrated device, is intended as an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test
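Band-limited spectral power of the kind used above can be estimated from a single-channel EEG segment roughly as sketched below; the sampling rate, the beta band edges and the Welch window length are assumptions rather than the study's acquisition settings.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs=256.0, band=(13.0, 30.0)):
    """Absolute power of one EEG channel within a frequency band.

    eeg  : 1-D array of EEG samples
    fs   : sampling rate in Hz (assumed)
    band : (low, high) edges in Hz; (13, 30) is a common beta definition
    """
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(pxx[mask], f[mask])     # integrate PSD over the band

# example: compare beta power during the structured painting task and at rest
# beta_task = band_power(eeg_task); beta_rest = band_power(eeg_rest)
```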
Procedia PDF Downloads 99
1116 The Seeds of Limitlessness: Dambudzo Marechera's Utopian Thinking
Authors: Emily S. M. Chow
Abstract:
The word ‘utopia’ was coined by Thomas More in Utopia (1516). Its Greek roots ‘ou’, meaning ‘not’, and ‘topos’, meaning ‘place’, make it literally refer to ‘no-place’. However, the possibility of an alternative and better future society has always been appealing. In fact, at the core of every utopianism is the search for a future alternative state with the anticipation of a better life. Nonetheless, the practicality of such ideas has never ceased to be questioned. At times, building a utopia presents itself as a divisive act. In addition to the violence that must be employed to sweep away the old regime in order to make space for the new, all utopias carry within them the potential for bringing catastrophic consequences to human life. After all, every utopia seeks to remodel the individual in a very particular way for the benefit of the masses. In this sense, utopian thinking has the potential both to create and to destroy the future. Writing during a traumatic transitional period in Zimbabwe’s history, Dambudzo Marechera witnessed an age of upheavals in which different parties battled for power over Zimbabwe. Aware that all institutionalized narratives, whether they originated from the governance of the UK, Ian Smith’s white minority regime or Zimbabwe’s revolutionary parties, revealed themselves to be nothing more than fiction, Marechera realized the impossibility of determining reality absolutely. As such, this thesis concerns the writing of the Zimbabwean maverick, Dambudzo Marechera. It argues that Marechera writes a unique vision of utopia. In short, for Marechera utopia is not a static entity but a moment of perpetual change. He rethinks utopia in the sense that he phrases it as an event that ceaselessly contests institutionalized and naturalized narratives of a post-colonial self and its relationship to society. Marechera writes towards a vision of an alternative future for the country, yet it is a vision that does not constitute a fully rounded sense of utopia. Being cautious about the world and the operation of power upon the people, rather than imposing his own utopian ideals, Marechera chooses instead to peel away the narrative constitution of the self in relation to society in order to turn towards a truly radical utopian thinking that empowers the individual.
Keywords: African literature, Marechera, post-colonial literature, utopian studies
Procedia PDF Downloads 4131115 Best Practical Technique to Drain Recoverable Oil from Unconventional Deep Libyan Oil Reservoir
Authors: Tarek Duzan, Walid Esayed
Abstract:
Fluid flow in porous media is fundamentally controlled by parameters set by depositional and post-depositional environments. After deposition, diagenetic events can act negatively on the reservoir and reduce the effective porosity, thereby making the rock less permeable. Therefore, exploiting hydrocarbons from such resources requires partially altering the rock properties to improve the long-term production rate and enhance the recovery efficiency. In this study, we first address the phenomenon of permeability reduction in tight sandstone reservoirs and illustrate the procedures implemented to investigate its root causes; we then benchmark the candidate solutions at the field scale and recommend a mitigation strategy for the field development plan. Two investigations were carried out: subsurface analysis using production logging (PLT) and laboratory tests on four candidate wells of the reservoir of interest. Based on these investigations, the production logging tool (PLT) showed areas of contribution in the reservoir that are very limited relative to the total reservoir thickness. Alcohol treatment was the first choice for the AA9 well; the well productivity was partially restored, but not to its initial level. Furthermore, alcohol treatment in the laboratory was effective and restored permeability in some plugs by 98%, but operationally the challenge would be distributing enough alcohol in a wellbore to attain the sweep efficiency obtained within a laboratory core plug. The second solution, based on fracturing the wells, showed excellent results, especially for wells that had suffered a large drop in oil production. It is suggested to frac and pack the already damaged wells in the Waha field to mitigate the damage and restore productivity as much as possible. In addition, the critical fluid velocity and its effect on fine-sand migration in the reservoir must be studied on core samples, so that a suitable pressure drawdown can be applied in the reservoir to limit fine-sand migration.Keywords: alcohol treatment, post-depositional environments, permeability, tight sandstone
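As a rough illustration of the kind of screening calculation a critical-velocity study would feed into, the Python sketch below compares a near-wellbore interstitial velocity from radial Darcy flow against an assumed critical velocity for fines migration; every numerical value is an illustrative assumption, not field data.

import math

q = 300 * 0.158987 / 86400        # assumed production rate: 300 bbl/d converted to m^3/s
h = 15.0                          # assumed net pay thickness, m
phi = 0.12                        # assumed porosity, fraction
r_w = 0.1                         # assumed wellbore radius, m
v_crit = 1.0e-4                   # assumed critical interstitial velocity for fines migration, m/s

u = q / (2 * math.pi * r_w * h)   # Darcy flux at the sandface, m/s
v = u / phi                       # interstitial (pore) velocity, m/s

print(f"interstitial velocity at sandface: {v:.2e} m/s")
print("fines migration risk" if v > v_crit else "below assumed critical velocity")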
Procedia PDF Downloads 681114 Fluorescing Aptamer-Gold Nanoparticle Complex for the Sensitive Detection of Bisphenol A
Authors: Eunsong Lee, Gae Baik Kim, Young Pil Kim
Abstract:
Bisphenol A (BPA) is one of the endocrine disruptors (EDCs), which have been suspected of being associated with reproductive dysfunction and physiological abnormalities in humans. Since BPA has been widely used to make plastics and epoxy resins, the leaching of BPA from the lining of plastic products has been of major concern due to environmental and human exposure issues. The simple detection of BPA based on the self-assembly of aptamer-mediated gold nanoparticles (AuNPs) has been reported elsewhere, yet the detection sensitivity remains challenging. Here we demonstrate an improved AuNP-based sensor for BPA that combines fluorescence with AuNP colorimetry in order to overcome the drawback of traditional AuNP sensors. While the anti-BPA aptamer (full-length or truncated ssDNA) triggered the self-assembly of unmodified, citrate-stabilized AuNPs in the presence of BPA at high salt concentrations, no fluorescence signal was observed upon the subsequent addition of SYBR Green, due to the small amount of free anti-BPA aptamer. In contrast, the absence of BPA did not cause the self-assembly of the AuNPs (no color change, due to salt-bridged surface stabilization) and gave a high fluorescence signal with SYBR Green, due to the large amount of free anti-BPA aptamer. As a result, quantitative analysis of BPA was achieved by combining the absorbance of the AuNPs with the fluorescence intensity of SYBR Green as a function of BPA concentration, which gave better detection sensitivity (as low as 1 ppb) than AuNP colorimetric analysis alone. This method also enabled the detection of BPA in water-soluble extracts from thermal papers with high specificity against BPS and BPF. We suggest that this approach will be an alternative to traditional AuNP colorimetric assays in the field of aptamer-based molecular diagnosis.Keywords: bisphenol A, colorimetric, fluorescence, gold-aptamer nanobiosensor
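A minimal Python sketch of how such a combined absorbance/fluorescence readout could be turned into a calibration curve; the standards, signal values and the use of a 650/520 nm aggregation ratio are illustrative assumptions, not data from the study.

import numpy as np

conc_ppb = np.array([1, 5, 10, 50, 100])               # assumed BPA standards, ppb
a650_a520 = np.array([0.21, 0.28, 0.34, 0.47, 0.55])   # assumed AuNP aggregation ratio
fluor = np.array([0.92, 0.80, 0.71, 0.52, 0.40])       # assumed normalized SYBR Green signal

response = a650_a520 / fluor                            # one simple way to combine the two readouts
slope, intercept = np.polyfit(np.log10(conc_ppb), response, 1)

def bpa_from_signals(ratio, fl):
    # Estimate BPA concentration (ppb) from a measured absorbance ratio and fluorescence.
    return 10 ** (((ratio / fl) - intercept) / slope)

print(f"estimated BPA: {bpa_from_signals(0.40, 0.60):.1f} ppb")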
Procedia PDF Downloads 1881113 Polymer Flooding: Chemical Enhanced Oil Recovery Technique
Authors: Abhinav Bajpayee, Shubham Damke, Rupal Ranjan, Neha Bharti
Abstract:
Polymer flooding is a marked improvement over water flooding and is quickly becoming one of the leading EOR technologies for improving oil recovery. With increasing energy demand and depleting oil reserves, EOR techniques are becoming increasingly significant. Since most oil fields have already begun water flooding, this chemical EOR technique can be implemented with fewer resources than any other EOR technique. Polymer increases the viscosity of the injected water, thereby reducing water mobility and achieving a more stable displacement; field experience has confirmed that polymer flooding increases the injection viscosity. While the injection of a polymer solution improves reservoir conformance, the beneficial effect ceases as soon as one attempts to push the polymer solution with water. Polymer flooding is the most commonly applied chemical EOR technique because of its higher success rate. In polymer flooding, a water-soluble polymer such as polyacrylamide is added to the injected water; this raises the water viscosity toward that of a gel, greatly improving the efficiency of the water flood. It also improves the vertical and areal sweep efficiency as a consequence of improving the water/oil mobility ratio. Polymer flooding plays an important role in oil exploitation, but around 60 million tons of wastewater are produced per day together with the extracted oil. The treatment and reuse of this wastewater therefore become significant and can be carried out by electrodialysis technology, which not only decreases environmental pollution but also achieves a closed circuit for polymer-flooding wastewater during crude oil extraction. There are three potential ways in which a polymer flood can make the oil recovery process more efficient: (1) through the effects of polymers on fractional flow, (2) by decreasing the water/oil mobility ratio, and (3) by diverting injected water from zones that have already been swept. It has also been suggested that the viscoelastic behavior of polymers can improve displacement efficiency. Polymer flooding may also have an economic impact because less water is injected and produced compared with water flooding. Future work should focus on developing polymers that can be used in reservoirs with high temperature and high salinity, applying polymer flooding under different reservoir conditions, and combining polymer with other processes (e.g., surfactant/polymer flooding).Keywords: fractional flow, polymer, viscosity, water/oil mobility ratio
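The mobility-ratio argument can be made concrete with a short Python sketch; the endpoint relative permeabilities and viscosities below are illustrative assumptions, not values from any particular reservoir.

def mobility_ratio(krw, mu_w, kro, mu_o):
    # Endpoint water/oil mobility ratio of a water (or polymer) flood: M = (krw/mu_w) / (kro/mu_o).
    return (krw / mu_w) / (kro / mu_o)

krw, kro = 0.3, 0.8          # assumed endpoint relative permeabilities
mu_o = 5.0                   # assumed oil viscosity, cP

print(mobility_ratio(krw, 0.5, kro, mu_o))   # plain water (~0.5 cP): M = 3.75, unfavorable
print(mobility_ratio(krw, 20.0, kro, mu_o))  # polymer-thickened water (~20 cP): M ~ 0.09, favorable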
Procedia PDF Downloads 3991112 Design and Creation of a BCI Videogame for Training and Measure of Sustained Attention in Children with ADHD
Authors: John E. Muñoz, Jose F. Lopez, David S. Lopez
Abstract:
Attention Deficit Hyperactivity Disorder (ADHD) is a disorder that affects 1 out of 5 Colombian children, making it a real public health problem in the country. Conventional treatments such as medication and neuropsychological therapy have proved insufficient to decrease the high incidence of ADHD in the principal Colombian cities. This work describes the design and development of a videogame that uses a brain-computer interface not only as an input device but also as a tool to monitor neurophysiological signals. The videogame, named “The Harvest Challenge”, is set in the cultural context of a Colombian coffee grower, where the player uses his/her avatar in three mini-games created to reinforce four fundamental abilities: i) the ability to wait, ii) the ability to plan, iii) the ability to follow instructions, and iv) the ability to achieve objectives. The details of this collaborative design process of the multimedia tool according to the precise clinical requirements, and the description of the interaction proposals, are presented through the mental stages of attention and relaxation. The final videogame is presented as a tool for sustained-attention training in children with ADHD, using as its action mechanism the neuromodulation of beta and theta waves through an electrode located in the central part of the frontal lobe. The electroencephalographic signal is processed automatically inside the videogame, allowing it to generate a report of the evolution of the theta/beta ratio - a biological marker that has been shown to be a sufficient measure to discriminate children with the deficit from those without.Keywords: BCI, neuromodulation, ADHD, videogame, neurofeedback, theta/beta ratio
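A minimal Python sketch (an assumption about how such processing might look, not the authors' code) of computing a theta/beta ratio on sliding EEG windows and turning it into a simple feedback rule; the band limits, window length and threshold are illustrative.

import numpy as np
from scipy.signal import welch

def theta_beta_ratio(window, fs):
    # Ratio of summed PSD bins in the theta (4-8 Hz) and beta (13-30 Hz) bands.
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), fs))
    theta = psd[(freqs >= 4) & (freqs < 8)].sum()
    beta = psd[(freqs >= 13) & (freqs < 30)].sum()
    return theta / beta

fs = 128                                   # assumed headset sampling rate, Hz
eeg = np.random.randn(60 * fs)             # placeholder for one minute of frontal EEG
ratios = [theta_beta_ratio(eeg[i:i + 2 * fs], fs)      # 2 s windows, 1 s hop
          for i in range(0, len(eeg) - 2 * fs, fs)]

threshold = np.median(ratios)              # assumed per-session baseline
rewards = sum(r < threshold for r in ratios)   # reward windows where the ratio drops
print(f"mean theta/beta = {np.mean(ratios):.2f}, rewarded windows = {rewards}")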
Procedia PDF Downloads 3711111 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization
Authors: Subhajit Das, Nirjhar Dhang
Abstract:
Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods focus mainly on determining the location and severity of damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured in the ‘experiment’ and the signal obtained from the undamaged finite element model. This error function is minimised with a suitable algorithm, and the finite element model is updated accordingly to match the measured response. Thus, the damage location and severity can be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. natural frequencies and the modal assurance criterion (MAC); the MAC is derived from the eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced into the model by reducing the stiffness of a structural member. The ‘experimental’ data are simulated by finite element modelling, and measurement error is introduced into the synthetic ‘experimental’ data by adding random noise that follows a Gaussian distribution. The efficiency and robustness of the method are demonstrated through three examples, namely a truss, a beam and a frame problem. The results show that the TLBO algorithm efficiently detects both the location and the severity of damage using modal data.Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization
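A minimal Python sketch of the modal assurance criterion and one plausible form of the combined frequency/MAC error function minimised by the optimizer; the equal weighting and the example mode shapes are assumptions, not necessarily the formulation used in the paper.

import numpy as np

def mac(phi_a, phi_b):
    # MAC(phi_a, phi_b) = |phi_a^T phi_b|^2 / ((phi_a^T phi_a)(phi_b^T phi_b))
    return np.dot(phi_a, phi_b) ** 2 / (np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b))

def modal_error(freqs_exp, freqs_fem, modes_exp, modes_fem, w=0.5):
    # Combined relative-frequency and (1 - MAC) error; w is an assumed weighting factor.
    freqs_exp, freqs_fem = np.asarray(freqs_exp), np.asarray(freqs_fem)
    freq_term = np.sum(((freqs_exp - freqs_fem) / freqs_exp) ** 2)
    mac_term = sum(1.0 - mac(a, b) for a, b in zip(modes_exp, modes_fem))
    return w * freq_term + (1.0 - w) * mac_term

# Illustrative use with two modes of a hypothetical model.
f_exp, f_fem = [12.1, 33.4], [12.6, 32.1]
m_exp = [np.array([0.0, 0.5, 1.0]), np.array([0.0, 1.0, -0.8])]
m_fem = [np.array([0.0, 0.48, 1.02]), np.array([0.0, 0.95, -0.9])]
print(f"error = {modal_error(f_exp, f_fem, m_exp, m_fem):.4f}")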
Procedia PDF Downloads 2151110 Detection of Alzheimer's Protein on Nano Designed Polymer Surfaces in Water and Artificial Saliva
Authors: Sevde Altuntas, Fatih Buyukserin
Abstract:
Alzheimer’s disease is responsible for irreversible neural damage to parts of the brain. One of the disease markers is the Amyloid-β 1-42 protein, which accumulates in the brain in the form of plaques. The basic problem in detecting the protein is its low concentration, which cannot be measured properly in body fluids such as blood, saliva or urine. To solve this problem, tests like ELISA or PCR have been proposed, which are expensive, require specialized personnel and can involve complex protocols. Surface-enhanced Raman spectroscopy (SERS) is therefore a good candidate for the detection of the Amyloid-β 1-42 protein, because this spectroscopic technique can potentially allow even single-molecule detection from liquid and solid surfaces; moreover, the SERS signal can be improved by using a nanopatterned surface and is specific to the probed molecules. In this context, our study proposes to fabricate diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with Thioflavin-T to detect low concentrations of the Amyloid-β 1-42 protein in water and artificial saliva by enhancing the SERS signal of the protein. The nanopatterned PC surface used to enhance the SERS signal was fabricated using anodic alumina membranes (AAMs) as a template. It is possible to produce AAMs with different column structures and varying thicknesses depending on the voltage and anodization time. After fabrication, the pore diameter of the AAMs can be adjusted by treatment with a dilute acid solution. In this study, two different column structures were prepared. After a surface modification to decrease their surface energy, the AAMs were treated with a PC solution. Following solvent evaporation, nanopatterned PC films with tunable pillared structures were peeled off the membrane surface. The PC film was then modified with Au and Thioflavin-T for the detection of the Amyloid-β 1-42 protein. Protein detection studies were first conducted in water with this biosensor platform; the same measurements were then conducted in artificial saliva to detect the presence of the Amyloid-β 1-42 protein. SEM, SERS and contact-angle measurements were carried out to characterize the different surfaces and further demonstrate protein attachment. SERS enhancement factors were also calculated from the experimental results. In summary, our research group fabricated diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with Thioflavin-T to detect low concentrations of the Alzheimer’s Amyloid-β protein in water and artificial saliva. This work was supported by The Scientific and Technological Research Council of Turkey (TUBITAK) Grant No: 214Z167.Keywords: alzheimer, anodic aluminum oxide, nanotopography, surface enhanced Raman spectroscopy
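As an illustration of the enhancement-factor calculation mentioned above, a minimal Python sketch of the common analytical estimate EF = (I_SERS/N_SERS)/(I_ref/N_ref); the intensities and molecule counts below are placeholders, not the study's measurements.

def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    # Ratio of per-molecule SERS intensity to per-molecule normal-Raman intensity.
    return (i_sers / n_sers) / (i_ref / n_ref)

i_sers = 4.0e4      # assumed peak intensity on the Au-coated nanopatterned PC, counts
i_ref = 2.0e3       # assumed peak intensity of the reference on a flat substrate, counts
n_sers = 1.0e6      # assumed number of probed molecules on the SERS substrate
n_ref = 1.0e10      # assumed number of probed molecules in the reference measurement

print(f"EF = {enhancement_factor(i_sers, n_sers, i_ref, n_ref):.1e}")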
Procedia PDF Downloads 291