Search results for: signal restoration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2032

172 Nanobiosensor System for Aptamer Based Pathogen Detection in Environmental Waters

Authors: Nimet Yildirim Tirgil, Ahmed Busnaina, April Z. Gu

Abstract:

Environmental waters are monitored worldwide to protect people from infectious diseases primarily caused by enteric pathogens. Escherichia coli (E. coli) has long been used as an indicator of potential enteric pathogens in water. Thus, a rapid and simple detection method for E. coli is very important for predicting pathogen contamination. In this study, to the best of our knowledge for the first time, we developed a rapid, direct and reusable SWCNT (single-walled carbon nanotube) based biosensor system for sensitive and selective E. coli detection in water samples. We used a newly developed flexible biosensor device fabricated by a high-rate nanoscale offset printing process using directed assembly and transfer of SWCNTs. By simple directed assembly and non-covalent functionalization, an aptamer-based SWCNT biosensor system was designed (the aptamer is a biorecognition element that specifically distinguishes the E. coli O157:H7 strain from other pathogens) and further evaluated for environmental applications with simple and cost-effective steps. The two gold electrode terminals and the SWCNT bridge between them allow continuous resistance response monitoring for E. coli detection. The detection procedure is based on a competitive detection mode. A known concentration of aptamer was mixed with E. coli cells and, after a certain time, filtered. The remaining free aptamers were injected into the system. Through hybridization of the free aptamers with their probe DNA immobilized on the SWCNT surface (complementary DNA for the E. coli aptamer), we can monitor the resistance difference, which is proportional to the amount of E. coli. Thus, we can detect E. coli without injecting it directly onto the sensing surface, and we can protect the electrode surface from the aggregation of target bacteria or other pollutants that may come from real wastewater samples. After optimization experiments, the linear detection range was determined to be from 2 cfu/ml to 10⁵ cfu/ml with an R² value higher than 0.98. The system was regenerated successfully with 5 % SDS solution over 100 times without any significant deterioration of sensor performance. The developed system had high specificity towards E. coli (less than 20 % signal with other pathogens), and it could be applied to real water samples with 86 to 101 % recovery and 3 to 18 % CV values (n=3).
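The linear detection range reported above lends itself to a simple log-linear calibration. The following is a minimal sketch, assuming an illustrative set of resistance-change readings against known E. coli concentrations; the numbers are not from the study.

```python
import numpy as np

# Hypothetical calibration data: E. coli concentrations (cfu/ml) and the
# corresponding relative resistance change of the SWCNT sensor (illustrative values).
concentrations = np.array([2, 10, 1e2, 1e3, 1e4, 1e5])               # cfu/ml
resistance_change = np.array([0.05, 0.11, 0.19, 0.27, 0.36, 0.44])   # arbitrary units

# Fit a straight line against log10(concentration), as is common for
# biosensor calibration over a wide dynamic range.
log_c = np.log10(concentrations)
slope, intercept = np.polyfit(log_c, resistance_change, 1)

# Coefficient of determination (R^2) of the calibration line.
predicted = slope * log_c + intercept
ss_res = np.sum((resistance_change - predicted) ** 2)
ss_tot = np.sum((resistance_change - resistance_change.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Invert the calibration to estimate an unknown sample concentration.
measured_change = 0.30
estimated_cfu = 10 ** ((measured_change - intercept) / slope)
print(f"R^2 = {r_squared:.3f}, estimated concentration = {estimated_cfu:.1f} cfu/ml")
```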

Keywords: aptamer, E. coli, environmental detection, nanobiosensor, SWCNTs

Procedia PDF Downloads 200
171 Breast Cancer Sensing and Imaging Utilized Printed Ultra Wide Band Spherical Sensor Array

Authors: Elyas Palantei, Dewiani, Farid Armin, Ardiansyah

Abstract:

The precision of a printed microwave sensor used for sensing and monitoring potential breast cancer in women's breast tissue was optimally computed. The single UWB printed sensor element, successfully modeled through several numerical optimizations, was fabricated in multiple copies and incorporated into a woman's bra to form the spherical sensor array. One sample of the UWB microwave sensor obtained through numerical computation and optimization was chosen for fabrication. Overall, the spherical sensor array consists of twelve stair patch structures, and each element was individually measured to characterize its electrical properties, especially the return loss parameter. The comparison of the S11 profiles of all UWB sensor elements is discussed. The constructed UWB sensor was verified using HFSS simulation, CST simulation, and experimental measurement. Numerically, both HFSS and CST confirmed that the potential operating bandwidth of the UWB sensor is roughly 4.5 GHz. However, the measured bandwidth is about 1.2 GHz due to technical difficulties encountered during the manufacturing step. The UWB microwave sensing and monitoring system implemented consists of the 12-element UWB printed sensor array, a vector network analyzer (VNA) serving as the transceiver and signal processing part, and a desktop PC or laptop acting as the image processing and display unit. In practice, all the reflected power values collected from the whole surface of the artificial breast model are grouped into a number of pixel colour classes positioned on the corresponding rows and columns (pixel numbers). The total number of power pixels applied in the 2D imaging process was set to 100 (a 10x10 power distribution grid). This was determined by considering the total area of a breast phantom of average Asian breast size and matching it with the physical dimension of a single UWB sensor. The microwave imaging results are plotted and, together with some technical problems that arose in developing the breast sensing and monitoring system, are examined in the paper.
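As a rough illustration of the imaging step described above, the sketch below reshapes a flat set of reflected-power readings into the 10x10 pixel grid and quantises it into colour classes; the readings, class count, and colormap are assumptions, not values from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical reflected-power readings (one value per scan position) collected
# over the surface of the breast phantom; 100 positions -> 10 x 10 pixel image.
rng = np.random.default_rng(0)
reflected_power = rng.uniform(-40.0, -10.0, size=100)   # dB, illustrative

# Reshape into the 10 x 10 power-distribution grid used for 2D imaging.
image = reflected_power.reshape(10, 10)

# Group pixel values into a small number of colour classes, as described
# in the imaging procedure, by quantising the power range.
n_classes = 8
classes = np.digitize(image, np.linspace(image.min(), image.max(), n_classes))

plt.imshow(classes, cmap="jet", origin="lower")
plt.colorbar(label="power class")
plt.title("10 x 10 reflected-power map (illustrative)")
plt.show()
```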

Keywords: UWB sensor, UWB microwave imaging, spherical array, breast cancer monitoring, 2D-medical imaging

Procedia PDF Downloads 195
170 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. The investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, before estimating the underlying parameters in the EM step, also improved the convergence rate and system performance. The system also uses a relative index as a confidence measure in case the GMM and VQ identifications contradict each other. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
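A minimal sketch of the MFCC + GMM core of such a system is given below, assuming Python with librosa and scikit-learn in place of the MATLAB toolchain used in the paper; the file names, sampling rate, and model sizes are placeholders, and k-means initialisation stands in for the LBG codebook step.

```python
import librosa
from sklearn.mixture import GaussianMixture

def train_speaker_model(wav_path, n_components=16):
    """Fit a GMM to the MFCC features of one speaker's training audio."""
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T  # frames x coefficients
    # k-means initialisation plays a role similar to LBG codebook initialisation.
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          init_params="kmeans", max_iter=200)
    gmm.fit(mfcc)
    return gmm

def identify(wav_path, speaker_models):
    """Return the speaker whose GMM gives the highest average log-likelihood."""
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T
    scores = {name: gmm.score(mfcc) for name, gmm in speaker_models.items()}
    return max(scores, key=scores.get)

# Example usage (paths are hypothetical):
# models = {"spk1": train_speaker_model("spk1_train.wav"),
#           "spk2": train_speaker_model("spk2_train.wav")}
# print(identify("unknown.wav", models))
```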

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 310
169 Modelling and Control of Milk Fermentation Process in Biochemical Reactor

Authors: Jožef Ritonja

Abstract:

The biochemical industry is one of the most important modern industries, and biochemical reactors are its crucial devices. The essential bioprocess carried out in bioreactors is the fermentation process. A thorough insight into the fermentation process and knowledge of how to control it are essential for the effective use of bioreactors to produce products of high quality and in sufficient quantity. The development of the control system starts with the determination of a mathematical model that describes the steady-state and dynamic properties of the controlled plant satisfactorily and is suitable for control system design. The paper analyses the fermentation process in bioreactors thoroughly, using existing mathematical models. Most existing mathematical models do not allow the design of a control system for controlling the fermentation process in batch bioreactors. For this reason, a mathematical model was developed and presented that allows the development of a control system for batch bioreactors. Based on the developed mathematical model, a control system was designed to ensure an optimal response of the biochemical quantities in the fermentation process. Due to the time-varying and non-linear nature of the controlled plant, a conventional control system with a proportional-integral-derivative controller with constant parameters does not provide the desired transient response. An improved adaptive control system was proposed to improve the dynamics of the fermentation. The use of adaptive control is suggested because the parameter variations of the fermentation process are very slow. The developed control system was tested while producing dairy products in a laboratory bioreactor. The carbon dioxide concentration was chosen as the controlled variable, since it correlates well with the other quantities significant for the quality of the fermentation process. The level of the carbon dioxide concentration gives important information about the fermentation process. The obtained results showed that the designed control system provides minimum error between the reference and actual values of carbon dioxide concentration during the transient response and in steady state. The recommended control system makes reference signal tracking much more efficient than the currently used conventional control systems, which are based on linear control theory. The proposed control system represents a very effective solution for the improvement of the milk fermentation process.

Keywords: biochemical reactor, fermentation process, modelling, adaptive control

Procedia PDF Downloads 132
168 Repeatable Surface Enhanced Raman Spectroscopy Substrates from SERSitive for Wide Range of Chemical and Biological Substances

Authors: Monika Ksiezopolska-Gocalska, Pawel Albrycht, Robert Holyst

Abstract:

Surface Enhanced Raman Spectroscopy (SERS) is a technique used to analyze very low concentrations of substances in solutions, even in aqueous solutions, which is its advantage over IR. The technique can be used in pharmacy (to check the purity of products), in forensics (to establish whether illegal substances were present at a crime scene), in medicine (serving as a medical test), and in many other areas. Due to the high potential of this technique, its increasing popularity in analytical laboratories, and, simultaneously, the absence of appropriate platforms enhancing the SERS signal (which are crucial to observe the Raman effect at low analyte concentrations in solution, around 1 ppm), we decided to develop our own SERS platforms. As the enhancing layer, we chose gold and silver nanoparticles, because these two have the best SERS properties and each shows affinity for different kinds of particles, which increases the range of research capabilities. The next step was to commercialize them, which resulted in the creation of the company ‘SERSitive.eu’, focusing on the production of highly sensitive (EF = 10⁵ – 10⁶), homogeneous and reproducible (70 - 80%) substrates. SERSitive SERS substrates are made using the electrodeposition of silver or silver-gold nanoparticles. Thanks to a very detailed analysis of data from studies optimizing parameters such as deposition time, temperature of the reaction solution, applied potential, reducer used, and reagent concentrations, with a standardized compound, p-mercaptobenzoic acid (PMBA), at a concentration of 10⁻⁶ M, we have developed a high-performance process for depositing precious metal nanoparticles on the surface of ITO glass. To check the quality of the SERSitive platforms, we examined a wide range of chemical compounds and biological substances. Apart from analytes that have a great affinity for metal surfaces (e.g. PMBA), we obtained very good results for those less suited to SERS measurements. We successfully obtained intense and, more importantly, very repeatable spectra for amino acids (phenylalanine, 10⁻³ M), drugs (amphetamine, 10⁻⁴ M), designer drugs (cathinone derivatives, 10⁻³ M), and medicines, as well as bacteria (Listeria, Salmonella, Escherichia coli) and fungi.

Keywords: nanoparticles, Raman spectroscopy, SERS, SERS applications, SERS substrates, SERSitive

Procedia PDF Downloads 151
167 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before its transmission over wireless channels. High Efficiency Video Coding (HEVC), also called H.265, is the latest state-of-the-art video coding standard, developed jointly by ITU-T and ISO/IEC. HEVC targets high-resolution videos, such as 4K or 8K, and can fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream. The bitstream is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing an optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
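A minimal sketch of the modulation-and-channel stage described above is shown below, assuming Gray-coded QPSK (4-QAM) over AWGN with hard-decision demodulation; HEVC encoding and FEC are omitted, and the bit counts and SNR points are illustrative.

```python
import numpy as np

def qpsk_over_awgn(bits, snr_db):
    """Map bits to Gray-coded QPSK symbols, add AWGN, and hard-demodulate."""
    b = bits.reshape(-1, 2)
    symbols = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)  # unit energy
    snr = 10 ** (snr_db / 10)
    noise_std = np.sqrt(1 / (2 * snr))          # per-dimension noise with Es/N0 = snr
    rx = symbols + noise_std * (np.random.randn(len(symbols))
                                + 1j * np.random.randn(len(symbols)))
    out = np.empty_like(b)
    out[:, 0] = (rx.real < 0).astype(int)
    out[:, 1] = (rx.imag < 0).astype(int)
    return out.reshape(-1)

# Bit error rate of the raw (uncoded) channel at a few SNR points.
bits = np.random.randint(0, 2, 200_000)
for snr_db in (0, 4, 8, 12):
    received = qpsk_over_awgn(bits, snr_db)
    ber = np.mean(bits != received)
    print(f"SNR = {snr_db:2d} dB -> BER = {ber:.5f}")
```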

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 149
166 Development of a Real-Time Simulink Based Robotic System to Study Force Feedback Mechanism during Instrument-Object Interaction

Authors: Jaydip M. Desai, Antonio Valdevit, Arthur Ritter

Abstract:

Robotic surgery is used to enhance minimally invasive surgical procedures. It provides a greater degree of freedom for surgical tools but lacks a haptic feedback system to provide a sense of touch to the surgeon. Surgical robots work in master-slave operation, where the user is the master and the robotic arms are the slaves. Currently, surgical robots provide precise control of the surgical tools but rely heavily on visual feedback, which sometimes causes damage to internal organs. The goal of this research was to design and develop a real-time Simulink-based robotic system to study the force feedback mechanism during instrument-object interaction. The setup includes three Velmex XSlide assemblies (XYZ stage) for three-dimensional movement, an end-effector assembly for forceps, an electronic circuit for four strain gages, two Novint Falcon 3D gaming controllers, a microcontroller board with linear actuators, and MATLAB and Simulink toolboxes. The strain gages were calibrated using an Imada digital force gauge and tested with a hard-core wire to measure instrument-object interaction in the range of 0-35 N. The designed Simulink model successfully acquires 3D coordinates from the two Novint Falcon controllers and transfers the coordinates to the XYZ stage and forceps. The Simulink model also reads the strain gage signals in real time through the 10-bit analog-to-digital converter of a microcontroller assembly, converts voltage into force, and feeds the output signals back to the Novint Falcon controllers for the force feedback mechanism. The experimental setup allows the user to change the forward kinematics algorithms to achieve the best desired movement of the XYZ stage and forceps. This project combines haptic technology with a surgical robot to provide a sense of touch to the user controlling the forceps through a machine-computer interface.
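The voltage-to-force step can be illustrated with a short sketch; the ADC reference voltage and the linear calibration constants below are assumptions for illustration, not values reported in the paper.

```python
# Minimal sketch of the strain-gage read-out step: a 10-bit ADC count is
# converted to voltage and then to force through a linear calibration.
# V_REF, CAL_SLOPE and CAL_OFFSET are assumed values, not from the paper.

ADC_BITS = 10
V_REF = 5.0          # ADC reference voltage (assumed), volts
CAL_SLOPE = 7.0      # newtons per volt (assumed, so 5 V full scale ~ 35 N)
CAL_OFFSET = 0.0     # newtons at zero strain (assumed)

def adc_to_force(adc_count: int) -> float:
    """Convert a raw 10-bit ADC reading to an estimated contact force in newtons."""
    voltage = adc_count / (2 ** ADC_BITS - 1) * V_REF
    return CAL_SLOPE * voltage + CAL_OFFSET

# Example: a mid-scale reading of 512 counts maps to about 2.5 V, i.e. roughly 17.5 N here.
print(adc_to_force(512))
```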

Keywords: surgical robot, haptic feedback, MATLAB, strain gage, simulink

Procedia PDF Downloads 534
165 Abilitest Battery: Presentation of Tests and Psychometric Properties

Authors: Sylwia Sumińska, Łukasz Kapica, Grzegorz Szczepański

Abstract:

Introduction: Cognitive skills are a crucial part of everyday functioning. They include perception, attention, language, memory, executive functions, and higher cognitive skills. With the aging of societies, there is an increasing percentage of people whose cognitive skills decline. Cognitive skills affect work performance, and an appropriate diagnosis of a worker’s cognitive skills reduces the risk of errors and accidents at work, which is also important for senior workers. The study aimed to prepare new cognitive tests for adults aged 20-60 and to assess the psychometric properties of the tests. The project responds to the need for reliable and accurate methods of assessing cognitive performance. Computer tests were developed to assess psychomotor performance, attention, and working memory. Method: Two hundred eighty people aged 20-60 will participate in the study, in 4 age groups. Inclusion criteria for the study were: no subjective cognitive impairment, no history of severe head injuries, chronic diseases, or psychiatric and neurological diseases. The research will be conducted from February to June 2022. Cognitive tests: 1) Measurement of psychomotor performance: Reaction time, Reaction time with a selective attention component; 2) Measurement of sustained attention: Visual search (dots), Visual search (numbers); 3) Measurement of working memory: Remembering words, Remembering letters. To assess validity and reliability, subjects will perform the Vienna Test System, i.e., “Reaction Test” (reaction time), “Signal Detection” (sustained attention), and “Corsi Block-Tapping Test” (working memory), as well as the Perception and Attention Test (TUS), the Colour Trails Test (CTT), and Digit Span, a subtest from The Wechsler Adult Intelligence Scale. Eighty people will be invited to a session after three months, aimed at assessing consistency over time. Results: Because the research is ongoing, detailed results from the 280 participants will be shown at the conference separately for each age group. The results of the correlation analysis with the Vienna Test System will be demonstrated as well.

Keywords: aging, attention, cognitive skills, cognitive tests, psychomotor performance, working memory

Procedia PDF Downloads 106
164 Demand-Side Financing for Thai Higher Education: A Reform Towards Sustainable Development

Authors: Daral Maesincee, Jompol Thongpaen

Abstract:

Thus far, most of the decisions made within the walls of Thai higher education (HE) institutions have primarily been supply-oriented. With the current supply-driven, itemized HE financing systems, the nation is struggling to systemically produce high-quality manpower that serves the market’s needs, often resulting in education mismatches and unemployment, particularly in science, technology, and innovation (STI)-related fields. With the COVID-19 pandemic challenges widening the education inequality (accessibility and quality) gap, HE becomes even more unattainable for underprivileged students, permanently leaving some out of the system. Therefore, Thai HE needs a new financing system that produces the “right people” for the “right occupations” through the “right ways,” regardless of their socioeconomic backgrounds, and encourages the creation of non-degree courses to tackle these ongoing challenges. The “Demand-Side Financing for Thai Higher Education” policy aims to do so by offering a new paradigm of HE resource allocation via two main mechanisms: i) standardized formula-based unit-cost subsidization that is specific to each study field and ii) student loan programs that respond to the “demand signals” from the labor market and the students, in line with the country’s priorities. Through in-depth reviews, extensive studies, and consultations with various experts, education committees, and related agencies, i) the method of demand signal analysis is identified, ii) the unit cost of each student in the sample study fields is approximated, iii) the method of budget analysis is formulated, iv) the interagency workflows are established, and v) a supporting information database is created to suggest the number of graduates each HE institution can potentially produce, the study fields and skill sets needed by the labor market, the employers’ satisfaction with the graduates, and each study field’s employment rates. By responding to the needs of all stakeholders, this policy is expected to steer Thai HE toward producing more STI-related manpower in order to uplift Thai people’s quality of life and enhance the nation’s global competitiveness. The policy is currently being considered by the National Education Transformation Committee and the Higher Education Commission.

Keywords: demand-side financing, higher education resource, human capital, higher education

Procedia PDF Downloads 202
163 Exo-III Assisted Amplification Strategy through Target Recycling of Hg²⁺ Detection in Water: A GNP Based Label-Free Colorimetry Employing T-Rich Hairpin-Loop Metallobase

Authors: Abdul Ghaffar Memon, Xiao Hong Zhou, Yunpeng Xing, Ruoyu Wang, Miao He

Abstract:

Due to the deleterious environmental and health effects of Hg²⁺ ions, researchers have developed various online detection methods in addition to traditional analytical tools. Biosensors, especially labeled, label-free, colorimetric and optical sensors, have advanced towards sensitive detection. However, there remains a gap in ultrasensitive quantification, as noise interferes significantly, especially in AuNP-based label-free colorimetry. This study reports an amplification strategy using the Exo-III enzyme for target recycling of Hg²⁺ ions in a T-rich hairpin-loop metallobase label-free colorimetric nanosensor, with improved sensitivity, using unmodified gold nanoparticles (uGNPs) as the indicator. Two T-rich metallobase hairpin-loop structures, 5’- CTT TCA TAC ATA GAA AAT GTA TGT TTG -3’ (HgS1) and 5’- GGC TTT GAG CGC TAA GAA A TA GCG CTC TTT G -3’ (HgS2), were tested in the study. The thermodynamic properties of HgS1 and HgS2 were calculated using online tools (http://biophysics.idtdna.com/cgi-bin/meltCalculator.cgi). Lab-synthesized uGNPs were utilized in the analysis. The DNA sequence has T-rich bases on both tail ends, which in the presence of Hg²⁺ form T-Hg²⁺-T mismatches, promoting the formation of dsDNA. The subsequent Exo-III incubation enables the enzyme to cleave mononucleotides stepwise from the 3’ end until the structure becomes single-stranded. These ssDNA fragments then adsorb on the surface of the AuNPs and protect them from salt-induced aggregation. The visible change in colour between blue (aggregated state in the absence of Hg²⁺) and pink (dispersed state in the presence of Hg²⁺, with adsorption of ssDNA fragments) can be observed and analyzed through UV spectrometry. An ultrasensitive quantitative nanosensor employing Exo-III assisted target recycling of mercury ions through label-free colorimetry, with nanomolar detection using uGNPs, has been achieved and is under further optimization to reach the picomolar range by avoiding the influence of the environmental matrix. The proposed strategy will contribute toward uGNP-based ultrasensitive, rapid, on-site, label-free colorimetric detection.

Keywords: colorimetric, Exo-III, gold nanoparticles, Hg²⁺ detection, label-free, signal amplification

Procedia PDF Downloads 312
162 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT), and vice versa, is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application in which the transfer process depends on various parameters; transfer schemes therefore apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical Class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there are instances where transfer schemes have scrambled due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm to be selected, based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of artificial neural networks, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The scheme is implemented on the APR1400 nuclear power plant auxiliary system.
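To illustrate the neural-network half of the approach, the sketch below trains a small back-propagation classifier that maps first-cycle bus parameters to a transfer decision; the training data, labelling rule, and network size are assumptions for illustration only, not the scheme evaluated in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic first-cycle bus parameters: residual voltage magnitude (per unit),
# decay time (s), and phase angle (degrees).
rng = np.random.default_rng(1)
n = 500
residual_voltage = rng.uniform(0.0, 1.0, n)
decay_time = rng.uniform(0.05, 1.0, n)
phase_angle = rng.uniform(0.0, 180.0, n)
X = np.column_stack([residual_voltage, decay_time, phase_angle])

# Assumed labelling rule: fast transfer only if the voltage is still high and
# the phase displacement is small; otherwise fall back to residual bus transfer.
y = np.where((residual_voltage > 0.7) & (phase_angle < 20.0),
             "sequential_fast_transfer", "residual_bus_transfer")

# Back-propagation multilayer perceptron standing in for the neural-network stage.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1)
clf.fit(X, y)
print(clf.predict([[0.85, 0.2, 10.0], [0.30, 0.6, 90.0]]))
```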

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 173
161 Design of Identification Based Adaptive Control for Fermentation Process in Bioreactor

Authors: J. Ritonja

Abstract:

Biochemical technology has been developing extremely fast since the middle of the last century. The main reason for such development is the requirement for large-scale production of high-quality biologically manufactured products such as pharmaceuticals, foods, and beverages. The impact of the biochemical industry on the world economy is enormous. The great importance of this industry also results in intensive development of the scientific disciplines relevant to biochemical technology. In addition to developments in the fields of biology and chemistry, which make it possible to understand complex biochemical processes, development in the field of control theory and applications is also very important. In this paper, the control of a biochemical reactor for milk fermentation was studied. During the fermentation process, the biophysical quantities must be precisely controlled to obtain a high-quality product. To control these quantities, the bioreactor’s stirring drive and/or heating system can be used. Available commercial biochemical reactors are equipped with open-loop or conventional linear closed-loop control systems. Due to the pronounced parameter variations and the partial nonlinearity of the biochemical process, the results obtained with these control systems are not satisfactory. To improve the fermentation process, a self-tuning adaptive control system was proposed. The use of self-tuning adaptive control is suggested because the parameter variations of the studied biochemical process are, in most cases, very slow. To determine the linearized mathematical model of the fermentation process, the recursive least squares identification method was used. Based on the obtained mathematical model, a linear quadratic regulator was tuned. The parameter identification and the controller synthesis are executed on-line and adapt the controller’s parameters to the dynamics of the fermentation process during operation. The proposed combination represents an original solution for the control of the milk fermentation process. The purpose of the paper is to contribute to the progress of control systems for biochemical reactors. The proposed adaptive control system was tested thoroughly. From the obtained results, it is obvious that the proposed adaptive control system assures much better reference signal tracking than a conventional linear control system with fixed control parameters.
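The on-line identification step can be sketched with a standard recursive least squares (RLS) estimator for a discrete-time ARX model; the plant coefficients, model order, and forgetting factor below are illustrative assumptions, not the fermentation model from the paper.

```python
import numpy as np

def rls_identify(u, y, order=2, lam=0.99):
    """Recursive least squares estimation of an ARX model
    y[k] = -a1*y[k-1] - ... + b1*u[k-1] + ... from input/output data."""
    n_params = 2 * order
    theta = np.zeros(n_params)            # parameter vector [a1..an, b1..bn]
    P = np.eye(n_params) * 1e4            # large initial covariance
    for k in range(order, len(y)):
        phi = np.concatenate([-y[k-order:k][::-1], u[k-order:k][::-1]])
        K = P @ phi / (lam + phi @ P @ phi)
        theta += K * (y[k] - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# Synthetic second-order plant for demonstration (coefficients are assumptions).
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.1 * u[k-1] + 0.05 * u[k-2]
print(rls_identify(u, y))   # should approach [-1.5, 0.7, 0.1, 0.05]
```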

Keywords: adaptive control, biochemical reactor, linear quadratic regulator, recursive least square identification

Procedia PDF Downloads 126
160 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image

Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias

Abstract:

Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical areas to obtain detailed anatomical and functional information. However, MRI scans can be expensive, time-consuming, and often require the use of anesthetics to keep animals still during the imaging process. Anesthetics are commonly administered to animals undergoing MRI scans to ensure they remain still during the imaging process; however, prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity. Minimizing the duration and frequency of anesthesia is, therefore, crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the duration of animal exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories for the reconstruction of MRI of the abdomen of mice sub-sampled at levels below that defined by the Nyquist theorem. The methodology consists of using a fully sampled reference MRI of a female C57BL/6 mouse, acquired experimentally in a 4.7 Tesla small-animal MRI scanner using spin echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality assessment metrics: RMSE (root mean square error), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure). The utilization of optimized sampling trajectories and the CS technique has demonstrated the potential for a significant reduction of up to 70% in acquired image data. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.
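The three quality metrics can be computed directly with scikit-image; the sketch below assumes both images are 2D arrays scaled to [0, 1], and the random arrays merely stand in for the reference image and the CS reconstruction.

```python
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio, structural_similarity,
                             mean_squared_error)

def image_quality(reference, reconstructed):
    """Return RMSE, PSNR and SSIM between a fully sampled reference image
    and a CS reconstruction (both expected as 2D float arrays in [0, 1])."""
    rmse = np.sqrt(mean_squared_error(reference, reconstructed))
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
    ssim = structural_similarity(reference, reconstructed, data_range=1.0)
    return rmse, psnr, ssim

# Illustrative use with synthetic data standing in for the two images.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
rec = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)
print(image_quality(ref, rec))
```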

Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals

Procedia PDF Downloads 75
159 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study

Authors: Jitendra Pratap, Jonathan Sivyer

Abstract:

Introduction: Dual energy/spectral CT scanning permits simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual energy scanning (i.e., dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solution were made using Optiray 350 contrast media diluted in sterile water. The iodine concentrations ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations of 5.0 mg/ml, 7 mg/ml, 10 mg/ml and 15 mg/ml. The vials were scanned using the dual energy scan mode on a Siemens Somatom Force at 80kV/Sn150kV and 100kV/Sn150kV kilovoltage pairings. The same vials were scanned using the spectral scan mode on a Philips iQon at 120kVp and 140kVp. The images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing was performed on vendor-specific software: Siemens Syngo VIA (VB40) for the dual energy data and Philips Intellispace Portal (Ver. 12) for the spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the dual layer CT scanner. Although the dual source images showed a greater degree of deviation in measured iodine density for all vials, the dataset acquired at 80kV/Sn150kV had higher accuracy. Conclusion: Spectral CT scanning with the dual layer technique has higher accuracy for quantitative measurements of iodine density than the dual source technique.

Keywords: CT, iodine density, spectral, dual-energy

Procedia PDF Downloads 120
158 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou

Abstract:

Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential but not sufficient to establish its validity in flight. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line with the formulation L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2ⁿ − 1 (n is the quantization bit number of the payload). Meanwhile, the sensor’s response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients (G and B) are 0.0083 W·sr⁻¹m⁻²µm⁻¹ and −3.5 W·sr⁻¹m⁻²µm⁻¹; the low point of the dynamic range is −3.5 W·sr⁻¹m⁻²µm⁻¹ and the high point is 30.5 W·sr⁻¹m⁻²µm⁻¹; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor’s SNR, and the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr⁻¹m⁻²µm⁻¹; subsequently, the radiometric resolution is calculated to be about 0.1845 W·sr⁻¹m⁻²µm⁻¹. Moreover, in order to validate the results, a comparison of the measured radiance with the radiative-transfer-code prediction over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively, is performed. It is noted that the relative error for the calibration is within 6.6%.
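The calibration fit and the derived dynamic range can be reproduced with a few lines; the DN/radiance pairs and the quantization depth in the sketch below are illustrative assumptions, not the study's measurements.

```python
import numpy as np

# Minimal sketch of the in-flight calibration fit L = G * DN + B.
# The DN/radiance pairs below are illustrative, not the values measured in the study.
dn = np.array([520.0, 1830.0, 3650.0])          # image digital numbers for the 3 gray levels
radiance = np.array([0.8, 11.0, 26.8])          # MODTRAN-simulated L (W·sr⁻¹·m⁻²·µm⁻¹)

G, B = np.polyfit(dn, radiance, 1)              # least-squares straight-line fit

n_bits = 12                                     # assumed quantization depth
dn_high = 2 ** n_bits - 1
L_low, L_high = B, G * dn_high + B              # dynamic-range end points LL and LH

# Response linearity taken as the correlation coefficient of the regression.
linearity = np.corrcoef(dn, radiance)[0, 1]
print(f"G = {G:.4f}, B = {B:.2f}, dynamic range = [{L_low:.2f}, {L_high:.2f}], r = {linearity:.4f}")
```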

Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity

Procedia PDF Downloads 272
157 The Response of Adaptive Mechanism of Fluorescent Proteins from Coral Species and Target Cell Properties on Signalling Capacity as Biosensor

Authors: Elif Tugce Aksun Tumerkan

Abstract:

Fluorescent proteins (FPs) have become very popular since green fluorescent protein was discovered in the crystal jellyfish. It is known that Anthozoa species host a wide range of chromophores, and the initial crystal structure for non-fluorescent chromophores obtained from a reef-building coral has been determined. There are also differently coloured pigments in non-bioluminescent zooxanthellate and azooxanthellate Anthozoa, which are frequently members of the GFP-like protein family. The development of fluorescent proteins (FPs) and their applications is an outstanding example of basic science leading to practical biotechnological and medical applications. Fluorescent proteins have several applications in science and are used as important indicators in molecular biology and cell-based research. With rising interest in cell biology, FPs have been used as biosensor indicators and probes in pharmacology and cell biology. Using fluorescent proteins in genetically encoded metabolite sensors has many advantages over chemical probes for metabolites: they can easily be introduced into any cell or organism in any sub-cellular localization and allow fluorescence of different colours or characteristics to be fixed. Different factors affect the signalling mechanism when FPs are used as biosensors. While a wide range of research has been done on the significance and applications of fluorescent proteins, the cell signalling response of FPs and target cells is less well understood. This study aimed to clarify how the adaptive mechanisms of coral species to factors such as pH, temperature, and the symbiotic relationship, together with target cell properties, affect signalling capacity. Corals are a rich natural source of fluorescent proteins that change with environmental conditions such as light, heat stress, and injury. The adaptation mechanism of coral species to these types of environmental variation is an important factor because FP properties are affected by this mechanism. Since fluorescent proteins are obtained from nature, their ecological properties, such as the symbiotic relationships observed very commonly in coral species, and their living conditions have an impact on FP efficiency. Target cell properties also have an effect on signalling and visualization: the dynamic range of the detector used for reading fluorescence and the level of background fluorescence are key parameters for the quality of the fluorescent signal. Among the factors considered, it can be concluded that the adaptive characteristics of coral species have the strongest effect on FP signalling capacity.

Keywords: biosensor, cell biology, environmental conditions, fluorescent protein, sea anemone

Procedia PDF Downloads 170
156 Broadband Ultrasonic and Rheological Characterization of Liquids Using Longitudinal Waves

Authors: M. Abderrahmane Mograne, Didier Laux, Jean-Yves Ferrandis

Abstract:

Rheological characterization of complex liquids like polymer solutions is of great scientific interest to researchers in many fields, such as biology, the food industry, and chemistry. In order to establish master curves (elastic moduli vs. frequency), which can give information about microstructure, classical rheometers or viscometers (such as Couette systems) are used. For broadband characterization of the sample, the temperature is modified over a very large range, leading to equivalent frequency modifications by applying the Time-Temperature Superposition principle. For many liquids undergoing phase transitions, this approach is not applicable. That is the reason why the development of broadband spectroscopic methods around room temperature has become a major concern. In the literature, many solutions have been proposed but, to our knowledge, there is no experimental bench giving the whole rheological characterization for frequencies from a few Hz (hertz) to many MHz (megahertz). Consequently, our goal is to investigate, in a nondestructive way and over a very broad frequency band (a few Hz to hundreds of MHz), rheological properties using longitudinal ultrasonic waves (L waves), a unique experimental bench, and a specific container for the liquid: a test tube. More specifically, we aim to estimate the three viscosities (longitudinal, shear, and bulk) and the complex elastic moduli (M*, G*, and K*), respectively the longitudinal, shear, and bulk moduli. We have decided to use only L waves conditioned in two ways: bulk L waves in the liquid or guided L waves in the test tube walls. In this paper, we will present first results for very low frequencies using the ultrasonic tracking of a falling ball in the test tube. This will lead to the estimation of the shear viscosity from a few mPa·s to a few Pa·s (pascal seconds). Corrections due to the small dimensions of the tube will be applied and discussed with regard to the size of the falling ball. Then the use of bulk L wave propagation in the liquid and the development of specific signal processing to assess longitudinal velocity and attenuation will lead to the evaluation of the longitudinal viscosity in the MHz frequency range. Finally, the first results concerning the propagation, generation, and processing of guided compressional waves in the test tube walls will be discussed. All these approaches and results will be compared to standard methods available and already validated in our lab.
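As a low-frequency illustration, the sketch below estimates the shear viscosity from the tracked terminal velocity of the falling ball using Stokes' law; the ball size, densities, and velocity are assumed values, and the tube-wall correction mentioned in the abstract is only noted, not derived.

```python
# Minimal sketch of the low-frequency shear viscosity estimate from the
# ultrasonically tracked falling ball, using Stokes' law. Numerical values
# are illustrative assumptions, not measurements from the study.

g = 9.81                  # m/s^2
ball_radius = 1.0e-3      # m (assumed)
rho_ball = 7800.0         # kg/m^3, steel ball (assumed)
rho_liquid = 1000.0       # kg/m^3 (assumed)
terminal_velocity = 0.02  # m/s, obtained from the ultrasonic tracking (assumed)

# Stokes' law for a sphere falling at terminal velocity in an unbounded fluid:
# eta = 2 r^2 g (rho_ball - rho_liquid) / (9 v)
eta = 2 * ball_radius**2 * g * (rho_ball - rho_liquid) / (9 * terminal_velocity)
print(f"Apparent shear viscosity: {eta:.3f} Pa.s")

# In a narrow test tube the ball falls more slowly than in an unbounded fluid,
# so a wall correction depending on the ball-to-tube diameter ratio (as discussed
# in the paper) should be applied to this apparent value.
```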

Keywords: nondestructive measurement for liquid, piezoelectric transducer, ultrasonic longitudinal waves, viscosities

Procedia PDF Downloads 265
155 Relationship between Readability of Paper-Based Braille and Character Spacing

Authors: T. Nishimura, K. Doi, H. Fujimoto, T. Wada

Abstract:

The number of people with acquired visual impairments has increased in recent years. In specialized courses at schools for the blind and in Braille lessons offered by social welfare organizations, many people with acquired visual impairments cannot learn to read Braille adequately. One of the reasons is that the common Braille patterns, intended for people with visual impairments who already have mature Braille reading skills, are difficult to read for Braille beginners. In addition, Braille book manufacturing companies have scant knowledge regarding which Braille patterns would be easy for beginners to read. Therefore, it is necessary to investigate Braille patterns that are easy for beginners to read. In order to obtain knowledge regarding suitable Braille patterns for beginners, this study aimed to elucidate the relationship between the readability of paper-based Braille and its patterns. The study focused on character spacing, which readily affects Braille reading ability, to determine a suitable character spacing ratio (the ratio of character spacing to dot spacing) for beginners. Specifically, considering beginners with acquired visual impairments who are unfamiliar with reading Braille, we quantitatively evaluated the effect of the character spacing ratio on Braille readability through an evaluation experiment using sighted subjects with no experience of reading Braille. In this experiment, ten blindfolded sighted adults were asked to read a test piece (three Braille characters). The Braille used as the test piece was composed of five dots. Subjects were asked to touch the Braille by sliding their forefinger on the test piece immediately after the test examiner gave the signal to start, and to release their forefinger from the test piece when they perceived the Braille characters. Seven conditions of character spacing ratio were used (i.e., 1.2, 1.4, 1.5, 1.6, 1.8, 2.0, 2.2), and four conditions of dot spacing (i.e., 2.0, 2.5, 3.0, 3.5 mm). Ten trials were conducted for each condition. The test pieces were created using NISE Graphic, which can print Braille with arbitrary values of character spacing and dot spacing at high accuracy. We adopted correct rate, reading time, and subjective readability as evaluation indices to investigate how the character spacing ratio affects Braille readability. The results showed that Braille reading beginners could read Braille accurately and quickly when the character spacing ratio is more than 1.8 and the dot spacing is more than 3.0 mm. Furthermore, it is difficult for beginners to read Braille accurately and quickly when both character spacing and dot spacing are small. This study thus reveals a character spacing ratio that makes reading easy for Braille beginners.

Keywords: Braille, character spacing, people with visual impairments, readability

Procedia PDF Downloads 286
154 Moderating Effect of Owner's Influence on the Relationship between the Probability of Client Failure and Going Concern Opinion Issuance

Authors: Mohammad Noor Hisham Osman, Ahmed Razman Abdul Latiff, Zaidi Mat Daud, Zulkarnain Muhamad Sori

Abstract:

The problem of Malaysian auditors not issuing going concern opinions (GC opinions) to seriously financially distressed companies is still a pressing issue. Policy makers, particularly the Financial Statement Review Committee (FSRC) of the Malaysian Institute of Accountants, raised this issue as early as 2009. Similar problems have occurred in the US, the UK, and many developing countries. It is important for auditors to issue GC opinions properly because such opinions are one signal of the viability of a company much needed by stakeholders. There are at least two unanswered questions, or research gaps, in the literature on determinants of GC opinions. Firstly, is a client’s probability of failure associated with GC opinion issuance? Secondly, to what extent do influential owners (management, family, and institutions) moderate the association between client probability of failure and GC opinion issuance? The objective of this study is, therefore, twofold: (1) to examine the extent of the relationship between the probability of client failure and the issuance of GC opinions, and (2) to examine the extent to which the levels of management, family, and institutional ownership moderate the association between client probability of failure and the issuance of GC opinions. The study is quantitative in nature, and the sources of data are secondary (mainly companies’ annual reports). A total of four hypotheses were developed and tested on data drawn from the annual reports of seriously financially distressed Malaysian public listed companies. Data from 2006 to 2012, on a sample of 644 observations, have been analyzed using panel logistic regression. It is found that certainty (rather than probability) of client failure affects the issuance of GC opinions. In addition, it is found that only the level of family ownership positively moderates the relationship between client probability of failure and GC opinion issuance. This study contributes to the auditing literature, as its findings can enhance our understanding of audit quality, particularly the variables associated with the issuance of GC opinions. The findings shed light on the role of family owners in the GC opinion issuance process and open ways for researchers to suggest measures that can be used to tackle the problem of auditors being unwilling to issue GC opinions to financially distressed clients. The suggested measures can be useful to policy makers in formulating future promulgations.

Keywords: audit quality, auditing, auditor characteristics, going concern opinion, Malaysia

Procedia PDF Downloads 261
153 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, in the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are affected. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault or disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault or disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they allow faults to be more clearly discriminated from environmental disturbances.
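The input-residual idea can be illustrated with a short sketch in which the inverse-simulation input trajectory is compared against the applied input and a step fault is flagged by a simple threshold; the signals, fault size, and threshold are synthetic stand-ins, not the rover model from the paper.

```python
import numpy as np

# Minimal sketch of the InvSim fault-detection idea: compare the actuator inputs
# actually applied with the inputs recovered by inverse simulation of the desired
# trajectory, and flag a fault when the residual exceeds a threshold.
# A real implementation would obtain u_invsim from the inverse model of the rover dynamics.

t = np.arange(0.0, 10.0, 0.01)
u_invsim = 0.5 * np.sin(0.5 * t)                 # input trajectory from inverse simulation
u_actual = u_invsim.copy()
u_actual[t >= 6.0] -= 0.3                        # step fault: partial loss of one actuator

residual = u_actual - u_invsim
threshold = 0.1                                  # assumed detection threshold
fault_detected = np.abs(residual) > threshold

if fault_detected.any():
    print(f"Input fault detected at t = {t[fault_detected][0]:.2f} s, "
          f"magnitude ~ {residual[fault_detected].mean():+.2f}")
```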

Keywords: fault detection, ground robot, inverse simulation, rover

Procedia PDF Downloads 308
152 Comparison of EMG Normalization Techniques Recommended for Back Muscles Used in Ergonomics Research

Authors: Saif Al-Qaisi, Alif Saba

Abstract:

Normalization of electromyography (EMG) data in ergonomics research is a prerequisite for interpreting the data. Normalizing accounts for variability in the data due to differences in participants’ physical characteristics, electrode placement protocols, time of day, and other nuisance factors. Typically, normalized data are reported as a percentage of the muscle’s isometric maximum voluntary contraction (%MVC). Various MVC techniques have been recommended in the literature for normalizing the EMG activity of back muscles. This research tests and compares the MVC techniques recommended in the literature for three back muscles commonly used in ergonomics research: the lumbar erector spinae (LES), latissimus dorsi (LD), and thoracic erector spinae (TES). Six healthy males from a university population participated in this research. Five different MVC exercises were compared for each muscle using the Trigno wireless EMG system (Delsys Inc.). Since the LES and TES share similar functions in controlling trunk movements, their MVC exercises were the same and included trunk extension at -60°, trunk extension at 0°, trunk extension while standing, hip extension, and the arch test. The MVC exercises identified in the literature for the LD were chest-supported shoulder extension, prone shoulder extension, lat pull-down, internal shoulder rotation, and abducted shoulder flexion. The maximum EMG signal was recorded during each MVC trial, and the averages were then computed across participants. A one-way analysis of variance (ANOVA) was used to determine the effect of MVC technique on muscle activity. Post-hoc analyses were performed using the Tukey test. The MVC technique effect was statistically significant for each of the muscles (p < 0.05); however, a larger sample of participants would be needed to detect significant differences in the Tukey tests. The arch test was associated with the highest EMG average at the LES, and it also resulted in the maximum EMG activity more often than the other techniques (three out of six participants). For the TES, trunk extension at 0° was associated with the largest EMG average, and it resulted in the maximum EMG activity most often (three out of six participants). For the LD, participants obtained their maximum EMG either from chest-supported shoulder extension (three out of six participants) or prone shoulder extension (three out of six participants). Chest-supported shoulder extension, however, had a larger average than prone shoulder extension (0.263 and 0.240, respectively). Although all the aforementioned techniques were superior in their averages, they did not always result in the maximum EMG activity. If an accurate estimate of the true MVC is desired, more than one technique may have to be performed. This research provides additional MVC techniques for each muscle that may elicit the maximum EMG activity.
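A minimal sketch of the %MVC normalization step is given below, assuming rectified EMG arrays and a moving-average envelope; the sampling rate, window length, and synthetic signals are illustrative, not the study's processing pipeline.

```python
import numpy as np

def normalize_to_mvc(task_emg, mvc_trials, fs=2000, window_s=0.5):
    """Express a task EMG envelope as %MVC.

    task_emg   : rectified task EMG signal (1D array)
    mvc_trials : list of rectified EMG signals from the MVC exercises
    The MVC reference is the largest moving-average amplitude found across
    all MVC trials (window length `window_s` seconds). Parameters are assumed.
    """
    win = int(fs * window_s)
    kernel = np.ones(win) / win

    def peak_envelope(x):
        return np.convolve(np.abs(x), kernel, mode="valid").max()

    mvc_reference = max(peak_envelope(trial) for trial in mvc_trials)
    task_envelope = np.convolve(np.abs(task_emg), kernel, mode="valid")
    return 100.0 * task_envelope / mvc_reference

# Example with synthetic signals standing in for recorded EMG.
rng = np.random.default_rng(0)
mvc_trials = [rng.standard_normal(10_000) * a for a in (0.8, 1.0, 0.9)]
task = rng.standard_normal(20_000) * 0.4
print(normalize_to_mvc(task, mvc_trials).max())   # peak task activity as %MVC
```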

Keywords: electromyography, maximum voluntary contraction, normalization, physical ergonomics

Procedia PDF Downloads 194
151 The Response of Mammal Populations to Abrupt Changes in Fire Regimes in Montane Landscapes of South-Eastern Australia

Authors: Jeremy Johnson, Craig Nitschke, Luke Kelly

Abstract:

Fire regimes, climate, and topographic gradients interact to influence ecosystem structure and function across fire-prone montane landscapes worldwide. Biota have developed a range of adaptations to historic fire regime thresholds, which allow them to persist in these environments. In south-eastern Australia, a signal of fire regime change is emerging across these landscapes, and anthropogenic climate change is likely to be one of the main drivers of the increase in burnt area and more frequent wildfire over the last 25 years. This shift has the potential to modify vegetation structure and composition at broad scales, which may lead to landscape patterns to which biota are not adapted, increasing the likelihood of local extirpation of some mammal species. This study aimed to address concerns related to the influence of abrupt changes in fire regimes on mammal populations in montane landscapes. It first examined the impact of climate, topography, and vegetation on fire patterns and then explored the consequences of these changes for mammal populations and their habitats. Field studies were undertaken across diverse vegetation, fire severity, and fire frequency gradients, utilising camera trapping and passive acoustic monitoring methodologies and the collection of fine-scale vegetation data. Results show that drought is a primary contributor to fire regime shifts at the landscape scale, while topographic factors have a variable influence on wildfire occurrence at finer scales. Frequent, high-severity wildfire influenced forest structure and composition at broad spatial scales and, at fine scales, reduced the occurrence of hollow-bearing trees and promoted coarse woody debris. Mammals responded differently to shifts in forest structure and composition depending on their habitat requirements. This study highlights the complex interplay between fire regimes, environmental gradients, and biotic adaptations across temporal and spatial scales. It emphasizes the importance of understanding these complex interactions to effectively manage fire-prone ecosystems in the face of climate change.

Keywords: fire, ecology, biodiversity, landscape ecology

Procedia PDF Downloads 74
150 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards is moving towards a trend of slimness. The change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to those of a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study discussed the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and force (35±10 g, 60±10 g, and 85±10 g) of the keyboard. Moreover, MANOVA and Taguchi methods (based on signal-to-noise ratios) were applied to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, whereas those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction (L-shaped, 3 mm, and 60±10 g) was identified as the optimal combination. The learning curve was analyzed to make a comparison with a traditional standard keyboard and to investigate the influence of user experience on keyboard operation. The results indicated that the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and for comprehensive application to touch devices and input interfaces with which people interact.
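
A minimal sketch of the larger-the-better Taguchi signal-to-noise ratio used to rank factor levels is given below; the accuracy scores and shape levels are illustrative assumptions, not the study's measurements.

```python
# Sketch of the larger-the-better Taguchi signal-to-noise (S/N) ratio used to
# rank factor levels; the accuracy values below are illustrative only.
import numpy as np

def sn_larger_is_better(y):
    """S/N = -10 * log10(mean(1 / y^2)) -- higher is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical typing-accuracy scores (fraction correct) for three key shapes.
accuracy_by_shape = {
    "circle":    [0.88, 0.90, 0.85, 0.91],
    "rectangle": [0.90, 0.92, 0.89, 0.93],
    "L-shaped":  [0.94, 0.95, 0.93, 0.96],
}

for shape, scores in accuracy_by_shape.items():
    print(f"{shape:>10}: S/N = {sn_larger_is_better(scores):.2f} dB")
# The level with the highest S/N ratio is selected for that design factor.
```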

Keywords: input performance, mobile device, slim keyboard, tactile feedback

Procedia PDF Downloads 300
149 IL6/PI3K/mTOR/GFAP Molecular Pathway Role in COVID-19-Induced Neurodegenerative Autophagy, Impacts and Relatives

Authors: Mohammadjavad Sotoudeheian

Abstract:

COVID-19, which began in December 2019, uses the angiotensin-converting enzyme 2 (ACE2) receptor to enter and spread through cells. ACE2 mRNA is present in almost every organ, including the nasopharynx and lung, as well as the brain. Ports of entry of SARS-CoV-2 into the central nervous system (CNS) may include the arterial circulation, particularly when viremia is marked. However, it is imperative to develop neurological symptom evaluation and CSF analysis in patients with COVID-19; theoretically, ACE2 receptors are expressed in cerebellar cells and may be a target for SARS-CoV-2 infection in the brain. Recent evidence agrees that SARS-CoV-2 can impact the brain through direct and indirect injury. Two biomarkers of CNS injury, glial fibrillary acidic protein (GFAP) and neurofilament light chain (NFL), have been detected in the plasma of patients with COVID-19. NFL, an axonal protein expressed in neurons, is related to axonal neurodegeneration, and GFAP is over-expressed in CNS inflammation. Cytoplasmic accumulation of GFAP causes Schwann cells to malfunction, which affects myelin generation, reduces the neuroskeletal support for NFLs during CNS inflammation, and leads to axonal degeneration. Interleukin-6 (IL-6), which is extensively over-expressed during the interleukin storm of COVID-19 inflammation, regulates gene expression, including GFAP, through the STAT molecular pathway. IL-6 also influences the phosphoinositide 3-kinase (PI3K)/STAT/Smads pathway. The PI3K/protein kinase B (Akt) pathway is the main modulator upstream of the mammalian target of rapamycin (mTOR), and alterations in this pathway are common in neurodegenerative diseases. Most neurodegenerative diseases show a disruption of autophagic function and display an abnormal increase in protein aggregation that promotes cellular death. Therefore, induction of autophagy has been recommended as a rational approach to help neurons clear abnormal protein aggregates and survive. mTOR is a major regulator of the autophagic process and is regulated by cellular stressors. The mTORC1 pathway and mTORC2, as complementary and important elements in mTORC1 signaling, have become relevant in the regulation of the autophagic process and cellular survival through the extracellular signal-regulated kinase (ERK) pathway.

Keywords: mTORC1, COVID-19, PI3K, autophagy, neurodegeneration

Procedia PDF Downloads 86
148 Carbon Based Wearable Patch Devices for Real-Time Electrocardiography Monitoring

Authors: Hachul Jung, Ahee Kim, Sanghoon Lee, Dahye Kwon, Songwoo Yoon, Jinhee Moon

Abstract:

We fabricated a wearable patch device including a novel patch-type flexible dry electrode based on carbon nanofibers (CNFs) and a silicone-based elastomer (MED 6215) for real-time ECG monitoring. There are many methods of making a flexible conductive polymer by mixing in metal or carbon-based nanoparticles. In this study, CNFs were selected as the conductive nanoparticles because carbon nanotubes (CNTs) are more difficult to disperse uniformly in the elastomer than CNFs, and silver nanowires are relatively costly and easily oxidized in air. The wearable patch is composed of two parts: a dry electrode part for recording biosignals and a sticky patch part for mounting on the skin. The dry electrode part was made by mixing with a vortexer and baking in a prepared mold. To optimize electrical performance and dispersion uniformity, we developed a unique mixing and baking process. Secondly, the sticky patch part was made by spin-coating a soft skin adhesive, patterning it, and detaching it from a smooth-surface substrate. In this process, the attachment and detachment strengths of the sticky patch were measured and optimized using a monitoring system. The assembled patch is flexible, stretchable, easily mountable on the skin, and directly connectable to the system. To evaluate its electrical characteristics and ECG (electrocardiography) recording performance, the wearable patch was tested with varying CNF concentrations and dry electrode thicknesses. The results showed that CNF concentration and dry electrode thickness were important variables for obtaining high-quality ECG signals without incidental distractions. A cytotoxicity test was conducted to demonstrate biocompatibility, and a long-term wearing test showed no skin reactions such as itching or erythema. To minimize noise from motion artifacts and line noise, we built a customized wireless, lightweight data acquisition system. ECG signals measured with this system were stable and successfully monitored simultaneously. In summary, the fabricated wearable patch devices can readily be used for real-time ECG monitoring.

Keywords: carbon nanofibers, ECG monitoring, flexible dry electrode, wearable patch

Procedia PDF Downloads 185
147 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques

Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang

Abstract:

Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy to overcome this problem is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast spoiled gradient-echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of FIESTA images is related to T1/T2 and differs from that of SSFSE. The advantages and disadvantages of these two scanning sequences for fetal imaging have not yet been clearly demonstrated. This study aimed to compare these three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations. The image qualities and influencing factors among the three techniques were explored. A 1.5 T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used in this study. Twenty-five pregnant women were recruited to undergo fetal MRI examinations with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired. Afterwards, the MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA provide good image quality among the three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but it is prone to banding artifacts. FSPGR-T1WI yields a lower Signal-to-Noise Ratio (SNR) because it suffers severely from the impact of maternal and fetal movements. The scan times for the three sequences were 25 s (T2W-SSFSE), 20 s (FIESTA) and 18 s (FSPGR). In conclusion, all three rapid MR scanning sequences can produce high-contrast, high-spatial-resolution images. The scan time can be shortened by incorporating parallel imaging techniques so that motion artifacts caused by fetal movements are reduced. A good understanding of the characteristics of these three rapid MRI techniques is helpful for technologists seeking to obtain reproducible, high-quality fetal anatomy images for prenatal diagnosis.

Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE

Procedia PDF Downloads 531
146 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

Traditional classification Convolutional Neural Networks (CNNs) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone’s camera in real time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. These basic detectors have been regularly used to determine what type of object an item is, such as “person” or “dog.” A recent advancement in computer vision, particularly with human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROIs), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method of pilot identification among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose supports combining multiple keypoint detection methods in real time within a single network. Body keypoint detection allows simple poses to act as the pilot identifier. Hand keypoint detection, with ROIs for each finger, then offers a greater variety of signal options for the pilot once identified. In this work, an individual must raise their non-control arm to be identified as the operator and then send commands with the hand of their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and then they can begin controlling the drone with their other hand. This is all performed mid-flight, with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone’s 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to body and hand detection only, allows this control method to perform as intended while maintaining the responsiveness required for practical use.
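
A minimal sketch of the operator-selection rule described above, assuming OpenPose-style (x, y, confidence) keypoints; the BODY_25 indices, the confidence threshold and the choice of the left arm as the non-control arm are assumptions made for illustration.

```python
# Sketch of the pilot-selection rule: the person holding their non-control arm
# raised becomes the operator, and only their hand keypoints are used for
# commands. Keypoint indices follow the usual OpenPose BODY_25 layout (assumed
# here): 4 = right wrist, 7 = left wrist, 2/5 = right/left shoulder.
from typing import List, Optional

R_WRIST, L_WRIST, R_SHOULDER, L_SHOULDER = 4, 7, 2, 5
CONF_MIN = 0.3  # minimum keypoint confidence to trust a detection (assumed)

def arm_raised(person, wrist, shoulder):
    """A wrist counts as 'raised' if it is detected above its shoulder (smaller y)."""
    wx, wy, wc = person[wrist]
    sx, sy, sc = person[shoulder]
    return wc > CONF_MIN and sc > CONF_MIN and wy < sy

def select_pilot(people: List[dict], current: Optional[int]) -> Optional[int]:
    """Keep the current operator while their non-control (left) arm stays up;
    otherwise hand control to the first person raising their left arm."""
    if current is not None and current < len(people):
        if arm_raised(people[current], L_WRIST, L_SHOULDER):
            return current          # operator still holds control
    for idx, person in enumerate(people):
        if arm_raised(person, L_WRIST, L_SHOULDER):
            return idx              # new operator identified
    return None                     # nobody is claiming control

# Example: two detected people; only person 1 has their left wrist above the shoulder.
people = [
    {R_WRIST: (200, 300, 0.9), L_WRIST: (260, 320, 0.9), R_SHOULDER: (210, 250, 0.9), L_SHOULDER: (250, 250, 0.9)},
    {R_WRIST: (500, 310, 0.9), L_WRIST: (540, 180, 0.9), R_SHOULDER: (510, 250, 0.9), L_SHOULDER: (545, 250, 0.9)},
]
print("Pilot index:", select_pilot(people, current=None))   # -> 1
# In the control loop, the hand keypoints of people[pilot] would then be mapped
# to drone commands; all other detections are ignored.
```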

Keywords: computer vision, drone control, keypoint detection, openpose

Procedia PDF Downloads 185
145 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables

Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner

Abstract:

High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks – networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behaviour, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT). To achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with RMT predictions. The results achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
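
The nearest-neighbour level-spacing comparison with RMT can be sketched as follows; the spectrum is a random stand-in (eigenvalues of a GOE matrix), not measured cable resonances, and a crude global normalization is used in place of proper spectral unfolding.

```python
# Sketch of the nearest-neighbour level-spacing comparison with Random Matrix
# Theory: spacings of the spectrum are histogrammed and set against the GOE
# Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4) and the Poisson law
# P(s) = exp(-s). The "resonances" below are a random stand-in, not measured data.
import numpy as np

rng = np.random.default_rng(0)
n = 400
a = rng.normal(size=(n, n))
levels = np.sort(np.linalg.eigvalsh((a + a.T) / 2.0))   # proxy spectrum (GOE)

# Crude global normalization to unit mean spacing (a full analysis would
# unfold the spectrum against the local mean density instead).
spacings = np.diff(levels)
s = spacings / spacings.mean()

# Empirical histogram versus the two reference distributions.
bins = np.linspace(0, 3, 31)
hist, edges = np.histogram(s, bins=bins, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
wigner = (np.pi / 2.0) * centres * np.exp(-np.pi * centres**2 / 4.0)
poisson = np.exp(-centres)

for c, h, w, p in zip(centres[::5], hist[::5], wigner[::5], poisson[::5]):
    print(f"s={c:4.2f}  empirical={h:5.2f}  Wigner(GOE)={w:5.2f}  Poisson={p:5.2f}")
```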

Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line

Procedia PDF Downloads 174
144 Frequency Domain Decomposition, Stochastic Subspace Identification and Continuous Wavelet Transform for Operational Modal Analysis of Three Story Steel Frame

Authors: Ardalan Sabamehr, Ashutosh Bagchi

Abstract:

Recently, Structural Health Monitoring (SHM) based on the vibration of structures has attracted the attention of researchers in different fields such as civil, aeronautical and mechanical engineering. Operational Modal Analysis (OMA) techniques have been developed to identify the modal properties of infrastructure such as bridges and buildings. Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI) and the Continuous Wavelet Transform (CWT) are the three most common methods in output-only modal identification. FDD, SSI, and CWT operate in the frequency domain, the time domain, and the time-frequency plane, respectively. Consequently, FDD and SSI are not able to display time and frequency at the same time; moreover, they have difficulties in noisy environments and in identifying closely spaced modes. The CWT technique, which has been developed more recently, works on the time-frequency plane and offers reasonable performance under such conditions. Another advantage of the wavelet transform over other current techniques is that it can also be applied to non-stationary signals. The aim of this paper is to compare the three most common modal identification techniques in finding the modal properties (natural frequency, mode shape, and damping ratio) of a three-story steel frame built in the Concordia University laboratory, using ambient vibration. The frame is made of galvanized steel, 60 cm long, 27 cm wide and 133 cm high, with no bracing along the long or short spans. Three uniaxial wired accelerometers (MicroStrain, 100 mV/g sensitivity) were attached to the middle of each floor, and a gateway received the data and sent it to the PC using Node Commander software. Real-time monitoring was performed for 20 seconds at a 512 Hz sampling rate. The test was repeated five times in each direction using hand shaking and an impact hammer. CWT is able to detect the instantaneous frequency by means of a ridge detection method. In this paper, a partial-derivative ridge detection technique has been applied to the local maxima of the time-frequency plane to detect the instantaneous frequency. The results extracted from all three methods have been compared, demonstrating that CWT performs best in terms of accuracy in a noisy environment. The modal parameters, namely natural frequency, damping ratio and mode shapes, are identified by all three methods.
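
A minimal sketch of the FDD step, the first of the three methods above, is given below: a cross power spectral density matrix is built from the acceleration channels and its first singular value is tracked over frequency. The synthetic three-channel record and its modal frequencies are stand-ins, not the laboratory data.

```python
# Sketch of Frequency Domain Decomposition (FDD): build the cross power
# spectral density (CPSD) matrix of the acceleration channels and track its
# first singular value over frequency; peaks indicate natural frequencies.
import numpy as np
from scipy.signal import csd

fs = 512                      # sampling rate used in the test (Hz)
t = np.arange(0, 20, 1 / fs)  # 20-second record
rng = np.random.default_rng(1)

# Stand-in "floor accelerations": two modes plus noise, roughly mimicking a
# lightly damped frame (frequencies and shapes are illustrative, not lab values).
modes = [(3.2, [1.0, 0.7, 0.4]), (9.8, [1.0, -0.5, -0.9])]
y = np.zeros((3, t.size))
for f0, shape in modes:
    drive = np.cos(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
    y += np.outer(shape, drive)
y += 0.5 * rng.normal(size=y.shape)

# CPSD matrix G(f) for every channel pair, then an SVD at each frequency line.
nper = 1024
freqs, _ = csd(y[0], y[0], fs=fs, nperseg=nper)
G = np.zeros((freqs.size, 3, 3), dtype=complex)
for i in range(3):
    for j in range(3):
        _, G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=nper)

s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(freqs.size)])
peak = np.argmax(s1)
print(f"Dominant singular-value peak at about {freqs[peak]:.2f} Hz")
# The first singular vector at each peak gives the corresponding mode shape estimate.
```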

Keywords: ambient vibration, frequency domain decomposition, stochastic subspace identification, continuous wavelet transform

Procedia PDF Downloads 296
143 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept

Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani

Abstract:

Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of an electric field in an electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed, and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain the particle trajectories in the device and therefore to calculate the signal reported by each electrometer. According to the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to the electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in the Shannon sense, is the "average amount of information contained in an event, sample or character extracted from a data stream". Evaluating the responses (signals) obtained via various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of the injected particles was the modified configuration. It was also the one with the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Ultimately, various clouds of particles were introduced to the simulations, and the predicted size distributions were compared to the exact size distributions.
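
One way to extract an entropy measure from a transfer matrix is sketched below; the specific construction (Shannon entropy of normalized channel responses and a von Neumann-style entropy of the normalized Gram matrix) is an assumed illustration of the benchmark, not necessarily the authors' exact definition, and the transfer matrix itself is a random stand-in.

```python
# Sketch of extracting an entropy measure from an instrument transfer matrix.
# The construction here is one plausible reading of the benchmark, not the
# authors' exact definition; the matrix entries are random placeholders.
import numpy as np

rng = np.random.default_rng(2)
# Stand-in transfer matrix: rows = electrometer channels, columns = particle
# size bins; entries = relative response of a channel to a given size.
T = np.abs(rng.normal(size=(10, 40)))

def shannon_entropy(p, eps=1e-12):
    p = p / p.sum()
    return float(-np.sum(p * np.log2(p + eps)))

# Average Shannon entropy of the per-channel response distributions (bits).
row_entropy = np.mean([shannon_entropy(row) for row in T])

# Von Neumann-style entropy S = -Tr(rho * log rho) of a density-matrix-like
# normalization of the Gram matrix T @ T.T.
gram = T @ T.T
rho = gram / np.trace(gram)
eigs = np.linalg.eigvalsh(rho)
eigs = eigs[eigs > 1e-12]
von_neumann = float(-np.sum(eigs * np.log2(eigs)))

print(f"Mean per-channel Shannon entropy: {row_entropy:.2f} bits")
print(f"Von Neumann entropy of the transfer matrix: {von_neumann:.2f} bits")
# A configuration whose transfer matrix carries more entropy is expected to
# preserve more information about the injected size distribution.
```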

Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, von neumann entropy

Procedia PDF Downloads 344