Search results for: physiological signals
452 An Investigation of Peptide Functionalized Gold Nanoparticles on Colon Cancer Cells for Biomedical Application
Authors: Rolivhuwa Bishop Ramagoma1*, Lynn Cairncross1, Saartjie Roux1
Abstract:
According to the World Health Organisation, colon cancer is among the most common cancers diagnosed in both men and women. Specifically, it is the second leading cause of cancer-related deaths, accounting for over 860 000 deaths worldwide in 2018. Currently, chemotherapy has become an essential component of most cancer treatments. Despite progress in cancer drug development over recent years, traditional chemotherapeutic drugs still have low selectivity for targeting tumour tissues and are frequently constrained by dose-limiting toxicity. The creation of nanoscale delivery vehicles capable of delivering treatment directly into cancer cells has recently caught the interest of researchers. Herein, the development of peptide-functionalized polyethylene glycol gold nanoparticles (Peptide-PEG-AuNPs) as a cellular probe and delivery agent is described, with the broader aim of developing a specific diagnostic prototype and assessing its specificity not only against cell lines but also against primary human cells. Gold nanoparticles (AuNPs) were synthesized and stabilized through chemical conjugation. The synthesized AuNPs were characterized, their stability in physiological solutions was assessed, and their cytotoxicity against colon carcinoma cells and non-carcinoma skin fibroblasts was studied. Furthermore, genetic effects were examined through real-time polymerase chain reaction (RT-PCR), and localization, uptake, and peptide specificity were also determined. In this study, the different peptide-AuNPs were found to have preferential toxicity at higher concentrations, as revealed by cell viability assays; however, all AuNPs presented excellent stability for over 3 months following the method of synthesis. The final peptide-PEG-AuNP conjugates showed good biocompatibility in the presence of high ionic solutions and biological media, and good cellular uptake. Formulation of a colon cancer-specific targeting peptide was successful; additionally, the genes/pathways affected by the treatments were determined through RT-PCR. The primary cell study is still ongoing, with promising results thus far. Keywords: nanotechnology, cancer, diagnosis, therapeutics, gold nanoparticles.
Procedia PDF Downloads 94
451 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks
Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee
Abstract:
Anomalies such as leaks and bursts in water, hydraulic, and petrochemical pipeline networks have significant implications for economic conditions and the environment. In order to ensure pipeline systems are reliable, they must be efficiently controlled. Wireless sensor networks (WSNs) have become a powerful tool in critical infrastructure monitoring systems for water, oil and gas pipelines. The loss of water, oil and gas is inevitable and is strongly linked to financial costs and environmental problems, and its avoidance often leads to the saving of economic resources. Substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil and gas supply networks. These methodologies include, among others, the use of acoustic sensors, measurements, abrupt statistical analysis, etc. The issue of leak quantification is to estimate, given some observations about that network, the size and location of one or more leaks in a water pipeline network. In detecting background leakage, however, there is greater uncertainty in using these methodologies since their output is not very reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a network. The pressure data were collected with the use of acoustic sensors located at node points a predetermined distance apart. Using the correlation difference, we were able to determine the location of a leakage locally introduced at a predetermined point between two consecutive nodes, which caused a substantial pressure difference in the pipeline network. After de-noising the signals from the sensors at the nodes, we successfully obtained the exact point where we introduced the local leakage using the correlation difference model we developed. Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)
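The correlation step described above can be illustrated with a minimal sketch: the two node signals are cross-correlated, the lag of the correlation peak gives the arrival-time difference, and the leak position follows from the sensor spacing and the propagation speed. The sampling rate, spacing, wave speed, and synthetic signals below are hypothetical placeholders, not values from the study.

```python
import numpy as np

def locate_leak(sig_a, sig_b, fs, spacing, wave_speed):
    """Leak distance from sensor A, from the lag of the cross-correlation peak."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(a, b, mode="full")
    lags = np.arange(-(len(b) - 1), len(a))
    tau = lags[np.argmax(xcorr)] / fs      # t_A - t_B, arrival-time difference
    return 0.5 * (spacing + wave_speed * tau)

# Hypothetical de-noised node signals: the leak signature reaches node B 10 ms
# after node A, so the leak lies closer to A.
fs = 2000.0
rng = np.random.default_rng(0)
leak = rng.standard_normal(2000)
delay = int(0.010 * fs)
sig_a = leak + 0.1 * rng.standard_normal(2000)
sig_b = np.concatenate([np.zeros(delay), leak[:-delay]]) + 0.1 * rng.standard_normal(2000)

# With 100 m spacing and ~1200 m/s wave speed this should report roughly 44 m.
print(locate_leak(sig_a, sig_b, fs, spacing=100.0, wave_speed=1200.0))
```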
Procedia PDF Downloads 109
450 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application
Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior
Abstract:
Through speech, which privileges the functional and interactive nature of the text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, convincing, etc. These conditions allow bringing the interaction between humans closer to human-robot interaction, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of the use of neural networks for feature selection and emotion recognition was verified. For this purpose, the use of neural networks and a comparison of models, such as recurrent neural networks and deep neural networks, are proposed in order to carry out the classification of emotions through speech signals and to verify the quality of recognition. This is expected to enable the implementation of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the domestic environment. Tests were performed using only the Mel-frequency cepstral coefficients (MFCCs), as well as tests with several additional features: Delta-MFCC, spectral contrast, and the Mel spectrogram. To carry out the training, validation and testing of the neural networks, the eNTERFACE’05 database was used, which has 42 speakers from 14 different nationalities speaking the English language. The data from the chosen database are videos that, for use in the neural networks, were converted into audio. As a result, a classification accuracy of 51.969% was found when using the deep neural network, while the recurrent neural network achieved a classification accuracy of 44.09%. The results are more accurate when only the Mel-frequency cepstral coefficients are used for classification with the deep neural network, and in only one case is it possible to observe greater accuracy from the recurrent neural network, which occurs when various features are used with a batch size of 73 and 100 training epochs. Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks
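As a rough illustration of the MFCC-only configuration, the sketch below extracts MFCC statistics with librosa and trains a small fully connected network (scikit-learn's MLPClassifier as a stand-in for the paper's DNN). The synthetic waveforms, sampling rate, and layer sizes are assumptions; with the eNTERFACE'05 corpus, the loop would instead iterate over audio files extracted from the videos.

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

SR = 16000  # assumed sampling rate; real audio would come from librosa.load(path, sr=SR)

def mfcc_features(y, sr=SR, n_mfcc=13):
    """Summarize a waveform as the per-coefficient mean and std of its MFCCs."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic stand-in waveforms for two emotion classes; replace with the
# (waveform, label) pairs obtained from the converted eNTERFACE'05 audio.
rng = np.random.default_rng(0)
waves, labels = [], []
for i in range(40):
    label = i % 2
    tone = np.sin(2 * np.pi * (200 + 150 * label) * np.arange(SR) / SR)
    waves.append(tone + 0.05 * rng.standard_normal(SR))
    labels.append(label)

X = np.array([mfcc_features(w) for w in waves])
y = np.array(labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```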
Procedia PDF Downloads 170
449 Vegetable Oil-Based Anticorrosive Coatings for Metals Protection
Authors: Brindusa Balanuca, Raluca Stan, Cristina Ott, Matei Raicopol
Abstract:
The current study aims to develop anticorrosive coatings using vegetable oil (VO)-based polymers. Due to their chemical versatility, reduced costs and, more importantly, higher hydrophobicity, VOs are great candidates in the field of anti-corrosive materials. Lignin (Ln) derivatives were also used in this study in order to achieve high-performance hydrophobic anti-corrosion layers. Methods: Through a rational functionalization pathway, the selected VO (linseed oil) is converted to a more reactive monomer, methacrylated linseed oil (noted MLO). The synthesized MLO covers the metal surface in a thin layer and, through different polymerization techniques (using visible radiation or temperature, respectively) and well-established reaction conditions, is converted to a hydrophobic coating capable of protecting the metal against corrosive factors. In order to increase the anti-corrosion protection, lignin (Ln) was selected to be used together with the MLO macromonomer. Thus, superhydrophobic protective coatings will be formulated. Results: The selected synthetic strategy to convert the VO into a more reactive compound, MLO, has led to a functionalization degree greater than 80%. The obtained monomers were characterized through NMR and FT-IR by monitoring the characteristic signals after each synthesis step. Using H-NMR data, the functionalization degrees were established. VO-based and VO-Ln anti-corrosion formulations were both photochemically and thermally polymerized under specific reaction conditions (initiators, temperature range, reaction time) and were tested as anticorrosive coatings. A complete and advanced characterization of the synthesized materials will be presented in terms of thermal, mechanical and morphological properties. The anticorrosive properties were also evaluated and will be presented. Conclusions: Through the design strategy briefly presented, new composite materials for metal corrosion protection were successfully developed using natural derivatives: vegetable oils and lignin, respectively. Keywords: anticorrosion protection, hydrophobe layers, lignin, methacrylates, vegetable oil
Procedia PDF Downloads 169
448 Robotic Exoskeleton Response During Infant Physiological Knee Kinematics
Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno
Abstract:
Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used for rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on adult populations and not on the infant population. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that drove the input values for the robotic exoskeleton used motion capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of 4 kicks (A, B, C, D) were selected, and the selection was based on range, transient response, and stable response. Kicks had at least 5° of range of motion with a smooth transient response and a stable period. The robotic exoskeleton used a vacuum-powered artificial muscle (VPAM) whose structure comprised cells that were clipped in a collapsed state and unclipped when desired to simulate the infant’s age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton worked well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants suffering from spina bifida. However, the random kicking motion in this study contained high-frequency kicks, and the exoskeleton was not able to accurately replicate all the investigated kicks. Kick 'A' had a greater error when compared to the other kicks. This study has the potential to advance the infant rehabilitation field. Keywords: musculoskeletal modeling, soft robotics, rehabilitation, pediatrics
Procedia PDF Downloads 118
447 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that can use the large amount and variety of data generated during healthcare services every day; one of the significant advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to improve their performance continuously. Healthcare systems and institutions can significantly benefit because the use of advanced technologies improves the efficiency and efficacy of healthcare. Software defined as a medical device is stand-alone software that is intended to be used for patients for one or more of these specific medical intended uses: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into account the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct the device's approval, but they are necessary to ensure performance, quality, and safety. At the same time, they can be a business opportunity if the manufacturer can define the appropriate regulatory strategy in advance. The abstract will provide an overview of the current regulatory framework, the evolution of the international requirements, and the standards applicable to medical device software in potential markets all over the world. Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 88
446 Biodegradable Poly-ε-Caprolactone-Based Siloxane Polymer
Authors: Maria E. Fortună, Elena Ungureanu, Răzvan Rotaru, Valeria Harabagiu
Abstract:
Polymers are used in a variety of areas due to their unique mechanical and chemical properties. Natural polymers are biodegradable, whereas synthetic polymers are rarely biodegradable but can be modified. As a result, by combining the benefits of natural and synthetic polymers, biodegradable composite materials can be obtained with potential for biomedical and environmental applications. However, because of their strong resistance to degradation, it may be difficult to eliminate polymer waste. As a result, interest in developing biodegradable polymers has risen significantly. This research involves obtaining and characterizing two biodegradable poly-ε-caprolactone-polydimethylsiloxane copolymers. A comparative study was conducted using an aminopropyl-terminated polydimethylsiloxane macroinitiator with two distinct molecular weights. The copolymers were obtained by ring-opening polymerization of ε-caprolactone in the presence of aminopropyl-terminated polydimethylsiloxane as initiator and comonomer and stannous 2-ethylhexanoate as catalyst. The materials were characterized using a number of techniques, including NMR, FTIR, EDX, SEM, AFM, and DSC. Additionally, the water contact angle and water vapor sorption capacity were assessed. Furthermore, the copolymers were examined for environmental susceptibility by conducting biological tests on tomato plants (Lycopersicon esculentum), with an emphasis on biological stability and metabolism. Following the copolymer's degradation, the nitrogen dynamics undergo evolutionary alterations, validating the progression of the process accompanied by the liberation of organic nitrogen. The biological tests performed (germination index, average seedling height, green and dry biomass) on Lycopersicon esculentum San Marzano variety tomato plants in direct contact with the copolymer indicated normal growth and development, suggesting a minimal toxic effect and, by extension, compatibility of the copolymer with the environment. The total chlorophyll concentration of plant leaves in contact with the copolymers was determined, considering the pigment's critical role in photosynthesis and, implicitly, in plant metabolism and physiological state. Keywords: biodegradable, biological stability, copolymers, polydimethylsiloxane
Procedia PDF Downloads 22
445 Use of Giant Magneto Resistance Sensors to Detect Micron to Submicron Biologic Objects
Authors: Manon Giraud, Francois-Damien Delapierre, Guenaelle Jasmin-Lebras, Cecile Feraudet-Tarisse, Stephanie Simon, Claude Fermon
Abstract:
Early diagnosis or detection of harmful substances at low levels is a growing field of high interest. The ideal test should be cheap, easy to use, quick, reliable, specific, and have a very low detection limit. By combining the high specificity of antibody-functionalized magnetic beads used to immuno-capture biologic objects with the high sensitivity of GMR-based sensors, it is possible to detect these biologic objects one by one, such as a cancerous cell, a bacterium or a disease biomarker. The simplicity of the detection process makes its use possible even for untrained staff. Giant magnetoresistance (GMR) is a recently discovered effect consisting of a modification of the electrical resistance of some conductive layers when exposed to a magnetic field. This effect allows the detection of very low variations of magnetic field (typically a few tens of nanotesla). Magnetic nanobeads coated with antibodies targeting the analytes are mixed with a biological sample (blood, saliva) and incubated for 45 min. The mixture is then injected into a very simple microfluidic chip and circulates above a GMR sensor that detects changes in the surrounding magnetic field. Magnetic particles alone do not create a field sufficient to be detected. Therefore, only the biological objects surrounded by several antibody-functionalized magnetic beads (that have been captured by the complementary antigens) are detected when they move above the sensor. Proof of concept has been carried out on NS1 mouse cancerous cells diluted in PBS, which were bound to 200 nm magnetic particles. Signals were detected in cell-containing samples, while none were recorded for negative controls. A binary response was hence assessed for this first biological model. The precise quantification of the analytes and their detection in highly diluted solutions are the steps now in progress. Keywords: early diagnosis, giant magnetoresistance, lab-on-a-chip, submicron particle
Procedia PDF Downloads 248
444 Exoskeleton Response During Infant Physiological Knee Kinematics and Dynamics
Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno
Abstract:
Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used for rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on adult populations and not on the infant population. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that drove the input values for the robotic exoskeleton used motion capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of 4 kicks (A, B, C, D) were selected, and the selection was based on range, transient response, and stable response. Kicks had at least 5° of range of motion with a smooth transient response and a stable period. The robotic exoskeleton used a vacuum-powered artificial muscle (VPAM) whose structure comprised cells that were clipped in a collapsed state and unclipped when desired to simulate the infant’s age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton worked well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants suffering from spina bifida. However, the random kicking motion in this study contained high-frequency kicks, and the exoskeleton was not able to accurately replicate all the investigated kicks. Kick 'A' had a greater error when compared to the other kicks. This study has the potential to advance the infant rehabilitation field. Keywords: musculoskeletal modeling, soft robotics, rehabilitation, pediatrics
Procedia PDF Downloads 83
443 Impact of CYP3A5 Polymorphism on Tacrolimus to Predict the Optimal Initial Dose Requirements in South Indian Renal Transplant Recipients
Authors: S. Sreeja, Radhakrishnan R. Nair, Noble Gracious, Sreeja S. Nair, M. Radhakrishna Pillai
Abstract:
Background: Tacrolimus is a potent immunosuppressant clinically used for the long-term prevention of rejection of transplanted organs in liver and kidney transplant recipients, though dose optimization is poorly managed. So far, however, no study has been carried out on South Indian kidney transplant patients. The objective of this study is to evaluate the potential influence of a functional polymorphism in the CYP3A5 gene (CYP3A5*3) on the tacrolimus availability/dose ratio in South Indian renal transplant patients. Materials and Methods: Twenty-five renal transplant recipients receiving tacrolimus were enrolled in this study. Their body weight, drug dosage, and therapeutic concentration of tacrolimus were recorded. All patients were on a standard immunosuppressive regimen of tacrolimus-mycophenolate mofetil along with steroids, at a starting dose of tacrolimus 0.1 mg/kg/day. CYP3A5 genotyping was performed by PCR followed by RFLP. Confirmation of the RFLP analysis and variation in the nucleotide sequence of the CYP3A5*3 gene were determined by direct sequencing using a validated automated genetic analyzer. Results: A significant association was found between the dose-adjusted tacrolimus concentration (per mg/kg/day) and the CYP3A5 (A6986G) polymorphism in the study population. The CYP3A5 *1/*1, *1/*3 and *3/*3 genotypes were detected in 5 (20%), 5 (20%) and 15 (60%) of the 25 graft recipients, respectively. CYP3A5*3 genotypes were found to be a good predictor of the tacrolimus concentration/dose ratio in kidney transplant recipients. A significantly higher level/dose (L/D) ratio was observed among non-expressors of CYP3A5, 9.483 ng/mL (4.5-14.1), as compared with the expressors, 5.154 ng/mL (4.42-6.5). Acute rejection episodes were significantly higher for CYP3A5*1 homozygotes compared to patients with CYP3A5*1/*3 and CYP3A5*3/*3 genotypes (40% versus 20% and 13%, respectively). The dose-normalized tacrolimus concentration (ng/ml/mg/kg) was significantly lower in patients carrying the CYP3A5*1/*3 polymorphism. Conclusion: This is the first study to extensively determine the effect of the CYP3A5*3 genetic polymorphism on tacrolimus pharmacokinetics in South Indian renal transplant recipients, and it also shows that the majority of our patients carry the mutant allele A6986G in the CYP3A5*3 gene. Identification of the CYP3A5 polymorphism prior to transplantation could help determine the appropriate initial dosage of tacrolimus for each patient. Keywords: kidney transplant patients, CYP3A5 genotype, tacrolimus, RFLP
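The dose normalization referred to above (trough level divided by the weight-based daily dose) can be sketched as follows; the records, values, and genotype grouping below are invented for illustration and are not the study data.

```python
import pandas as pd

# Hypothetical records: trough level (ng/mL), daily dose (mg), weight (kg), genotype
df = pd.DataFrame({
    "trough_ng_ml": [6.1, 8.4, 9.9, 5.2, 10.7],
    "dose_mg_day":  [6.0, 4.0, 3.0, 7.0, 3.5],
    "weight_kg":    [60,  55,  70,  65,  58],
    "genotype":     ["*1/*1", "*1/*3", "*3/*3", "*1/*1", "*3/*3"],
})

# Level/dose ratio, normalized to the weight-based dose (mg/kg/day)
df["l_d_ratio"] = df["trough_ng_ml"] / (df["dose_mg_day"] / df["weight_kg"])

# Expressors (*1/*1, *1/*3) versus non-expressors (*3/*3)
df["expressor"] = df["genotype"].isin(["*1/*1", "*1/*3"])
print(df.groupby("expressor")["l_d_ratio"].describe())
```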
Procedia PDF Downloads 301
442 Comparative Study of sLASER and PRESS Techniques in Magnetic Resonance Spectroscopy of Normal Brain
Authors: Shin Ku Kim, Yun Ah Oh, Eun Hee Seo, Chang Min Dae, Yun Jung Bae
Abstract:
Objectives: The commonly used PRESS technique in magnetic resonance spectroscopy (MRS) has the limitation of incomplete water suppression. The recently developed sLASER technique is known for its improved effectiveness in suppressing the water signal. However, no prior study has compared both sequences in the normal human brain. In this study, we aimed to compare the performance of both techniques in brain MRS for the first time. Materials and Methods: From January 2023 to July 2023, thirty healthy participants (mean age 38 years, 17 male, 13 female) without underlying neurological diseases were enrolled in this study. All participants underwent single-voxel MRS using both the PRESS and sLASER techniques on 3T MRI. Two regions of interest were placed in the left medial thalamus and left parietal white matter (WM) by a single reader. SpectroView Analysis (SW5, Philips, Netherlands) provided automatic measurements, including the signal-to-noise ratio (SNR) and peak height of water, N-acetylaspartate (NAA)-water/choline (Cho)-water/creatine (Cr)-water ratios, and NAA-Cr/Cho-Cr ratios. The measurements from the PRESS and sLASER techniques were compared using paired t-tests and Bland-Altman methods, and the variability was assessed using coefficients of variation (CV). Results: The SNR and peak height of water were significantly lower with sLASER compared to PRESS (left medial thalamus: sLASER SNR/peak height 2092±475/328±85 vs. PRESS 2811±549/440±105; left parietal WM: 5422±1016/872±196 vs. 7152±1305/1150±278; all P<0.001). Accordingly, the NAA-water/Cho-water/Cr-water ratios and NAA-Cr/Cho-Cr ratios were significantly higher with sLASER than with PRESS (all P<0.001). The variabilities of the NAA-water/Cho-water/Cr-water ratios and the Cho-Cr ratio in the left medial thalamus were lower with sLASER than with PRESS (CV, sLASER vs. PRESS: 19.9 vs. 58.1, 19.8 vs. 54.7, 20.5 vs. 43.9, and 11.5 vs. 16.2). Conclusion: The sLASER technique demonstrated enhanced background water suppression, resulting in increased signals and reduced variability in brain metabolite measurements by MRS. Therefore, sLASER could offer a more precise and stable method for identifying brain metabolites. Keywords: magnetic resonance spectroscopy, brain, sLASER, PRESS
Procedia PDF Downloads 46
441 Wearable Antenna for Diagnosis of Parkinson’s Disease Using a Deep Learning Pipeline on Accelerated Hardware
Authors: Subham Ghosh, Banani Basu, Marami Das
Abstract:
Background: The development of compact, low-power antenna sensors has resulted in hardware restructuring, allowing for wireless ubiquitous sensing. Antenna sensors can create wireless body-area networks (WBANs) by linking various wireless nodes across the human body. WBAN and IoT applications, such as remote health and fitness monitoring and rehabilitation, are becoming increasingly important. In particular, Parkinson’s disease (PD), a common neurodegenerative disorder, presents clinical features that can be easily misdiagnosed. As a mobility disease, its diagnosis may greatly benefit from the antenna’s near-field approach combined with a variety of activities that can use WBAN and IoT technologies to increase diagnostic accuracy and improve patient monitoring. Methodology: This study investigates the feasibility of leveraging a single patch antenna mounted (using cloth) on the dorsal side of the wrist to differentiate actual Parkinson's disease (PD) from false PD using a small hardware platform. The semi-flexible antenna operates in the 2.4 GHz ISM band and collects reflection coefficient (Γ) data from patients performing five exercises designed for the classification of PD and other disorders such as essential tremor (ET) or physiological disorders caused by anxiety or stress. The obtained data are normalized and converted into 2-D representations using the Gabor wavelet transform (GWT). Data augmentation is then used to expand the dataset size. A lightweight deep-learning (DL) model is developed to run on the GPU-enabled NVIDIA Jetson Nano platform. The DL model processes the 2-D images for feature extraction and classification. Findings: The DL model was trained and tested on both the original and augmented datasets, thus doubling the dataset size. To ensure robustness, a 5-fold stratified cross-validation (5-FSCV) method was used. The proposed framework, utilizing a DL model with 1.356 million parameters on the NVIDIA Jetson Nano, achieved optimal performance with an accuracy of 88.64%, an F1-score of 88.54%, and a recall of 90.46%, with a latency of 33 seconds per epoch. Keywords: antenna, deep-learning, GPU-hardware, Parkinson’s disease
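A minimal sketch of the described pipeline is given below: each 1-D reflection-coefficient trace is converted to a 2-D scalogram with a complex Morlet (Gabor-like) wavelet, and a small CNN is evaluated with 5-fold stratified cross-validation. The wavelet name, scales, network layers, and the random stand-in data are assumptions; the actual 1.356-million-parameter model and the augmentation step are not reproduced here.

```python
import numpy as np
import pywt
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

def to_scalogram(trace, scales=np.arange(1, 33), wavelet="cmor1.5-1.0"):
    """Complex Morlet (Gabor-like) CWT of a 1-D reflection-coefficient trace,
    returned as a normalized 2-D magnitude image."""
    coeffs, _ = pywt.cwt(trace, scales, wavelet)
    img = np.abs(coeffs)
    return (img - img.min()) / (img.max() - img.min() + 1e-9)

def small_cnn(input_shape, n_classes=2):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Hypothetical traces (n_samples, n_points) standing in for the measured Γ data;
# labels: 0 = non-PD (e.g., ET/stress), 1 = PD
traces = np.random.rand(40, 256)
labels = np.tile([0, 1], 20)

X = np.stack([to_scalogram(t) for t in traces])[..., np.newaxis]
y = labels

accs = []
for tr_idx, te_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = small_cnn(X.shape[1:])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X[tr_idx], y[tr_idx], epochs=5, verbose=0)
    accs.append(model.evaluate(X[te_idx], y[te_idx], verbose=0)[1])
print("mean CV accuracy:", np.mean(accs))
```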
Procedia PDF Downloads 7
440 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis
Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn
Abstract:
Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains precious information that has not been explored to its full capacity. Novel processing techniques allow looking at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude measured through Shannon entropy could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of the signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before and after Ibutilide IEGMs that were recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain, the PDF of the amplitudes was fitted to a Gaussian distribution, while in the frequency domain, it was fitted to a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs had significantly narrower, shorter-tailed PDFs in both the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide present significantly different properties, both in the time and frequency domains. Hence, by fitting the PDF of the IEGMs in the time domain to a Gaussian distribution or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of the PDF (e.g., standard deviation), while this is difficult through the waveform of the IEGMs itself. Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics
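The distribution fitting described above can be sketched with SciPy: a Gaussian is fitted to the amplitude samples in the time domain and a Rayleigh distribution to the spectral magnitudes in the frequency domain, and the fitted scale parameters serve as the tracking statistics. The synthetic traces below only mimic the reported narrowing and are not patient data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical IEGM amplitude traces (arbitrary units), before and after Ibutilide;
# the narrower spread after the drug mimics the reported narrowing of the PDFs.
iegm_before = rng.normal(0.0, 1.0, 4000)
iegm_after = rng.normal(0.0, 0.4, 4000)

for name, sig in [("before", iegm_before), ("after", iegm_after)]:
    # Time domain: fit a Gaussian to the amplitude distribution
    mu, sigma = stats.norm.fit(sig)
    # Frequency domain: fit a Rayleigh distribution to the spectral magnitudes
    mag = np.abs(np.fft.rfft(sig))
    loc, scale = stats.rayleigh.fit(mag, floc=0)
    print(f"{name}: amplitude sigma = {sigma:.3f}, Rayleigh scale = {scale:.1f}")
```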
Procedia PDF Downloads 159
439 Biodegradable Polymeric Vesicles Containing Magnetic Nanoparticles, Quantum Dots and Anticancer Drugs for Drug Delivery and Imaging
Authors: Fei Ye, Åsa Barrefelt, Manuchehr Abedi-Valugerdi, Khalid M. Abu-Salah, Salman A. Alrokayan, Mamoun Muhammed, Moustapha Hassan
Abstract:
With appropriate encapsulation in functional nanoparticles, drugs are more stable in the physiological environment, and the kinetics of the drug can be more carefully controlled and monitored. Furthermore, targeted drug delivery can be developed to improve chemotherapy in cancer treatment, not only by enhancing intracellular uptake by target cells but also by reducing the adverse effects in non-target organs. Inorganic imaging agents, delivered together with anti-cancer drugs, enhance the local imaging contrast and provide precise diagnosis as well as evaluation of therapy efficacy. We have developed biodegradable polymeric vesicles as a nanocarrier system for multimodal bio-imaging and anticancer drug delivery. The poly(lactic-co-glycolic acid) (PLGA) vesicles were fabricated by encapsulating the inorganic imaging agents superparamagnetic iron oxide nanoparticles (SPION) and manganese-doped zinc sulfide (Mn:ZnS) quantum dots (QDs), together with the anticancer drug busulfan, into PLGA nanoparticles via an emulsion-evaporation method. T2-weighted magnetic resonance imaging (MRI) of PLGA-SPION-Mn:ZnS phantoms exhibited enhanced negative contrast with an r2 relaxivity of approximately 523 s-1 mM-1 Fe. Cellular uptake of PLGA vesicles by murine macrophages (J774A) was observed by fluorescence imaging starting at 2 h and reached maximum intensity at 24 h of incubation. The drug delivery ability of the PLGA vesicles was demonstrated in vitro by the release of busulfan. PLGA vesicle degradation was studied in vitro, showing that approximately 32% was degraded into lactic and glycolic acid over a period of 5 weeks. The biodistribution of PLGA vesicles was investigated in vivo by MRI in a rat model. A change of contrast in the liver could be visualized by MRI after 7 min, and maximal signal loss was detected 4 h post-injection of PLGA vesicles. Histological studies showed that the presence of PLGA vesicles in organs shifted from the lungs to the liver and spleen over time. Keywords: biodegradable polymers, multifunctional nanoparticles, quantum dots, anticancer drugs
Procedia PDF Downloads 472
438 Metabolically Healthy Obesity and Protective Factors of Cardiovascular Diseases as a Result from a Longitudinal Study in Tebessa (East of Algeria)
Authors: Salima Taleb, Kafila Boulaba, Ahlem Yousfi, Nada Taleb, Difallah Basma
Abstract:
Introduction: Obesity is recognized as a cardiovascular risk factor. It is associated with cardio-metabolic diseases, and its prevalence is increasing significantly in both rich and poor countries. However, there are obese people who have no metabolic disturbance, so obesity is not always accompanied by the abnormal metabolic profile that increases the risk of cardiometabolic problems. However, there is no agreed definition that allows us to identify the metabolically healthy but obese (MHO) group. Objective: The objective of this study is to evaluate the relationship between MHO and some factors associated with it. Methods: A longitudinal, prospective cohort study of 600 participants aged ≥18 years was conducted. Metabolic status was assessed by the following parameters: blood pressure, fasting glucose, total cholesterol, HDL cholesterol, LDL cholesterol, and triglycerides. Body mass index (BMI) was calculated as weight (in kg) divided by height squared (m²), BMI = Weight/(Height)². According to the BMI value, our population was divided into four groups: underweight subjects with BMI < 18.5 kg/m², normal-weight subjects with BMI = 18.5–24.9 kg/m², overweight subjects with BMI = 25–29.9 kg/m², and obese subjects with BMI ≥ 30 kg/m². A value of P < 0.05 was considered significant. Statistical processing was done using the SPSS 25 software. Results: During this study, 194 (32.33%) participants were identified as MHO among 416 (37%) obese individuals. The prevalence of the metabolically unhealthy phenotype among normal-weight individuals was 13.83%, vs. 37% in obese individuals. Compared with metabolically healthy normal-weight individuals (10.93%), the prevalence of diabetes was 30.60% in MHO, 20.59% in metabolically unhealthy normal-weight, and 52.29% in metabolically unhealthy obese individuals (p = 0.032). Blood pressure was significantly higher in MHO individuals than in metabolically healthy normal-weight individuals, and in metabolically unhealthy obese than in metabolically unhealthy normal-weight individuals (P < 0.0001). Familial coronary artery disease does not appear to have an effect on the metabolic status of obese and normal-weight patients (P = 0.544). However, waist circumference appears to have an effect on the metabolic status of individuals (P < 0.0001). Conclusion: This study showed a high prevalence of metabolic profile disruption in normal-weight subjects and a high rate of overweight and/or obese people who are metabolically healthy. To understand the physiological mechanisms related to these metabolic statuses, a thorough study is needed. Keywords: metabolically healthy, obesity, associated factors, cardiovascular diseases
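A small sketch of the BMI classification used to form the groups is given below; the example weight and height are arbitrary, and the metabolic-health criteria themselves (blood pressure, fasting glucose, lipids) are applied separately in the study and are not encoded here.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal weight"
    if value < 30.0:
        return "overweight"
    return "obese"

# Example: 95 kg and 1.72 m gives BMI ~32.1, i.e., the obese group; whether that
# person is "metabolically healthy" is decided from the metabolic parameters
# listed in the abstract, not from BMI alone.
print(bmi_category(bmi(95, 1.72)))
```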
Procedia PDF Downloads 117
437 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring
Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover
Abstract:
Measurement of the radioactive isotopes of atmospheric xenon is used to detect, locate and identify confined nuclear tests as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device to continuously measure the concentration of these fission products, the SPALAX process. During its atmospheric transport, the radioactive xenon undergoes a significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. This cell is sandwiched between two low-background NaI(Tl) detectors (70x70x40 mm³ crystals). The expected minimal detectable concentration (MDC) for each radioxenon isotope is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network using the IEEE 1588 Precision Time Protocol. We would like to present this system from its simulation to the laboratory tests. Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels
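The beta/gamma coincidence principle can be sketched as a simple timestamp-matching step between the silicon-pixel (beta) and NaI(Tl) (gamma) event streams; the coincidence window and the event times below are illustrative assumptions, not parameters of the prototype.

```python
import numpy as np

def coincidences(beta_times, gamma_times, window=1e-6):
    """Return (beta, gamma) event-time pairs that fall within the coincidence
    window (seconds). Both input arrays must be sorted."""
    idx = np.searchsorted(gamma_times, beta_times)
    pairs = []
    for t, i in zip(beta_times, idx):
        # Closest gamma candidates are the neighbours of the insertion point
        for j in (i - 1, i):
            if 0 <= j < len(gamma_times) and abs(gamma_times[j] - t) <= window:
                pairs.append((t, gamma_times[j]))
    return pairs

# Hypothetical PTP-synchronized timestamps (seconds) from the silicon pixels
# (beta) and the NaI(Tl) detectors (gamma)
beta = np.array([0.10, 0.25, 0.40])
gamma = np.array([0.1000004, 0.30, 0.4000009])
print(coincidences(beta, gamma))  # the events near 0.10 s and 0.40 s coincide
```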
Procedia PDF Downloads 126
436 Free Fibular Flaps in Management of Sternal Dehiscence
Authors: H. N. Alyaseen, S. E. Alalawi, T. Cordoba, É. Delisle, C. Cordoba, A. Odobescu
Abstract:
Sternal dehiscence is defined as the persistent separation of the sternal bones, often complicated by mediastinitis. The etiologies that lead to sternal dehiscence vary, with cardiovascular and thoracic surgeries being the most common. Early diagnosis in susceptible patients is crucial to the management of such cases, as they are associated with high mortality rates. A recent meta-analysis of more than four hundred thousand patients concluded that deep sternal wound infections were the leading cause of mortality and morbidity in patients undergoing cardiac procedures. Long-term complications associated with sternal dehiscence include increased hospitalizations, cardiac infarctions, and renal and respiratory failure. Numerous osteosynthesis methods have been described in the literature. Surgical materials offer enough rigidity to support the sternum and can be flexible enough to allow physiological breathing movements of the chest; however, these materials fall short when managing patients with extensive bone loss, osteopenia, or generally poor bone quality. For such cases, flaps offer a better closure system. Early utilization of flaps yields better survival rates compared to delayed closure or treatment with sternal rewiring and closed drainage. The use of pectoralis major, rectus abdominis, and latissimus dorsi muscle flaps has been described in the literature as a great alternative. Flap selection depends on a variety of factors, mainly the size of the sternal defect, infection, and the availability of local tissues. Free fibular flaps are commonly harvested flaps utilized in reconstruction around the body. In cases of sternal reconstruction with free fibular flaps, the literature has exclusively discussed the flap applied vertically to the chest wall. We present a different technique applying a free fibular triple-barrel flap oriented transversely, parallel to the ribs. In our experience, this method could enhance results and improve prognosis, as it contributes to the normal circumferential shape of the chest wall. Keywords: sternal dehiscence, management, free fibular flaps, novel surgical techniques
Procedia PDF Downloads 93
435 Depolymerised Natural Polysaccharides Enhance the Production of Medicinal and Aromatic Plants and Their Active Constituents
Authors: M. Masroor Akhtar Khan, Moin Uddin, Lalit Varshney
Abstract:
Recently, there has been rapidly expanding interest in finding applications of natural polymers with a view to adding value to agriculture. It is now being realized that radiation processing of natural polysaccharides can be beneficially utilized either to improve the existing methodologies used for processing natural polymers or to impart value to agriculture by converting them into a more useful form. Gamma-ray irradiation is employed to degrade and lower the molecular weight of some natural polysaccharides, such as alginates, chitosan and carrageenan, into small-sized oligomers. When these oligomers are applied to plants as foliar sprays, they elicit various kinds of biological and physiological activities, including promotion of plant growth, seed germination, shoot elongation, root growth and flower production, and suppression of heavy metal stress. Furthermore, application of these oligomers can shorten the harvesting period of various crops and help in reducing the use of insecticides and chemical fertilizers. In recent years, oligomers of sodium alginate obtained by irradiating the polymer with gamma rays at a dose of 520 kGy have been employed. It was noticed that the oligomers derived from the natural polysaccharides could enhance growth, photosynthetic efficiency, enzyme activities and, most importantly, the production of secondary metabolites in plants such as Artemisia annua, Beta vulgaris, Catharanthus roseus, Chrysopogon zizanioides, Cymbopogon flexuosus, Eucalyptus citriodora, Foeniculum vulgare, Geranium sp., Mentha arvensis, Mentha citrata, Mentha piperita, Mentha virdis, Papaver somniferum and Trigonella foenum-graecum. As a result of the application of these oligomers, the yield and/or contents of the active constituents of the aforesaid plants were significantly enhanced. The productivity as well as the quality of medicinal and aromatic plants may be ameliorated by this novel technique in an economical way, as only a very small quantity of these irradiated (depolymerised) polysaccharides is needed. Further, this is a very safe technique, as the plants themselves are not exposed directly to radiation; the radiation is used only to depolymerize the polysaccharides into oligomers. Keywords: essential oil, medicinal and aromatic plants, plant production, radiation processed polysaccharides, active constituents
Procedia PDF Downloads 443
434 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver
Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto
Abstract:
The Earth's ionosphere is located at altitudes from about 70 km to several hundred km above the ground and is composed of ions and electrons, i.e., plasma. In the ionosphere, this plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere have long been used to investigate such ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are basically built for land survey, has been conducted in several countries. However, in these stations, multi-frequency receivers are installed to estimate the effect of the plasma delay using its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variations such as the vertical TEC distribution. In this measurement, a single-frequency ublox GPS receiver was used to probe the ionospheric TEC. The location of observation was Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location under the assumption of a thin-layer ionosphere. The validity of the method was evaluated using measurements obtained by the Japanese GNSS observation network called GEONET. The performance of the single-frequency GPS receiver measurements was compared with the results of dual-frequency measurements. Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC
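A minimal sketch of the least-squares step is shown below, assuming that vertical TEC values and ionospheric pierce-point coordinates have already been derived from the pseudorange data through the thin-layer mapping; the second-order polynomial form and the synthetic observations are assumptions for illustration.

```python
import numpy as np

def fit_vtec_poly(lat, lon, vtec):
    """Fit VTEC(lat, lon) with a 2nd-order polynomial by least squares.
    lat/lon are pierce-point coordinates (degrees), vtec in TECU."""
    A = np.column_stack([np.ones_like(lat), lat, lon,
                         lat * lon, lat ** 2, lon ** 2])
    coeffs, *_ = np.linalg.lstsq(A, vtec, rcond=None)
    return coeffs

def eval_vtec(coeffs, lat, lon):
    return (coeffs[0] + coeffs[1] * lat + coeffs[2] * lon +
            coeffs[3] * lat * lon + coeffs[4] * lat ** 2 + coeffs[5] * lon ** 2)

# Hypothetical pierce points and vertical TEC values standing in for the
# quantities obtained from the single-frequency pseudorange observations
rng = np.random.default_rng(1)
lat = rng.uniform(15, 25, 50)    # degrees
lon = rng.uniform(92, 100, 50)   # degrees
vtec = 30 + 0.5 * (lat - 20) - 0.3 * (lon - 96) + rng.normal(0, 0.5, 50)

coeffs = fit_vtec_poly(lat, lon, vtec)
print("VTEC near Mandalay (21.98N, 96.08E):", eval_vtec(coeffs, 21.98, 96.08))
```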
Procedia PDF Downloads 137
433 Beneficial Effect of Micropropagation Coupled with Mycorrhization on Enhancement of Growth Performance of Medicinal Plants
Authors: D. H. Tejavathi
Abstract:
Medicinal plants are globally valuable sources of herbal products. Wild populations of many medicinal plants are facing the threat of extinction because of their narrow distribution, endemicity, and the degradation of specific habitats. Micropropagation is an established in vitro technique by which a large number of clones can be obtained from a small explant in a short span of time within a limited space. Mycorrhization can minimize the transient transplantation shock experienced by micropropagated plants when they are transferred from lab to land. AM fungal association improves the physiological status of the host plants through better uptake of water and nutrients, particularly phosphorus. Consequently, the growth performance and biosynthesis of active principles are significantly enhanced in AM fungal treated plants. Bacopa monnieri, Andrographis paniculata, Agave vera-cruz, Drymaria cordata and Majorana hortensis, important medicinal plants used in various indigenous systems of medicine, were selected for the present study. They form the main constituents of many herbal formulations. Standard in vitro techniques were followed to obtain the micropropagated plants. Shoot tips and nodal segments were used as explants. Explants were cultured on Murashige and Skoog, and Phillips and Collins media supplemented with various combinations of growth regulators. Multiple shoots were obtained on media containing both auxins and cytokinins at various concentrations and combinations. Multiple shoots were then transferred to rooting media containing auxins for root induction. The in vitro regenerated plants thus obtained were subjected to brief acclimatization before transferring them to land. One-month-old in vitro plants were treated with AM fungi, and the symbiotic effect on the overall growth parameters was analyzed. It was found that micropropagation coupled with mycorrhization has a significant effect on the enhancement of biomass and the biosynthesis of active principles in these selected medicinal plants. In vitro techniques coupled with mycorrhization have opened the possibility of obtaining better clones with respect to enhancement of biomass and biosynthesis of active principles. The beneficial effects of AM fungal association with medicinal plants are discussed. Keywords: cultivation, medicinal plants, micropropagation, mycorrhization
Procedia PDF Downloads 171
432 Analysis of Cell Cycle Status in Radiation Non-Targeted Hepatoma Cells Using Flow Cytometry: Evidence of Dose Dependent Response
Authors: Sharmi Mukherjee, Anindita Chakraborty
Abstract:
Cellular irradiation incites complex responses, including arrest of cell cycle progression. This article examines the effects of radiation on the cell cycle status of radiation non-targeted cells. Human hepatoma HepG2 cells were exposed to increasing doses of γ radiation (1, 2, 4, 6 Gy), and their cell culture media was transferred to non-targeted HepG2 cells cultured in other Petri plates. These radiation non-targeted cells cultured in the ICCM (irradiated cell conditioned media) were the bystander cells on which cell cycle analysis was performed using flow cytometry. An apparent decrease in the distribution of bystander cells in G0/G1 phase was observed with increased radiation doses up to 4 Gy, representing a linear relationship. This was accompanied by a gradual increase in the cellular distribution in G2/M phase. Interestingly, the numbers of cells in G2/M phase at 1 and 2 Gy irradiation were not significantly different from each other. However, the percentages of G2 phase cells at the 4 and 6 Gy doses were significantly higher than at the 2 Gy dose, indicating the IC50 dose to be between 2 and 4 Gy. Cell cycle arrest is an indirect indicator of genotoxic damage in cells. In this study, bystander stress signals transmitted through the cell culture media of irradiated cells disseminated the radiation-induced DNA damage to the non-targeted cells, which resulted in arrest of cell cycle progression at the G2/M checkpoint. This implies that the actual radiobiological effects represent a penumbra, with effects encompassing a larger area than the actual beam. This article highlights the existence of genotoxic damage as a bystander effect of γ rays in human hepatoma cells by cell cycle analysis and opens up avenues for the appraisal of bystander stress communication between tumor cells. An understanding of the underlying signaling mechanisms can be exploited to maximize the damaging effects of radiation with a minimum dose and thus has therapeutic applications. Keywords: bystander effect, cell cycle, genotoxic damage, hepatoma
Procedia PDF Downloads 184
431 Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control
Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin
Abstract:
Preliminary tests have been designed and performed on a space-time control circuit using a two-level system circuit with a 4-5 cm diameter microstrip for realistic teleportation. It begins by calculating the parameters that allow a circuit to use alternating current (AC) at a specified frequency as the input signal. A method that causes electrons to move along the circuit perimeter starting at the speed of light was found satisfactory based on wave-particle duality. It is able to establish a superluminal (faster than light) speed for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. In fact, both black holes and white holes are created from time signals at the beginning, where the speed of the electrons is close to the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. Therefore, we can apply this method to large-scale circuits such as potassium, from which the same method can be applied to form a system to teleport living things. In fact, the black hole is a hibernation system environment that allows living things to live and travel to the destination of the teleportation, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, it increases the frequency of the black holes and white holes, which cancel each other out to a balanced environment. Therefore, life can safely teleport to the destination. Thus, there must be the same system at the origin and the destination, which could be a network. Moreover, it can also be applied to space travel. The designed system will be tested on a small scale using a microstrip circuit system that we can create in the laboratory on a limited budget and that can be used in both wired and wireless systems. Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics
Procedia PDF Downloads 75
430 Climate Species Lists: A Combination of Methods for Urban Areas
Authors: Andrea Gion Saluz, Tal Hertig, Axel Heinrich, Stefan Stevanovic
Abstract:
Higher temperatures, seasonal changes in precipitation, and extreme weather events are increasingly affecting trees. To counteract the increasing challenges facing urban trees, strategies are being sought to preserve existing tree populations on the one hand and to prepare for the coming years on the other. One such strategy lies in strategic climate tree species selection. The search is on for species or varieties that can cope with the new climatic conditions. Many efforts in German-speaking countries deal with this in detail, such as the tree lists of the German Conference of Garden Authorities (GALK), the project Stadtgrün 2021, or the instruments of the Climate Species Matrix by Prof. Dr. Roloff. In this context, different methods for a correct species selection are offered. One possibility is to select certain physiological attributes that indicate the climate resilience of a species. To calculate the dissimilarity of the present climate of different geographic regions in relation to the future climate of a given city, a weighted (standardized) Euclidean distance (SED) over seasonal climate values is calculated for each region of the Earth. The calculation was performed in the QGIS geographic information system, using global raster datasets of monthly climate values for the 1981-2010 standard period. Data from a European forest inventory were used to identify tree species growing in the calculated analogue climate regions. The inventory used is a compilation of georeferenced point data at 1 km grid resolution on the occurrence of tree species in 21 European countries. In this project, the results of the methodological application are shown for the city of Zurich for the year 2060. In the first step, analogue climate regions were identified based on projected climate values for the measuring station Kirche Fluntern (ZH). In a further step, the methods mentioned above were applied to generate tree species lists for the city of Zurich. These lists were then qualitatively evaluated with respect to the suitability of the different tree species for the Zurich area to generate a cleaned and thus usable list of possible future tree species. Keywords: climate change, climate region, climate tree, urban tree
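The weighted, standardized Euclidean distance can be sketched as follows; the seasonal climate vectors, standard deviations, and weights are invented for illustration and do not come from the raster datasets used in the project.

```python
import numpy as np

def weighted_sed(target, candidates, std, weights=None):
    """Weighted, standardized Euclidean distance between one target climate
    vector and many candidate regions (rows of `candidates`)."""
    if weights is None:
        weights = np.ones(target.shape[0])
    z = (candidates - target) / std          # standardize each climate variable
    return np.sqrt((weights * z ** 2).sum(axis=1))

# Hypothetical seasonal climate vectors: [T_spring, T_summer, T_autumn, T_winter,
# P_spring, P_summer, P_autumn, P_winter] for Zurich 2060 and two candidate regions
zurich_2060 = np.array([11.5, 21.0, 11.0, 1.5, 260, 300, 240, 190])
regions = np.array([
    [11.0, 22.0, 11.5, 2.0, 250, 280, 230, 200],   # candidate A
    [14.0, 25.5, 14.5, 5.0, 180, 120, 170, 160],   # candidate B
])
std = np.array([1.5, 1.8, 1.5, 1.8, 60, 80, 60, 50])   # inter-regional std. dev.

print(weighted_sed(zurich_2060, regions, std))  # smaller = more climate-analogue
```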
Procedia PDF Downloads 106
429 Physiological and Psychological Influence on Office Workers during Demand Response
Authors: Megumi Nishida, Naoya Motegi, Takurou Kikuchi, Tomoko Tokumura
Abstract:
In recent years, the power system has been changing, and flexible power pricing schemes such as demand response have been sought in Japan. The demand response system is simple in the household sector, where the owner, as the decision-maker, can gain the benefits of power saving. On the other hand, the execution of demand response in office buildings is more complex than in households because various people, such as owners, building administrators and occupants, are involved in making decisions. While the owners benefit from the demand saving, the occupants are forced to be exposed to the demand-saved environment without certain benefits. One of the reasons is that building systems are usually centrally controlled, so each occupant cannot choose whether or not to participate in a demand response event, and the contribution of each occupant to the demand response is unclear, making it difficult to provide incentives. However, the recent development of IT and building systems enables personalized control of the office environment, where each occupant can control the lighting level or temperature around him or herself. Therefore, it becomes possible to have a system in which each occupant can make a decision on demand response participation in an office building. This study investigates personal behavior upon demand response requests under the condition that each occupant can adjust the brightness individually in their workspace. Once workers participate in the demand response, their task lights are automatically turned off. The participation rates in the demand response events are compared between four groups, which are divided by different motivations: the presence or absence of incentives and the way of participation. The results show that there are significant differences in participation rates in demand response events between the four groups. The way of participation has a large effect on the participation rate. The 'opt-out' group, in which the occupants are automatically enrolled in a demand response event if they do not express non-participation, has the highest participation rate of the four groups. The incentive also has an effect on the participation rate. This study also reports the impact of the low-illumination office environment on the occupants, such as stress or fatigue. Electrocardiograms and questionnaires are used to investigate the autonomic nervous activity and subjective symptoms of fatigue of the occupants. There is no large difference in autonomic nervous activity or fatigue between the dim workspace during a demand response event and the bright workspace. Keywords: demand response, illumination, questionnaire, electrocardiogram
Procedia PDF Downloads 351428 A Cooperative Signaling Scheme for Global Navigation Satellite Systems
Authors: Keunhong Chae, Seokho Yoon
Abstract:
Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, thus calling for a more efficient signaling scheme among the satellites used in the overall GNSS network. Spatial diversity can be an efficient signaling scheme in that it improves the network throughput; however, it requires multiple antennas, which could cause a significant increase in the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, where virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate in an asynchronous way, and thus the overall performance of the GNSS network could degrade severely. To tackle the problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement due to signal decoding at the relay nodes. Although the implementation at the relay nodes could be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient if the operations of the relay nodes could be implemented at the source node, which has more resources than the relay nodes. So, in this paper, we propose a novel cooperative signaling scheme, where the data signals are combined in a unique way at the source node, thus obviating the need for complex operations such as signal decoding, time-reversal and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while reducing the complexity at the relay nodes significantly. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
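For context, a minimal sketch of the relay-side time-reversal and conjugation operations mentioned in the abstract is shown below; it illustrates only those generic block operations and is not the authors' proposed source-node combining scheme. The block length and QPSK modulation are assumptions for demonstration.

```python
import numpy as np

def relay_process(block: np.ndarray) -> np.ndarray:
    """Time-reverse and complex-conjugate a received signal block, the
    lower-complexity relay operations mentioned as an alternative to decoding."""
    return np.conj(block[::-1])

# Illustrative QPSK data block (hypothetical parameters).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(8, 2))
data = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

forwarded = relay_process(data)
print(forwarded)
```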
Procedia PDF Downloads 280427 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials
Authors: Sheikh Omar Sillah
Abstract:
Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors but not optimal in identifying fabricated and implanted data as well as non-random data distributions that may significantly invalidate study results. The objective of this paper was to provide recommendations based on best statistical monitoring practices for detecting data-integrity issues suggestive of fabrication and implantation early in the study conduct, to allow implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into Eppi-Reviewer Software, and only publications written in English from 2012 onwards were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference range values, visit schedules, etc., with appropriate interactive data monitoring, statistical monitoring can offer early signals of data anomalies to study teams. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues affecting data integrity or potentially compromising study participants' safety. However, subjective measures may not be good candidates for statistical monitoring. Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies for the statistical aspects of a clinical trial.Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring
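To make the idea of project-specific statistical checks concrete, here is a minimal, hypothetical sketch of one such anomaly screen: flagging sites whose laboratory values show implausibly low variability, a pattern sometimes associated with fabricated or implanted data. The data layout, simulated values, and threshold are assumptions for illustration and are not drawn from the reviewed publications.

```python
import numpy as np
import pandas as pd

def flag_low_variance_sites(df: pd.DataFrame, value_col: str = "lab_value",
                            site_col: str = "site", ratio: float = 0.5) -> pd.Series:
    """Return sites whose within-site standard deviation is below `ratio`
    times the median standard deviation across all sites."""
    site_sd = df.groupby(site_col)[value_col].std()
    return site_sd[site_sd < ratio * site_sd.median()]

# Simulated data: site "S3" reports implausibly uniform laboratory values.
rng = np.random.default_rng(1)
records = pd.DataFrame({
    "site": np.repeat(["S1", "S2", "S3", "S4", "S5"], 50),
    "lab_value": np.concatenate([
        rng.normal(100, 10, 50), rng.normal(100, 11, 50),
        rng.normal(100, 0.5, 50),   # suspiciously narrow spread
        rng.normal(100, 9, 50), rng.normal(100, 12, 50),
    ]),
})
print(flag_low_variance_sites(records))  # expected to flag S3
```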
Procedia PDF Downloads 75426 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning
Authors: T. Bryan, V. Kepuska, I. Kostnaic
Abstract:
A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” learned from the audio data itself. The basis vectors are shown to provide higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted by performing atomic decomposition of the audio data with Gabor or gammatone seed atoms via matching pursuit, which identifies segments of the audio data that are locally coherent with the seed atoms. The envelope samples are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as for early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for both speech and music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the basis vectors with the highest denoising capability.Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit
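For readers unfamiliar with the decomposition step, the following is a minimal sketch of matching pursuit with a small Gabor-style dictionary, the greedy procedure used to find signal content locally coherent with seed atoms. The dictionary size, atom parameters, and stopping rule are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

# Minimal matching pursuit with a Gabor-style dictionary (illustrative only).

def gabor_atom(n: int, freq: float, sigma: float) -> np.ndarray:
    """Unit-norm Gaussian-windowed cosine atom of length n."""
    t = np.arange(n) - n / 2
    atom = np.exp(-(t / sigma) ** 2) * np.cos(2 * np.pi * freq * t)
    return atom / np.linalg.norm(atom)

def matching_pursuit(signal: np.ndarray, atoms: np.ndarray, n_iter: int = 10):
    """Greedy decomposition: repeatedly pick the atom most correlated with the residual."""
    residual = signal.copy()
    decomposition = []
    for _ in range(n_iter):
        correlations = atoms @ residual
        k = int(np.argmax(np.abs(correlations)))
        coeff = correlations[k]
        residual = residual - coeff * atoms[k]
        decomposition.append((k, coeff))
    return decomposition, residual

n = 256
dictionary = np.stack([gabor_atom(n, f, 32.0) for f in np.linspace(0.01, 0.2, 40)])
# Synthetic test signal: two dictionary atoms plus a little noise.
test_signal = (0.8 * dictionary[5] + 0.4 * dictionary[20]
               + 0.05 * np.random.default_rng(0).normal(size=n))
picked, res = matching_pursuit(test_signal, dictionary, n_iter=5)
print(picked[:2], float(np.linalg.norm(res)))  # the two planted atoms should be found first
```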
Procedia PDF Downloads 252425 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation could be studied within a broader framework of constrained language and that simplification is one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study intends to adopt Shannon’s entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and pos-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and their evenness of distribution, information that is unavailable with traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. In terms of the data for the present study, one established (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. More specifically, word-form entropy and pos-form entropy will be calculated as indicators of lexical and syntactical complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by and peculiar to each variety.Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification
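As a minimal illustration of the word-form entropy measure described above, the sketch below computes Shannon entropy over relative word-form frequencies; the naive whitespace tokenization and toy sentences are assumptions for demonstration only.

```python
import math
from collections import Counter

def word_form_entropy(text: str) -> float:
    """Shannon entropy (in bits) over the relative frequencies of word forms,
    reflecting both how many distinct forms occur and how evenly they are used."""
    tokens = text.lower().split()  # naive whitespace tokenization for illustration
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

varied = "the cat sat on a mat while the dog slept under an old oak tree"
repetitive = "the cat sat on the mat the cat sat on the mat the cat"
# The more lexically diverse text yields the higher entropy value.
print(word_form_entropy(varied), word_form_entropy(repetitive))
```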
Procedia PDF Downloads 94424 Decrease of Aerobic Capacity in Twenty Years in Lithuanian 11–18 Years-Old Youth
Authors: Arunas Emeljanovas, Brigita Mieziene, Tomas Venckunas
Abstract:
Background statement: The level of aerobic capacity in school-age children provides important information about current and future cardiovascular, skeletal and mental health. It is widely recognised that risk factors for modern chronic diseases of adults have their origins in childhood and adolescence. The aim of the study was to analyse the trends of aerobic capacity across decades within gender and age groups. Methods: The research included data from three nationally representative cohort studies performed in Lithuania in the years 1992, 2002 and 2012 among 11 to 18-year-old school children. A total of 18,294 school children were recruited for testing. Only those who had their body weight and height measured and completed the 20 m shuttle endurance test were included in the analysis. The total number of students included in the analyses was 15,213 (7,608 boys and 7,605 girls). Permission to conduct the study was obtained from the Lithuanian Bioethics Committee (permission number BE-2-45). Major findings: Results are reported across gender and age groups. The comparison of shuttle endurance test results, controlling for body mass index, indicated that in general there is a constant decrease in aerobic capacity across decades in both genders and all age groups. The deterioration in aerobic capacity in boys accounted for 17 to 43 percent across age groups within decades, with the biggest decrease in 14-year-old boys. The deterioration in girls accounted for 19 to 37 percent across age groups, with the highest decrease in 11-year-old girls. However, girls had lower levels of aerobic capacity throughout all age groups and across the three decades. Body mass index, as a covariate, accounted for up to six percent of the deterioration in aerobic capacity. Final statement: The detected relationships may reflect the level and pattern of engagement in physical activity and sports, where increased activity is associated with superior test performance because of upregulated physiological function and a heightened competitive/motivational level. The significance of the decade effect indirectly supports the importance of recently changed activity patterns among schoolchildren for this relationship.Keywords: aerobic capacity, cardiovascular health, endurance, school age children
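For illustration only, a covariate-adjusted comparison of the kind described (shuttle-run performance across decades, controlling for body mass index) could be specified as below; the simulated data, variable names, and effect sizes are assumptions and do not reproduce the study's dataset or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate three cohorts with a declining number of shuttle-run laps and a small BMI effect.
rng = np.random.default_rng(2)
n_per_cohort = 200
df = pd.DataFrame({
    "decade": np.repeat(["1992", "2002", "2012"], n_per_cohort),
    "bmi": rng.normal(20, 2.5, 3 * n_per_cohort),
})
decade_effect = df["decade"].map({"1992": 0.0, "2002": -5.0, "2012": -12.0})
df["shuttle_laps"] = 60 + decade_effect - 0.8 * (df["bmi"] - 20) + rng.normal(0, 8, len(df))

# Linear model with decade as a categorical factor and BMI as a covariate (ANCOVA-style).
model = smf.ols("shuttle_laps ~ C(decade) + bmi", data=df).fit()
print(model.params)
```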
Procedia PDF Downloads 185423 Crossing of the Intestinal Barrier Thanks to Targeted Biologics: Nanofitins
Authors: Solene Masloh, Anne Chevrel, Maxime Culot, Leonardo Scapozza, Magali Zeisser-Labouebe
Abstract:
The limited stability of clinically proven therapeutic antibodies restricts their administration to the parenteral route. However, oral administration remains the best alternative, as it is the most convenient and least invasive one. A targeted treatment based on biologics that can be administered orally would therefore be ideal to improve patient adherence and compliance. Nevertheless, the delivery of macromolecules through the intestine remains challenging because of their sensitivity to the harsh conditions of the gastrointestinal tract and their low permeability across the intestinal mucosa. To address this challenge, this project aims to demonstrate that targeting receptor-mediated endocytosis followed by transcytosis could maximize the intestinal uptake and transport of large molecules, such as Nanofitins. These 7 kDa affinity proteins with binding properties similar to antibodies have already demonstrated retained stability in the digestive tract and local efficiency. However, their size does not allow passive diffusion through the intestinal barrier. Nanofitins with controlled affinity for membrane receptors involved in the transcytosis mechanism that naturally transports large molecules in humans were generated. Proteins were expressed using ribosome display and selected based on affinity for the targeted receptor and other characteristics. Their uptake and transport ex vivo across viable porcine intestines were investigated using an Ussing chamber system. In this paper, we report the results achieved while addressing the different challenges linked to this study. To validate the ex vivo model, we first confirmed the presence of the receptors targeted in humans on the porcine intestine. Then, after identifying an optimal method for detecting Nanofitins, transport experiments were performed on porcine intestines, with tissue viability monitored throughout the experiment. The results, showing that the physiological process of transcytosis can be triggered by the binding of Nanofitins to their target, are reported here. In conclusion, the results show that Nanofitins can be transported across the intestinal barrier by triggering receptor-mediated transcytosis and that the ex vivo model is a valuable technique for assessing the absorption of biologics through the intestine.Keywords: ex-vivo, Nanofitins, oral administration, transcytosis
Procedia PDF Downloads 178