Search results for: aircraft cable fault signal
132 Innovative Preparation Techniques: Boosting Oral Bioavailability of Phenylbutyric Acid Through Choline Salt-Based API-Ionic Liquids and Therapeutic Deep Eutectic Systems
Authors: Lin Po-Hsi, Sheu Ming-Thau
Abstract:
Urea cycle disorders (UCD) are rare genetic metabolic disorders that compromise the body's urea cycle. Sodium phenylbutyrate (SPB) is a medication commonly administered in tablet or powder form to lower ammonia levels. Nonetheless, its high sodium content poses risks to sodium-sensitive UCD patients. This necessitates the creation of an alternative drug formulation to mitigate sodium load and optimize drug delivery for UCD patients. This study focused on crafting a novel oral drug formulation for UCD, leveraging choline bicarbonate and phenylbutyric acid. The active pharmaceutical ingredient-ionic liquids (API-ILs) and therapeutic deep eutectic systems (THEDES) were formed by combining these with choline chloride. These systems display characteristics such as maintaining a liquid state at room temperature and exhibiting enhanced solubility, which in turn amplifies the drug dissolution rate, permeability, and ultimately oral bioavailability. Incorporating choline-based phenylbutyric acid as a substitute for traditional SPB can effectively curtail the sodium load in UCD patients. Our in vitro dissolution experiments revealed that the ILs and DESs, synthesized using choline bicarbonate and choline chloride with phenylbutyric acid, surpassed commercial tablets in dissolution speed. Pharmacokinetic evaluations in SD rats indicated a notable increase in the oral bioavailability of phenylbutyric acid, underscoring the efficacy of choline salt ILs in augmenting its bioavailability. Additional in vitro intestinal permeability tests on SD rats confirmed that the ILs, formulated with choline bicarbonate and phenylbutyric acid, demonstrate superior permeability compared to their sodium and acid counterparts. To conclude, choline salt ILs developed from choline bicarbonate and phenylbutyric acid present a promising avenue for UCD treatment, with the added benefit of reduced sodium load.
The sustained-release capabilities of DESs position them favorably for drug delivery, while the low toxicity and cost-effectiveness of choline chloride signal potential in formulation engineering. Overall, this drug formulation heralds a prospective therapeutic avenue for UCD patients.
Keywords: phenylbutyric acid, sodium phenylbutyrate, choline salt, ionic liquids, deep eutectic systems, oral bioavailability
Procedia PDF Downloads 113

131 Understanding the Role of Concussions as a Risk Factor for Multiple Sclerosis
Authors: Alvin Han, Reema Shafi, Alishba Afaq, Jennifer Gommerman, Valeria Ramaglia, Shannon E. Dunn
Abstract:
Adolescents engaged in contact sports can suffer from recurrent brain concussions with no loss of consciousness and no need for hospitalization, yet they face the possibility of long-term neurocognitive problems. Recent studies suggest that head concussive injuries during adolescence can also predispose individuals to multiple sclerosis (MS). The underlying mechanisms of how brain concussions predispose to MS are not understood. Here, we hypothesize that: (1) recurrent brain concussions prime microglial cells, the tissue-resident myeloid cells of the brain, setting them up for exacerbated responses when exposed to additional challenges later in life; and (2) brain concussions lead to the sensitization of myelin-specific T cells in the peripheral lymphoid organs. Towards addressing these hypotheses, we implemented a mouse model of closed head injury that uses a weight-drop device. First, we calibrated the model in male 12-week-old mice and established that a weight drop from a 3 cm height induced mild neurological symptoms (mean neurological score of 1.6 ± 0.4 at 1 hour post-injury) from which the mice fully recovered by 72 hours post-trauma. Then, we performed immunohistochemistry on the brains of concussed mice at 72 hours post-trauma. Despite the mice having recovered from all neurological symptoms, immunostaining for leukocytes (CD45) and IBA-1 revealed no peripheral immune infiltration, but an increase in the intensity of IBA-1+ staining compared to uninjured controls, suggesting that resident microglia had acquired a more active phenotype. This microglial activation was most apparent in the white matter tracts of the brain and in the olfactory bulb. Immunostaining for the microglia-specific homeostatic marker TMEM119 showed a reduction in TMEM119+ area in the brains of concussed mice compared to uninjured controls, confirming a loss of this homeostatic signal by microglia after injury.
Future studies will test whether single or repetitive concussive injury can worsen or accelerate autoimmunity in male and female mice. Understanding these mechanisms will guide the development of timed and targeted therapies to prevent MS from developing in people at risk.
Keywords: concussion, microglia, microglial priming, multiple sclerosis
Procedia PDF Downloads 100

130 Significant Factor of Magnetic Resonance for Survival Outcome in Rectal Cancer Patients Following Neoadjuvant Combined Chemotherapy and Radiation Therapy: Stratification of Lateral Pelvic Lymph Node
Authors: Min Ju Kim, Beom Jin Park, Deuk Jae Sung, Na Yeon Han, Kichoon Sim
Abstract:
Purpose: The purpose of this study is to determine the significant magnetic resonance (MR) imaging factors of lateral pelvic lymph nodes (LPLN) in the assessment of survival outcomes of neoadjuvant combined chemotherapy and radiation therapy (CRT) in patients with mid/low rectal cancer. Materials and Methods: The institutional review board approved this retrospective study of 63 patients with mid/low rectal cancer who underwent MR before and after CRT; patient consent was not required. Surgery was performed within 4 weeks after CRT. The location of LPLNs was divided into the following four groups: 1) common iliac, 2) external iliac, 3) obturator, and 4) internal iliac lymph nodes. The short- and long-axis diameters, number, shape (ovoid vs. round), signal intensity (homogeneous vs. heterogeneous), margin (smooth vs. irregular), and diffusion-weighted restriction of LPLNs were analyzed on pre- and post-CRT images. For treatment response using size, lymph node groups were defined as group 1) short-axis diameter ≤ 5 mm on both MR examinations, group 2) short-axis diameter > 5 mm before CRT decreasing to ≤ 5 mm after CRT, and group 3) persistent size > 5 mm before and after CRT. Clinical findings were also evaluated. The disease-free survival and overall survival rates were evaluated, and the risk factors for survival outcomes were analyzed using Cox regression analysis. Results: Patients in group 3 (persistent size > 5 mm) showed significantly lower survival rates than groups 1 and 2 (disease-free survival rates of 36.1% vs. 78.8% and 88.8%, respectively; p < 0.001). The size response (groups 1-3), multiplicity of LPLNs, the level of carcinoembryonic antigen (CEA), patient’s age, T and N stage, vessel invasion, and perineural invasion were significant factors affecting the disease-free survival rate or overall survival rate in univariate analysis (p < 0.05).
Persistent size (group 3) and multiplicity of LPLNs were independent risk factors among MR imaging features influencing the disease-free survival rate (HR = 10.087, p < 0.05; HR = 4.808, p < 0.05). Perineural invasion and T stage were shown to be independent histologic risk factors (HR = 16.594, p < 0.05; HR = 15.891, p < 0.05). Conclusion: Persistent size greater than 5 mm and multiplicity of LPLNs on both pre- and post-CRT MR were significant MR factors affecting survival outcomes in patients with mid/low rectal cancer.
Keywords: rectal cancer, MRI, lymph node, combined chemoradiotherapy
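The size-based response grouping used in this study can be written as a small helper function. This is an illustrative sketch only (the abstract does not say how a node that enlarges past 5 mm after CRT should be classified, so such nodes are treated as persistent, group 3, here):

```python
def classify_lpln_response(short_axis_pre_mm, short_axis_post_mm):
    """Classify an LPLN by short-axis diameter on pre- and post-CRT MR.

    Group 1: <= 5 mm on both examinations.
    Group 2: > 5 mm before CRT, regressing to <= 5 mm after CRT.
    Group 3: persistently > 5 mm (associated with the worst survival).
    """
    if short_axis_post_mm <= 5:
        return 1 if short_axis_pre_mm <= 5 else 2
    return 3  # nodes still (or, by assumption here, newly) > 5 mm after CRT
```

For example, a node measuring 8 mm before and 4 mm after CRT falls into group 2.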
Procedia PDF Downloads 149

129 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours
Authors: Fikret Yalcinkaya, Hamza Unsal
Abstract:
To understand how neurons work, it is necessary to combine experimental studies in neuroscience with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modeling functions have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified by the various neuronal behaviours they exhibit, such as spiking and bursting. These classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models based on first-order differential equations are discussed and compared: Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF), and Hindmarsh-Rose (HR). First, the physical meanings, derivatives, and differential equations of each model are provided, and the models are simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were visually examined in the Matlab environment with the aim of demonstrating which models can reproduce well-known biological neuron behaviours such as Tonic Spiking, Tonic Bursting, Mixed Mode Firing, Spike Frequency Adaptation, Resonator and Integrator. As a result, the Izhikevich model has been shown to perform Regular Spiking, Continuous Explosion, Intrinsically Bursting, Thalamo-Cortical, Low-Threshold Spiking and Resonator behaviours. The Adaptive Exponential Integrate-and-Fire model has been able to produce firing patterns such as Regular Ignition, Adaptive Ignition, Initially Explosive Ignition, Regular Explosive Ignition, Delayed Ignition, Delayed Regular Explosive Ignition, Temporary Ignition and Irregular Ignition. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: Spike, Burst and Chaotic.
From these results, the Izhikevich cell model may be preferred due to its ability to reflect the true behaviour of the nerve cell, to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich ignition patterns with fewer parameters. The chaotic behaviours of the Hindmarsh-Rose neuron model, like those of other chaotic systems, are thought to be applicable in many scientific and engineering fields such as physics, secure communication and signal processing.
Keywords: Izhikevich, adaptive exponential integrate fire, Hindmarsh Rose, biological neuron behaviours, spiking neuron models
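As a point of reference for the comparison above, the Izhikevich model is compact enough to state in a few lines. The sketch below uses the standard published regular-spiking parameters (a = 0.02, b = 0.2, c = -65, d = 8) and a plain Euler scheme with two half-steps per millisecond; the abstract does not list the exact parameter sets used in the Matlab experiments, so these defaults are an assumption:

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, T=1000, dt=1.0):
    """Euler simulation of the Izhikevich model for T ms of constant input I.

    dv/dt = 0.04 v^2 + 5 v + 140 - u + I,  du/dt = a (b v - u);
    when v >= 30 mV: v <- c, u <- u + d.  Returns (membrane trace, spike times).
    """
    v, u = -65.0, b * -65.0
    trace, spikes = [], []
    for step in range(int(T / dt)):
        if v >= 30.0:                      # spike: reset v, bump recovery u
            spikes.append(step * dt)
            v, u = c, u + d
        for _ in range(2):                 # two half-steps for numerical stability
            v += 0.5 * dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        trace.append(v)
    return trace, spikes

trace, spikes = izhikevich(I=10.0)         # tonic spiking under constant input
```

Other well-known behaviours (bursting, chattering, and so on) are obtained simply by changing (a, b, c, d), which is the flexibility the comparison above highlights.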
Procedia PDF Downloads 179

128 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Non-secure and secure matrix multiplication are studied.
We study the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
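The coding idea behind straggler-resilient multiplication can be illustrated with the simplest possible instance. This is a generic degree-1 polynomial code with recovery threshold 2, not the PSGPD scheme itself: X is split into two row blocks, each of n workers receives one evaluation of p(z) = X0 + z*X1, and any two finished workers suffice to interpolate W = XY.

```python
import numpy as np

def encode_shares(X, n):
    """Give worker i the coded block p(i) = X0 + i*X1, where X0, X1 are
    the two row blocks of X (so each worker stores half of X)."""
    X0, X1 = np.split(X, 2, axis=0)
    return [(i, X0 + i * X1) for i in range(1, n + 1)]

def worker(share, Y):
    i, Xi = share
    return i, Xi @ Y  # each worker multiplies its small coded block by Y

def decode(results):
    """Interpolate the degree-1 matrix polynomial p(z) @ Y from any two
    evaluations to recover X0 @ Y and X1 @ Y, i.e. the full product."""
    (i, Pi), (j, Pj) = results
    X1Y = (Pj - Pi) / (j - i)
    X0Y = Pi - i * X1Y
    return np.vstack([X0Y, X1Y])

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(4, 3)), rng.normal(size=(3, 5))
results = [worker(s, Y) for s in encode_shares(X, n=4)]
W = decode([results[0], results[3]])  # workers 2 and 3 straggle; ignore them
```

Here the recovery threshold is 2 out of n = 4 workers; the schemes discussed above generalize this idea with more blocks, random masking for secrecy, and protection against colluding workers.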
Procedia PDF Downloads 121

127 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and the modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended to identify both the location and magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but the magnitude of all forces except one is zero, implying that the impact occurs only at one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of responses resulting from impact at each potential location. The problem can be categorized into under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations) cases. The under-determined case studied here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force.
Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently chosen to regularize the problem in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different signal-window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, having a shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match well with the actual forces in terms of magnitude and duration.
Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
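The core computation, deconvolution with zeroth-order Tikhonov regularization, can be sketched as follows. The impulse response, the half-sine force, the noise level, and the regularization parameter below are synthetic stand-ins; the actual transfer behaviour of the panel and the L-curve/GCV parameter selection used in the study are not reproduced here.

```python
import numpy as np

def convolution_matrix(h, n):
    """Lower-triangular Toeplitz matrix H such that H @ f equals the
    discrete convolution of impulse response h with force history f."""
    H = np.zeros((n, n))
    for k, hk in enumerate(h[:n]):
        H += hk * np.eye(n, k=-k)
    return H

def tikhonov_deconvolve(y, h, lam):
    """Solve f = argmin ||H f - y||^2 + lam^2 ||f||^2 (zeroth-order Tikhonov)."""
    H = convolution_matrix(h, len(y))
    return np.linalg.solve(H.T @ H + lam**2 * np.eye(len(y)), H.T @ y)

n = 200
t = np.arange(n)
f_true = np.where(t < 40, np.sin(np.pi * t / 40), 0.0)  # half-sine impact force
h = np.exp(-t / 25.0) * np.cos(2 * np.pi * t / 30.0)    # synthetic structural IRF
rng = np.random.default_rng(1)
y = convolution_matrix(h, n) @ f_true + rng.normal(0.0, 1e-3, n)  # noisy response
f_rec = tikhonov_deconvolve(y, h, lam=1e-2)
r = np.corrcoef(f_true, f_rec)[0, 1]  # correlation coefficient, the quality metric above
```

In the study itself, lam would be chosen by the L-curve or GCV rather than fixed by hand, and the extended (multi-location) deconvolution stacks one such convolution matrix per candidate impact location.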
Procedia PDF Downloads 533

126 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe-level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem, which introduces estimation error into the covariance matrix and causes beam distortion, so that the output pattern cannot form a dot-shape beam. It also exhibits main-lobe deviation and high sidelobe levels in the limited-snapshot case. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the interference subspace, the noise subspace and the corresponding eigenvalues.
Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better overall performance.
Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust
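A minimal sketch of the eigenvalue-correction idea is given below for a generic narrowband array. The exact correction formula and the FDA-specific steering structure are not given in the abstract, so the compression rule lam -> mean * (lam / mean)**(1 / gamma) and the parameter values here are illustrative assumptions:

```python
import numpy as np

def corrected_mvdr_weights(snapshots, a, n_interf, gamma=4.0):
    """LCMV/MVDR weights from a limited-snapshot sample covariance whose
    noise-subspace eigenvalues are exponentially compressed toward their
    mean, reducing their spread before the matrix is inverted."""
    m, k = snapshots.shape
    R = snapshots @ snapshots.conj().T / k       # sample covariance (m x m)
    lam, U = np.linalg.eigh(R)                   # eigenvalues in ascending order
    n_noise = m - n_interf                       # smallest eigenvalues = noise subspace
    mean = lam[:n_noise].mean()
    lam[:n_noise] = mean * (lam[:n_noise] / mean) ** (1.0 / gamma)
    R_corr = (U * lam) @ U.conj().T              # rebuild corrected covariance
    w = np.linalg.solve(R_corr, a)
    return w / np.vdot(a, w)                     # distortionless response toward a

m, k = 8, 10                                     # 8 elements, only 10 snapshots
rng = np.random.default_rng(2)
a = np.ones(m, dtype=complex) / np.sqrt(m)       # broadside steering vector
snap = (rng.normal(size=(m, k)) + 1j * rng.normal(size=(m, k))) / np.sqrt(2)
w = corrected_mvdr_weights(snap, a, n_interf=1)
```

The final normalization enforces the LCMV distortionless constraint a^H w = 1; the exponential compression (gamma > 1) is what keeps the small-eigenvalue estimation noise from dominating the inverse at low snapshot counts.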
Procedia PDF Downloads 129

125 Influence of Atmospheric Circulation Patterns on Dust Pollution Transport during the Harmattan Period over West Africa
Authors: Ayodeji Oluleye
Abstract:
This study used the Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) and a reanalysis dataset covering thirty years (1983-2012) to investigate the influence of atmospheric circulation on dust transport during the Harmattan period over West Africa. The Harmattan dust mobilization and atmospheric circulation pattern were evaluated using a kernel density estimate, which shows the areas where most points are concentrated between the variables. The evolution of the Inter-Tropical Discontinuity (ITD), Sea Surface Temperature (SST) over the Gulf of Guinea, and the North Atlantic Oscillation (NAO) index during the Harmattan period (November-March) was also analyzed, and graphs of the average ITD positions, SST and the NAO were examined on a daily basis. Pearson product-moment correlation analysis was also employed to assess the effect of atmospheric circulation on Harmattan dust transport. The results show that the departure (increase) of TOMS AI values from the long-term mean (1.64) occurred from around 21st December, which signifies the rich dust days during the winter period. Strong TOMS AI signals were observed from January to March, with the maximum occurring in the latter months (February and March). The inter-annual variability of TOMS AI revealed that the rich dust years were 1984-1985, 1987-1988, 1997-1998, 1999-2000, and 2002-2004. A significantly poor dust year was found between 2005 and 2006 in all the periods. The study found that strong north-easterly (NE) trade winds prevailed over most of the Sahelian region of West Africa during the winter months, with the maximum wind speed reaching 8.61 m/s in January. The strength of the NE winds determines the extent of dust transport to the coast of the Gulf of Guinea during winter. This study has confirmed that the presence of the Harmattan is strongly dependent on the SST over the Atlantic Ocean and the ITD position. The loci of the average SST and ITD positions over West Africa could be described by polynomial functions.
The study concludes that the evolution of the near-surface wind field at 925 hPa and the variations of SST and ITD positions are the major large-scale atmospheric circulation systems driving the emission, distribution, and transport of Harmattan dust aerosols over West Africa. However, the NAO was shown to have a less significant effect on Harmattan dust transport over the region.
Keywords: atmospheric circulation, dust aerosols, Harmattan, West Africa
Procedia PDF Downloads 308

124 Clinical Features, Diagnosis and Treatment Outcomes in Necrotising Autoimmune Myopathy: A Rare Entity in the Spectrum of Inflammatory Myopathies
Authors: Tamphasana Wairokpam
Abstract:
Inflammatory myopathies (IMs) have long been recognised as a heterogeneous family of myopathies with acute, subacute, and sometimes chronic presentation that are potentially treatable. Necrotising autoimmune myopathies (NAM) are a relatively new subset of myopathies. Patients generally present with subacute onset of proximal myopathy and significantly elevated creatine kinase (CK) levels. It is being increasingly recognised that there are limitations to the independent diagnostic utility of muscle biopsy. Immunohistochemistry tests may reveal important information in these cases. The traditional classification of IMs failed to recognise NAM as a separate entity and did not adequately emphasise the diversity of IMs. This review and case report on NAM aims to highlight the heterogeneity of this entity and focus on the distinct clinical presentation, biopsy findings, specific auto-antibodies implicated, and available treatment options with prognosis. This article is a meta-analysis of the literature on NAM together with a case report illustrating the clinical course, investigation and biopsy findings, antibodies implicated, and management of a patient with NAM. The main databases used for the search were PubMed, Google Scholar, and the Cochrane Library. Altogether, 67 publications have been taken as references. Two biomarkers, anti-signal recognition particle (SRP) and anti-hydroxymethylglutaryl-coenzyme A reductase (HMGCR) antibodies, have been found to have an association with NAM in about two-thirds of cases. Interestingly, anti-SRP-associated NAM appears to be more aggressive in its clinical course when compared to its anti-HMGCR-associated counterpart. Biopsy shows muscle fibre necrosis without inflammation. There are reports of statin-induced NAM where progression of myopathy has been seen even after discontinuation of statins, pointing towards an underlying immune mechanism. Diagnosing NAM is essential, as it requires more aggressive immunotherapy than other types of IMs.
Most cases are refractory to corticosteroid monotherapy. Immunosuppressive therapy with other immunotherapeutic agents such as IVIg, rituximab, mycophenolate mofetil, and azathioprine has been explored and found to have a role in the treatment of NAM. In conclusion, given the heterogeneity of NAM, it appears that NAM is not a single entity but consists of many different forms, despite the similarities in presentation, and its classification remains an evolving field. A thorough understanding of the underlying mechanism and the clinical correlation with antibodies associated with NAM is essential for efficacious management and disease prognostication.
Keywords: inflammatory myopathies, necrotising autoimmune myopathies, anti-SRP antibody, anti-HMGCR antibody, statin-induced myopathy
Procedia PDF Downloads 102

123 A Delphi Study of Factors Affecting the Forest Biorefinery Development in the Pulp and Paper Industry: The Case of Bio-Based Products
Authors: Natasha Gabriella, Josef-Peter Schöggl, Alfred Posch
Abstract:
Being a mature industry, the pulp and paper industry (PPI) possesses strengths arising from its existing infrastructure, technology know-how, and abundant availability of biomass. However, the declining trend in sales of wood-based products sends a clear signal to the industry to transform its business model in order to increase its profitability. With the emerging global attention on the bio-based economy and circular economy, coupled with the low price of fossil feedstock, the PPI has started to integrate the biorefinery as a value-added business model to keep the industry competitive. Nonetheless, the biorefinery, as an innovation, exposes the PPI to some barriers, of which the uncertainty about the most promising product is one of the major hurdles. This study aims to assess factors that affect the diffusion and development of the forest biorefinery in the PPI, including drivers, barriers, advantages, and disadvantages, as well as the most promising bio-based products of the forest biorefinery. The study examines the identified factors according to the layers of the business environment: the macro-environment, industry, and strategic-group levels. In addition, an overview of the future state of the identified factors is elaborated so as to map the improvements necessary for implementing the forest biorefinery. A two-phase Delphi method, comprising an online survey and interviews, is used to collect the empirical data for the study. The Delphi method is an effective communication tool for eliciting ideas from a group of experts in order to reach a consensus when forecasting future trends. Drawing on a panel of 50 experts, the study reveals that influential factors are found in every layer of the PPI's business environment. The political dimension appears to have a significant influence on tackling the economic barrier while reinforcing the environmental and social benefits in the macro-environment.
At the industry level, biomass availability appears to be a strength of the PPI, while the knowledge gaps on technology and the market appear to be barriers. Consequently, cooperation with academia and the chemical industry has to be improved. The human resources issue is indicated as one important premise behind the preceding barrier, along with an indication of the PPI's resistance towards biorefinery implementation as an innovation. Further, cellulose-based products are acknowledged for near-term product development, whereas lignin-based products are expected to gain importance in the longer term.
Keywords: forest biorefinery, pulp and paper, bio-based product, Delphi method
Procedia PDF Downloads 277

122 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The topic of the presented materials concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first is the engine exhaust manifold, turbocharger, and catalytic converters, which are called the “hot part.” The second part is the gas exhaust system, which contains elements intended exclusively for reducing exhaust noise (mufflers, resonators), the accepted designation of which is the “cold part.” The design of the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the “cold part” with high accuracy in a given frequency range, but on the condition that the input parameters are accurately specified, namely, the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of an engine with a “hot part.” Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) do not allow the calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a “hot part” based on a complex of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the “cold part” of the exhaust system (taking into account the acoustic impedance of radiation of the outlet pipe into open space), with the result in the form of the input impedance of the “cold part.” The second part is a finite element simulation of the “hot part” of the exhaust system (taking into account the acoustic characteristics of catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the “hot part.”
The third part of the technique consists of the mathematical processing of the results according to the proposed formula for the convergence of the mathematical series formed by the summation of multiple reflections of the acoustic signal between the “cold part” and the “hot part.” This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between the “hot part” and the “cold part” of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the “incident” and “reflected” waves. The final stage consists of the mathematical processing of all calculated and experimental data to obtain a result in the form of the amplitude spectrum of the engine noise and its acoustic impedance.
Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
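The separation of “incident” and “reflected” waves from two pressure sensors can be sketched in the frequency domain: with plane waves p(x) = A e^(-jkx) + B e^(+jkx), each frequency bin gives a 2x2 linear system in the two sensor spectra. This is a generic lossless, no-mean-flow form of the well-known two-sensor technique; the sensor spacing, sampling rate, and synthetic signals below are illustrative assumptions, and hot-flow corrections used on a real engine stand are omitted.

```python
import numpy as np

def separate_waves(p1, p2, x1, x2, fs, c=343.0):
    """Decompose two pressure time signals at positions x1, x2 into incident
    (A) and reflected (B) plane-wave spectra, assuming
    p(x, f) = A(f) e^{-jkx} + B(f) e^{+jkx} with k = 2*pi*f/c."""
    n = len(p1)
    P1, P2 = np.fft.rfft(p1), np.fft.rfft(p2)
    k = 2 * np.pi * np.fft.rfftfreq(n, 1 / fs) / c
    A = np.zeros_like(P1)
    B = np.zeros_like(P1)
    for i in range(1, len(k)):  # skip DC, where the 2x2 system is singular
        M = np.array([[np.exp(-1j * k[i] * x1), np.exp(1j * k[i] * x1)],
                      [np.exp(-1j * k[i] * x2), np.exp(1j * k[i] * x2)]])
        A[i], B[i] = np.linalg.solve(M, [P1[i], P2[i]])
    return A, B

# synthetic check: a burst plus a 0.3-amplitude reflection, sensors 2 cm apart
n, fs, c = 256, 8000, 343.0
t = np.arange(n) / fs
src = np.sin(2 * np.pi * 500 * t) * np.hanning(n)
S = np.fft.rfft(src)
k = 2 * np.pi * np.fft.rfftfreq(n, 1 / fs) / c
x1, x2, refl = 0.0, 0.02, 0.3
p1 = np.fft.irfft(S * np.exp(-1j * k * x1) + refl * S * np.exp(1j * k * x1), n)
p2 = np.fft.irfft(S * np.exp(-1j * k * x2) + refl * S * np.exp(1j * k * x2), n)
A, B = separate_waves(p1, p2, x1, x2, fs)
```

The spacing must keep k*(x2 - x1) away from multiples of pi over the band of interest, which is the usual validity limit of the two-sensor method.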
Procedia PDF Downloads 57

121 The Effectiveness of Exercise Therapy on Decreasing Pain in Women with Temporomandibular Disorders and How Their Brains Respond: A Pilot Randomized Controlled Trial
Authors: Zenah Gheblawi, Susan Armijo-Olivo, Elisa B. Pelai, Vaishali Sharma, Musa Tashfeen, Angela Fung, Francisca Claveria
Abstract:
Due to physiological differences between men and women, pain is experienced differently between the two sexes. Chronic pain disorders, notably temporomandibular disorders (TMDs), disproportionately affect women in both diagnosis rate and pain severity compared with their male counterparts. TMDs are a type of musculoskeletal disorder affecting the masticatory muscles, including the temporalis, and the temporomandibular joints, causing considerable orofacial pain which can be referred to the neck and back. Therapeutic methods are scarce and not TMD-centered, with the latest research suggesting that subjects with chronic musculoskeletal pain disorders have abnormal alterations in the grey matter of their brains which can be remedied with exercise, thus decreasing the pain experienced. The aim of the study is to investigate the effects of exercise therapy in female TMD patients experiencing chronic jaw pain and to assess the consequent effects on brain activity. In a randomized controlled trial, the effectiveness of an exercise program to improve brain alterations and clinical outcomes in women with TMD pain will be tested. Women with chronic TMD pain will be randomized to either an intervention arm or a placebo control group. Women in the intervention arm will receive 8 weeks of progressive exercise of motor control training using visual feedback (MCTF) of the cervical muscles, twice per week. Women in the placebo arm will receive innocuous transcutaneous electrical nerve stimulation during the same 8 weeks. The primary outcomes will be changes in 1) pain, measured with the Visual Analogue Scale, and 2) brain structure and networks, measured by fractional anisotropy (brain structure) and the blood-oxygen-level-dependent signal (brain networks). Outcomes will be measured at baseline, after 8 weeks of treatment, and 4 months after treatment ends, and will determine the effectiveness of MCTF in managing TMD through improved clinical outcomes.
Results will directly inform and guide clinicians in prescribing more effective interventions for women with TMD. This study is underway, and no results are available at this point. The results of this study will have substantial implications for advancing our understanding of the scope of the brain's plasticity with regard to pain, and of how it can be used to improve the treatment and pain of women with TMD and, more generally, other musculoskeletal disorders.
Keywords: exercise therapy, musculoskeletal disorders, physical therapy, rehabilitation, temporomandibular disorders
Procedia PDF Downloads 291
120 Psychophysiological Adaptive Automation Based on Fuzzy Controller
Authors: Liliana Villavicencio, Yohn Garcia, Pallavi Singh, Luis Fernando Cruz, Wilfrido Moreno
Abstract:
Psychophysiological adaptive automation is a concept that combines human physiological data and computer algorithms to create personalized interfaces and experiences for users. This approach aims to enhance human learning by adapting to individual needs and preferences and optimizing the interaction between humans and machines. According to neuroscience, working memory demand during the student learning process changes when the student is learning a new subject or topic, or managing and fulfilling a specific task goal. A sudden increase in working memory demand modifies the level of students’ attention, engagement, and cognitive load. The proposed psychophysiological adaptive automation system will adapt the task requirements to optimize cognitive load, the process output variable, by monitoring the student's brain activity. Cognitive load changes according to the student’s previous knowledge, the type of task, the difficulty level of the task, and the overall psychophysiological state of the student. Scaling the measured cognitive load as low, medium, or high, the system assigns a difficulty level to the next task according to the ratio between the previous task's difficulty level and the student's stress. For instance, if a student becomes stressed or overwhelmed during a particular task, the system detects this through signal measurements such as brain waves, heart rate variability, or other psychophysiological variables, and adjusts the task difficulty level accordingly. Engagement and stress are treated as internal variables of the hypermedia system, which selects among three different types of instructional material. This work assesses the feasibility of a fuzzy controller to track a student's physiological responses and adjust the learning content and pace accordingly.
Using an industrial automation approach, the proposed fuzzy logic controller is based on linguistic rules that complement the instrumentation of the system to monitor and control the delivery of instructional material to the students. The test results show that the implemented fuzzy controller can satisfactorily regulate the delivery of academic content based on working memory demand without compromising students’ health. This work has a potential application in the instructional design of virtual reality environments for training and education.
Keywords: fuzzy logic controller, hypermedia control system, personalized education, psychophysiological adaptive automation
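The rule-based mapping from scaled cognitive load to the next task's difficulty can be sketched in code. This is a minimal Mamdani-style sketch: the triangular membership functions, the three-rule table, and the ±0.2 difficulty step are illustrative assumptions, not the controller reported in the study:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def next_difficulty(load, prev_difficulty):
    """Map measured cognitive load (0-1) and the previous task's
    difficulty (0-1) to the next difficulty via a small rule base."""
    low = tri(load, -0.5, 0.0, 0.5)
    med = tri(load, 0.0, 0.5, 1.0)
    high = tri(load, 0.5, 1.0, 1.5)
    # Rules: low load -> raise difficulty, medium -> hold, high -> lower.
    rules = [
        (low, min(prev_difficulty + 0.2, 1.0)),
        (med, prev_difficulty),
        (high, max(prev_difficulty - 0.2, 0.0)),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else prev_difficulty
```

With a mid-difficulty task, a fully low load raises the next difficulty and a fully high load lowers it, while intermediate loads blend the two rules smoothly.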
Procedia PDF Downloads 79
119 Metadiscourse in EFL, ESP and Subject-Teaching Online Courses in Higher Education
Authors: Maria Antonietta Marongiu
Abstract:
Propositional information in discourse is made coherent, intelligible, and persuasive through metadiscourse. The linguistic and rhetorical choices that writers/speakers make to organize and negotiate content matter are intended to help relate a text to its context. They also help the audience to connect to and interpret a text according to the values of a specific discourse community. Based on these assumptions, this work aims to analyse the use of metadiscourse in the spoken performance of teachers in online EFL, ESP, and subject-teacher courses taught in English to non-native learners in higher education. The global spread of COVID-19 has forced universities to transition their in-class courses to online delivery. This has inevitably placed a heavier interactional responsibility on the instructor than in-class courses do. Accordingly, online delivery needs greater structuring with regard to establishing the reader/listener’s resources for understanding and negotiating the text. Indeed, in online as well as in in-class courses, lessons are social acts which take place in contexts where interlocutors, as members of a community, affect the ways ideas are presented and understood. Following Hyland’s Interactional Model of Metadiscourse (2005), this study investigates Teacher Talk in online academic courses during the COVID-19 lockdown in Italy. The selected corpus includes the transcripts of online EFL and ESP courses and subject-teachers' online courses taught in English. The objective of the investigation is, firstly, to ascertain the presence of metadiscourse in the form of interactive devices (to guide the listener through the text) and interactional features (to involve the listener in the subject).
Previous research on metadiscourse in academic discourse, in college students' presentations in EAP (English for Academic Purposes) lessons, and in online teaching methodology courses and MOOCs (Massive Open Online Courses), has shown that instructors use a vast array of metadiscoursal features intended to express the speaker's intentions and standing with respect to the discourse. They also tend to use directions to orient their listeners and logical connectors referring to the structure of the text. Accordingly, the purpose of the investigation is also to find out whether metadiscourse is used as a rhetorical strategy by instructors to control, evaluate, and negotiate the impact of the ongoing talk, and eventually to signal their attitudes towards the content and the audience. Thus, the use of metadiscourse can contribute to the informative and persuasive impact of discourse and to the effectiveness of online communication, especially in learning contexts.
Keywords: discourse analysis, metadiscourse, online EFL and ESP teaching, rhetoric
Procedia PDF Downloads 128
118 Acoustic Radiation Pressure Detaches Myoblast from Culture Substrate by Assistance of Serum-Free Medium
Authors: Yuta Kurashina, Chikahiro Imashiro, Kiyoshi Ohnuma, Kenjiro Takemura
Abstract:
Research objectives and goals: To realize clinical applications of regenerative medicine, mass cell culture is highly required. In conventional cell culture, trypsinization is employed for cell detachment. However, trypsinization causes a decrease in proliferation due to injury of the cell membrane. In order to detach cells with an enzyme-free method, this study therefore proposes a novel cell detachment method capable of detaching adherent cells using acoustic radiation pressure applied to the dish with the assistance of a serum-free medium with ITS liquid medium supplement. Methods used: In order to generate acoustic radiation pressure, a piezoelectric ceramic plate was glued onto a glass plate to form an ultrasonic transducer. The glass plate and a chamber wall form a chamber in which a culture dish is placed in glycerol. The glycerol transmits the acoustic radiation pressure to the cells adhered to the culture dish. To excite a resonant vibration of the transducer, an AC signal at 29-31 kHz (swept) and 150, 300, or 450 V was input to the transducer for 5 min. As a pretreatment to reduce cell adhesivity, serum-free medium with ITS liquid medium supplement was spread on the culture dish before exposure to acoustic radiation pressure. To evaluate the proposed cell detachment method, C2C12 myoblast cells (8.0 × 10⁴ cells) were cultured on a ø35 culture dish for 48 hr, and then the medium was replaced with the serum-free medium with ITS liquid medium supplement for 24 hr. We then replaced the medium with phosphate-buffered saline and incubated the cells for 10 min. After that, the cells were exposed to the acoustic radiation pressure for 5 min. We also collected cells by trypsinization as a control. Cells collected by the proposed method and by trypsinization were reseeded in ø60 culture dishes and cultured for 24 hr. Then, the number of proliferated cells was counted.
Results achieved: Phase contrast microscope imaging showed shrinkage of lamellipodia before exposure to acoustic radiation pressure, and no cells remained on the culture dish after the exposure. This result suggests that the serum-free medium with ITS inhibits cell adhesivity and that the acoustic radiation pressure detaches cells from the dish. Moreover, the number of proliferated cells 24 hr after collection by the proposed method at 150 and 300 V was the same as or greater than that after trypsinization; i.e., cells proliferated 15% more with the proposed method using acoustic radiation pressure than with the traditional cell-collecting method of trypsinization. These results proved that cells can be collected using an appropriate exposure to acoustic radiation pressure. Conclusions: This study proposed a cell detachment method using acoustic radiation pressure with the assistance of a serum-free medium. The proposed method provides enzyme-free cell detachment, so it may be used in future clinical applications instead of trypsinization.
Keywords: acoustic radiation pressure, cell detachment, enzyme free, ultrasonic transducer
Procedia PDF Downloads 253
117 The Markers -mm and dämmo in Amharic: Developmental Approach
Authors: Hayat Omar
Abstract:
Languages provide speakers with a wide range of linguistic units to organize and deliver information. There are several ways to verbally express the mental representations of events. According to the linguistic tools they have acquired, speakers select the one that produces the greatest communicative effect to convey their message. Our study focuses on two markers, -mm and dämmo, in Amharic (an Ethiopian Semitic language). Our aim is to examine, from a developmental perspective, how they are used by speakers. We seek to distinguish the communicative and pragmatic functions indicated by means of these markers. To do so, we created a corpus of sixty narrative productions by children aged 5-6, 7-8, and 10-12 years and by adult Amharic speakers. The experimental material used to collect our data is a textless picture series, 'Frog, Where are you?'. Although -mm and dämmo are each used in specific contexts, they are sometimes analyzed as being interchangeable. The suffix -mm is complex and multifunctional. It marks the end of the negative verbal structure, appears in the relative structure of the imperfect, creates new words such as adverbials or pronouns, and serves to coordinate words and sentences and to mark the link between macro-propositions within a larger textual unit. -mm has been analyzed as a marker of insistence, a topic shift marker, an element of concatenation, a contrastive focus marker, and a 'bisyndetic' coordinator. On the other hand, dämmo has a more limited function and has attracted little attention; the only approach we could find analyzes it as a 'monosyndetic' coordinator. Setting these two elements side by side made it possible to understand their distinctive functions and refine their description. When it comes to marking a referent, the choice of -mm or dämmo is not neutral: it depends on whether the tagged argument is newly introduced, maintained, promoted, or reintroduced. The presence of these morphemes makes the inter-phrastic link explicit.
The information is seized by anaphora or presupposition: -mm points upstream while dämmo points downstream; the latter requires new information. The speaker uses -mm or dämmo according to what he assumes to be known to his interlocutors. The results show that although all speakers use both -mm and dämmo, the two do not always have the same scope for each speaker and vary with age. dämmo is mainly used to mark a contrastive topic to signal the concomitance of events. It is more common in young children’s narratives (F(3,56) = 3.82, p < .01). Some values of -mm (additive) are acquired very early, while others appear rather late and increase with age (F(3,56) = 3.2, p < .03). The difficulty is due not only to its synthetic structure but primarily to the fact that it is multi-purpose and requires memory work. It highlights the constituent on which it operates to clarify how the message should be interpreted.
Keywords: acquisition, cohesion, connection, contrastive topic, contrastive focus, discourse marker, pragmatics
Procedia PDF Downloads 133
116 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination
Authors: Gilberto Goracci, Fabio Curti
Abstract:
This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real-time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice to perform real-time orbit determination without the need to add additional sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models, in fact, can grasp nonlinear relations between the inputs, in this case, the magnetometer data and the EKF state estimations, and the targets, namely the true position, and velocity of the spacecraft. The model has been pre-trained on Sun-Synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden. 
Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit onboard in real-time and work autonomously during GPS outages. In this way, the proposed module shows versatility, as it can be applied to any mission operating in SSO, while the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results of this study show an increase of one order of magnitude in the precision of the state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.
Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field
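The two-stage scheme described above, a Kalman measurement update followed by a learned residual correction, can be sketched as follows. This is a generic illustration with placeholder names (`ekf_update`, `refined_state`, `rnn_correction`), not the authors' implementation, and it assumes an arbitrary measurement model `h` with Jacobian `H`:

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, h, H, R):
    """One EKF measurement update: x_pred/P_pred are the propagated state
    and covariance, z the magnetometer measurement, h the measurement
    model, H its Jacobian, R the measurement-noise covariance."""
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P

def refined_state(x_ekf, features, rnn_correction):
    """The trained network maps (EKF estimate, magnetometer features) to a
    residual added to the EKF output; rnn_correction stands in for it."""
    return x_ekf + rnn_correction(x_ekf, features)
```

With an identity measurement model and unit covariances, the update splits the innovation evenly between prediction and measurement, which is a quick sanity check for the gain computation.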
Procedia PDF Downloads 103
115 Detection of Glyphosate Using Disposable Sensors for Fast, Inexpensive and Reliable Measurements by Electrochemical Technique
Authors: Jafar S. Noori, Jan Romano-deGea, Maria Dimaki, John Mortensen, Winnie E. Svendsen
Abstract:
Pesticides have been intensively used in agriculture to control weeds, insects, fungi, and pests. One of the most commonly used pesticides is glyphosate. Glyphosate has the ability to attach to soil colloids and to be degraded by soil microorganisms. As glyphosate led to the appearance of resistant species, the pesticide came to be used even more intensively. As a consequence of this heavy use, residues of the compound are increasingly observed in food and water. Recent studies reported a direct link between glyphosate and chronic effects such as teratogenic, tumorigenic, and hepatorenal effects, although the exposure was below the lowest regulatory limit. Today, pesticides are detected in water by complicated and costly manual procedures conducted by highly skilled personnel; it can take up to several days to get an answer regarding the pesticide content of water. An alternative to this demanding procedure is offered by electrochemical measuring techniques. Electrochemistry is an emerging technology with the potential to identify and quantify several compounds in a few minutes. It is currently not possible to detect glyphosate directly in water samples, and intensive research is underway to enable its direct, selective, and quantitative detection in water. This study focuses on developing and modifying a sensor chip that can selectively measure glyphosate while minimizing signal interference from other compounds. The sensor is a silicon-based chip with dimensions of 10 × 20 mm, fabricated in a cleanroom facility. The chip comprises a three-electrode configuration. The deposited electrodes consist of a 20 nm chromium layer and 200 nm of gold. The working electrode is 4 mm in diameter. The working electrodes are modified by creating molecularly imprinted polymers (MIPs) using an electrodeposition technique that allows the chip to selectively measure glyphosate at low concentrations.
The modification used gold nanoparticles with a diameter of 10 nm functionalized with 4-aminothiophenol. This configuration allows the nanoparticles to bind to the working electrode surface and create the template for glyphosate. The chip was modified using an electrodeposition technique. An initial potential for the identification of glyphosate was estimated to be around -0.2 V. The developed sensor was tested on six different concentrations and was able to detect glyphosate down to 0.5 mgL⁻¹. This value is below the accepted pesticide limit of 0.7 mgL⁻¹ set by US regulation. The current focus is to optimize the functionalization procedure in order to achieve glyphosate detection at the EU regulatory limit of 0.1 µgL⁻¹. To the best of our knowledge, this is the first attempt to modify miniaturized sensor electrodes with functionalized nanoparticles for glyphosate detection.
Keywords: pesticides, glyphosate, rapid, detection, modified, sensor
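Quantification with an electrochemical sensor of this kind typically rests on a linear calibration of the measured response against known concentrations. The sketch below shows the least-squares fit involved; the concentration and current values are made-up illustrative numbers, not the study's measurements:

```python
def fit_calibration(concentrations, currents):
    """Ordinary least-squares fit of a linear calibration curve
    i = slope * c + intercept (e.g. concentrations in mg/L,
    peak currents in µA)."""
    n = len(concentrations)
    mean_c = sum(concentrations) / n
    mean_i = sum(currents) / n
    sxx = sum((c - mean_c) ** 2 for c in concentrations)
    sxy = sum((c - mean_c) * (i - mean_i)
              for c, i in zip(concentrations, currents))
    slope = sxy / sxx
    intercept = mean_i - slope * mean_c
    return slope, intercept

def concentration_from_current(current, slope, intercept):
    """Invert the calibration to estimate an unknown concentration."""
    return (current - intercept) / slope
```

In practice the detection limit quoted in the abstract would correspond to the lowest concentration whose response is reliably distinguishable from the blank on such a curve.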
Procedia PDF Downloads 176
114 The Impact of CSR Satisfaction on Employee Commitment
Authors: Silke Bustamante, Andrea Pelzeter, Andreas Deckmann, Rudi Ehlscheidt, Franziska Freudenberger
Abstract:
Many companies increasingly seek to enhance their attractiveness as an employer in order to retain their employees. At the same time, corporate responsibility for social and ecological issues seems to be becoming a more important part of an attractive employer brand. It enables the company to match the values and expectations of its members, to signal fairness towards them, and to increase its brand potential for positive psychological identification on the employees’ side. In the last decade, several empirical studies have focused on this relationship, confirming a positive relationship between employees’ CSR perception and their affective organizational commitment. The current paper takes a slightly different view by analyzing the impact of another factor on commitment: the employee's weighted satisfaction with the employer's CSR. For that purpose, it is assumed that commitment levels are rather a result of the fulfillment or disappointment of expectations. Hence, instead of merely asking how CSR perception affects commitment, a more complex independent variable is taken into account: a weighted satisfaction construct that summarizes two different factors. The individual level of commitment contingent on CSR is thus conceptualized as a function of two psychological processes: (1) the individual significance that an employee ascribes to specific employer attributes and (2) the individual satisfaction based on the fulfillment of expectations that rely on preceding perceptions of employer attributes. The results presented are based on a quantitative survey undertaken among employees of the German service sector. Conceptually, a five-dimensional CSR construct (ecology, employees, marketplace, society, and corporate governance) and a two-dimensional non-CSR construct (company and workplace) were applied to differentiate employer characteristics. (1) Respondents were asked to indicate the importance of different facets of CSR-related and non-CSR-related employer attributes.
By means of a conjoint analysis, the relative importance of each employer attribute was calculated from the data. (2) In addition, participants stated their level of satisfaction with specific employer attributes. Both indications were merged into individually weighted satisfaction indexes across the seven dimensions of employer characteristics. The affective organizational commitment of employees (the dependent variable) was measured with the established 15-item Organizational Commitment Questionnaire (OCQ). The findings on the relationship between satisfaction and commitment will be presented. Furthermore, the question will be addressed of how important satisfaction with CSR is, relative to satisfaction with other attributes of the company, in the creation of commitment. Practical as well as scientific implications will be discussed, especially with reference to previous results that focused on CSR perception as a commitment driver.
Keywords: corporate social responsibility, organizational commitment, employee attitudes/satisfaction, employee expectations, employer brand
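The merging step above, conjoint-derived importances combined with stated satisfaction ratings, can be sketched as a simple weighting. This is a minimal reading of the construct, assuming importances that sum to one; the dimension names and numbers are illustrative, not the study's data:

```python
def weighted_satisfaction(importance, satisfaction):
    """Combine attribute importances (conjoint-derived, summing to 1)
    with stated satisfaction ratings into one weighted index per
    employer-attribute dimension."""
    total = sum(importance.values())
    assert abs(total - 1.0) < 1e-9, "importances must sum to 1"
    return {dim: importance[dim] * satisfaction[dim] for dim in importance}
```

Summing the per-dimension indexes then yields an overall weighted satisfaction score that can serve as the independent variable in a commitment regression.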
Procedia PDF Downloads 265
113 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy
Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu
Abstract:
Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to materials engineering and biology, in which it is possible to image unseen phenomena in situ. For this, a nanofluidic device is used to bring the nanoflow with the sample into the microscope while keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the fluid inside and the vacuum outside in the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR), limiting the achievable spatial resolution. In the proposed device, the membrane is reinforced with a microstructure capable of withstanding higher pressure differences and almost completely eliminating the bulging. A theoretical study is presented with Finite Element Method (FEM) simulations, which provide a deep understanding of the mechanical conditions of the membrane and prove the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The microfabrication of the device started with a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography process, these layers were etched (by reactive ion etching and buffered oxide etch (BOE), respectively). After that, the microstructure was etched (deep reactive ion etching). Then the backside SiO2 was etched (BOE), and the array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Finally, a thin spacer is sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples.
This approach considerably reduces the common bulging problem of the window, improving the SNR, contrast, and spatial resolution, while substantially increasing the mechanical stability of the windows and allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films
Procedia PDF Downloads 254
112 Nanowire Substrate to Control Differentiation of Mesenchymal Stem Cells
Authors: Ainur Sharip, Jose E. Perez, Nouf Alsharif, Aldo I. M. Bandeas, Enzo D. Fabrizio, Timothy Ravasi, Jasmeen S. Merzaban, Jürgen Kosel
Abstract:
Bone marrow-derived human mesenchymal stem cells (MSCs) are attractive candidates for tissue engineering and regenerative medicine, due to their ability to differentiate into osteoblasts, chondrocytes, or adipocytes. Differentiation is influenced by biochemical and biophysical stimuli provided by the microenvironment of the cell. Thus, altering the mechanical characteristics of a cell culture scaffold can directly influence a cell’s microenvironment and lead to stem cell differentiation. Mesenchymal stem cells were cultured on densely packed, vertically aligned magnetic iron nanowires (NWs), and the effects of the NWs on cytoskeleton rearrangement and differentiation were studied. An electrochemical deposition method was employed to fabricate NWs in nanoporous alumina templates, followed by a partial release to reveal the NW array. This created a cell growth substrate with free-standing NWs. The Fe NWs were 2-3 µm long, with an average diameter of 33 nm. Mechanical stimuli generated by the physical movement of these iron NWs in response to a magnetic field can stimulate osteogenic differentiation. Induction of osteogenesis was estimated using an osteogenic marker, osteopontin, and a reduction of the stem cell markers CD73 and CD105. MSCs were grown on the NWs, and fluorescence microscopy was employed to monitor the expression of the markers. A magnetic field with an intensity of 250 mT and a frequency of 0.1 Hz was applied for 12 hours/day over periods of one week and two weeks. The magnetically activated substrate enhanced the osteogenic differentiation of the MSCs compared to the same culture conditions without a magnetic field. Quantification of the osteopontin signal revealed approximately a seven-fold increase in the expression of this protein after two weeks of culture.
Immunostaining against CD73 and CD105 revealed expression of these markers at the earlier time point (two days) and a considerable reduction after one week of exposure to the magnetic field. Overall, these results demonstrate the application of a magnetic NW substrate in stimulating the osteogenic differentiation of MSCs. This method significantly decreases the time needed to induce osteogenic differentiation compared to commercial biochemical methods, such as osteogenic differentiation kits, which usually require more than two weeks. Contact-free stimulation of MSC differentiation using a magnetic field has potential uses in tissue engineering, regenerative medicine, and bone formation therapies.
Keywords: cell substrate, magnetic nanowire, mesenchymal stem cell, stem cell differentiation
Procedia PDF Downloads 194
111 Deep Learning for Image Correction in Sparse-View Computed Tomography
Authors: Shubham Gogri, Lucia Florescu
Abstract:
Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data results in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of the sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier Loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based Generator and a Discriminator based on Convolutional Neural Networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of the perceptual loss, calculated based on feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) between the corrected images and the ground truth.
Furthermore, learning curves and qualitative comparisons provided further evidence of the enhanced image quality and the network's increased stability, while preserving pixel-value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net
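The Charbonnier loss and the PSNR metric used above are compact to state in code. This is a generic sketch of both quantities, not the study's training pipeline; the epsilon value is a conventional choice, not taken from the paper:

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth, outlier-robust variant of L1,
    sqrt((x - y)^2 + eps^2) averaged over all pixels."""
    return float(np.mean(np.sqrt((pred - target) ** 2 + eps ** 2)))

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a corrected image and
    the ground truth, for images scaled to [0, max_val]."""
    mse = float(np.mean((pred - target) ** 2))
    return float('inf') if mse == 0 else 10 * np.log10(max_val ** 2 / mse)
```

For images in [0, 1], a uniform error of 0.1 per pixel gives an MSE of 0.01 and hence a PSNR of 20 dB, which makes the scale of the reported values (up to 76 dB) easy to interpret.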
Procedia PDF Downloads 159
110 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia
Authors: Jun Won Kim
Abstract:
Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalography (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing the resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes.
The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which carries information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility
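The coupling measure at the heart of this abstract can be sketched with a standard Hilbert-transform estimator. The following is a generic mean-vector-length formulation, not the authors' exact pipeline; the band edges follow the abstract (theta 4–8 Hz, gamma 30–80 Hz), while the sampling rate and synthetic signals are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def theta_gamma_coupling(eeg, fs):
    """Mean-vector-length estimate of theta-phase gamma-amplitude coupling."""
    theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8, fs)))
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 80, fs)))
    # Modulation index: magnitude of the amplitude-weighted mean phase vector
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))

# Synthetic check: gamma bursts locked to theta peaks yield a higher TGC
fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + 0.3 * (1 + theta) * np.sin(2 * np.pi * 50 * t)
uncoupled = theta + 0.3 * np.sin(2 * np.pi * 50 * t)
print(theta_gamma_coupling(coupled, fs) > theta_gamma_coupling(uncoupled, fs))
```

A classifier such as the ROC analysis described above would then be run on per-electrode TGC values rather than on band powers.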
Procedia PDF Downloads 140

109 Preparation of β-Polyvinylidene Fluoride Film for Self-Charging Lithium-Ion Battery
Authors: Nursultan Turdakyn, Alisher Medeubayev, Didar Meiramov, Zhibek Bekezhankyzy, Desmond Adair, Gulnur Kalimuldina
Abstract:
In recent years, the development of sustainable energy sources has attracted extensive research interest due to the ever-growing demand for energy. As an alternative energy source to power small electronic devices, ambient energy harvesting from vibration or human body motion is considered a potential candidate. Despite the enormous progress in battery research over about three decades in terms of safety, life cycle and energy density, batteries have not reached the level needed to conveniently power wearable electronic devices such as smartwatches, bands, hearing aids, etc. For this reason, the development of self-charging power units with excellent flexibility and integrated energy harvesting and storage is crucial. Self-powering is a key idea that makes it possible for a system to operate sustainably, and it is now gaining acceptance in many fields, including sensor networks, the Internet of Things (IoT) and implantable in-vivo medical devices. To solve this energy harvesting issue, self-powering nanogenerators (NGs) were proposed and have proved highly effective. Usually, sustainable power is delivered through energy harvesting and storage devices by connecting them to a power management circuit; as for energy storage, the Li-ion battery (LIB) is one of the most effective technologies. Driven by an externally applied voltage, Li ions move between the electrodes, and the electrochemical reactions at the anode and cathode store electrical energy as chemical energy. In this paper, we present a simultaneous process of converting mechanical energy into chemical energy, in which an NG and a LIB are combined as an all-in-one power system. The electrospinning method was used as the initial step in the development of such a system with a β-PVDF separator. The obtained film showed promising voltage output at different stress frequencies.
X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) analysis showed a high percentage of the β phase in the PVDF polymer material. Moreover, it was found that the addition of 1 wt.% of BTO (barium titanate) results in higher-quality fibers. When a pure 20 wt.% PVDF solution was compared with the BTO-added one, the latter was more viscous; hence, the sample was electrospun uniformly without any beads. Lastly, to test the sensor application of such a film, a dedicated testing device was developed. With this device, the force of a finger tap can be applied at different frequencies so that electrical signal generation is validated.
Keywords: electrospinning, nanogenerators, piezoelectric PVDF, self-charging li-ion batteries
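The β-phase percentage inferred from FT-IR is commonly quantified with the Gregorio-Cestari relation, using the absorbances of the α (764 cm⁻¹) and β (840 cm⁻¹) characteristic bands. A minimal sketch, with hypothetical absorbance readings rather than data from this study:

```python
def beta_phase_fraction(abs_alpha_764: float, abs_beta_840: float) -> float:
    """Relative beta-phase content of PVDF from FT-IR absorbances.

    Gregorio-Cestari relation: F(beta) = A_840 / (1.26 * A_764 + A_840),
    where 1.26 is the ratio of the absorption coefficients K_840 / K_764.
    """
    return abs_beta_840 / (1.26 * abs_alpha_764 + abs_beta_840)

# Illustrative (hypothetical) absorbance readings
print(round(beta_phase_fraction(0.10, 0.80), 3))  # -> 0.864
```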
Procedia PDF Downloads 162

108 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue
Abstract:
Recent technological advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which digital precoders alone are sufficient to accomplish precoding, MIMO technology is different at mmWave because of digital precoding limitations. Moreover, fully digital precoding requires a large number of radio frequency (RF) chains, with their supporting signal mixers and analog-to-digital converters. As RF chain cost and power consumption are high, we need to resort to another alternative. Although the hybrid precoding architecture, based on a combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategies from RF chains to the antenna elements, there are two main categories of hybrid precoding architecture. As a hybrid precoding sub-array architecture, the partially-connected structure reduces hardware complexity by using fewer phase shifters, whereas it sacrifices some beamforming gain. In this paper, we treat hybrid precoder design in mmWave MIMO systems as a matrix factorization problem. Thus, we adopt the alternating minimization principle to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method.
Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces the computational complexity. Furthermore, valuable design insights are provided when the proposed algorithm is used to compare, in simulation, the partially-connected and fully-connected hybrid precoding structures.
Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure
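The matrix-factorization view of hybrid precoding can be illustrated with a minimal alternating-minimization loop for the partially-connected structure. This is a generic sketch, not the authors' iterative-hard-thresholding algorithm: the digital precoder is fitted by least squares, the analog phases by a common phase-projection heuristic, and all dimensions are chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nrf, Ns = 16, 4, 2  # transmit antennas, RF chains, data streams
# Target (fully digital) precoder to be factorized as Frf @ Fbb
Fopt = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))

# Partially-connected mask: each RF chain drives one disjoint sub-array
mask = np.kron(np.eye(Nrf), np.ones((Nt // Nrf, 1)))
Frf = mask * np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf)))

for _ in range(50):
    # Digital precoder: least-squares fit for the current analog precoder
    Fbb = np.linalg.lstsq(Frf, Fopt, rcond=None)[0]
    # Analog precoder: keep unit-modulus entries on the mask, update phases
    # (a standard projection heuristic, not an exact minimization step)
    Frf = mask * np.exp(1j * np.angle(Fopt @ Fbb.conj().T))
Fbb = np.linalg.lstsq(Frf, Fopt, rcond=None)[0]  # final digital fit

err = np.linalg.norm(Fopt - Frf @ Fbb) / np.linalg.norm(Fopt)
print(f"relative factorization error: {err:.3f}")
```

The partially-connected mask is what keeps phase-shifter count at Nt rather than Nt × Nrf, which is the hardware saving the abstract refers to.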
Procedia PDF Downloads 321

107 An Analysis of LoRa Networks for Rainforest Monitoring
Authors: Rafael Castilho Carvalho, Edjair de Souza Mota
Abstract:
As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of the world's flora. Its recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropogenic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those based on Low Power Wide Area Networks (LPWAN). Promising, reliable, secure and with low energy consumption, LPWAN can connect thousands of IoT devices, and LoRa in particular is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, in particular the Amazon Rainforest, is challenging for these technologies, requiring work to identify and validate their use in a real environment. To investigate the feasibility of deploying LPWAN for remote water quality monitoring of rivers in the Amazon region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up; both were implemented with Arduino boards and the SX1276 LoRa chip. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern since, in the real application, the device must run without maintenance for long periods of time.
With these constraints in mind, parameters such as spreading factor (SF) and coding rate (CR), different antenna heights, and distances were tuned to improve connectivity quality, measured by RSSI and packet loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. At distances exceeding 200 m, communication soon proved difficult to establish due to the dense foliage and high humidity. The optimal combinations of SF-CR values were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, with a signal strength of approximately -120 dBm; these are the best settings for this study so far. Rain and climate conditions imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa configuration must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest
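The battery impact of the SF-CR choices above can be gauged from the time-on-air of each transmission, which grows with SF. The sketch below applies the standard SX1276 datasheet formula; the 20-byte payload and 125 kHz bandwidth are assumptions for illustration, not values reported in the study.

```python
import math

def lora_airtime_ms(payload_bytes, sf, cr_denom, bw_hz=125_000,
                    preamble_syms=8, crc=True, low_dr_opt=False,
                    explicit_header=True):
    """Packet time-on-air per the Semtech SX1276 datasheet formula."""
    t_sym = (2 ** sf) / bw_hz        # symbol duration in seconds
    cr = cr_denom - 4                # CR 4/5 -> 1 ... 4/8 -> 4
    de = 1 if low_dr_opt else 0
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * crc - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + n_payload) * t_sym * 1000  # milliseconds

# Airtime for the two best settings reported, assuming a 20-byte payload
print(f"SF8 CR4/5: {lora_airtime_ms(20, 8, 5):.1f} ms")  # 102.9 ms
print(f"SF9 CR4/5: {lora_airtime_ms(20, 9, 5):.1f} ms")  # 185.3 ms
```

Each SF step roughly doubles the symbol duration, so the 9-5 setting costs nearly twice the transmit energy of 8-5 per packet, which matters for an unattended forest node.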
Procedia PDF Downloads 84

106 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots
Authors: Mrinalini Ranjan, Sudheesh Chethil
Abstract:
Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behaviour. We use two methods, 1) detrended fluctuation analysis (DFA) and 2) recurrence plots (RP), to capture this complex behaviour of EEG signals. DFA considers fluctuations from local linear trends. The scale invariance of these signals is well captured in a multifractal characterisation using DFA. Analysis of long-range correlations is vital for understanding the dynamics of EEG signals. Correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in epileptic EEG signals, which quantify short- and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure as compared to pre-seizure and a subsequent increase during the post-seizure period, while the long-term scaling exponent shows an increase during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, thus pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour. We find an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution across channels can help in better identification of the areas of the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient.
To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices at which a dynamical state recurs. We perform recurrence quantification analysis (RQA) and calculate RQA parameters such as diagonal line length, entropy, recurrence rate and determinism for ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that RQA parameters are higher during the seizure period than the post-seizure values for most patients, whereas for some patients the post-seizure values exceeded those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in a better understanding of the characterisation of epileptic EEG signals from a nonlinear analysis.
Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots
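The scaling exponent discussed above is the slope of the fluctuation function against window size on log-log axes. A minimal single-exponent DFA sketch (generic, not the authors' multifractal pipeline; the scales and white-noise test signal are illustrative assumptions):

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    flucts = []
    for n in scales:
        n_seg = len(y) // n
        segments = y[: n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        # Detrend each window with a local linear fit, collect RMS residuals
        sq = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
              for seg in segments]
        flucts.append(np.sqrt(np.mean(sq)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# Uncorrelated noise should give an exponent near 0.5; persistent LRTC,
# as reported during seizures, pushes the exponent toward 1.
rng = np.random.default_rng(1)
alpha = dfa_exponent(rng.standard_normal(5000), [16, 32, 64, 128, 256])
print(round(alpha, 2))
```

Fitting separate slopes over the small-scale and large-scale ranges would give the short-term and long-term exponents the abstract distinguishes.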
Procedia PDF Downloads 173

105 Optimization of MAG Welding Process Parameters Using Taguchi Design Method on Dead Mild Steel
Authors: Tadele Tesfaw, Ajit Pal Singh, Abebaw Mekonnen Gezahegn
Abstract:
Welding is a basic manufacturing process for making components or assemblies. Recent welding economics research has focused on developing a reliable machinery database to ensure optimum production. Research on the welding of materials like steel is still critical and ongoing. Welding input parameters play a very significant role in determining the quality of a weld joint. The metal active gas (MAG) welding parameters are among the most important factors affecting the quality, productivity and cost of welding in many industrial operations. The aim of this study is to investigate the optimal process parameters of metal active gas welding for a 60×60×5 mm dead mild steel plate work-piece, using the Taguchi method to formulate the statistical experimental design on a semi-automatic welding machine. An experimental study was conducted at Bishoftu Automotive Industry, Bishoftu, Ethiopia. This study presents the influence of four welding parameters (control factors), welding voltage (V), welding current (A), wire speed (m/min) and CO2 gas flow rate (l/min), each at three levels, on the variability of welding hardness. The objective function was chosen in relation to the MAG welding parameters, i.e., welding hardness of the final products. Nine experimental runs based on an L9 orthogonal array of the Taguchi method were performed. The orthogonal array, signal-to-noise (S/N) ratio and analysis of variance (ANOVA) were employed to investigate the welding characteristics of the dead mild steel plate and to obtain optimum levels for every input parameter at a 95% confidence level. The optimal parameter setting was found to be a welding voltage of 22 V, a welding current of 125 A, a wire speed of 2.15 m/min and a gas flow rate of 19 l/min, within the constraints of the production process.
Finally, six confirmation welds were carried out; the agreement of the predicted values with the experimental values confirms the effectiveness of the analysis of welding hardness (quality) in the final products. It is found that welding current has a major influence on the quality of welded joints. The experimental result for the optimum setting gave a better weld hardness than the initial setting. This study is valuable for different materials and thickness variations of welding plate in Ethiopian industries.
Keywords: weld quality, metal active gas welding, dead mild steel plate, orthogonal array, analysis of variance, Taguchi method
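The S/N ratio used to rank the factor levels of each L9 run can be computed directly. Since higher hardness is desirable, the larger-the-better formulation is the natural choice; the hardness readings below are hypothetical, not data from this study.

```python
import numpy as np

def sn_larger_is_better(values):
    """Taguchi larger-the-better S/N ratio: -10 * log10(mean(1 / y^2))."""
    y = np.asarray(values, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical hardness readings (HV) from three repeats of one L9 run
print(round(sn_larger_is_better([182.0, 185.0, 179.0]), 2))  # -> 45.2
```

The optimum level of each factor is the one whose runs give the highest mean S/N ratio, and ANOVA then apportions the contribution of each factor, as the abstract describes for welding current.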
Procedia PDF Downloads 479

104 3D-Printing of Waveguide Terminations: Effect of Material Shape and Structuring on Their Characteristics
Authors: Lana Damaj, Vincent Laur, Azar Maalouf, Alexis Chevalier
Abstract:
A matched termination is an important passive waveguide component. It is typically used at the end of a waveguide transmission line to prevent reflections and improve signal quality. Waveguide terminations (loads) are commonly used in microwave and RF applications. In traditional microwave architectures, a waveguide termination usually consists of a standard rectangular waveguide made of a lossy resistive material and terminated by a shorting metallic plate. Terminations of this type are used to dissipate the energy as heat. However, they may increase the size and weight of the overall system. A new alternative consists of developing terminations based on 3D-printed materials. Designing such terminations is very challenging since they should meet the requirements imposed by the system. These requirements include parameters such as the absorption and the power handling capability, in addition to the cost, size and weight that have to be minimized. 3D-printing is a shaping process that enables the production of complex geometries, allowing the best compromise between requirements to be found. In this paper, a comparison study has been made between different existing and new shapes of waveguide terminations. Indeed, 3D printing of absorbers makes it possible to study not only standard shapes (wedge, pyramid, tongue) but also more complex topologies such as exponential ones. These shapes have been designed and simulated using CST MWS®. The loads have been printed using carbon-filled polylactic acid (conductive PLA) from ProtoPasta. Since the terminations have been characterized in the X-band (8–12 GHz), the rectangular waveguide standard WR-90 was selected. The classical wedge shape was used as a reference. First, all loads were simulated with the same length and two parameters were compared: the absorption level (level of |S11|) and the dissipated power density.
This study shows that the concave exponential pyramidal shape has the best absorption level and the convex exponential pyramidal shape has the best dissipated power density level. These two loads have been printed in order to measure their properties. A good agreement between the simulated and measured reflection coefficients has been obtained. Furthermore, material structuring based on the honeycomb hexagonal structure has been investigated in order to vary the effective properties. In the final paper, the detailed methodology and the simulated and measured results will be presented in order to show how 3D-printing allows the mass, weight, absorption level and power behaviour to be controlled.
Keywords: additive manufacturing, electromagnetic composite materials, microwave measurements, passive components, power handling capacity (PHC), 3D-printing
Procedia PDF Downloads 18

103 Investigation of the IL23R Psoriasis/PsA Susceptibility Locus
Authors: Shraddha Rane, Richard Warren, Stephen Eyre
Abstract:
IL-23 is a pro-inflammatory molecule that signals T cells to release cytokines such as IL-17A and IL-22. Psoriasis is driven by a dysregulated immune response, within which IL-23 is now thought to play a key role. Genome-wide association studies (GWAS) have identified a number of genetic risk loci that support the involvement of IL-23 signalling in psoriasis, in particular a robust susceptibility locus at the gene encoding a subunit of the IL-23 receptor (IL23R) (Stuart et al., 2015; Tsoi et al., 2012). The lead psoriasis-associated SNP rs9988642 is located approximately 500 bp downstream of IL23R but is in tight linkage disequilibrium (LD) with a missense SNP rs11209026 (R381Q) within IL23R (r² = 0.85). The minor (G) allele of rs11209026 is present in approximately 7% of the population and is protective for psoriasis and several other autoimmune diseases, including IBD, ankylosing spondylitis, RA and asthma. The psoriasis-associated missense SNP R381Q causes an arginine-to-glutamine substitution in a region of the IL23R protein between the transmembrane domain and the putative JAK2 binding site in the cytoplasmic portion. This substitution is expected to affect the receptor's surface localisation or signalling ability, rather than IL23R expression. Recent studies have also identified a psoriatic arthritis (PsA)-specific signal at IL23R, thought to be independent of the psoriasis association (Bowes et al., 2015; Budu-Aggrey et al., 2016). The lead PsA-associated SNP rs12044149 is intronic to IL23R and is in LD with likely causal SNPs intersecting promoter and enhancer marks in memory CD8+ T cells (Budu-Aggrey et al., 2016). It is therefore likely that the PsA-specific SNPs affect IL23R function via a different mechanism compared with the psoriasis-specific SNPs. It could be hypothesised that the PsA risk allele located within the IL23R promoter causes an increase in IL23R expression relative to the protective allele.
An increased expression of IL23R might then lead to an exaggerated immune response. The independent genetic signals identified for psoriasis and PsA in this locus indicate that different mechanisms underlie these two conditions, although both likely affect the function of IL23R. It is very important to further characterise these mechanisms in order to better understand how the IL-23 receptor and its downstream signalling are affected in both diseases. This will help to determine how psoriasis and PsA patients might respond differently to therapies, particularly IL-23 biologics. To investigate this further, we have developed an in vitro model using CD4 T cells which express either wild-type IL23R and IL12Rβ1 or mutant IL23R (R381Q) and IL12Rβ1. A model expressing different isoforms of IL23R is also under development to investigate the effects on IL23R expression. We propose to further investigate the variants for Ps and PsA and to characterise key intracellular processes related to the variants.
Keywords: IL23R, psoriasis, psoriatic arthritis, SNP
Procedia PDF Downloads 167