Search results for: time dependent magnetic field intensity
26822 The Impact of Environmental Corporate Social Responsibility (ECSR) and the Perceived Moral Intensity on the Intention of Ethical Investment
Authors: Chiung-Yao Huang, Yu-Cheng Lin, Chiung-Hui Chen
Abstract:
This study examines perceived environmental corporate social responsibility (ECSR), with a focus on negative environmental issues, in relation to the intention of ethical investment after an environmental failure recovery. An empirical test was employed to test the hypotheses. We manipulated the information on negative ECSR activities of a hypothetical firm in an experimental design with a failure recovery treatment. The company’s negative ECSR recovery was depicted from a positive perspective (a strong follow-up social action), whereas in the negative ECSR treatment it was described from a negative perspective (no follow-up social action). In both treatments, information about other key characteristics of the focal company was kept constant. Investors’ intentions to invest in the company’s stock were evaluated by multi-item scales. Results indicate that positive ECSR recovery information about a firm enhances investors’ intentions to invest in the company’s stock. In addition, perceived moral intensity has a significant impact on the intention of ethical investment, and it also serves as a key moderating variable in the relationship between negative ECSR and the intention of ethical investment. Finally, theoretical and managerial implications of the findings are discussed. Practical implications: The results suggest that managers may need to be aware of perceived moral intensity as a key variable in restoring the intention of ethical investment. The results further suggest that perceived moral intensity has both a direct and a moderating influence on the relationship between ECSR and the intention of ethical investment. Originality/value: In an attempt to deepen the understanding of how investors’ perceptions of firm environmental CSR are connected with other investor-related outcomes through ECSR recovery, the present research proposes a comprehensive model which encompasses ECSR and other key relationship constructs after an ECSR failure and recovery. Keywords: ethical investment, environmental corporate social responsibility (ECSR), ECSR recovery, moral intensity
Procedia PDF Downloads 350
26821 Natural Factors of Interannual Variability of Winter Precipitation over the Altai Krai
Authors: Sukovatov K.Yu., Bezuglova N.N.
Abstract:
Winter precipitation variability over the Altai Krai was investigated by retrieving temporal patterns. Singular spectrum analysis (SSA) was used to describe the variance distribution and to reduce the precipitation data to a few components (modes). The associated time series were related to large-scale atmospheric and oceanic circulation indices by using lag cross-correlation and wavelet-coherence analysis. GPCC monthly precipitation data for a rectangular field bounded by 50-55°N, 77-88°E and monthly climatological circulation index data for the cold season were used to perform the SSA decomposition and retrieve statistics for the analyzed parameters over the period 1951-2017. Interannual variability of winter precipitation over the Altai Krai is mostly caused by three natural factors: intensity variations of momentum exchange between mid and polar latitudes over the North Atlantic (explained variance 11.4%); wind speed variations in the equatorial stratosphere (quasi-biennial oscillation, explained variance 15.3%); and surface temperature variations of the equatorial Pacific (ENSO, explained variance 2.8%). It is concluded that under current climate conditions (Arctic amplification and increasing frequency of meridional processes in mid-latitudes), the second and third factors give a more significant contribution to the explained variance of interannual variability of cold-season atmospheric precipitation over the Altai Krai than the first factor. Keywords: interannual variability, winter precipitation, Altai Krai, wavelet-coherence
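As an illustration of the decomposition step described above, the following is a minimal singular spectrum analysis (SSA) sketch in Python; the window length and the synthetic precipitation series are assumptions for demonstration, not the GPCC data or the authors' settings.

```python
# Minimal SSA sketch: embed the series, take an SVD, and reconstruct
# elementary components by diagonal averaging. Synthetic data only.
import numpy as np

def ssa_decompose(x, window):
    """Return elementary reconstructed components and explained variance."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged copies of the series
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])          # rank-1 elementary matrix
        # Diagonal averaging (Hankelization) back to a 1-D component
        comp = np.array([np.mean(Xi[::-1, :].diagonal(j - window + 1))
                         for j in range(n)])
        components.append(comp)
    explained = s**2 / np.sum(s**2)                   # variance per mode
    return np.array(components), explained

rng = np.random.default_rng(0)
years = np.arange(1951, 2018)
precip = 40 + 5 * np.sin(2 * np.pi * years / 2.3) + rng.normal(0, 3, len(years))
modes, var = ssa_decompose(precip, window=20)
print("explained variance of leading modes:", np.round(var[:3], 3))
```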
Procedia PDF Downloads 188
26820 Association Between Short-term NOx Exposure and Asthma Exacerbations in East London: A Time Series Regression Model
Authors: Hajar Hajmohammadi, Paul Pfeffer, Anna De Simoni, Jim Cole, Chris Griffiths, Sally Hull, Benjamin Heydecker
Abstract:
Background: There is strong interest in the relationship between short-term air pollution exposure and human health. Most studies in this field focus on serious health effects such as death or hospital admission, but air pollution exposure affects many people with less severe impacts, such as exacerbations of respiratory conditions. A lack of quantitative analysis and inconsistent findings suggest improved methodology is needed to understand these effects more fully. Method: We developed a time series regression model to quantify the relationship between daily NOₓ concentration and asthma exacerbations requiring oral steroids in primary care settings. Explanatory variables include daily NOₓ concentration measurements extracted from 8 available background and roadside monitoring stations in east London and daily ambient temperature recorded at London City Airport, located in east London. Lags of NOₓ concentrations up to 21 days (3 weeks) were used in the model. The dependent variable was the daily number of oral steroid courses prescribed for GP-registered patients with asthma in east London. A mixed distribution model was then fitted to the significant lags of the regression model. Result: Results of the time series modelling showed a significant relationship between NOₓ concentrations on each day and the number of oral steroid courses prescribed in the following three weeks. In addition, the model using only roadside stations performs better than the model with a mixture of roadside and background stations. Keywords: air pollution, time series modeling, public health, road transport
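To illustrate the modelling step, here is a hedged sketch of a lagged time-series regression in Python (statsmodels): daily steroid counts are regressed on NOₓ lags of 0-21 days plus temperature. The Poisson link, the column names, and the synthetic data are assumptions for demonstration only, not the study's actual model or records.

```python
# Lagged time-series regression sketch on synthetic daily data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
dates = pd.date_range("2015-01-01", periods=730, freq="D")
df = pd.DataFrame({
    "nox": 40 + 0.75 * rng.standard_normal(len(dates)).cumsum(),   # daily NOx
    "temp": 10 + 8 * np.sin(2 * np.pi * dates.dayofyear / 365),    # daily temp
}, index=dates)
df["steroid_courses"] = rng.poisson(3 + 0.02 * df["nox"].clip(lower=0))

# Build lagged NOx regressors for lags 0..21 days
for lag in range(0, 22):
    df[f"nox_lag{lag}"] = df["nox"].shift(lag)
df = df.dropna()

X = sm.add_constant(df[[f"nox_lag{l}" for l in range(22)] + ["temp"]])
model = sm.GLM(df["steroid_courses"], X, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```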
Procedia PDF Downloads 142
26819 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping the magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends heavily on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease. Keywords: magnetic resonance imaging (MRI), iron deposition, machine learning, quantitative susceptibility mapping
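As a rough illustration of the learning step, the sketch below defines a small 3D encoder-decoder ("U-Net style") network in PyTorch that maps two-channel MRI volumes to a voxel-wise iron map and runs one synthetic training step; the channel counts, depth, and patch size are illustrative assumptions, not the architecture used in the study.

```python
# Tiny 3D encoder-decoder sketch (PyTorch); synthetic patch, one training step.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, 1, kernel_size=1)   # iron map per voxel

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

# One synthetic step on a 32^3 patch (2 input channels: e.g. magnitude, phase)
net = TinyUNet3D()
x = torch.randn(1, 2, 32, 32, 32)
target = torch.randn(1, 1, 32, 32, 32)
loss = nn.functional.mse_loss(net(x), target)
loss.backward()
print("loss:", float(loss))
```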
Procedia PDF Downloads 136
26818 Magnetoelastically Induced Perpendicular Magnetic Anisotropy and Perpendicular Exchange Bias of CoO/CoPt Multilayer Films
Authors: Guo Lei, Wang Yue, Nakamura Yoshio, Shi Ji
Abstract:
Recently, perpendicular exchange bias (PEB) has become an active topic attracting continuous effort. Since its discovery, extrinsic control of PEB has been pursued, due to its scientific significance for spintronic devices and potential application in high-density magnetic random access memory with perpendicular magnetic tunneling junctions (p-MTJ). To our knowledge, research aiming to control PEB has so far focused mainly on enhancing the interfacial exchange coupling by adjusting the FM/AFM interface roughness, or on optimizing the crystalline structures of the FM or AFM layer by employing different seed layers. In the present work, the effects of magnetoelastically induced PMA on PEB have been explored in [CoO(5 nm)/CoPt(5 nm)]₅ multilayer films. We find that the PMA strength of the FM layer also plays an important role in PEB at the FM/AFM interface and that it is effective to control PEB of the [CoO(5 nm)/CoPt(5 nm)]₅ multilayer films by changing the magnetoelastically induced PMA of the CoPt layer. The [CoO(5 nm)/CoPt(5 nm)]₅ multilayer films were deposited by magnetron sputtering on fused quartz substrates at room temperature, then annealed at 100 °C, 250 °C, 300 °C and 375 °C for 3 h, respectively. XRD results reveal that all the samples are well crystallized with preferred fcc CoPt (111) orientation. The continuous multilayer structure with sharp compositional transitions at the CoO/CoPt interfaces is identified clearly by transmission electron microscopy (TEM), x-ray reflectivity (XRR) and atomic force microscopy (AFM). The CoPt-layer in-plane tensile stress is calculated by the sin²φ method, and we find it increases gradually upon annealing, from 0.99 GPa (as-deposited) up to 3.02 GPa (300 °C-annealed). As to the magnetic properties, a significant enhancement of PMA is achieved in the multilayer films after annealing due to the increase of the CoPt-layer in-plane tensile stress. With the enhancement of magnetoelastically induced PMA, a great improvement of PEB is also achieved, increasing from 130 Oe (as-deposited) up to 1060 Oe (300 °C-annealed), showing the same tendency as the PMA and a strong correlation with the CoPt-layer in-plane tensile stress. We consider that it is the increase of the CoPt-layer in-plane tensile stress that leads to the enhancement of PMA, and the enhancement of magnetoelastically induced PMA in turn results in the improvement of PEB in the [CoO(5 nm)/CoPt(5 nm)]₅ multilayer films. Keywords: perpendicular exchange bias, magnetoelastically induced perpendicular magnetic anisotropy, [CoO(5 nm)/CoPt(5 nm)]₅ multilayer film with in-plane stress, perpendicular magnetic tunneling junction
Procedia PDF Downloads 462
26817 The Application of Conceptual Metaphor Theory to the Treatment of Depression
Abstract:
Conceptual Metaphor Theory (CMT) proposes that metaphor is fundamental to human thought. CMT draws on embodied cognition, in that emotions are conceptualized as effects on the body because of a coupling of one’s bodily experiences and one’s somatosensory system. Time perception is a function of embodied cognition and conceptual metaphor in that one’s experience of time is inextricably dependent on one’s perception of the surrounding world. A hallmark of depressive disorders is distortion of one’s perception of time, alongside neurological dysfunction and psychomotor retardation, and yet, to the author’s best knowledge, previous studies have not linked CMT, embodied cognition, and depressive disorders. Therefore, the focus of this paper is the investigation of how the applications of CMT and embodied cognition (especially regarding time perception) show promise in improving current techniques to treat depressive disorders. This paper aimed to extend, through a thorough review of the literature, the theoretical basis required for further research into the application of CMT and embodied cognition in treating time-distortion-related symptoms of depressive disorders. Future research could include the development of brain-training technologies that capitalize on the principles of CMT, with the aim of promoting cognitive remediation and cognitive activation to mitigate symptoms of depressive disorders. Keywords: depression, conceptual metaphor theory, embodied cognition, time
Procedia PDF Downloads 162
26816 Effect of High-Intensity Core Muscle Exercises Training on Sport Performance in Dancers
Authors: Che Hsiu Chen, Su Yun Chen, Hon Wen Cheng
Abstract:
Traditional core stability, core endurance, and balance exercises are performed on a stable surface with isometric muscle actions, low loads, and multiple repetitions, which may not improve swimming and running economy. However, the effects of high-intensity core muscle exercise training on jump height, sprint, and aerobic fitness remain unclear. The purpose of this study was to examine whether high-intensity core muscle exercise training could improve sport performance in dancers. Thirty healthy university dance students (28 women and 2 men; age 20.0 years, height 159.4 cm, body mass 52.7 kg) voluntarily participated in this study, and each participant underwent five suspension exercises (e.g., hip abduction in plank alternative, hamstring curl, 45-degree row, lunge and oblique crunch). Each type of exercise was performed for 30 seconds, with 30 seconds of rest between exercises, two times per week for eight weeks, and each exercise bout was lengthened by 10 seconds every week. We measured agility, explosive force, anaerobic and cardiovascular fitness before and after the eight weeks of training. The results showed that the 8-week high-intensity core muscle training significantly increased T-test agility (7.78%), explosive force of acceleration (3.35%), vertical jump height (8.10%), jump power (6.95%), lower-extremity anaerobic ability (7.10%) and oxygen uptake efficiency slope (4.15%). Therefore, it can be concluded that eight weeks of high-intensity core muscle exercise training can improve not only agility, sprint ability, and vertical jump ability but also anaerobic and cardiovascular fitness measures. Keywords: balance, jump height, sprint, maximal oxygen uptake
Procedia PDF Downloads 407
26815 Clustering-Based Detection of Alzheimer's Disease Using Brain MR Images
Authors: Sofia Matoug, Amr Abdel-Dayem
Abstract:
This paper presents a comprehensive survey of recent research studies that segment and classify brain MR (magnetic resonance) images in order to detect significant changes to the brain ventricles. The paper also presents a general framework for detecting regions that atrophy, which can help neurologists in detecting and staging Alzheimer's disease. Furthermore, a prototype was implemented to segment brain MR images in order to extract the region of interest (ROI), and then a classifier was employed to differentiate between normal and abnormal brain tissues. Experimental results show that the proposed scheme can provide a reliable second opinion that neurologists can benefit from. Keywords: Alzheimer's disease, brain images, classification techniques, magnetic resonance imaging (MRI)
Procedia PDF Downloads 302
26814 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image
Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias
Abstract:
Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical areas to obtain detailed anatomical and functional information. However, MRI scans can be expensive, time-consuming, and often require the use of anesthetics to keep animals still during the imaging process. Anesthetics are commonly administered to animals undergoing MRI scans to ensure they remain still during the imaging process. However, prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity. Minimizing the duration and frequency of anesthesia is, therefore, crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the duration of animal exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories for the reconstruction of abdominal MRI of mice sub-sampled at levels below that defined by the Nyquist theorem. The methodology of this work consists of using a fully sampled reference MRI of a female C57BL/6 mouse acquired experimentally in a 4.7 Tesla small-animal MRI scanner using spin-echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality assessment metrics: RMSE (root mean square error), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure). The utilization of optimized sampling trajectories and the CS technique has demonstrated the potential for a significant reduction of up to 70% of image data acquisition. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it. Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals
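The following hedged sketch illustrates the general CS workflow described above: a variable-density Cartesian mask retains roughly 30% of k-space, and the image is recovered by iterative soft-thresholding (ISTA) under an assumed image-domain sparsity; the phantom, mask density, and regularization weight are stand-ins, not the authors' 4.7 T mouse data or reconstruction code.

```python
# Compressed-sensing toy example: undersample k-space, reconstruct with ISTA.
import numpy as np

rng = np.random.default_rng(2)
n = 128
phantom = np.zeros((n, n))
phantom[40:90, 50:80] = 1.0
phantom[60:70, 60:70] = 2.0              # simple piecewise-constant "anatomy"

# Variable-density Cartesian undersampling: always keep the central lines
keep = rng.random(n) < 0.3
keep[n // 2 - 8: n // 2 + 8] = True
mask = np.zeros((n, n), bool)
mask[keep, :] = True

kspace = np.fft.fft2(phantom) * np.fft.fftshift(mask)

def ista(y, mask, lam=0.05, iters=200):
    """Iterative soft-thresholding for ||M F x - y||^2 + lam * ||x||_1."""
    x = np.zeros_like(y)
    for _ in range(iters):
        grad = np.fft.ifft2((np.fft.fft2(x) - y) * np.fft.fftshift(mask))
        x = x - grad
        x = np.sign(x.real) * np.maximum(np.abs(x) - lam, 0)   # soft threshold
    return x.real

recon = ista(kspace, mask)
zero_fill = np.fft.ifft2(kspace).real

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print("RMSE zero-filled:", round(rmse(zero_fill, phantom), 4),
      "| RMSE CS:", round(rmse(recon, phantom), 4))
```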
Procedia PDF Downloads 73
26813 Optimization of Radiation Therapy with a Nanotechnology Based Enzymatic Therapy
Authors: R. D. Esposito, V. M. Barberá, P. García Morales, P. Dorado Rodríguez, J. Sanz, M. Fuentes, D. Planes Meseguer, M. Saceda, L. Fernández Fornos, M. P. Ventero
Abstract:
Results obtained by our group on glioblastoma multiforme (GBM) primary cultures show a dramatic potentiation of radiation effects when 2 units/ml of D-amino acid oxidase (DAO) enzyme are added, free or immobilized in magnetic nanoparticles, to irradiated samples just after the irradiation. Cell cultures were exposed to radiation doses of 7 Gy and 15 Gy of 6 MV photons from a clinical linear accelerator. At both doses, we observed a clear enhancement of radiation-induced damage due to the addition of DAO. Keywords: D-amino acid oxidase (DAO) enzyme, magnetic particles, nanotechnology, radiation therapy enhancement
Procedia PDF Downloads 523
26812 Efficient Pre-Concentration of As (III) Using Guanidine-Modified Magnetic Mesoporous Silica in the Food Sample
Authors: Majede Modheji, Hamid Emadi, Hossein Vojoudi
Abstract:
An efficient magnetic mesoporous structure was designed and prepared for the facile pre-concentration of As(III) ions. To prepare the sorbent, a core-shell magnetic silica nanoparticle was covered by an MCM-41-like structure, and then the surface was modified with guanidine via an amine linker. The prepared adsorbent was investigated as an effective and sensitive material for the adsorption of arsenic ions from aqueous solution using a normal batch method. The key variables of the adsorption were studied to increase efficiency. The kinetic and equilibrium data matched a pseudo-second-order kinetic model and the Langmuir isotherm model, respectively. The sorbent reusability was investigated, and it was confirmed that the designed product could be applied for up to six successive cycles without any significant loss of efficiency. The synthesized product was tested to determine and pre-concentrate trace amounts of arsenic ions in rice and natural waters as real samples. A desorption process applying 5 mL of hydrochloric acid (0.5 mol L⁻¹) as an eluent exhibited about 98% recovery of the As(III) ions adsorbed on the GA-MSMP sorbent. Keywords: arsenic, adsorption, mesoporous, surface modification, MCM-41
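For illustration, the sketch below fits the two models named in the abstract, a pseudo-second-order kinetic model and a Langmuir isotherm, to synthetic As(III) adsorption data with SciPy; the qe, k2, qmax and KL values are invented, not measured GA-MSMP parameters.

```python
# Fit pseudo-second-order kinetics and a Langmuir isotherm to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q(t) = k2*qe^2*t / (1 + k2*qe*t)"""
    return k2 * qe**2 * t / (1 + k2 * qe * t)

def langmuir(ce, qmax, kl):
    """qe = qmax*KL*Ce / (1 + KL*Ce)"""
    return qmax * kl * ce / (1 + kl * ce)

rng = np.random.default_rng(3)
t = np.linspace(1, 120, 15)                        # contact time, min
q_t = pseudo_second_order(t, 45.0, 0.004) + rng.normal(0, 0.5, t.size)
ce = np.linspace(0.5, 50, 12)                      # equilibrium conc., mg/L
q_e = langmuir(ce, 52.0, 0.15) + rng.normal(0, 0.8, ce.size)

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q_t, p0=[40, 0.01])
(qmax_fit, kl_fit), _ = curve_fit(langmuir, ce, q_e, p0=[50, 0.1])
print(f"pseudo-2nd-order: qe={qe_fit:.1f} mg/g, k2={k2_fit:.4f}")
print(f"Langmuir: qmax={qmax_fit:.1f} mg/g, KL={kl_fit:.3f} L/mg")
```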
Procedia PDF Downloads 149
26811 Characteristics of Photoluminescence in Resonant Quasiperiodic Double-period Quantum Wells
Authors: C. H. Chang, R. Z. Qiu, C. W. Tsao, Y. H. Cheng, C. H. Chen, W. J. Hsueh
Abstract:
Characteristics of photoluminescence (PL) in resonant quasiperiodic double-period quantum wells (DPQW) are demonstrated. The maximum PL intensity in the DPQW is remarkably greater than that in a traditional periodic QW (PQW) under the Bragg or anti-Bragg conditions. The optimal PL spectrum in the DPQW has an asymmetrical form instead of the symmetrical form found in the PQW. Moreover, there are two large maxima of PL intensity in the DPQW, which also differs from the PQW. Keywords: photoluminescence, quantum wells, quasiperiodic structure
Procedia PDF Downloads 719
26810 Trends of Seasonal and Annual Rainfall in the South-Central Climatic Zone of Bangladesh Using Mann-Kendall Trend Test
Authors: M. T. Islam, S. H. Shakif, R. Hasan, S. H. Kobi
Abstract:
Investigation of rainfall trends is crucial considering climate change, food security, and the economy of a particular region. This research aims to study seasonal and annual precipitation trends and their abrupt changes over time in the south-central climatic zone of Bangladesh using monthly time series data of 50 years (1970-2019). A trend-free pre-whitening method has been employed to make the necessary adjustments for autocorrelation in the rainfall data. Trends in rainfall and their intensity have been assessed using the non-parametric Mann-Kendall test and the Theil-Sen estimator. Significant changes and fluctuation points in the data series have been detected using the sequential Mann-Kendall test at the 95% confidence limit. The study findings show that most of the rainfall stations in the study area have a decreasing precipitation pattern throughout all seasons. The maximum decline in rainfall intensity has been found for the Tangail station (-8.24 mm/year) during the monsoon. The Madaripur and Chandpur stations have shown slight positive trends in post-monsoon rainfall. In terms of annual precipitation, a negative rainfall pattern has been identified at each station, with a maximum decrease of 14.48 mm/year at Chandpur. However, all the trends are statistically non-significant within the 95% confidence interval, and their monotonic association with time ranges from very weak to weak. From the sequential Mann-Kendall test, the change points for the annual and seasonal downward precipitation trends occur mostly after the 1990s for the Dhaka and Barishal stations. For Chandpur, the fluctuation points arrive after the mid-1970s in most cases. Keywords: trend analysis, Mann-Kendall test, Theil-Sen estimator, sequential Mann-Kendall test, rainfall trend
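A minimal sketch of the trend workflow, assuming the common formulation of trend-free pre-whitening (TFPW), the Mann-Kendall S statistic and the Theil-Sen slope, applied to a synthetic 50-year rainfall series rather than the station records:

```python
# TFPW + Mann-Kendall + Theil-Sen sketch on synthetic annual rainfall.
import numpy as np
from scipy import stats

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18          # variance assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))            # two-sided p-value
    return z, p

def tfpw(x):
    """Trend-free pre-whitening: remove Sen slope, then lag-1 autocorrelation."""
    t = np.arange(len(x))
    slope, intercept, _, _ = stats.theilslopes(x, t)
    detrended = x - slope * t
    r1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]
    whitened = detrended[1:] - r1 * detrended[:-1]
    return whitened + slope * t[1:], slope           # add the trend back

rng = np.random.default_rng(4)
years = np.arange(1970, 2020)
rain = 1800 - 5.0 * (years - 1970) + rng.normal(0, 120, years.size)

series, sen_slope = tfpw(rain)
z, p = mann_kendall(series)
print(f"Sen slope = {sen_slope:.2f} mm/year, MK Z = {z:.2f}, p = {p:.3f}")
```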
Procedia PDF Downloads 80
26809 Miracle Fruit Application in Sour Beverages: Effect of Different Concentrations on the Temporal Sensory Profile and Overall Liking
Authors: Jéssica F. Rodrigues, Amanda C. Andrade, Sabrina C. Bastos, Sandra B. Coelho, Ana Carla M. Pinheiro
Abstract:
Currently, there is a great demand for natural sweeteners due to the harmful health effects of high consumption of sugar and artificial sweeteners. Miracle fruit, which is known for its unique ability to turn sour taste into sweet taste, has been shown to be a good alternative sweetener. However, it has a high production cost, so it is important to optimize the lowest content that can be used. Thus, the aim of this study was to assess the effect of different miracle fruit contents on the temporal (Time-Intensity - TI and Temporal Dominance of Sensations - TDS) sensory profile and overall liking of lemonade, to determine the best content to be used as a natural sweetener in sour beverages. TI and TDS results showed that concentrations of 150 mg, 300 mg and 600 mg miracle fruit were effective in reducing the acidity and promoting the sweet perception in lemonade. Furthermore, the concentrations of 300 mg and 600 mg produced similar profiles. Through the acceptance test, the concentration of 300 mg miracle fruit was shown to be an efficient substitute for sucrose and sucralose in lemonade, since they had similar hedonic values between ‘I liked it slightly’ and ‘I liked it moderately’. Therefore, 300 mg miracle fruit is an adequate content to be used as a natural sweetener of lemonade. The results of this work will help the food industry in the efficient application of a new natural sweetener, the miracle fruit extract, in sour beverages, reducing costs and providing a product that meets consumer desires. Keywords: acceptance, natural sweetener, temporal dominance of sensations, time-intensity
Procedia PDF Downloads 249
26808 Iron Yoke Dipole with High Quality Field for Collector Ring FAIR
Authors: Tatyana Rybitskaya, Alexandr Starostenko, Kseniya Ryabchenko
Abstract:
The collector ring (CR) of the FAIR project is a large-acceptance storage ring, and field quality plays a major role in the magnet design. The CR will use normal-conducting dipole magnets. There will be 24 H-type sector magnets with a maximum field of 1.6 T. The field quality integrated over the length of the magnet, as a function of radius, is ∆B·l/B·l = ±1×10⁻⁴. Below 1.6 T the value of ∆B·l/B·l can be higher, increasing approximately linearly up to ±2.5×10⁻⁴ at a field level of 0.8 T. An iron-dominated magnet with the required field quality is produced with standard technology, as the quality is dominated by the yoke geometry. Keywords: conventional magnet, iron yoke dipole, harmonic terms, particle accelerators
Procedia PDF Downloads 146
26807 Influence of Confined Acoustic Phonons on the Shubnikov – de Haas Magnetoresistance Oscillations in a Doped Semiconductor Superlattice
Authors: Pham Ngoc Thang, Le Thai Hung, Nguyen Quang Bau
Abstract:
The influence of confined acoustic phonons on the Shubnikov-de Haas magnetoresistance oscillations in a doped semiconductor superlattice (DSSL), subjected to a magnetic field, a DC electric field, and laser radiation, has been theoretically studied based on the quantum kinetic equation method. The analytical expression for the magnetoresistance in a DSSL has been obtained as a function of the external fields, the DSSL parameters, and especially the quantum number m characterizing the effect of confined acoustic phonons. When m goes to zero, the results for bulk phonons in a DSSL are recovered. Numerical calculations are also performed for the GaAs:Si/GaAs:Be DSSL and compared with other studies. Results show that the amplitude of the Shubnikov-de Haas magnetoresistance oscillations decreases as the phonon confinement effect increases. Keywords: Shubnikov-de Haas magnetoresistance oscillations, quantum kinetic equation, confined acoustic phonons, laser radiation, doped semiconductor superlattices
Procedia PDF Downloads 317
26806 Mean-Field Type Modeling of Non-Local Congestion in Pedestrian Crowd Dynamics
Authors: Alexander Aurell
Abstract:
One of the latest trends in the modeling of human crowds is the mean-field game approach. In the mean-field game approach, the motion of a human crowd is described by a nonstandard stochastic optimal control problem. It is nonstandard since congestion is considered, introduced through a dependence of the performance functional on the distribution of the crowd. This study extends the class of mean-field pedestrian crowd models to allow for non-local congestion and an arbitrary, but finite, number of interacting crowds. The new congestion feature grants pedestrians a 'personal space' where crowding is undesirable. The model is treated as a mean-field type game, which is derived from a particle picture. This, in contrast to a mean-field game, better describes a situation where the crowd can be controlled by a central planner; the latter is more suitable for decentralized situations. Solutions to the mean-field type game are characterized via a Pontryagin-type Maximum Principle. Keywords: congestion, crowd dynamics, interacting populations, mean-field approximation, optimal control
Procedia PDF Downloads 445
26805 Corticomotor Excitability after Two Different Repetitive Transcranial Magnetic Stimulation Protocols in Ischemic Stroke Patients
Authors: Asrarul Fikri Abu Hassan, Muhammad Hafiz bin Hanafi, Jafri Malin Abdullah
Abstract:
This study compares the motor evoked potential (MEP) changes under different settings of repetitive transcranial magnetic stimulation (rTMS) in post-haemorrhagic stroke patients treated conservatively. The goal of the study is to determine changes in corticomotor excitability and functional outcome after an rTMS therapy regime. Twenty post-stroke patients with upper-limb hemiparesis due to haemorrhagic stroke were studied. One of three settings, (I) an inhibitory setting, (II) a facilitatory setting, or (III) a control setting with no excitatory or inhibitory stimulation, was applied randomly during the first session. The motor evoked potentials (MEPs) were recorded before and after application of the rTMS setting. Functional outcomes were evaluated using the Barthel index score. We found that pre-treatment MEP values of the lesional side were lower compared to post-treatment values in both stimulation settings. In contrast, the pre-treatment MEP values of the non-lesional side were higher compared to post-treatment values in both settings. Interestingly, patients receiving treatment, with either the facilitatory or the inhibitory setting, had faster motor recovery compared to the control group. Our data showed that both settings may improve the MEP of the upper extremity and the functional outcomes in haemorrhagic stroke patients. Keywords: Barthel index, corticomotor excitability, motor evoked potential, repetitive transcranial magnetic stimulation, stroke
Procedia PDF Downloads 159
26804 Student Researchers and Industry Partnerships Improve Health Management with Data Driven Decisions
Authors: Carole A. South-Winter
Abstract:
Research-based learning gives students the opportunity to experience problems that require critical thinking and idea development. The skills they gain in working through these problems hands-on develop into attributes that benefit their careers in the professional field. The partnerships developed between students and industries give advantages to both sides. The students gain knowledge and skills that will increase their likelihood of success in the future, and the industries gain research on new advancements that will give them a competitive advantage in their field of work. The future of these partnerships is dependent on the success of current programs, enabling the enhancement and improvement of the research efforts. As more students complete research, the reliability of the results for each industry will increase. The overall goal is to continue the support for research-based learning and the partnerships formed between students and industries. Keywords: global healthcare, industry partnerships, research-driven decisions, short-term study abroad
Procedia PDF Downloads 126
26803 Modeling Competition Between Subpopulations with Variable DNA Content in Resource-Limited Microenvironments
Authors: Parag Katira, Frederika Rentzeperis, Zuzanna Nowicka, Giada Fiandaca, Thomas Veith, Jack Farinhas, Noemi Andor
Abstract:
Resource limitations shape the outcome of competition between genetically heterogeneous pre-malignant cells. One example of such heterogeneity is the ploidy (DNA content) of pre-malignant cells. A whole-genome duplication (WGD) transforms a diploid cell into a tetraploid one and has been detected in 28-56% of human cancers. If a tetraploid subclone expands, it consistently does so early in tumor evolution, when cell density is still low and competition for nutrients is comparatively weak – an observation confirmed for several tumor types. WGD+ cells need more resources to synthesize increasing amounts of DNA, RNA, and proteins. To quantify resource limitations and how they relate to ploidy, we performed a pan-cancer analysis of WGD, PET/CT, and MRI scans. Segmentation of >20 different organs from >900 PET/CT scans was performed with MOOSE. We observed a strong correlation between organ-wide population-average estimates of oxygen and the average ploidy of cancers growing in the respective organ (Pearson R = 0.66; P = 0.001). In-vitro experiments using near-diploid and near-tetraploid lineages derived from a breast cancer cell line supported the hypothesis that DNA content influences glucose- and oxygen-dependent proliferation, death and migration rates. To model how subpopulations with variable DNA content compete in the resource-limited environment of the human brain, we developed a stochastic state-space model of the brain (S3MB). The model discretizes the brain into voxels, whereby the state of each voxel is defined by 8+ variables that are updated over time: stiffness, oxygen, phosphate, glucose, vasculature, dead cells, migrating cells and proliferating cells of various DNA content, and treatment conditions such as radiotherapy and chemotherapy. Well-established Fokker-Planck partial differential equations govern the distribution of resources and cells across voxels. We applied S3MB to sequencing and imaging data obtained from a primary GBM patient. We performed whole genome sequencing (WGS) of four surgical specimens collected during the 1st and 2nd surgeries of the GBM and used HATCHET to quantify its clonal composition and how it changed between the two surgeries. HATCHET identified two aneuploid subpopulations of ploidy 1.98 and 2.29, respectively. The low-ploidy clone was dominant at the time of the first surgery and became even more dominant upon recurrence. MRI images were available before and after each surgery and were registered to MNI space. The S3MB domain was initiated from 4 mm³ voxels of the MNI space. T1 post and T2 FLAIR scans acquired after the 1st surgery informed tumor cell densities per voxel. Magnetic resonance elastography scans and PET/CT scans informed stiffness and glucose access per voxel. We performed a parameter search to recapitulate the GBM’s tumor cell density and ploidy composition before the 2nd surgery. Results suggest that the high-ploidy subpopulation had a higher glucose-dependent proliferation rate (0.70 vs. 0.49), but a lower glucose-dependent death rate (0.47 vs. 1.42). These differences resulted in spatial differences in the distribution of the two subpopulations. Our results contribute to a better understanding of how genomics and microenvironments interact to shape cell fate decisions and could help pave the way to therapeutic strategies that mimic prognostically favorable environments. Keywords: tumor evolution, intra-tumor heterogeneity, whole-genome doubling, mathematical modeling
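As a toy illustration of the resource-limited competition idea (not the S3MB model itself), the sketch below lets two subpopulations proliferate and die at glucose-dependent rates while drawing on a shared, replenished glucose pool; the Michaelis-Menten form and all parameters other than the quoted proliferation/death contrasts are assumptions.

```python
# Two subpopulations competing for a shared, replenished glucose pool.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, p_lo=0.49, d_lo=1.42, p_hi=0.70, d_hi=0.47,
        k_half=0.5, supply=1.0, uptake=0.02):
    n_lo, n_hi, glc = y
    f = glc / (k_half + glc)                 # Michaelis-Menten glucose dependence
    dn_lo = (p_lo * f - d_lo * (1 - f)) * n_lo
    dn_hi = (p_hi * f - d_hi * (1 - f)) * n_hi
    # WGD+ (high-ploidy) cells assumed to consume 1.5x more glucose per cell
    dglc = supply - uptake * (n_lo + 1.5 * n_hi) * f
    return [dn_lo, dn_hi, dglc]

sol = solve_ivp(rhs, (0, 60), [10.0, 1.0, 2.0])
n_lo, n_hi = sol.y[0, -1], sol.y[1, -1]
print(f"final low-ploidy fraction: {n_lo / (n_lo + n_hi):.2f}")
```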
Procedia PDF Downloads 73
26802 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process, using a model trained on the MS MARCO dataset of 500K queries, to extract the most relevant text passage and shorten the lengthy documents. Further, a QA model is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Question (GNQ) dataset, which was collected and made available in the year 2016. The use of any such dataset proves to be inefficient with respect to any questions that have time-varying answers. For illustration, consider the query "Where will be the next Olympics?" The gold answer for this query, as given in the GNQ dataset, is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, this answer was correct at that time. But if the same question is asked in 2022, then the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect for such questions. Such predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset has been used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be guided towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs. Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
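A hedged sketch of one possible time-aware scoring rule is given below: each gold answer carries a validity window, and a prediction counts as correct if any of the top-n answers matches the gold answer valid at the evaluation timestamp. The windowed-gold format and matching rule are illustrative assumptions; the paper's metric may differ in detail.

```python
# Time-aware QA evaluation sketch with windowed gold answers.
from datetime import date

def gold_at(gold_versions, when):
    """Return the gold answer whose validity window contains `when`."""
    for ans, start, end in gold_versions:
        if start <= when <= end:
            return ans
    return None

def time_aware_match(top_n_preds, gold_versions, when):
    gold = gold_at(gold_versions, when)
    return gold is not None and any(gold.lower() in p.lower() for p in top_n_preds)

# Example: "Where will be the next Olympics?"
gold_versions = [
    ("Tokyo", date(2016, 8, 22), date(2021, 8, 8)),
    ("Paris", date(2021, 8, 9), date(2024, 8, 11)),
]
preds_2022 = ["Paris, 2024", "France", "Los Angeles"]
print(time_aware_match(preds_2022, gold_versions, date(2022, 6, 1)))   # True
print(time_aware_match(preds_2022, gold_versions, date(2017, 6, 1)))   # False
```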
Procedia PDF Downloads 101
26801 Magnetic Resonance Imaging for Assessment of the Quadriceps Tendon Cross-Sectional Area as an Adjunctive Diagnostic Parameter in Patients with Patellofemoral Pain Syndrome
Authors: Jae Ni Jang, SoYoon Park, Sukhee Park, Yumin Song, Jae Won Kim, Keum Nae Kang, Young Uk Kim
Abstract:
Objectives: Patellofemoral pain syndrome (PFPS) is a common clinical condition characterized by anterior knee pain. Here, we investigated the quadriceps tendon cross-sectional area (QTCSA) as a novel predictor for the diagnosis of PFPS. By examining the association between the QTCSA and PFPS, we aimed to provide a more valuable diagnostic parameter and a less equivocal assessment of its diagnostic potential by comparing the QTCSA with the quadriceps tendon thickness (QTT), a traditional measure of quadriceps tendon hypertrophy. Patients and Methods: This retrospective study included 30 patients with PFPS and 30 healthy participants who underwent knee magnetic resonance imaging. T1-weighted turbo spin-echo transverse magnetic resonance images were obtained. The QTCSA was measured on the axial-angled phases of the images by drawing outlines, and the QTT was measured at the most hypertrophied part of the quadriceps tendon. Results: The average QTT and QTCSA for patients with PFPS (6.33±0.80 mm and 155.77±36.60 mm², respectively) were significantly greater than those for healthy participants (5.77±0.36 mm and 111.90±24.10 mm², respectively; both P<0.001). We used receiver operating characteristic curves to confirm the sensitivities and specificities of both the QTT and the QTCSA as predictors of PFPS. The optimal diagnostic cutoff value for the QTT was 5.98 mm, with a sensitivity of 66.7%, a specificity of 70.0%, and an area under the curve of 0.75 (0.62–0.88). The optimal diagnostic cutoff value for the QTCSA was 121.04 mm², with a sensitivity of 73.3%, a specificity of 70.0%, and an area under the curve of 0.83 (0.74–0.93). Conclusion: The QTCSA was found to be a more reliable diagnostic indicator for PFPS than the QTT. Keywords: patellofemoral pain syndrome, quadriceps muscle, hypertrophy, magnetic resonance imaging
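For illustration, the sketch below reproduces the type of ROC analysis described: ROC curve, AUC and a Youden-optimal cutoff for a continuous measurement such as the QTCSA; the values are synthetic draws centred on the reported group means, not the study data.

```python
# ROC curve, AUC, and Youden-optimal cutoff on synthetic QTCSA-like data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
qtcsa_pfps = rng.normal(155.8, 36.6, 30)      # patients
qtcsa_ctrl = rng.normal(111.9, 24.1, 30)      # healthy controls
values = np.concatenate([qtcsa_pfps, qtcsa_ctrl])
labels = np.concatenate([np.ones(30), np.zeros(30)])

fpr, tpr, thresholds = roc_curve(labels, values)
auc = roc_auc_score(labels, values)
best = np.argmax(tpr - fpr)                   # Youden's J statistic
print(f"AUC = {auc:.2f}")
print(f"optimal cutoff = {thresholds[best]:.1f} mm^2, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```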
Procedia PDF Downloads 50
26800 3D Modeling for Frequency and Time-Domain Airborne EM Systems with Topography
Authors: C. Yin, B. Zhang, Y. Liu, J. Cai
Abstract:
Airborne EM (AEM) is an effective geophysical exploration tool, especially suitable for rugged mountain areas. In these areas, topography has serious effects on AEM system responses. However, until now little study has been reported on the topographic effect on airborne EM systems. In this paper, an edge-based unstructured finite-element (FE) method is developed for 3D topographic modeling for both frequency- and time-domain airborne EM systems. Starting from the frequency-domain Maxwell equations, a vector Helmholtz equation is derived to obtain a stable and accurate solution. Considering that the AEM transmitter and receiver are both located in the air, the scattered field method is used in our modeling. The Galerkin method is applied to discretize the Helmholtz equation to obtain the final FE equations. Solving the FE equations, the frequency-domain AEM responses are obtained. To accelerate the calculation, the response of the source in free space is used as the primary field, and the PARDISO direct solver is used to deal with the problem of multiple transmitting sources. After calculating the frequency-domain AEM responses, a Hankel transform is applied to obtain the time-domain AEM responses. To check the accuracy of the present algorithm and to analyze the characteristics of the topographic effect on airborne EM systems, both the frequency- and time-domain AEM responses for 3 model groups are simulated: 1) a flat half-space model that has a semi-analytical solution of the EM response; 2) a valley or hill earth model; 3) a valley or hill earth with an abnormal body embedded. Numerical experiments show that, close to the node points of the topography, AEM responses demonstrate sharp changes. Special attention needs to be paid to the topographic effects when interpreting AEM survey data over rugged topographic areas. Besides, the profile of the AEM responses presents a mirror relation with the topographic earth surface. In comparison to the topographic effect, which mainly occurs at the high-frequency end and in the early time channels, the EM responses of underground conductors mainly occur at low frequencies and in later time channels. For the signal of the same time channel, the dB/dt field reflects the change of conductivity better than the B-field. The research of this paper will serve airborne EM in the identification and correction of topographic effects. Keywords: 3D, airborne EM, forward modeling, topographic effect
Procedia PDF Downloads 317
26799 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models
Authors: Ravi Ande, Mousumi Hazari
Abstract:
One of the common geophysical techniques for mapping subsurface geo-electrical structures, extensive hydro-geological research, and engineering and environmental geophysics applications is the use of time-domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any point inside or outside the source loop, make up a large loop TEM system. In general, one can acquire data using one of the configurations with a large loop source, namely, with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset-loop point (offset-loop method). Because of the mathematical simplicity associated with the expressions of the EM fields, as compared to the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping contaminated groundwater caused by hazardous waste, and for mapping the thickness of the permafrost layer. Because a proper analytical expression for the TEM response over a layered earth model for the large loop TEM system does not exist, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. Using the EMLCLLER algorithm, the forward computation is initially carried out in the frequency domain. As a result, EMLCLLER modifies the forward calculation scheme in NLSTCI to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms. Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine
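The frequency-to-time conversion mentioned above can be illustrated with a Fourier sine transform of a causal response, h(t) = -(2/π) ∫₀^∞ Im[H(ω)] sin(ωt) dω; the sketch below verifies the numerical transform on an analytic test function whose time-domain counterpart is known, not on an actual layered-earth TEM kernel.

```python
# Frequency-to-time conversion via a numerical Fourier sine transform.
import numpy as np
from scipy.integrate import quad

tau = 1e-3                                    # s, decay constant of test response

def im_H(w):
    # Imaginary part of the test function H(w) = 1 / (1 + i*w*tau)
    return -w * tau / (1.0 + (w * tau) ** 2)

def h_numeric(t):
    # quad with weight='sin' handles the oscillatory semi-infinite integral
    val, _ = quad(im_H, 0.0, np.inf, weight='sin', wvar=t)
    return -(2.0 / np.pi) * val

for t in (2e-4, 1e-3, 3e-3):
    exact = np.exp(-t / tau) / tau            # known transform of the test H(w)
    print(f"t = {t:.0e} s   numeric = {h_numeric(t):10.3f}   exact = {exact:10.3f}")
```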
Procedia PDF Downloads 92
26798 Time-Domain Analysis Approaches of Soil-Structure Interaction: A Comparative Study
Authors: Abdelrahman Taha, Niloofar Malekghaini, Hamed Ebrahimian, Ramin Motamed
Abstract:
This paper compares the substructure and direct methods for soil-structure interaction (SSI) analysis in the time domain. In the substructure SSI method, the soil domain is replaced by a set of springs and dashpots, also referred to as the impedance function, derived through the study of the behavior of a massless rigid foundation. The impedance function is inherently frequency dependent, i.e., it varies as a function of the frequency content of the structural response. To use the frequency-dependent impedance function for time-domain SSI analysis, the impedance function is approximated at the fundamental frequency of the structure-soil system. To explore the potential limitations of the substructure modeling process, a two-dimensional reinforced concrete frame structure is modeled using the substructure and direct methods in this study. The results show discrepancies between the simulated responses of the substructure and the direct approaches. To isolate the effects of higher modal responses, the same study is repeated using a harmonic input motion, in which a similar discrepancy is still observed between the substructure and direct approaches. It is concluded that the main source of discrepancy between the substructure and direct SSI approaches is likely attributable to the way the impedance functions are calculated, i.e., assuming a massless rigid foundation without considering the presence of the superstructure. Hence, a refined impedance function, considering the presence of the superstructure, should be developed. This refined impedance function is expected to significantly improve the simulation accuracy of the substructure approach for structural systems whose behavior is dominated by the fundamental-mode response. Keywords: direct approach, impedance function, soil-structure interaction, substructure approach
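A minimal sketch of the substructure idea, assuming a generic illustrative impedance expression rather than the one used in the paper: the frequency-dependent impedance is evaluated at the fundamental frequency and reduced to constant spring and dashpot coefficients for the time-domain model.

```python
# Reduce a frequency-dependent impedance to constants at the fundamental frequency.
import numpy as np

def impedance(omega, k_static=2.0e8, a=0.15, c0=3.0e6):
    """Complex dynamic impedance K(w) = k(w) + i*w*c(w) (illustrative form only)."""
    k_dyn = k_static * (1.0 - a * (omega / 30.0) ** 2)   # stiffness degrades with w
    c_dyn = c0 * (1.0 + 0.05 * omega)                    # radiation damping grows
    return k_dyn + 1j * omega * c_dyn

# Fundamental frequency of the structure-soil system (assumed, 2.5 Hz)
omega_1 = 2 * np.pi * 2.5

K1 = impedance(omega_1)
k_eq = K1.real                   # equivalent spring constant
c_eq = K1.imag / omega_1         # equivalent dashpot constant
print(f"k_eq = {k_eq:.3e} N/m, c_eq = {c_eq:.3e} N*s/m")

# These constants then replace the soil domain in the time-domain analysis,
# e.g., as the base spring/dashpot of the frame model.
```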
Procedia PDF Downloads 117
26797 Comparative Analysis of Turbulent Plane Jets from a Sharp-Edged Orifice, a Beveled-Edge Orifice and a Radially Contoured Nozzle
Authors: Ravinesh C. Deo
Abstract:
This article investigates, through experiments, the flow characteristics of plane jets from a sharp-edged orifice plate, a beveled-edge orifice and a radially contoured nozzle. The first two configurations exhibit saddle-backed velocity profiles, while the third shows a top-hat profile. A vena contracta is found for the jet emanating from the orifice at x/h = 3, while the contoured case displays a potential core extending to x/h = 5. A spurt in jet pressure on the centerline supports the vena contracta for the orifice jet. Momentum thicknesses and integral length scales elongate linearly with x, although the growth of the shear layer and large-scale eddies for the orifice is greater than for the contoured case. The near-field spectrum exhibits a higher frequency of the primary eddies, which concurs with enhanced turbulence intensity. Importantly, a highly "turbulent" state of the orifice jet prevails in the far field, where the spectra confirm more energetic secondary eddies associated with a greater flapping amplitude of the orifice jet. Keywords: orifice, beveled-edge orifice, radially contoured nozzle, plane jets
Procedia PDF Downloads 154
26796 Multiple Linear Regression for Rapid Estimation of Subsurface Resistivity from Apparent Resistivity Measurements
Authors: Sabiu Bala Muhammad, Rosli Saad
Abstract:
Multiple linear regression (MLR) models for fast estimation of true subsurface resistivity from apparent resistivity field measurements are developed and assessed in this study. The parameters investigated were apparent resistivity (ρₐ), horizontal location (X) and depth (Z) of the measurement as the independent variables, and true resistivity (ρₜ) as the dependent variable. To achieve linearity in both resistivity variables, the datasets were first transformed into the logarithmic domain, following diagnostic checks of normality of the dependent variable and of heteroscedasticity, to ensure accurate models. Four MLR models were developed based on hierarchical combinations of the independent variables. The generated MLR coefficients were applied to another dataset to estimate ρₜ values for validation. Contours of the estimated ρₜ values were plotted and compared to the observed data plots at the same colour scale and blanking for visual assessment. The accuracy of the models was assessed using the coefficient of determination (R²), the standard error (SE) and the weighted mean absolute percentage error (wMAPE). It is concluded that the MLR models can estimate ρₜ with a high level of accuracy. Keywords: apparent resistivity, depth, horizontal location, multiple linear regression, true resistivity
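A hedged sketch of the log-domain regression is shown below: log₁₀(ρₜ) is regressed on log₁₀(ρₐ), X and Z with scikit-learn; the synthetic data and resulting coefficients are illustrative only, not the field datasets or fitted models of the study.

```python
# Log-domain multiple linear regression on synthetic resistivity data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)
n = 400
rho_a = 10 ** rng.uniform(0.5, 3.0, n)            # apparent resistivity, ohm-m
X = rng.uniform(0, 200, n)                        # horizontal location, m
Z = rng.uniform(1, 40, n)                         # depth, m
# Synthetic "true" resistivity loosely tied to the predictors
rho_t = 10 ** (0.9 * np.log10(rho_a) + 0.002 * X - 0.01 * Z
               + rng.normal(0, 0.05, n))

features = np.column_stack([np.log10(rho_a), X, Z])
target = np.log10(rho_t)

model = LinearRegression().fit(features, target)
pred = model.predict(features)
print("coefficients:", np.round(model.coef_, 4),
      "intercept:", round(model.intercept_, 3))
print("R^2 =", round(r2_score(target, pred), 3))
```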
Procedia PDF Downloads 276
26795 Two-Warehouse Inventory Model for Deteriorating Items with Inventory-Level-Dependent Demand under Two Dispatching Policies
Authors: Lei Zhao, Zhe Yuan, Wenyue Kuang
Abstract:
This paper studies two-warehouse inventory models for a deteriorating item, considering that the demand is influenced by inventory levels. The problem mainly focuses on the optimal order policy and the optimal order cycle with inventory-level-dependent demand in a two-warehouse system for retailers. It considers different deterioration rates and inventory holding costs in the owned warehouse (OW) and the rented warehouse (RW), as well as transportation costs, allowed shortages and partial backlogging. Two inventory models are formulated: a last-in-first-out (LIFO) model and a first-in-first-out (FIFO) model, based on the policy choices of LIFO and FIFO, and a comparative analysis of the LIFO and FIFO models is made. The study finds that the FIFO policy is more in line with realistic operating conditions. Especially when the inventory holding cost of the OW is high, and there is either no difference or a big difference between the deterioration rates of the OW and RW, the FIFO policy has better applicability. Meanwhile, this paper considers the differences between the effects of warehouse and shelf inventory levels on demand, and then builds retailers' inventory decision models and studies the factors affecting the optimal order quantity, the optimal order cycle and the average inventory cost per unit time. To minimize the average total cost, optimal dispatching policies are provided for retailers' decisions. Keywords: FIFO model, inventory-level-dependent, LIFO model, two-warehouse inventory
Procedia PDF Downloads 279
26794 Optical Vortex in Asymmetric Arcs of Rotating Intensity
Authors: Mona Mihailescu, Rebeca Tudor, Irina A. Paun, Cristian Kusko, Eugen I. Scarlat, Mihai Kusko
Abstract:
Specific intensity distributions in laser beams are required in many fields: optical communications, material processing, microscopy, and optical tweezers. In optical communications, the information embedded in specific beams and the superposition of multiple beams can be used to increase the capacity of communication channels, employing spatial modulation as an additional degree of freedom besides the already available polarization and wavelength multiplexing. In this regard, optical vortices are of interest due to their potential to carry independent data, which can be multiplexed at the transmitter and demultiplexed at the receiver. Their combinations have also been studied in the literature: 1) axial or perpendicular superposition of multiple optical vortices, or 2) combination with other laser beam types: Bessel, Airy. Optical vortices, characterized by a stationary ring-shaped intensity and a rotating phase, are achieved using computer-generated holograms (CGH) obtained by simulating the interference between a tilted plane wave and a wave passing through a helical phase object. Here, we propose a method to combine information through the reunion of two CGHs. One is obtained using a helical phase distribution, characterized by its topological charge, m. The other is obtained using a conical phase distribution, characterized by its radial factor, r0. Each CGH is obtained using a plane wave with a different tilt: km for the CGH generated from the helical phase object and kr for the one generated from the conical phase object. These reunions of two CGHs are calculated as phase optical elements, addressed on the liquid crystal display of a spatial light modulator, to optically process the incident beam for investigation of the diffracted intensity pattern in the far field. For a parallel reunion of the two CGHs and high values of the ratio between km and kr, the bright ring in the first diffraction order, specific to optical vortices, changes into an asymmetric intensity pattern: a number of circle arcs. Both diffraction orders (+1 and -1) are asymmetrical relative to each other. In different planes along the optical axis, it is observed that this asymmetric intensity pattern rotates around its centre: in the +1 diffraction order the rotation is anticlockwise, and in the -1 diffraction order the rotation is clockwise. The relation between m and r0 controls the diameter of the circle arcs, and the ratio between km and kr controls the number of arcs. For a perpendicular reunion of the two CGHs and low values of the ratio between km and kr, the optical vortices are multiplied and focused in different planes, depending on the radial parameter. The first diffraction order contains information about both phase objects. It is incident on the phase masks placed at the receiver, computed using the opposite values of the topological charge or of the radial parameter and displayed successively. In all, the proposed method is explored in terms of its constructive parameters, for the possibility offered by the combination of different types of beams, which can be used in robust optical communications. Keywords: asymmetrical diffraction orders, computer generated holograms, conical phase distribution, optical vortices, spatial light modulator
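For illustration, the sketch below builds a phase-only CGH from the interference of tilted plane waves with a helical (charge m) and a conical (radial factor r0) phase object and inspects the far field with an FFT; the grid size, tilts, and the particular way the two patterns are combined are assumptions, not the authors' parameters.

```python
# Phase-only CGH sketch combining helical and conical phase objects.
import numpy as np

N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
phi = np.arctan2(Y, X)

m, r0 = 3, 40.0          # topological charge and conical (radial) factor
k_m, k_r = 60.0, 15.0    # tilts of the two reference plane waves

# Sum of the two "interference" phase patterns (a parallel-reunion analogue)
cgh_phase = np.angle(np.exp(1j * (m * phi + k_m * X)) +
                     np.exp(1j * (r0 * r + k_r * X)))

# Illuminate with a Gaussian beam and look at the far-field intensity
beam = np.exp(-(r / 0.6) ** 2) * np.exp(1j * cgh_phase)
far_field = np.fft.fftshift(np.fft.fft2(beam))
intensity = np.abs(far_field) ** 2
print("far-field intensity grid:", intensity.shape,
      "peak at index:", np.unravel_index(np.argmax(intensity), intensity.shape))
```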
Procedia PDF Downloads 311
26793 Heart and Plasma LDH and CK in Response to Intensive Treadmill Running and Aqueous Extraction of Red Crataegus pentagyna in Male Rats
Authors: A. Abdi, A. Barari, A. Hojatollah Nikbakht, Khosro Ebrahim
Abstract:
Aim: The purpose of the current study was to investigate the effect of high-intensity treadmill running training (8 weeks), with or without an aqueous extract of Crataegus pentagyna, on heart and plasma LDH and CK. Design: Thirty-two male Wistar rats (4-6 weeks old, 125-135 g body weight) were used. Animals were randomly assigned to training (n = 16) and control (n = 16) groups and further divided into saline-control (SC, n = 8), saline-training (ST, n = 8), red Crataegus pentagyna extract-control (CPEC, n = 8), and red Crataegus pentagyna extract-training (CPET, n = 8) groups. The training groups performed a high-intensity running program (34 m/min on a 0% grade, 60 min/day, 5 days/week) on a motor-driven treadmill for 8 weeks. Animals were fed orally with Crataegus extract or saline solution (500 mg/kg body weight, or 10 ml/kg body weight) for the last six weeks. Seventy-two hours after the last training session, rats were sacrificed; plasma was collected, and hearts were excised and immediately frozen in liquid nitrogen. LDH and CK levels were measured by a colorimetric method. Statistical analysis was performed using a one-way analysis of variance and the Tukey test. Significance was accepted at P = 0.05. Results: The results showed that consumption of Crataegus lowers LDH and CK in heart and plasma. Also, heart LDH and CK were lower in the CPET group compared to the ST group, while plasma LDH and CK in the CPET group were higher than in the ST group. The results of the ANOVA showed that, due to high-intensity exercise and Crataegus consumption, there are significant differences in the levels of heart LDH (P < 0.001), plasma CK (P < 0.006) and heart CK (P < 0.001). Conclusion: It appears that high-intensity exercise leads to increased tissue damage and inflammatory factors in plasma. On the other hand, consumption of the aqueous extract of red Crataegus may inhibit these factors and prevent muscle and heart damage. Keywords: LDH, CK, Crataegus, intensity
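A minimal sketch of the statistical analysis named above, a one-way ANOVA followed by Tukey's test across the four groups, using synthetic heart-LDH values drawn only to illustrate the workflow:

```python
# One-way ANOVA + Tukey HSD sketch on synthetic group data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
groups = {
    "SC":   rng.normal(220, 20, 8),
    "ST":   rng.normal(300, 25, 8),
    "CPEC": rng.normal(210, 20, 8),
    "CPET": rng.normal(255, 22, 8),
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))      # pairwise post-hoc comparisons
```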
Procedia PDF Downloads 437