Search results for: activation code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2327

287 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver

Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin

Abstract:

The National Space Organization (NSPO) completed the development of a space-borne GPS receiver in 2014, including design, manufacture, comprehensive functional testing, environmental qualification testing, and so on. The main performance figures of this receiver include 8-meter positioning accuracy, 0.05 m/s velocity accuracy, a cold-start time of at most 90 seconds, and operation in high-dynamic scenarios of up to 15 g. The receiver will be integrated in the autonomous FORMOSAT-7 NSPO-built satellite scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will undergo comprehensive functional tests and environmental acceptance tests, which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), of which currently only 50% of the throughput is used. In response to the rapid growth of global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a GNSS reflectometry receiver. The fundamental purpose of this extension study is to port software algorithms that involve reusable code and a large amount of computation, such as signal acquisition and correlation, to an FPGA, while the processor remains responsible for operational control, the navigation solution, orbit propagation, and so on. Because FPGA technology is developing and evolving rapidly, the new system architecture upgraded via an FPGA should be able to achieve the goal of being a multi-mode, multi-band, high-precision navigation receiver or a scientific receiver. Finally, the test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. This paper explains the detailed DSP/FPGA architecture, development, test results, and the goals of the next development stage of this receiver.
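
As a rough illustration of the acquisition-and-correlation workload that the abstract proposes to move onto the FPGA, the following sketch performs a generic FFT-based parallel code-phase search. It is not NSPO's implementation: the PRN sequence is a random stand-in for a real C/A code, and the sampling rate, Doppler grid, and signal parameters are arbitrary assumptions.

```python
import numpy as np

# Generic FFT-based parallel code-phase search (acquisition + correlation).
# A random +/-1 chip sequence stands in for a real C/A Gold code; the sampling
# rate, Doppler grid, and signal parameters are arbitrary assumptions.
fs = 4.096e6                 # sampling rate [Hz] (assumed)
n = 4096                     # samples per code period at fs
rng = np.random.default_rng(0)
prn = rng.choice([-1.0, 1.0], n)              # stand-in PRN chip sequence

# Simulated received signal: circularly delayed code + Doppler carrier + noise
true_delay, true_doppler = 1234, 2500.0       # samples, Hz
t = np.arange(n) / fs
rx = np.roll(prn, true_delay) * np.exp(2j * np.pi * true_doppler * t)
rx += rng.normal(size=n) + 1j * rng.normal(size=n)

# For each Doppler bin: wipe off the carrier, then circularly correlate via FFT
code_fft_conj = np.conj(np.fft.fft(prn))
best_peak, best_delay, best_fd = 0.0, None, None
for fd in np.arange(-5000.0, 5001.0, 500.0):
    wiped = rx * np.exp(-2j * np.pi * fd * t)                 # carrier wipe-off
    corr = np.fft.ifft(np.fft.fft(wiped) * code_fft_conj)     # circular correlation
    peak = np.abs(corr).max()
    if peak > best_peak:
        best_peak, best_delay, best_fd = peak, int(np.abs(corr).argmax()), fd

print(f"estimated code phase = {best_delay} samples, Doppler = {best_fd:.0f} Hz")
```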

Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band

Procedia PDF Downloads 357
286 Outcome of Emergency Response Team System in In-Hospital Cardiac Arrest

Authors: Jirapat Suriyachaisawat, Ekkit Surakarn

Abstract:

Introduction: To improve early detection and reduce the mortality rate of in-hospital cardiac arrest, an Emergency Response Team (ERT) system was planned and has been implemented since June 2009 to detect pre-arrest conditions and respond to any concerns. The ERT consisted of on-duty physicians and nurses from the emergency department. ERT calling criteria consisted of an acute change of heart rate to < 40 or > 130 beats per minute, systolic blood pressure < 90 mmHg, respiratory rate < 8 or > 28 breaths per minute, O2 saturation < 90%, acute change in conscious state, acute chest pain, or staff concern about the patient. From the data on ERT system implementation in our hospital in the early phase (June 2009-2011), there was no statistically significant difference in in-hospital cardiac arrest incidence or overall hospital mortality rate. Since the introduction of the ERT service in our hospital, we have conducted a continuous educational campaign to improve awareness in an attempt to increase use of the service. Methods: To investigate the outcome of the ERT system on in-hospital cardiac arrest and the overall hospital mortality rate, we conducted a prospective, controlled before-and-after examination of the long-term effect of an ERT system on the incidence of cardiac arrest. We performed chi-square analysis to test for statistical significance. Results: Of a total of 623 ERT cases from June 2009 until December 2012, there were 72 calls in 2009, 196 calls in 2010, 139 calls in 2011, and 245 calls in 2012. The number of ERT calls per 1000 admissions was 7.69 in 2009-10, 5.61 in 2011, and 9.38 in 2012. The number of code blue calls per 1000 admissions decreased significantly from 2.28 to 0.99 per 1000 admissions (P value < 0.001). The incidence of cardiac arrest decreased progressively from 1.19 to 0.34 per 1000 admissions, with the difference being significant in 2012 (P value < 0.001). The overall hospital mortality rate decreased by 8% from 15.43 to 14.43 per 1000 admissions (P value 0.095). Conclusions: ERT system implementation was associated with a progressive reduction in cardiac arrests over a three-year period, with the difference becoming statistically significant in the fourth year after implementation. We also found an inverse association between the number of ERT calls and the risk of occurrence of cardiac arrest, but we did not find a difference in the overall hospital mortality rate.
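
The chi-square comparison reported above can be illustrated with a minimal sketch. The admission denominators are not given in the abstract, so the counts below are hypothetical and chosen only to reproduce rates of roughly 2.28 and 0.99 events per 1000 admissions.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 comparison in the spirit of the chi-square analysis above:
# code blue calls per admission, before versus after full ERT uptake. The
# admission denominators are assumed; only the rates come from the abstract.
admissions_before, admissions_after = 20000, 26000           # assumed
events_before = round(2.28 / 1000 * admissions_before)       # ~2.28 per 1000
events_after = round(0.99 / 1000 * admissions_after)         # ~0.99 per 1000

table = [[events_before, admissions_before - events_before],
         [events_after, admissions_after - events_after]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
```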

Keywords: emergency response team, ERT, cardiac arrest, emergency medicine

Procedia PDF Downloads 301
285 Applicability of Polyisobutylene-Based Polyurethane Structures in Biomedical Disciplines: Some Calcification and Protein Adsorption Studies

Authors: Nihan Nugay, Nur Cicek Kekec, Kalman Toth, Turgut Nugay, Joseph P. Kennedy

Abstract:

In recent years, polyurethane structures have been paving the way for elastomer usage in biology, human medicine, and biomedical applications. Polyurethanes (PUs) that combine high oxidative and hydrolytic stability with excellent mechanical properties are of particular interest for implantable medical devices such as cardiac-assist devices. Recently, unique polyurethanes consisting of polyisobutylene soft segments and conventional hard segments, termed PIB-based PUs, have been developed with precise NCO/OH stoichiometry (~1.05) to obtain PIB-based PUs with enhanced properties (i.e., tensile stress increased from ~11 to ~26 MPa and elongation from ~350 to ~500%). Static and dynamic mechanical properties were optimized by examining stress-strain graphs, self-organization and crystallinity (XRD) traces, rheological (DMA, creep) profiles, and thermal (TGA, DSC) responses. An annealing procedure was applied to the PIB-based PUs. Annealed PIB-based PU shows ~26 MPa tensile strength, ~500% elongation, and ~77 Microshore hardness with excellent hydrolytic and oxidative stability. Their surface characteristics were examined with AFM and contact angle measurements. Annealed PIB-based PU exhibits higher segregation of the individual segments and higher surface hydrophobicity; annealing thus significantly enhances hydrolytic and oxidative stability by shielding the carbamate bonds with inert PIB chains. Given these improved surface and microstructural characteristics, greater efforts were focused on analyzing protein adsorption and calcification profiles. In biomedical applications, especially cardiological implantations, protein adsorption on polymeric heart valves is undesirable, since protein adsorption from blood serum is followed by platelet adhesion and subsequent thrombus formation. The protein adsorption character of PIB-based PU was examined by applying the Bradford assay in fibrinogen and bovine serum albumin solutions. Like protein adsorption, calcium deposition on heart valves is very harmful, because vascular calcification has been linked to activation of osteogenic mechanisms in the vascular wall, loss of inhibitory factors, enhanced bone turnover, and irregularities in mineral metabolism. Calcium deposition on the films was characterized by incubating samples in simulated body fluid solution and examining SEM images and XPS profiles. PIB-based PUs are significantly more resistant to hydrolytic-oxidative degradation, protein adsorption, and calcium deposition than ElastEonTM E2A, a commercially available PDMS-based PU widely used for biomedical applications.

Keywords: biomedical application, calcification, polyisobutylene, polyurethane, protein adsorption

Procedia PDF Downloads 245
284 Optimal Beam for Accelerator Driven Systems

Authors: M. Paraipan, V. M. Javadova, S. I. Tyutyunnikov

Abstract:

The concept of the energy amplifier, or accelerator driven system (ADS), involves the use of a particle accelerator coupled with a nuclear reactor. The accelerated particle beam generates a supplementary source of neutrons, which allows subcritical operation of the reactor and consequently safe exploitation. The harder neutron spectrum realized ensures better incineration of the actinides. The almost universal opinion is that the optimal beam for an ADS is protons with an energy around 1 GeV (gigaelectronvolt). In the present work, a systematic analysis of the energy gain is performed for proton beams with energies from 0.5 to 3 GeV and ion beams from deuterons to neon with energies between 0.25 and 2 AGeV. The target is an assembly of metallic U-Pu-Zr fuel rods in a bath of lead-bismuth eutectic coolant. The rod length is 150 cm. A beryllium converter with a length of 110 cm is used in order to maximize the energy released in the target. The case of a linear accelerator is considered, with a beam intensity of 1.25×10¹⁶ p/s and a total accelerator efficiency of 0.18 for the proton beam. These values are planned to be achieved in the European Spallation Source project. The energy gain G is calculated as the ratio of the energy released in the target to the energy spent to accelerate the beam. The energy released is obtained through simulation with the code Geant4. The energy spent is calculated by scaling from the data on the accelerator efficiency for the reference particle (the proton). The analysis concerns the G values, the net power produced, the accelerator length, and the period between refuelings. The optimal energy for protons is 1.5 GeV. At this energy, G reaches a plateau around a value of 8, with a net power production of 120 MW (megawatt). Starting with alpha particles, ion beams have a higher G than 1.5 GeV protons. A beam of 0.25 AGeV (gigaelectronvolt per nucleon) ⁷Li realizes the same net power production as 1.5 GeV protons, has a G of 15, and needs an accelerator 2.6 times shorter than that for protons, representing the best solution for ADS. Beams of ¹⁶O or ²⁰Ne with an energy of 0.75 AGeV, accelerated in an accelerator with the same length as for 1.5 GeV protons, produce approximately 900 MW of net power, with a gain of 23-25. The study of the evolution of the isotopic composition during irradiation shows that the increase in power production diminishes the period between refuelings. For a net power production of 120 MW, the target can be irradiated for approximately 5000 days without refueling, but only 600 days when the net power reaches 1 GW (gigawatt).
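
The energy-gain bookkeeping defined in this abstract (G as the ratio of energy released in the target to energy spent accelerating the beam) can be sketched as follows. The energy deposited per incident proton would in practice come from the Geant4 simulation; here it is a placeholder back-calculated so that the output lands near roughly the reported G of 8 and 120 MW net power, and is therefore an assumption, not a simulation result.

```python
# Energy-gain bookkeeping as defined in the abstract: G = E_released / E_spent.
# The energy deposited in the target per incident proton would come from Geant4;
# the 66.7 GeV figure is a placeholder back-calculated to land near the reported
# G ~ 8 and ~120 MW net power, not a simulation result.
beam_energy_GeV = 1.5           # proton kinetic energy
intensity = 1.25e16             # particles per second (from the abstract)
accel_efficiency = 0.18         # total accelerator efficiency (from the abstract)
released_per_proton_GeV = 66.7  # assumed Geant4 result

joule_per_GeV = 1.602e-10
beam_power = beam_energy_GeV * joule_per_GeV * intensity      # W carried by the beam
power_spent = beam_power / accel_efficiency                   # electrical power drawn
power_released = released_per_proton_GeV * joule_per_GeV * intensity

G = power_released / power_spent
net_power_MW = (power_released - power_spent) / 1e6           # released minus spent
print(f"beam power = {beam_power/1e6:.1f} MW, G = {G:.1f}, net = {net_power_MW:.0f} MW")
```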

Keywords: accelerator driven system, ion beam, electrical power, energy gain

Procedia PDF Downloads 128
283 Seismic Assessment of Non-Structural Component Using Floor Design Spectrum

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Experience in past earthquakes has clearly demonstrated the necessity of seismic design and assessment of Non-Structural Components (NSCs), particularly in post-disaster structures such as hospitals and power plants, as they have to remain functional and operational. Meeting this objective is contingent upon proper seismic performance of both structural and non-structural components. Proper seismic design, analysis, and assessment of NSCs can be attained through the generation of a Floor Design Spectrum (FDS), in a fashion similar to the target spectrum for structural components. This paper presents a methodology developed to generate the FDS directly from the corresponding Uniform Hazard Spectrum (UHS) (i.e., the design spectrum for structural components). The methodology is based on the experimental and numerical analysis of a database of 27 real Reinforced Concrete (RC) buildings located in Montreal, Canada. The buildings were tested by Ambient Vibration Measurements (AVM), and their dynamic properties were extracted and used as part of the approach. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, most of which are designated as post-disaster/emergency shelters by the city of Montreal. The buildings are subjected to 20 seismic records compatible with the UHS of Montreal, and Floor Response Spectra (FRS) are developed for every floor in the two horizontal directions considering four different damping ratios of NSCs (i.e., 2, 5, 10, and 20% viscous damping). The generated FRS (approximately 132,000 curves) are statistically studied, and a methodology is proposed to generate the FDS directly from the corresponding UHS. The approach is capable of generating the FDS for any selected floor level and NSC damping ratio. It captures the effects of dynamic interaction between the primary (structural) and secondary (NSC) systems as well as the higher and torsional modes of the primary structure. These are important improvements of this approach compared with conventional methods and code recommendations. Application of the proposed approach is illustrated here through two real case-study buildings: one low-rise and one medium-rise. The proposed approach can be used as a practical and robust tool for seismic assessment and design of NSCs, especially in existing post-disaster structures.
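
A minimal sketch of the basic FRS building block used in this kind of study is given below: the pseudo-acceleration response spectrum of a linear single-degree-of-freedom NSC is computed from a floor acceleration history for the four damping ratios listed in the abstract. The floor motion here is synthetic noise standing in for the recorded or simulated floor histories of the 27 buildings, and the Newmark integration is a generic choice rather than the authors' specific procedure.

```python
import numpy as np

def response_spectrum(acc, dt, periods, zeta):
    """Pseudo-acceleration spectrum of a linear SDOF oscillator subjected to the
    base acceleration `acc` [m/s^2], via the Newmark average-acceleration method."""
    Sa = np.zeros(len(periods))
    for i, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        k, c = wn**2, 2.0 * zeta * wn                 # stiffness/damping per unit mass
        u, v = 0.0, 0.0
        a = -acc[0]                                   # initial acceleration (u = v = 0)
        umax = 0.0
        keff = k + 2.0 * c / dt + 4.0 / dt**2         # Newmark gamma=1/2, beta=1/4
        for p in -acc[1:]:
            rhs = p + (4.0 / dt**2) * u + (4.0 / dt) * v + a + c * (2.0 * u / dt + v)
            un = rhs / keff
            vn = 2.0 * (un - u) / dt - v
            an = p - c * vn - k * un
            u, v, a = un, vn, an
            umax = max(umax, abs(u))
        Sa[i] = wn**2 * umax                          # pseudo spectral acceleration
    return Sa

# Placeholder floor motion (assumed): in the study these would be the floor
# acceleration histories of the instrumented RC buildings.
dt = 0.01
floor_acc = 0.5 * np.random.default_rng(1).standard_normal(3000)
periods = np.linspace(0.05, 3.0, 60)
for zeta in (0.02, 0.05, 0.10, 0.20):                 # NSC damping ratios from the abstract
    Sa = response_spectrum(floor_acc, dt, periods, zeta)
    print(f"zeta = {zeta:.2f}: peak Sa = {Sa.max():.2f} m/s^2")
```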

Keywords: earthquake engineering, operational and functional components, operational modal analysis, seismic assessment and design

Procedia PDF Downloads 201
282 Focus Group Study Exploring Researchers' Perspective on Open Science Policy

Authors: E. T. Svahn

Abstract:

Knowledge about the factors that influence the exchange between research and society is of the utmost importance for developing collaboration between different actors, especially in future science policy development and the creation of support structures for researchers. This includes, among other things, how researchers view the surrounding open science policy environment and what conditions and attitudes they have for interacting with it. This paper examines Finnish researchers' attitudes towards open science policies in 2020. Open science is an integrated part of researchers' daily lives and supports not only the effectiveness of research outputs but also the quality of research. In an ideal situation, open science policy is seen as a supporting structure that enables the exchange between research and society, but in other situations it can end up as red tape that generates obstacles and hinders the possibility of doing science efficiently. The results of this study were obtained through focus group interviews. This qualitative research method was selected because it aims to understand the phenomenon under study. In addition, focus group interviews produce diverse and rich material that would not be available with other research methods. Focus group interviews have well-established applications in social science, especially in understanding the perspectives and experiences of research subjects. In this study, focus groups were used to study the mindset and actions of researchers. Each group consisted of 4-10 people, and the aim was to bring out different perspectives on the subject. The interviewer enabled the presentation of different perceptions and opinions, and the focus group interviews were recorded and transcribed. The material was analysed using the grounded theory method. The results are presented as thematic areas, a theoretical model, and direct quotations. Attitudes towards open science policy can vary greatly depending on the research area. This study shows that open science policy demands vary somewhat between medicine, technology, and the natural sciences on the one hand and the social sciences, educational sciences, and the humanities on the other. The variation in attitudes between different research areas can thus be largely explained by the fact that research outputs and ethical codes vary significantly between subjects. This study aims to increase understanding of the extent to which open science policies should be tailored for different disciplines and research areas.

Keywords: focus group interview, grounded theory, open science policy, science policy

Procedia PDF Downloads 139
281 Comparison of the Efficacy of Ketamine-Propofol versus Thiopental Sodium-Fentanyl in Procedural Sedation in the Emergency Department: A Randomized Double-Blind Clinical Trial

Authors: Maryam Bahreini, Mostafa Talebi Garekani, Fatemeh Rasooli, Atefeh Abdollahi

Abstract:

Introduction: Procedural sedation and analgesia are desirable for handling painful procedures. The search for an agent with greater efficacy and fewer complications is still ongoing; thus, many sedative regimens have been studied. This study assessed the effectiveness and adverse effects of thiopental sodium-fentanyl compared with the established combination ketamine-propofol for procedural sedation in the emergency department. Methods: Consenting patients were enrolled in this randomized double-blind trial to receive either ketamine-propofol (KP) or thiopental-fentanyl (TF), each in a 1:1 (mg:mg) proportion, on a weight-based dosing basis to reach American Society of Anesthesiologists sedation class III/IV. Respiratory and hemodynamic complications, nausea and vomiting, recovery agitation, patient recall and satisfaction, provider satisfaction, and recovery time were compared. The study was registered in the Iranian Registry of Clinical Trials (code: IRCT2015111325025N1). Results: 96 adult patients were included and randomized, 47 in the KP group and 49 in the TF group. Transient hypoxia occurred in 2.1% of the KP group and 8.1% of the TF group, requiring airway maneuvers in 4.2% versus 8.1% of the two groups, respectively; however, no statistically significant difference was observed between the two combinations, and there was no report of endotracheal tube placement or further admission. Patient and physician satisfaction were significantly higher in the KP group. There was no difference between the groups in respiratory, gastrointestinal, cardiovascular, or psychiatric adverse events, recovery time, or patient recall of the procedure. The efficacy and complications were not related to the type of procedure or to patients' smoking or addiction history. Conclusion: The ketamine-propofol and thiopental-fentanyl combinations were comparably effective, although KP resulted in higher patient and provider satisfaction. It is estimated that the thiopental-fentanyl combination can be as potent and efficacious as ketofol, with a relatively similar incidence of adverse events in procedural sedation.

Keywords: adverse effects, conscious sedation, fentanyl, propofol, ketamine, safety, thiopental

Procedia PDF Downloads 204
280 Long-Term Outcome of Emergency Response Team System in In-Hospital Cardiac Arrest

Authors: Jirapat Suriyachaisawat, Ekkit Surakarn

Abstract:

Introduction: To improve early detection and reduce the mortality rate of in-hospital cardiac arrest, an Emergency Response Team (ERT) system was planned and has been implemented since June 2009 to detect pre-arrest conditions and respond to any concerns. The ERT consisted of on-duty physicians and nurses from the emergency department. ERT calling criteria consisted of an acute change of heart rate to < 40 or > 130 beats per minute, systolic blood pressure < 90 mmHg, respiratory rate < 8 or > 28 breaths per minute, O2 saturation < 90%, acute change in conscious state, acute chest pain, or staff concern about the patient. From the data on ERT system implementation in our hospital in the early phase (June 2009-2011), there was no statistically significant difference in in-hospital cardiac arrest incidence or overall hospital mortality rate. Since the introduction of the ERT service in our hospital, we have conducted a continuous educational campaign to improve awareness in an attempt to increase use of the service. Methods: To investigate the outcome of the ERT system on in-hospital cardiac arrest and the overall hospital mortality rate, we conducted a prospective, controlled before-and-after examination of the long-term effect of an ERT system on the incidence of cardiac arrest. We performed chi-square analysis to test for statistical significance. Results: Of a total of 623 ERT cases from June 2009 until December 2012, there were 72 calls in 2009, 196 calls in 2010, 139 calls in 2011, and 245 calls in 2012. The number of ERT calls per 1000 admissions was 7.69 in 2009-10, 5.61 in 2011, and 9.38 in 2012. The number of code blue calls per 1000 admissions decreased significantly from 2.28 to 0.99 per 1000 admissions (P value < 0.001). The incidence of cardiac arrest decreased progressively from 1.19 to 0.34 per 1000 admissions, with the difference being significant in 2012 (P value < 0.001). The overall hospital mortality rate decreased by 8% from 15.43 to 14.43 per 1000 admissions (P value 0.095). Conclusions: ERT system implementation was associated with a progressive reduction in cardiac arrests over a three-year period, with the difference becoming statistically significant in the fourth year after implementation. We also found an inverse association between the number of ERT calls and the risk of occurrence of cardiac arrest, but we did not find a difference in the overall hospital mortality rate.

Keywords: cardiac arrest, outcome, in-hospital, ERT

Procedia PDF Downloads 190
279 Anatomical and Histological Analysis of Salpinx and Ovary in Anatolian Wild Goat (Capra aegagrus aegagrus)

Authors: Gulseren Kirbas, Mushap Kuru, Buket Bakir, Ebru Karadag Sari

Abstract:

Capra (mountain goat) is a genus comprising nine species. The domestic goat (C. aegagrus hircus) is a domesticated subspecies of the wild goat. This study aimed to determine the anatomical structure of the salpinx and ovary of the Anatolian wild goat (C. aegagrus aegagrus). Animals that were brought to the Kafkas University Wildlife Rescue and Rehabilitation Center, Kars, Turkey, for various reasons, such as traffic accidents and firearm injuries, were used in this study. The salpinges and ovaries of four wild goats of similar ages, which could not be saved by the Center despite all interventions, were dissected. Measurements were taken from the right and left salpinx and ovary using digital calipers. The weight of each ovary and salpinx was measured using a precision scale (min: 0.0001 g, max: 220 g; code: XB220A; Precisa, Switzerland). The histological structure of the tissues was examined after weighing the organs. The tissue samples were fixed in 10% formaldehyde for 24 h; a routine procedure was then applied, and the tissues were embedded in paraffin. Mallory's modified triple staining was used to demonstrate the general structure of the salpinx. The salpinx was found to consist of three different regions (infundibulum, ampulla, and isthmus). These regions consisted of tunica mucosa, tunica muscularis, and tunica serosa. Prismatic epithelial cells were observed in the lamina epithelialis of the tunica mucosa in every region, but prismatic fimbrial cells occurred mostly in the infundibulum. The ampulla was distinguished by its many mucosal folds; it was the longest region of the salpinx and was joined to the isthmus via the ampullary-isthmus junction. The isthmus was the caudal end of the salpinx, joined to the uterus, and had the thickest tunica muscularis of the three regions. The mean length of the ovary was 13.22 ± 1.27 mm, its width was 8.46 ± 0.88 mm, its thickness was 5.67 ± 0.79 mm, and its weight was 0.59 ± 0.17 g. The average length of the salpinx was 58.11 ± 14.02 mm, its width was 0.80 ± 0.22 mm, its thickness was 0.41 ± 0.01 mm, and its weight was 0.30 ± 0.08 g. In conclusion, the Anatolian wild goat, which is part of the wildlife diversity of Turkey, has been disappearing in recent years due to illegal and uncontrolled hunting as well as traffic accidents. These findings are believed to contribute to the literature.

Keywords: Anatolian wild goat, anatomy, ovary, salpinx

Procedia PDF Downloads 210
278 Musical Notation Reading versus Alphabet Reading: Comparison and Implications for Teaching Music Reading to Students with Dyslexia

Authors: Ora Geiger

Abstract:

Reading is a cognitive process of deciphering visual signs to produce meaning. During the reading process, written information in the form of symbols and signs is received by the eye and processed in the brain. This definition is relevant to both the reading of letters and the reading of musical notation. But while the letters of the alphabet are signs determined arbitrarily, notes are recorded systematically on a staff, with the location of each note on the staff indicating its relative pitch. In this paper, the researcher specifies the characteristics of alphabet reading in comparison to musical notation reading and discusses whether a person diagnosed with dyslexia will necessarily have difficulty in reading musical notes. Dyslexia is a learning disorder that makes it difficult to acquire alphabet-reading skills due to difficulties in letter identification, spelling, and other language-deciphering skills. In order to read, one must be able to connect a symbol with a sound and to join the sounds into words. A person who has dyslexia finds it difficult to translate a graphic symbol into the sound that it represents. When teaching reading to children diagnosed with dyslexia, the multi-sensory approach, which supports the activation and involvement of most of the senses in the learning process, has been found to be particularly effective. According to this approach, when most senses participate in the reading learning process, it becomes more effective. Over years of experience, the researcher, a music specialist, has been following the music reading learning process of elementary-school-age students, some of them diagnosed with dyslexia, while they studied to play the soprano (descant) recorder. She argues that learning music reading while studying to play a musical instrument is by nature a multi-sensory experience. The senses involved are sight, hearing, touch, and the kinesthetic sense (motion), which provides the brain with information on the relative positions of the body. In this way, the learner simultaneously experiences visual, auditory, tactile, and kinesthetic impressions. The researcher concludes that there should be no contraindication to teaching standard music reading to children with dyslexia if an appropriate process is offered. This conclusion is based on two main characteristics of music reading: (1) the musical notation system is a systematic, logical, relative set of symbols written on a staff; and (2) learning music reading in connection with playing a musical instrument is by nature a multi-sensory activity, since it combines sight, hearing, touch, and movement. This paper describes music reading teaching procedures and provides unique teaching methods that have been found to be effective for students diagnosed with dyslexia. It provides theoretical explanations in addition to guidelines for music education practices.

Keywords: alphabet reading, dyslexia, multisensory teaching method, music reading, recorder playing

Procedia PDF Downloads 353
277 LTE Modelling of a DC Arc Ignition on Cold Electrodes

Authors: O. Ojeda Mena, Y. Cressault, P. Teulet, J. P. Gonnet, D. F. N. Santos, MD. Cunha, M. S. Benilov

Abstract:

The assumption of a plasma in local thermal equilibrium (LTE) is commonly used to perform electric arc simulations for industrial applications. This assumption allows the arc to be modelled using a set of magnetohydrodynamic equations that can be solved with a computational fluid dynamics code. However, the LTE description is only valid in the arc column, whereas in the regions close to the electrodes the plasma deviates from the LTE state. The importance of these near-electrode regions is non-trivial, since they define the energy and current transfer between the arc and the electrodes. Therefore, any accurate modelling of the arc must include a good description of the arc-electrode phenomena. Due to the modelling complexity and computational cost of solving the near-electrode layers, a simplified description of the arc-electrode interaction was developed in a previous work to study a steady high-pressure arc discharge, where the near-electrode regions are introduced at the interface between arc and electrode as boundary conditions. The present work proposes a similar approach to simulate the arc ignition in a free-burning arc configuration following an LTE description of the plasma. To obtain the transient evolution of the arc characteristics, appropriate boundary conditions for both the near-cathode and the near-anode regions are used based on recent publications. The arc-cathode interaction is modelled using a non-linear surface heating approach that considers secondary electron emission. On the other hand, the interaction between the arc and the anode is taken into account by means of the heating voltage approach. From the numerical modelling, three main stages can be identified during the arc ignition. Initially, a glow discharge is observed, where the cold non-thermionic cathode is uniformly heated at its surface and the near-cathode voltage drop is on the order of a few hundred volts. Next, a spot with high temperature forms at the cathode tip, followed by a sudden decrease of the near-cathode voltage drop, marking the glow-to-arc discharge transition. During this stage, the LTE plasma also presents an important increase of the temperature in the region adjacent to the hot spot. Finally, the near-cathode voltage drop stabilizes at a few volts, and both the electrode and plasma temperatures reach the steady solution. After a few seconds, the results are similar to those reported for thermionic cathodes.

Keywords: arc-electrode interaction, thermal plasmas, electric arc simulation, cold electrodes

Procedia PDF Downloads 109
276 Fe3O4 Decorated ZnO Nanocomposite Particle System for Waste Water Remediation: An Absorptive-Photocatalytic Based Approach

Authors: Prateek Goyal, Archini Paruthi, Superb K. Misra

Abstract:

Contamination of water resources has been a major concern, which has drawn attention to the need to develop new material models for the treatment of effluents. Existing conventional wastewater treatment methods are sometimes ineffective and uneconomical for remediating contaminants such as heavy metal ions (mercury, arsenic, lead, cadmium, and chromium), organic matter (dyes, chlorinated solvents), and high salt concentrations, which make water unfit for consumption. We believe that a nanotechnology-based strategy, in which nanoparticles are used as a tool to remediate a class of pollutants, would prove effective due to their high surface-area-to-volume ratio, selectivity, sensitivity, and affinity. In recent years, scientific advances have been made in studying the application of photocatalytic (ZnO, TiO2, etc.) and magnetic nanomaterials for remediating contaminants (such as heavy metals and organic dyes) from water and wastewater. Our study focuses on the synthesis of ZnO, Fe3O4, and Fe3O4-coated ZnO nanoparticulate systems and on monitoring their remediation efficiency for the simultaneous removal of heavy metals and dyes. A multitude of ZnO nanostructures (spheres, rods, and flowers), prepared via multiple routes (microwave and hydrothermal approaches), offers a wide range of light-active photocatalytic properties. The phase purity, morphology, size distribution, zeta potential, surface area, and porosity, in addition to the magnetic susceptibility of the particles, were characterized by XRD, TEM, CPS, DLS, BET, and VSM measurements, respectively. Furthermore, the introduction of crystalline defects into ZnO nanostructures can also assist in light activation for improved dye degradation. The band gap of a material and its absorbance are concrete indicators of its photocatalytic activity. Due to their high surface area, high porosity, affinity towards metal ions, and availability of active surface sites, iron oxide nanoparticles show promising application in the adsorption of heavy metal ions. An additional advantage of a magnetic nanocomposite is that it offers magnetic-field-responsive separation and recovery of the catalyst. Therefore, we believe that a ZnO-linked Fe3O4 nanosystem would be efficient and reusable. Combining improved photocatalytic efficiency with adsorption for environmental remediation has been a long-standing challenge, and the nanocomposite system offers the best features that the two individual metal oxides provide for nanoremediation.

Keywords: adsorption, nanocomposite, nanoremediation, photocatalysis

Procedia PDF Downloads 232
275 Electrospray Plume Characterisation of a Single Source Cone-Jet for Micro-Electronic Cooling

Authors: M. J. Gibbons, A. J. Robinson

Abstract:

Increasing expectations for small form factor electronics to be more compact while increasing performance have driven conventional cooling technologies to a thermal management threshold. An emerging solution to this problem is electrospray (ES) cooling. ES cooling enables two-phase cooling by utilising Coulomb forces for energy-efficient fluid atomization. The generated charged droplets are accelerated to the grounded target surface by the applied electric field and the surrounding gravitational force. While in transit, the like-charged droplets promote plume dispersion and inhibit droplet coalescence. If the electric field is increased in the cone-jet regime, a subsequent increase in the plume spray angle has been shown. Droplet segregation in the spray plume has been observed, with primary droplets in the plume core and satellite droplets positioned on the periphery of the plume. This segregation is facilitated by inertial and electrostatic effects, a result corroborated by numerous authors. These satellite droplets are usually more densely charged and move at a lower velocity relative to that of the spray core due to the radial decay of the electric field. Previous experimental research by Gomez and Tang has shown that the number of droplets deposited on the periphery can be up to twice that of the spray core. This result has been substantiated by numerical models derived by Wilhelm et al., Oh et al., and Yang et al. Yang et al. showed from their numerical model that varying the extractor potential varies the dispersion radius of the plume proportionally. This research aims to investigate this dispersion density and the role it plays in the local heat transfer coefficient profile (h) of ES cooling. This will be carried out for different extractor-target separation heights (H2), working fluid flow rates (Q), and applied extractor potentials (V2). The plume dispersion will be recorded by spraying a 25 µm thick, Joule-heated steel foil and recording the thermal footprint of the ES plume using a FLIR A-40 thermal imaging camera. The recorded results will then be analysed using in-house MATLAB code.
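
A minimal sketch of how a local heat transfer coefficient map could be extracted from the thermal footprint of a Joule-heated foil is shown below, using h = q'' / (T_wall - T_ref) pixel by pixel. Lateral conduction and radiation losses are neglected (the authors' in-house MATLAB analysis would presumably treat these properly), and the synthetic temperature field and electrical parameters are assumptions.

```python
import numpy as np

# h = q'' / (T_wall - T_ref) evaluated pixel by pixel on a synthetic IR image of
# a Joule-heated foil. Lateral conduction and radiation losses are neglected;
# the temperature field and electrical parameters are assumed for illustration.
V, I = 2.0, 30.0                      # assumed foil voltage [V] and current [A]
area = 0.05 * 0.05                    # assumed heated foil area [m^2]
q_flux = V * I / area                 # generated heat flux [W/m^2]
T_ref = 22.0                          # assumed reference (working fluid) temperature [degC]

# Synthetic thermal footprint: cooler where the electrospray plume impinges
x = np.linspace(-1.0, 1.0, 200)
X, Y = np.meshgrid(x, x)
T_wall = 60.0 - 25.0 * np.exp(-(X**2 + Y**2) / 0.15)

h_map = q_flux / (T_wall - T_ref)     # local heat transfer coefficient [W/m^2.K]
r = np.hypot(X, Y)
print(f"h near plume centre ~ {h_map[r < 0.05].mean():.0f} W/m^2K, "
      f"far field ~ {h_map[r > 0.9].mean():.0f} W/m^2K")
```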

Keywords: electronic cooling, electrospray, electrospray plume dispersion, spray cooling

Procedia PDF Downloads 382
274 Defence Ethics: A Performance Measurement Framework for the Defence Ethics Program

Authors: Allyson Dale, Max Hlywa

Abstract:

The Canadian public expects the highest moral standards from Canadian Armed Forces (CAF) members and Department of National Defence (DND) employees. The Chief, Professional Conduct and Culture (CPCC) stood up in April 2021 with the mission of ensuring that the defence culture and members' conduct are aligned with the ethical principles and values that the organization aspires towards. The Defence Ethics Program (DEP), which stood up in 1997, is a values-based ethics program for individuals and organizations within the DND/CAF and now falls under CPCC. The DEP is divided into five key functional areas: policy, communications, collaboration, training and education, and advice and guidance. The main focus of the DEP is to foster an ethical culture within defence so that members and organizations perform to the highest ethical standards. The measurement of organizational ethics is often complex and challenging. In order to monitor whether the DEP is achieving its intended outcomes, a performance measurement framework (PMF) was developed using the Director General Military Personnel Research and Analysis (DGMPRA) PMF development process. This evidence-based process draws on subject-matter expertise from the defence team. The goal of this presentation is to describe each stage of the DGMPRA PMF development process and to present and discuss the products of the DEP PMF (e.g., the logic model). Specifically, first, a strategic framework was developed to provide a high-level overview of the strategic objectives, mission, and vision of the DEP. Next, Key Performance Questions were created based on the objectives in the strategic framework. A logic model detailing the activities, outputs (what is produced by the program activities), and intended outcomes of the program was developed to demonstrate how the program works. Finally, Key Performance Indicators were developed based on both the intended outcomes in the logic model and the Key Performance Questions in order to monitor program effectiveness. The Key Performance Indicators measure aspects of organizational ethics such as ethical conduct and decision-making, DEP collaborations, and knowledge and awareness of the Defence Ethics Code, while leveraging ethics-related items from multiple DGMPRA surveys where appropriate.

Keywords: defence ethics, ethical culture, organizational performance, performance measurement framework

Procedia PDF Downloads 91
273 Communication in the Sciences: A Discourse Analysis of Biology Research Articles and Magazine Articles

Authors: Gayani Ranawake

Abstract:

Effective communication is widely regarded as an important aspect of any discipline. This particular study deals with written communication in science. Writing conventions and linguistic choices play a key role in conveying a message effectively to a target audience. Scientists are responsible for conveying their findings or research results not only to their discourse community but also to the general public. Recognizing appropriate linguistic choices is crucial, since they vary depending on the target audience. The majority of scientists can communicate effectively with their discourse community, but public engagement seems more challenging to them. There is a lack of research into the language use of scientists, in particular how it varies by discipline and audience (genre). A better understanding of the different linguistic conventions used in effective science writing by scientists for scientists and by scientists for the public will help to guide scientists who are familiar with their discourse community norms to write effectively for the public. This study investigates the differences and similarities in linguistic choices between biology research articles written by scientists for their discourse community and biology magazine articles written by scientists and science communicators for the general public. This study is part of a larger project investigating linguistic differences across different genres of science academic writing. The sample for this particular study is composed of 20 research articles from the journal Biological Reviews and 20 magazine articles from the magazine Australian Popular Science. Differences in linguistic devices were analyzed using Hyland's metadiscourse model for academic writing, proposed in 2005. The frequency of usage of interactive resources (transitions, frame markers, endophoric markers, evidentials, and code glosses) and interactional resources (hedges, boosters, attitude markers, self-mentions, and engagement markers) was compared and contrasted using the NVivo textual analysis tool. The results clearly show differences in the frequency of usage of interactional and interactive resources between the two genres under investigation. The findings of this study provide a reference guide for scientists and science writers to understand the differences in linguistic choices between the two genres. This will be particularly helpful for scientists who are proficient at writing for their discourse community, but not for the public.
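
A toy version of the frequency comparison underlying this kind of metadiscourse analysis is sketched below: occurrences of marker terms are counted and normalised per 1,000 words for each genre. The marker lists are small illustrative samples rather than Hyland's full 2005 taxonomy, and the two text snippets are placeholders, not the study corpus.

```python
import re

# Toy frequency comparison of metadiscourse markers per 1,000 words. The lists
# are small illustrative samples (not Hyland's full 2005 taxonomy) and the two
# text snippets are placeholders, not the study corpus.
markers = {
    "hedges": ["may", "might", "perhaps", "possibly", "suggest"],
    "boosters": ["clearly", "demonstrate", "in fact", "obviously"],
    "self-mentions": ["we", "our"],
}

def per_thousand(text, terms):
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(len(re.findall(r"\b" + re.escape(t) + r"\b", text.lower())) for t in terms)
    return 1000.0 * hits / max(len(words), 1)

research_article = "We suggest that the observed effect may possibly reflect sampling error."
magazine_article = "Scientists clearly demonstrate that, in fact, cells repair themselves."

for genre, text in [("research article", research_article),
                    ("magazine article", magazine_article)]:
    rates = {cat: round(per_thousand(text, terms), 1) for cat, terms in markers.items()}
    print(genre, rates)
```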

Keywords: discourse analysis, linguistic choices, metadiscourse, science writing

Procedia PDF Downloads 131
272 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids

Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho

Abstract:

In the present paper, the thermo-fluid dynamics of mixed convection (natural and forced convection) and the principles of turbulent flow around complex geometries have been studied. In these applications, it was necessary to analyze the interaction between the flow field and a heated immersed body with constant temperature on its surface. This paper presents a study of two-dimensional incompressible Newtonian flow around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code used for all simulations computes the temperature field with Dirichlet boundary conditions. Important quantities are calculated, including the Strouhal number, obtained using the Fast Fourier Transform (FFT), the Nusselt number, drag and lift coefficients, velocity, and pressure. Streamlines and isothermal lines are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space and second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, with the fractional step method used to couple the calculation of pressure, velocity, and temperature. For the simulation of turbulence, this work used the Smagorinsky and Spalart-Allmaras models. The first model is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum is equal to the energy dissipated by the convective effects. The Spalart-Allmaras model uses only one transport equation for the turbulent viscosity. The results were compared with reference numerical data, validating the heat transfer treatment together with the turbulence models. The IBM/VPM is a powerful tool to simulate flow around complex geometries. The results showed good numerical convergence with respect to the references adopted.
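
The FFT step mentioned for the Strouhal number can be sketched as follows: the dominant frequency of the lift-coefficient history is identified and non-dimensionalised with a characteristic length and free-stream velocity. The C_L signal, diameter, and velocity below are synthetic assumptions standing in for the simulation output.

```python
import numpy as np

# Strouhal number from the lift-coefficient history via FFT. A synthetic C_L
# signal stands in for the simulation output; D and U_inf are assumed values.
dt = 0.002                     # sampling interval of the C_L signal [s]
D, U_inf = 0.1, 1.0            # assumed characteristic length [m] and velocity [m/s]

t = np.arange(0.0, 60.0, dt)
f_shed = 2.0                   # shedding frequency buried in the synthetic signal [Hz]
cl = 0.8 * np.sin(2 * np.pi * f_shed * t)
cl += 0.05 * np.random.default_rng(2).standard_normal(t.size)

spec = np.abs(np.fft.rfft(cl - cl.mean()))
freqs = np.fft.rfftfreq(cl.size, dt)
f_peak = freqs[spec.argmax()]
St = f_peak * D / U_inf
print(f"dominant frequency = {f_peak:.3f} Hz, Strouhal number = {St:.3f}")
```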

Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model

Procedia PDF Downloads 106
271 Library Outreach After COVID: Making the Case for In-Person Library Visits

Authors: Lucas Berrini

Abstract:

Academic libraries have always struggled with engaging students and faculty. Striking the balance between what the community needs and what the library can afford has also been a point of contention for libraries. As academia begins to return to a new normal after COVID, library staff are rethinking how to remind patrons that the library is open and ready for business. NC Wesleyan, a small liberal arts school in eastern North Carolina, decided to be proactive and reach out to the academic community. After shutting down in 2020 for COVID, the campus library saw a marked decrease in in-person attendance. For a small school whose operational budget was tied directly to tuition payments, it was imperative for the library to remind faculty and staff that it was open for business. At the beginning of the Summer 2022 term and continuing into the fall, the reference team created a marketing plan using email, physical meetings, and virtual events targeted at students and faculty as well as community members who had utilized the facilities prior to COVID. The email blasts were gentle reminders that the building was open and available for use. The target audience was the community at large. Several of the emails contained reminders of previous events in the library that were student-centered. The next phase of the email campaign centers on reminding the community about the library's physical and electronic resources, including the makerspace lab. The language indicates that student voices are needed, and a QR code is included for students to leave feedback on what they want to see in the library. The final phase of the email blasts was faculty-focused and invited them to connect with library reference staff for an in-person consultation on their research needs. While this phase is ongoing, the response has been positive, and staff are compiling data in hopes of working with administration to implement some of the requested services and materials. These email blasts will be followed up by in-person meetings with the faculty and students who responded via the QR codes. This research is ongoing. This type of targeted outreach is new for Wesleyan. It is the hope of the library that by the end of Fall 2022, there will be a plan in place to address the needs and concerns of the students and faculty. Furthermore, the staff hope to create a new sense of community for the students and staff of the university.

Keywords: academic, education, libraries, outreach

Procedia PDF Downloads 80
270 A PHREEQC Reactive Transport Simulation for Simply Determining Scaling during Desalination

Authors: Andrew Freiburger, Sergi Molins

Abstract:

Freshwater is a vital resource, yet the supply of clean freshwater is diminishing as a consequence of melting snow and ice from global warming, pollution from industry, and increasing demand from human population growth. The unsustainable trajectory of diminishing water resources is projected to jeopardize water security for billions of people in the 21st century. Membrane desalination technologies may resolve the growing discrepancy between supply and demand by filtering arbitrary feed water into a fraction of renewable, clean water and a fraction of highly concentrated brine. The leading hindrance to membrane desalination is fouling, whereby the highly concentrated brine solution encourages micro-organismal colonization and/or the precipitation of occlusive minerals (i.e., scale) upon the membrane surface. Thus, an understanding of brine formation is necessary to mitigate membrane fouling and to develop efficacious desalination technologies that can bolster the supply of available freshwater. This study presents a reactive transport simulation of brine formation and scale deposition during reverse osmosis (RO) desalination. The simulation conceptually represents the RO module as a one-dimensional domain, where feed water enters the domain with a prescribed fluid velocity and is iteratively concentrated in the immobile layer of a dual-porosity model. The geochemical code PHREEQC numerically evaluated the conceptual model with parameters for the BW30-400 RO module and for real feed water sources, e.g., the Red and Mediterranean seas and produced waters from American oil wells, based upon peer-reviewed data. The presented simulation is computationally simpler, and hence less resource intensive, than existing, more rigorous simulations of desalination phenomena such as TOUGHREACT. The end user may readily prepare input files and execute simulations on a personal computer with open-source software. The graphical results of fouling potential and brine characteristics may therefore be particularly useful as an initial tool for screening candidate feed water sources and/or informing the selection of an RO module.
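
A first-pass screen of the kind this PHREEQC model refines can be sketched with a simple concentration-factor calculation and a crude gypsum saturation check. Ignoring activity coefficients, ion pairing, and temperature (which PHREEQC handles properly) overstates the true saturation, which is precisely why the full geochemical calculation is worthwhile; the feed composition and recovery below are assumed values.

```python
import math

# Concentrate the feed by the recovery ratio and compare a crude gypsum ion
# product against its solubility product. Activity coefficients, ion pairing,
# and temperature dependence (handled properly by PHREEQC) are ignored, so the
# crude SI overstates the real saturation; all input values are assumed.
feed_molality = {"Ca": 0.0103, "SO4": 0.0282}    # assumed seawater-like feed [mol/kgw]
recovery = 0.45                                   # fraction of feed recovered as permeate

cf = 1.0 / (1.0 - recovery)                       # brine concentration factor
brine = {ion: m * cf for ion, m in feed_molality.items()}

ksp_gypsum = 10.0 ** -4.58                        # commonly tabulated 25 degC value
ion_product = brine["Ca"] * brine["SO4"]          # concentrations used in place of activities
si_crude = math.log10(ion_product / ksp_gypsum)   # positive value flags a scaling risk

print(f"concentration factor = {cf:.2f}")
print(f"gypsum ion product = {ion_product:.2e}, crude SI = {si_crude:+.2f}")
```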

Keywords: desalination, PHREEQC, reactive transport, scaling

Procedia PDF Downloads 123
269 Comparison of Monte Carlo Simulations and Experimental Results for the Measurement of Complex DNA Damage Induced by Ionizing Radiations of Different Quality

Authors: Ifigeneia V. Mavragani, Zacharenia Nikitaki, George Kalantzis, George Iliakis, Alexandros G. Georgakilas

Abstract:

Complex DNA damage, consisting of a combination of DNA lesions such as Double Strand Breaks (DSBs) and non-DSB base lesions occurring in a small volume, is considered one of the most important biological endpoints regarding ionizing radiation (IR) exposure. Strong theoretical (Monte Carlo simulations) and experimental evidence suggests an increase in the complexity of DNA damage, and therefore in repair resistance, with increasing linear energy transfer (LET). Experimental detection of complex (clustered) DNA damage is often associated with technical deficiencies limiting its measurement, especially in cellular or tissue systems. Our groups have recently made significant improvements towards the identification of key parameters relating to the efficient detection of complex DSBs and non-DSBs in human cellular systems exposed to IR of varying quality (γ- and X-rays 0.3-1 keV/μm, α-particles 116 keV/μm, and 36Ar ions 270 keV/μm). The induction and processing of DSB and non-DSB oxidative clusters were measured using adaptations of immunofluorescence (γH2AX or 53BP1 foci staining as DSB probes, and the human repair enzymes OGG1 or APE1 as probes for oxidized purines and abasic sites, respectively). In the current study, Relative Biological Effectiveness (RBE) values for DSB and non-DSB induction have been measured in different human normal (FEP18-11-T1) and cancerous cell lines (MCF7, HepG2, A549, MO59K/J). The experimental results are compared to simulation data obtained using a validated microdosimetric fast Monte Carlo DNA Damage Simulation code (MCDS). Moreover, this simulation approach is applied to two realistic clinical cases, i.e., prostate cancer treatment using X-rays generated by a linear accelerator and a pediatric osteosarcoma case using a 200.6 MeV proton pencil beam. RBE values for complex DNA damage induction are calculated for the tumor areas. These results reveal a disparity between theory and experiment and underline the necessity of implementing highly precise and more efficient experimental and simulation approaches.
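
The RBE values discussed here follow the standard definition: the dose of the reference radiation divided by the dose of the test radiation that produces the same biological effect, which for yields linear in dose reduces to a ratio of yields per Gy. The sketch below uses placeholder yields, not the study's measured data.

```python
# RBE at equal effect: dose of the reference radiation divided by the dose of the
# test radiation giving the same effect. For damage yields linear in dose this
# reduces to a ratio of yields per Gy. The yields below are placeholders only.
yield_ref_per_Gy = 25.0     # assumed complex clusters per cell per Gy, gamma reference
yield_test_per_Gy = 70.0    # assumed yield for a high-LET test beam

rbe = yield_test_per_Gy / yield_ref_per_Gy
print(f"RBE for cluster induction (linear regime) = {rbe:.2f}")

# Equivalent iso-effect view: the test-beam dose giving the same cluster count
# as 2.0 Gy of the reference radiation.
dose_ref = 2.0
dose_test = dose_ref * yield_ref_per_Gy / yield_test_per_Gy
print(f"iso-effect doses {dose_ref:.2f} Gy vs {dose_test:.2f} Gy -> RBE = {dose_ref / dose_test:.2f}")
```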

Keywords: complex DNA damage, DNA damage simulation, protons, radiotherapy

Procedia PDF Downloads 306
268 The Mitigation of Quercetin on Lead-Induced Neuroinflammation in a Rat Model: Changes in Neuroinflammatory Markers and Memory

Authors: Iliyasu Musa Omoyine, Musa Sunday Abraham, Oladele Sunday Blessing, Iliya Ibrahim Abdullahi, Ibegbu Augustine Oseloka, Nuhu Nana-Hawau, Animoku Abdulrazaq Amoto, Yusuf Abdullateef Onoruoiza, Sambo Sohnap James, Akpulu Steven Peter, Ajayi Abayomi

Abstract:

The neuroprotective role of inflammation against detrimental intrinsic and extrinsic factors has been reported. However, the overactivation of astrocytes and microglia due to lead toxicity produces excessive pro-inflammatory cytokines, mediating neurodegenerative diseases. The present study investigated the mitigatory effects of quercetin on neuroinflammation and its correlation with memory function in lead-exposed rats. In this study, Wistar rats were orally administered quercetin (Q: 60 mg/kg) and succimer as a standard drug (S: 10 mg/kg) for 21 days after 21 days of lead exposure (Pb: 125 mg/kg), or in combination with Pb, once daily for 42 days. Working and reference memory were assessed using an eight-arm radial water maze (8-ARWM). The changes in brain lead level, neuronal nitric oxide synthase (nNOS) activity, and the levels of neuroinflammatory markers such as tumour necrosis factor-alpha (TNF-α) and interleukin-1 beta (IL-1β) were determined. Astrocyte expression was evaluated immunohistochemically. The results showed that the brain lead level was significantly increased in lead-exposed rats. The expression of astrocytes increased in the CA3 and CA1 regions of the hippocampus, and the levels of brain TNF-α and IL-1β increased in lead-exposed rats. Lead impaired reference and working memory by increasing reference memory errors and working memory incorrect errors in lead-exposed rats. However, quercetin treatment effectively improved memory and inhibited neuroinflammation by reducing astrocyte expression and the levels of TNF-α and IL-1β. The expression of astrocytes and the levels of TNF-α and IL-1β correlated with memory function. A possible explanation for quercetin's anti-neuroinflammatory effect is that it modulates the activity of cellular proteins involved in the inflammatory response; inhibits the transcription factor nuclear factor-kappa B (NF-κB), which regulates the expression of proinflammatory molecules; inhibits kinases required for the synthesis of glial fibrillary acidic protein (GFAP) and modifies the phosphorylation of some proteins, which affects the structure and function of intermediate filament proteins; and, lastly, induces cyclic-AMP response element-binding protein (CREB) activation and neurogenesis as compensatory mechanisms for memory deficits and neuronal cell death. In conclusion, the levels of neuroinflammatory markers negatively correlated with memory function. Thus, quercetin may be a promising therapy for neuroinflammation and memory dysfunction in populations prone to lead exposure.

Keywords: lead, quercetin, neuroinflammation, memory

Procedia PDF Downloads 31
267 Understanding the Influence of Fibre Meander on the Tensile Properties of Advanced Composite Laminates

Authors: Gaoyang Meng, Philip Harrison

Abstract:

When manufacturing composite laminates, the fibre directions within the laminate are never perfectly straight and inevitably contain some degree of stochastic in-plane waviness or 'meandering'. In this work, we aim to understand the relationship between the degree of meandering of the fibre paths and the resulting uncertainty in the laminate's final mechanical properties. To do this, a numerical tool is developed to automatically generate meandering fibre paths in each of the laminate's 8 plies (using Matlab), and after mapping this information into finite element simulations (using Abaqus), the statistical variability of the tensile mechanical properties of a [45°/90°/-45°/0°]s carbon/epoxy (IM7/8552) laminate is predicted. The stiffness, first-ply failure strength, and ultimate failure strength are obtained. Results are generated by inputting the degree of variability in the fibre paths, and the laminate is then examined in all directions (from 0° to 359° in increments of 1°). The resulting predictions are output as flower (polar) plots for convenient analysis. The average fibre orientation of each ply in a given laminate is determined by the laminate layup code [45°/90°/-45°/0°]s. However, in each case, the plies contain increasingly large amounts of in-plane waviness (quantified by the standard deviation of the fibre direction in each ply across the laminate). Four different amounts of variability in the fibre direction are tested (2°, 4°, 6°, and 8°). Results show that both the average tensile stiffness and the average tensile strength decrease, while their standard deviations increase, with an increasing degree of fibre meander. The variability in stiffness is found to be relatively insensitive to the rotation angle, but the variability in strength is sensitive to it. Specifically, the uncertainty in laminate strength is relatively low at orientations centred around multiples of 45° rotation angle and relatively high between these rotation angles. To concisely represent all the information contained in the various polar plots, rotation-angle-dependent Weibull distribution equations are fitted to the data. The resulting equations can be used to quickly estimate the size of the error bars for the different mechanical properties resulting from the amount of fibre directional variability contained within the laminate. A longer-term goal is to use these equations to quickly introduce realistic variability at the component level.
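
A much-reduced sketch of the idea is given below: the nominal ply angles of the [45°/90°/-45°/0°]s layup are perturbed with a prescribed standard deviation and the scatter is propagated to the laminate tensile modulus through classical lamination theory. A single angle offset per ply stands in for the spatially meandering paths that the actual Matlab/Abaqus tool generates, and the lamina constants are representative IM7/8552-like values assumed for illustration.

```python
import numpy as np

# Perturb the nominal ply angles of [45/90/-45/0]s with a prescribed standard
# deviation and propagate the scatter to the laminate modulus Ex via classical
# lamination theory. A single angle offset per ply stands in for the spatially
# meandering paths of the full study; lamina constants are representative
# IM7/8552-like values and the ply thickness is assumed.
E1, E2, G12, nu12 = 161e9, 11.4e9, 5.17e9, 0.32
t_ply = 0.131e-3
nu21 = nu12 * E2 / E1
den = 1.0 - nu12 * nu21
Q11, Q22, Q12, Q66 = E1 / den, E2 / den, nu12 * E2 / den, G12

def qbar(theta_deg):
    """Transformed reduced stiffness matrix of one ply."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    q = np.empty((3, 3))
    q[0, 0] = Q11*c**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*s**4
    q[1, 1] = Q11*s**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*c**4
    q[0, 1] = q[1, 0] = (Q11 + Q22 - 4*Q66)*s**2*c**2 + Q12*(s**4 + c**4)
    q[0, 2] = q[2, 0] = (Q11 - Q12 - 2*Q66)*s*c**3 + (Q12 - Q22 + 2*Q66)*s**3*c
    q[1, 2] = q[2, 1] = (Q11 - Q12 - 2*Q66)*s**3*c + (Q12 - Q22 + 2*Q66)*s*c**3
    q[2, 2] = (Q11 + Q22 - 2*Q12 - 2*Q66)*s**2*c**2 + Q66*(s**4 + c**4)
    return q

def laminate_Ex(angles_deg):
    A = sum(qbar(th) * t_ply for th in angles_deg)    # extensional stiffness matrix
    h = t_ply * len(angles_deg)
    return 1.0 / (h * np.linalg.inv(A)[0, 0])         # engineering modulus Ex

nominal = np.array([45.0, 90.0, -45.0, 0.0, 0.0, -45.0, 90.0, 45.0])   # [45/90/-45/0]s
rng = np.random.default_rng(3)
for sigma in (2.0, 4.0, 6.0, 8.0):                    # fibre-angle scatter [deg]
    Ex = [laminate_Ex(nominal + rng.normal(0.0, sigma, nominal.size))
          for _ in range(2000)]
    print(f"sigma = {sigma:.0f} deg: mean Ex = {np.mean(Ex)/1e9:.1f} GPa, "
          f"std = {np.std(Ex)/1e9:.2f} GPa")
```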

Keywords: advanced composite laminates, FE simulation, in-plane waviness, tensile properties, uncertainty quantification

Procedia PDF Downloads 78
266 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor

Authors: Hao Yan, Xiaobing Zhang

Abstract:

The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood, and it is imperative to explore their unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) for predicting the parameters of a miniaturized SRM during its conceptual design phase. First, the design variables and objectives are constrained by a lumped parameter model (LPM) of the SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) in MOEAs is usually large, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model because of its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is therefore a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with 3D numerical simulation data. In the design, the original LPM is replaced by the surrogate model. Each case uses the same MOEAs; the calculation times of the two models are compared, and their optimization results are compared with 3D simulation results. Using the surrogate model in the parameter prediction process of the miniaturized SRM results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model (LPM). This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
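
The surrogate-modelling pattern described here can be sketched in a few lines: fit a neural network to precomputed simulation samples, then query it in place of the expensive model during the design search. The toy function below stands in for the 3D SRM simulation, scikit-learn's MLPRegressor stands in for the authors' ANN, and the design-variable names are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fit a neural network to precomputed samples of an expensive model, then query
# the surrogate during the design search. The analytic toy function stands in
# for the 3D SRM simulation; the two inputs loosely play the role of design
# variables (e.g., throat diameter, grain web thickness) and are assumptions.
rng = np.random.default_rng(4)
X = rng.uniform([1.0, 2.0], [5.0, 10.0], size=(400, 2))     # sampled design points
y = 3.0 * np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 1.5            # stand-in "simulation" output
y += 0.05 * rng.standard_normal(400)                        # a little noise

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
surrogate.fit(X[:300], y[:300])
print(f"R^2 on held-out samples: {surrogate.score(X[300:], y[300:]):.3f}")

# Cheap surrogate-based search over a dense grid (an MOEA would query it the same way)
grid = np.array([[a, b] for a in np.linspace(1.0, 5.0, 50)
                        for b in np.linspace(2.0, 10.0, 50)])
pred = surrogate.predict(grid)
print(f"surrogate-predicted optimum near x = {grid[pred.argmax()]}, value = {pred.max():.2f}")
```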

Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model

Procedia PDF Downloads 77
265 Dynamic EEG Desynchronization in Response to Vicarious Pain

Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy

Abstract:

The psychological construct of empathy involves understanding another person's cognitive perspective and experiencing that person's emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. The study addresses empathy as a nonlinear dynamic process of simulation by which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; thus, mu rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or simulating action, mu rhythms become suppressed or desynchronized; thus, they should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses to empathy and psychopathy scales in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the ‘pain matrix’. Physical and social pain activate this network to produce vicarious pain responses during empathic processing. Five single EEG electrode locations were applied over sensorimotor regions, measuring electrical activity in microvolts (μV), to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Data for each participant's mu rhythms were analyzed via the Fast Fourier Transform (FFT) and multifractal time series analysis.
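
Mu-rhythm desynchronization is typically quantified as the drop in 8-13 Hz spectral power during action observation relative to baseline. The sketch below shows one conventional way to compute this from a single electrode's signal using Welch's method; it is a generic illustration on synthetic data, not the authors' analysis pipeline, and only the 200 Hz sampling rate is taken from the abstract.

```python
import numpy as np
from scipy.signal import welch

FS = 200  # sampling rate in Hz, as reported in the abstract

def mu_band_power(signal, fs=FS, band=(8, 13)):
    """Average power spectral density in the mu band (8-13 Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic example: baseline with a strong 10 Hz component,
# observation period with that component attenuated (desynchronized).
t = np.arange(0, 300, 1 / FS)                       # one 5-minute condition
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
observe = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

erd = 100 * (mu_band_power(observe) - mu_band_power(baseline)) / mu_band_power(baseline)
print(f"mu power change during observation: {erd:.1f}%")  # negative = desynchronization
```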

Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition

Procedia PDF Downloads 273
264 Health Psychology Intervention: Identifying Early Symptoms in Neurological Disorders

Authors: Simon B. N. Thompson

Abstract:

An early indicator of neurological disease has been proposed by the expanded Thompson Cortisol Hypothesis, which suggests that yawning is linked to rises in cortisol levels. Cortisol is essential to the regulation of the immune system, and pathological yawning is a symptom of multiple sclerosis (MS). Electromyographic (EMG) activity in the jaw muscles typically rises when the muscles are moved (extended or flexed), and yawning has been shown to be highly correlated with cortisol levels in healthy people. It is likely that these elevated cortisol levels are also seen in people with MS. The possible link between EMG in the jaw muscles and rises in saliva cortisol levels during yawning was investigated in a randomized controlled trial of 60 volunteers aged 18-69 years who were exposed to conditions designed to elicit the yawning response. Saliva samples were collected at the start and after yawning, or, in the absence of a yawn, at the end of the presentation of yawning-provoking stimuli; EMG data were additionally collected during rest and yawning phases. Hospital Anxiety and Depression Scale, Yawning Susceptibility Scale and General Health Questionnaire scores, together with demographic and health details, were collected, and the following exclusion criteria were adopted: chronic fatigue, diabetes, fibromyalgia, heart condition, high blood pressure, hormone replacement therapy, multiple sclerosis, and stroke. Significant differences were found between the saliva cortisol samples for the yawners, t(23) = -4.263, p < 0.001, whereas the corresponding difference for the non-yawners between rest and post-stimulus was non-significant. There were also significant differences between yawners and non-yawners for the EMG potentials, with the yawners having higher rest and post-yawning potentials. Significant evidence was found to support the Thompson Cortisol Hypothesis, suggesting that rises in cortisol levels are associated with the yawning response. Further research is underway to explore the use of cortisol as a potential diagnostic tool to assist the early diagnosis of symptoms related to neurological disorders. Bournemouth University Research & Ethics approval granted: JC28/1/13-KA6/9/13. Professional code of conduct, confidentiality, and safety issues have been addressed and approved in the Ethics submission. Trials identification number: ISRCTN61942768. http://www.controlled-trials.com/isrctn/
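
The key comparison reported above is a within-group test of cortisol change from rest to post-yawn in the yawners. The sketch below shows how such a paired comparison could be run in Python on hypothetical cortisol values; the data are invented for illustration and do not reproduce the study's measurements.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical saliva cortisol (nmol/L) for 24 yawners: at rest and after yawning.
rng = np.random.default_rng(42)
rest = rng.normal(loc=8.0, scale=2.0, size=24)
post_yawn = rest + rng.normal(loc=1.5, scale=1.0, size=24)  # assumed post-yawn rise

t_stat, p_value = ttest_rel(rest, post_yawn)
print(f"paired t-test: t({len(rest) - 1}) = {t_stat:.3f}, p = {p_value:.4f}")
```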

Keywords: cortisol, electromyography, neurology, yawning

Procedia PDF Downloads 577
263 The Participation of Graduates and Students of Social Work in the Erasmus Program: A Case Study in the Portuguese Context - The Polytechnic of Leiria

Authors: Cezarina da Conceição Santinho Maurício, José Duque Vicente

Abstract:

Established in 1987, the Erasmus Programme is a program for the exchange of higher education students. It serves several purposes. The mobility developed has contributed to the promotion of multiple forms of learning, the internalization of a feeling of belonging to a community, and the consolidation of cooperation between entities or universities. It also allows participants to gain a European experience, with multilingualism considered one of the bases of the European project and a vehicle to achieve union in diversity. The program has progressed and introduced changes; Erasmus+ currently offers a wide range of opportunities for higher education, vocational education and training, school education, adult education, youth, and sport. These opportunities are open to students and other stakeholders, such as teachers. Portugal was one of the countries that readily adhered to this program, adopting it as an instrument of internationalization of polytechnic and university higher education. Students and social work teachers have been involved in this mobility of learning and multicultural interaction. The presence and activation of this program were made possible by Portugal's accession to the European Union. This was reflected in the field of Portuguese social work and contributed to its approach to the reality of European social work. Historically, Portuguese social work has built a close connection with the Latin American world and, in particular, with Brazil. Several examples can be identified across the different historical stages: the post-revolution period of 1974 and the presence of the reconceptualization movement, the struggle for enrollment in the higher education circuit, the process of winning a bachelor's degree, and postgraduate training (the first doctorates in social work were carried out in Brazilian universities). This influence is also found in the authors cited and the theoretical references used. This study examines the participation of graduates and students of social work in the Erasmus program. The following specific goals were outlined: to identify the host countries and universities; to investigate the dimension and type of mobility undertaken; to understand the learning and experiences acquired; to identify the difficulties felt; and to capture participants' perspectives on social work and the contribution of this experience to their training. In the methodological field, the option fell on a qualitative methodology, with semi-structured interviews applied to graduates and students of social work with Erasmus mobility experience. Once the graduates agreed, the interviews were recorded, transcribed, and analyzed according to the previously defined analysis categories. The findings emphasize the importance of this experience for students and graduates in informal and formal learning. The authors conclude with recommendations to reinforce this mobility, either at the individual level or as a project built for the group or collective.

Keywords: erasmus programme, graduates and students of social work, participation, social work

Procedia PDF Downloads 137
262 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before transmission over wireless channels. The High Efficiency Video Codec (HEVC), also called H.265, is the latest state-of-the-art video coding standard, developed through the joint effort of the ITU-T and ISO/IEC teams. HEVC is targeted at high-resolution videos, such as 4K or 8K, that can fulfil the recent demand for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video decreases drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimal FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
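
To make the transmission chain concrete, the sketch below simulates the modulation and channel portion of the pipeline: a random bitstream (standing in for channel-coded HEVC output) is mapped to QPSK symbols, passed through an AWGN channel at a chosen SNR, demodulated, and the bit error rate is measured. It is a simplified illustration, not the full HEVC/FEC chain, and QPSK is used in place of higher-order QAM for brevity.

```python
import numpy as np

def qpsk_over_awgn(bits, snr_db, rng):
    """Map bits to Gray-coded QPSK, add complex AWGN, hard-demodulate."""
    b = bits.reshape(-1, 2)
    symbols = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)  # unit energy
    snr = 10 ** (snr_db / 10)
    noise_std = np.sqrt(1 / (2 * snr))                                    # per real dimension
    rx = symbols + noise_std * (rng.standard_normal(symbols.size)
                                + 1j * rng.standard_normal(symbols.size))
    rx_bits = np.column_stack([(rx.real < 0), (rx.imag < 0)]).astype(int).ravel()
    return rx_bits

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=200_000)          # stand-in for a coded bitstream
for snr_db in (0, 5, 10):
    ber = np.mean(bits != qpsk_over_awgn(bits, snr_db, rng))
    print(f"SNR = {snr_db:2d} dB -> BER = {ber:.4f}")
```

In a full simulation, an FEC decoder would sit between the demodulator and the HEVC decoder, trading redundancy for a lower residual bit error rate.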

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 138
261 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing

Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou

Abstract:

The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, the embedded system-on-a-chip (SoC) has come to contain a coarse-granularity multi-core CPU (central processing unit) and a mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupling these three processors. The CPU and GPU offload some of the computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common scenarios at present, applications usually utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. Therefore, a systematic approach is needed that takes advantage of write-once-run-anywhere portability, achieves high execution performance for modules mapped to the various architectures, and facilitates design-space exploration. In this paper, a servant-execution-flow model is proposed to abstract the cooperation of the heterogeneous processors; it supports task partitioning, communication, and synchronization. At its first run, the intermediate language, represented by a data flow diagram, can generate the executable code of the target processor or be converted into high-level programming languages. The instantiation parameters efficiently control the relationship between modules and computational units, including the mapping of the two hierarchical processing units and the adjustment of data-level parallelism. An embedded three-dimensional waveform oscilloscope system is selected as a case study. The performance of algorithms such as contrast stretching is analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system, using less than 35% of the resources, achieves performance similar to the pure-FPGA implementation with comparable energy efficiency.
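
To give a flavour of the kind of abstraction such a model provides, the sketch below defines a tiny task graph in which each module declares the processing unit it targets and a host-side runtime dispatches modules in dependency order. It is a hypothetical Python illustration of the programming model only; the class names, device labels and toy pipeline are invented, and no FPGA/GPU code generation is shown.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Servant:
    """One module of the execution flow, bound to a target processing unit."""
    name: str
    device: str                      # "FPGA", "CPU" or "GPU" (hypothetical labels)
    func: Callable[[Dict], Dict]
    deps: List[str] = field(default_factory=list)

def run_flow(servants: List[Servant], inputs: Dict) -> Dict:
    """Dispatch servants in dependency order, passing a shared data dictionary."""
    done, data = set(), dict(inputs)
    while len(done) < len(servants):
        for s in servants:
            if s.name not in done and all(d in done for d in s.deps):
                print(f"[{s.device}] running {s.name}")
                data.update(s.func(data))
                done.add(s.name)
    return data

# Toy oscilloscope-like pipeline: acquire on FPGA, stretch contrast on GPU, render on CPU.
flow = [
    Servant("acquire", "FPGA", lambda d: {"samples": [3, 7, 1, 9]}),
    Servant("contrast", "GPU",
            lambda d: {"scaled": [x / max(d["samples"]) for x in d["samples"]]},
            deps=["acquire"]),
    Servant("render", "CPU", lambda d: {"frame": d["scaled"]}, deps=["contrast"]),
]
print(run_flow(flow, {}))
```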

Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation

Procedia PDF Downloads 101
260 Factors Influencing Telehealth Services for Diabetes Care in Nepal: A Mixed Method Study

Authors: Sumitra Sharma, Christina Parker, Kathleen Finlayson, Clint Douglas, Niall Higgins

Abstract:

Background: Telehealth services have the potential to increase the accessibility, utilization, and effectiveness of healthcare services. As telehealth services are yet to be integrated within regular hospital services in Nepal, their use among adults with diabetes is scarce. Prior to implementing telehealth services for adults with diabetes, it is necessary to examine the influencing factors. Objective: This study aimed to investigate factors influencing telehealth services for diabetes care in Nepal. Methods: This study used a mixed-methods design, which included a cross-sectional survey among adults with diabetes and semi-structured interviews among key healthcare professionals of Nepal. The study was conducted in the medical out-patient department of a tertiary hospital of Nepal. The survey adapted a previously validated questionnaire, while the semi-structured interview questions were developed from a literature review and expert consultation. All interviews were audio-recorded, and inductive content analysis was used to code transcripts and develop themes. For the survey, descriptive analysis, the chi-square test, and the Mann-Whitney U test were used to analyze the data. Results: One hundred adults with diabetes participated in the survey, and seven healthcare professionals were recruited for interviews. In the survey, just over half of the participants (53%) were male, and the rest were female. Almost all participants (98%) owned a mobile phone, and 67% of them had a computer with internet access at home. The majority of participants had experience in using Facebook Messenger (95%), followed by Viber (60%) and Zoom (26%). Almost all of the participants (96%) were willing to use telehealth services. Female sex and living 10 km away from the hospital were significantly associated with willingness to use telehealth services. Participants' self-perception of good health status was significantly associated with their willingness to use video-conference calls and phone calls for telehealth services. Seven themes were developed from the interview data, relating to the predisposing, reinforcing, and enabling factors influencing telehealth services for diabetes care in Nepal. Conclusion: In summary, several factors were found to influence the use of telehealth services for diabetes care in Nepal. For the effective implementation of sustainable telehealth services for adults with diabetes in Nepal, these factors need to be considered.
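
The quantitative associations reported above (for example, between willingness to use telehealth and sex or distance from the hospital) are the kind assessed by a chi-square test of independence and a Mann-Whitney U test. The sketch below shows how such tests could be run in Python on a small invented contingency table and two invented score groups; the numbers are illustrative only and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Hypothetical 2x2 table: willingness to use telehealth (yes/no) by sex (male/female).
table = np.array([[50, 3],    # male:   willing, not willing
                  [46, 1]])   # female: willing, not willing
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square({dof}) = {chi2:.3f}, p = {p:.3f}")

# Hypothetical self-perceived health scores by willingness to use video calls.
willing = np.array([4, 5, 4, 3, 5, 4, 4, 5])
not_willing = np.array([3, 2, 3, 4, 2, 3])
u_stat, p_u = mannwhitneyu(willing, not_willing)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.3f}")
```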

Keywords: contributing factors, diabetes mellitus, developing countries, telemedicine, telecare

Procedia PDF Downloads 59
259 Approach-Avoidance Conflict in the T-Maze: Behavioral Validation for Frontal EEG Activity Asymmetries

Authors: Eva Masson, Andrea Kübler

Abstract:

Anxiety disorders (AD) are the most prevalent psychological disorders. However, only a minority of affected individuals are diagnosed and receive treatment. This gap is probably due to the diagnostic criteria, which rely on symptoms (according to the DSM-5 definition) with no objective biomarker. Approach-avoidance conflict tasks are one common approach to simulating such disorders in a lab setting, with most paradigms focusing on the relationships between behavior and neurophysiology. Approach-avoidance conflict tasks typically place participants in a situation where they have to make a decision that leads to both positive and negative outcomes, thereby sending conflicting signals that trigger the Behavioral Inhibition System (BIS). Furthermore, behavioral validation of such paradigms adds credibility to the tasks: with overt conflict behavior, it is safer to assume that the task actually induced a conflict. Some of these tasks have linked asymmetrical frontal brain activity to induced conflicts and the BIS. However, there is currently no consensus on the direction of the frontal activation. The authors present here a modified version of the T-Maze paradigm, a motivational conflict desktop task, in which behavior is recorded simultaneously with high-density EEG (HD-EEG). Methods: In this within-subject design, the HD-EEG and behavior of 35 healthy participants were recorded. EEG data were collected with a 128-channel sponge-based system. The motivational conflict desktop task consisted of three blocks of repeated trials. Each block was designed to record a slightly different behavioral pattern, to increase the chances of eliciting conflict. These behavioral patterns were, however, similar enough to allow comparison of the number of trials categorized as ‘overt conflict’ between the blocks. Results: Overt conflict behavior was exhibited in all blocks, but on average for under 10% of the trials in each block. However, changing the order of the paradigms successfully introduced a ‘reset’ of the conflict process, thereby providing more trials for analysis. As for the EEG correlates, the authors expect a different pattern for trials categorized as conflict compared to the others. More specifically, we expect elevated alpha-frequency power in the left frontal electrodes at around 200 ms post-cueing compared to the right (relatively higher right frontal activity), followed by an inversion around 600 ms later. Conclusion: With this comprehensive approach to a psychological mechanism, new evidence would be brought to the frontal asymmetry discussion and its relationship with the BIS. Furthermore, with the present task focusing on a very particular type of motivational approach-avoidance conflict, it opens the door to further variations of the paradigm to introduce the different kinds of conflict involved in AD. Even though its application as a potential biomarker appears difficult, because of the individual reliability of both the task and peak frequency in the alpha range, we hope to open the discussion on task robustness for future neuromodulation and neurofeedback applications.
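
The expected EEG effect is an asymmetry in frontal alpha power between left and right electrodes around cue onset. A common way to quantify this is a frontal alpha asymmetry index, ln(alpha power, right) - ln(alpha power, left). The sketch below computes that index from two synthetic frontal channels using Welch's method; it is a generic illustration, not the authors' HD-EEG pipeline, and the 500 Hz sampling rate and epoch length are assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 500  # assumed sampling rate in Hz

def alpha_power(signal, fs=FS, band=(8, 13)):
    """Mean power spectral density in the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_asymmetry(left, right):
    """Positive values indicate relatively greater right-frontal alpha power."""
    return np.log(alpha_power(right)) - np.log(alpha_power(left))

# Synthetic 2-second epochs for left (F3-like) and right (F4-like) frontal channels.
rng = np.random.default_rng(3)
t = np.arange(0, 2, 1 / FS)
left = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
right = 0.6 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

print(f"frontal alpha asymmetry index: {frontal_asymmetry(left, right):+.3f}")
```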

Keywords: anxiety, approach-avoidance conflict, behavioral inhibition system, EEG

Procedia PDF Downloads 27
258 Mobile Phone Text Reminders and Voice Call Follow-ups Improve Attendance for Community Retail Pharmacy Refills: Learnings from the Lango Sub-region in Northern Uganda

Authors: Jonathan Ogwal, Louis H. Kamulegeya, John M. Bwanika, Davis Musinguzi

Abstract:

Introduction: Community retail pharmacy drug distribution points (CRPDDP) were implemented in the Lango sub-region as part of the Ministry of Health's response to improving access and adherence to antiretroviral treatment (ART). Clients received their ART refills from nearby local pharmacies; hence the need for continuous engagement through mobile phone appointment reminders and health messages. We share learnings from the implementation of mobile text reminders and voice call follow-ups among ART clients attending the CRPDDP program in northern Uganda. Methods: A retrospective review of electronic medical records from the four pharmacies allocated for CRPDDP in the Lira and Apac districts of the Lango sub-region in Northern Uganda was done from February to August 2022. The process involved collecting the phone contacts of eligible clients from the health facility appointment register and uploading them onto a messaging platform customized using Rapid-pro, an open-source software. Client information, including code name, phone number, next appointment date, and the allocated pharmacy for ART refill, was collected and kept confidential. Contacts received appointment reminder messages and other messages on positive living as an ART client. Routine voice call follow-ups were done to ascertain the picking up of ART from the refill pharmacy. Findings: In total, 1,354 clients were reached from the four allocated pharmacies, all located in urban centers. Of these, 972 clients received short message service (SMS) appointment reminders, and 382 were followed up through voice calls. The majority (75%) of the clients returned for refills on the appointed date, 20% returned within four days after the appointment date, and the remaining 5% needed follow-up; these clients reported that they were not in the district on the appointment date due to other engagements. Conclusion: The use of mobile text reminders and voice call follow-ups improves attendance for community retail pharmacy refills.
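
The operational core of the intervention is simple: compute each client's reminder date from the next appointment, send an SMS, and flag anyone who has not picked up their refill for a voice-call follow-up. The sketch below illustrates that scheduling logic in plain Python; the field names and the three-day reminder lead time are assumptions for illustration, and no real messaging-platform (e.g. Rapid-pro) API calls are made.

```python
from datetime import date, timedelta

REMINDER_LEAD_DAYS = 3  # assumed lead time before the appointment

clients = [  # hypothetical, de-identified records
    {"code": "LIR-001", "appointment": date(2022, 8, 15), "refilled": True},
    {"code": "APC-014", "appointment": date(2022, 8, 10), "refilled": False},
]

def reminder_date(appointment: date) -> date:
    """Date on which the SMS appointment reminder should be sent."""
    return appointment - timedelta(days=REMINDER_LEAD_DAYS)

def needs_voice_follow_up(client: dict, today: date) -> bool:
    """Flag clients whose appointment has passed without a recorded refill."""
    return not client["refilled"] and today > client["appointment"]

today = date(2022, 8, 16)
for c in clients:
    print(f"{c['code']}: send SMS reminder on {reminder_date(c['appointment'])}")
    if needs_voice_follow_up(c, today):
        print(f"{c['code']}: schedule voice-call follow-up")
```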

Keywords: antiretroviral treatment, community retail drug distribution points, mobile text reminders, voice call follow-up

Procedia PDF Downloads 92