Search results for: electronic continuum correction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2403

393 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials

Authors: Sheikh Omar Sillah

Abstract:

Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors but not optimal in identifying fabricated and implanted data, as well as non-random data distributions, that may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in the study conduct, allowing the implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into Eppi-Reviewer software, and only publications written in English and published from 2012 onward were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference range values and visit schedules, combined with appropriate interactive data monitoring, statistical monitoring can offer study teams early signals of data anomalies. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues affecting data integrity or potentially impacting study participants' safety. However, subjective measures may not be good candidates for statistical monitoring.
Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to apply its principles to detecting data anomalies in the statistical aspects of a clinical trial.

Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring

Procedia PDF Downloads 57
392 Clinical Advice Services: Using Lean Chassis to Optimize Nurse-Driven Telephonic Triage of After-Hour Calls from Patients

Authors: Eric Lee G. Escobedo-Wu, Nidhi Rohatgi, Fouzel Dhebar

Abstract:

It is challenging for patients to navigate healthcare systems after hours. This leads to delays in care, patient/provider dissatisfaction, inappropriate resource utilization, readmissions, and higher costs. It is important to provide patients and providers with effective clinical decision-making tools that allow seamless connectivity and coordinated care. In August 2015, Stanford Health Care established the patient-centric Clinical Advice Services (CAS) to provide clinical decision support after hours. CAS is founded on key Lean principles: value stream mapping, empathy mapping, waste walks, takt time calculations, standard work, plan-do-check-act cycles, and active daily management. At CAS, Clinical Assistants take the initial call and manage all non-clinical calls (e.g., appointments, directions, general information). If the patient has a clinical symptom, the CAS nurses take the call and use standardized clinical algorithms to triage the patient to home, clinic, urgent care, the emergency department, or 911. Nurses may also contact the on-call physician, based on the clinical algorithm, for further direction and consultation. Since August 2015, CAS has managed 228,990 calls from 26 clinical specialties. Reporting is built into the electronic health record for analysis and data collection. 65.3% of the after-hours calls are clinically related. The average clinical algorithm adherence rate has been 92%. An average of 9% of calls were escalated by CAS nurses to the physician on call, and an average of 5% of patients were triaged to the emergency department by CAS. Key learnings indicate that a seamless connectivity vision, cascading multidisciplinary ownership of the problem, and synergistic enterprise improvements have contributed to this success while striving for continuous improvement.

Keywords: after hours phone calls, clinical advice services, nurse triage, Stanford Health Care

Procedia PDF Downloads 162
391 Tourist Behavior Towards Blockchain-Based Payments

Authors: A. Šapkauskienė, A. Mačerinskienė, R. Andrulienė, R. Bruzgė, S. Masteika, K. Driaunys

Abstract:

The COVID-19 pandemic has affected not only world markets and economies but also the daily lives of customers and their payment habits. The pandemic has accelerated digital transformation, so the role of technology will become even more important post-COVID. Although the popularity of cryptocurrencies has reached unprecedented heights, obstacles remain, such as a lack of consumer experience and distrust of these technologies, so exploring the role of cryptocurrency and blockchain in the context of international travel becomes extremely important. Research on tourists’ intentions to use cryptocurrencies for payment purposes is limited due to the small number of studies. To fill this research gap, an exploratory study based on the analysis of survey data was conducted. The purpose of the research is to explore how the behavior of tourists has changed when making financial transactions to pay for tourism services, in order to determine their intention to pay in cryptocurrencies. Behavioral intention can be examined as a dependent variable that is useful for studying the acceptance of blockchain as a cutting-edge technology. Therefore, this study examines the intention of travelers to use cryptocurrencies in electronic payments for tourism services. Several studies have shown that the intention to accept payments in a cryptocurrency is affected by the perceived usefulness of these payments and their perceived ease of use. The findings deepen our understanding of the readiness of service users to adopt blockchain-based payment in the tourism sector. The tourism industry has to focus not only on the technology but also on the consumers who can use cryptocurrencies, creating new possibilities and increasing business competitiveness. Based on the research results, suggestions are made to guide future research on the use of cryptocurrencies by tourists in the tourism industry. Therefore, in line with the rapid expansion of virtual currency users, market capitalization, and payment in cryptographic currencies, it is necessary to explore the possibilities of implementing a blockchain-based system that promotes the use of services in the tourism sector, the sector most affected by the pandemic.

Keywords: behavioral intention, blockchain-based payment, cryptocurrency, tourism

Procedia PDF Downloads 96
390 Demographic Profile, Risk Factors and In-Hospital Outcomes of Acute Coronary Syndrome (ACS) in the Young Population in Pakistan: Single-Center Real-World Experience

Authors: Asma Qudrat, Abid Ullah, Rafi Ullah, Ali Raza, Shah Zeb, Syed Ali Shan Ul-Haq, Shahkar Ahmed Shah, Attiya Hameed Khan, Saad Zaheer, Umama Qasim, Kiran Jamal, Zahoor khan

Abstract:

Objectives: Coronary artery disease (CAD) is a major public health issue associated with high mortality and morbidity rates worldwide. Young patients with ACS have unique characteristics, with different demographic profiles and risk factors. Precise diagnosis and early risk stratification are important in guiding treatment and predicting the prognosis of young patients with ACS. The aim was to evaluate the demographic, risk factor, and outcome profiles of ACS in young patients. Methods: The research followed a retrospective design; this single-centre study included patients diagnosed with a first event of ACS at a young age (>18 and <40 years). Data collection included demographic profiles, risk factors, and in-hospital outcomes of young ACS patients. Patient data were retrieved through the Electronic Medical Records (EMR) of the Peshawar Institute of Cardiology (PIC), and all characteristics were assessed. Results: In this study, 77% of patients were male and 23% were female. The risk factors assessed showed a significant association with CAD (P < 0.01). The most common presentation was STEMI (45%). The angiographic pattern showed single-vessel disease (SVD) in 49%, double-vessel disease (DVD) in 17%, and triple-vessel disease (TVD) in 10%, and the left anterior descending (LAD) artery (54%) was the most commonly involved artery. Conclusion: It is concluded that males predominated among young ACS patients. SVD was the most common coronary angiographic finding, and the assessed risk factors showed significant associations with CAD and the common presentations.

Keywords: coronary artery disease, non-ST elevation myocardial infarction, ST elevation myocardial infarction, unstable angina, acute coronary syndrome

Procedia PDF Downloads 145
389 Assessing Students’ Readiness for an Open and Distance Learning Higher Education Environment

Authors: Upasana G. Singh, Meera Gungea

Abstract:

Learning is no longer confined to the traditional classroom and teacher-student interaction. Many universities offer courses through the Open and Distance Learning (ODL) mode, attracting a diversity of learners in terms of age, gender, and profession, to name a few. The ODL mode has emerged as one of the most sought-after modes of learning, allowing learners to invest in their educational growth without hampering their personal and professional commitments. This mode of learning, however, requires that those who choose to adopt it be prepared to undertake studies through such a medium. The purpose of this research is to assess whether students who join universities offering courses through the ODL mode are ready to embark on and study within such a framework. This study will help unveil the challenges students face in such an environment and thus contribute to developing a framework to ease adoption of and integration into the ODL environment. Prior to the implementation of e-learning, a readiness assessment is essential for any institution that wants to adopt any form of e-learning. Various e-learning readiness assessment models have been developed over the years. However, this study is based on a conceptual model for e-learning readiness assessment that is a 'hybrid model'. This hybrid model consists of four main parameters: 1) technological readiness, 2) culture readiness, 3) content readiness, and 4) demographic factors, with four sub-areas, namely technology, innovation, people, and self-development. The model also includes users' attitudes towards the adoption of e-learning as an important aspect of assessing e-learning readiness. For this study, some factors and sub-factors of the hybrid model have been considered and adapted, together with the 'Attitude' component.
A questionnaire was designed based on this model; the target population was students enrolled in undergraduate and postgraduate courses at the Open University of Mauritius. Preliminary findings indicate that most (68%) learners have an average knowledge of the ODL form of learning, despite many (72%) having no previous experience with ODL. Despite learning through ODL, 74% of learners preferred hard-copy learning material, and 48% found it difficult to read learning material on electronic devices.

Keywords: open learning, distance learning, student readiness, hybrid model

Procedia PDF Downloads 98
388 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-induced forces on bluff bodies, e.g., light flexible civil structures or airplane wings at high incidence approaching the ground, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One solution method for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion; compact discretization, as the vorticity is strongly localized; implicit treatment of the free-space boundary conditions typical for this class of FSI problems; and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails, or fairings.
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization may become prohibitively expensive even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution, without substantially increasing the global computational cost, by computing a correction of the particle-particle interaction in certain regions of interest. In this paper, different strategies are presented to extend the conventional VPM so as to reduce the computational cost while resolving the required details of the flow. The methods include temporal substepping, to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control the global and local numbers of particles. Finally, these methods are applied to a test case, and the improvements in efficiency and accuracy of the proposed extensions are presented, along with their relevant applications.
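As a minimal illustration of the O(Np²) particle-particle interaction and the temporal substepping idea discussed above, the following sketch convects 2D vortex particles under a smoothed Biot-Savart kernel. This is not the authors' implementation; the function names, the forward-Euler integrator, and the smoothing radius are illustrative assumptions.

```python
import numpy as np

def induced_velocity(pos, gamma, targets, delta=0.05):
    """O(N*M) Biot-Savart velocity induced at `targets` by 2D vortex
    particles at `pos` with circulations `gamma` (smoothed kernel)."""
    dx = targets[:, None, 0] - pos[None, :, 0]
    dy = targets[:, None, 1] - pos[None, :, 1]
    r2 = dx**2 + dy**2 + delta**2          # smoothing avoids the singularity
    u = np.sum(-gamma * dy / (2 * np.pi * r2), axis=1)
    v = np.sum( gamma * dx / (2 * np.pi * r2), axis=1)
    return np.stack([u, v], axis=1)

def convect(pos, gamma, dt, u_inf=(1.0, 0.0), substeps=1):
    """Advance particle positions over dt with forward Euler; `substeps > 1`
    refines the convection in time (the temporal adaptation idea)."""
    h = dt / substeps
    for _ in range(substeps):
        vel = induced_velocity(pos, gamma, pos) + np.asarray(u_inf)
        pos = pos + h * vel
    return pos
```

Substepping here refines only the time integration; the spatial adaptation the paper describes (locally re-discretizing the particle map) is omitted from this sketch.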

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

Procedia PDF Downloads 252
387 Baseline Data for Insecticide Resistance Monitoring in Tobacco Caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae) on Cole Crops

Authors: Prabhjot Kaur, B.K. Kang, Balwinder Singh

Abstract:

The tobacco caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae), is an agriculturally important pest species. S. litura has a wide host range of approximately 150 recorded plant species worldwide. In Punjab, this pest attains sporadic status primarily on cauliflower, Brassica oleracea (L.). This pest destroys vegetable crops and particularly prefers the family Cruciferae. However, it is also observed feeding on other crops such as arbi, Colocasia esculenta (L.), mung bean, Vigna radiata (L.), sunflower, Helianthus annuus (L.), cotton, Gossypium hirsutum (L.), castor, Ricinus communis (L.), etc. Larvae of this pest completely devour the leaves of the infested plant, resulting in huge crop losses ranging from 50 to 70 per cent. Indiscriminate and continuous use of insecticides has contributed to the development of insecticide resistance in insects and caused environmental degradation as well. Moreover, baseline data on the toxicity of newer insecticides would help in understanding the level of resistance developed in this pest, and any possible cross-resistance could be assessed in advance. Therefore, the present studies on the development of resistance in S. litura against four new-chemistry insecticides (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad) were carried out in the Toxicology Laboratory, Department of Entomology, Punjab Agricultural University, Ludhiana, Punjab, India during 2011-12. Various stages of S. litura (eggs, larvae) were collected from four locations (Malerkotla, Hoshiarpur, Amritsar and Samrala) in Punjab. Because resistance develops in the third instars of lepidopterous pests, larval bioassays were conducted on thirty third-instar larvae to estimate the response of field populations of S. litura under laboratory conditions at 25±2°C and 65±5 per cent relative humidity.
The leaf-dip bioassay technique with diluted insecticide formulations, as recommended by the Insecticide Resistance Action Committee (IRAC), was performed in the laboratory with seven to ten treatments, depending on the insecticide class. LC50 values were estimated by probit analysis after correcting for control mortality, and were used to calculate resistance ratios (RR). The LC50 values worked out for emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad were 0.081, 0.088, 0.380 and 4.00 parts per million (ppm) against the pest population collected from Malerkotla; 0.051, 0.060, 0.250 and 3.00 ppm for Amritsar; 0.002, 0.001, 0.0076 and 0.10 ppm for Samrala; and 0.000014, 0.00001, 0.00056 and 0.003 ppm against the pest population of Hoshiarpur, respectively. The LC50 values for the four locations were in the order Malerkotla > Amritsar > Samrala > Hoshiarpur for all the insecticides tested (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad). Based on the LC50 values obtained, emamectin benzoate (0.000014 ppm) was found to be the most toxic among all the tested populations, followed by chlorantraniliprole (0.00001 ppm), indoxacarb (0.00056 ppm) and spinosad (0.003 ppm), respectively. The pairwise correlation coefficients of the LC50 values indicated a lack of cross-resistance among emamectin benzoate, chlorantraniliprole, spinosad and indoxacarb in populations of S. litura from Punjab. These insecticides may prove to be promising substitutes for the effective control of insecticide-resistant populations of S. litura in Punjab state, India.
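The LC50 workflow described above (control-mortality correction followed by probit analysis on log dose) can be sketched as follows. This is an illustrative simplification using Abbott's formula and an unweighted regression of probits on log10(dose), not the study's actual probit procedure; the function names are assumptions and the dose-mortality numbers in the usage example are invented.

```python
import numpy as np
from statistics import NormalDist

def abbott_correct(observed, control):
    """Abbott's formula: adjust observed mortality (as a proportion)
    for natural mortality in the untreated control group."""
    return (observed - control) / (1.0 - control)

def lc50(doses_ppm, mortality, control_mortality=0.0):
    """Estimate LC50 by linear regression of probit-transformed,
    control-corrected mortality on log10(dose)."""
    p = np.clip(abbott_correct(np.asarray(mortality), control_mortality),
                1e-4, 1 - 1e-4)               # keep probits finite
    probits = [NormalDist().inv_cdf(x) for x in p]
    slope, intercept = np.polyfit(np.log10(doses_ppm), probits, 1)
    return 10 ** (-intercept / slope)         # dose where p = 0.5
```

A resistance ratio is then simply a field population's LC50 divided by that of the most susceptible population.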

Keywords: Spodoptera litura, insecticides, toxicity, resistance

Procedia PDF Downloads 327
386 Modification of Hexagonal Boron Nitride Induced by Focused Laser Beam

Authors: I. Wlasny, Z. Klusek, A. Wysmolek

Abstract:

Hexagonal boron nitride is a representative of the widely popular class of two-dimensional van der Waals materials. It finds its uses, among others, in the construction of complexly layered heterostructures. Hexagonal boron nitride attracts great interest because of its properties, characteristic of wide-gap semiconductors, as well as an ultra-flat surface. Van der Waals heterostructures composed of two-dimensional layered materials, such as transition metal dichalcogenides or graphene, give hope for the miniaturization of various electronic and optoelectronic elements. In our presentation, we will show the results of our investigations of the not previously reported modification of hexagonal boron nitride layers with a focused laser beam. The electrostatic force microscopy (EFM) images reveal that the irradiation leads to changes of the local electric fields for a wide range of laser wavelengths (from 442 to 785 nm). These changes are also accompanied by alterations of the crystallographic structure of the material, as reflected in the Raman spectra. They exhibit high stability and remain visible after at least five months. This behavior can be explained in terms of photoionization of the defect centers in h-BN, which influence non-uniform electrostatic field screening by the photo-excited charge carriers. The analyzed changes influence the local defect structure, and thus the interatomic distances within the lattice. These effects can be amplified by the piezoelectric character of hexagonal boron nitride, similar to that found in other nitrides (e.g., GaN, AlN). Our results shed new light on the optical properties of hexagonal boron nitride, in particular those associated with electron-phonon coupling. Our study also opens new possibilities for h-BN applications in layered heterostructures, where electrostatic fields can be used in tailoring the local properties of the structures for use in micro- and nanoelectronics or field-controlled memory storage.
This work is supported by National Science Centre project granted on the basis of the decision number DEC-2015/16/S/ST3/00451.

Keywords: atomic force microscopy, hexagonal boron nitride, optical properties, Raman spectroscopy

Procedia PDF Downloads 160
385 An Exploratory Study of E-Learning Stakeholders’ Experiences of Developing, Implementing and Enhancing E-Courses in One Saudi University

Authors: Zahra Alqahtani

Abstract:

The use of e-learning technologies is gaining momentum in all educational institutions of the world, including Saudi universities. In the e-learning context, there is a growing need and concern among Saudi universities to improve and enhance quality assurance for e-learning systems. Practicing quality assurance activities and applying quality standards in e-learning in Saudi universities is thought to reduce the negative viewpoints of some stakeholders and to ensure stakeholders' satisfaction and needs. As a contribution to improving the quality of e-learning methods in Saudi universities, the main purpose of this study is to explore and investigate strategies for the development of quality assurance in e-learning in one university in Saudi Arabia, which is considered a reference university for the best ongoing practices in e-learning systems among Saudi universities. In order to ensure the quality of its e-learning methods, the university has adopted the Quality Matters Standards as a controlling guide for the quality of its blended and fully electronic courses. Furthermore, quality assurance can be further improved if a variety of perspectives are taken into consideration, from the comprehensive viewpoints of faculty members, administrative staff, and students. This qualitative research involved the use of different types of interviews, as well as documents that contain data related to e-learning methods in the Saudi university environment. This exploratory case study was undertaken, from the perspectives of various participants, to understand the phenomenon of quality assurance using an inductive technique. The results revealed six main supportive factors that assist in ensuring the quality of e-learning in the Saudi university environment.
Essentially, these factors are institutional support, faculty member support, evaluation of faculty, quality of e-course design, technology support, and student support, which together have a remarkable positive effect on quality, forming intrinsic columns connected by bricks leading to quality e-learning. Quality Matters standards are considered to have a strong impact on improving faculty members' skills and on the development of high-quality blended and full e-courses.

Keywords: E-learning, quality assurance, quality matters standards, KKU-supportive factors

Procedia PDF Downloads 105
384 The Common Location and the Intensity of Surface Electrical Stimulation on the Thorax and Abdomen Areas: A Systematic Review

Authors: Vu Hoang Thu Huong

Abstract:

Background: Surface electrical stimulation (SES) is a popular non-invasive approach that offers a wide range of treatments in physical therapy. It involves applying electrical stimulation to the skin via surface electrodes to stimulate nerve fibers. SES is regularly used to treat the back and the upper or lower extremities, but it is rarely used on the chest and abdomen. SES on the thorax and abdomen should be administered with more attention because crucial organs lie under those areas (i.e., the heart, lungs, liver, etc.). In these areas, safety precautions are advised, and some SES applications might even be contraindicated. These considerations may also explain why physical therapists have less experience with SES in these situations. Although a few earlier studies applied it in these settings and reported promising results, none of them examined the relationship between the intensity of SES and its depth of impact for safety considerations. Objective: To ensure feasibility when using SES in these areas, the purpose of this study is to summarize the common locations and intensities reported in previous studies. Method: A thorough systematic review was conducted to determine the common surface electrode positions for the thorax and abdomen areas. Studies with a randomized controlled design were systematically searched, using inclusion and exclusion criteria, through nine electronic databases, including PubMed, Scopus, etc., between 1975 and December 2021. Results: Thirty-three studies with over 1800 participants and four types of SES (TENS, IFC, NMES, and FES) across various hospital departments were found. Following anterior, lateral, and posterior observation, the specific SES positions were found to concentrate on six regions (the thoracic, abdomen, upper lateral, lower lateral, upper back, and lower back regions), and the intensity for each region was also summarized.
Conclusion: This systematic review identified the popular locations of SES in the thorax and abdominal areas, as well as the maximum intensities found in previous studies with outstanding outcomes.

Keywords: surface electrical stimulation, electrical stimulation, thoracic, abdomen, abdominal

Procedia PDF Downloads 90
383 High School Gain Analytics From National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage

Authors: Andrew Laming, John Hattie, Mark Wilson

Abstract:

Nine Queensland independent high schools provided de-identified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean = 27) and any ATAR graduates without NAPLAN data (mean = 20). Based on Index of Community Socio-Educational Advantage (ICSEA) prediction, all schools had larger than predicted proportions of their students graduating with ATARs. There were an additional 173 students not releasing their ATARs to their school (14%), requiring this data to be inferred by schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R² = 0.58). Results: School mean NAPLAN scores fitted ICSEA closely (R² = 0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean of 4.0 percentiles (or 6.2 percentiles prior to correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA-1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). Mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time.
Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R² was 0.33. Discussion: Standardised data like NAPLAN and ATAR offer educators a simple, no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive to laypeople. The findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectation, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile shift method is neither a value-added measure nor a growth percentile. In the absence of exit NAPLAN testing, the metric cannot discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data are available, this analysis offers useful analytics for educators and reasonable predictivity when counselling subsequent cohorts about their ATAR prospects.
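The 'percentile shift' described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the statewide score distribution, the function name, and the omission of the participation correction are assumptions, and the numbers in the example are invented.

```python
import numpy as np

def percentile_shift(best_naplan, statewide_naplan, atar):
    """Gain metric sketched in the abstract: convert the student's strongest
    NAPLAN domain score to a statewide percentile, then subtract it from the
    final ATAR (itself a percentile-like rank)."""
    statewide = np.sort(np.asarray(statewide_naplan))
    # empirical percentile of the student's score within the state cohort
    pct = 100.0 * np.searchsorted(statewide, best_naplan, side="right") / len(statewide)
    return atar - pct
```

The study additionally corrects this raw shift for plausible ATAR participation at each NAPLAN level, which is omitted from this sketch.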

Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean

Procedia PDF Downloads 59
382 Impact of COVID-19 on Radiology Training in Australia and New Zealand

Authors: Preet Gill, Danus Ravindran

Abstract:

The COVID-19 pandemic resulted in widespread implications for medical specialist training programs worldwide, including radiology. The objective of this study was to investigate the impact of COVID-19 on the Australian and New Zealand radiology trainee experience and well-being, as well as to compare the Australasian experience with that reported by other countries. An anonymised electronic online questionnaire was disseminated to all training members of the Royal Australian and New Zealand College of Radiologists who were radiology trainees during the 2020 – 2022 clinical years. Trainees were questioned about their experience from the beginning of the COVID-19 pandemic in Australasia (March 2020) to the time of survey completion. Participation was voluntary. Questions assessed the impact of the pandemic across multiple domains, including workload (inpatient/outpatient and individual modality volume), teaching, supervision, external learning opportunities, redeployment, and trainee wellbeing. Survey responses were collated and compared with other peer-reviewed publications. Answer options were primarily in categorical format (nominal and ordinal subtypes, as appropriate). An opportunity to provide free-text answers to a minority of questions was provided. While our results mirror those of other countries, which demonstrated reduced case exposure and increased remote teaching and supervision, responses showed variation in the methods utilised by training sites during the height of the pandemic. A significant number of trainees were affected by examination cancellations/postponements and had subspecialty training rotations postponed. The majority of trainees felt that the pandemic had a negative effect on their training. In conclusion, the COVID-19 pandemic has had a significant impact on radiology trainees across Australia and New Zealand. The present study has highlighted the extent of these effects, with most aspects of training impacted.
Opportunities exist to utilise this information to create robust workplace strategies to mitigate these negative effects should the need arise in the future.

Keywords: COVID-19, radiology, training, pandemic

Procedia PDF Downloads 54
381 Information Technology: Assessing Indian Realities Vis-à-Vis World Trade Organisation Disciplines

Authors: Saloni Khanderia

Abstract:

The World Trade Organisation’s (WTO) Information Technology Agreement (ITA) was concluded at the Singapore Ministerial Conference in 1996. The ITA is considered one of the biggest tariff-cutting deals because it eliminates all customs duties on the exportation of specific categories of information technology products to the territory of any other signatory to the Agreement. Over time, innovations in the information and communication technology (ICT) sector mandated consideration of expanding the list of products covered by the ITA, which took place in the form of ITA-II negotiations during the WTO’s Nairobi Ministerial Conference. India, which was an original Member of ITA-I, however, decided to opt out of the negotiations to expand the list of products covered by the agreement. Instead, it preferred to give priority to its national policy initiative, the ‘Make-in-India’ programme [the MiI programme], which embarks upon fostering domestic production in, inter alia, the ICT sector. India justified abstaining from the ITA-II negotiations by stating that the zero-tariff regime created by ITA-I debilitated its electronics-manufacturing sector and instead resulted in an over-reliance on imported electronic inputs. The author undertakes doctrinal research to examine India’s decision to opt out of the ITA-II negotiations against the backdrop of the MiI Programme, which endeavours to improve productivity across the board. This paper accordingly scrutinises India’s tariff-cutting strategies to weigh the better alternative for the country. It then examines whether initiatives like the MiI programme could plausibly resuscitate the ailing domestic electronics-manufacturing sector. The author opines that the country’s present decision to opt out of the ITA-II negotiations should be perceived as a welcome step. Thus, market-oriented reforms such as the MiI Programme, which focuses on indigenous innovation to improve domestic manufacturing in the ICT sector, should instead gain priority in the present circumstances. Consequently, the MiI Programme would aid in moulding the country’s current tariff policy in a manner that concurrently assists the promotion and sustenance of domestic manufacturing in the IT sector.

Keywords: electronics-manufacturing sector, information technology agreement, make in india programme, world trade organisation

Procedia PDF Downloads 221
380 Charge Transport of Individual Thermoelectric Bi₂Te₃ Core-Poly(3,4-Ethylenedioxythiophene):Polystyrenesulfonate Shell Nanowires Determined Using Conductive Atomic Force Microscopy and Spectroscopy

Authors: W. Thongkham, K. Sinthiptharakoon, K. Tantisantisom, A. Klamchuen, P. Khanchaitit, K. Jiramitmongkon, C. Lertsatitthanakorn, M. Liangruksa

Abstract:

Due to the demand for sustainable energy, thermoelectricity, which converts waste heat into electrical energy, has become one of the most intensively researched fields worldwide. However, such harvesting technology has shown low device performance in the temperature range below 150 ℃. In this work, a hybrid nanowire of inorganic bismuth telluride (Bi₂Te₃) and organic poly(3,4-ethylenedioxythiophene):polystyrenesulfonate (PEDOT:PSS), prepared using a simple in-situ one-pot synthesis that enhances the efficiency of the nanowire-incorporated PEDOT:PSS-based thermoelectric converter, is highlighted. Since the improvement is ascribed to the increased electrical conductivity of the thermoelectric host material, individual hybrid nanowires are investigated using voltage-dependent conductive atomic force microscopy (CAFM) and spectroscopy (CAFS), considering that the electrical transport measurement can be performed on either insulating or conducting areas of the sample. Correlated with detailed chemical information on the crystalline structure and compositional profile of the nanowire core-shell structure, an electrical transport pathway through the nanowire and the corresponding electronic band structure have been determined, in which the native oxide layer on the Bi₂Te₃ surface is not considered, and charge conduction on the topological surface states of Bi₂Te₃ is suggested. Analysis of a core-shell nanowire synthesized by conventionally mixing as-prepared Bi₂Te₃ nanowires with PEDOT:PSS, included for comparison, further supports the oxide-removal effect of the in-situ encapsulating polymeric layer. The finding not only provides structural information for mechanistic determination of the thermoelectricity, but also encourages a new approach toward more appropriate encapsulation and, consequently, higher efficiency of nanowire-based thermoelectric generation.

Keywords: electrical transport measurement, hybrid Bi₂Te₃-PEDOT:PSS nanowire, nanoencapsulation, thermoelectricity, topological insulator

Procedia PDF Downloads 191
379 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection

Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten

Abstract:

Purpose: Applying solder paste to printed circuit boards (PCBs) with stencils has been the method of choice over the past years. A newer method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge. This can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in, real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process through immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips are employed to capture images of the printed circuit board under four different illuminations (white, red, green, and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher-quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds that divide the main components are obtained from the multimodal histogram using three probability density functions; determining their intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds from the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. The overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis.
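The correction-and-segmentation pipeline described above can be sketched as follows. This is an illustrative re-implementation, not the authors' code: the structuring-element sizes and the simple global threshold (standing in for the intersections of the fitted probability density functions) are assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_solder_joints(img, bg_radius=25, open_radius=2):
    """Illustrative sketch of the described pipeline; parameter values are assumptions."""
    # 1. Estimate the slowly varying background with a large grey-level opening
    #    and subtract it from the input image to correct non-uniform illumination.
    bg = ndimage.grey_opening(img, size=(bg_radius, bg_radius))
    corrected = img.astype(float) - bg

    # 2. Threshold the corrected image. A simple global threshold stands in here
    #    for the histogram-based thresholds derived from the fitted densities.
    thresh = corrected.mean() + 2.0 * corrected.std()
    mask = corrected > thresh

    # 3. Remove small error areas left by residual edge gradients
    #    with a second (binary) morphological opening.
    mask = ndimage.binary_opening(mask, structure=np.ones((open_radius, open_radius)))
    return mask

def dice(a, b):
    """Dice coefficient between two binary masks, used for quantitative evaluation."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

On a synthetic image (bright blobs on an illumination gradient), the opening-based background subtraction isolates the blobs before thresholding, mirroring the behaviour described for the PCB images.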

Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection

Procedia PDF Downloads 323
378 Experimental Investigation of Nucleate Pool Boiling Heat Transfer on Laser-Structured Copper Surfaces of Different Patterns

Authors: Luvindran Sugumaran, Mohd Nashrul Mohd Zubir, Kazi Md Salim Newaz, Tuan Zaharinie Tuan Zahari, Suazlan Mt Aznam, Aiman Mohd Halil

Abstract:

With reference to Energy Roadmap 2050, the minimization of greenhouse gas emissions and the enhancement of energy efficiency are the two key factors that could facilitate a radical change in the world's energy infrastructure. However, the energy demands of electronic devices skyrocketed with the advent of the digital age. Currently, the two-phase cooling technique based on phase change pool boiling heat transfer has received a lot of attention because of its potential to fully utilize the latent heat of the fluid and produce a highly effective heat dissipation capacity while keeping the equipment's operating temperature within an acceptable range. There are numerous strategies available for the alteration of heating surfaces, but finding the best, simplest, and most dependable one remains a challenge. Lately, surface texturing via laser ablation has been used in a variety of investigations, demonstrating its significant potential for enhancing the pool boiling heat transfer performance. In this research, the nucleate pool boiling heat transfer performance of laser-structured copper surfaces of different patterns was investigated. The bare copper surface serves as a reference to compare the performance of laser-structured surfaces. It was observed that the heat transfer coefficients increased with the increase of the surface area ratio and the ratio of the peak-to-valley height of the microstructure. The laser-machined grain structure produced extra nucleation sites, which ultimately caused the improved pool boiling performance. Due to an increase in nucleation site density and surface area, the enhanced nucleate boiling served as the primary heat transfer mechanism. The pool boiling performance of the laser-structured copper surfaces is superior to the bare copper surface in all aspects.
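The heat transfer coefficient compared in such pool boiling experiments is conventionally the ratio of wall heat flux to wall superheat, h = q″/(T_wall − T_sat). A minimal sketch with hypothetical numbers (not values from the study):

```python
def boiling_htc(heat_flux_w_m2, wall_temp_c, sat_temp_c):
    """Nucleate pool boiling heat transfer coefficient h = q'' / dT_sat,
    where dT_sat is the wall superheat. Returns h in W/(m^2*K)."""
    superheat = wall_temp_c - sat_temp_c
    if superheat <= 0:
        raise ValueError("wall must be superheated above saturation")
    return heat_flux_w_m2 / superheat

# Illustrative: 500 kW/m^2 dissipated at 10 K superheat in water at 1 atm
h = boiling_htc(500e3, 110.0, 100.0)  # 50000.0 W/(m^2*K)
```

A structured surface that nucleates at lower superheat for the same heat flux therefore shows a higher h, which is how the enhancement above is quantified.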

Keywords: heat transfer coefficient, laser structuring, micro structured surface, pool boiling

Procedia PDF Downloads 69
375 De Novo Design of Functional Metalloproteins for Biocatalytic Reactions

Authors: Ketaki D. Belsare, Nicholas F. Polizzi, Lior Shtayer, William F. DeGrado

Abstract:

Nature utilizes metalloproteins to perform chemical transformations with activities and selectivities that have long inspired design principles in synthetic and biological systems. The chemical reactivities of metalloproteins are directly linked to local environment effects produced by the protein matrix around the metal cofactor. A complete understanding of how the protein matrix provides these interactions would allow for the design of functional metalloproteins. De novo computational design of proteins has been successfully used to design active sites that bind metal cofactors containing, for example, di-iron, zinc, or copper; however, precisely designing active sites that can bind small-molecule ligands (e.g., substrates) along with metal cofactors is still a challenge in the field. The de novo computational design of a functional metalloprotein that contains a purposefully designed substrate-binding site would allow for precise control of chemical function and reactivity. Our research strategy seeks to elucidate the design features necessary to bind the cofactor protoporphyrin IX (hemin) in close proximity to a substrate-binding pocket in a four-helix bundle. First- and second-shell interactions are computationally designed to control the orientation, electronic structure, and reaction pathway of the cofactor and substrate. The design began with a parameterized helical backbone that positioned a single histidine residue (as an axial ligand) to receive a second-shell H-bond from a threonine on the neighboring helix. The metallocofactor hemin was then manually placed in the binding site. A structural feature, a pi-bulge, was introduced to give the substrate access to the protoporphyrin IX. These de novo metalloproteins are currently being tested for their activity towards hydroxylation and epoxidation; the de novo designed protein shows hydroxylation of aniline to 4-aminophenol. This study will help provide structural information of utmost importance in understanding the de novo computational design variables that impact the functional activities of a protein.

Keywords: metalloproteins, protein design, de novo protein, biocatalysis

Procedia PDF Downloads 143
374 Quality Assurance Comparison of Map Check 2, Epid, and Gafchromic® EBT3 Film for IMRT Treatment Planning

Authors: Khalid Iqbal, Saima Altaf, M. Akram, Muhammad Abdur Rafaye, Saeed Ahmad Buzdar

Abstract:

Objective: Verification of patient-specific intensity-modulated radiation therapy (IMRT) plans using different 2-D detectors has become increasingly popular due to their ease of use and immediate readout of results. The purpose of this study was to test and compare various 2-D detectors for dosimetric quality assurance (QA) of IMRT, with the vision of finding alternative QA methods. Material and Methods: Twenty IMRT patients (12 brain and 8 prostate) were planned on the Eclipse treatment planning system for a Varian Clinac DHX at both energies, 6 MV and 15 MV. Verification plans for all patients were also made and delivered to MapCHECK 2, the electronic portal imaging device (EPID), and Gafchromic EBT3 film. Gamma index analyses were performed using different criteria to evaluate and compare the dosimetric results. Results: For EBT3 film, statistical analysis shows passing rates of 99.55%, 97.23%, and 92.9% for 6 MV and 99.53%, 98.3%, and 94.85% for 15 MV using criteria of ±5%/3 mm, ±3%/3 mm, and ±3%/2 mm, respectively, for brain, whereas using the ±5%/3 mm and ±3%/3 mm gamma evaluation criteria, the passing rates are 94.55% and 90.45% for 6 MV and 95.25% and 95% for 15 MV for prostate. MapCHECK 2 results show passing rates of 98.17%, 97.68%, and 86.78% for 6 MV and 94.87%, 97.46%, and 88.31% for 15 MV, respectively, for brain using criteria of ±5%/3 mm, ±3%/3 mm, and ±3%/2 mm, whereas the ±5%/3 mm and ±3%/3 mm gamma evaluation criteria give passing rates of 97.7% and 96.4% for 6 MV and 98.75% and 98.05% for 15 MV for prostate. EPID gamma analysis at 6 MV shows passing rates of 99.56%, 98.63%, and 98.4% for brain and 100% and 99.9% for prostate using the same criteria as for MapCHECK 2 and EBT3 film. Conclusion: The results demonstrate that excellent passing rates were obtained for all dosimeters when compared with the planar dose distributions for 6 MV as well as 15 MV IMRT fields. EPID results are better than those of EBT3 film and MapCHECK 2; it is likely that part of this difference is real, and part is due to film handling and differences in treatment setup verification, which contribute to dose distribution differences. Overall, all three dosimeters exhibit results within limits according to the AAPM Task Group 120 report.
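Gamma index analysis combines a dose-difference (DD) criterion with a distance-to-agreement (DTA) criterion; a point passes when the minimum combined metric over the evaluated distribution is ≤ 1. The sketch below is a simplified 1-D global gamma, not the implementation used by MapCHECK 2 or the film analysis software; the 3%/3 mm defaults mirror one of the criteria quoted above.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """Simplified 1-D global gamma index: for each reference point, take the
    minimum over all evaluated points of sqrt((ddose/DD)^2 + (ddist/DTA)^2);
    the point passes when that minimum is <= 1. Doses in matching units."""
    ref_dose = np.asarray(ref_dose, float)
    eval_dose = np.asarray(eval_dose, float)
    x = np.arange(len(ref_dose)) * spacing_mm
    dd = dd_pct / 100.0 * ref_dose.max()  # global dose-difference criterion
    passed = 0
    for i, d_ref in enumerate(ref_dose):
        dose_term = (eval_dose - d_ref) / dd
        dist_term = (x - x[i]) / dta_mm
        gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
        passed += gamma <= 1.0
    return 100.0 * passed / len(ref_dose)
```

Identical distributions pass at 100%; a grossly mismatched distribution fails everywhere, which is the behaviour the passing rates above summarize in 2-D.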

Keywords: gafchromic EBT3, radiochromic film dosimetry, IMRT verification, EPID

Procedia PDF Downloads 410
373 Enhancing Healthcare Data Protection and Security

Authors: Joseph Udofia, Isaac Olufadewa

Abstract:

Every day, the size of electronic health record data keeps increasing as new patients visit health practitioners and returning patients fulfil their appointments. As these data grow, so does their susceptibility to cyber-attacks from criminals waiting to exploit them. In the US, the damages from cyberattacks were estimated at $8 billion (2018), $11.5 billion (2019), and $20 billion (2021). These attacks usually involve the exposure of personally identifiable information (PII). Health data is considered PII, and its exposure carries significant impact. To this end, an enhancement of health policy and standards in relation to data security, especially among patients and their clinical providers, is critical to ensure ethical practices, confidentiality, and trust in the healthcare system. As clinical accelerators and applications that contain user data are used, it is expedient to review and revamp policies like the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), and the Fast Healthcare Interoperability Resources (FHIR) standard, all aimed at ensuring data protection and security in healthcare. FHIR caters to healthcare data interoperability, as data is shared across different systems from customers to health insurance and care providers. The astronomical cost of implementation has deterred players in the space from ensuring compliance, leading to susceptibility to data exfiltration and data loss. Though HIPAA hones in on the security of protected health information (PHI) and PCI DSS on the security of payment card data, they intersect in the shared goal of protecting sensitive information in line with industry standards. With advancements in technology and the emergence of new technologies, it is necessary to revamp these policies to address their complexity and ambiguity, the cost barrier, and the ever-increasing threats in cyberspace. Healthcare data in the wrong hands is a recipe for disaster, and we must enhance its protection and security to protect the mental health of the current and future generations.

Keywords: cloud security, healthcare, cybersecurity, policy and standard

Procedia PDF Downloads 71
372 Coffee Consumption and Glucose Metabolism: a Systematic Review of Clinical Trials

Authors: Caio E. G. Reis, Jose G. Dórea, Teresa H. M. da Costa

Abstract:

Objective: Epidemiological data show an inverse association of coffee consumption with the risk of type 2 diabetes mellitus. However, the clinical effects of coffee consumption on glucose metabolism biomarkers remain controversial. Thus, this paper reviews clinical trials that evaluated the effects of coffee consumption on glucose metabolism. Research Design and Methods: We identified studies published until December 2014 by searching electronic databases and reference lists. We included randomized clinical trials in which the intervention group received caffeinated and/or decaffeinated coffee, the control group received water or placebo treatments, and biomarkers of glucose metabolism were measured. The Jadad Score was applied to evaluate the quality of the studies, and studies that scored ≥ 3 points were considered for the analyses. Results: Seven clinical trials (a total of 237 subjects) were analyzed, involving healthy, overweight, and diabetic adult subjects. The studies were divided into short-term (1 to 3 h) and long-term (2 to 16 weeks) durations. The results for short-term studies showed that caffeinated coffee consumption may increase the area under the curve of the glucose response, while for long-term studies caffeinated coffee may improve glycemic metabolism by reducing the glucose curve and increasing the insulin response. These results seem to show that the benefits of coffee consumption occur in the long term, as has been shown by the reduction of type 2 diabetes mellitus risk in epidemiological studies. Nevertheless, until the relationship between long-term coffee consumption and type 2 diabetes mellitus is better understood and any mechanism involved is identified, it is premature to make claims about coffee preventing type 2 diabetes mellitus. Conclusion: The findings suggest that caffeinated coffee may impair glucose metabolism in the short term, but in the long term the studies indicate a reduction of type 2 diabetes mellitus risk.
More clinical trials with comparable methodology are needed to unravel this paradox.
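The "area under the curve for glucose response" compared in the short-term trials above is typically computed with the trapezoidal rule over the sampling times. A minimal sketch with hypothetical data (the sampling schedule and glucose values are illustrative, not taken from the reviewed trials):

```python
def glucose_auc(times_min, glucose_mmol_l):
    """Total area under the glucose response curve by the trapezoidal rule,
    in mmol*min/L. Assumes times are sorted and lists have equal length."""
    auc = 0.0
    for i in range(1, len(times_min)):
        auc += 0.5 * (glucose_mmol_l[i] + glucose_mmol_l[i - 1]) \
                   * (times_min[i] - times_min[i - 1])
    return auc

# Hypothetical 2-h postprandial response sampled every 30 min
t = [0, 30, 60, 90, 120]
g = [5.0, 8.0, 7.0, 6.0, 5.5]
auc = glucose_auc(t, g)  # 787.5 mmol*min/L
```

A larger AUC after the same glucose load indicates a poorer acute glycemic response, which is the short-term impairment the review attributes to caffeinated coffee.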

Keywords: coffee, diabetes mellitus type 2, glucose, insulin

Procedia PDF Downloads 449
371 A Methodological Approach to Digital Engineering Adoption and Implementation for Organizations

Authors: Sadia H. Syeda, Zain H. Malik

Abstract:

As systems continue to become more complex and the interdependencies of processes and sub-systems continue to grow and transform, the need for a comprehensive method of tracking and linking the lifecycle of systems in a digital form becomes ever more critical. Digital Engineering (DE) provides an approach to managing an authoritative data source that links, tracks, and updates system data as it evolves and grows throughout the system development lifecycle. DE enables developing, tracking, and sharing system data, models, and other related artifacts in a digital environment accessible to all necessary stakeholders. The DE environment provides an integrated electronic repository that enables traceability between design, engineering, and sustainment artifacts. The primary objective of DE activities is to develop a set of integrated, coherent, and consistent system models for the program. It is envisioned to provide a collaborative information-sharing environment for various stakeholders, including operational users, acquisition personnel, engineering personnel, and logistics and sustainment personnel. Examining the processes that DE can support in the systems engineering life cycle (SELC) is a primary step in the DE adoption and implementation journey. Through an analysis of the U.S. Department of Defense's (DoD) Office of the Secretary of Defense (OSD) Digital Engineering Strategy and its implementation, together with examples of DE implementation by industry and technical organizations, this paper will describe current DE processes and best practices for implementing DE across an enterprise. This will help identify the capabilities, environment, and infrastructure needed to develop a potential roadmap for implementing DE practices consistent with an organization's business strategy. A capability maturity matrix will be provided to assess the organization's DE maturity, emphasizing how all the SELC elements interlink to form a cohesive ecosystem. If implemented, DE can increase efficiency and improve the quality and outcomes of systems engineering processes.

Keywords: digital engineering, digital environment, digital maturity model, single source of truth, systems engineering life-cycle

Procedia PDF Downloads 81
370 An Impairment of Spatiotemporal Gait Adaptation in Huntington's Disease when Navigating around Obstacles

Authors: Naznine Anwar, Kim Cornish, Izelle Labuschagne, Nellie Georgiou-Karistianis

Abstract:

Falls and subsequent injuries are common features in symptomatic Huntington's disease (symp-HD) individuals. As part of daily walking, navigating around obstacles may incur a greater risk of falls in symp-HD. We designed an obstacle-crossing experiment to examine adaptive gait dynamics and to identify underlying spatiotemporal gait characteristics that could increase the risk of falling in symp-HD. The experiment involved navigating around one or two ground-based obstacles under two conditions (walking while navigating around one obstacle, and walking while navigating around two obstacles). A total of 32 participants were included: 16 symp-HD and 16 age- and sex-matched healthy controls (HC). We used a GAITRite electronic walkway to examine the spatiotemporal gait characteristics and inter-trial gait variability while participants walked at their preferred speed. A minimum of six trials was completed for the baseline free walk and for each condition while navigating around the obstacles. For analysis, we separated all walking steps into three phases: approach steps, navigating steps, and recovery steps. The mean and inter-trial variability (within-participant standard deviation) of each gait variable were calculated across the six trials. We found that symp-HD individuals significantly decreased their gait velocity and step length and increased step duration variability during the navigating and recovery steps compared with the approach steps. In contrast, HC individuals showed less difference in gait velocity, step time, and step length variability from baseline in both conditions as well as across all three phases. These findings indicate that increased spatiotemporal gait variability may be a compensatory strategy adopted by symp-HD individuals to effectively navigate obstacles during walking. Such findings may benefit clinicians in developing strategies for HD individuals to improve functional outcomes in home- and hospital-based rehabilitation programs.
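The inter-trial variability measure used above, the within-participant standard deviation of a gait variable across repeated trials, can be computed directly; the step-length values below are hypothetical, not study data:

```python
import statistics

def within_participant_sd(trial_values):
    """Inter-trial gait variability for one participant: the sample standard
    deviation of a gait variable (e.g. step length) across repeated trials."""
    return statistics.stdev(trial_values)

# Hypothetical step lengths (cm) for one participant over six navigation trials
trials = [62.1, 60.8, 63.0, 59.5, 61.2, 60.4]
variability = within_participant_sd(trials)
```

Higher values of this statistic during the navigating and recovery phases are what distinguish the symp-HD group from controls in the analysis above.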

Keywords: Huntington’s disease, gait variables, navigating around obstacle, basal ganglia dysfunction

Procedia PDF Downloads 432
369 Effect of Climate Change on Rainfall Induced Failures for Embankment Slopes in Timor-Leste

Authors: Kuo Chieh Chao, Thishani Amarathunga, Sangam Shrestha

Abstract:

Rainfall-induced slope failures are among the most damaging and disastrous natural hazards and occur frequently throughout the world. This type of sliding mainly occurs in the zone above the groundwater level in silty/sandy soils. When rainwater begins to infiltrate into the vadose zone of the soil, the negative pore-water pressure tends to decrease and reduce the shear strength of the soil material. Climate change has resulted in excessive and unpredictable rainfall all around the world, resulting in landslides with dire consequences for human lives and infrastructure. Such problems could be overcome by examining in detail the causes of such slope failures and recommending effective repair plans for vulnerable locations while considering future climatic change. The area selected for this study is located in the road rehabilitation section from Maubara to Mota Ain in Timor-Leste. Slope failures and cracks occurred there in 2013 and, after repairs, reoccurred in 2017 subsequent to heavy rains. Both observed and future predicted climate data were analysed to understand past and future severe precipitation conditions. Observed climate data were collected from the NOAA global climate data portal, and the CORDEX data portal was used to collect Regional Climate Model (RCM) future predicted climate data. Both observed and RCM data were extracted to location-based data using ArcGIS software. The linear scaling method was used for the bias correction of future data, and the bias-corrected climate data were assigned to GeoStudio software.
Precipitation in the wet seasons (December to March) of 2007–2013 was higher than in the 2001–2006 period, exceeding the usual monthly average precipitation of 160 mm by nearly 40%. The results of seepage analyses, which were carried out using the SEEP/W model with observed climate data, clearly demonstrated that the pore-water pressure within the fill slope increased significantly due to increased infiltration during the wet season of 2013. One main Regional Climate Model (RCM) was analysed in order to predict future climate variation under two Representative Concentration Pathways (RCPs). The 76-year projection period from 2014 shows that precipitation becomes considerably higher in the future under both the RCP 4.5 and RCP 8.5 emission scenarios. Critical pore-water pressure conditions during 2014–2090 were used in order to recommend appropriate remediation methods. Results of slope stability analyses indicated that the factor of safety of the fill slopes was reduced from 1.226 in the dry season to 0.793 in the wet season of 2013. Results of future slope stability, obtained using the SLOPE/W model for the RCP emission scenarios, show that the use of tieback anchors and geogrids in slope protection could be effective in increasing the stability of slopes to an acceptable level during the wet seasons. Moreover, measures such as monitoring slopes showing signs of, or susceptible to, movement and installing surface protection could be used to increase the stability of slopes.
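The linear scaling method named above for bias correction is commonly implemented by multiplying each raw RCM precipitation value by the ratio of observed to simulated long-term monthly means; the exact variant used in the study may differ, and the numbers below are hypothetical:

```python
def linear_scaling_correction(model_precip_mm, obs_mean_mm, model_mean_mm):
    """Linear-scaling bias correction for precipitation: each raw RCM value is
    scaled by the ratio of the observed to the simulated monthly mean, so the
    corrected series reproduces the observed mean. A common simple approach."""
    factor = obs_mean_mm / model_mean_mm
    return [p * factor for p in model_precip_mm]

# Hypothetical wet-season month: RCM mean 120 mm vs. observed mean 160 mm
raw = [100.0, 140.0, 120.0]
corrected = linear_scaling_correction(raw, obs_mean_mm=160.0, model_mean_mm=120.0)
```

By construction the corrected series has the observed mean, which is why the method is a standard first step before feeding projected rainfall into a seepage model.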

Keywords: climate change, precipitation, SEEP/W, SLOPE/W, unsaturated soil

Procedia PDF Downloads 127
368 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology

Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey

Abstract:

In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This calls for software capable of quickly processing and reliably visualizing diffusion data, equipped with tools for its analysis in terms of different tasks. We are developing the «MRDiffusionImaging» software in standard C++. The subject part has been moved to separate class libraries and can be used on various platforms. The user interface is built on Windows WPF (Windows Presentation Foundation), a technology for building Windows applications with access to all components of the .NET 5 or .NET Framework ecosystem. One important feature is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize, and set properties of objects with hierarchical relationships. Graphics are rendered using DirectX. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI) data and allows loading and viewing images sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues unrelated to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has also been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is geometrically interpreted as an ellipsoid, an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in a single algorithm for segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF).
A tool for calculating mean diffusivity and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and further assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: deterministic (fiber assignment by continuous tracking) and probabilistic, using the Hough transform. The probabilistic algorithm tests candidate curves in each voxel, assigning to each a score computed from the diffusion data, and then selects the curves with the highest scores as potential anatomical connections. In the context of functional radiosurgery, it is possible to reduce the volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. The «MRDiffusionImaging» software will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing, analysis, and inclusion in the process of radiotherapy planning and evaluation of its results.
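The mean diffusivity and fractional anisotropy maps described above derive from the eigenvalues of the per-voxel diffusion tensor. A minimal sketch of the standard formulas (in Python rather than the authors' C++; the function name is an assumption):

```python
import numpy as np

def md_fa(tensor):
    """Mean diffusivity (MD) and fractional anisotropy (FA) of a 3x3 diffusion tensor.

    MD is the mean of the eigenvalues; FA = sqrt(3/2 * sum((lam - MD)^2) / sum(lam^2)),
    ranging from 0 (isotropic diffusion) to 1 (diffusion along a single direction).
    """
    ev = np.linalg.eigvalsh(tensor)          # eigenvalues of the symmetric tensor
    md = ev.mean()
    fa = np.sqrt(1.5 * np.sum((ev - md) ** 2) / np.sum(ev ** 2))
    return md, fa

# Isotropic tensor: FA = 0.  A "stick" tensor (one nonzero eigenvalue): FA = 1.
md_iso, fa_iso = md_fa(np.eye(3) * 2e-3)
md_stick, fa_stick = md_fa(np.diag([1.0, 0.0, 0.0]))
```

Applying this per voxel over a fitted tensor field yields the quantitative MD and FA maps.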

Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography

Procedia PDF Downloads 72
367 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates

Authors: Jennifer Buz, Alvin Spivey

Abstract:

The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions: for example, companies can now purchase protection for vegetated land due to be clear-cut, or purchase barren land for reforestation. Therefore, by actively preventing the destruction or decay of plant matter, or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique that enables remote identification of material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing is ultimately tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors can be observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which the species is known. To accomplish this, we collaborated with researchers at the Teakettle Experimental Forest.
Our remote sensing data come from our airborne “Kato” sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations in atmospheric conditions throughout the data collection. To do this, we ran an open-source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was built using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high-resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications will feed directly into our carbon mass model.
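The abstract reports that a wide variety of machine learning classifiers identified species from reflectance spectra; as a minimal stand-in (not the authors' pipeline; function names and the classifier choice are assumptions), a nearest-centroid classifier over 472-band spectra looks like this:

```python
import numpy as np

def train_centroids(spectra, labels):
    """Mean reflectance spectrum per species (a nearest-centroid classifier).

    spectra: (n_pixels, n_bands) array of reflectance values.
    labels:  (n_pixels,) array of species names from the ground-truth stem map.
    """
    return {lab: spectra[labels == lab].mean(axis=0) for lab in np.unique(labels)}

def classify(spectrum, centroids):
    """Assign a pixel spectrum to the species with the closest mean spectrum."""
    return min(centroids, key=lambda lab: np.linalg.norm(spectrum - centroids[lab]))

# Synthetic example: two species with well-separated mean reflectance levels.
rng = np.random.default_rng(0)
spectra = np.vstack([rng.normal(0.2, 0.01, (5, 472)),   # "fir" pixels
                     rng.normal(0.6, 0.01, (5, 472))])  # "pine" pixels
labels = np.array(["fir"] * 5 + ["pine"] * 5)
centroids = train_centroids(spectra, labels)
```

Real species discrimination relies on subtler spectral features, so stronger classifiers (as the authors used) are typically needed; the centroid model only illustrates the pixel-to-species mapping.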

Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation

Procedia PDF Downloads 107
366 Effect of Graphene on the Structural and Optical Properties of Ceria:Graphene Nanocomposites

Authors: R. Udayabhaskar, R. V. Mangalaraja, V. T. Perarasu, Saeed Farhang Sahlevani, B. Karthikeyan, David Contreras

Abstract:

Bandgap engineering of CeO₂ nanocrystals is of high interest to many research groups seeking to meet the requirements of desired applications. The band gap of CeO₂ nanostructures can be modified by varying the particle size, morphology, and dopants. Anchoring metal oxide nanostructures on graphene sheets results in composites with properties improved over those of the parent materials: the graphene sheets act as a support for growth, influence the morphology, and provide external paths for electronic transitions. Thus, the controllable synthesis of ceria:graphene composites with various morphologies, and an understanding of their optical properties, is highly important for the use of these materials in various applications. The development of ceria and ceria:graphene composites via low-cost, rapid synthesis with tunable optical properties is still desirable. In this work, we discuss the synthesis of pure ceria (nanospheres) and ceria:graphene composites (nano-rice-like morphology) using a commercial microwave oven as a cost-effective and environmentally friendly approach. The influence of graphene on the crystallinity, morphology, band gap, and luminescence of the synthesized samples was analyzed. The average crystallite size of the CeO₂ nanostructures, obtained using the Scherrer formula, showed a decreasing trend with increasing graphene loading. The composite with higher graphene loading clearly depicted a nano-rice-like morphology with a diameter below 10 nm and a length over 50 nm. The presence of graphene- and ceria-related vibrational modes (100-4000 cm⁻¹) confirmed the successful formation of the composites. We observed an increase in band gap (blue shift) with increasing graphene loading. Further, the luminescence related to various F-centers was quenched in the composites. The authors gratefully acknowledge the FONDECYT Project No.: 3160142 and BECA Conicyt National Doctorado 2017 No.
21170851, Government of Chile, Santiago, for the financial assistance.
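The Scherrer estimate of average crystallite size used above can be illustrated with a short sketch (the function name and inputs are assumptions; the default wavelength is Cu Kα and the shape factor K = 0.9, common choices in X-ray diffraction analysis):

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Scherrer formula: crystallite size D = K * lambda / (beta * cos(theta)).

    fwhm_deg:      peak full width at half maximum (degrees 2-theta).
    two_theta_deg: diffraction peak position (degrees 2-theta).
    Returns D in the same units as the wavelength (nm here).
    """
    beta = math.radians(fwhm_deg)             # peak broadening in radians
    theta = math.radians(two_theta_deg / 2.0) # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative values: a broad peak near the ceria (111) reflection (~28.5 deg 2-theta)
# yields a crystallite size of roughly 10 nm, consistent with the sub-10-nm trend
# reported for the graphene-loaded composites.
d_nm = scherrer_size(fwhm_deg=0.8, two_theta_deg=28.5)
```

Broader peaks (larger FWHM) give smaller crystallite sizes, which is why the decreasing size trend with graphene loading appears as peak broadening in the diffractograms.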

Keywords: ceria, graphene, luminescence, blue shift, band gap widening

Procedia PDF Downloads 181
365 The State of Oral Health after COVID-19 Lockdown: A Systematic Review

Authors: Faeze Omid, Morteza Banakar

Abstract:

Background: The COVID-19 pandemic has had a significant impact on global health and healthcare systems, including oral health. The lockdown measures implemented in many countries have led to changes in oral health behaviors, access to dental care, and the delivery of dental services. However, the extent of these changes and their effects on oral health outcomes remain unclear. This systematic review aims to synthesize the available evidence on the state of oral health after the COVID-19 lockdown. Methods: We conducted a systematic search of electronic databases (PubMed, Embase, Scopus, and Web of Science) and grey literature sources for studies reporting on oral health outcomes after the COVID-19 lockdown. We included studies published in English between January 2020 and March 2023. Two reviewers independently screened the titles, abstracts, and full texts of potentially relevant articles and extracted data from the included studies. We used a narrative synthesis approach to summarize the findings. Results: Our search identified 23 studies from 12 countries, including cross-sectional surveys, cohort studies, and case reports. The studies reported on changes in oral health behaviors, access to dental care, and the prevalence and severity of dental conditions after the COVID-19 lockdown. Overall, the evidence suggests that the lockdown measures had a negative impact on oral health outcomes, particularly among vulnerable populations. There were decreases in dental attendance, increases in dental anxiety and fear, and changes in oral hygiene practices. Furthermore, there were increases in the incidence and severity of dental conditions, such as dental caries and periodontal disease, and delays in the diagnosis and treatment of oral cancers.
Conclusion: The COVID-19 pandemic and associated lockdown measures have had significant effects on oral health outcomes, with negative impacts on oral health behaviors, access to care, and the prevalence and severity of dental conditions. These findings highlight the need for continued monitoring and interventions to address the long-term effects of the pandemic on oral health.

Keywords: COVID-19, oral health, systematic review, dental public health

Procedia PDF Downloads 62
364 Analysis of Potential Associations of Single Nucleotide Polymorphisms in Patients with Schizophrenia Spectrum Disorders

Authors: Tatiana Butkova, Nikolai Kibrik, Kristina Malsagova, Alexander Izotov, Alexander Stepanov, Anna Kaysheva

Abstract:

Relevance. The genetic risk of developing schizophrenia is determined by two factors: single nucleotide polymorphisms and gene copy number variations. The search for serological markers for early diagnosis of schizophrenia is driven by the fact that the first five years of the disease are accompanied by significant biological, psychological, and social changes; it is during this period that pathological processes are most amenable to correction. The aim of this study was to analyze single nucleotide polymorphisms (SNPs) hypothesized to potentially influence the onset and development of the endogenous process. Materials and Methods. A total of 73 single nucleotide polymorphism variants were analyzed. The study included 48 patients undergoing inpatient treatment at "Psychiatric Clinical Hospital No. 1" in Moscow, comprising 23 females and 25 males. Inclusion criteria: patients aged 18 and above; a diagnosis according to ICD-10 of F20.0, F20.2, F20.8, F21.8, F25.1, or F25.2; and voluntary informed consent. Exclusion criteria: concurrent somatic or neurological pathology, neuroinfections, epilepsy, organic central nervous system damage of any etiology, or regular use of medication; substance abuse and alcohol dependence; and pregnancy or breastfeeding. Clinical and psychopathological assessment was complemented by psychometric evaluation using the PANSS scale at the beginning and end of treatment. The duration of observation during therapy was 4-6 weeks. Total DNA extraction was performed using the QIAamp DNA kit. Blood samples were processed on the Illumina HiScan and genotyped for 652,297 markers on the Infinium Global Screening Array-24 v2.0; imputation was performed with the IMPUTE2 program using parameters Ne=20,000 and k=90. Additional filtration was performed based on INFO>0.5 and genotype probability>0.5. Quality control of the obtained DNA was conducted using agarose gel electrophoresis, with each tested sample having a volume of 100 µL.
Results. It was observed that several SNPs exhibited gender dependence: we identified groups of single nucleotide polymorphisms with a membership of 80% or more in either the female or male gender. These SNPs included rs2661319, rs2842030, rs4606, rs11868035, rs518147, rs5993883, and rs6269. Another noteworthy finding was the limited combination of SNPs sufficient to manifest clinical symptoms leading to hospitalization. Among all 48 patients, each of whom was analyzed for deviations in 73 SNPs, it was discovered that the combination of SNPs involved in the manifestation of pronounced clinical symptoms of schizophrenia was 19±3 out of 73 possible. In this study, the frequency of occurrence of single nucleotide polymorphisms also varied: the most frequently observed SNPs were rs4849127 (in 90% of cases), rs1150226 (86%), rs1414334 (75%), rs10170310 (73%), rs2857657, and rs4436578 (71%). Conclusion. Thus, the results of this study provide additional evidence that these genes may be associated with the development of schizophrenia spectrum disorders. However, we cannot rule out the hypothesis that these polymorphisms are in linkage disequilibrium with other functionally significant polymorphisms that may actually be involved in schizophrenia spectrum disorders. It has been shown that missense SNPs by themselves are likely not causative of the disease but are in strong linkage disequilibrium with non-functional SNPs that may indeed contribute to disease predisposition.
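The post-imputation filtering (INFO>0.5, genotype probability>0.5) and the per-SNP occurrence frequencies reported above can be sketched as follows. This is an illustrative reconstruction only: the record layout, function names, and toy data are assumptions, not the study's pipeline.

```python
from collections import Counter

def filter_imputed(variants, info_min=0.5, gp_min=0.5):
    """Keep imputed variants passing both quality thresholds:
    imputation INFO score > info_min and genotype probability > gp_min."""
    return [v for v in variants if v["info"] > info_min and v["gp"] > gp_min]

def snp_frequency(patients):
    """Fraction of patients carrying each SNP (e.g. rs4849127 seen in 90% of cases).

    patients: list of sets, each the SNPs with deviations found in one patient.
    """
    counts = Counter(snp for carried in patients for snp in carried)
    n = len(patients)
    return {snp: count / n for snp, count in counts.items()}

# Toy data: rs2 fails the INFO threshold and is dropped before frequency analysis.
variants = [
    {"id": "rs4849127", "info": 0.9, "gp": 0.8},
    {"id": "rs2",       "info": 0.4, "gp": 0.9},
]
kept = filter_imputed(variants)
freqs = snp_frequency([{"rs4849127", "rs1150226"}, {"rs4849127"}])
```

With the study's 48 patients and 73 candidate SNPs, the same counting yields the reported occurrence percentages and the 19±3 SNPs per patient figure.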

Keywords: gene polymorphisms, genotyping, single nucleotide polymorphisms, schizophrenia

Procedia PDF Downloads 64