Search results for: James NDA Jacob
137 Dosimetric Dependence on the Collimator Angle in Prostate Volumetric Modulated Arc Therapy
Authors: Muhammad Isa Khan, Jalil Ur Rehman, Muhammad Afzal Khan Rao, James Chow
Abstract:
Purpose: This study investigates the dose-volume variations in the planning target volume (PTV) and organs-at-risk (OARs) using different collimator angles for smart arc prostate volumetric modulated arc therapy (VMAT). Awareness of the effect of collimator angle on PTV coverage and OAR sparing is essential for the planner, because the optimization involves numerous treatment constraints, producing a complex, unstable and computationally challenging problem in its search for an optimal plan in a reasonable time. Materials and Methods: Single-arc VMAT plans were generated on a Harold phantom with the collimator angle varied systematically from 0° to 90°, and a new treatment plan was optimized for each collimator angle. We analyzed the conformity index (CI), homogeneity index (HI), gradient index (GI), monitor units (MUs), dose-volume histogram, and mean and maximum doses to the PTV. We also examined the OARs (bladder, rectum and femoral heads) using the dose-volume criteria in the treatment plan (D30%, D50%, V30Gy and V38Gy for the bladder and rectum; D5%, V14Gy and V22Gy for the femoral heads), the dose-volume histogram, and mean and maximum doses for smart arc VMAT at different collimator angles. Results: No significant difference was found in VMAT optimization across all studied collimator angles. However, at the 0.5% accuracy level, a collimator angle of 45° provides a higher CI and lower HI; a collimator angle of 15° provides similarly low HI values. A collimator angle of 75° proved good for rectum and right femur sparing, while collimator angles of 90° and 30° were found good for rectum and left femur sparing, respectively. The PTV dose coverage statistics for each plan are largely independent of the collimator angle.
Conclusion: This study shows that the planner is free to choose any collimator angle from 0° to 90° for PTV coverage, and can select a suitable collimator angle to spare specific OARs.
Keywords: VMAT, dose-volume histogram, collimator angle, organs-at-risk
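For reference, the plan-quality indices compared in this abstract have widely used literature definitions; the study does not state its exact formulas, so the RTOG conformity index, the ICRU-83 homogeneity index and the common gradient-index ratio are assumed here as a minimal sketch:

```python
def conformity_index(piv, tv):
    """RTOG-style CI: prescription isodose volume / target volume (ideal = 1)."""
    return piv / tv

def homogeneity_index(d2, d98, d50):
    """ICRU-83 HI: (D2% - D98%) / D50% (ideal = 0)."""
    return (d2 - d98) / d50

def gradient_index(v_half_rx, v_rx):
    """GI: volume covered by 50% of the prescription dose / volume
    covered by 100% of it (smaller = steeper dose fall-off)."""
    return v_half_rx / v_rx
```

For example, a plan whose prescription isodose covers 110 cm³ around a 100 cm³ PTV has CI = 1.1; a lower HI indicates a more homogeneous target dose, and a lower GI a steeper fall-off outside the target.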
Procedia PDF Downloads 512

136 Isolation Enhancement of Compact Dual-Band Printed Multiple Input Multiple Output Antenna for WLAN Applications
Authors: Adham M. Salah, Tariq A. Nagem, Raed A. Abd-Alhameed, James M. Noras
Abstract:
Recently, the demand for wireless communications systems covering more than one frequency band (multi-band) with high data rates has increased for both fixed and mobile services. Multiple Input Multiple Output (MIMO) technology is one of the significant solutions for meeting these requirements and for achieving the maximum channel capacity of wireless communications systems. The main issue with MIMO antennas, especially in portable devices, is the compact space available, which limits the physical separation between the radiating elements. This degrades the performance of MIMO antennas by increasing the mutual coupling between the radiating elements: the closer the elements, the stronger the coupling. This paper presents a low-profile dual-band (2×1) MIMO antenna that works at 2.4 GHz, 5.3 GHz and 5.8 GHz for wireless local area network (WLAN) applications. A neutralization line (NL) technique was used to enhance the isolation, introducing a strip line with a length of λg/4 at the isolation frequency (2.4 GHz) between the radiating elements. The overall dimensions of the antenna are 33.5 x 36 x 1.6 mm³. The fabricated prototype shows good agreement between the simulated and measured results. The antenna impedance bandwidths are 2.38–2.75 GHz and 4.4–6 GHz for the lower and upper bands, respectively; the reflection coefficient and mutual coupling are better than -25 dB in both bands. The MIMO antenna performance characteristics are reported in terms of the scattering parameters, envelope correlation coefficient (ECC), total active reflection coefficient, capacity loss, antenna gain, and radiation patterns. Analysis of these characteristics indicates that the design is appropriate for WLAN terminal applications.
Keywords: ECC, neutralization line, MIMO antenna, multi-band, mutual coupling, WLAN
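The ECC reported above is commonly estimated from the measured S-parameters using the lossless-antenna formula of Blanch et al.; a sketch with illustrative values (not the paper's measured data):

```python
def ecc_from_s_params(s11, s12, s21, s22):
    """Envelope correlation coefficient of a 2-port MIMO antenna from
    complex S-parameters, assuming lossless (high-efficiency) antennas."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2) *
           (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den

# Illustrative values: well-matched ports (|S11| = 0.1, about -20 dB) and
# low coupling (|S21| = 0.05, about -26 dB) give a very small ECC.
ecc = ecc_from_s_params(0.1 + 0j, 0.05 + 0j, 0.05 + 0j, 0.1 + 0j)
```

An ECC well below 0.5 is the usual acceptance threshold for MIMO diversity performance.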
Procedia PDF Downloads 133

135 The Role of Chennai NGOs in Combatting Human Trafficking
Authors: Nisha James, Shubha Ranganathan
Abstract:
Sex trafficking is a type of human trafficking involving prostitution of individuals for sexual exploitation. The stigma and social isolation victims face in society often make it difficult for them to be rehabilitated from trafficking, due to which many of them continue in prostitution for years after being sex trafficked. Victims are subjected to violations of their fundamental human rights, deprived of basic medical facilities and undergo long-term abuse. This paper focuses on the role of Non-Governmental Organizations (NGOs) in the rescue and rehabilitation of victims of sex trafficking. Semi-structured interviews were conducted with 26 survivors of sex trafficking, five sex workers and 14 non-community staff members of a project-running NGO in the city of Chennai in South India. Chennai has a number of NGOs involved in HIV/AIDS awareness and prevention programs, and in many cases rehabilitation of sex trafficking victims is also a mandate of these NGOs. This particular NGO was also involved in development activities towards the eradication of HIV/AIDS; for instance, it was engaged in inculcating safe sex practices among high-risk groups such as sex workers and in fighting for sex worker rights. The study found that the NGO's role in combatting sex trafficking is overshadowed by the way it approaches issues related to HIV/AIDS. Further, its activities depend solely on funding. As international funding for HIV/AIDS has gradually been withdrawn, there have been problems such as reductions in the salaries of the project staff, outreach workers and peer educators, many of whom were survivors of sex trafficking who had been able to survive on their wages instead of continuing in prostitution.
To date, the project funding has helped make them aware of the health and social consequences of continuing in prostitution and has supported them socioeconomically, but the lack of funding may also push the NGO workers into unemployment and poverty, and eventually into being re-trafficked. The study concludes by pointing to the need to disengage anti-trafficking efforts from HIV/AIDS-related programs.
Keywords: non-governmental organization role, non-governmental organization staff, sex trafficking survivors, sex workers
Procedia PDF Downloads 301

134 Pilgrimage: Between Culture and Religion. Case Study of Pilgrimage in Shia Tradition in Indonesia, Traditional Philosophy Approach of Seyyed Hosein Nasr and Religious Experience of William James
Authors: Ma'ruf
Abstract:
Pilgrimage has universal value and is found in every religion. Islam is no exception: pilgrimage carries ritual value, forms part of the religion, and is also a social and cultural practice. The tradition of pilgrimage is deeply rooted in Indonesian society, because Islam entered the archipelago through Sufism (tasawuf). In the Sufi tradition, the connection of the human spirit (ruh) to the spirit (ruh) of God must pass through an intermediary (wasilah) appointed by God himself: the Prophet Muhammad and the wali (saints). Pilgrimage rituals usually involve reciting prayers praising God, the Prophet and the wali, and then conveying one's intent (hajat). The procession is usually not confined to the home but is completed by directly visiting the tombs of the saints. The tradition of pilgrimage in Indonesia continues to be maintained and remains attached to the traditions of the Nahdiyin (NU followers). The relationship with God is manifested in wasilah prayers to God through the Prophet Muhammad, the best companions of the Prophet and the nine wali (Wali Songo), who were influential in spreading Islam in Java. The tradition of pilgrimage in Indonesia is also linked to the Shia community, whose followers have grown in number, especially after the Iranian Islamic revolution of 1979. In the Shia community, as among NU members, pilgrims pray with the supplication of tawasul to the Prophet and the Shia Imams. While NU members make pilgrimages to the tombs of the Wali Songo in Java, Shia residents perform pilgrimage rituals abroad, usually as one package with an umrah trip: a pilgrimage to the tomb of the Prophet, proceeding to the tombs of the Shia Imams in Iran and Iraq. Pilgrimage as a ritual in the Indonesian Shia tradition has increased together with the growing number of Shia residents, accompanied by an increasing awareness (syi'isme) of the bond with the Shia Imams.
In certain months (Arbaeen, Ashura), Shia pilgrims routinely perform the pilgrimage, along with a growing number of spiritual travel packages.
Keywords: traditional approach, religious experience, culture, religion, pilgrimage, Shia
Procedia PDF Downloads 383

133 Maintenance Performance Measurement Derived Optimization: A Case Study
Authors: James M. Wakiru, Liliane Pintelon, Peter Muchiri, Stanley Mburu
Abstract:
Maintenance performance measurement (MPM) represents an integrated aspect that considers both operational and maintenance-related aspects while evaluating the effectiveness and efficiency of maintenance, to ensure assets are working as they should. Three salient issues need to be addressed for an asset-intensive organization to employ an MPM-based framework to optimize maintenance. Firstly, the organization should establish the important performance metric(s), in this case the maintenance objective(s), on which it will focus. The second issue entails aligning the maintenance objective(s) with maintenance optimization; this is achieved by deriving maintenance performance indicators that subsequently form an objective function for the optimization program. Lastly, the objective function is employed in an optimization program to derive maintenance decision support. In this study, we develop a framework that first identifies the crucial maintenance performance measures and then employs them to derive maintenance decision support. The proposed framework is demonstrated in a case study of a geothermal drilling rig, where the objective function is evaluated using a simulation-based model whose parameters are derived from empirical maintenance data. Availability, reliability and maintenance inventory emerge as essential objectives requiring further attention. A simulation model is developed mimicking the operations and maintenance of a drilling rig, in which the sub-systems undergo imperfect corrective (CM) and preventive (PM) maintenance, with the total cost as the primary performance measurement. Moreover, three maintenance spare inventory policies are considered: classical (retaining stocks for a contractual period), vendor-managed inventory with consignment stock, and a periodic-monitoring order-to-stock (s, S) policy.
Optimization results indicate that adopting the (s, S) inventory policy, increasing the PM interval and reducing reliance on CM actions offer improved availability and reduced total costs.
Keywords: maintenance, vendor-managed, decision support, performance, optimization
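To illustrate the periodic-review (s, S) policy evaluated above, here is a toy discrete simulation sketch; the demand model, review interval and all numbers are assumptions for illustration, not the case study's empirical parameters:

```python
import random

def simulate_sS(s, S, periods=365, review=7, seed=1):
    """Toy (s, S) spares policy: every `review` days the stock is checked,
    and if it is at or below the reorder point s, an order brings it up to S.
    Each day a failure consumes one spare with probability 0.25 (assumed)."""
    rng = random.Random(seed)
    stock, stockouts, units_ordered = S, 0, 0
    for day in range(periods):
        if day % review == 0 and stock <= s:
            units_ordered += S - stock
            stock = S
        demand = rng.choice([0, 0, 0, 1])
        if demand > stock:
            stockouts += 1  # failure waits: rig is down for lack of a spare
        else:
            stock -= demand
    return stockouts, units_ordered
```

In this toy model a generous policy such as (s=10, S=50) eliminates stockouts, while a tight (s=0, S=1) policy produces frequent ones; in the study, such stockout counts would feed the availability and total-cost terms of the objective function.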
Procedia PDF Downloads 125

132 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data
Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple
Abstract:
In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, and the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and against a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. Both raster-based and vector-based accuracy assessments found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network
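The thinning step above can be implemented with any morphological skeletonization; the abstract does not name its algorithm, so the classic Zhang-Suen thinning is sketched below as an illustrative stand-in, applied to a toy binary water mask:

```python
def zhang_suen_thin(img):
    """Zhang-Suen thinning on a binary grid (list of lists of 0/1).
    Assumes a border of zeros; returns a new, thinned grid."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]

    def neighbours(y, x):
        # P2..P9, clockwise starting from the north neighbour
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_zero = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    p = neighbours(y, x)
                    b = sum(p)                              # non-zero neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))              # 0 -> 1 transitions
                    if step == 0:
                        ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and ok:
                        to_zero.append((y, x))
            for y, x in to_zero:
                img[y][x] = 0
                changed = True
    return img

# Toy water mask: a 3-pixel-thick, 5-pixel-long "river" (1 = water).
mask = [[0] * 9 for _ in range(7)]
for y in range(2, 5):
    for x in range(2, 7):
        mask[y][x] = 1
skeleton = zhang_suen_thin(mask)
```

The thick bar is reduced to a thin centre line of pixels, which in the study's workflow would then be vectorized and assembled into the geometric network.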
Procedia PDF Downloads 139

131 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, and evaluated their performance. Findings: The Gradient Boosting Classifier performed best, achieving a median recall of 92.17%, a median area under the receiver operating characteristic curve (AUC) of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction.
We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
Keywords: diabetes, machine learning, prediction, biomarkers
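The feature-selection-then-classify pipeline described above can be sketched with scikit-learn; the synthetic data and all hyperparameters below are illustrative assumptions, not the study's epigenetic dataset or settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC

# Stand-in data: 200 samples with 15 candidate "epigenetic" features.
X, y = make_classification(n_samples=200, n_features=15, n_informative=5,
                           random_state=0)

# Recursive feature elimination with cross-validation, linear-kernel SVM.
selector = RFECV(SVC(kernel="linear"), step=1, cv=3)
X_selected = selector.fit_transform(X, y)

# One of the six evaluated model families; the study's best performer.
clf = GradientBoostingClassifier(random_state=0).fit(X_selected, y)
```

In practice each of the six models would be scored on held-out folds (recall, AUC, accuracy, precision) rather than on the training data, as done in the study's evaluation.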
Procedia PDF Downloads 55

130 Transorbital Craniectomy for Treatment of Frontal Lobe and Olfactory Bulb Neoplasia in Two Canids
Authors: Kathryn L. Duncan, Charles A. Kuntz, James O. Simcock
Abstract:
A surgical approach to the cranium for the treatment of frontal lobe and olfactory bulb neoplasia in dogs is described in this report, which provided excellent access for visualisation and removal of gross neoplastic tissue. An 8-year-old spayed female Shih Tzu crossbreed dog (dog 1) and a 13-year-old neutered male Miniature Fox Terrier (dog 2) were evaluated for removal of neoplasms involving both the frontal lobe and olfactory bulb. Both dogs presented with abnormal neurological clinical signs, decreased menace responses, and behavioural changes. Additionally, dog 2 presented with compulsive circling and generalized tonic-clonic seizure activity. Computed tomography was performed in both dogs, and MRI was also performed in dog 1. Imaging was consistent with frontal lobe and olfactory bulb neoplasia. A transorbital frontal bone craniectomy, with orbital ligament desmotomy and ventrolateral retraction of the globe, was performed in both cases without complication. Dog 1 had a focal area of lysis in the frontal bone adjacent to the neoplasm in the frontal lobe. The presence of the bone defect provided part of the impetus for this approach, as it would permit resection of the lytic bone. In addition, the neoplasms would be surgically accessible without encountering interposed brain parenchyma, reducing the risk of iatrogenic injury. Both dogs were discharged from the hospital within 72 hours post-operatively, both with normal mentation. Case 1 had a histopathologic diagnosis of malignant anaplastic neoplasm; the tumour recurred 101 days postoperatively, and the patient was euthanized. Case 2 was diagnosed with a meningioma and was neurologically normal at 294 days postoperatively. This transorbital surgical approach allowed successful removal of the intracranial frontal lobe and olfactory bulb neoplasms in two dogs.
This approach should be considered for dogs with lateralized frontal lobe and olfactory bulb neoplasms that are closely associated with the suborbital region of the frontal bone.
Keywords: neurosurgery, small animal surgery, surgical oncology, veterinary neurology
Procedia PDF Downloads 152

129 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India
Authors: Vinu Elias Jacob, Manoj Kumar Kini
Abstract:
Disasters of all types are occurring more frequently and are becoming more costly than ever due to various man-made factors, including climate change. Better utilisation of the concepts of governance and management within disaster risk reduction is inevitable and of utmost importance. There is a need to explore the role of pre- and post-disaster public policies, and the role of urban planning/design in shaping the opportunities of households, individuals and, collectively, settlements to achieve recovery. Governance strategies that can better support the integration of disaster risk reduction and management have to be examined. The main aim is thereby to build the resilience of individuals and communities, and thus of the states too. Resilience is a term usually linked to the fields of disaster management and mitigation, but today it has become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts through improved mitigation, preparedness, response, and recovery. The growing population of the world has increased the inflow and use of resources, putting pressure on natural systems and creating inequity in the distribution of resources. This makes cities vulnerable to both natural and man-made disasters. Each urban area needs elaborate studies and study-based strategies to proceed in the discussed direction. Cochin, in Kerala, is the state's largest and fastest-growing city, with a population of more than 26 lakh. The main concern addressed in this paper is making cities resilient by designing a framework of strategies based on urban design principles for an immediate response system, focusing especially on the city of Cochin, Kerala, India. The paper discusses understanding the spatial transformations due to disasters and the role of spatial planning in the context of significant disasters.
The paper also aims to develop a model, taking into consideration factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density and ecology, that can be implemented in any city in any context. Guidelines are proposed for the smooth evacuation of people through hassle-free transport networks, protecting vulnerable areas in the city, providing adequate open spaces for shelters and gatherings, and making basic amenities available to the affected population within reachable distance, using the tools of urban design. Strategies at the city level and neighbourhood level have been developed from inferences drawn from vulnerability analysis and case studies.
Keywords: disaster management, resilience, spatial planning, spatial transformations
Procedia PDF Downloads 296

128 Perception of Mass Media Usage in Educational Development of Rural Communities in Nigeria
Authors: Aniekan James Akpan, Inemesit Akpan Umoren, Uduak Iwok
Abstract:
From prehistoric and primitive cultures, education was seen as a process of cultural transmission, a way of guiding children into becoming good members of their local communities. Even in modern cultures, education is seen as a systematic discipline aimed at cultivating genuine values to improve oneself and society. Without education, the chances of realizing the desired vision are marred, as it is believed that nations that invest much in education are able to reap the desired benefits technologically, economically, socially, politically, and otherwise. In this sense, the moulding of character is considered the primary purpose of education, and the mass media, through their various vehicles, are seen as tools for improving the overall development of society. It is believed that a media-friendly person is likely to perform better than someone who is less exposed to the media. This work therefore examines the role the media play in educational development. As highlighted by the study, a summary of the functions of the media shows that they widen horizons by acting as a liberating force, breaking the bonds of distance and transforming a traditional society into a modern one. Drawing on technological development theory, agenda-setting theory, uses and gratifications theory and multiple intelligence theory, the work identifies different ways in which the mass media help in educational development and draws attention to the audience's perception of media functions in terms of educational development. Using a survey method, the work sampled 220 respondents from a population of 6,903,321, drawn by purposive technique from rural communities in the South-South region of Nigeria.
The work concludes that the mass media are potent vehicles for teaching and learning. It therefore recommends that government provide basic infrastructure to rural communities to aid full utilization of the media's potential in educational development, and equally urges media owners and practitioners, as a matter of urgency, to increase coverage time on issues bordering on education, as is done for political and other issues.
Keywords: educational, development, media usage, perception
Procedia PDF Downloads 128

127 Effective Infection Control Measures to Prevent Transmission of Multi-Drug Resistant Organisms from Burn Transfer Cases in a Regional Burn Centre
Authors: Si Jack Chong, Chew Theng Yap, Wan Loong James Mok
Abstract:
Introduction: Regional burn centres face the spectre of introduced multi-drug resistant organisms (MDRO) from patients transferred from MDRO-endemic countries. MDRO can cause severe nosocomial infection, which in massive burn patients leads to greater morbidity and mortality and strains the institution financially. We aim to highlight 4 key measures that have effectively prevented transmission of imported MDRO. Methods: A case of Candida auris (C. auris) from a massive burn patient transferred from an MDRO-endemic country is used to illustrate the measures. C. auris is a globally emerging multi-drug resistant fungal pathogen causing nosocomial transmission. Results: The infection control measures used to mitigate the risk of an outbreak from transfer cases are: (1) a multidisciplinary team approach involving Infection Control and Infectious Disease specialists early, to ensure appropriate antibiotic use and implementation of barrier measures; (2) aseptic procedures for dressing changes, with strict isolation and donning of personal protective equipment in the ward; (3) early screening of massive burn patients from MDRO-endemic regions; (4) hydrogen peroxide vaporization terminal cleaning for operating theatres and rooms. Conclusion: Given the prevalence of air travel and international transfer to regional burn centres, effective infection control measures are needed to reduce the risk of transmission from imported massive burn patients. In our centre, we have effectively implemented 4 measures which have reduced the risks of local contamination. We share a recent case report to illustrate successful management of a potential MDRO outbreak resulting from the transfer of a massive burn patient resident in an MDRO-endemic area.
Keywords: burns, burn unit, cross infection, infection control
Procedia PDF Downloads 150

126 Developing a Framework for Designing Digital Assessments for Middle-School Aged Deaf or Hard of Hearing Students in the United States
Authors: Alexis Polanco Jr, Tsai Lu Liu
Abstract:
Research on digital assessment for deaf and hard of hearing (DHH) students is negligible. Part of this stems from DHH assessment design existing at the intersection of the emergent disciplines of usability, accessibility, and child-computer interaction (CCI). While these disciplines have some prevailing guidelines (e.g. in user experience design (UXD), there is Jakob Nielsen's 10 Usability Heuristics (Nielsen-10); for accessibility, there are the Web Content Accessibility Guidelines (WCAG) and the Principles of Universal Design (PUD)), this research was unable to uncover a unified set of guidelines. Given that digital assessments have lasting implications for the funding and shaping of U.S. school districts, it is vital that cross-disciplinary guidelines emerge. As a result, this research seeks to provide a framework by which these disciplines can share knowledge. The framework entails a process of asking subject-matter experts (SMEs) and design and development professionals to self-describe their fields of expertise, how their work might serve DHH students, and to expose any incongruence between their ideal process and what is permissible at their workplace. This research used two rounds of mixed methods. The first round consisted of structured interviews with SMEs in usability, accessibility, CCI, and DHH education. These practitioners were not designers by trade but were revealed to use designerly work processes. In addition to being asked about their field of expertise, work process, etc., these SMEs were asked whether they believed Nielsen-10 and/or PUD were sufficient for designing products for middle-school DHH students. This first round of interviews revealed that Nielsen-10 and PUD were, at best, a starting point for creating middle-school DHH design guidelines or, at worst, insufficient. The second round of interviews followed a semi-structured interview methodology.
The SMEs who were interviewed in the first round were asked open-ended follow-up questions about their semantic understanding of guidelines, going from the most general sense down to the level of design guidelines for DHH middle school students. Designers and developers who had not been interviewed previously were asked the same questions that the SMEs had been asked across both rounds of interviews. In terms of the research goals, it was confirmed that the design of digital assessments for DHH students is inherently cross-disciplinary. Unexpectedly, 1) guidelines did not emerge from the interviews conducted in this study, and 2) the principles of Nielsen-10 and PUD were deemed to be less relevant than expected. Given the prevalence of Nielsen-10 in UXD curricula across academia and certificate programs, this poses a risk to the efficacy of DHH assessments designed by UX designers. Furthermore, the following findings emerged: A) deep collaboration between the disciplines of usability, accessibility, and CCI is low to non-existent; B) there are no universally agreed-upon guidelines for designing digital assessments for DHH middle school students; C) these disciplines are structured academically and professionally in such a way that practitioners may not know to reach out to other disciplines. For example, accessibility teams at large organizations do not have designers and accessibility specialists on the same team.
Keywords: deaf, hard of hearing, design, guidelines, education, assessment
Procedia PDF Downloads 67

125 Palliative Orthovoltage Radiotherapy and Subcutaneous Infusion of Carboplatin for Treatment of Appendicular Osteosarcoma in Dogs
Authors: Kathryn L. Duncan, Charles A. Kuntz, Alessandra C. Santamaria, James O. Simcock
Abstract:
Access to megavoltage radiation therapy for small animals is limited in many locations around the world, which can preclude the use of palliative radiation therapy for the treatment of appendicular osteosarcoma in dogs. The objective of this study was to retrospectively assess the adverse effects and survival times of dogs with appendicular osteosarcoma that were treated with hypofractionated orthovoltage radiation therapy and adjunctive carboplatin chemotherapy administered via a single subcutaneous infusion. Medical records were reviewed retrospectively to identify client-owned dogs with spontaneously occurring appendicular osteosarcoma treated with palliative orthovoltage radiation therapy and a single subcutaneous infusion of carboplatin. Data recorded included signalment, tumour location, results of diagnostic imaging, haematologic and serum biochemical analyses, adverse effects of radiation therapy and chemotherapy, and survival times. Kaplan-Meier survival analysis was performed, and log-rank analysis was used to determine the impact of specific patient variables on survival time. Twenty-three dogs met the inclusion criteria. The median survival time was 182 days. Eleven dogs had adverse haematologic effects, 3 had adverse gastrointestinal effects, 6 had adverse effects at the radiation site and 7 developed infections at the carboplatin infusion site. No statistically significant differences in survival times were identified based on sex, tumour location, development of infection, or pretreatment serum alkaline phosphatase. Median survival time and incidence of adverse effects were comparable to those previously reported in dogs undergoing palliative radiation therapy with megavoltage or cobalt radiation sources and conventional intravenous carboplatin chemotherapy.
The use of orthovoltage palliative radiation therapy may be a reasonable alternative to megavoltage radiation in locations where access is limited.
Keywords: radiotherapy, veterinary oncology, chemotherapy, osteosarcoma
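The Kaplan-Meier analysis used above can be illustrated with a minimal product-limit estimator; the survival times below are toy values, not the study's 23-dog data:

```python
def kaplan_meier(times, events):
    """Product-limit estimator. `times` are survival/censoring times in days;
    `events` are 1 if death was observed, 0 if the subject was censored.
    Returns (time, estimated survival probability) at each event time."""
    data = sorted(zip(times, events))
    n = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < n:
        t = data[i][0]
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        at_risk = n - i
        while i < n and data[i][0] == t:  # move past all ties at time t
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
    return curve

# Toy cohort: three observed deaths and one dog censored at day 200.
km = kaplan_meier([120, 182, 200, 310], [1, 1, 0, 1])
```

The median survival time is the first time at which the estimated survival drops to 0.5 or below (day 182 in this toy cohort); the log-rank test then compares such curves between patient subgroups, as done in the study.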
Procedia PDF Downloads 73

124 Higher Education and the Economy in Western Canada: Is Institutional Autonomy at Risk?
Authors: James Barmby
Abstract:
Canada’s westernmost provinces of British Columbia and Alberta are similar in many respects, as both rely on volatile natural resources for major portions of their economies. The two provinces have banded together to develop mutually beneficial trade, investment and labour-market mobility rules, but in terms of developing systems of higher education, the two provinces are attempting to align higher education programs with economic development objectives by quite different means. In British Columbia, the recently announced initiative, B.C.'s Skills for Jobs Blueprint, will "make sure education and training programs are aligned with the demands of the labor market." Meanwhile, in Alberta, the province's institutions of higher education are enjoying the tenth year of their membership in the Campus Alberta Quality Council, which makes recommendations to government on issues related to post-secondary education, including the approval of new programs. In B.C., public institutions of higher education are encouraged to comply with government objectives and are rewarded with targeted funds for their efforts. In Alberta, the institutions as a system tell the government what programs they want to offer, and the government can agree or decline to fund these programs through a ministerial approval process. In comparing the two higher education systems, the question emerges as to which approach is more beneficial to the province: the one where change is directed primarily by financial incentives to achieve economic objectives, or the one where institutions make recommendations to the government for changes in programs to achieve institutional objectives? How is institutional autonomy affected in each strategy? Does institutional autonomy matter anymore? In recent years, much has been written about academic freedom, but less about institutional autonomy, which many see as essential to protecting academic freedom.
However, while institutional autonomy means freedom from government control, it does not necessarily mean self-government. In this study, a comparison of the two higher education systems is made using recent government policy initiatives in both provinces, and the responses to those initiatives by the higher education institutions. The findings indicate that economic needs in both provinces take precedence over issues of institutional autonomy.
Keywords: Alberta, British Columbia, institutional autonomy, funding
Procedia PDF Downloads 701
123 Risk Analysis of Flood Physical Vulnerability in Residential Areas of Mathare Nairobi, Kenya
Authors: James Kinyua Gitonga, Toshio Fujimi
Abstract:
Vulnerability assessment and analysis is essential to determining the degree of damage and loss that results from natural disasters. Urban flooding causes major economic loss and casualties in the Mathare residential area of Nairobi, Kenya. High population density caused by rural-urban migration, unemployment, and unplanned urban development are among the factors that increase flood vulnerability in the Mathare area. This study aims to analyse flood risk physical vulnerabilities in Mathare based on scientific data. Rainfall data, Mathare River discharge rate data, water runoff data, field survey data, and a questionnaire survey of a sample of the study area were used to develop the risk curves. Three structural types of building were identified in the study area, and vulnerability and risk curves were constructed for each by plotting the relationship between flood depth and damage. The results indicate that the structural type with mud walls and mud floors is the most vulnerable to flooding, while the structural type with stone walls and concrete floors is the least vulnerable. The vulnerability of building contents is mainly determined by the number of floors: households with two floors are least vulnerable, and households with one floor are most vulnerable. Consequently, more than 80% of the residential buildings, including the property within them, are highly vulnerable to floods and exposed to high risk. When estimating potential casualties and injuries, we found that the structural type of house was a major determinant: the mud/adobe structural type had casualties amounting to 83.7% of the people living in these houses, while the masonry structural type had casualties of 10.71%.
This research concludes that flood awareness, early warnings, and adherence to building codes will help reduce damage to each structural type of building, prevent deaths, and reduce damage to building contents.
Keywords: flood loss, Mathare Nairobi, risk curve analysis, vulnerability
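The depth-damage relationship behind the risk curves described above can be sketched numerically. The snippet below is a minimal illustration with hypothetical depth-damage points (the study's actual curves come from its field and questionnaire data); `expected_damage` simply interpolates along a curve.

```python
import numpy as np

# Hypothetical depth-damage points (flood depth in m -> damage ratio 0..1)
# for two of the structural types described above; real curves would be
# fitted from the field survey and questionnaire data.
depths = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
damage_mud = np.array([0.0, 0.45, 0.80, 0.95, 1.00])    # mud wall / mud floor
damage_stone = np.array([0.0, 0.10, 0.30, 0.55, 0.75])  # stone wall / concrete floor

def expected_damage(depth_m, depths, damage_ratios):
    """Interpolate the damage ratio for a given flood depth."""
    return float(np.interp(depth_m, depths, damage_ratios))

# A 1 m flood damages the mud structure far more than the stone one.
print(expected_damage(1.0, depths, damage_mud))    # 0.8
print(expected_damage(1.0, depths, damage_stone))  # 0.3
```

In a full risk analysis, each curve would then be combined with flood-depth probabilities to give expected annual damage per structural type.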
Procedia PDF Downloads 238
122 Use of Locally Effective Microorganisms in Conjunction with Biochar to Remediate Mine-Impacted Soils
Authors: Thomas F. Ducey, Kristin M. Trippe, James A. Ippolito, Jeffrey M. Novak, Mark G. Johnson, Gilbert C. Sigua
Abstract:
The Oronogo-Duenweg mining belt, approximately 20 square miles around the Joplin, Missouri area, is a designated United States Environmental Protection Agency Superfund site due to soil and groundwater contaminated with lead by former mining and smelting operations. Over almost a century of mining (from 1848 to the late 1960s), an estimated ten million tons of cadmium-, lead-, and zinc-containing material were deposited on approximately 9,000 acres. At sites that have undergone remediation, in which the O, A, and B horizons have been removed along with the lead contamination, the exposed C horizon remains recalcitrant to revegetation efforts. These sites also suffer from poor soil microbial activity, as measured by soil extracellular enzymatic assays, though 16S ribosomal ribonucleic acid (rRNA) analysis indicates that microbial diversity is equal to that of sites that have avoided mine-related contamination. Soil analysis reveals low soil organic carbon along with high levels of bio-available zinc, reflecting the poor soil fertility conditions and low microbial activity. Our study examined the use of several materials to restore and remediate these sites, with the goal of improving soil health. The materials, and their purposes for incorporation into the study, were as follows: manure-based biochar to bind zinc and other heavy metals responsible for phytotoxicity; locally sourced biosolids and compost to incorporate organic carbon into the depleted soils; and effective microorganisms harvested from nearby pristine sites to provide a stable community for nutrient cycling in the newly composited 'soil material'. Our results indicate that all four materials used in conjunction provide the greatest benefit to these mine-impacted soils, based on above-ground biomass, microbial biomass, and soil enzymatic activities.
Keywords: locally effective microorganisms, biochar, remediation, reclamation
Procedia PDF Downloads 217
121 Hypertensive Response to Maximal Exercise Test in Young and Middle Age Hypertensive on Blood Pressure Lowering Medication: Monotherapy vs. Combination Therapy
Authors: James Patrick A. Diaz, Raul E. Ramboyong
Abstract:
Background: The hypertensive response during a maximal exercise test provides important information on the level of blood pressure control and the evaluation of treatment. Method: A single-center retrospective descriptive study was conducted among 117 young (aged 20 to 40) and middle-aged (aged 40 to 65) hypertensive patients who underwent a treadmill stress test (TMST) on the Bruce or Modified Bruce protocol. All were on frontline maintenance medication, either monotherapy (Angiotensin-converting enzyme inhibitor/Angiotensin receptor blocker [ACEi/ARB], Calcium channel blocker [CCB], or the diuretic Hydrochlorothiazide [HCTZ]) or combination therapy (ARB+CCB, ARB+HCTZ), and attained maximal exercise with a hypertensive response (systolic blood pressure: male >210 mm Hg, female >190 mm Hg; diastolic blood pressure >100 mm Hg; or an increase of >10 mm Hg at any time during the test). Exaggerated blood pressure responses during exercise (systolic [SBP] and diastolic [DBP]), peak exercise blood pressure (SBP and DBP), the recovery period (SBP and DBP), tests for ischemia, and the patients' antihypertensive medication/s were investigated. Analysis of variance and the chi-square test were used for statistical analysis. Results: Hypertensive responses on maximal exercise testing were seen mostly among female (P < 0.001) and middle-aged (P < 0.001) patients. Exaggerated diastolic blood pressure responses were significantly lower in patients taking a CCB (P < 0.004). A longer recovery period, showing a delayed decline in SBP, was observed in patients taking ARB+HCTZ (P < 0.036). There were no significant differences in the level of exaggerated systolic blood pressure response or in peak exercise blood pressure (systolic and diastolic) between patients using monotherapy and those using combination antihypertensives. Conclusion: Calcium channel blockers provided a lower exaggerated diastolic BP response during maximal exercise testing in middle-aged hypertensive patients.
Patients on combination therapy using ARB+HCTZ exhibited a longer recovery period of systolic blood pressure.
Keywords: antihypertensive, exercise test, hypertension, hypertensive response
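The group comparison reported above (analysis of variance across medication groups) can be illustrated with a short sketch. The values below are hypothetical recovery-period systolic readings, not the study's data; the snippet only shows the one-way ANOVA step.

```python
from scipy import stats

# Hypothetical recovery-period systolic BP values (mm Hg) for three
# medication groups; the study's per-patient data are not public.
acei_arb = [168, 172, 165, 170, 174, 169]
ccb      = [160, 158, 163, 161, 159, 162]
arb_hctz = [178, 182, 176, 180, 184, 179]

# One-way analysis of variance across the groups, as used in the study.
f_stat, p_value = stats.f_oneway(acei_arb, ccb, arb_hctz)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant F statistic would then motivate pairwise post-hoc comparisons between the medication groups.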
Procedia PDF Downloads 284
120 Historical Analysis of the Evolution of Swiss Identity and the Successful Integration of Multilingualism into the Swiss Concept of Nationhood
Authors: James Beringer
Abstract:
Switzerland’s ability to forge a strong national identity across linguistic barriers has long been of interest to nationalism scholars. This raises the question of how it has been achieved, given that traditional explanations of luck or exceptionalism appear highly reductionist. This paper evaluates the theory that successful Swiss management of linguistic diversity stems from the strong integration of multilingualism into Swiss national identity. Using archival analysis of Swiss government records, historical accounts of prominent Swiss citizens, and secondary literature concerning the fundamental aspects of Swiss national identity, this paper charts the historical evolution of Swiss national identity. It explains how multilingualism was deliberately and successfully integrated into Swiss national identity as a response to political fragmentation along linguistic lines during the First World War. Its primary conclusions are the following. Firstly, the earliest foundations of Swiss national identity were purposefully removed from any association with a single national language. This produced symbols, myths, and values, such as a strong commitment to communalism, the imagery of the Swiss natural landscape, and the use of Latin expressions, that could be adopted across Swiss linguistic groups. Secondly, the First World War triggered a turning point in the evolution of Swiss national identity. The fundamental building blocks proved insufficient to prevent political fractures along linguistic lines, as each Swiss linguistic group gravitated towards its linguistic neighbours within Europe. To avoid a repeat of such fragmentation, a deliberate effort was made to fully integrate multilingualism as a fundamental aspect of Swiss national identity. Existing natural symbols, such as the St Gotthard Mountains, were recontextualized in order to become associated with multilingualism.
The education system was similarly reformed to reflect the unique multilingual nature of the Swiss nation. The successful result of this process can be readily observed in polls and surveys: large segments of the Swiss population highlight multilingualism as a uniquely Swiss characteristic, indicating the symbiotic connection between multilingualism and the Swiss nation.
Keywords: language's role in identity formation, multilingualism in nationalism, national identity formation, Swiss national identity history
Procedia PDF Downloads 189
119 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites
Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari
Abstract:
Working or living close to demolition sites can increase the risk of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which is associated with a broad range of respiratory diseases, including silicosis and lung cancer. Previous studies demonstrated significant associations between demolition dust exposure and an increase in the incidence of mesothelioma or asbestos cancer. Dust is a generic term for minute solid particles, typically <500 µm in diameter. Dust particles in demolition sites vary across a wide range of sizes. Larger particles tend to settle out of the air, while smaller and lighter solid particles remain dispersed in the air for long periods and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles can be inhaled deep into the alveoli, beyond the body's natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, and previous studies mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of various concrete and wooden structures. Therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study with a newly developed nanoparticle monitor, which was used for nanoparticle monitoring at two adjacent indoor and outdoor building demolition sites in southern Georgia.
Nanoparticle levels were measured (n = 10) by a TSI NanoScan SMPS Model 3910 at four different distances (5, 10, 15, and 30 m) from the work location as well as at control sites. Temperature and relative humidity levels were recorded. Indoor demolition work included acetylene torch use, masonry drilling, ceiling panel removal, and other miscellaneous tasks. Outdoor demolition work included acetylene torch and skid-steer loader use to remove an HVAC system. Concentration ranges of nanoparticles of 13 particle sizes at the indoor demolition site were: 11.5 nm: 63 – 1,054/cm³; 15.4 nm: 170 – 1,690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3,255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm:
118 Identification of Clinical Characteristics from Persistent Homology Applied to Tumor Imaging
Authors: Eashwar V. Somasundaram, Raoul R. Wadhwa, Jacob G. Scott
Abstract:
The use of radiomics to measure geometric properties of tumor images, such as size, surface area, and volume, has been invaluable in assessing cancer diagnosis, treatment, and prognosis. In addition to analyzing geometric properties, radiomics would benefit from measuring topological properties using persistent homology. Intuitively, features uncovered by persistent homology may correlate with tumor structural features. One example is necrotic cavities (corresponding to 2D topological features), which are markers of very aggressive tumors. We develop a data pipeline in R that clusters tumor images based on persistent homology; the clustering is used to identify meaningful clinical distinctions between tumors and possibly new relationships not captured by established clinical categorizations. A preliminary analysis was performed on 16 Magnetic Resonance Imaging (MRI) breast tissue segments downloaded from the 'Investigation of Serial Studies to Predict Your Therapeutic Response with Imaging and Molecular Analysis' (I-SPY TRIAL or ISPY1) collection in The Cancer Imaging Archive. Each segment represents a patient’s breast tumor prior to treatment. The ISPY1 dataset also provided estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) status data. A persistent homology matrix up to 2-dimensional features was calculated for each MRI segmentation. Wasserstein distances were then calculated between all pairs of tumor image persistent homology matrices to create a distance matrix for each feature dimension. Since Wasserstein distances were calculated for 0-, 1-, and 2-dimensional features, three hierarchical clusterings were constructed. The adjusted Rand index was used to see how well the clusters corresponded to the ER/PR/HER2 status of the tumors. Triple-negative cancers (negative status for all three receptors) significantly clustered together in the 2-dimensional features dendrogram (adjusted Rand index of .35, p = .031).
It is known that having a triple-negative breast tumor is associated with aggressive tumor growth and poor prognosis compared to non-triple-negative breast tumors. Tumors exhibiting this aggressive growth may have a unique structure in an MRI segmentation, which persistent homology is able to identify. This preliminary analysis shows promising results for the use of persistent homology on tumor imaging to assess the severity of breast tumors. The next step is to apply this pipeline to other tumor segment images from The Cancer Imaging Archive at different sites, such as the lung, kidney, and brain. In addition, whether other clinical parameters, such as overall survival, tumor stage, and tumor genotype data, are captured well in persistent homology clusters will be assessed. If analyzing tumor MRI segments using persistent homology consistently identifies clinical relationships, this could enable clinicians to use persistent homology data as a noninvasive way to inform clinical decision making in oncology.
Keywords: cancer biology, oncology, persistent homology, radiomics, topological data analysis, tumor imaging
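The clustering-and-validation step of the pipeline can be sketched outside of R. The snippet below uses a toy distance matrix standing in for the pairwise Wasserstein distances, clusters it hierarchically with SciPy, and scores agreement with (toy) receptor-status labels via a hand-rolled adjusted Rand index; the real pipeline computes the distances from persistence diagrams.

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def adjusted_rand_index(labels_a, labels_b):
    """Pair-counting adjusted Rand index between two labelings."""
    n = len(labels_a)
    same_a = same_b = same_both = 0
    for i, j in combinations(range(n), 2):
        a = labels_a[i] == labels_a[j]
        b = labels_b[i] == labels_b[j]
        same_a += a
        same_b += b
        same_both += a and b
    total = n * (n - 1) // 2
    expected = same_a * same_b / total
    max_index = (same_a + same_b) / 2
    if max_index == expected:
        return 1.0
    return (same_both - expected) / (max_index - expected)

# Toy 6x6 symmetric "Wasserstein" distance matrix: tumors 0-2 have similar
# persistence diagrams, as do tumors 3-5.
D = np.array([
    [0.0, 0.1, 0.2, 0.9, 1.0, 0.8],
    [0.1, 0.0, 0.1, 1.0, 0.9, 0.9],
    [0.2, 0.1, 0.0, 0.8, 0.9, 1.0],
    [0.9, 1.0, 0.8, 0.0, 0.2, 0.1],
    [1.0, 0.9, 0.9, 0.2, 0.0, 0.1],
    [0.8, 0.9, 1.0, 0.1, 0.1, 0.0],
])

# Average-linkage hierarchical clustering on the condensed distance matrix.
Z = linkage(squareform(D), method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")

# Compare the clusters to (toy) triple-negative status labels.
status = [1, 1, 1, 0, 0, 0]
print(adjusted_rand_index(list(clusters), status))  # 1.0 for a perfect match
```

With real data, the adjusted Rand index is computed against the ER/PR/HER2 categories, and a permutation test supplies the p-value quoted above.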
Procedia PDF Downloads 135
117 Assessment of Students' Skills in Error Detection in SQL Classes using Rubric Framework - An Empirical Study
Authors: Dirson Santos De Campos, Deller James Ferreira, Anderson Cavalcante Gonçalves, Uyara Ferreira Silva
Abstract:
Rubrics in learning research provide evaluation criteria and expected performance standards linked to defined student activities for learning and pedagogical objectives. Although rubrics are used in education at all levels, academic literature on rubrics as a tool to support research in SQL education is quite rare. A large class of SQL queries is syntactically correct, but certainly not all are semantically correct. Detecting and correcting errors is a recurring problem in SQL education. In this paper, we use the Rubric Abstract Framework (RAF), which consists of steps that allow us to map information to measure student performance, guided by didactic objectives defined by the teacher and contextualized by domain modeling in the rubric. An empirical study was conducted that demonstrates how rubrics can mitigate student difficulties in finding logical errors and ease teacher workload in SQL education. Detecting and correcting logical errors is an important skill for students, and researchers have proposed several ways to improve SQL education because these skills are crucial in software engineering and computer science. The RAF was instantiated in an empirical study conducted in a database course during the COVID-19 pandemic. The pandemic transformed face-to-face education into remote education, without in-person classes. The lab activities were conducted remotely, which hindered the teaching-learning process, in particular, for this research, in verifying evidence of students' knowledge, skills, and abilities (KSAs). Much research in academia and industry involves databases. The innovation proposed in this paper is an approach in which rubrics are used to map logical errors in query formulation, with the resulting student gains empirically verified.
The research approach can be used in the post-pandemic period in both classroom and distance learning.
Keywords: rubric, logical error, structured query language (SQL), empirical study, SQL education
Procedia PDF Downloads 190
116 Synthesis and Two-Photon Polymerization of a Cytocompatible Tyramine Functionalized Hyaluronic Acid Hydrogel That Mimics the Chemical, Mechanical, and Structural Characteristics of Spinal Cord Tissue
Authors: James Britton, Vijaya Krishna, Manus Biggs, Abhay Pandit
Abstract:
Regeneration of the spinal cord after injury remains a great challenge due to the complexity of this organ. Inflammation and gliosis at the injury site hinder the outgrowth of axons and hence prevent synaptic reconnection and reinnervation. Hyaluronic acid (HA) is the main component of the spinal cord extracellular matrix and plays a vital role in cell proliferation and axonal guidance. In this study, we have synthesized and characterized a photo-cross-linkable HA-tyramine (tyr) hydrogel from a chemical, mechanical, electrical, biological, and structural perspective. From our experimentation, we have found that HA-tyr can be synthesized with controllable degrees of tyramine substitution using click chemistry. The complex modulus (G*) of HA-tyr can be tuned to mimic the mechanical properties of the native spinal cord via optimization of the photo-initiator concentration and UV exposure. We examined the degree of tyramine-tyramine covalent bonding (polymerization) as a function of UV exposure and photo-initiator use via photo- and nuclear magnetic resonance spectroscopy. Both swelling and enzymatic degradation assays were conducted to examine the resilience of our 3D-printed hydrogel constructs in vitro. Using a femtosecond 780 nm laser, the two-photon polymerization of the HA-tyr hydrogel in the presence of a riboflavin photoinitiator was optimized. A laser power of 50 mW and a scan speed of 30,000 μm/s produced high-resolution spatial patterning within the hydrogel with sustained mechanical integrity. Using dorsal root ganglion explants, the cytocompatibility of photo-crosslinked HA-tyr was assessed. Using potentiometry, the electrical conductivity of photo-crosslinked HA-tyr was assessed and compared to that of native spinal cord tissue as a function of frequency.
In conclusion, we have developed a biocompatible hydrogel that can be used in photolithographic 3D printing to fabricate tissue-engineered constructs for neural tissue regeneration applications.
Keywords: 3D printing, hyaluronic acid, photolithography, spinal cord injury
Procedia PDF Downloads 152
115 Four-Electron Auger Process for Hollow Ions
Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola
Abstract:
A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrödinger equation in imaginary time, using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can extend this approach to study electron-impact triple ionization of atoms or ions. This work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
Keywords: hollow atoms, autoionization, Auger rates, time-dependent close-coupling method
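The imaginary-time relaxation step described above can be illustrated in one dimension. The sketch below (a toy harmonic oscillator in atomic units, not the four-electron lattice code) repeatedly applies an Euler step of the imaginary-time Schrödinger equation on a uniform grid and renormalizes, driving a displaced initial guess toward the ground state.

```python
import numpy as np

# 1D toy analogue of relaxation in imaginary time: H = -0.5 d2/dx2 + 0.5 x^2
# (atomic units, exact ground-state energy 0.5) on a uniform grid.
n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.5 * x**2
dtau = 0.001

psi = np.exp(-((x - 1.0) ** 2))          # deliberately displaced guess
psi /= np.sqrt(np.sum(psi**2) * dx)

def H(psi):
    """Apply H with a 3-point finite-difference Laplacian."""
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    lap[0] = lap[-1] = 0.0               # hard-wall boundaries
    return -0.5 * lap + V * psi

for _ in range(20000):                   # Euler step in imaginary time
    psi -= dtau * H(psi)
    psi /= np.sqrt(np.sum(psi**2) * dx)  # renormalize each step

energy = np.sum(psi * H(psi)) * dx
print(round(energy, 3))                  # approximately 0.5
```

In the actual method, this relaxation runs on a four-dimensional grid and the Schmidt orthogonalization keeps the relaxed state orthogonal to the interior subshells.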
Procedia PDF Downloads 153
114 Identification of New Familial Breast Cancer Susceptibility Genes: Are We There Yet?
Authors: Ian Campbell, Gillian Mitchell, Paul James, Na Li, Ella Thompson
Abstract:
The genetic cause of the majority of multiple-case breast cancer families remains unresolved. Next-generation sequencing has emerged as an efficient strategy for identifying predisposing mutations in individuals with inherited cancer. We are conducting whole exome sequence analysis of germline DNA from multiple affected relatives from breast cancer families, with the aim of identifying rare protein-truncating and non-synonymous variants that are likely to include novel cancer-predisposing mutations. Data from more than 200 exomes show that on average each individual carries 30-50 protein-truncating mutations and 300-400 rare non-synonymous variants. Heterogeneity among our exome data strongly suggests that numerous moderate-penetrance genes remain to be discovered, with each gene individually accounting for only a small fraction of families (~0.5%). This scenario makes validation of candidate breast cancer predisposing genes in large case-control studies the rate-limiting step in resolving the missing heritability of breast cancer. The aim of this study is to screen genes that are recurrently mutated among our exome data in a larger cohort of cases and controls, to assess the prevalence of inactivating mutations that may be associated with breast cancer risk. We are using the Agilent HaloPlex Target Enrichment System to screen the coding regions of 168 genes in 1,000 BRCA1/2 mutation-negative familial breast cancer cases and 1,000 cancer-naive controls. To date, our interim analysis has identified 21 genes that carry an excess of truncating mutations in multiple breast cancer families versus controls. The established breast cancer susceptibility gene PALB2 is the most frequently mutated gene (13/998 cases versus 0/1009 controls), but other interesting candidates include NPSR1, GSN, POLD2, and TOX3. These and other genes are being validated in a second cohort of 1,000 cases and controls.
Our experience demonstrates that, beyond PALB2, the prevalence of mutations in the remaining breast cancer predisposition genes is likely to be very low, making definitive validation exceptionally challenging.
Keywords: predisposition, familial, exome sequencing, breast cancer
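Given the quoted PALB2 counts (13/998 cases versus 0/1009 controls), the kind of case-control significance check involved can be sketched with a Fisher's exact test; this is an illustration, not the study's actual analysis.

```python
from scipy.stats import fisher_exact

# 2x2 table from the PALB2 counts quoted above:
# [carriers, non-carriers] in cases (top row) and controls (bottom row).
table = [[13, 998 - 13],
         [0, 1009]]

# One-sided test: is the excess of truncating variants in cases significant?
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(p_value)
```

The zero cell in controls is exactly why very large cohorts are needed for the rarer candidate genes: with only one or two carriers per thousand cases, such tables quickly lose power.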
Procedia PDF Downloads 492
113 A Systematic Review of Pedometer- or Accelerometer-Based Interventions for Increasing Physical Activity in Low Socioeconomic Groups
Authors: Shaun G. Abbott, Rebecca C. Reynolds, James B. Etter, John B. F. de Wit
Abstract:
The benefits of physical activity (PA) for health are well documented. Low socioeconomic status (SES) is associated with poor health, with PA a suggested mediator. Pedometers and accelerometers offer an effective behaviour change tool for increasing PA levels. While the role of pedometer and accelerometer use in increasing PA is recognized in many populations, little is known about low-SES groups. We aim to assess the effectiveness of pedometer- and accelerometer-based interventions for increasing PA step count and improving subsequent health outcomes among low-SES groups in high-income countries. The Medline, Embase, PsycINFO, CENTRAL, and SPORTDiscus databases were searched to identify articles published before 10 July 2015, using search terms developed from previous systematic reviews. The inclusion criteria are: low-SES participants classified by income, geography, education, occupation, or ethnicity; a study duration of at least 4 weeks; an intervention and a control group; and the wearing of an unsealed pedometer or accelerometer to objectively measure PA as step counts per day for the duration of the study. We retrieved 2,142 articles from our database searches after removal of duplicates. Two investigators independently reviewed the titles and abstracts of these articles (50% each), and a combined 20% sample was reviewed to account for inter-assessor variation. We are currently verifying the full texts of 430 articles. Included studies will be critically appraised for risk of bias using guidelines suggested by the Cochrane Public Health Group. Two investigators will extract data concerning the intervention; study design; comparators; steps per day; participants; context; and the presence or absence of obesity and/or chronic disease. Heterogeneity among studies is anticipated; thus, a narrative synthesis of the data will be conducted, with selected results simplified into percentage increases from baseline to allow between-study comparison.
Results will be presented at the conference in December if selected.
Keywords: accelerometer, pedometer, physical activity, socioeconomic, step count
Procedia PDF Downloads 331
112 The Needs of People with a Diagnosis of Dementia and Their Carers and Families
Authors: James Boag
Abstract:
The needs of people with a diagnosis of dementia and of their carers and families are physical, psychosocial, and psychological, and they begin at the time of diagnosis. There is frequently a lack of emotional support and counselling. Caregiving support is required from the presentation of the first symptoms of dementia until death. Alzheimer's disease begins decades before clinical symptoms appear, and in many cases it remains undiagnosed, or is diagnosed too late for any possible interventions to have an effect. Moreover, if an incorrect diagnosis is given, a person may be treated, without effect, for a type of dementia they do not have, delaying the interventions they should have received. Being diagnosed with dementia can cause emotional distress, and physical and emotional support is needed, becoming more important as the disease progresses. The severity of the patient's dementia and their symptoms has a bearing on the impact on the carer and the support needed. A lack of insight and/or denial of the diagnosis, grief, reactions to anticipated future losses, and coping methods to maximise the disease outcome are all things that should be addressed. Because of the stigma, it is important for carers not to lose contact with family and others, because social isolation leads to depression and burnout. The impact on a carer's well-being and quality of life can be influenced by the severity of the illness, the type of dementia and its symptoms, healthcare support, financial and social status, career, age, health, residential setting, and relationship to the patient. Carer burnout due to lack of support leads to people diagnosed with dementia being put into residential care prematurely. Often dementia is not recognised as a terminal illness, limiting the ability of the person diagnosed with dementia and their carers to work on advance care planning and to get access to palliative and other support.
Many carers have been satisfied with the physical support they were given in their everyday lives; however, there is an immense unmet need for psychosocial support, especially after diagnosis and when approaching the end of life. Providing continuity and coordination of care is important. Training is necessary for providers to understand that every case is different, and they should understand the complexities involved. Grief, the emotional response to loss, is suffered during the progression of the disease and long afterwards, and carers should continue to be supported after the death of the person they were caring for.
Keywords: dementia, caring, challenges, needs
Procedia PDF Downloads 97
111 The MoEDAL-MAPP* Experiment - Expanding the Discovery Horizon of the Large Hadron Collider
Authors: James Pinfold
Abstract:
The MoEDAL (Monopole and Exotics Detector at the LHC) experiment, deployed at IP8 on the Large Hadron Collider ring, was the first dedicated search experiment to take data at the Large Hadron Collider (LHC), in 2010. It was designed to search for highly ionizing particle (HIP) avatars of new physics, such as magnetic monopoles, dyons, Q-balls, multiply charged particles, massive slowly moving charged particles, and long-lived massive charged SUSY particles. We shall report on our search at the LHC's Run-2 for magnetic monopoles and dyons produced in p-p collisions and photon fusion. In more detail, we will report our most recent result in this arena: the search for magnetic monopoles produced via the Schwinger mechanism in Pb-Pb collisions. The MoEDAL detector is being reinstalled for the LHC's Run-3 to continue the search for electrically and magnetically charged HIPs, with enhanced instantaneous luminosity, improved detector efficiency, and a factor of ten lower thresholds for HIPs. As part of this effort, we will search for massive, long-lived, singly and multiply charged particles from various scenarios for which MoEDAL has competitive sensitivity. An upgrade to MoEDAL, the MoEDAL Apparatus for Penetrating Particles (MAPP), is now the LHC's newest detector. The MAPP detector, positioned in UA83, expands the physics reach of MoEDAL to include sensitivity to feebly charged particles with charge, or effective charge, as low as 10⁻³ e (where e is the electron charge). Also, in conjunction with MoEDAL's trapping detector, the MAPP detector gives us a unique sensitivity to extremely long-lived charged particles, and MAPP has some sensitivity to long-lived neutral particles as well. The addition of an Outrigger detector for MAPP-1, to increase its acceptance for more massive milli-charged particles, is currently at the Technical Proposal stage.
Additionally, we will briefly report on the plans for the MAPP-2 upgrade to the MoEDAL-MAPP experiment for the High Luminosity LHC (HL-LHC). This phase of the experiment is designed to maximize MoEDAL-MAPP's sensitivity to very long-lived neutral messengers of physics beyond the Standard Model. We envisage this detector being deployed in the UGC1 gallery near IP8.
Keywords: LHC, beyond the standard model, dedicated search experiment, highly ionizing particles, long-lived particles, milli-charged particles
Procedia PDF Downloads 68
110 Harnessing Environmental DNA to Assess the Environmental Sustainability of Commercial Shellfish Aquaculture in the Pacific Northwest United States
Authors: James Kralj
Abstract:
Commercial shellfish aquaculture makes significant contributions to the economy and culture of the Pacific Northwest United States. The industry faces intense pressure to minimize environmental impacts as a result of federal policies such as the Magnuson-Stevens Fishery Conservation and Management Act and the Endangered Species Act. These policies demand the protection of essential fish habitat and list several salmon species as endangered. Consequently, numerous projects related to the protection and rehabilitation of eelgrass beds, a crucial ecosystem for countless fish species, have been proposed at both state and federal levels. Eelgrass beds and commercial shellfish farms occupy the same physical space, so understanding the effects of shellfish aquaculture on eelgrass ecosystems has become a top ecological and economic priority for both government and industry. This study evaluates the organismal communities that eelgrass and oyster aquaculture habitats support. Water samples were collected from Willapa Bay, Washington; Tillamook Bay, Oregon; Humboldt Bay, California; and Samish Bay, Washington to compare species diversity in eelgrass beds, oyster aquaculture plots, and the boundary edges between these two habitats. Diversity was assessed using a novel technique: environmental DNA (eDNA). All organisms constantly shed small pieces of DNA into their surrounding environment through the loss of skin, hair, tissue, and waste. In the marine environment, this DNA becomes suspended in the water column, allowing it to be easily collected. Once extracted and sequenced, this eDNA can be used to paint a picture of all the organisms that live in a particular habitat, making it a powerful technology for environmental monitoring. Industry professionals and government officials should consider these findings to better inform future policies regulating eelgrass beds and oyster aquaculture.
Furthermore, the information collected in this study may be used to improve the environmental sustainability of commercial shellfish aquaculture while simultaneously enhancing its growth and profitability in the face of ever-changing political and ecological landscapes.
Keywords: aquaculture, environmental DNA, shellfish, sustainability
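The diversity comparison this abstract describes is typically summarized with an index computed over per-taxon read counts from the sequenced eDNA. The following is a minimal illustrative sketch, not the study's actual pipeline: the function name and the toy count data are assumptions, and the Shannon index is just one common choice of diversity metric.

```python
from math import log

def shannon_index(read_counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over taxon read counts.

    read_counts: dict mapping taxon name -> number of eDNA reads assigned to it.
    Higher values indicate a richer, more even community.
    """
    total = sum(read_counts.values())
    return -sum((n / total) * log(n / total)
                for n in read_counts.values() if n > 0)

# Hypothetical read counts for two sampled habitats (illustration only).
eelgrass_bed = {"salmon": 120, "herring": 80, "dungeness_crab": 40}
oyster_plot = {"salmon": 10, "herring": 5, "pacific_oyster": 200}

print(round(shannon_index(eelgrass_bed), 3))
print(round(shannon_index(oyster_plot), 3))
```

Comparing the index across eelgrass, aquaculture, and boundary-edge samples would then show whether the habitats support measurably different communities.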
Procedia PDF Downloads 246
109 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as substantial computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which are important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
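The k-mer representation at the core of this approach decomposes each genome into overlapping substrings of length k and uses their counts as classification features. The following is a minimal sketch of that representation, not the authors' code; the function names are illustrative, and the toy sequence stands in for a real MTB genome.

```python
from collections import Counter
from itertools import product

def kmer_profile(seq, k):
    """Count all overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def feature_vector(seq, k):
    """Fixed-length count vector over all 4**k possible k-mers.

    Note the feature space grows as 4**k (about a million dimensions at
    k = 10), which illustrates the computing challenges the study notes
    for large k.
    """
    counts = kmer_profile(seq, k)
    alphabet = (''.join(p) for p in product('ACGT', repeat=k))
    return [counts.get(kmer, 0) for kmer in alphabet]

# Toy example with k = 3 on an 8-base sequence.
vec = feature_vector("ACGTACGT", 3)
print(sum(vec))  # 6 overlapping 3-mers in an 8-base sequence
```

Vectors built this way for each isolate, labeled with the isolate's phenotype, would then feed any standard classifier, and sweeping k selects the size that best trades off discrimination against feature-space growth.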
Procedia PDF Downloads 167
108 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as substantial computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which are important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
Procedia PDF Downloads 159