Search results for: score prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4151

71 Management of Myofascial Temporomandibular Disorder in Secondary Care: A Quality Improvement Project

Authors: Rishana Bilimoria, Selina Tang, Sajni Shah, Marianne Henien, Christopher Sproat

Abstract:

Temporomandibular disorders (TMD) may affect up to a third of the general population, and there is evidence demonstrating that the majority of myofascial TMD cases improve after education and conservative measures. In 2015 our department implemented a modified care pathway for myofascial TMD patients in an attempt to improve the patient journey. This involved the use of an interactive group therapy approach to deliver education, reinforce conservative measures and promote self-management. Patient-reported experience measures from the new group clinic revealed 71% patient satisfaction. This service is efficient in improving aspects of health status while reducing health-care costs and redistributing clinical time. Since its establishment, 52 hours of clinical time, resources and funding have been redirected effectively. This Quality Improvement Project (QIP) was initiated because it was felt that this new service was being underutilised by our surgical teams. The ‘Plan-Do-Study-Act’ (PDSA) framework was employed to analyse utilisation of the service. The ‘Plan’ stage involved outlining our aims: to raise awareness amongst clinicians of the unified care pathway and to increase referral to this clinic. The ‘Do’ stage involved collecting data from a sample of 96 patients over a 4-month period to ascertain the proportion of myofascial TMD patients who were correctly referred to the designated clinic. ‘Suitable’ patients who were not referred were identified. The ‘Study’ stage involved analysis of the results, which revealed that 77% of suitable patients were not referred to the designated clinic. They were instead reviewed in other clinics, which are often overbooked, or managed by junior staff members. This correlated with our original prediction. Barriers to referral included lack of awareness of the clinic, individual consultant treatment preferences, and patient reluctance to be referred to a ‘group’ clinic. The ‘Act’ stage involved presenting our findings to the team at a clinical governance meeting. This included demonstration of the clinical effectiveness of the care pathway and explanation of the referral route and criteria. In light of the evaluation results, it was decided to keep the group clinic and maximise utilisation. The second cycle of data collection following these changes revealed that, of 66 myofascial TMD patients seen over a 4-month period, only 9% of suitable patients were not seen via the designated pathway; this QIP was therefore successful in meeting the set objectives. Overall, employing the PDSA cycle in this QIP resulted in appropriate utilisation of the modified care pathway for patients with myofascial TMD in Guy’s Oral Surgery Department. In turn, this led to high patient satisfaction with the service and effectively redirected 52 hours of clinical time. It permitted adoption of a collaborative working style with oral surgery colleagues to investigate problems, identify solutions, and collectively raise standards of clinical care to ensure we adopt a unified care pathway in the secondary care management of myofascial TMD patients.

Keywords: myofascial, quality improvement, PDSA, TMD

Procedia PDF Downloads 141
70 Integrated Approach Towards Safe Wastewater Reuse in Moroccan Agriculture

Authors: Zakia Hbellaq

Abstract:

The Mediterranean region is considered a hot spot for climate change. Morocco is a semi-arid Mediterranean country facing water shortages and poor water quality, and its limited water resources constrain the activities of various economic sectors. Most of Morocco's territory lies in arid and desert areas. The potential water resources are estimated at 22 billion m3, equivalent to about 700 m3/inhabitant/year, placing Morocco in a state of structural water stress. The World Resources Institute (WRI), which calculates water stress risk for 167 countries, classes the Kingdom of Morocco among the countries at highest risk: with a score of 3.89 out of 5, Morocco occupies 23rd place out of 167 countries, indicating that the demand for water exceeds the available resources. Agriculture is the sector most affected by water stress, as irrigation places a heavy burden on the water table. Irrigation is an unavoidable technical need given the available resources and climatic conditions, and the agricultural sector currently uses 86% of the country's water resources, while industry uses 5.5%. Although its development has undeniable economic and social benefits, it also contributes to the overexploitation of most groundwater resources and to a marked decline in levels and deterioration of water quality in some aquifers. In this context, REUSE is one of the proposed solutions to reduce the water footprint of the agricultural sector and alleviate the shortage of water resources. Indeed, wastewater reuse, also known as REUSE (reuse of treated wastewater), is a step forward not only for the circular economy but also for the future, especially in the context of climate change. In particular, water reuse provides an alternative to existing water supplies and can be used to improve water security, sustainability, and resilience. However, given the presence of organic trace pollutants (organic micro-pollutants), the need to remove emerging contaminants, and the need to reduce salinity, innovative treatment capabilities must be mobilised to overcome these problems and ensure food and health safety. To this end, an integrated approach will be adopted, based on the reinforcement and optimization of the proposed treatments for the elimination of the organic load, with particular attention to the elimination of emerging pollutants. Since membrane bioreactors (MBR) as stand-alone technologies are not able to meet the requirements of WHO guidelines, they will be combined with heterogeneous Fenton processes using persulfate or hydrogen peroxide oxidants. Similarly, adsorption and filtration will be applied as tertiary treatments. In addition, crop performance will be evaluated in terms of yield, productivity, quality, and safety, through the optimization of Trichoderma sp. strains used to increase crop resistance to abiotic stresses, and through modern omics tools such as transcriptomic analysis using RNA sequencing and methylation profiling to identify adaptive traits and the associated genetic diversity that is tolerant, resistant, or resilient to biotic and abiotic stresses. Hence, this approach will help alleviate water scarcity and, likewise, reduce the negative and harmful impact of wastewater irrigation on the condition of crops and the health of their consumers.

Keywords: water scarcity, food security, irrigation, agricultural water footprint, reuse, emerging contaminants

Procedia PDF Downloads 163
69 Predicting Suicidal Behavior by an Accurate Monitoring of RNA Editing Biomarkers in Blood Samples

Authors: Berengere Vire, Nicolas Salvetat, Yoann Lannay, Guillaume Marcellin, Siem Van Der Laan, Franck Molina, Dinah Weissmann

Abstract:

Predicting suicidal behaviors is one of the most complex challenges of daily psychiatric practice. Today, suicide risk prediction using biological tools is not validated and is based only on subjective clinical reports of the at-risk individual. There is therefore a great need to identify biomarkers that would allow early identification of individuals at risk of suicide. Alterations of adenosine-to-inosine (A-to-I) RNA editing of neurotransmitter receptors and other proteins have been shown to be involved in the etiology of different psychiatric disorders and linked to suicidal behavior. RNA editing is a co- or post-transcriptional process leading to a site-specific alteration in RNA sequences. It plays an important role in the epitranscriptomic regulation of RNA metabolism. In postmortem human brain tissue (prefrontal cortex) of depressed suicide victims, Alcediag found specific alterations of RNA editing activity on the mRNA coding for the serotonin 2C receptor (5-HT2cR). Additionally, an increase in expression levels of ADARs, the RNA editing enzymes, and modifications of the RNA editing profiles of prime targets, such as phosphodiesterase 8A (PDE8A) mRNA, have also been observed. Interestingly, the PDE8A gene is located on chromosome 15q25.3, a genomic region that has recurrently been associated with early-onset major depressive disorder (MDD). In the current study, we examined whether modifications in the RNA editing profiles of prime targets allow identifying disease-relevant blood biomarkers and evaluating suicide risk in patients. To address this question, we performed a clinical study to identify an RNA editing signature in the blood of depressed patients with and without a history of suicide attempts. Patient samples were drawn in PAXgene tubes and analyzed on Alcediag’s proprietary RNA editing platform using next-generation sequencing technology. In addition, gene expression analysis by quantitative PCR was performed. We generated a multivariate algorithm comprising various selected biomarkers to detect patients with a high risk of attempting suicide. We evaluated the diagnostic performance using the relative proportion of PDE8A mRNA editing at different sites and/or isoforms as well as the expression of PDE8A and the ADARs. The significance of these biomarkers for suicidality was evaluated using the area under the receiver-operating characteristic curve (AUC). The generated algorithm comprising the biomarkers was found to have strong diagnostic performance, with high specificity and sensitivity. In conclusion, we developed tools to measure disease-specific biomarkers in blood samples of patients for identifying individuals at the greatest risk of future suicide attempts. This technology not only fosters patient management but is also suitable for predicting the risk of drug-induced psychiatric side effects such as iatrogenic increases in suicidal ideation/behavior.
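As an illustration of the evaluation step described above, the sketch below fits a multivariate classifier on biomarker features and scores it by cross-validated AUC. It is a minimal sketch in Python, assuming hypothetical, synthetic feature columns (editing-site proportions and ADAR/PDE8A expression levels); it is not Alcediag's proprietary algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-ins: per-patient editing proportions at several PDE8A
# sites plus ADAR/PDE8A expression levels (all columns are hypothetical)
rng = np.random.default_rng(7)
n = 120
X = rng.normal(size=(n, 6))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.9 * X[:, 0] - 0.7 * X[:, 3]))))

# multivariate biomarker algorithm scored by cross-validated ROC AUC
clf = make_pipeline(StandardScaler(), LogisticRegression())
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```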

Keywords: blood biomarker, next-generation sequencing, RNA editing, suicide

Procedia PDF Downloads 259
68 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model

Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson

Abstract:

The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain a timely insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream application, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning (DL) model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients of 0.31-0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69-0.79). This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter-resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, from which key markers of poverty and slums, such as roofing and road quality, are discernible. It is important to note, however, that the human readers did not receive any training before rating, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship, eXplainable Artificial Intelligence, through a collaborative rather than a comparative framework.
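The headline comparison above is a rank correlation between predicted and survey-derived welfare. Below is a minimal sketch of that evaluation in Python, assuming hypothetical arrays of wealth ratings; the variable names and values are illustrative, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# hypothetical wealth scores for a handful of survey clusters
ground_truth = np.array([1, 2, 2, 3, 4, 5, 3, 1, 5, 4])   # survey wealth quintiles
human_rating = np.array([2, 1, 3, 3, 3, 4, 2, 2, 4, 5])   # reader estimates
model_score = np.array([1.2, 1.8, 2.5, 3.1, 3.6, 4.9, 2.7, 1.1, 4.6, 4.2])

# Spearman rank correlation, as reported for both readers and the DL model
rho_human, _ = spearmanr(ground_truth, human_rating)
rho_model, _ = spearmanr(ground_truth, model_score)
print(f"human rho = {rho_human:.2f}, model rho = {rho_model:.2f}")
```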

Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania

Procedia PDF Downloads 107
67 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision-making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision-makers to be well informed. This paper is a case study of how GIS spatial analysis techniques were successfully applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment faces numerous obstacles, whether topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most favourable pipeline route across a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative and subjective, and is liable to bias towards the discipline and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water, this approach may be sufficient, but for deepwater geohazardous sites, an automated, multi-criteria, quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing the hazard posed to the pipeline by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows). All geocosted routing constraints are combined to generate a composite geocost map, which is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending the escarpment, but the vulnerability of these spurs to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
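To make the least-cost-routing step concrete, here is a minimal sketch, assuming a toy composite geocost raster and 4-connected moves. Production GIS packages implement more sophisticated cost-distance algorithms, so this stands in for the idea rather than the tooling actually used in the study.

```python
import heapq
import numpy as np

def least_geocost_route(geocost, start, end):
    """Dijkstra shortest path over a composite geocost raster (4-connected)."""
    rows, cols = geocost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[start] = geocost[start]
    heap = [(geocost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + geocost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # walk back from the end terminal to recover the route
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# toy composite geocost map: higher values = steeper slope / debris-flow hazard
geocost = np.array([[1, 1, 9, 1],
                    [1, 9, 9, 1],
                    [1, 1, 1, 1],
                    [9, 9, 1, 1]], dtype=float)
route = least_geocost_route(geocost, (0, 0), (3, 3))
```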

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 406
66 Optimization of Perfusion Distribution in Custom Vascular Stent-Grafts Through Patient-Specific CFD Models

Authors: Scott M. Black, Craig Maclean, Pauline Hall Barrientos, Konstantinos Ritos, Asimina Kazakidi

Abstract:

Aortic aneurysms and dissections are leading causes of death in cardiovascular disease. Both inevitably lead to hemodynamic instability without surgical intervention in the form of vascular stent-graft deployment. An accurate description of the aortic geometry and blood flow in patient-specific cases is vital for treatment planning and the long-term success of such grafts, as they must generate physiological branch perfusion and in-stent hemodynamics. The aim of this study was to create patient-specific computational fluid dynamics (CFD) models through a multi-modality, multi-dimensional approach with boundary condition optimization to predict branch flow rates and in-stent hemodynamics in custom stent-graft configurations. Three-dimensional (3D) thoracoabdominal aortae were reconstructed from four-dimensional flow magnetic resonance imaging (4D Flow-MRI) and computed tomography (CT) medical images. The former employed a novel approach to generate and enhance vessel lumen contrast via through-plane velocity at discrete, user-defined cardiac time steps post hoc. To produce patient-specific boundary conditions (BCs), the aortic geometry was reduced to a one-dimensional (1D) model. Thereafter, a zero-dimensional (0D) three-element Windkessel model (3EWM) was coupled to each terminal branch to represent the distal vasculature. In this coupled 0D-1D model, the 3EWM parameters were optimized to yield branch flow waveforms representative of the 4D Flow-MRI-derived in vivo data. Thereafter, a 0D-3D CFD model was created, utilizing the optimized 3EWM BCs and a 4D Flow-MRI-obtained inlet velocity profile. A sensitivity analysis of the effects of stent-graft configuration and BC parameters was then undertaken using multiple stent-graft configurations and a range of distal vasculature conditions. 4D Flow-MRI granted unparalleled visualization of blood flow throughout the cardiac cycle in both the pre- and post-surgical states. Segmentation and reconstruction of healthy and stented regions from retrospective 4D Flow-MRI images also generated 3D models whose geometries were successfully validated against their CT-derived counterparts. 0D-1D coupling efficiently captured branch flow and pressure waveforms, while 0D-3D models also enabled 3D flow visualization and quantification of clinically relevant hemodynamic parameters for in-stent thrombosis and graft limb occlusion. It was apparent that changes in 3EWM BC parameters had a pronounced effect on perfusion distribution and near-wall hemodynamics. Results show that the 3EWM parameters could be iteratively changed to simulate a range of graft limb diameters and distal vasculature conditions for a given stent-graft, to determine the optimal configuration prior to surgery. To conclude, this study outlined a methodology to aid in the prediction of post-surgical branch perfusion and in-stent hemodynamics in patient-specific cases for the implementation of custom stent-grafts.
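For readers unfamiliar with the 3EWM boundary condition, the sketch below integrates its governing ODE for one terminal branch: inflow Q passes a proximal resistance Zc into a node where a compliance C and a distal resistance Rd drain toward venous pressure. This is a minimal sketch, assuming an idealised half-sine inflow and illustrative parameter values; in the study these parameters were optimised against 4D Flow-MRI branch waveforms, which is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative 3EWM parameters (not taken from the study)
Zc, Rd, C = 0.05, 1.0, 1.2   # proximal resistance, distal resistance, compliance
T = 1.0                       # cardiac period [s]

def q_in(t):
    """Toy pulsatile inflow: systolic half-sine, zero flow in diastole."""
    tc = t % T
    return 400.0 * np.sin(np.pi * tc / 0.3) if tc < 0.3 else 0.0

def windkessel(t, pc):
    # distal node balance: inflow splits into compliance and distal resistance
    return [(q_in(t) - pc[0] / Rd) / C]

sol = solve_ivp(windkessel, (0, 10 * T), [80.0], max_step=1e-3)
p_inlet = sol.y[0] + Zc * np.vectorize(q_in)(sol.t)  # P = Pc + Zc * Q
```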

Keywords: 4D flow-MRI, computational fluid dynamics, vascular stent-grafts, windkessel

Procedia PDF Downloads 181
65 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the type of materials and manufacturing processes by which Type IV pressure vessels are manufactured, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high-dimensional. Each variation of design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time-consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with varying parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is automatically generated using the Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to limit the computational complexity and reduce the calculation time. Finally, the results are evaluated and compared with regard to the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and obtain an indication of the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analysis. Subsequently, machine learning could, for example, be used to obtain the optimum directly from the data pool without running further simulations.
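The winding-angle and dome-thickness prediction mentioned above is commonly done with Clairaut's relation for geodesic winding plus a fiber-continuity argument; the sketch below shows that standard analytical form. It is a minimal sketch with illustrative radii: the paper does not spell out its exact analytical method, so treat this as one conventional choice rather than the authors' implementation.

```python
import numpy as np

def geodesic_winding_angle(r, r_polar):
    """Clairaut's relation on a surface of revolution:
    r * sin(alpha) = r_polar (constant), so alpha(r) = asin(r_polar / r)."""
    return np.degrees(np.arcsin(np.clip(r_polar / r, -1.0, 1.0)))

def dome_layer_thickness(r, r_polar, t_cyl, r_cyl):
    """Thickness build-up on the dome by fiber-volume conservation:
    t(r) = t_cyl * (r_cyl * cos(alpha_cyl)) / (r * cos(alpha(r))).
    Note: diverges near the polar opening (fiber turnaround region)."""
    a_cyl = np.arcsin(r_polar / r_cyl)
    a_r = np.arcsin(np.clip(r_polar / r, -1.0, 1.0))
    return t_cyl * (r_cyl * np.cos(a_cyl)) / (r * np.cos(a_r))

# example: boss (polar opening) radius 25 mm, cylinder radius 175 mm
radii = np.linspace(30.0, 175.0, 6)
angles = geodesic_winding_angle(radii, 25.0)
t_dome = dome_layer_thickness(radii, 25.0, t_cyl=1.0, r_cyl=175.0)
```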

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 136
64 Impact of Interdisciplinary Therapy Allied to Online Health Education on Cardiometabolic Parameters and Inflammation Factor Rating in Obese Adolescents

Authors: Yasmin A. M. Ferreira, Ana C. K. Pelissari, Sofia De C. F. Vicente, Raquel M. Da S. Campos, Deborah C. L. Masquio, Lian Tock, Lila M. Oyama, Flavia C. Corgosinho, Valter T. Boldarine, Ana R. Dâmaso

Abstract:

The prevalence of overweight and obesity is growing around the world and is currently considered a global epidemic. Food and nutrition are essential requirements for promoting health and protecting against non-communicable chronic diseases, such as obesity and cardiovascular disease. Specific dietary components may modulate inflammation and oxidative stress in obese individuals. Few studies have investigated the dietary Inflammation Factor Rating (IFR) in obese adolescents. The IFR was developed to characterize an individual’s diet on a scale from anti- to pro-inflammatory. This evaluation helps investigate the effects of an inflammatory diet on the metabolic profile under various individual conditions. Objectives: The present study aims to investigate the effects of a multidisciplinary weight-loss therapy on the inflammation factor rating and cardiometabolic risk in obese adolescents. Methods: A total of 26 volunteers (14-19 years old) were recruited and submitted to a 20-week interdisciplinary therapy allied to a health education website (Ciclo do Emagrecimento®), including clinical, nutritional and psychological counseling and exercise training. Body weight was monitored weekly by self-report and photo. The adolescents answered a test to evaluate their knowledge of the topics covered in the videos. A 24-hour dietary record was applied at baseline and after 20 weeks to assess food intake and to calculate the IFR. A negative IFR suggests that the diet may have inflammatory effects, and a positive IFR indicates an anti-inflammatory effect. Statistical analysis was performed using STATISTICA version 12.5 for Windows. The adopted significance level was α ≤ 5%. Data normality was verified with the Kolmogorov-Smirnov test. Data were expressed as mean±SD values. To analyze the effects of the intervention, a t-test was applied; Pearson’s correlation test was also performed. Results: After 20 weeks of treatment, body mass index (BMI), body weight, body fat (kg and %), and abdominal and waist circumferences decreased significantly. The mean high-density lipoprotein cholesterol (HDL-c) increased after the therapy. Moreover, the inflammation factor rating improved from -427.27±322.47 to -297.15±240.01, suggesting beneficial effects of the nutritional counselling. Considering the correlation analyses, a pro-inflammatory diet was associated with increases in BMI, very-low-density lipoprotein cholesterol (VLDL), triglycerides, insulin and the insulin resistance index (HOMA-IR), while an anti-inflammatory diet was associated with improvements in HDL-c and the quantitative insulin sensitivity check index (QUICKI). Conclusion: The 20-week blended multidisciplinary therapy was effective in reducing body weight and anthropometric circumferences and improving inflammatory markers in obese adolescents. In addition, our results showed that a more inflammatory diet profile is associated with worse cardiometabolic parameters, suggesting the relevance of stimulating anti-inflammatory dietary habits as an effective strategy to treat and control obesity and related comorbidities. Financial Support: FAPESP (2017/07372-1) and CNPq (409943/2016-9).

Keywords: cardiometabolic risk, inflammatory diet, multidisciplinary therapy, obesity

Procedia PDF Downloads 194
63 Surviral: An Agent-Based Simulation Framework for SARS-CoV-2 Outcome Prediction

Authors: Sabrina Neururer, Marco Schweitzer, Werner Hackl, Bernhard Tilg, Patrick Raudaschl, Andreas Huber, Bernhard Pfeifer

Abstract:

History and the current outbreak of COVID-19 have shown the deadly potential of infectious diseases. However, infectious diseases also have a serious impact on areas other than health and healthcare, such as the economy or social life, and these areas are strongly codependent. Therefore, disease control measures, such as social distancing, quarantines, curfews, or lockdowns, have to be adopted in a very considered manner. Infectious disease modeling can support policy- and decision-makers with adequate information regarding the dynamics of the pandemic and can therefore assist in planning and enforcing appropriate measures that will prevent the healthcare system from collapsing. In this work, an agent-based simulation package named “surviral” for simulating infectious diseases is presented, with a special focus on SARS-CoV-2. The presented simulation package was used in Austria to model the SARS-CoV-2 outbreak from the beginning of 2020. Agent-based modeling is a relatively recent modeling approach. Since our world is getting more and more complex, the complexity of the underlying systems is also increasing, and the development of tools and frameworks together with increasing computational power advances the application of agent-based models. For parametrizing the presented model, different data sources, such as known infections, wastewater virus load, blood donor antibodies, circulating virus variants and the used hospitalization capacity, as well as the availability of medical materials like ventilators, were integrated with a database system and used. The simulation results of the model were used to predict the dynamics and the possible outcomes and were used by the health authorities to decide on the measures to be taken in order to control the pandemic situation. The surviral package was implemented in the programming language Java, and the analytics were performed with RStudio. During the first run in March 2020, the simulation showed that without measures other than individual personal behavior and appropriate medication, the death toll would have been about 27 million people worldwide within the first year. The model predicted the hospitalization rates (standard and intensive care) for Tyrol and South Tyrol with an average error of about 1.5%, calculated as 10-day forecasts. The state government and the hospitals were provided with these 10-day forecasts to support their decision-making. This ensured that standard care was maintained for as long as possible without restrictions. Furthermore, various measures were estimated and thereafter enforced; among other things, communities were quarantined and curfews for the entire population were eased on the basis of the calculations. With this framework, which is used in the national crisis team of the Austrian province of Tyrol, a very accurate model could be created at the federal state level as well as at the district and municipal level, which was able to provide decision-makers with a solid information basis. This framework can be transferred to various infectious diseases and can thus be used as a basis for future monitoring.
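Agent-based epidemic models of the kind described here track individual agents through infection states. The toy sketch below shows the core loop in Python; it is a minimal, illustrative skeleton only (the actual surviral package is written in Java and includes hospitalization, variants, and regional structure, none of which is modeled here).

```python
import random

random.seed(1)

class Agent:
    def __init__(self):
        self.state = "S"       # Susceptible / Infected / Recovered
        self.days_infected = 0

def step(population, contacts_per_day=8, p_transmit=0.05, recovery_days=10):
    infected = [a for a in population if a.state == "I"]
    for agent in infected:
        # each infected agent meets a random subset of the population
        for other in random.sample(population, contacts_per_day):
            if other.state == "S" and random.random() < p_transmit:
                other.state = "I"
        agent.days_infected += 1
        if agent.days_infected >= recovery_days:
            agent.state = "R"

population = [Agent() for _ in range(10_000)]
for seed_case in population[:10]:
    seed_case.state = "I"

for day in range(120):
    step(population)
    if day % 30 == 0:
        counts = {s: sum(a.state == s for a in population) for s in "SIR"}
        print(day, counts)
```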

Keywords: modelling, simulation, agent-based, SARS-CoV-2, COVID-19

Procedia PDF Downloads 175
62 Unique Interprofessional Mental Health Education Model: A Pre/Post Survey

Authors: Michele L. Tilstra, Tiffany J. Peets

Abstract:

Interprofessional collaboration in behavioral healthcare education is increasingly recognized for its value in training students to address diverse client needs. While interprofessional education (IPE) is well documented in occupational therapy education for addressing physical health, limited research exists on collaboration with counselors to address mental health concerns and the psychosocial needs of individuals receiving care. The counseling education literature primarily examines the collaboration of counseling students with psychiatrists, psychologists, social workers, and marriage and family therapists. This pretest/posttest survey research study explored changes in attitudes toward interprofessional teams among 56 Master of Occupational Therapy (MOT) (n = 42) and Counseling and Human Development (CHD) (n = 14) students participating in the Counselors and Occupational Therapists Professionally Engaged in the Community (COPE) program. The COPE program was designed to strengthen the behavioral health workforce in high-need and high-demand areas. Students accepted into the COPE program were divided into small MOT/CHD groups to complete multiple interprofessional multicultural learning modules using videos, case studies, and online discussion board posts. The online modules encouraged reflection on various behavioral healthcare roles, the benefits of team-based care, cultural humility, current mental health challenges, personal biases, power imbalances, and advocacy for underserved populations. Using the Student Perceptions of Interprofessional Clinical Education-Revision 2 (SPICE-R2) scale, students completed pretest and posttest surveys on a 5-point Likert scale (Strongly Agree = 5 to Strongly Disagree = 1) to evaluate their attitudes toward interprofessional teamwork and collaboration. The SPICE-R2 measures three factors: interprofessional teamwork and team-based practice (Team), roles/responsibilities for collaborative practice (Roles), and patient outcomes from collaborative practice (Outcomes). The mean total score for all students improved from 4.25 (pretest) to 4.43 (posttest); by factor, Team changed from 4.66 to 4.58, Roles from 3.88 to 4.30, and Outcomes from 4.08 to 4.36. A paired t-test analysis of the total mean scores resulted in a t-statistic of 2.54, which exceeded both one-tail and two-tail critical values, indicating statistical significance (p = .001). When the factors of the SPICE-R2 were analyzed separately, only Roles (t = 4.08, p = .0001) and Outcomes (t = 3.13, p = .002) were statistically significant. The item ‘I understand the roles of other health professionals’ showed the most improvement, from a mean score across all students of 3.76 (pretest) to 4.46 (posttest). The significant improvement in students' attitudes toward interprofessional teams suggests that the unique integration of OT and CHD students in the COPE program effectively develops a better understanding of the collaborative roles necessary for holistic client care. These results support the importance of IPE delivered through structured, engaging interprofessional experiences, which are essential for enhancing students' readiness for collaborative practice and align with accreditation standards requiring interprofessional education in OT and CHD programs to prepare practitioners for team-based care. The findings contribute to the growing body of evidence supporting the integration of IPE in behavioral healthcare curricula to improve holistic client care and encourage students to engage in collaborative practice across healthcare settings.
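The pre/post comparison reported above rests on a paired t-test. Here is a minimal sketch of that analysis in Python, assuming synthetic pre/post total scores; the data below are simulated for illustration and are not the study's.

```python
import numpy as np
from scipy import stats

# hypothetical pre/post SPICE-R2 total scores for 56 students
rng = np.random.default_rng(0)
pre = rng.normal(4.25, 0.40, size=56).clip(1, 5)
post = (pre + rng.normal(0.18, 0.35, size=56)).clip(1, 5)

t_stat, p_two_tailed = stats.ttest_rel(post, pre)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_two_tailed:.4f}")
```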

Keywords: behavioral healthcare, counseling education, interprofessional education, mental health education, occupational therapy education

Procedia PDF Downloads 41
61 Medical Decision-Making in Advanced Dementia from the Family Caregiver Perspective: A Qualitative Study

Authors: Elzbieta Sikorska-Simmons

Abstract:

Advanced dementia is a progressive terminal brain disease accompanied by a syndrome of difficult-to-manage symptoms and complications that eventually lead to death. The management of advanced dementia poses major challenges to family caregivers, who act as patient healthcare proxies in making medical treatment decisions. Little is known, however, about how they manage advanced dementia and how their treatment choices influence the quality of patient life. This prospective qualitative study examines the key medical treatment decisions that family caregivers make while managing advanced dementia. The term ‘family caregiver’ refers to a relative or friend who is primarily responsible for managing the patient’s medical care needs and legally authorized to give informed consent for medical treatments. Medical decision-making implies a process of choosing between treatment options in response to the patient’s medical care needs (e.g., worsening comorbid conditions, pain, infections, acute medical events). Family caregivers engage in this process when they actively seek treatments or follow recommendations by healthcare professionals. A better understanding of medical decision-making from the family caregiver perspective is needed to design interventions that maximize the quality of patient life and limit inappropriate treatments. Data were collected in three waves of semi-structured interviews with 20 family caregivers of patients with advanced dementia. A purposive sample of 20 family caregivers was recruited from a senior care center in Central Florida. The qualitative personal interviews were conducted by the author at 4-5 month intervals. Ethical approval for the study was obtained prior to data collection. Advanced dementia was operationalized as stage five or higher on the Global Deterioration Scale (GDS) (i.e., starting with a GDS score of five, patients are no longer able to survive without assistance due to major cognitive and functional impairments). Information about patients’ GDS scores was obtained from the Center’s Medical Director, who had in-depth knowledge of each patient’s health and medical treatment history. All interviews were audiotaped and transcribed verbatim. The qualitative data analysis was conducted to answer the following research questions: 1) what treatment decisions do family caregivers make while managing the symptoms of advanced dementia, and 2) how do these treatment decisions influence the quality of patient life? To validate the results, the author asked each participating family caregiver whether the summarized findings accurately captured his/her experiences. The identified medical decisions ranged from seeking specialist medical care to end-of-life care. The most common decisions were related to arranging medical appointments, medication management, seeking treatments for pain and other symptoms, nursing home placement, and accessing community-based healthcare services. The most challenging and consequential decisions were related to the management of acute complications, hospitalizations, and the discontinuation of treatments. Decisions that had the greatest impact on the quality of patient life and survival were triggered by traumatic falls, worsening psychiatric symptoms, and aspiration pneumonia. The study findings have important implications for geriatric nurses in the context of patient- and caregiver-centered dementia care. Innovative nursing approaches are needed to support family caregivers in effectively managing the medical care needs of patients with advanced dementia.

Keywords: advanced dementia, family caregiver, medical decision-making, symptom management

Procedia PDF Downloads 122
60 An Argument for Agile, Lean, and Hybrid Project Management in Museum Conservation Practice: A Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts

Authors: Maria Ledinskaya

Abstract:

This paper is part case study and part literature review. It seeks to introduce Agile, Lean, and Hybrid project management concepts from the business, software development, and manufacturing fields to museum conservation by looking at their practical application on a recent conservation project at the Sainsbury Centre for Visual Arts. The author outlines the advantages of leaner and more agile conservation practices in today’s faster, less certain, and more budget-conscious museum climate, where traditional project structures are no longer as relevant or effective. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre by private collectors Michael and Joyce Morris. It was a medium-sized conservation project of moderate complexity, planned and delivered in an environment with multiple known unknowns: an unresearched collection, unknown conditions and materials, and an unconfirmed budget. The project was later impacted by the COVID-19 pandemic, which introduced indeterminate lockdowns, budget cuts, staff changes, and the need to accommodate social distancing and remote communications. The author, then a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. The paper examines the project from the points of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach, also known as Waterfall project management, which has significant drawbacks in today’s museum environment due to its over-reliance on prediction-based planning and its low tolerance for change. In the last 20 years, alternative Agile, Lean, and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, including the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics. Although not intentionally planned as such, the Morris Project had a number of Agile and Lean features which were instrumental to its successful delivery. These key features are identified as distributed decision-making, a co-located cross-disciplinary team, servant leadership, a focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and an emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point in favour of a hybrid model, which combines traditional and alternative project processes and tools to suit the specific needs of the project.

Keywords: agile project management, conservation, hybrid project management, lean project management, waterfall project management

Procedia PDF Downloads 71
59 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators

Authors: K. O'Malley

Abstract:

Background: The emergence and rapid evolution of large language models (LLMs), i.e., models of generative artificial intelligence (AI), has been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A structured search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years and was limited to publications in the English language. Eligibility was ascertained from the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibited a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.

Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university

Procedia PDF Downloads 34
58 Developing Primal Teachers beyond the Classroom: The Quadrant Intelligence (Q-I) Model

Authors: Alexander K. Edwards

Abstract:

Introduction: The moral dimension of teacher education globally has assumed a new paradigm of thinking based on public gain (return on investment), value creation (quality), professionalism (practice), and business strategies (innovation). Abundant literature reveals an interesting revolutionary trend in complementing the development of teachers and academic performance. Because of global competition in the knowledge-creation and service areas, the C21st teacher at all levels is expected to be resourceful, a strategic thinker, socially intelligent, skilled in relationships, and entrepreneurially astute. This study is a significant contribution to practice and innovation in raising exemplary, or primal, teachers. In this study, the qualities needed were framed as a ‘Quadrant Intelligence (Q-i)’ model for primal teacher leadership beyond the classroom. The researcher started by examining the issue that the majority of teachers in the Ghana Education Service (GES) need this Q-i to be effective and efficient. The conceptual framing identified the determinants of such Q-i. This is significant for global employability and versatility in teacher education, to create premium and primal teacher leadership, which is again gaining high attention in scholarship due to failing schools. The moral aspect of teachers failing learners is a highly important discussion. In GES, some schools score zero percent at the basic education certificate examination (BECE). The question is: what will make any professional teacher highly productive, marketable, and entrepreneurial? What will give teachers the moral consciousness to do their best to succeed? Method: This study set out to develop a model for primal teachers in GES as an innovative way to highlight a premium development of C21st business-education acumen through desk reviews. The study is conceptually framed by examining skill sets such as strategic thinking, social intelligence, relational and emotional intelligence, and entrepreneurship to answer three main burning questions and other hypotheses. The study then applied a causal-comparative methodology with a purposive sampling technique (N=500) from CoE, GES, NTVI, and other teacher associations. Participants responded to a 30-item, researcher-developed questionnaire. Data were analyzed on the quadrant constructs and reported as ex post facto analyses of variance and regressions. Multiple associations were established for statistical significance (p=0.05), and causes and effects postulated for scientific discussion. Findings: It was found that these quadrants are very significant in teacher development, with significant variations across demographic groups. However, most teachers lack considerable skills in entrepreneurship, leadership in teaching and learning, and business thinking strategies. These gaps have a significant effect on practice and outcomes. Conclusion and Recommendations: It is therefore concluded that GES teachers may need further instruction in innovation and creativity to transform knowledge creation into business ventures. In-service training (INSET) has to be comprehensive, and teacher education curricula at colleges may have to be revisited. Teachers have the potential to raise their social capital, to be entrepreneurs, and to exhibit professionalism beyond their community service. Their primal leadership focus will benefit many clients, including students and social circles. The recommendations examine the policy implications for curriculum design, practice, innovation, and educational leadership.

Keywords: emotional intelligence, entrepreneurship, leadership, quadrant intelligence (q-i), primal teacher leadership, strategic thinking, social intelligence

Procedia PDF Downloads 315
57 Adolescent Health Risk Behaviors and the Mediating Effects of Family Dynamics and Socio-Demographic Factors

Authors: Rufina C. Abul, Dylan Kyle D. Apostol, Darius Rex G. Binuya, Alyanah Mae F. Cauilan, Darren A. Diaz, Angelica Jones A. Gallang, Charisse G. Kiwang, Alyanna Nicole G. Mactal, Nadine Beatrize V. Nerona, Janella Nicole R. Posadas, Charisse Purie C. Toledo

Abstract:

Background: Dramatic physical development, socioemotional adjustment, and cognitive change characterize adolescence. Adolescent brains are prone to emotional reactivity, making adolescents likely to engage in risk-taking and impulsive behaviors. The family is crucial in laying the foundations of good health. Aims: This study determined the degree of family cohesion, the quality of father-child and mother-child relationships, and the degree of academic pressure across cultures, age groups, and sexual orientations. Further, it assessed the prevalence of adolescent health concerns, including suicide risks, risk-taking behaviors, social media engagement, and self-care deviations. Finally, the correlations between health risk behaviors and the elements of family dynamics were examined. Methods: A descriptive-correlational design served as the blueprint for this study. Data were collected from 1095 adolescents aged 12-21 in two high schools and two universities in Baguio City using self-report questionnaires. Data were analyzed using the Microsoft Excel Analysis ToolPak and IBM SPSS Statistics to identify significant differences and relationships among variables through descriptive statistics (frequencies, percentages, means, and figures) and inferential statistics (ANOVA and logistic regression). Results and Discussion: Adolescents generally have strong family cohesion (FC), high-quality father-child relationships (F-CR), very high-quality mother-child relationships (M-CR), and experience high academic pressure (AP). Cultural affiliation does not influence the four elements of family dynamics; the higher the age, the stronger the family cohesion; and males score significantly higher on family cohesion and the mother-child relationship, and significantly lower on perceived academic pressure, compared to their female and LGBT counterparts. Suicide risk is prevalent in 29-63% of the population. Among safety issues, prevalence is lowest for being in an abusive relationship (8.22%) and highest for encountering major family changes (53.52%). Substance use was highest for vaping (22.74%), sexual engagement occurs in 14.61% of the population, and 63% engage with social media for more than 5 hours/day. Self-care deviations are most prevalent for lack of visits to healthcare professionals (64.65%), weight concerns (63.39%), and lack of exercise (49.94%). All four elements of family dynamics (FC, F-CR, M-CR and AP) are significantly associated with safety concerns, suicide risks, and social media engagement, while M-CR significantly influences cigarette smoking, alcohol drinking, rugby (inhalant) use, and engagement in sex. Conclusion and Recommendations: Strong family cohesion and quality parent-child interactions improve emotional and behavioral outcomes. Sexual orientation has a significant impact on academic pressure and social media use, warranting targeted interventions. The link between family dynamics and health-risk behaviors emphasizes the importance of promoting positive family relationships and encouraging safer behaviors, which are critical for enhancing adolescents' well-being.
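To illustrate the kind of logistic-regression association reported above, here is a minimal sketch in Python on synthetic data. The variable names, effect sizes, and data are entirely hypothetical stand-ins, not the study's measurements or model.

```python
import numpy as np
import statsmodels.api as sm

# synthetic illustration: does family cohesion predict suicide risk?
rng = np.random.default_rng(42)
n = 1095
cohesion = rng.normal(0, 1, n)            # standardized family-cohesion score
logit = -0.4 - 0.8 * cohesion             # assumed protective effect
risk = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(cohesion)
fit = sm.Logit(risk, X).fit(disp=0)
print(np.exp(fit.params[1]))              # odds ratio per SD of cohesion
```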

Keywords: adolescent health, family cohesion, health risk behaviors, suicide risk

Procedia PDF Downloads 17
56 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data

Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann

Abstract:

Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials generally depends on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is therefore crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions, along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach over a wide frequency range and under different temperature conditions. To this end, dynamic measurements are carried out on a single-lap-joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRFs) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee online detection of damage, i.e., delamination in the viscoelastic bonding of the described specimen, during frequency-monitored end-of-life testing. For this purpose, an inverse technique is presented that determines the damage location and size based on the modal frequency shift and on the change of the mode shapes. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters, and a subsequent experimental validation achieved through dynamic measurements of specimens with different, pre-generated crack scenarios and comparison with the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually subject to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and therefore outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
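To illustrate the parameter-identification idea, the sketch below fits a generalized Maxwell (Prony series) complex modulus to frequency-domain data by least squares. It is a minimal sketch using synthetic "measured" modulus values; the study itself minimized the mismatch between simulated and measured FRFs through an FE model, which this simplified modulus-level fit only approximates.

```python
import numpy as np
from scipy.optimize import least_squares

def complex_modulus(params, omega, n_terms=3):
    """Generalized Maxwell (Prony) model:
    G*(w) = G_inf + sum_i G_i * (i*w*tau_i) / (1 + i*w*tau_i)."""
    g_inf = params[0]
    g = params[1:1 + n_terms]
    tau = params[1 + n_terms:]
    iwt = 1j * omega[:, None] * tau[None, :]
    return g_inf + (g[None, :] * iwt / (1.0 + iwt)).sum(axis=1)

def residuals(params, omega, g_measured):
    # least_squares needs real residuals: stack storage and loss parts
    diff = complex_modulus(params, omega) - g_measured
    return np.concatenate([diff.real, diff.imag])

# synthetic "measured" data standing in for shaker-test-derived moduli
omega = np.logspace(0, 4, 40)                             # rad/s
true = np.array([1.0, 0.8, 0.5, 0.3, 1e-1, 1e-2, 1e-3])   # [G_inf, G_i..., tau_i...]
g_meas = complex_modulus(true, omega)

x0 = np.array([0.5, 0.5, 0.5, 0.5, 3e-1, 3e-2, 3e-3])
fit = least_squares(residuals, x0, args=(omega, g_meas), bounds=(1e-6, np.inf))
```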

Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers

Procedia PDF Downloads 206
55 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis

Authors: Serhat Tüzün, Tufan Demirel

Abstract:

Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems and provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), with suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-80s. It then documents the origins of Executive Information Systems, online analytical processing (OLAP) and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. Since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on the design, development and implementation processes for all types of DSS. Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM) and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common use of Web-based DSS has been to assist customers in configuring products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices and delivery options. The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems. This includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. To address this need, this paper surveys recent articles on DSS. The literature has been reviewed in depth, and by classifying previous studies according to their preferences, a taxonomy for DSS has been prepared. With the aid of the taxonomic review and recent developments in the field, this study aims to analyze future trends in decision support systems.

Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review

Procedia PDF Downloads 280
54 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan

Authors: Hsien-Te Lin

Abstract:

The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), an LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating carbon footprint throughout all stages of the life cycle of building projects, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles to the implementation of carbon reduction policies. One of the greatest obstacles is the misapplication of the carbon footprint inventory standards PAS2050 or ISO14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data for raw materials and the transportation of construction equipment throughout the construction period based on purchasing lists and construction logs. This verification method is cumbersome by nature and unhelpful to the promotion of low carbon design. To provide an engineer-oriented BCFES with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials are meaningless for the purpose of encouraging carbon reduction design without a feedback mechanism, because an engineering project is not designed in terms of raw materials but rather of building components, such as flooring, walls, roofs, ceilings, roads or cabinets. The LCBA Database was compiled from existing carbon footprint databases for raw materials and architectural graphic standards. Project designers can now use the LCBA Database to conduct low carbon design in a much simpler and more efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle in predicting energy consumption. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to accurately predict energy usage with simple multiplications and additions using EUI data and the designed efficiency levels for the building envelope, AC, lighting and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in building carbon footprint and further enhance low carbon designs. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification, and the assessment has become mandatory in some cities.
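
The dynamic EUI method's "simple multiplications and additions" might look like the following sketch; the baseline EUI values and efficiency factors are hypothetical placeholders, since the actual LCBA coefficients are not given here.

```python
# Sketch of an EUI-style energy prediction: annual use = floor area x baseline EUI,
# scaled by designed efficiency levels per subsystem. All coefficients are hypothetical.
BASELINE_EUI = {"air_conditioning": 60.0, "lighting": 30.0, "equipment": 25.0}  # kWh/m^2/yr

def annual_energy(floor_area_m2, efficiency):
    """Sum subsystem energy, each scaled by its designed efficiency factor
    (1.0 = baseline; < 1.0 = better than baseline)."""
    return sum(floor_area_m2 * eui * efficiency.get(k, 1.0)
               for k, eui in BASELINE_EUI.items())

# Example: a 10,000 m^2 office with improved envelope/AC (0.8) and lighting (0.7).
design = {"air_conditioning": 0.8, "lighting": 0.7, "equipment": 1.0}
print(f"{annual_energy(10_000, design):,.0f} kWh/yr")
```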

Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy

Procedia PDF Downloads 139
53 Developing an Integrated Clinical Risk Management Model

Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei

Abstract:

Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that compensates for the limitations of each risk assessment and management tool by drawing on the advantages of the others. Methods: The procedure was determined in two main stages: development of an initial model during meetings with professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a quantitative-qualitative study. For the qualitative dimension, the focus group method with an inductive approach was used. To evaluate the results of the qualitative study, quantitative assessment of the two parts of the fourth phase and of the seven phases of the research was conducted. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through application of the activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT), and Eindhoven Classification Model (ECM) tools. The model was applied to patients admitted to a day-clinic ward of a public hospital for surgery from October 2012 to June. Statistical Analysis Used: Qualitative data analysis was done through content analysis, and quantitative analysis through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was developed by focus discussion group (FDG) members in five main phases. Then, following the FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in the tables. The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes were above the predetermined significant-risk threshold (RPN > 250). After a 3-month period, patient misidentification incidents were the most frequently reported events. Each RPN criterion of the misidentification events was compared, and the RPN numbers for the three reported misidentification events could be determined against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. The wrong-side surgery event was selected by the focus discussion group for a proposed improvement action. The most important causes were lack of planning for the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system in the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the health care industry requires risk managers to have a multifaceted vision. Therefore, applying only retrospective or only prospective tools for risk management does not work, and each organization must provide conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
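
As a sketch of the RPN step, the snippet below uses the classic multiplicative risk priority number and the study's significance threshold (RPN > 250); the failure modes and scores are hypothetical, and the study's own tables combine three severity, two probability, and two preventability criteria rather than single scores.

```python
# FMEA sketch: compute a risk priority number per failure mode and flag those above
# the significance threshold used in the study (RPN > 250). Scores are hypothetical.
RPN_THRESHOLD = 250

def rpn(severity, probability, preventability):
    """Classic multiplicative RPN; each score assumed on a 1-10 scale."""
    return severity * probability * preventability

failure_modes = {
    "patient misidentification": (9, 6, 7),
    "wrong-side surgery":        (10, 3, 9),
    "incomplete consent form":   (5, 4, 3),
}
for mode, scores in sorted(failure_modes.items(), key=lambda kv: -rpn(*kv[1])):
    value = rpn(*scores)
    flag = "SIGNIFICANT" if value > RPN_THRESHOLD else "monitor"
    print(f"{mode:28s} RPN={value:4d}  {flag}")
```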

Keywords: failure modes and effective analysis, risk management, root cause analysis, model

Procedia PDF Downloads 250
52 Neologisms and Word-Formation Processes in a Board Game Rulebook Corpus: Preliminary Results

Authors: Athanasios Karasimos, Vasiliki Makri

Abstract:

This research focuses on the design and development of the first text corpus based on board game rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing such extensive datasets, morphologists can gain valuable insights into how language functions and evolves, as the data reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations. The designers and creators, in turn, produce rulebooks that reflect their joy of discovering the hidden potential of language, igniting the imagination, and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, whether language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice and chance games, strategy, eurogames, thematic, role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts, and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. By enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to make use of this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and of morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.
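
A common first pass for the neologism detection described above is to tokenize the rulebooks and keep word types unattested in a reference lexicon; in the sketch below, the corpus directory and word-list file are hypothetical stand-ins for the BGRC and a standard English lexicon.

```python
# Sketch of a first-pass neologism candidate finder: tokenize rulebook texts and keep
# word types absent from a reference lexicon. Paths and the lexicon file are hypothetical.
import re
from collections import Counter
from pathlib import Path

def tokens(text):
    return re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())  # keep hyphenated compounds

lexicon = set(Path("english_wordlist.txt").read_text().split())  # reference word list
counts = Counter()
for path in Path("rulebooks").glob("*.txt"):                     # the BGRC texts
    counts.update(tokens(path.read_text(encoding="utf-8")))

# Candidates: frequent types unattested in the lexicon (likely neologisms, names, or
# jargon); manual inspection and morphological analysis follow this step.
candidates = [(w, n) for w, n in counts.most_common() if w not in lexicon and n >= 3]
print(candidates[:20])
```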

Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes

Procedia PDF Downloads 103
51 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many traditional facial expression and emotion recognition methods based on LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract automatic features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the absence of which is one of the main drawbacks of CNNs. We develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax converges to the gold labels too quickly, driving the model toward over-fitting, because it cannot determine adequately discriminative feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer and by specifying a desired soft margin; in effect, the margin acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss aims to compact same-class labels and separate different-class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors, which assigns more weight to classes lying close to one another (namely, "hard labels to learn"). In doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on remedying the weak convergence of the Adam optimizer on non-convex problems. Our optimizer works by an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight decay method that drastically reduces the learning rate near optima to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013 (a 16% improvement over the first rank after 10 years), 90.73% on RAF-DB, and 100% k-fold average accuracy on the CK+ dataset, a top performance relative to networks that require much larger training datasets.
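
The abstract does not fully specify the Dynamic Soft-Margin SoftMax, so as a hedged point of reference the sketch below implements a standard additive-margin softmax loss (in the style of AM-Softmax), which likewise enlarges the decision margin to force more discriminative embeddings; the scale and margin values and the 512-dimensional embedding are illustrative assumptions.

```python
# Reference sketch of an additive-margin softmax loss (AM-Softmax style), a fixed-margin
# relative of the paper's Dynamic Soft-Margin SoftMax; scale/margin are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginSoftmax(nn.Module):
    def __init__(self, feat_dim, num_classes, scale=30.0, margin=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale, self.margin = scale, margin

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalized embeddings and class weights.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # Subtract the margin from the target-class logit only, then rescale:
        # the model must beat other classes by at least `margin` in cosine space.
        onehot = F.one_hot(labels, cos.size(1)).to(cos.dtype)
        logits = self.scale * (cos - self.margin * onehot)
        return F.cross_entropy(logits, labels)

# Usage: loss = MarginSoftmax(512, 7)(backbone(images), emotion_labels)
```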

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 75
50 Flood Risk Assessment for Agricultural Production in a Tropical River Delta Considering Climate Change

Authors: Chandranath Chatterjee, Amina Khatun, Bhabagrahi Sahoo

Abstract:

With the changing climate, precipitation events are intensifying in tropical river basins. Since these basins are strongly influenced by the monsoonal rainfall pattern, critical impacts are observed on agricultural practices in the downstream river reaches. This study analyses the crop damage and associated flood risk, in terms of net benefit, in the paddy-dominated tropical Indian delta of the Mahanadi River. The Mahanadi River basin lies in the eastern part of the Indian sub-continent and is greatly affected by the southwest monsoon rainfall extending from June to September. This river delta is highly flood-prone and has suffered recurring high floods, especially after the 2000s. In this study, the lumped conceptual model Nedbør Afstrømnings Model (NAM), from the MIKE suite of models, is used for rainfall-runoff modeling. The NAM model is laterally integrated with the MIKE11 Hydrodynamic (HD) model to route the runoff up to the head of the delta region. To obtain the precipitation-derived future projected discharges at the head of the delta, nine Global Climate Models (GCMs), namely BCC-CSM1.1(m), GFDL-CM3, GFDL-ESM2G, HadGEM2-AO, IPSL-CM5A-LR, IPSL-CM5A-MR, MIROC5, MIROC-ESM-CHEM and NorESM1-M, available in the Coupled Model Intercomparison Project Phase 5 (CMIP5) archive, are considered. These nine GCMs have previously been found to best capture the Indian Summer Monsoon rainfall. Based on their performance in reproducing the historical discharge pattern, three GCMs (HadGEM2-AO, IPSL-CM5A-MR and MIROC-ESM-CHEM) are selected, with a higher Taylor Skill Score as the selection criterion. Thereafter, the 10-year return period design flood is estimated using L-moments-based flood frequency analysis for the historical period and three future projected periods (2010-2039, 2040-2069 and 2070-2099) under Representative Concentration Pathways (RCP) 4.5 and 8.5. A non-dimensional hydrograph analysis is performed to obtain the hydrographs for the historical/projected 10-year return period design floods. These hydrographs are forced into the calibrated and validated coupled 1D-2D hydrodynamic model MIKE FLOOD to simulate flood inundation in the delta region. Historical and projected flood risk is defined based on the flood inundation simulated by the MIKE FLOOD model and the inundation depth-damage-duration relationship of a normal rice variety cultivated in the river delta. In general, flood risk is expected to increase in all the future projected periods compared to the historical episode. Further, compared to the 2010s (2010-2039), an increased flood risk in the 2040s (2040-2069) is shown by all three selected GCMs. However, the flood risk then declines in the 2070s towards the end of the century (2070-2099). The methodology adopted herein for flood risk assessment may be implemented in any river basin worldwide, and the results obtained from this study can support future flood preparedness through suitable flood adaptation strategies.
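
The L-moments-based estimate of the 10-year design flood can be sketched as follows for a Gumbel (EV1) distribution, a common choice in flood frequency analysis; the annual-maximum series is synthetic, as the study's discharge data are not reproduced here.

```python
# Sketch of an L-moments Gumbel fit for the T-year design flood; the annual-maximum
# series here is synthetic, standing in for the observed/projected discharge data.
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_design_flood(annual_max, T):
    x = np.sort(annual_max)                          # ascending annual maxima
    n = x.size
    b0 = x.mean()                                    # probability-weighted moments
    b1 = np.sum((np.arange(1, n + 1) - 1) / (n - 1) * x) / n
    lam1, lam2 = b0, 2 * b1 - b0                     # first two sample L-moments
    alpha = lam2 / np.log(2)                         # Gumbel scale from L-moments
    xi = lam1 - EULER_GAMMA * alpha                  # Gumbel location
    return xi - alpha * np.log(-np.log(1 - 1 / T))   # T-year quantile

rng = np.random.default_rng(7)
peaks = rng.gumbel(loc=20000, scale=5000, size=40)   # synthetic peak flows [m^3/s]
print(f"10-year design flood: {gumbel_design_flood(peaks, 10):,.0f} m^3/s")
```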

Keywords: flood frequency analysis, flood risk, global climate models (GCMs), paddy cultivation

Procedia PDF Downloads 75
49 Cognitive Decline in People Living with HIV in India and Correlation with Neurometabolites Using 3T Magnetic Resonance Spectroscopy (MRS): A Cross-Sectional Study

Authors: Kartik Gupta, Virendra Kumar, Sanjeev Sinha, N. Jagannathan

Abstract:

Introduction: A significant number of patients with human immunodeficiency virus (HIV) infection show a neurocognitive decline (NCD) ranging from minor cognitive impairment to severe dementia. The possible causes of NCD in HIV-infected patients include brain injury by HIV before cART, neurotoxic viral proteins, and metabolic abnormalities. In the present study, we compared the level of NCD in asymptomatic HIV-infected patients with changes in brain metabolites measured using magnetic resonance spectroscopy (MRS). Methods: 43 HIV-positive patients (30 males and 13 females) attending the ART center of the hospital and HIV-seronegative healthy subjects were recruited for the study. All participants completed the MRI and MRS examination, detailed clinical assessments, and a battery of neuropsychological tests. All MR investigations were carried out on a 3.0T MRI scanner (Ingenia/Achieva, Philips, Netherlands). The MRI examination protocol included the acquisition of T2-weighted imaging in axial, coronal and sagittal planes, and T1-weighted, FLAIR, and DWI images in the axial plane. Patients who showed any apparent lesion on MRI were excluded from the study. T2-weighted images in three orthogonal planes were used to localize the voxel in the left frontal lobe white matter (FWM) and left basal ganglia (BG) for single-voxel MRS. Single-voxel MRS spectra were acquired with a point-resolved spectroscopy (PRESS) localization pulse sequence at an echo time (TE) of 35 ms and a repetition time (TR) of 2000 ms with 64 or 128 scans. Absolute concentrations of metabolites were estimated after automated preprocessing using LCModel with the water-scaling method, and the Cramer-Rao lower bounds for all metabolites analyzed in the study were below 15%. Levels of total N-acetyl aspartate (tNAA), total choline (tCho), glutamate + glutamine (Glx), and total creatine (tCr) were measured. Cognition was tested using a battery of tests validated for the Indian population. The cognitive domains tested were memory, attention-information processing, abstraction-executive function, and simple and complex perceptual motor skills. Z-scores normalized according to age, sex and education standards were used to calculate dysfunction in these individual domains. NCD was defined as dysfunction (Z-score ≤ -2) in at least two domains. One-way ANOVA was used to compare the difference in brain metabolites between the patients and healthy subjects. Results: NCD was found in 23 (53%) patients. There was no significant difference in age, CD4 count and viral load between the two groups. Maximum impairment was found in the domains of memory and simple motor skills, i.e., 19/43 (44%). The prevalence of deficits in attention-information processing, complex perceptual motor skills and abstraction-executive function was 37%, 35%, and 33%, respectively. Subjects with NCD had a higher level of glutamate in the frontal region (8.03 ± 2.30 vs. 10.26 ± 5.24, p-value 0.001). Conclusion: Among newly diagnosed, ART-naïve retroviral disease patients from India, cognitive decline was found in 53% of patients using tests validated for this population. Those with neurocognitive decline had a significantly higher level of glutamate in the left frontal region. There was no significant difference in age, CD4 count or viral load at initiation of ART between the two groups.
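
The case definition above reduces to a simple counting rule, sketched below; the domain names follow the abstract, while the cutoff handling and the example scores are illustrative assumptions.

```python
# Sketch of the NCD case definition: a participant is classified as having
# neurocognitive decline if >= 2 domains fall at or below the Z-score cutoff.
Z_CUTOFF = -2.0      # at least 2 SD below the age/sex/education norm (assumed reading)
MIN_DOMAINS = 2

def has_ncd(domain_z):
    impaired = [d for d, z in domain_z.items() if z <= Z_CUTOFF]
    return len(impaired) >= MIN_DOMAINS, impaired

participant = {   # hypothetical normalized Z-scores, one per tested domain
    "memory": -2.4,
    "attention-information processing": -2.1,
    "abstraction-executive": -0.5,
    "simple perceptual motor": -1.2,
    "complex perceptual motor": 0.3,
}
ncd, domains = has_ncd(participant)
print(f"NCD: {ncd} (impaired domains: {domains})")
```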

Keywords: HIV, neurocognitive decline, neurometabolites, magnetic resonance spectroscopy

Procedia PDF Downloads 214
48 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management

Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further adding to a perennial global concern. Forest fires often lead to devastating consequences, ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fires. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across the more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.

Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities

Procedia PDF Downloads 72
47 Finite Element Modeling of Global Ti-6Al-4V Mechanical Behavior in Relationship with Microstructural Parameters

Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vedal, Farhad Rezai-Aria, Christine Boher

Abstract:

The global mechanical behavior of materials is strongly linked to their microstructure, especially their crystallographic texture and grain morphology. These material aspects determine the character of the mechanical fields (heterogeneous or homogeneous) and thus impart to the global behavior a degree of anisotropy that depends on the initial microstructure. For these reasons, the prediction of the global behavior of materials in relation to their microstructure must be performed with a multi-scale approach, and multi-scale modeling in the context of crystal plasticity is widely used. In the present contribution, a phenomenological elasto-viscoplastic model developed in the crystal plasticity context and the finite element method are used to investigate the effects of crystallographic texture and grain size on the global behavior of a polycrystalline equiaxed Ti-6Al-4V alloy. The constitutive equations of this model are written at the local scale for each slip system within each grain, while the strain and stress fields are investigated at the global scale via a finite element scale transition. The beta phase of the modeled Ti-6Al-4V alloy is negligible; its volume fraction is less than 10%. Three families of slip systems of the alpha phase are considered: the basal and prismatic families with an ⟨a⟩ Burgers vector and the pyramidal family with a ⟨c+a⟩ Burgers vector. The twinning mechanism of plastic strain is not observed in Ti-6Al-4V and is therefore not considered in the present modeling. Nine representative elementary volumes (REVs) are generated with Voronoi tessellations. For each individual equiaxed grain, its own crystallographic orientation with respect to the loading is taken into account. The meshing strategy is optimized to eliminate meshing effects while allowing the individual grain size to be calculated. The stress and strain fields are determined at each Gauss point of the mesh elements. A post-treatment is used to calculate the local behavior (in each grain), and then, by appropriate homogenization, the macroscopic behavior is calculated. The developed model is validated by comparing the numerical simulation results with experimental data reported in the literature. It is observed that the present model is able to predict the global mechanical behavior of the Ti-6Al-4V alloy and to investigate the effects of microstructural parameters. According to the simulations performed on the generated volumes (REVs), the macroscopic mechanical behavior of Ti-6Al-4V is strongly linked to the active slip system family (prismatic, basal or pyramidal). The crystallographic texture determines which family of slip systems can be activated; it therefore gives the plastic strain a heterogeneous character and hence an anisotropic macroscopic mechanical behavior. The average grain size also influences the mechanical properties of Ti-6Al-4V, especially the yield stress; as the average grain size decreases, the yield strength increases according to the Hall-Petch relationship. The grain size distribution gives the strain fields considerable heterogeneity. As grain size increases, scattering in the localization of plastic strain is observed; thus, stress concentrations are stronger in certain areas than in others.
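
For reference, the Hall-Petch relationship invoked above takes the classical form below, where sigma_y is the yield stress, sigma_0 the friction (lattice resistance) stress, k_y the strengthening coefficient, and d the average grain diameter; the functional form is standard, while the coefficients for Ti-6Al-4V must be fitted to experimental data.

```latex
\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}
```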

Keywords: microstructural parameters, multi-scale modeling, crystal plasticity, Ti-6Al-4V alloy

Procedia PDF Downloads 126
46 Absolute Quantification of the Bexsero Vaccine Component Factor H Binding Protein (fHbp) by Selected Reaction Monitoring: The Contribution of Mass Spectrometry in Vaccinology

Authors: Massimiliano Biagini, Marco Spinsanti, Gabriella De Angelis, Sara Tomei, Ilaria Ferlenghi, Maria Scarselli, Alessia Biolchi, Alessandro Muzzi, Brunella Brunelli, Silvana Savino, Marzia M. Giuliani, Isabel Delany, Paolo Costantino, Rino Rappuoli, Vega Masignani, Nathalie Norais

Abstract:

The gram-negative bacterium Neisseria meningitidis serogroup B (MenB) is an exclusively human pathogen and the major cause of meningitis and severe sepsis in infants and children, but also in young adults. The pathogen is carried by about 30% of the healthy population, which acts as a reservoir, spreading it through saliva and respiratory fluids during coughing, sneezing and kissing. Among the surface-exposed protein components of this diplococcus, factor H binding protein (fHbp) is a lipoprotein proven to be a protective antigen and used as a component of the recently licensed Bexsero vaccine. fHbp is a highly variable meningococcal protein: to reflect its remarkable sequence variability, it has been classified into three variants (or two subfamilies), with poor cross-protection among the different variants. Furthermore, the level of fHbp expression varies significantly among strains, and this has also been considered an important factor for predicting MenB strain susceptibility to anti-fHbp antisera. Different methods have been used to assess fHbp expression in meningococcal strains; however, all these methods use anti-fHbp antibodies, and for this reason the results are affected by the different affinities that antibodies can have for different antigenic variants. To overcome the limitations of antibody-based quantification, we developed a quantitative Mass Spectrometry (MS) approach. Selected Reaction Monitoring (SRM) recently emerged as a powerful MS tool for detecting and quantifying proteins in complex mixtures. SRM is based on the targeted detection of ProteoTypic Peptides (PTPs), which are unique signatures of a protein that can be easily detected and quantified by MS. This approach, proven to be highly sensitive, quantitatively accurate and highly reproducible, was used to quantify the absolute amount of the fHbp antigen in total extracts derived from 105 clinical isolates, evenly distributed among the three main variant groups and selected to be representative of the fHbp subvariants circulating around the world. We extended the study to the genetic level, investigating the correlation between the differential level of expression and polymorphisms present within the genes and their promoter sequences. The implications of fHbp expression for the susceptibility of strains to killing by anti-fHbp antisera are also presented. To date, this is the first comprehensive fHbp expression profiling in a large panel of Neisseria meningitidis clinical isolates driven by an antibody-independent MS-based methodology, opening the door to new applications in vaccine coverage prediction and reinforcing the molecular understanding of released vaccines.

Keywords: quantitative mass spectrometry, Neisseria meningitidis, vaccines, bexsero, molecular epidemiology

Procedia PDF Downloads 314
45 Production of Insulin Analogue SCI-57 by Transient Expression in Nicotiana benthamiana

Authors: Adriana Muñoz-Talavera, Ana Rosa Rincón-Sánchez, Abraham Escobedo-Moratilla, María Cristina Islas-Carbajal, Miguel Ángel Gómez-Lim

Abstract:

Record-high rates of diabetes incidence and prevalence worldwide will increase the number of diabetic patients requiring insulin or insulin analogues, and current production systems would then not be sufficient to meet future market demand. Therefore, efficient expression systems for insulin and insulin analogues need to be developed. In addition, insulin analogues with better pharmacokinetic and pharmacodynamic properties and without mitogenic potential will be required. SCI-57 (single-chain insulin-57) is an insulin analogue with 10 times greater affinity for the insulin receptor, higher resistance to thermal degradation than insulin, and native mitogenicity and biological effect. Plants have been used as expression platforms to produce recombinant proteins because of advantages such as cost-effectiveness, posttranslational modifications, absence of human pathogens, and high quality. Immunoglobulin production with a yield of 50% has been achieved by transient expression in Nicotiana benthamiana (Nb). The aim of this study is to produce SCI-57 by transient expression in Nb. Methodology: The DNA sequence encoding SCI-57 was cloned in pICH31070. This construct was introduced into Agrobacterium tumefaciens by electroporation, and the resulting strain was used to infiltrate leaves of Nb. In order to isolate SCI-57, leaves from transformed plants were incubated for 3 hours with the extraction buffer and then filtered to remove solid material. The resulting protein solution was subjected to anion exchange chromatography on an FPLC system and to ultrafiltration to purify SCI-57. SCI-57 was detected by its electrophoresis pattern (SDS-PAGE). The protein band was digested with trypsin, and the peptides were analyzed by liquid chromatography tandem mass spectrometry (LC-MS/MS). A purified protein sample (20 µM) was analyzed by ESI-Q-TOF-MS to obtain the ionization pattern and determine the exact molecular weight. The chromatography pattern and impurities were examined by RP-HPLC with recombinant insulin as the standard. The identity of SCI-57 was confirmed by anti-insulin ELISA, and the total soluble protein concentration was quantified by the Bradford assay. Results: The expression cassette was verified by restriction mapping (5393 bp fragment). SDS-PAGE of the crude leaf extract (CLE) of transformed plants revealed a protein of about 6.4 kDa that was absent in the CLE of untransformed plants. The LC-MS/MS results displayed one peptide with a high score matching the SCI-57 amino acid sequence in the sample, confirming the identity of SCI-57. In the ionization pattern of the purified SCI-57 sample (PSCI-57), the most intense charge state was 1069 m/z (+6), corresponding to the molecular weight of SCI-57 (6412.6554 Da). The RP-HPLC of the PSCI-57 shows a peak with a retention time (rt) and UV spectroscopic profile similar to those of the insulin standard (SCI-57 rt = 12.96 min and insulin rt = 12.70 min). The collected SCI-57 peak gave an ELISA signal. The total protein amount in the CLE from transformed plants was higher than that from untransformed plants. Conclusions: Our results suggest the feasibility of producing the insulin analogue SCI-57 by transient expression in Nicotiana benthamiana. Further work is being undertaken to evaluate its biological activity via glucose uptake in insulin-sensitive and insulin-resistant murine and human cultured adipocytes.

Keywords: insulin analogue, mass spectrometry, Nicotiana benthamiana, transient expression

Procedia PDF Downloads 351
44 On the Bias and Predictability of Asylum Cases

Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats

Abstract:

An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual's legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity, or lacking trust towards the authorities among applicants. The shaky ground on which authorities must base their subjective assessments of asylum applicants' credibility raises the question of whether adjudicators make the correct decision in all cases. Moreover, the subjective element in these assessments raises the question of whether individual asylum cases could be afflicted by implicit biases or stereotyping amongst adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases, initially rejected and then re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in the recognition rates with regard to a number of applicant features: country of origin/nationality, identified gender, identified religion, ethnicity, whether torture was mentioned in the case and, if so, whether the claim was supported or not, and the year the applicant entered Denmark. In order to extract these features, as well as the final decision of the RAB, from the text summaries, we applied natural language processing and regular expressions adjusted for the Danish language. We observed interesting variations in recognition rates related to the applicants' country of origin, ethnicity, year of entry, and the support or not of torture claims, whenever those were made in the case. The appearance (or absence) of significant variations in the recognition rates does not by itself imply (or rule out) bias in the decision-making process. None of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker's fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that, when using the applicant's country of origin, religion, ethnicity and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively. This suggests that these features carry potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is ongoing.
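
A sketch of the prediction experiment described above follows; the CSV file, column names, and classifier settings are illustrative assumptions standing in for the features extracted from the RAB summaries.

```python
# Sketch of the prediction experiment: one-hot encode categorical case features and
# evaluate a random forest. File and column names are stand-ins for the RAB features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

cases = pd.read_csv("rab_case_features.csv")   # hypothetical output of the NLP extraction
X = pd.get_dummies(
    cases[["country_of_origin", "religion", "ethnicity", "year_of_entry"]],
    columns=["country_of_origin", "religion", "ethnicity"],  # year stays numeric
)
y = cases["asylum_granted"]                    # binary outcome of the appeal

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)      # estimate accuracy with 5-fold CV
print(f"mean accuracy: {scores.mean():.2f}")
```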

Keywords: asylum adjudications, automated decision-making, machine learning, text mining

Procedia PDF Downloads 96
43 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from fission products and minor actinides. During a normal extraction operation, uranium is extracted in the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture, called red oil, when it comes in contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives and heavy metal nitrate complexes on the other hand. The decomposition of the red oil can lead to violent explosive thermal runaway. These hazards are at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions that involve TBP and all other degradation products, and calls for a better understanding of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the deterioration of uranyl nitrates in organic and aqueous phases, but not yet of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the deterioration processes in the liquid phase. However, little is known about the thermodynamic properties, such as standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP) and UO₂(NO₃)₂(HDBP)₂ complexes. In this work, we propose to estimate these thermodynamic properties with quantum chemical methods (QM). Thus, in the first part of our project, we focused on the mono-, di-, and tri-butyl phosphates. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono- (H₂MBP), di- (HDBP), and tri-butyl phosphate (TBP) in the gas and liquid phases. In the gas phase, the structures of all species were optimized using the B3LYP density functional with triple-ζ def2-TZVP basis sets on all atoms, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated using the efficient localized LCCSD(T) method extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of this project, we investigated the fundamental properties of the three organic species that contribute most to the thermal runaway, UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods that were used for TBP and its derivatives in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.
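
As a reminder of how computed reaction enthalpies translate into standard formation enthalpies, the sketch below applies Hess's law to a generic reaction; all numerical values are placeholders, not results from the study.

```python
# Hess's law sketch: derive a standard formation enthalpy from a quantum-chemically
# computed reaction enthalpy plus known reference values. All numbers are placeholders.
def formation_enthalpy(dH_rxn, reactant_dHf, product_dHf_known):
    """For a reaction R1 + R2 -> P + Q with dHf(P) unknown:
    dHf(P) = sum(dHf(reactants)) + dH_rxn - sum(dHf(known products))."""
    return sum(reactant_dHf) + dH_rxn - sum(product_dHf_known)

# Hypothetical example (kJ/mol): dH_rxn at 298.15 K from electronic energies plus
# thermal corrections; reference dHf values from thermochemical tables.
dH_rxn = -120.0
reactants = [-285.8, -45.9]      # placeholder reference formation enthalpies
known_products = [-241.8]        # placeholder
print(f"dHf(target) = {formation_enthalpy(dH_rxn, reactants, known_products):.1f} kJ/mol")
```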

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

Procedia PDF Downloads 189
42 Clinical Efficacy of Localized Salvage Prostate Cancer Reirradiation with Proton Scanning Beam Therapy

Authors: Charles Shang, Salina Ramirez, Stephen Shang, Maria Estrada, Timothy R. Williams

Abstract:

Purpose: Over the past decade, proton therapy utilizing pencil beam scanning has emerged as a preferred treatment modality in radiation oncology, particularly for prostate cancer. This retrospective study aims to assess the clinical and radiobiological efficacy of proton scanning beam therapy in the treatment of localized salvage prostate cancer following initial radiation therapy with a different modality. Despite the previously delivered high radiation doses, this investigation explores the potential of proton reirradiation to control recurrent prostate cancer while limiting detrimental quality-of-life side effects. Methods and Materials: A retrospective analysis was conducted on 45 cases of locally recurrent prostate cancer that underwent salvage proton reirradiation. Patients were followed for 24.6 ± 13.1 months post-treatment. These patients had experienced an average remission of 8.5 ± 7.9 years after definitive radiotherapy for localized prostate cancer (n=41) or post-prostatectomy (n=4), followed by rising PSA levels. Recurrent disease was confirmed by FDG-PET (n=31), PSMA-PET (n=10), or positive local biopsy (n=4). The gross tumor volume (GTV) was delineated based on PET and MR imaging, with the planning target volume (PTV) expanding to an average of 10.9 cm³. Patients received proton reirradiation using two oblique coplanar beams, delivering total doses ranging from 30.06 to 60.00 GyE in 17-30 fractions. All treatments were administered using the ProBeam Compact system with CT image guidance. International Prostate Symptom Scores (IPSS) and prostate-specific antigen (PSA) levels were evaluated to assess treatment-related toxicity and tumor control. Results and Discussion: In this cohort (mean age: 76.7 ± 7.3 years), 60% (27/45) of patients showed sustained reductions in PSA levels post-treatment, while 36% (16/45) experienced a PSA decline of more than 0.8 ng/mL. Additionally, 73% (33/45) of patients exhibited an initial PSA reduction, though some showed later PSA increases, indicating the potential presence of undetected metastatic lesions. The median post-retreatment IPSS score was 4, significantly lower than scores reported in other treatment studies. Overall, 69% of patients reported mild urinary symptoms, with 96% (43/45) experiencing mild to moderate symptoms. Three patients experienced grade I or II proctitis, while one patient reported grade III proctitis. These findings suggest that regional organs, including the urethra, bladder, and rectum, demonstrate significant radiobiological recovery from prior radiation exposure, enabling tolerance of additional proton scanning beam therapy. Conclusions: This retrospective analysis of 45 patients with recurrent localized prostate cancer treated with salvage proton reirradiation demonstrates favorable outcomes, with a median follow-up of two years. The post-retreatment IPSS scores were comparable to those reported in follow-up studies of initial radiation therapy, indicating stable or improved urinary symptoms compared to the end of initial treatment. These results highlight the efficacy of proton scanning beam therapy in providing effective salvage treatment while minimizing adverse effects on critical organs. The findings also enhance the understanding of radiobiological responses to reirradiation and support proton therapy as a viable option for patients with recurrent localized prostate cancer following previous definitive radiation therapy.

Keywords: prostate salvage radiotherapy, proton therapy, biological radiation tolerance, radiobiology of organs

Procedia PDF Downloads 19