Search results for: Paul D. Thornton
92 Cultural Collisions, Ethics and HIV: On Local Values in a Globalized Medical World
Authors: Norbert W. Paul
Abstract:
In 1988, parts of the scientific community still heralded findings supporting the view that AIDS was likely to remain largely a ‘gay disease’. The value-laden terminology of some articles suggested that the rectum and fragile urethra were not sufficiently robust to provide a barrier against infectious fluids, especially body fluids contaminated with HIV, while the female vagina would provide natural protection against the injuries and trauma facilitating HIV infection. Anal intercourse was thus constituted not only as a dangerous but also as an unnatural practice, while penile-vaginal intercourse was held to follow natural design and thus to be a relatively safe practice minimizing the risk of HIV. Statements like the latter were not uncommon in the early days of HIV/AIDS and contributed to specious certainties and an underestimation of heterosexual risk. Pseudo-scientific discourses on the origin of HIV were linked to local and global health politics in the 1980s. The pathways of infection were related to normative concepts such as deviant subcultural behavior, cultural otherness, and guilt, used to target, tag, and separate specific groups at risk from the ‘normal’ population. Controlling populations at risk, rather than controlling modes of transmission and the virus itself, became the top item on the agenda. Hence, the Thai strategy of coping with HIV/AIDS by acknowledging social and sexual practices as they were – not as they were imagined – has become a role model for successful prevention in the highly scandalized realm of sexually transmitted disease. By accepting the globalized character of local HIV risk rather than projecting it onto populations that are neither particularly vocal nor vested with the means to strive for health and justice, Thailand managed to culturally implement knowledge-based tools of prevention.
This paper argues that pertinent cultural collisions regarding our strategies to cope with HIV/AIDS are deeply rooted in misconceptions, misreadings, and scandalizations brought about in the early history of HIV in the 1980s. The Thai strategy is used to demonstrate how local values can be balanced against globalized health risks and used to effectuate prevention, by which knowledge and norms are translated into local practices. Issues of global health and injustice are addressed in the final part of the paper, which deals with the achievability of health as a human right.
Keywords: bioethics, HIV, global health, justice
Procedia PDF Downloads 260
91 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics
Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic
Abstract:
Lake Victoria is the second largest fresh water body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in COMSOL Multiphysics, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, so Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. The water balance is dominated by rain and evaporation, and model simulations were cross-validated between Matlab and COMSOL. The model conserves water volume; the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The model can also evaluate the effects of wind stress exerted on the lake surface, which affects the lake water level.
The model can further evaluate the effects of expected climate change, as manifested in changes to rainfall over the catchment area of Lake Victoria in the future.
Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress
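The bathymetry-preparation step described above (scattered depth soundings, Delaunay triangulation, Gaussian smoothing for continuous gradients) can be sketched in Python with SciPy. This is an illustrative reconstruction, not the authors' COMSOL workflow: the synthetic soundings and smoothing width are assumptions.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator  # Delaunay-based under the hood
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic depth soundings (x, y in km; depth in m), standing in for the
# combined historical and recent survey data described in the abstract.
pts = rng.uniform(0, 100, size=(500, 2))
depth = 40 + 40 * np.sin(pts[:, 0] / 30) * np.cos(pts[:, 1] / 25)

# Linear interpolation on the Delaunay triangulation of the soundings,
# evaluated on a regular grid.
interp = LinearNDInterpolator(pts, depth, fill_value=np.nan)
xi = np.linspace(0, 100, 200)
X, Y = np.meshgrid(xi, xi)
Z = interp(X, Y)

# Gaussian smoothing so the topography has the continuous gradients the
# shallow-water solver requires (sigma is an assumed tuning parameter).
Z_smooth = gaussian_filter(np.nan_to_num(Z, nan=float(np.nanmean(Z))), sigma=3)
```

Piecewise-linear interpolation alone has gradient jumps across triangle edges; the smoothing pass is what removes them, which is presumably why the authors pair the two steps.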
Procedia PDF Downloads 225
90 Teaching Behaviours of Effective Secondary Mathematics Teachers: A Study in Dhaka, Bangladesh
Authors: Asadullah Sheikh, Kerry Barnett, Paul Ayres
Abstract:
Despite significant progress in access, equity and public examination success, poor student performance in mathematics in secondary schools has become a major concern in Bangladesh. A substantial body of research has emphasised the important contribution of teaching practices to student achievement. However, this has not been investigated in Bangladesh. Therefore, the study sought to examine the effectiveness of mathematics teaching practices as a means of improving secondary school mathematics in the Dhaka Metropolitan City (DMC) area, Bangladesh. The purpose of this study was twofold: first, to identify the 20 highest performing secondary schools in mathematics in DMC, and second, to investigate the teaching practices of mathematics teachers in these schools. A two-phase mixed method approach was adopted. In the first phase, secondary source data were obtained from the Board of Intermediate and Secondary Education (BISE), Dhaka, and value-added measures were used to identify the 20 highest performing secondary schools in mathematics. In the second phase, a concurrent mixed method design, where qualitative methods were embedded within a dominant quantitative approach, was utilised. A purposive sampling strategy was used to select fifteen teachers from the 20 highest performing secondary schools. The main sources of data were classroom teaching observations and teacher interviews. The data from teacher observations were analysed with descriptive and nonparametric statistics. The interview data were analysed qualitatively. The main findings showed that teachers adopt a direct teaching approach incorporating orientation, structuring, modelling, practice, questioning, and teacher-student interaction that creates an individualistic learning environment. The variation in developmental levels of teaching skill indicates that teachers do not necessarily use the qualitative (i.e., focus, stage, quality and differentiation) aspects of teaching behaviours effectively.
This is the first study to investigate the teaching behaviours of effective secondary mathematics teachers within Dhaka, Bangladesh. It contributes an international dimension to the field of educational effectiveness and raises questions about existing constructivist approaches. Further, it offers important insights about teaching behaviours that can be used to inform the development of evidence-based policy and practice on quality teaching in Bangladesh.
Keywords: effective teaching, mathematics, secondary schools, student achievement, value-added measures
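The value-added school selection in phase one can be illustrated with a minimal sketch: value added is typically the residual from regressing current attainment on prior attainment, so schools doing better than their intake predicts rank highest. The data and the simple one-predictor regression here are assumptions for illustration, not the study's actual BISE variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools = 60

# Hypothetical school-level data: mean prior attainment and mean current
# mathematics score per school.
prior = rng.normal(50, 10, n_schools)
score = 0.8 * prior + rng.normal(0, 5, n_schools)

# Value added = residual from regressing current score on prior attainment.
X = np.column_stack([np.ones(n_schools), prior])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
value_added = score - X @ beta

# Select the 20 highest value-added schools, analogous to the study's
# identification of the 20 highest performing schools.
top20 = np.argsort(value_added)[::-1][:20]
```

Real value-added models usually condition on several covariates and use multilevel estimation; the residual-ranking idea is the same.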
Procedia PDF Downloads 238
89 Cancer Burden and Policy Needs in the Democratic Republic of the Congo: A Descriptive Study
Authors: Jean Paul Muambangu Milambo, Peter Nyasulu, John Akudugu, Leonidas Ndayisaba, Joyce Tsoka-Gwegweni, Lebwaze Massamba Bienvenu, Mitshindo Mwambangu Chiro
Abstract:
In 2018, non-communicable diseases (NCDs) were responsible for 48% of deaths in the Democratic Republic of Congo (DRC), with cancer contributing to 5% of these deaths. There is a notable absence of cancer registries, capacity-building activities, budgets, and treatment roadmaps in the DRC. Current cancer estimates are primarily based on mathematical modeling with limited data from neighboring countries. This study aimed to assess cancer subtype prevalence in Kinshasa hospitals and compare these findings with WHO model estimates. Methods: A retrospective observational study was conducted from 2018 to 2020 at HJ Hospitals in Kinshasa. Data were collected using American Cancer Society (ACS) questionnaires and physician logs. Descriptive analysis was performed using STATA version 16 to estimate cancer burden and provide evidence-based recommendations. Results: The results from the chart review at HJ Hospitals in Kinshasa (2018-2020) indicate that out of 6,852 samples, approximately 11.16% were diagnosed with cancer. The distribution of cancer subtypes in this cohort was as follows: breast cancer (33.6%), prostate cancer (21.8%), colorectal cancer (9.6%), lymphoma (4.6%), and cervical cancer (4.4%). These figures are based on histopathological confirmation at the facility and may not fully represent the broader population due to potential selection biases related to geographic and financial accessibility to the hospital. In contrast, the World Health Organization (WHO) model estimates for cancer prevalence in the DRC show different proportions. According to WHO data, the distribution of cancer types is as follows: cervical cancer (15.9%), prostate cancer (15.3%), breast cancer (14.9%), liver cancer (6.8%), colorectal cancer (5.9%), and other cancers (41.2%) (WHO, 2020). Conclusion: The data indicate a rising cancer prevalence in DRC but highlight significant gaps in clinical, biomedical, and genetic cancer data. 
The establishment of a population-based cancer registry (PBCR) and a defined cancer management pathway is crucial. The current estimates are limited due to data scarcity and inconsistencies in clinical practices. There is an urgent need for multidisciplinary cancer management, integration of palliative care, and improvement in care quality based on evidence-based measures.
Keywords: cancer, risk factors, DRC, gene-environment interactions, survivors
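The headline figure above (roughly 11.16% of 6,852 samples cancer-positive) is a simple proportion; a sketch of how such a prevalence estimate and its uncertainty might be computed is below. The case count is back-calculated from the reported percentage, which is an assumption, as the abstract does not state it directly.

```python
import math

# Figures from the abstract: 6,852 samples, ~11.16% diagnosed with cancer.
n = 6852
cases = round(0.1116 * n)  # ~765 histopathologically confirmed cancers (inferred)

p = cases / n

# 95% Wilson score interval for the prevalence estimate; more reliable
# than the plain normal approximation for proportions.
z = 1.96
denom = 1 + z**2 / n
centre = (p + z**2 / (2 * n)) / denom
half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
ci = (centre - half, centre + half)
```

Even with a fairly large sample, the interval spans roughly a percentage point either side of the estimate, which is worth bearing in mind when comparing the hospital cohort against the WHO model figures.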
Procedia PDF Downloads 18
88 Healthcare Learning From Near Misses in Aviation Safety
Authors: Nick Woodier, Paul Sampson, Iain Moppett
Abstract:
Background: For years, healthcare across the world has recognised that patients are coming to harm from the very processes meant to help them. In response, healthcare tells itself that it needs to ‘be more like aviation.’ Aviation safety is highly regarded by those in healthcare and is seen as an exemplar. Specifically, healthcare is keen to learn from how aviation uses near misses to make its industry safer. Healthcare is rife with near misses; however, there has been little progress addressing them, with most research having focused on reporting. Addressing the factors that contribute to near misses will potentially help reduce the number of significant, harmful patient safety incidents. While the healthcare literature states the need to learn from aviation’s use of near misses, there is nothing that describes how best to do this. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from aviation to develop principles for how healthcare can identify, report, and learn from near misses to improve patient safety. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, field notes, and the literature. The review protocol is accessible online. The GT aimed to develop theories about how aviation, amongst other safety-critical industries, manages near misses. Results: Twelve aviation interviews contributed to the GT across passenger airlines, air traffic control, and bodies involved in policy, regulation, and investigation. The scoping review identified 83 articles across a range of safety-critical industries, but only seven focused on aviation. The GT identified that aviation interprets the term ‘near miss’ in different ways, commonly using it to refer specifically to near-miss air collisions, also known as Airproxes.
Other types of near misses exist, such as health and safety, but the reporting of these and the safety climate associated with them is not as mature. Safety culture in aviation was regularly discussed, with evidence that culture varies depending on which part of the industry is being considered (e.g., civil vs. business aviation). Near misses are seen as just one part of an extensive safety management system, but processes to support their reporting and their analysis are not consistent. Their value alone is also questionable, with the challenge to long-held beliefs originating from the ‘common cause hypothesis.’ Conclusions: There is learning that healthcare can take from how parts of aviation manage and learn from near misses. For example, healthcare would benefit from a formal safety management system that currently does not exist. However, it may not be as simple as ‘healthcare should learn from aviation’ due to variation in safety maturity across the industry. Healthcare needs to clarify how to incorporate near misses into learning and whether allocating resources to them is of value; it was heard that catastrophes have led to greater improvements in safety in aviation.
Keywords: aviation safety, patient safety, near miss, safety management systems
Procedia PDF Downloads 147
87 Feasibility of Online Health Coaching for Canadian Armed Forces Personnel Receiving Treatment for Depression, Anxiety and PTSD
Authors: Noah Wayne, Andrea Tuka, Adrian Norbash, Bryan Garber, Paul Ritvo
Abstract:
Program/Intervention Description: The Canadian Armed Forces (CAF) Mental Health Clinics treat a full spectrum of mental disorders, addictions, and psychosocial issues that include Major Depressive Disorder, Post-Traumatic Stress Disorder, Generalized Anxiety Disorder, and other diagnoses. We evaluated the feasibility of an online health coach intervention delivering mindfulness-based cognitive behavioral therapy (M-CBT) and behaviour change support for individuals receiving treatment at CAF clinics. Participants were provided accounts on NexJ Connected Wellness, a digital health platform, and 16 weeks of phone-based health coaching emphasizing mild to moderate aerobic exercise, a healthy diet, and M-CBT content. The primary objective was to assess the feasibility of online delivery with CAF members. Evaluation Methods: Feasibility was evaluated in terms of recruitment, engagement, and program satisfaction. We additionally evaluated health behavior change, program completion, and mental health symptoms (i.e., PHQ-9, GAD-7, PCL-5) at three time points. Results: Service members were referred from Vancouver, Esquimalt, and Edmonton CAF bases between August 2020 and January 2021. N=106 CAF personnel were referred, and n=77 consented. N=66 participated, and n=44 completed 4-month and follow-up measures. The platform received a mean rating of 76.5 on the System Usability Scale, and health coaching was judged the most helpful program feature (95.2% endorsement), while reminders (53.7%), secure messaging (51.2%), and notifications (51.2%) were also identified. Improvements in mental health status during active intervention were observed on the PHQ-9 (-5.4, p<0.001), GAD-7 (-4.0, p<0.001), and PCL-5 (-4.1, p<0.05). Conclusion: Online health coaching was well received amidst the COVID-19 pandemic and related lockdowns. Uptake and engagement were positively reported. Participants valued the contacts and reported strong therapeutic alliances with coaches.
A healthy diet, regular exercise, and mindfulness practice are important for physical and mental health, and engagement in these behaviors is associated with reduced symptoms. An online health coach program appears feasible for assisting Canadian Armed Forces personnel.
Keywords: coaching, CBT, military, depression, mental health, digital
Procedia PDF Downloads 159
86 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity
Authors: Justus Enninga
Abstract:
Enthusiasts and skeptics about economic growth have little in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, referred to here as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce, and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth and what the arguments for and against growth-enhancing or degrowth policies are, both for each camp and for the other side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance like markets, the rule of law, and federalism are an important part of economic growth.
Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that de-grows, would still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced here. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and an institutionalized discovery process for new solutions to the problem of ecological collective action – no matter whether one belongs to the Simons or the Ehrlichs in a green political economy.
Keywords: degrowth, green political theory, polycentricity, institutional robustness
Procedia PDF Downloads 182
85 Reaching New Levels: Using Systems Thinking to Analyse a Major Incident Investigation
Authors: Matthew J. I. Woolley, Gemma J. M. Read, Paul M. Salmon, Natassia Goode
Abstract:
The significance of high-consequence workplace failures within construction continues to resonate, with a combined average of 12 fatal incidents occurring daily throughout Australia, the United Kingdom, and the United States. Within the Australian construction domain, more than 35 serious, compensable injury incidents are reported daily. These alarming figures, in conjunction with the continued occurrence of fatal and serious occupational injury incidents globally, suggest existing approaches to incident analysis may not be achieving the required injury prevention outcomes. One reason may be that incident analysis methods used in construction have not kept pace with advances in the field of safety science and are not uncovering the full range of system-wide contributory factors required to achieve optimal levels of construction safety performance. Another reason underpinning this global issue may be the absence of information surrounding the construction operating and project delivery system. For example, it is not clear who shares the responsibility for construction safety in different contexts. To respond to this issue, a first-of-its-kind (to the authors' best knowledge) control structure model of the construction industry is presented and then used to analyse a fatal construction incident. The model was developed by applying and extending the Systems-Theoretic Accident Model and Processes (STAMP) method to hierarchically represent the actors, constraints, feedback mechanisms, and relationships involved in managing construction safety performance. The Causal Analysis based on Systems Theory (CAST) method was then used to identify the control and feedback failures involved in the fatal incident. The conclusions from the Coronial investigation into the event are compared with the findings stemming from the CAST analysis.
The CAST analysis highlighted additional issues across the construction system that were not identified in the coroner's recommendations, suggesting there is a potential benefit in applying a systems theory approach to incident analysis in construction. The findings demonstrate the utility of applying systems theory-based methods to the analysis of construction incidents. Specifically, this study shows the utility of the construction control structure and the potential benefits for project leaders, construction entities, regulators, and construction clients in controlling construction performance.
Keywords: construction project management, construction performance, incident analysis, systems thinking
Procedia PDF Downloads 127
84 The Impact of Environmental Management System ISO 14001 Adoption on Firm Performance
Authors: Raymond Treacy, Paul Humphreys, Ronan McIvor, Trevor Cadden, Alan McKittrick
Abstract:
This study employed event study methodology to examine the role of institutions, resources, and dynamic capabilities in the relationship between Environmental Management System ISO 14001 adoption and firm performance. Utilising financial data from 140 ISO 14001 certified firms and 320 non-certified firms, the results suggested that UK and Irish manufacturers were not implementing ISO 14001 solely to gain legitimacy. In contrast, the results demonstrated that firms were fully integrating the ISO 14001 standard within their operations, as certified firms were able to improve both financial and operating performance when compared to non-certified firms. However, while there were significant and long-lasting improvements in employee productivity, manufacturing cost efficiency, return on assets, and sales turnover, the sample firms' operating cycle and fixed asset efficiency displayed evidence of diminishing returns in the long run, underlining the observation that no operating advantage based on incremental improvements can be everlasting. Hence, there is an argument for investing in dynamic capabilities which help renew and refresh the resource base and help the firm adapt to changing environments. Indeed, the results of the regression analysis suggest that dynamic capabilities for innovation acted as a moderator in the relationship between ISO 14001 certification and firm performance. This, in turn, will have a significant and symbiotic influence on sustainability practices within the participating organisations. The study not only provides new and original insights, but demonstrates pragmatically how firms can take advantage of environmental management systems as a moderator to significantly enhance firm performance. However, while it was shown that firm innovation aided both short-term and long-term ROA performance, adaptive market capabilities only aided firms in the short term at the marketing strategy deployment stage.
Finally, the results have important implications for firms operating in an economic recession, as they suggest that firms should scale back investment in R&D while operating in an economic downturn. Conversely, under normal trading conditions, consistent and long-term investment in R&D was found to moderate the relationship between ISO 14001 certification and firm performance. Hence, the results of the study have important implications for academics and management alike.
Keywords: supply chain management, environmental management systems, quality management, sustainability, firm performance
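The certified-versus-benchmark comparison at the core of an event study can be sketched as follows. All numbers here are synthetic, and the benchmark (median change of the non-certified group) is a common simplification, not necessarily the matching procedure the authors used.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ROA changes over an event window (certification year to +2):
# 140 certified firms and 320 non-certified firms, as in the study's sample.
roa_change_cert = rng.normal(0.8, 1.0, 140)
roa_change_ctrl = rng.normal(0.0, 1.0, 320)

# Abnormal performance = certified firm's change minus the median change
# of the non-certified benchmark group.
abnormal = roa_change_cert - np.median(roa_change_ctrl)

# Sign-test-style summary: share of certified firms beating the benchmark;
# values well above 0.5 indicate certification-associated improvement.
share_positive = (abnormal > 0).mean()
```

Published event studies typically match on industry and size and test medians with nonparametric statistics; the abnormal-performance construction above is the common core.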
Procedia PDF Downloads 307
83 Effects of Nutrients Supply on Milk Yield, Composition and Enteric Methane Gas Emissions from Smallholder Dairy Farms in Rwanda
Authors: Jean De Dieu Ayabagabo, Paul A. Onjoro, Karubiu P. Migwi, Marie C. Dusingize
Abstract:
This study investigated the effects of feed on milk yield and quality through feed monitoring and quality assessment, and the consequent enteric methane emissions from smallholder dairy farms in the drier areas of Rwanda, using the Tier II approach over four seasons in three zones, namely Mayaga and peripheral Bugesera (MPB), Eastern Savanna and Central Bugesera (ESCB), and Eastern Plateau (EP). The study was carried out on 186 dairy cows with a mean live weight of 292 kg in three communal cowsheds. The milk quality analysis was carried out on 418 samples. Methane emission was estimated using prediction equations. Data collected were subjected to ANOVA. Dry matter intake was lower (p<0.05) in the long dry season (7.24 kg), with the ESCB zone having the highest value of 9.10 kg, explained by the practice of crop-livestock integration in that zone. Dry matter digestibility varied between seasons and zones, ranging from 52.5 to 56.4% across seasons and from 51.9 to 57.5% across zones. The daily protein supply was higher (p<0.05) in the long rain season, at 969 g. The mean daily milk production of lactating cows was 5.6 L, with a lower value (p<0.05) during the long dry season (4.76 L) and the MPB zone having the lowest value of 4.65 L. The yearly milk production per cow was 1,179 L. Milk fat varied from 3.79 to 5.49% with seasonal and zone variation; no variation was observed in milk protein. The seasonal daily methane emission varied from 150 g in the long dry season to 174 g in the long rain season (p<0.05); the rain season had the highest methane emission as it is associated with high forage intake. The mean emission factor was 59.4 kg of methane per year. The present EFs were higher than the default IPCC Tier I value of 41 kg per year for livestock in developing countries of Africa, the Middle East, and other tropical regions, due to the higher live weight of cows in the current study.
The methane emission per unit of milk production was lower in the EP zone (46.8 g/L) due to the feed efficiency observed in that zone. Farmers should use high-quality feeds to increase milk yield and reduce the methane produced per unit of milk. For an accurate assessment of the methane produced from dairy farms, there is a need for a Life Cycle Assessment approach that considers all sources of emissions.
Keywords: footprint, forage, girinka, tier
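The Tier II emission factor the study computes follows the standard IPCC (2006) formulation, which converts gross energy intake and a methane conversion factor into an annual per-head emission. The input values below are illustrative assumptions chosen to land near the abstract's reported 59.4 kg/year; they are not the study's measured intakes.

```python
# IPCC (2006) Tier 2 enteric methane emission factor (Eq. 10.21):
#   EF (kg CH4/head/yr) = GE * (Ym / 100) * 365 / 55.65
# where GE is gross energy intake (MJ/day), Ym the methane conversion
# factor (% of GE lost as methane), and 55.65 MJ/kg the energy content
# of methane.

def enteric_ef(ge_mj_per_day: float, ym_percent: float) -> float:
    """Annual enteric methane emission factor in kg CH4 per head."""
    return ge_mj_per_day * (ym_percent / 100.0) * 365.0 / 55.65

# Illustrative inputs (assumed, not from the study): GE ~139 MJ/day and
# Ym = 6.5% give roughly the ~59 kg CH4/year reported in the abstract.
ef = enteric_ef(139.0, 6.5)
```

The formula makes the abstract's explanation concrete: a higher live weight raises GE, which raises the EF linearly, which is why the study's EFs exceed the Tier I default of 41 kg/year.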
Procedia PDF Downloads 203
82 Music Listening in Dementia: Current Developments and the Potential for Automated Systems in the Home: Scoping Review and Discussion
Authors: Alexander Street, Nina Wollersberger, Paul Fernie, Leonardo Muller, Ming Hung HSU, Helen Odell-Miller, Jorg Fachner, Patrizia Di Campli San Vito, Stephen Brewster, Hari Shaji, Satvik Venkatesh, Paolo Itaborai, Nicolas Farina, Alexis Kirke, Sube Banerjee, Eduardo Reck Miranda
Abstract:
Escalating neuropsychiatric symptoms (NPS) in people with dementia may lead to earlier care home admission. Music listening has been reported to stimulate cognitive function, potentially reducing agitation in this population. We present a scoping review reporting on current developments and discussing the potential for music listening with related technology in managing agitation in dementia care. Two searches were run. The first focused on older people or people living with dementia, where music listening interventions, including technology, were delivered in participants' homes or in institutions to address neuropsychiatric symptoms, quality of life, and independence. The second included any population, focusing on the use of music technology for health and wellbeing. In the first search, 70 of 251 full texts were included; the majority reported either statistical significance (6, 8.5%), significance (17, 24.2%), or improvements (26, 37.1%), and agitation was specifically reported in 36 (51.4%). The second search included 51 of 99 full texts, reporting improvement (28, 54.9%), significance (11, 21.5%), statistical significance (1, 1.9%), and no difference compared to the control (6, 11.7%). The majority of studies in the first search focused on mood and agitation, and in the second on mood and psychophysiological responses. Five studies used AI or machine learning systems to select music, all involving healthy controls and reporting benefits. Most studies in both searches were not conducted in a home environment (review 1: 12, 17.1%; review 2: 11, 21.5%). Preferred music listening may help manage NPS in care home settings. Based on these and other data extracted in the review, a reasonable progression would be to co-design and test music listening systems and protocols for NPS in all settings, including people's homes. Machine learning and automated technology for music selection and arousal adjustment, driven by live biodata, have not been explored in dementia care.
Such approaches may help deliver the right music at the appropriate time in the required dosage, reducing the use of medication and improving quality of life.
Keywords: music listening, dementia, agitation, scoping review, technology
Procedia PDF Downloads 112
81 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and exhibits high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale GreenLab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made in the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used.
Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data for a single individual to be available at all times, nor for the times at which data are available to be the same for all individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
Keywords: Bayesian, genotypic differentiation, hierarchical models, plant growth models
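The hybrid Gibbs-Metropolis scheme described above can be sketched on a toy hierarchical model: individual parameters get a random-walk Metropolis update (as they would for a nonlinear growth model whose full conditional has no closed form), while the population mean gets a conjugate Gibbs update. This is a minimal sketch in Python rather than the authors' Julia platform, with a linear toy likelihood and known variances as simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy hierarchy standing in for the plant-growth setting:
# observations y_ij ~ N(theta_i, sigma^2), individuals theta_i ~ N(mu, tau^2).
n_ind, n_obs = 10, 20
true_theta = rng.normal(2.0, 0.5, n_ind)
y = true_theta[:, None] + rng.normal(0, 0.3, (n_ind, n_obs))

sigma, tau = 0.3, 0.5        # noise and population scales, assumed known here
theta = np.zeros(n_ind)      # individual parameters
mu = 0.0                     # population mean
draws = []

def log_post_theta(th, yi, mu):
    # log-likelihood of the individual's data plus the population prior
    return (-0.5 * np.sum((yi - th) ** 2) / sigma**2
            - 0.5 * (th - mu) ** 2 / tau**2)

for it in range(2000):
    # Metropolis step for each individual parameter (needed when the
    # growth model is nonlinear and no conjugate update exists).
    for i in range(n_ind):
        prop = theta[i] + rng.normal(0, 0.1)
        if np.log(rng.uniform()) < (log_post_theta(prop, y[i], mu)
                                    - log_post_theta(theta[i], y[i], mu)):
            theta[i] = prop
    # Gibbs step for mu: conjugate normal full conditional (flat prior).
    mu = rng.normal(theta.mean(), tau / np.sqrt(n_ind))
    if it >= 1000:           # discard burn-in
        draws.append(mu)

mu_hat = float(np.mean(draws))
```

In the paper's setting the Metropolis target would be the nonlinear GreenLab likelihood with multiplicative observation noise, but the alternation of Metropolis and Gibbs steps is the same.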
Procedia PDF Downloads 302
80 Regional Variations in Spouse Selection Patterns of Women in India
Authors: Nivedita Paul
Abstract:
Marriages in India are part and parcel of kinship and cultural practices. Marriage practices differ across India because of cross-regional diversity in social relations, which has itself evolved out of the causal relationship between space and culture. As place is important for the formation of culture and other social structures, there is regional differentiation in cultural practices and marital customs. Based on these cultural practices, some scholars have divided India into North and South kinship regions, where women in the North marry early and have less autonomy compared to women in the South, where marriages are mostly consanguineous. But with the emergence of new modes and alternative strategies, such as the growing popularity of matrimonial advertisements, as well as increases in women's literacy and workforce participation, the matchmaking process in India has changed to some extent. The present study uses data from the Indian Human Development Survey II (2011-12), a nationally representative multi-topic survey covering 41,554 households. Currently married women aged 15-49 in their first marriage, married between the 1970s and the 2000s, have been taken for the study. Based on spouse selection experiences, the sample of women has been divided into three marriage categories: self-arranged, semi-arranged, and family-arranged. In a self-arranged or love marriage, the woman is the sole decision maker in choosing the partner; in a semi-arranged marriage, or arranged marriage with consent, the parents and the woman take the decision together; whereas in a family-arranged marriage, or arranged marriage without consent, only the parents take the decision. The main aim of the study is to show the spatial and regional variations in spouse selection decision making. The basis for regionalization has been taken from Irawati Karve's pioneering work on kinship studies in India, Kinship Organization in India. India is divided into four kinship regions: North, Central, South and East. 
Since this work was formulated in 1953, and some states have since experienced changes due to modernization, these have been regrouped. After mapping spouse selection patterns using GIS software, it is found that the northern region has mostly family-arranged marriages (around 64.6%); the central zone shows a mixed pattern, with fewer family-arranged marriages than the North but more than the South, and more semi-arranged marriages than the North but fewer than the South. The southern zone is dominated by semi-arranged marriages (around 55%), whereas the eastern zone also has mostly semi-arranged marriages (around 53%) but with a high percentage of self-arranged marriages (around 42%). Thus, arranged marriage is the dominant form of marriage in all four regions, but with differences in the degree of involvement of the woman and her parents and relatives.
Keywords: spouse selection, consent, kinship, regional pattern
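The regional percentages above come from cross-tabulating the three marriage categories within each kinship region. A minimal sketch of that tabulation, with an invented mini-sample rather than the IHDS-II data:

```python
from collections import Counter

# Hypothetical mini-sample of survey records: (kinship region, marriage type).
records = [
    ("North", "family"), ("North", "family"), ("North", "semi"),
    ("Central", "family"), ("Central", "semi"), ("Central", "self"),
    ("South", "semi"), ("South", "semi"), ("South", "family"),
    ("East", "semi"), ("East", "self"), ("East", "self"),
]

def shares_by_region(rows):
    # Percentage of each marriage category within each region.
    out = {}
    for region in {r for r, _ in rows}:
        counts = Counter(t for r, t in rows if r == region)
        total = sum(counts.values())
        out[region] = {t: round(100 * c / total, 1) for t, c in counts.items()}
    return out

print(shares_by_region(records)["North"])  # {'family': 66.7, 'semi': 33.3}
```

The same per-region shares, computed over the full sample, are what get joined to the region polygons for choropleth mapping in GIS.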
Procedia PDF Downloads 167
79 Factors Influencing the Integration of Comprehensive Sexuality Education into Educational Systems in Low- and Middle-Income Countries: A Systematic Review
Authors: Malizgani Paul Chavula
Abstract:
Background: Comprehensive sexuality education (CSE) plays a critical role in promoting youth and adolescents' sexual and reproductive health and well-being. However, little is known about the enablers and barriers affecting the integration of CSE into educational programmes. The aim of this review is to explore the positive and negative factors influencing the integration of CSE into national curricula and educational systems in low- and middle-income countries. Methods: We conducted a systematic literature review (January 2010 to August 2022), reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards. Data were retrieved from the PubMed, Cochrane, Google Scholar, and Hinari databases. The search yielded 431 publications, of which 23 met the inclusion criteria for full-text screening. The review is guided by an established conceptual framework for the integration of health innovations into health systems. Data were analyzed using a thematic synthesis approach. Results: The magnitude of the problem is evidenced by sexual and reproductive health challenges such as high rates of teenage pregnancy, early marriage, and sexually transmitted infections. Awareness of these challenges can facilitate the development of interventions and the implementation and integration of CSE. Reported aspects of the interventions include core CSE content, delivery methods, training materials and resources, and various teacher-training factors. Reasons for adoption include perceived benefits of CSE, experiences and characteristics of both teachers and learners, and religious, social, and cultural factors. Broad system characteristics include strengthening links between schools and health facilities, school- and community-based collaboration, coordination of CSE implementation, and the monitoring and evaluation of CSE. 
Ultimately, the availability of resources, national policies and laws, international agendas, and political commitment will affect the extent and level of integration. Conclusion: Social, economic, cultural, political, legal, and financial contextual factors influence the implementation and integration of CSE into national curricula and educational systems. Stakeholder collaboration and involvement in the design and appropriateness of interventions is critical.
Keywords: comprehensive sexuality education, factors, integration, sexual reproductive health rights
Procedia PDF Downloads 73
78 Volunteered Geographic Information Coupled with Wildfire Progression Maps: A Spatial and Temporal Tool for Incident Storytelling
Authors: Cassandra Hansen, Paul Doherty, Chris Ferner, German Whitley, Holly Torpey
Abstract:
Wildfire is a natural and inevitable occurrence, yet changing climatic conditions have increased the severity and frequency of wildfires and the risk to human populations in the wildland/urban interface (WUI) of the Western United States. Rapid dissemination of accurate wildfire information is critical to both the Incident Management Team (IMT) and the affected community. With the advent of increasingly sophisticated information systems, GIS can now be used as a web platform for sharing geographic information in new and innovative ways, such as virtual story map applications. Crowdsourced information can be extraordinarily useful when coupled with authoritative information. Information abounds in the form of social media, emergency alerts, radio, and news outlets, yet many of these resources lack a spatial component when first distributed. In this study, we describe how twenty-eight volunteer GIS professionals across nine Geographic Area Coordination Centers (GACC) sourced, curated, and distributed Volunteered Geographic Information (VGI) from authoritative social media accounts focused on disseminating information about wildfires and public safety. The combination of fire progression maps with VGI incident information helps answer three critical questions about an incident: where the fire started, how and why it behaved in an extreme manner, and how we can learn from the incident's story to respond to and prepare for future fires in the area. By adding a spatial component to that shared information, this team has been able to visualize shared information about wildfire starts in an interactive map that answers these three questions in a more intuitive way. Additionally, long-term social and technical impacts on communities are examined in relation to situational awareness of the disaster through map layers and agency links, the number of views in a particular region of a disaster, community involvement, and sharing of this critical resource. 
Combined with a GIS platform and disaster VGI applications, this workflow and information become invaluable to communities within the WUI and bring spatial awareness for disaster preparedness, response, mitigation, and recovery. This study highlights progression maps as the ultimate storytelling mechanism through incident case studies and demonstrates how VGI and sophisticated applied cartographic methodology make this an indispensable resource for authoritative information sharing.
Keywords: storytelling, wildfire progression maps, volunteered geographic information, spatial and temporal
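The value added by attaching coordinates to crowdsourced reports can be illustrated with a toy spatial tag. The sketch below is illustrative Python, not the project's GIS workflow; the perimeter and report coordinates are invented. It classifies each VGI report as inside or outside a fire progression perimeter using a standard ray-casting point-in-polygon test.

```python
# Ray-casting point-in-polygon test: a point is inside if a horizontal ray
# from it crosses the polygon boundary an odd number of times.
def inside(pt, poly):
    x, y = pt
    hit = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

perimeter = [(0, 0), (4, 0), (4, 3), (0, 3)]  # hypothetical fire polygon
reports = {"road closure": (1, 1), "shelter open": (6, 2)}  # invented VGI points
tagged = {name: inside(pt, perimeter) for name, pt in reports.items()}
print(tagged)  # {'road closure': True, 'shelter open': False}
```

In a real workflow the perimeter would be the time-stamped fire progression polygon, so the same test run against successive perimeters also gives the temporal dimension of the incident's story.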
Procedia PDF Downloads 174
77 Destroying the Body for the Salvation of the Soul: A Modern Theological Approach
Authors: Angelos Mavropoulos
Abstract:
Apostle Paul repeatedly mentioned the bodily sufferings that he voluntarily went through for Christ, as his body was in chains for the 'mystery of Christ' (Col 4:3), while on his flesh he gladly carried the 'thorn' and all his pains and weaknesses, which prevented him from being proud (2 Cor 12:7). In his view, God's power 'is made perfect in weakness', and when we are physically weak, this is when we are spiritually strong (2 Cor 12:9-10). In addition, we all bear the death of Jesus in our bodies so that His life can be 'revealed in our mortal body' (2 Cor 4:10-11), and if we indeed share in His sufferings, we will share in His glory as well (Rom 8:17). Based on these passages, several Christian writers projected bodily suffering, pain, death, and martyrdom in general as the means to a noble Christian life and the way to attain God. Moreover, Christian tradition is full of instances of voluntary self-harm, mortification of the flesh, and body mutilation for the sake of the soul by pious men and women, as an imitation of Christ's earthly suffering. For Christianity, therefore, he or she who not only endures but even inflicts earthly pains for God is highly appreciated and will be rewarded in the afterlife. Nevertheless, more recently, Gaudium et Spes and Veritatis Splendor decisively and totally overturned the Catholic Church's view on the matter. 
The former characterised the practices that violate 'the integrity of the human person, such as mutilation, torments inflicted on body or mind' as 'infamies' (Gaudium et Spes, 27), while the latter, after confirming that there are some human acts that are 'intrinsically evil', that is, always wrong regardless of 'the ulterior intentions of the one acting and the circumstances', included in this category, among others, 'whatever violates the integrity of the human person, such as mutilation, physical and mental torture and attempts to coerce the spirit.' 'All these and the like', the encyclical concludes, 'are a disgrace… and are a negation of the honour due to the Creator' (Veritatis Splendor, 80). For the Catholic Church, therefore, wilful bodily suffering and mutilation infringe human integrity and are intrinsically evil acts, while intentional harm, on the principle that 'evil may not be done for the sake of good', is always unreasonable. On the other hand, many saints who engaged in these practices are still honoured for their ascetic and noble lives, while even today similar practices are found, such as the well-known Good Friday self-flagellation and nailing to the cross performed in San Fernando, Philippines. So, the question this paper will attempt to answer is how modern theology views these practices, and whether Christians should hurt their body for the salvation of their soul.
Keywords: human body, human soul, torture, pain, salvation
Procedia PDF Downloads 91
76 NHS Tayside Plastic Surgery Induction Cheat Sheet and Video
Authors: Paul Holmes, Mike N. G.
Abstract:
Foundation-year doctors face increased stress, pressure, and uncertainty when starting new rotations during their first years of work. This quality improvement project used a research questionnaire to produce an induction cheat sheet and induction video that enhanced junior doctors' understanding of how to work effectively within the plastic surgery department at NHS Tayside. The objective was to improve the transition between cohorts of junior doctors in ward 26 at Ninewells Hospital. Before this project, the induction pack was 74 pages long and over eight years old. With the support of consultant Mike Ng, a new, up-to-date induction was created, for which a questionnaire and cheat sheet were developed. The questionnaire covered clerking, venipuncture, ward pharmacy, theatres, admissions, specialties on the ward, the cardiac arrest trolley, clinical emergencies, discharges, and escalation. This audit completed three cycles between August 2022 and August 2023. The cheat sheet is a concise two-page A4 document designed for doctors to reference easily and understand the essentials. It is formatted as a table containing ward layout; specialty; location; physician associate; shift patterns; ward rounds; handover location and time; hours coverage; senior escalation; nights; daytime duties; meetings/MDTs/board meetings; important bleeps and codes; department guidelines; boarders; referrals and patient stream; pharmacy; absences; rota coordinator; annual leave; and top tips. The induction video is a 10-minute in-depth explanation of all aspects of the ward, exploring the contents of the cheat sheet in more depth; this alternative visual format familiarizes the junior doctor with all aspects of the ward. These were provided to all foundation year 1 and 2 doctors on ward 26 at Ninewells Hospital, NHS Tayside, Scotland. 
This work has since been adopted by the General Surgery Department, which extends to six further wards, and has improved the effective handing over of the junior doctor's role between cohorts. There is potential to expand the cheat sheet to other departments, as the concise document takes around 30 minutes to complete by a doctor currently on that ward. The time spent filling out the form provides vital information to incoming junior doctors, with significant potential to improve patient care.
Keywords: induction, junior doctor, handover, plastic surgery
Procedia PDF Downloads 84
75 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
In recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. 
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which yields an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. It is therefore concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulties of explaining the models for regulatory purposes.
Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
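The WoE logic above can be sketched as follows. This is an illustrative Python reading of the approach, not Dun and Bradstreet's implementation; the bin counts, model scores, and base odds are invented.

```python
import math

# Classic WoE per bin, computed from observed good/bad counts.
def woe(goods, bads):
    tg, tb = sum(goods), sum(bads)
    return [round(math.log((g / tg) / (b / tb)), 3) for g, b in zip(goods, bads)]

print(woe([400, 300, 100], [20, 60, 120]))  # [1.609, 0.223, -1.569]

# Hybrid alternative, as described in the abstract: instead of counting
# goods and bads directly, estimate each bin's WoE from the mean ML-predicted
# bad probability -- usable even when a bin contains few or no observed bads.
def woe_from_scores(bin_scores, base_odds):
    out = []
    for scores in bin_scores:
        p_bad = sum(scores) / len(scores)
        odds = (1 - p_bad) / p_bad  # good:bad odds implied by the ML model
        out.append(round(math.log(odds / base_odds), 3))
    return out

print(woe_from_scores([[0.02, 0.04], [0.10, 0.14], [0.30, 0.50]], base_odds=4.0))
```

The second function is the "match the ML score distribution" idea in miniature: the model's predicted odds per bin stand in for the empirical good/bad ratio, which is what lets a scorecard be built on sparse bins.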
Procedia PDF Downloads 132
74 [Keynote Talk]: Monitoring of Ultrafine Particle Number and Size Distribution at One Urban Background Site in Leicester
Authors: Sarkawt M. Hama, Paul S. Monks, Rebecca L. Cordell
Abstract:
Within the Joaquin project, ultrafine particles (UFP) are continuously measured at one urban background site in Leicester. The main aims are to examine the temporal and seasonal variations in UFP number concentration and size distribution in an urban environment, and to try to assess the added value of continuous UFP measurements. In addition, relations of UFP with more commonly monitored pollutants such as black carbon (BC), nitrogen oxides (NOX), particulate matter (PM2.5), and the lung-deposited surface area (LDSA) were evaluated. The effects of meteorological conditions, particularly wind speed and direction, and also temperature, on the observed distribution of ultrafine particles will be detailed. The study presents the results of an experimental investigation into the particle number concentration and size distribution of UFP, BC, and NOX, with measurements taken at the Automatic Urban and Rural Network (AURN) monitoring site in Leicester. The monitoring was performed as part of the EU project JOAQUIN (Joint Air Quality Initiative) supported by the INTERREG IVB NWE program. Between November 2013 and November 2015, the total number concentrations (TNC) were measured by a water-based condensation particle counter (W-CPC, TSI model 3783); the particle number concentrations (PNC) and size distributions were measured by an ultrafine particle monitor (UFP TSI model 3031); the BC by a MAAP (Thermo 5012); the NOX by a NO-NO2-NOX monitor (Thermo Scientific 42i); and a Nanoparticle Surface Area Monitor (NSAM, TSI 3550) was used to measure the LDSA (reported as μm2 cm−3) corresponding to the alveolar region of the lung. Lower average particle number concentrations were observed in summer than in winter, which might be related mainly to particles directly emitted by traffic and to the more favorable conditions of atmospheric dispersion in summer. 
Results showed a traffic-related diurnal variation of UFP, BC, NOX, and LDSA, with clear morning and evening rush hour peaks on weekdays and only an evening peak at weekends. Correlation coefficients were calculated between UFP and the other pollutants (BC and NOX); the highest correlations were found in the winter months. Overall, the results support the notion that local traffic emissions were a major contributor to atmospheric particle pollution, and a clear seasonal pattern was found, with higher values during the cold season.
Keywords: size distribution, traffic emissions, UFP, urban area
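The correlation coefficients referred to above are ordinary Pearson coefficients computed per season. A minimal sketch, with invented hourly winter values rather than the Leicester measurements:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented hourly winter values: traffic drives both pollutants up together,
# so the winter coefficient is high, as the study reports.
ufp_winter = [8000, 15000, 22000, 12000, 9000, 21000]  # particles cm^-3
bc_winter = [0.8, 1.6, 2.4, 1.0, 1.3, 2.0]             # ug m^-3
print(round(pearson(ufp_winter, bc_winter), 2))
```

Repeating the calculation on summer and winter subsets of the paired UFP/BC (or UFP/NOX) series is what yields the seasonal comparison described in the abstract.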
Procedia PDF Downloads 329
73 The Role of People and Data in Complex Spatial-Related Long-Term Decisions: A Case Study of Capital Project Management Groups
Authors: Peter Boyes, Sarah Sharples, Paul Tennent, Gary Priestnall, Jeremy Morley
Abstract:
Significant long-term investment projects can involve complex decisions. These are often described as capital projects, and the factors that contribute to their complexity include budgets, motivating reasons for investment, stakeholder involvement, interdependent projects, and the delivery phases required. The complexity of these projects often requires management groups to be established involving stakeholder representatives; these teams are inherently multidisciplinary. This study uses two university campus capital projects as case studies for this type of management group. Because the projects interact with wider campus infrastructure and users, decisions are made at varying spatial granularity throughout the project lifespan. This spatial context adds complexity to the group's decisions. Sensemaking is the process used to achieve group situational awareness of a complex situation, enabling the team to arrive at a consensus and make a decision. The purpose of this study is to understand the role of people and data in these complex, spatially related, long-term decision and sensemaking processes. The paper aims to identify and present issues experienced in practical settings of this type of decision. A series of exploratory semi-structured interviews with members of the two projects elicited an understanding of their operation. From two stages of thematic analysis, inductive and deductive, emergent themes are identified around group structure, data usage, and decision making within these groups. When data were made available to the group, there were commonly issues with the perceived veracity and validity of the data presented; this impacted the group's ability to reach consensus and, therefore, for decisions to be made. Similarly, there were different responses to forecasted or modelled data, shaped by the experience and occupation of the individuals within the multidisciplinary management group. 
This paper provides an understanding of the further support required for team sensemaking and decision making in complex capital projects. It also discusses the barriers to effective decision making found in this setting and suggests opportunities to develop decision support systems for this team strategic decision-making process. Recommendations are made for further research into the sensemaking and decision-making processes of this complex, spatially related setting.
Keywords: decision making, decisions under uncertainty, real decisions, sensemaking, spatial, team decision making
Procedia PDF Downloads 129
72 Identification, Synthesis, and Biological Evaluation of the Major Human Metabolite of NLRP3 Inflammasome Inhibitor MCC950
Authors: Manohar Salla, Mark S. Butler, Ruby Pelingon, Geraldine Kaeslin, Daniel E. Croker, Janet C. Reid, Jong Min Baek, Paul V. Bernhardt, Elizabeth M. J. Gillam, Matthew A. Cooper, Avril A. B. Robertson
Abstract:
MCC950 is a potent and selective inhibitor of the NOD-like receptor pyrin domain-containing protein 3 (NLRP3) inflammasome that shows early promise for the treatment of inflammatory diseases. The identification of the major metabolites of a lead molecule is an important step in the drug development process: it provides information about metabolically labile sites in the molecule, thereby helping medicinal chemists to design metabolically stable molecules. To identify the major metabolites of MCC950, the compound was incubated with human liver microsomes, and subsequent analysis by (+)- and (−)-QTOF-ESI-MS/MS revealed a major metabolite formed by hydroxylation on the 1,2,3,5,6,7-hexahydro-s-indacene moiety of MCC950. This major metabolite can lose two water molecules, and three possible regioisomers were synthesized. Co-elution of the major metabolite with each of the synthesized compounds using HPLC-ESI-SRM-MS/MS revealed the structure of the metabolite as (±)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. Subsequent synthesis of the individual enantiomers and co-elution in HPLC-ESI-SRM-MS/MS using a chiral column revealed the metabolite to be R-(+)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. To study the cytochrome P450 enzyme(s) possibly responsible for the formation of the major metabolite, MCC950 was incubated with a panel of cytochrome P450 enzymes. The results indicated that CYP1A2, CYP2A6, CYP2B6, CYP2C9, CYP2C18, CYP2C19, CYP2J2, and CYP3A4 are most likely responsible for the formation of the major metabolite. The biological activity of the major metabolite and the other synthesized regioisomers was also investigated by screening for NLRP3 inflammasome inhibitory activity and cytotoxicity. The major metabolite had 170-fold lower inhibitory activity (IC50 = 1238 nM) than MCC950 (IC50 = 7.5 nM). 
Interestingly, one regioisomer showed nanomolar inhibitory activity (IC50 = 232 nM). However, no evidence of cytotoxicity was observed with any of the synthesized compounds when tested in human embryonic kidney 293 (HEK293) cells and human liver hepatocellular carcinoma (HepG2) cells. These key findings give an insight into the SAR of the hexahydroindacene moiety of MCC950 and reveal a metabolic soft spot that could be blocked by chemical modification.
Keywords: cytochrome P450, inflammasome, MCC950, metabolite, microsome, NLRP3
Procedia PDF Downloads 250
71 Utilizing Artificial Intelligence to Predict Postoperative Atrial Fibrillation in Non-Cardiac Transplant
Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula
Abstract:
Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction of silent AF for the development of POAF within 30 days of transplant. A total of 2,261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and p=0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1,104 patients without an elevated pre-transplant prediction was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. 
When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated pre-transplant screen (n=1084, on account of 20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning
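The negative predictive value and sensitivity quoted for the 0.08 threshold are standard confusion-matrix quantities. A minimal sketch of that calculation, with invented scores and outcomes rather than the study's data:

```python
# Confusion-matrix metrics for a screening threshold; all values invented.
def screen_metrics(scores, labels, threshold):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    sensitivity = tp / (tp + fn)      # fraction of POAF cases flagged
    npv = tn / (tn + fn)              # fraction of negative screens truly POAF-free
    return sensitivity, npv

scores = [0.02, 0.05, 0.09, 0.12, 0.30, 0.01, 0.20, 0.04]  # AI-ECG outputs
labels = [0, 0, 1, 1, 1, 0, 0, 1]                          # 1 = developed POAF
sens, npv = screen_metrics(scores, labels, threshold=0.08)
print(sens, npv)  # 0.75 0.75
```

A high NPV with moderate sensitivity, as reported, is exactly the profile that makes a screen useful for ruling patients out of intensified monitoring rather than ruling them in.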
Procedia PDF Downloads 131
70 Security Issues in Long Term Evolution-Based Vehicle-to-Everything Communication Networks
Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba
Abstract:
The ability for vehicles to communicate with other vehicles (V2V), with the physical (V2I) and network (V2N) infrastructures, with pedestrians (V2P), etc. – collectively known as V2X (Vehicle-to-Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion, and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have approved, in Release 14, cellular connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it important to protect V2X messages from attacks that can result in catastrophically wrong decisions or actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM), and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. 
However, this protocol faces technical challenges, such as high signaling overhead, lack of synchronization, handover delay, and potential control-plane signaling overloads, as well as privacy preservation issues, and so cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, to allow security operations with the minimal overhead cost desirable for V2X services. The proposed architecture can ensure fast, secure, and robust V2X services under the LTE network while meeting V2X security requirements.
Keywords: authentication, long term evolution, security, vehicle-to-everything
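The mutual authentication at the heart of AKA can be illustrated with a much-simplified challenge-response sketch. This is not the 3GPP AKA protocol (which uses the MILENAGE/TUAK key-derivation functions, sequence numbers, and derived session keys); it is a minimal HMAC-based stand-in showing how a shared long-term key lets each side verify the other.

```python
import hashlib
import hmac
import secrets

# Shared long-term key K, as held by a vehicle's USIM and the home network.
K = secrets.token_bytes(32)

def respond(key, challenge):
    # Both sides derive the expected response from the shared key.
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Network -> vehicle: a random challenge; the vehicle's response proves
# possession of K without ever sending K itself.
challenge = secrets.token_bytes(16)
vehicle_res = respond(K, challenge)
network_ok = hmac.compare_digest(vehicle_res, respond(K, challenge))

# Vehicle <- network: a network authentication token (AUTN-like) computed
# from the same key lets the vehicle verify the network too, making the
# authentication mutual rather than one-way.
autn = respond(K, challenge + b"network")
vehicle_ok = hmac.compare_digest(autn, respond(K, challenge + b"network"))

print(network_ok and vehicle_ok)  # True when both sides hold the same K
```

The per-attach round trips implied even by this toy exchange hint at the signaling overhead the abstract criticizes, which is what motivates the distributed peer-to-peer alternative.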
Procedia PDF Downloads 167
69 Emotion Motives Predict the Mood States of Depression and Happiness
Authors: Paul E. Jose
Abstract:
A new self-report measure named the General Emotion Regulation Measure (GERM) assesses four key goals for experiencing broad valenced groups of emotions: 1) trying to experience positive emotions (e.g., joy, pride, liking a person); 2) trying to avoid experiencing positive emotions; 3) trying to experience negative emotions (e.g., anger, anxiety, contempt); and 4) trying to avoid experiencing negative emotions. Although individual differences in GERM motives have been identified, evidence of validity with common mood outcomes is lacking. In the present study, whether GERM motives predict self-reported subjective happiness and depressive symptoms (CES-D) was tested with a community sample of 833 young adults. It was predicted that the GERM motive of trying to experience positive emotions would positively predict subjective happiness, and analogously trying to experience negative emotions would predict depressive symptoms. An initial path model was constructed in which the four GERM motives predicted both subjective happiness and depressive symptoms. The fully saturated model included three non-significant paths, which were subsequently pruned, and a good-fitting model was obtained (CFI = 1.00; RMR = .007). Two GERM motives significantly predicted subjective happiness: 1) trying to experience positive emotions (β = .38, p < .001) and 2) trying to avoid experiencing positive emotions (β = -.48, p < .001). Thus, individuals who reported high levels of trying to experience positive emotions reported high levels of happiness, and individuals who reported low levels of trying to avoid experiencing positive emotions also reported high levels of happiness. Three GERM motives significantly predicted depressive symptoms: 1) trying to avoid experiencing positive emotions (β = .20, p < .001); 2) trying to experience negative emotions (β = .15, p < .001); and 3) trying to experience positive emotions (β = -.07, p < .001).
In agreement with predictions, trying to experience positive emotions was positively associated with subjective happiness and trying to experience negative emotions was positively associated with depressive symptoms. In essence, these two valenced mood states seem to be sustained by trying to experience similarly valenced emotions. However, the three other significant paths in the model indicated that emotion motives play a complicated role in supporting both positive and negative mood states. For subjective happiness, the GERM motive of not trying to avoid positive emotions, i.e., not avoiding happiness, was also a strong predictor of happiness. Thus, people who report being the happiest are those individuals who not only strive to experience positive emotions but also are not ambivalent about them. The pattern for depressive symptoms was more nuanced. Individuals who reported higher depressive symptoms also reported higher levels of avoiding positive emotions and trying to experience negative emotions. The strongest predictor of depressed mood was avoiding positive emotions, which suggests that happiness aversion, or fear of happiness, is an important motive for dysphoric people. Future work should determine whether these patterns of association are similar among clinically depressed people, and longitudinal data are needed to determine temporal relationships between motives and mood states. Keywords: emotion motives, depression, subjective happiness, path model
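For a single predictor, the standardized regression coefficient reduces to the Pearson correlation between predictor and outcome, which gives a sense of the β values reported above; the motive and happiness scores below are hypothetical illustrations, not the study's data.

```python
import math

def standardized_beta(x, y):
    """For one predictor, the standardized regression coefficient
    equals the Pearson correlation between predictor and outcome."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical scores: GERM "try to experience positive emotions" vs happiness.
motive    = [2.1, 3.4, 4.0, 1.8, 3.9, 2.7, 4.5, 3.1]
happiness = [3.0, 4.1, 4.6, 2.5, 4.4, 3.2, 4.9, 3.8]
print(f"beta = {standardized_beta(motive, happiness):.2f}")
```

In the reported path model all four motives are entered jointly, so each β is additionally adjusted for the other three predictors.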
Procedia PDF Downloads 202
68 Non-Newtonian Fluid Flow Simulation for a Vertical Plate and a Square Cylinder Pair
Authors: Anamika Paul, Sudipto Sarkar
Abstract:
The flow behaviour of non-Newtonian fluids is quite complicated, although both the pseudoplastic (n < 1, n being the power index) and dilatant (n > 1) fluids in this category are used immensely in the chemical and process industries. Only limited research has been carried out on flow over a bluff body in a non-Newtonian flow environment. In the present numerical simulation we control the vortices of a square cylinder by placing an upstream vertical splitter plate for pseudoplastic (n = 0.8), Newtonian (n = 1) and dilatant (n = 1.2) fluids. The position of the upstream plate is also varied to find the critical distance between the plate and cylinder, below which vortex shedding from the cylinder is suppressed. Here the Reynolds number is taken as Re = 150 (Re = U∞a/ν, where U∞ is the free-stream velocity of the flow, a is the side of the cylinder and ν is the maximum value of kinematic viscosity of the fluid), which lies in the laminar periodic vortex shedding regime. The vertical plate has dimensions of 0.5a × 0.05a and is placed on the cylinder centre-line. Gambit 2.2.30 is used to construct the flow domain and to impose the boundary conditions. In detail, we imposed a velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip) conditions at the upper and lower domain boundaries. Wall boundary conditions (u = v = 0) are applied on both the cylinder and the splitter plate surfaces. The unsteady 2-D Navier-Stokes equations in fully conservative form are then discretized with second-order spatial and first-order temporal accuracy. These discretized equations are solved by Ansys Fluent 14.5 implementing the SIMPLE algorithm within the finite volume method. Fine meshing is used around the plate and cylinder; away from the cylinder, the grids are slowly stretched out in all directions.
To ensure mesh quality, a total of 297 × 208 grid points are used for G/a = 3 (G being the gap between the plate and cylinder) in the streamwise and flow-normal directions, respectively, after a grid-independence study. The computed mean flow quantities obtained for Newtonian flow agree well with the available literature. The results are depicted with the help of instantaneous and time-averaged flow fields. Noteworthy qualitative and quantitative differences are obtained in the flow field as the rheology of the fluid changes. Also, aerodynamic forces and vortex shedding frequencies differ with the gap ratio and power index of the fluid. We can conclude from the present simulation that Fluent is capable of capturing the vortex dynamics of the unsteady laminar flow regime even in a non-Newtonian flow environment. Keywords: CFD, critical gap-ratio, splitter plate, wake-wake interactions, dilatant, pseudoplastic
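The pseudoplastic and dilatant behaviour governed by the power index n can be sketched with the Ostwald-de Waele power-law model for apparent viscosity; the consistency index and shear rates below are illustrative values, not parameters of the simulation.

```python
def apparent_viscosity(k, n, shear_rate):
    """Ostwald-de Waele power-law model: mu_app = k * gamma_dot**(n - 1).
    n < 1 -> shear-thinning (pseudoplastic); n > 1 -> shear-thickening (dilatant);
    n = 1 -> Newtonian (viscosity independent of shear rate)."""
    return k * shear_rate ** (n - 1.0)

k = 1.0  # consistency index (illustrative value)
for n in (0.8, 1.0, 1.2):
    mu_low  = apparent_viscosity(k, n, 0.1)   # low shear rate
    mu_high = apparent_viscosity(k, n, 10.0)  # high shear rate
    print(f"n = {n}: mu(0.1) = {mu_low:.3f}, mu(10) = {mu_high:.3f}")
```

This shear-rate dependence is why the abstract defines Re with the maximum kinematic viscosity in the field rather than a single fluid constant.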
Procedia PDF Downloads 111
67 The Need for Higher Education STEM Integrated into the Social Sciences
Authors: Luis Fernando Calvo Prieto, Raul Herrero Martínez, Mónica Santamarta Llorente, Sergio Paniagua Bermejo
Abstract:
The project presented here starts from a questioning of the compartmentalization of knowledge that occurs in university higher education. Several authors describe the problems associated with this reality (Rodamillans, M), indicating a lack of integration of the knowledge acquired by students across the subjects taken in their university degree. Furthermore, this disintegration is accentuated by the enrollment system of some Faculties and/or Schools of Engineering, which allows the student to take subjects outside the recommended curricular path. This problem becomes conspicuous when trying to find an integration between humanistic subjects and the world of experimental sciences or engineering. This abrupt separation between humanities and sciences can be observed in any study plan of Spanish degrees. Except for subjects such as economics or English, in the Faculties of Sciences and the Schools of Engineering the absence of any humanistic content is striking. At some point it was decided that the only value to take into account when designing these study plans was "usefulness", the humanities being considered systematically useless for students' training and therefore banished from the study plans, forgetting the role they play in the capacity for both leadership and civic humanism in our professionals of tomorrow. The teaching guides for the different subjects in the branch of science or engineering do not include any competency, not even a transversal one, related to leadership capacity or to the need, in today's world, for the social, civic and humanitarian knowledge of the people who will offer medical, pharmaceutical, environmental, biotechnological or engineering solutions to a society built on more or less complex human relationships and on the historical events that have occurred so far.
If we want professionals who know how to deal effectively and rationally with their leadership tasks and who, in addition, find and develop an ethically civic sense and a humanistic profile in their functions and scientific tasks, we must not set aside the importance, for these professionals themselves, of knowing the causes, facts and consequences of key events in the history of humanity. The words of the humanist Paul Preston are well known: "he who does not know his history is condemned to repeat the mistakes of the past." The idea, therefore, that today there can be men of science in the way the scientists of the Renaissance were becomes, at the very least, difficult to conceive. To think that a Leonardo da Vinci could reappear in current times is a more than far-fetched idea; and although at first it may seem that the specialization of a professional is inevitable but beneficial, there are authors who consider (Sánchez Inarejos) that it has an extremely serious negative side effect: entrenchment behind the different postulates of each area of knowledge, disdaining everything foreign to it. Keywords: STEM, higher education, social sciences, history
Procedia PDF Downloads 66
66 Call-Back Laterality and Bilaterality: Possible Screening Mammography Quality Metrics
Authors: Samson Munn, Virginia H. Kim, Huija Chen, Sean Maldonado, Michelle Kim, Paul Koscheski, Babak N. Kalantari, Gregory Eckel, Albert Lee
Abstract:
In terms of screening mammography quality, it is not known what portion of reports advising call-back imaging should be bilateral versus unilateral, nor how much unilateral call-backs may appropriately diverge from a 50–50 left-right split. Many factors may affect detection laterality: display arrangement, reflections preferentially striking one display location, hanging protocols, seating positions with respect to others and displays, visual field cuts, health, etc. The call-back bilateral fraction may reflect radiologist experience (not in our data) or confidence level. Thus, the laterality and bilaterality of call-backs advised in screening mammography reports could be worthy quality metrics. Here, laterality data did not reveal a concern until drilling down to individuals. Bilateral screening mammogram report recommendations by five breast imaging attending radiologists at Harbor-UCLA Medical Center (Torrance, California) from 9/1/15 to 8/31/16 and from 9/1/16 to 8/31/17 were retrospectively reviewed. Recommended call-backs for bilateral versus unilateral, and for left versus right, findings were counted. The chi-square (χ²) statistic was applied. Year 1: of 2,665 bilateral screening mammograms, reports of 556 (20.9%) recommended call-back, of which 99 (17.8% of the 556) were for bilateral findings. Of the 457 unilateral recommendations, 222 (48.6%) regarded the left breast. Year 2: of 2,106 bilateral screening mammograms, reports of 439 (20.8%) recommended call-back, of which 65 (14.8% of the 439) were for bilateral findings. Of the 374 unilateral recommendations, 182 (48.7%) regarded the left breast. Individual ranges of call-backs that were bilateral were 13.2–23.3%, 10.2–22.5%, and 13.6–17.9% for year(s) 1, 2, and 1+2, respectively; these ranges were unrelated to experience level; the two-year mean was 15.8% (SD = 1.9%). The lowest χ² p value of the group's sidedness disparities for years 1, 2, and 1+2 was > 0.4.
Regarding four individual radiologists, the lowest p value was 0.42. However, the fifth radiologist disfavored the left, with p values of 0.21, 0.19, and 0.07, respectively; that radiologist had the greatest number of years of experience. There was a concerning 93% likelihood that the bias against left-breast findings evidenced by one of our radiologists was not random. Notably, very soon after the period under review, he retired, presented with leukemia, and died. We call for research to be done, particularly by large departments with many radiologists, on two possible new quality metrics in screening mammography: laterality and bilaterality. (Images, patient outcomes, report validity, and radiologist psychological confidence levels were not assessed. No intervention nor subsequent data collection was conducted. This uncomplicated collection of data and simple appraisal were not designed to develop or contribute to generalizable knowledge, nor was there any intention that they do so (per U.S. DHHS 45 CFR part 46).) Keywords: mammography, screening mammography, quality, quality metrics, laterality
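The sidedness comparison above can be sketched as a one-degree-of-freedom chi-square test against an expected 50–50 left-right split, using the published year-1 group counts; the erfc tail formula is a standard stdlib-only route to the p value for one degree of freedom.

```python
from math import erfc, sqrt

def chi_square_sidedness(left, right):
    """Chi-square goodness-of-fit test (1 df) of whether unilateral
    call-backs deviate from a 50-50 left-right split.
    For 1 df, p = erfc(sqrt(chi2 / 2))."""
    expected = (left + right) / 2
    chi2 = ((left - expected) ** 2 + (right - expected) ** 2) / expected
    p = erfc(sqrt(chi2 / 2))
    return chi2, p

# Year-1 group data from the abstract: 222 left of 457 unilateral call-backs.
chi2, p = chi_square_sidedness(222, 235)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
```

The resulting p value is comfortably above 0.4, matching the abstract's statement that the group-level disparities were unremarkable.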
Procedia PDF Downloads 158
65 Printed Electronics for Enhanced Monitoring of Organ-on-Chip Culture Media Parameters
Authors: Alejandra Ben-Aissa, Martina Moreno, Luciano Sappia, Paul Lacharmoise, Ana Moya
Abstract:
Organ-on-Chip (OoC) stands out as a highly promising approach for drug testing, presenting a cost-effective and ethically superior alternative to conventional in vivo experiments. These cutting-edge devices emerge from the integration of tissue engineering and microfluidic technology, faithfully replicating the physiological conditions of targeted organs. Consequently, they offer a more precise understanding of drug responses without the ethical concerns associated with animal testing. In addressing the limitations imposed on OoC by conventional and time-consuming techniques, Lab-on-Chip (LoC) systems emerge as a disruptive technology capable of providing real-time monitoring without compromising sample integrity. This work develops LoC platforms that can be integrated within OoC platforms to monitor essential culture media parameters, including glucose, oxygen, and pH, facilitating the straightforward exchange of sensing units within a dynamic and controlled environment without disrupting cultures. This approach preserves the experimental setup, minimizes the impact on cells, and enables efficient, prolonged measurement. The LoC system is fabricated following the patented methodology protected by EU patent EP4317957A1. One of the key challenges, integrating sensors in a biocompatible, feasible, robust, and scalable manner, is addressed through fully printed sensors, ensuring a customized, cost-effective, and scalable solution. With this technique, sensor reliability is enhanced, providing high sensitivity and selectivity for accurate parameter monitoring. In the present study, the LoC is validated by measuring a complete culture medium. The oxygen sensor provided a measurement range from 0 mgO2/L to 6.3 mgO2/L. The pH sensor demonstrated a measurement range spanning 2 pH units to 9.5 pH units. Additionally, the glucose sensor achieved a measurement range from 0 mM to 11 mM. All measurements were performed with the sensors integrated in the LoC.
In conclusion, this study showcases the impactful synergy of OoC technology with LoC systems using fully printed sensors, marking a significant step forward in ethical and effective biomedical research, particularly in drug development. This innovation not only meets current demands but also lays the groundwork for future advancements in precision and customization within scientific exploration. Keywords: organ on chip, lab on chip, real time monitoring, biosensors
Procedia PDF Downloads 11
64 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis
Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon
Abstract:
Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence and risk factors associated with the development of pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1st January 2017 and 31st December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and performed via a sub-xiphoid incision, pericardiocentesis, or re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fisher exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Between January 1st, 2017, and December 31st, 2018, a total of 408 patients underwent CABG at our institution, and eight (1.9%) required drainage of a pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups.
Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p < 0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) as associated with the development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were identified as associated with the development of clinically significant pericardial effusions. High clinical suspicion and a low threshold for transthoracic echocardiography are pertinent to ensure this potentially lethal condition is not missed. Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage
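The Fisher exact test named in the methods can be sketched on a 2×2 table using only the standard library; the counts below are reconstructed from the reported percentages (roughly 3/8 effusion patients vs 35/400 controls with preoperative atrial arrhythmia) and are therefore an assumption, and a one-sided hypergeometric tail is used for simplicity rather than the two-sided form.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    probability of observing at least `a` exposed cases by chance,
    summed over the upper hypergeometric tail."""
    n = a + b + c + d          # total patients
    k_total = a + c            # total exposed (e.g. preop atrial arrhythmia)
    m = a + b                  # size of the case group (effusion drained)
    p = 0.0
    for k in range(a, min(m, k_total) + 1):
        p += comb(k_total, k) * comb(n - k_total, m - k) / comb(n, m)
    return p

# Hypothetical counts reconstructed from the reported percentages:
# 3/8 effusion patients vs 35/400 controls with preoperative atrial arrhythmia.
p = fisher_one_sided(3, 5, 35, 365)
print(f"one-sided Fisher exact p = {p:.3f}")
```

With these reconstructed counts the tail probability lands in the same neighbourhood as the reported p = 0.03, which is the kind of small-cell situation (only eight cases) where the exact test is preferred over chi-square.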
Procedia PDF Downloads 159
63 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on crop mechanistic modeling; they describe crop growth in interaction with the environment as dynamical systems. But the calibration process of the dynamical system comes with much difficulty, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data, final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate crop prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible datasets offers several side perspectives. The mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches. Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
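The two accuracy metrics and the 5-fold split can be sketched in a few lines of plain Python; the data and the mean-value "model" below are toy illustrations standing in for the paper's USDA dataset and regression methods, not reproductions of them.

```python
import math
import random

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def maep(y_true, y_pred):
    """Mean absolute error of prediction, as a fraction of the true value."""
    return sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

def five_fold_indices(n, seed=0):
    """Shuffle indices and split them into 5 roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[k::5] for k in range(5)]

# Toy illustration: evaluate a mean-value "model" by 5-fold cross-validation.
y = [9.2, 10.1, 8.7, 11.3, 9.9, 10.4, 8.9, 10.8, 9.5, 10.0]  # yields, t/ha (toy)
scores = []
for fold in five_fold_indices(len(y)):
    train = [y[i] for i in range(len(y)) if i not in fold]
    pred = sum(train) / len(train)           # predict the training mean
    test = [y[i] for i in fold]
    scores.append(maep(test, [pred] * len(test)))
print(f"mean MAEP over 5 folds: {sum(scores) / 5:.3f}")
```

Any of the paper's regressors can be dropped in place of the training-mean predictor; the fold structure and the two error metrics stay the same.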
Procedia PDF Downloads 229