Search results for: Paul V. Thomas
135 Destroying the Body for the Salvation of the Soul: A Modern Theological Approach
Authors: Angelos Mavropoulos
Abstract:
Apostle Paul repeatedly mentioned the bodily sufferings that he voluntarily went through for Christ, as his body was in chains for the ‘mystery of Christ’ (Col 4:3), while on his flesh he gladly carried the ‘thorn’ and all his pains and weaknesses, which prevented him from being proud (2 Cor 12:7). In his view, God’s power ‘is made perfect in weakness’, and when we are physically weak, this is when we are spiritually strong (2 Cor 12:9-10). In addition, we all bear the death of Jesus in our bodies so that His life can be ‘revealed in our mortal body’ (2 Cor 4:10-11), and if we indeed share in His sufferings, we will share in His glory as well (Rom 8:17). Based on these passages, several Christian writers projected bodily suffering, pain, death, and martyrdom, in general, as the means to a noble Christian life and the way to attain God. Moreover, Christian tradition is full of instances of voluntary self-harm, mortification of the flesh, and body mutilation for the sake of the soul by several pious men and women, as an imitation of Christ’s earthly suffering. It is a fact, therefore, that, for Christianity, he or she who not only endures but even inflicts earthly pains for God is highly appreciated and will be rewarded in the afterlife. Nevertheless, more recently, Gaudium et Spes and Veritatis Splendor decisively and totally overturned the Catholic Church’s view on the matter. The former characterised the practices that violate ‘the integrity of the human person, such as mutilation, torments inflicted on body or mind’ as ‘infamies’ (Gaudium et Spes, 27), while the latter, after confirming that there are some human acts that are ‘intrinsically evil’, that is, they are always wrong, regardless of ‘the ulterior intentions of the one acting and the circumstances’, included in this category, among others, ‘whatever violates the integrity of the human person, such as mutilation, physical and mental torture and attempts to coerce the spirit.’ ‘All these and the like’, the encyclical concludes, ‘are a disgrace… and are a negation of the honour due to the Creator’ (Veritatis Splendor, 80). For the Catholic Church, therefore, willful bodily sufferings and mutilations infringe human integrity and are intrinsically evil acts, while intentional harm, based on the principle that ‘evil may not be done for the sake of good’, is always unreasonable. On the other hand, many saints who engaged in these practices are still honoured for their ascetic and noble life, while, even today, similar practices are found, such as the well-known Good Friday self-flagellation and nailing to the cross performed in San Fernando, Philippines. This paper, therefore, attempts to set out the viewpoint of modern theology on these practices and to answer the question of whether Christians should hurt their body for the salvation of their soul. Keywords: human body, human soul, torture, pain, salvation
Procedia PDF Downloads 91
134 Estimation of Rock Strength from Diamond Drilling
Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi
Abstract:
The mining industry relies on an estimate of rock strength at several stages of a mine life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the material rock strength. Common laboratory tests, such as the rod mill, ball mill, and uniaxial compressive strength tests, share shortcomings such as time, sample preparation, bias in plug selection, cost, repeatability, and the sample amount needed to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of the rock strength from drilling data recorded while coring with a diamond core head. The work presented in this paper builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (Polycrystalline Diamond Compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head, which relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity), and introduces the intrinsic specific energy, i.e., the energy required to drill a unit volume of rock with an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris), which has been found to correlate well with the rock uniaxial compressive strength for PDC and roller cone bits. The second part describes the laboratory drill rig, the experimental procedure that is tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology to derive the intrinsic specific energy from the recorded data. The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate the rock strength from field drilling data, accounting for the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study of drilling data recorded while drilling an exploration well in Australia. Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength
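As a rough illustration of the energy-per-unit-volume bookkeeping behind the intrinsic specific energy, the sketch below computes a Teale-type mechanical specific energy from recorded torque, thrust, rotary speed, and rate of penetration for an annular core head. It is a minimal sketch under assumed, hypothetical operating values and a generic core-head geometry; it does not reproduce the Franca et al. (2015) bit-rock interface model, in which the intrinsic specific energy is extracted from the slope of the drilling response rather than from a single operating point.

```python
import numpy as np

def mechanical_specific_energy(torque_Nm, wob_N, rpm, rop_m_per_h, bit_od_m, bit_id_m):
    """Teale-type mechanical specific energy (J/m^3) for an annular core head.

    Illustrative only: the intrinsic specific energy of a sharp tool is obtained
    from the slope of the torque/penetration response over many operating points,
    not from a single recording as done here.
    """
    area = np.pi / 4.0 * (bit_od_m**2 - bit_id_m**2)   # annular cutting face
    rop = rop_m_per_h / 3600.0                          # m/s
    omega = 2.0 * np.pi * rpm / 60.0                    # rad/s
    rotary_power = torque_Nm * omega                    # W
    thrust_power = wob_N * rop                          # W
    volume_rate = area * rop                            # m^3/s
    return (rotary_power + thrust_power) / volume_rate  # J/m^3 == Pa

# Hypothetical laboratory recording (values are assumptions, not the paper's data)
mse = mechanical_specific_energy(torque_Nm=25.0, wob_N=5000.0, rpm=400.0,
                                 rop_m_per_h=6.0, bit_od_m=0.0757, bit_id_m=0.0476)
print(f"MSE ~ {mse / 1e6:.0f} MPa")
```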
Procedia PDF Downloads 136
133 NHS Tayside Plastic Surgery Induction Cheat Sheet and Video
Authors: Paul Holmes, Mike N. G.
Abstract:
Foundation-year doctors face increased stress, pressure, and uncertainty when starting new rotations throughout their first years of work. This research questionnaire resulted in an induction cheat sheet and induction video that enhanced junior doctors' understanding of how to work effectively within the plastic surgery department at NHS Tayside. The objectives and goals were to improve the transition between cohorts of junior doctors in ward 26 at Ninewells Hospital. Before this quality improvement project, the induction pack was 74 pages long and over eight years old. With the support of consultant Mike Ng, a new, up-to-date induction was created. This involved developing a questionnaire and a cheat sheet. The questionnaire covered clerking, venipuncture, ward pharmacy, theatres, admissions, specialties on the ward, the cardiac arrest trolley, clinical emergencies, discharges, and escalation. This audit completed three cycles between August 2022 and August 2023. The cheat sheet developed into a concise two-page A4 document designed for doctors to reference easily and to understand the essentials. The document format is a table containing ward layout; specialty; location; physician associates; shift patterns; ward rounds; handover location and time; hours of coverage; senior escalation; nights; daytime duties; meetings/MDTs/board meetings; important bleeps and codes; department guidelines; boarders; referrals and patient stream; pharmacy; absences; rota coordinator; annual leave; and top tips. The induction video is a 10-minute in-depth explanation of all aspects of the ward and explores the contents of the cheat sheet in more depth. This alternative visual format familiarizes the junior doctor with all aspects of the ward. These were provided to all foundation year 1 and 2 doctors on ward 26 at Ninewells Hospital, NHS Tayside, Scotland. This work has since been adopted by the General Surgery Department, which extends to six further wards, and has improved the effective handing over of the junior doctor’s role between cohorts. There is potential to further expand the cheat sheet to other departments, as the concise document takes around 30 minutes for a doctor currently on that ward to complete. The time spent filling out the form provides vital information to the incoming junior doctors, which has significant potential to improve patient care. Keywords: induction, junior doctor, handover, plastic surgery
Procedia PDF Downloads 85
132 Expanding the Atelier: Design Lead Academic Project Using Immersive User-Generated Mobile Images and Augmented Reality
Authors: David Sinfield, Thomas Cochrane, Marcos Steagall
Abstract:
While there is much hype around the potential and development of mobile virtual reality (VR), the two key critical success factors are the ease of the user experience and the development of a simple user-generated content ecosystem. Educational technology history is littered with the debris of over-hyped revolutionary new technologies that failed to gain mainstream adoption or were quickly superseded. Examples include 3D television, interactive CD-ROMs, Second Life, and Google Glass. However, we argue that this is the result of curriculum design that substitutes new technologies into pre-existing pedagogical strategies focused upon teacher-delivered content, rather than exploring new pedagogical strategies that enable student-determined learning, or heutagogy. Visual Communication design-based learning, such as Graphic Design, Illustration, Photography, and Design Process, is heavily based on the traditional classroom environment, whereby student interaction takes place both at peer level and through teacher-based feedback. This makes for a healthy creative learning environment but raises other issues in terms of student-to-teacher ratios and reduced contact time. Such issues arise when students are away from the classroom and cannot interact with their peers and teachers, and thus we see a decline in the students’ creative work. Using AR and VR as a means of stimulating students to think beyond the limitations of the studio-based classroom, this paper discusses the outcomes of a student project considering the virtual classroom and the techniques involved. The Atelier learning environment is especially suited to the Visual Communication model as it deals with the creative processing of ideas that need to be shared in a collaborative manner. This has proven to be a successful model over the years in the traditional form of design education, but it has more recently seen a shift in thinking as we move into a more digital model of learning and away from the classical classroom structure. This study focuses on the outcomes of a student design project that employed Augmented Reality and Virtual Reality technologies in order to expand the dimensions of the classroom beyond its physical limits. Augmented Reality, when integrated into the learning experience, can improve the learning motivation and engagement of students. This paper will outline some of the processes used and the findings from the semester-long project that took place. Keywords: augmented reality, blogging, design in community, enhanced learning and teaching, graphic design, new technologies, virtual reality, visual communications
Procedia PDF Downloads 238
131 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
Over the past few years, with the rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, in comparison with other industries, they are adopted less frequently in commercial banking, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model is developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concerns about the difficulty of explaining the models for regulatory purposes. Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
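For readers unfamiliar with the scorecard side of the approach, the sketch below contrasts the classical Weight of Evidence computed from good/bad counts with a WoE-like value backed out of an ML model's score distribution. It is a minimal sketch on synthetic data: the log-odds matching shown is an assumption made for illustration, not the proprietary Dun and Bradstreet implementation described in the abstract.

```python
import numpy as np
import pandas as pd

def classical_woe(df, feature_bin, target):
    """WoE per bin from observed good/bad counts (target: 1 = bad)."""
    g = df.groupby(feature_bin)[target].agg(bad="sum", n="count")
    g["good"] = g["n"] - g["bad"]
    return np.log((g["good"] / g["good"].sum()) / (g["bad"] / g["bad"].sum()))

def ml_matched_woe(df, feature_bin, ml_score, base_rate):
    """WoE-like value per bin derived from an ML model's scores.

    Instead of counting goods/bads (unstable in sparse bins), take the mean
    predicted bad-probability per bin and express it as a log-odds shift from
    the portfolio base rate -- an assumed stand-in for 'matching the score
    distribution' mentioned in the abstract.
    """
    p = df.groupby(feature_bin)[ml_score].mean().clip(1e-6, 1 - 1e-6)
    base_odds = np.log(base_rate / (1 - base_rate))
    return -(np.log(p / (1 - p)) - base_odds)   # higher value = safer bin

# Hypothetical toy portfolio
rng = np.random.default_rng(0)
df = pd.DataFrame({"util_bin": rng.integers(0, 4, 5000)})
df["pd_score"] = 0.03 + 0.04 * df["util_bin"] + rng.normal(0, 0.005, 5000)
df["default"] = rng.binomial(1, df["pd_score"].clip(0, 1))
print(classical_woe(df, "util_bin", "default"))
print(ml_matched_woe(df, "util_bin", "pd_score", base_rate=df["default"].mean()))
```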
Procedia PDF Downloads 133
130 [Keynote Talk]: Monitoring of Ultrafine Particle Number and Size Distribution at One Urban Background Site in Leicester
Authors: Sarkawt M. Hama, Paul S. Monks, Rebecca L. Cordell
Abstract:
Within the Joaquin project, ultrafine particles (UFP) are continuously measured at one urban background site in Leicester. The main aims are to examine the temporal and seasonal variations in UFP number concentration and size distribution in an urban environment, and to try to assess the added value of continuous UFP measurements. In addition, the relations of UFP with more commonly monitored pollutants such as black carbon (BC), nitrogen oxides (NOX), particulate matter (PM2.5), and the lung deposited surface area (LDSA) were evaluated. The effects of meteorological conditions, particularly wind speed and direction, as well as temperature, on the observed distribution of ultrafine particles are also detailed. The study presents the results of an experimental investigation into the particle number concentration and size distribution of UFP, together with BC and NOX, with measurements taken at the Automatic Urban and Rural Network (AURN) monitoring site in Leicester. The monitoring was performed as part of the EU project JOAQUIN (Joint Air Quality Initiative) supported by the INTERREG IVB NWE program. The total number concentrations (TNC) were measured by a water-based condensation particle counter (W-CPC) (TSI model 3783), the particle number concentrations (PNC) and size distributions were measured by an ultrafine particle monitor (UFP TSI model 3031), the BC by a MAAP (Thermo-5012), and the NOX by an NO-NO2-NOX monitor (Thermo Scientific 42i), while a Nanoparticle Surface Area Monitor (NSAM, TSI 3550) was used to measure the LDSA (reported as μm2 cm−3) corresponding to the alveolar region of the lung between November 2013 and November 2015. Average particle number concentrations were lower in summer than in winter, which might be related mainly to particles directly emitted by traffic and to the more favorable conditions of atmospheric dispersion. Results showed a traffic-related diurnal variation of UFP, BC, NOX, and LDSA, with clear morning and evening rush hour peaks on weekdays and only an evening peak at weekends. Correlation coefficients were calculated between UFP and the other pollutants (BC and NOX). The highest correlation between them was found in the winter months. Overall, the results support the notion that local traffic emissions were a major contributor to atmospheric particle pollution, and a clear seasonal pattern was found, with higher values during the cold season. Keywords: size distribution, traffic emissions, UFP, urban area
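A minimal sketch of the kind of analysis described (diurnal averages split by weekday/weekend and seasonal Pearson correlations between UFP number concentration and co-pollutants) is given below. The file name and column names (PNC, BC, NOx, LDSA) are assumptions for illustration, not the actual Joaquin/AURN data format.

```python
import pandas as pd

# Hypothetical hourly dataset with assumed column names
df = pd.read_csv("leicester_aurn_hourly.csv", parse_dates=["datetime"], index_col="datetime")

df["season"] = df.index.month.map(lambda m: "winter" if m in (12, 1, 2) else
                                            "summer" if m in (6, 7, 8) else "other")
df["weekday"] = df.index.dayofweek < 5

# Traffic-related diurnal cycle: mean by hour of day, split weekday vs weekend
diurnal = df.groupby([df["weekday"], df.index.hour])[["PNC", "BC", "NOx", "LDSA"]].mean()

# Seasonal Pearson correlations between UFP number concentration and co-pollutants
corr_by_season = (df.groupby("season")[["PNC", "BC", "NOx"]]
                    .corr()
                    .xs("PNC", level=1)[["BC", "NOx"]])
print(diurnal.head(24))
print(corr_by_season)
```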
Procedia PDF Downloads 330
129 The Role of People and Data in Complex Spatial-Related Long-Term Decisions: A Case Study of Capital Project Management Groups
Authors: Peter Boyes, Sarah Sharples, Paul Tennent, Gary Priestnall, Jeremy Morley
Abstract:
Significant long-term investment projects can involve complex decisions. These are often described as capital projects, and the factors that contribute to their complexity include budgets, motivating reasons for investment, stakeholder involvement, interdependent projects, and the delivery phases required. The complexity of these projects often requires management groups to be established involving stakeholder representatives; these teams are inherently multidisciplinary. This study uses two university campus capital projects as case studies for this type of management group. Due to the interaction of projects with wider campus infrastructure and users, decisions are made at varying spatial granularity throughout the project lifespan. This spatial-related context brings complexity to the group decisions. Sensemaking is the process used to achieve group situational awareness of a complex situation, enabling the team to arrive at a consensus and make a decision. The purpose of this study is to understand the role of people and data in the complex spatial related long-term decision and sensemaking processes. The paper aims to identify and present issues experienced in practical settings of these types of decision. A series of exploratory semi-structured interviews with members of the two projects elicit an understanding of their operation. From two stages of thematic analysis, inductive and deductive, emergent themes are identified around the group structure, the data usage, and the decision making within these groups. When data were made available to the group, there were commonly issues with the perception of veracity and validity of the data presented; this impacted the ability of group to reach consensus and, therefore, for decisions to be made. Similarly, there were different responses to forecasted or modelled data, shaped by the experience and occupation of the individuals within the multidisciplinary management group. This paper provides an understanding of further support required for team sensemaking and decision making in complex capital projects. The paper also discusses the barriers found to effective decision making in this setting and suggests opportunities to develop decision support systems in this team strategic decision-making process. Recommendations are made for further research into the sensemaking and decision-making process of this complex spatial-related setting.Keywords: decision making, decisions under uncertainty, real decisions, sensemaking, spatial, team decision making
Procedia PDF Downloads 131
128 Identification, Synthesis, and Biological Evaluation of the Major Human Metabolite of NLRP3 Inflammasome Inhibitor MCC950
Authors: Manohar Salla, Mark S. Butler, Ruby Pelingon, Geraldine Kaeslin, Daniel E. Croker, Janet C. Reid, Jong Min Baek, Paul V. Bernhardt, Elizabeth M. J. Gillam, Matthew A. Cooper, Avril A. B. Robertson
Abstract:
MCC950 is a potent and selective inhibitor of the NOD-like receptor pyrin domain-containing protein 3 (NLRP3) inflammasome that shows early promise for the treatment of inflammatory diseases. The identification of the major metabolites of a lead molecule is an important step in the drug development process. It provides information about the metabolically labile sites in the molecule, thereby helping medicinal chemists to design metabolically stable molecules. To identify the major metabolites of MCC950, the compound was incubated with human liver microsomes, and subsequent analysis by (+)- and (−)-QTOF-ESI-MS/MS revealed a major metabolite formed by hydroxylation of the 1,2,3,5,6,7-hexahydro-s-indacene moiety of MCC950. This major metabolite can lose two water molecules, and three possible regioisomers were synthesized. Co-elution of the major metabolite with each of the synthesized compounds using HPLC-ESI-SRM-MS/MS revealed the structure of the metabolite, (±)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. Subsequent synthesis of the individual enantiomers and co-elution in HPLC-ESI-SRM-MS/MS using a chiral column revealed that the metabolite was R-(+)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. To study the possible cytochrome P450 enzyme(s) responsible for the formation of the major metabolite, MCC950 was incubated with a panel of cytochrome P450 enzymes. The results indicated that CYP1A2, CYP2A6, CYP2B6, CYP2C9, CYP2C18, CYP2C19, CYP2J2 and CYP3A4 are most likely responsible for the formation of the major metabolite. The biological activity of the major metabolite and the other synthesized regioisomers was also investigated by screening for NLRP3 inflammasome inhibitory activity and cytotoxicity. The major metabolite had 170-fold lower inhibitory activity (IC50 = 1238 nM) than MCC950 (IC50 = 7.5 nM). Interestingly, one regioisomer showed nanomolar inhibitory activity (IC50 = 232 nM). However, no evidence of cytotoxicity was observed with any of these synthesized compounds when tested in human embryonic kidney 293 cells (HEK293) and human liver hepatocellular carcinoma G2 cells (HepG2). These key findings give insight into the SAR of the hexahydroindacene moiety of MCC950 and reveal a metabolic soft spot which could be blocked by chemical modification. Keywords: cytochrome P450, inflammasome, MCC950, metabolite, microsome, NLRP3
Procedia PDF Downloads 252
127 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant
Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula
Abstract:
Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least 1 AI-ECG screen for AF pre-transplant above .10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084 on account of n=20 missing postoperative ECG), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08, the AI-ECG algorithm had a 98% (95% CI: 97 – 99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation on transplant patients with a positive AI-ECG screening. Further development and refinement of a post-transplant-specific algorithm may be warranted further to enhance the diagnostic yield of the ECG-based screening.Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning
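The sketch below shows how the reported discrimination and threshold metrics (AUROC, plus sensitivity and negative predictive value at the published 0.08 cut-off) can be computed; the labels and AI-ECG probabilities are synthetic stand-ins, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
n = 1084                                   # cohort size quoted in the abstract
y = rng.binomial(1, 0.037, n)              # ~3.7% POAF incidence (synthetic labels)
score = np.clip(rng.beta(1.2, 20, n) + 0.05 * y, 0, 1)   # synthetic AI-ECG probabilities

auroc = roc_auc_score(y, score)

pred = (score >= 0.08).astype(int)         # published AI-ECG threshold
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
sensitivity = tp / (tp + fn)
npv = tn / (tn + fn)
print(f"AUROC={auroc:.2f}  sensitivity={sensitivity:.2f}  NPV={npv:.2f}")
```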
Procedia PDF Downloads 133
126 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks
Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba
Abstract:
The ability for vehicles to communicate with other vehicles (V2V), the physical (V2I) and network (V2N) infrastructures, pedestrians (V2P), etc., collectively known as V2X (Vehicle-to-Everything), will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion, and supporting autonomous driving. The telecommunications research and industry communities and standardization bodies (notably 3GPP) have finally approved, in Release 14, cellular communications connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks and the nature of most V2X applications, which involve human safety, make it essential to protect V2X messages from attacks that can result in catastrophically wrong decisions/actions, including ones affecting road safety. Attack vectors include impersonation attacks, modification, masquerading, replay, MiM attacks, and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. However, this protocol suffers from technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control plane signaling overloads, as well as privacy preservation issues, and therefore cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways by which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, to allow for security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services under the LTE network while meeting V2X security requirements. Keywords: authentication, long term evolution, security, vehicle-to-everything
Procedia PDF Downloads 167
125 Emotion Motives Predict the Mood States of Depression and Happiness
Authors: Paul E. Jose
Abstract:
A new self-report measure named the General Emotion Regulation Measure (GERM) assesses four key goals for experiencing broad valenced groups of emotions: 1) trying to experience positive emotions (e.g., joy, pride, liking a person); 2) trying to avoid experiencing positive emotions; 3) trying to experience negative emotions (e.g., anger, anxiety, contempt); and 4) trying to avoid experiencing negative emotions. Although individual differences in GERM motives have been identified, evidence of validity with common mood outcomes is lacking. In the present study, whether GERM motives predict self-reported subjective happiness and depressive symptoms (CES-D) was tested with a community sample of 833 young adults. It was predicted that the GERM motive of trying to experience positive emotions would positively predict subjective happiness, and, analogously, that trying to experience negative emotions would predict depressive symptoms. An initial path model was constructed in which the four GERM motives predicted both subjective happiness and depressive symptoms. The fully saturated model included three non-significant paths, which were subsequently pruned, and a good-fitting model was obtained (CFI = 1.00; RMR = .007). Two GERM motives significantly predicted subjective happiness: 1) trying to experience positive emotions (β = .38, p < .001) and 2) trying to avoid experiencing positive emotions (β = -.48, p < .001). Thus, individuals who reported high levels of trying to experience positive emotions reported high levels of happiness, and individuals who reported low levels of trying to avoid experiencing positive emotions also reported high levels of happiness. Three GERM motives significantly predicted depressive symptoms: 1) trying to avoid experiencing positive emotions (β = .20, p < .001); 2) trying to experience negative emotions (β = .15, p < .001); and 3) trying to experience positive emotions (β = -.07, p < .001). In agreement with predictions, trying to experience positive emotions was positively associated with subjective happiness, and trying to experience negative emotions was positively associated with depressive symptoms. In essence, these two valenced mood states seem to be sustained by trying to experience similarly valenced emotions. However, the three other significant paths in the model indicated that emotional motives play a complicated role in supporting both positive and negative mood states. For subjective happiness, the GERM motive of not trying to avoid positive emotions, i.e., not avoiding happiness, was also a strong predictor of happiness. Thus, the people who report being the happiest are those individuals who not only strive to experience positive emotions but also are not ambivalent about them. The pattern for depressive symptoms was more nuanced. Individuals who reported higher depressive symptoms also reported higher levels of avoiding positive emotions and trying to experience negative emotions. The strongest predictor of depressed mood was avoiding positive emotions, which would suggest that happiness aversion or fear of happiness is an important motive for dysphoric people. Future work should determine whether these patterns of association are similar among clinically depressed people, and longitudinal data are needed to determine the temporal relationships between motives and mood states. Keywords: emotion motives, depression, subjective happiness, path model
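Because the predictors are exogenous, the saturated path model is equivalent to two multiple regressions, so the standardized path weights can be approximated as in the minimal sketch below. The data and variable names are synthetic assumptions; a dedicated SEM package would be used to prune paths and obtain the fit indices (CFI, RMR) reported in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the four GERM motives and two mood outcomes
rng = np.random.default_rng(42)
n = 833
X = pd.DataFrame(rng.standard_normal((n, 4)),
                 columns=["try_pos", "avoid_pos", "try_neg", "avoid_neg"])
happiness = 0.38 * X.try_pos - 0.48 * X.avoid_pos + rng.standard_normal(n) * 0.7
depression = -0.07 * X.try_pos + 0.20 * X.avoid_pos + 0.15 * X.try_neg + rng.standard_normal(n) * 0.9

def std_betas(y, X):
    """Standardized regression coefficients, standing in for path weights."""
    Z = (X - X.mean()) / X.std()
    z_y = (y - y.mean()) / y.std()
    return sm.OLS(z_y, sm.add_constant(Z)).fit().params.drop("const")

print(std_betas(happiness, X))    # should recover betas near .38 and -.48
print(std_betas(depression, X))   # should recover betas near .20, .15, -.07
```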
Procedia PDF Downloads 202
124 Non-Newtonian Fluid Flow Simulation for a Vertical Plate and a Square Cylinder Pair
Authors: Anamika Paul, Sudipto Sarkar
Abstract:
The flow behaviour of non-Newtonian fluids is quite complicated, although both the pseudoplastic (n < 1, n being the power-law index) and dilatant (n > 1) fluids in this category are used extensively in the chemical and process industries. Only limited research has been carried out on flow over a bluff body in a non-Newtonian flow environment. In the present numerical simulation, we control the vortices of a square cylinder by placing an upstream vertical splitter plate for pseudoplastic (n = 0.8), Newtonian (n = 1) and dilatant (n = 1.2) fluids. The position of the upstream plate is also varied to calculate the critical distance between the plate and cylinder, below which the cylinder vortex shedding is suppressed. Here the Reynolds number is taken as Re = 150 (Re = U∞a/ν, where U∞ is the free-stream velocity of the flow, a is the side of the cylinder and ν is the maximum value of the kinematic viscosity of the fluid), which falls in the laminar periodic vortex shedding regime. The vertical plate has dimensions of 0.5a × 0.05a and is placed on the cylinder centre-line. Gambit 2.2.30 is used to construct the flow domain and to impose the boundary conditions. In detail, we imposed a velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip boundary condition) at the upper and lower domain boundaries. A wall boundary condition (u = v = 0) is applied on both the cylinder and the splitter plate surfaces. The unsteady 2-D Navier-Stokes equations in fully conservative form are then discretized with second-order spatial and first-order temporal accuracy. These discretized equations are solved by Ansys Fluent 14.5 using the SIMPLE algorithm within the finite volume method. Here, fine meshing is used surrounding the plate and cylinder. Away from the cylinder, the grids are slowly stretched out in all directions. After a grid-independence study, a total of 297 × 208 grid points are used in the streamwise and flow-normal directions, respectively, for G/a = 3 (G being the gap between the plate and cylinder). The computed mean flow quantities obtained for Newtonian flow agree well with the available literature. The results are depicted with the help of instantaneous and time-averaged flow fields. Noteworthy qualitative and quantitative differences are obtained in the flow field with changes in the rheology of the fluid. Also, the aerodynamic forces and vortex shedding frequencies differ with the gap-ratio and the power-law index of the fluid. We can conclude from the present simulation that Fluent is capable of capturing the vortex dynamics of the unsteady laminar flow regime even in a non-Newtonian flow environment. Keywords: CFD, critical gap-ratio, splitter plate, wake-wake interactions, dilatant, pseudoplastic
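For reference, the sketch below spells out the power-law (Ostwald-de Waele) apparent viscosity used to represent the pseudoplastic (n = 0.8), Newtonian (n = 1), and dilatant (n = 1.2) cases, together with the Reynolds number definition quoted above (Re = U∞a/ν, with ν the maximum kinematic viscosity). The consistency index, shear-rate range, and flow values are hypothetical, not the study's settings.

```python
import numpy as np

def apparent_viscosity(K, n, shear_rate):
    """Ostwald-de Waele power-law model: mu_app = K * gamma_dot**(n - 1)."""
    return K * shear_rate**(n - 1.0)

def reynolds(U_inf, a, nu_max):
    """Re = U_inf * a / nu, with nu the maximum kinematic viscosity (as in the abstract)."""
    return U_inf * a / nu_max

# Hypothetical operating point: consistency index, density, cylinder side, free-stream velocity
K, rho, a, U_inf = 0.01, 1000.0, 0.02, 0.075
for n in (0.8, 1.0, 1.2):                   # pseudoplastic, Newtonian, dilatant
    gamma = np.logspace(-1, 3, 5)           # representative shear-rate range, 1/s
    mu = apparent_viscosity(K, n, gamma)
    nu_max = mu.max() / rho
    print(f"n={n}: mu_app={np.round(mu, 5)} Pa.s  ->  Re={reynolds(U_inf, a, nu_max):.0f}")
```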
Procedia PDF Downloads 112
123 The Need For Higher Education Stem Integrated into the Social Science
Authors: Luis Fernando Calvo Prieto, Raul Herrero Martínez, Mónica Santamarta Llorente, Sergio Paniagua Bermejo
Abstract:
The project presented here starts from a questioning of the compartmentalization of knowledge that occurs in university higher education. Several authors describe the problems associated with this reality (Rodamillans, M), indicating a lack of integration of the knowledge acquired by students across the subjects taken in their university degree. Furthermore, this disintegration is accentuated by the enrollment system of some Faculties and/or Schools of Engineering, which allows the student to take subjects outside the recommended curricular path. This problem becomes especially pronounced when trying to find an integration between humanistic subjects and the world of experimental sciences or engineering. This abrupt separation between humanities and sciences can be observed in the study plan of any Spanish degree. Except for subjects such as economics or English, in the Faculties of Sciences and the Schools of Engineering, the absence of any humanistic content is striking. At some point it was decided that the only value to take into account when designing their study plans was “usefulness”, considering the humanities systematically useless for their training, therefore banishing them from the study plans and forgetting the role they play in the capacity for both leadership and civic humanism in our professionals of tomorrow. The teaching guides for the different subjects in the branch of science or engineering do not include any competency, not even a transversal one, related to leadership capacity or to the need, in today's world, for social, civic and humanitarian knowledge on the part of the people who will offer medical, pharmaceutical, environmental, biotechnological or engineering solutions to a society that is built on more or less complex human relationships and on the historical events that have occurred so far. If we want professionals who know how to deal effectively and rationally with their leadership tasks and who, in addition, find and develop an ethically civic sense and a humanistic profile in their functions and scientific tasks, we must not overlook the importance, for the professionals themselves, of knowing the causes, facts and consequences of key events in the history of humanity. The words of the humanist Paul Preston are well known: “he who does not know his history is condemned to repeat the mistakes of the past.” The idea, therefore, that today there can be men of science in the way that the scientists of the Renaissance were becomes, at the very least, difficult to conceive. To think that a Leonardo da Vinci could be repeated in current times is an unrealistic idea; and although at first it may seem that the specialization of a professional is inevitable but beneficial, some authors consider (Sánchez Inarejos) that it has an extremely serious negative side effect: entrenchment behind the different postulates of each area of knowledge, disdaining everything that is foreign to it. Keywords: STEM, higher education, social sciences, history
Procedia PDF Downloads 66
122 Call-Back Laterality and Bilaterality: Possible Screening Mammography Quality Metrics
Authors: Samson Munn, Virginia H. Kim, Huija Chen, Sean Maldonado, Michelle Kim, Paul Koscheski, Babak N. Kalantari, Gregory Eckel, Albert Lee
Abstract:
In terms of screening mammography quality, neither the portion of reports that advise call-back imaging that should be bilateral versus unilateral nor how much the unilateral call-backs may appropriately diverge from 50–50 (left versus right) is known. Many factors may affect detection laterality: display arrangement, reflections preferentially striking one display location, hanging protocols, seating positions with respect to others and displays, visual field cuts, health, etc. The call-back bilateral fraction may reflect radiologist experience (not in our data) or confidence level. Thus, laterality and bilaterality of call-backs advised in screening mammography reports could be worthy quality metrics. Here, laterality data did not reveal a concern until drilling down to individuals. Bilateral screening mammogram report recommendations by five breast imaging, attending radiologists at Harbor-UCLA Medical Center (Torrance, California) 9/1/15--8/31/16 and 9/1/16--8/31/17 were retrospectively reviewed. Recommended call-backs for bilateral versus unilateral, and for left versus right, findings were counted. Chi-square (χ²) statistic was applied. Year 1: of 2,665 bilateral screening mammograms, reports of 556 (20.9%) recommended call-back, of which 99 (17.8% of the 556) were for bilateral findings. Of the 457 unilateral recommendations, 222 (48.6%) regarded the left breast. Year 2: of 2,106 bilateral screening mammograms, reports of 439 (20.8%) recommended call-back, of which 65 (14.8% of the 439) were for bilateral findings. Of the 374 unilateral recommendations, 182 (48.7%) regarded the left breast. Individual ranges of call-backs that were bilateral were 13.2–23.3%, 10.2–22.5%, and 13.6–17.9%, by year(s) 1, 2, and 1+2, respectively; these ranges were unrelated to experience level; the two-year mean was 15.8% (SD=1.9%). The lowest χ² p value of the group's sidedness disparities years 1, 2, and 1+2 was > 0.4. Regarding four individual radiologists, the lowest p value was 0.42. However, the fifth radiologist disfavored the left, with p values of 0.21, 0.19, and 0.07, respectively; that radiologist had the greatest number of years of experience. There was a concerning, 93% likelihood that bias against left breast findings evidenced by one of our radiologists was not random. Notably, very soon after the period under review, he retired, presented with leukemia, and died. We call for research to be done, particularly by large departments with many radiologists, of two possible, new, quality metrics in screening mammography: laterality and bilaterality. (Images, patient outcomes, report validity, and radiologist psychological confidence levels were not assessed. No intervention nor subsequent data collection was conducted. This uncomplicated collection of data and simple appraisal were not designed, nor had there been any intention to develop or contribute, to generalizable knowledge (per U.S. DHHS 45 CFR, part 46)).Keywords: mammography, screening mammography, quality, quality metrics, laterality
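The group-level laterality test can be reproduced directly from the year-1 counts reported above (222 left-sided of 457 unilateral call-backs); the single-radiologist counts in the second check are hypothetical, included only to show the per-reader version of the test.

```python
from scipy.stats import chisquare, binomtest

left, right = 222, 457 - 222          # year-1 unilateral call-backs from the abstract
chi2, p = chisquare([left, right])    # H0: 50-50 left/right split
print(f"chi2={chi2:.3f}, p={p:.2f}")  # p ~ 0.54, i.e. no group-level laterality bias

# Equivalent exact check for a single radiologist's counts (hypothetical numbers)
print(binomtest(k=30, n=80, p=0.5).pvalue)
```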
Procedia PDF Downloads 162
121 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery
Authors: Mareen Brosterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebosel, Wolfram T. Knoefel, Tanja Manser
Abstract:
Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, and this creates obstacles to the reporting events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of quality and safety of care in the department. One such instrument is the global trigger tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. Based on so-called ‘triggers’ (warning signals), indications of adverse events can be given. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the global trigger tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical, and one neurosurgery, departments of three university hospitals in Germany over a period of two months per department between January and July, 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1000 patient days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates of 25.5-72.1 per 1000 patient-days and from 25.0 to 60.0 per 100 admissions across the three departments. 98.1% of identified adverse events were associated with non-permanent harm without (Category E–71.7%) or with (Category F–26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in the implementation of the tool, such as the need for adaptation of the global trigger tool to the respective department. Conclusions: The global trigger tool is feasible and an effective instrument for quality measurement when adapted to the departmental specifics. Based on our experience, we recommend a continuous use of the tool thereby directly involving clinicians in quality improvement.Keywords: adverse events, global trigger tool, patient safety, record review
Procedia PDF Downloads 249
120 Printed Electronics for Enhanced Monitoring of Organ-on-Chip Culture Media Parameters
Authors: Alejandra Ben-Aissa, Martina Moreno, Luciano Sappia, Paul Lacharmoise, Ana Moya
Abstract:
Organ-on-Chip (OoC) stands out as a highly promising approach for drug testing, presenting a cost-effective and ethically superior alternative to conventional in vivo experiments. These cutting-edge devices emerge from the integration of tissue engineering and microfluidic technology, faithfully replicating the physiological conditions of targeted organs. Consequently, they offer a more precise understanding of drug responses without the ethical concerns associated with animal testing. When addressing the limitations of OoC due to conventional and time-consuming techniques, Lab-On-Chip (LoC) emerge as a disruptive technology capable of providing real-time monitoring without compromising sample integrity. This work develops LoC platforms that can be integrated within OoC platforms to monitor essential culture media parameters, including glucose, oxygen, and pH, facilitating the straightforward exchange of sensing units within a dynamic and controlled environment without disrupting cultures. This approach preserves the experimental setup, minimizes the impact on cells, and enables efficient, prolonged measurement. The LoC system is fabricated following the patented methodology protected by EU patent EP4317957A1. One of the key challenges of integrating sensors in a biocompatible, feasible, robust, and scalable manner is addressed through fully printed sensors, ensuring a customized, cost-effective, and scalable solution. With this technique, sensor reliability is enhanced, providing high sensitivity and selectivity for accurate parameter monitoring. In the present study, LoC is validated measuring a complete culture media. The oxygen sensor provided a measurement range from 0 mgO2/L to 6.3 mgO2/L. The pH sensor demonstrated a measurement range spanning 2 pH units to 9.5 pH units. Additionally, the glucose sensor achieved a measurement range from 0 mM to 11 mM. All the measures were performed with the sensors integrated in the LoC. In conclusion, this study showcases the impactful synergy of OoC technology with LoC systems using fully printed sensors, marking a significant step forward in ethical and effective biomedical research, particularly in drug development. This innovation not only meets current demands but also lays the groundwork for future advancements in precision and customization within scientific exploration.Keywords: organ on chip, lab on chip, real time monitoring, biosensors
Procedia PDF Downloads 16
119 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis
Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon
Abstract:
Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence and risk factors associated with the development of pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1st January 2017 and 31st December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and was performed via a sub-xiphoid incision, pericardiocentesis, or re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify those variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fisher exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Between 1st January 2017 and 31st December 2018, a total of 408 patients underwent CABG at our institution, and eight (1.9%) required drainage of a pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups. Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p < 0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) as being associated with the development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were identified as being associated with the development of clinically significant pericardial effusions. A high index of clinical suspicion and a low threshold for transthoracic echocardiography are pertinent to ensure this potentially lethal condition is not missed. Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage
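As an illustration of the univariate testing described, the sketch below runs a Fisher exact test on the preoperative atrial arrhythmia comparison; the absolute counts are reconstructed from the reported percentages (3 of 8 in the effusion group, roughly 35 of 400 controls) and are therefore assumptions rather than the study's exact data.

```python
from scipy.stats import fisher_exact

# Effusion group: 3 of 8 with preoperative atrial arrhythmia (37.5%)
# Control group: ~35 of 400 (8.8%) -- counts reconstructed from the reported percentages
table = [[3, 8 - 3],
         [35, 400 - 35]]
odds_ratio, p = fisher_exact(table)
print(f"OR={odds_ratio:.1f}, p={p:.3f}")   # p close to the 0.03 reported in the abstract
```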
Procedia PDF Downloads 161
118 Physical Model Testing of Storm-Driven Wave Impact Loads and Scour at a Beach Seawall
Authors: Sylvain Perrin, Thomas Saillour
Abstract:
The Grande-Motte port and seafront development project on the French Mediterranean coastline entailed evaluating wave impact loads (pressures and forces) on the new beach seawall and comparing the resulting scour potential at the base of the existing and new seawall. A physical model was built at ARTELIA’s hydraulics laboratory in Grenoble (France) to provide insight into the evolution of scouring overtime at the front of the wall, quasi-static and impulsive wave force intensity and distribution on the wall, and water and sand overtopping discharges over the wall. The beach was constituted of fine sand and approximately 50 m wide above mean sea level (MSL). Seabed slopes were in the range of 0.5% offshore to 1.5% closer to the beach. A smooth concrete structure will replace the existing concrete seawall with an elevated curved crown wall. Prior the start of breaking (at -7 m MSL contour), storm-driven maximum spectral significant wave heights of 2.8 m and 3.2 m were estimated for the benchmark historical storm event dated of 1997 and the 50-year return period storms respectively, resulting in 1 m high waves at the beach. For the wave load assessment, a tensor scale measured wave forces and moments and five piezo / piezo-resistive pressure sensors were placed on the wall. Light-weight sediment physical model and pressure and force measurements were performed with scale 1:18. The polyvinyl chloride light-weight particles used to model the prototype silty sand had a density of approximately 1 400 kg/m3 and a median diameter (d50) of 0.3 mm. Quantitative assessments of the seabed evolution were made using a measuring rod and also a laser scan survey. Testing demonstrated the occurrence of numerous impulsive wave impacts on the reflector (22%), induced not by direct wave breaking but mostly by wave run-up slamming on the top curved part of the wall. Wave forces of up to 264 kilonewtons and impulsive pressure spikes of up to 127 kilonewtons were measured. Maximum scour of -0.9 m was measured for the new seawall versus -0.6 m for the existing seawall, which is imputable to increased wave reflection (coefficient was 25.7 - 30.4% vs 23.4 - 28.6%). This paper presents a methodology for the setup and operation of a physical model in order to assess the hydrodynamic and morphodynamic processes at a beach seawall during storms events. It discusses the pros and cons of such methodology versus others, notably regarding structures peculiarities and model effects.Keywords: beach, impacts, scour, seawall, waves
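For orientation, the sketch below lists the standard Froude similitude factors used to transfer measurements from a 1:18 model to prototype scale; the example model force is hypothetical, and the factors ignore the fresh-water/sea-water density ratio.

```python
# Froude similitude factors for a geometric scale of 1:18
LAMBDA = 18.0

scale = {
    "length": LAMBDA,
    "time_and_velocity": LAMBDA**0.5,   # both scale with sqrt(lambda) under Froude similarity
    "pressure": LAMBDA,
    "force": LAMBDA**3,
}

# Example: a hypothetical 45 N peak on the model force balance mapped to prototype scale
model_force_N = 45.0
print(f"prototype force ~ {model_force_N * scale['force'] / 1000:.0f} kN")
```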
Procedia PDF Downloads 153
117 Plastic Waste Sorting by the People of Dakar
Authors: E. Gaury, P. Mandausch, O. Picot, A. R. Thomas, L. Veisblat, L. Ralambozanany, C. Delsart
Abstract:
In Dakar, demographic and spatial growth was accompanied by a 50% increase in household waste in the city between 1988 and 2008. In addition, a change in the nature of household waste was observed between 1990 and 2007. The share of plastic increased by 15% between 2004 and 2007 in Dakar. Plastics represent the seventh most-produced category of household waste per year in Senegal. The share of plastic in household and similar waste is 9% in Senegal. Waste management in the city of Dakar is a complex process involving a multitude of formal and informal actors with different perceptions and objectives. The objective of this study was to understand the motivations that could lead to sorting action, as well as the perception of plastic waste sorting within the Dakar population (households and institutions). The research question of this study was as follows: what factors may play a role in the sorting action? In an attempt to answer this, two approaches were developed: (1) an exploratory qualitative study based on semi-structured interviews with two groups of individuals concerned with the sorting of plastic waste: on the one hand, the experts in charge of waste management, and on the other, the households producing plastic waste. This study served as the basis for formulating the hypotheses and thus for the quantitative analysis. (2) A quantitative study using a questionnaire survey among households producing plastic waste in order to test the previously formulated hypotheses. The objective was to obtain quantitative results representative of the population of Dakar in relation to the behavior and the process inherent in the adoption of the plastic waste sorting action. The exploratory study shows that the perception of state responsibility varies between institutions and households. Public institutions perceive this as a shared responsibility because the problem of plastic waste affects many sectors (health, environmental education, etc.). Their involvement is geared more towards raising awareness and educating young people. As state action is limited, the emergence of private companies in this sector seems logical, as they are setting up collection networks to develop a recycling activity. The state plays a moral support role in these activities and encourages companies to do more. The quantitative analysis of the action of sorting plastic waste by the population of Dakar was able to demonstrate the attitudes and constraints inherent in the adoption of plastic waste sorting. Cognitive attitude, knowledge, and visible consequences were shown to correlate positively with sorting behavior. Thus, it would seem that the population of Dakar is most sensitive to what they see and what they know in adopting sorting behavior. It has also been shown that the strongest constraints that could slow down sorting behavior were the complexity of the process, the time required, and the lack of infrastructure in which to deposit plastic waste. Keywords: behavior, Dakar, plastic waste, waste management
Procedia PDF Downloads 94
116 Experimental Study of Impregnated Diamond Bit Wear During Sharpening
Authors: Rui Huang, Thomas Richard, Masood Mostofi
Abstract:
The lifetime of impregnated diamond bits and their drilling efficiency are in part governed by the bit wear conditions, not only the extent of the diamonds’ wear but also their exposure or protrusion out of the matrix bonding. As much as individual diamonds wear, the bonding matrix does also wear through two-body abrasion (direct matrix-rock contact) and three-body erosion (cuttings trapped in the space between rock and matrix). Although there is some work dedicated to the study of diamond bit wear, there is still a lack of understanding on how matrix erosion and diamond exposure relate to the bit drilling response and drilling efficiency, as well as no literature on the process that governs bit sharpening a procedure commonly implemented by drillers when the extent of diamond polishing yield extremely low rate of penetration. The aim of this research is (i) to derive a correlation between the wear state of the bit and the drilling performance but also (ii) to gain a better understanding of the process associated with tool sharpening. The research effort combines specific drilling experiments and precise mapping of the tool-cutting face (impregnated diamond bits and segments). Bit wear is produced by drilling through a rock sample at a fixed rate of penetration for a given period of time. Before and after each wear test, the bit drilling response and thus efficiency is mapped out using a tailored design experimental protocol. After each drilling test, the bit or segment cutting face is scanned with an optical microscope. The test results show that, under the fixed rate of penetration, diamond exposure increases with drilling distance but at a decreasing rate, up to a threshold exposure that corresponds to the optimum drilling condition for this feed rate. The data further shows that the threshold exposure scale with the rate of penetration up to a point where exposure reaches a maximum beyond which no more matrix can be eroded under normal drilling conditions. The second phase of this research focuses on the wear process referred as bit sharpening. Drillers rely on different approaches (increase feed rate or decrease flow rate) with the aim of tearing worn diamonds away from the bit matrix, wearing out some of the matrix, and thus exposing fresh sharp diamonds and recovering a higher rate of penetration. Although a common procedure, there is no rigorous methodology to sharpen the bit and avoid excessive wear or bit damage. This paper aims to gain some insight into the mechanisms that accompany bit sharpening by carefully tracking diamond fracturing, matrix wear, and erosion and how they relate to drilling parameters recorded while sharpening the tool. The results show that there exist optimal conditions (operating parameters and duration of the procedure) for sharpening that minimize overall bit wear and that the extent of bit sharpening can be monitored in real-time.Keywords: bit sharpening, diamond exposure, drilling response, impregnated diamond bit, matrix erosion, wear rate
Procedia PDF Downloads 99115 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is difficult, as it amounts to a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of complex biophysical modeling but places strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
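A minimal sketch of the data-driven evaluation described above, assuming a tabular dataset of county-level climatic features and corn yields; the placeholder data, the use of scikit-learn's RandomForestRegressor, and the expression of RMSEP and MAEP as percentages of mean yield are illustrative assumptions rather than the paper's exact setup.

```python
# Illustrative sketch only: 5-fold cross-validation of a Random Forest yield model.
# Dataset layout and error normalisation are assumed, not taken from the paper.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

# Placeholder for the 720 county-scale records (climatic predictors X, corn yield y).
X, y = make_regression(n_samples=720, n_features=10, noise=10.0, random_state=0)
y = y - y.min() + 50.0  # shift to positive "yields" so percentage errors make sense

rmsep_folds, maep_folds = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    err = model.predict(X[test_idx]) - y[test_idx]
    rmsep_folds.append(np.sqrt(np.mean(err ** 2)) / np.mean(y[test_idx]) * 100.0)
    maep_folds.append(np.mean(np.abs(err)) / np.mean(y[test_idx]) * 100.0)

print(f"RMSEP ~ {np.mean(rmsep_folds):.2f}%  MAEP ~ {np.mean(maep_folds):.2f}%")
```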
Procedia PDF Downloads 231114 Returns to Communities of the Social Entrepreneurship and Environmental Design (SEED) Integration Results in Architectural Training
Authors: P. Kavuma, J. Mukasa, M. Lusunku
Abstract:
Background and Problem: Widespread poverty in Africa, together with the negative impacts of climate change, are two great global challenges that call for everyone's involvement, including architects. This in particular challenges architects to acquire additional skills in both Social Entrepreneurship and Environmental Design (SEED). Regrettably, while architectural training in most African universities, including those in Uganda, lacks comprehensive implementation of SEED in its curricula, regulatory bodies have not contributed towards the effective integration of SEED in professional practice. In response to these challenges, Nkumba University (NU), under Architect Kavuma Paul and supported by the Uganda Chambers of Architects, initiated the integration of SEED in the undergraduate architectural curricula to cultivate SEED know-how and examples of best practice. Main activities: Initiated in 2007, and going beyond the traditional architectural degree curriculum, the NU Architecture department offers SEED courses, including courses that provoke a passion for creating desirable positive change in communities. Learning outcomes are assessed theoretically and practically through field projects. The first set of SEED graduates came out in 2012. As part of the NU post-graduation and alumni survey, in October 2014 the pioneer SEED graduates were contacted through automated reminder emails, followed by individual, repeated personal follow-ups via email and phone. Out of the 36 graduates who responded to the survey, 24 have formed four (4) private consortium agencies of 5-7 graduates, all of which have pioneered Ugandan-grown architectural social projects that include: fish farming in shipping containers; solar-powered mobile homes in shipping containers; solar-powered retail kiosks in rural and fishing communities; and floating homes in flood-prone areas. Primary outcomes include becoming self-reliant in business while creating the social change the architects desired in the communities. Examples of the SEED projects' returns to communities reported by the graduates include employment creation via fabrication, retail business and marketing, improved diets, safety of life and property, and decent shelter in remote mining and oil exploration areas. Negative outcomes, though not yet evaluated, include the disposal of used-up materials. Conclusion: The integration of SEED in architectural training has established a baseline benchmark and a replicable model based on best-practice projects.Keywords: architectural training, entrepreneurship, environment, integration
Procedia PDF Downloads 403113 Incidence and Molecular Mechanism of Human Pathogenic Bacterial Interaction with Phylloplane of Solanum lycopersicum
Authors: Indu Gaur, Neha Bhadauria, Shilpi Shilpi, Susmita Goswami, Prem D. Sharma, Prabir K. Paul
Abstract:
The concept of organic agriculture has been accepted as a novelty in Indian society, but there are no data available on the human pathogens colonizing plant parts as a result of such practices. The pattern and mechanism of their colonization also need to be understood in order to devise possible strategies for their prevention. In the present study, human pathogenic bacteria were isolated from organically grown tomato plants, and five of them were identified as Klebsiella pneumoniae, Enterobacter ludwigii, Serratia fonticola, Stenotrophomonas maltophilia and Chryseobacterium jejuense. Tomato plants were grown under controlled aseptic conditions at 25±1 °C, 70% humidity and a 12-hour L/D photoperiod. Six-week-old plants were divided into 6 groups of 25 plants each and treated as follows: Group 1: K. pneumoniae, Group 2: E. ludwigii, Group 3: S. fonticola, Group 4: S. maltophilia, Group 5: C. jejuense, Group 6: sterile distilled water (control). The inoculums for all treatments were prepared by overnight growth at a uniform concentration of 10⁸ cells/ml. Leaf samples from the above groups were collected at 0.5, 2, 4, 6 and 24 hours post inoculation for colony forming unit counts (CFU/cm² of leaf area) of the individual pathogens using the leaf impression method. These CFU counts were used for the in vivo colonization assay and the adherence assay of the individual pathogens. The resistance of these pathogens to at least 12 antibiotics was also studied. Based on these findings, S. fonticola was found to colonize the tomato phylloplane most prominently and was studied further. Tomato plants grown under the same controlled aseptic conditions as above were divided into 2 groups of 25 plants each and treated as follows: Group 1: S. fonticola, Group 2: sterile distilled water (control). Leaf samples from the above groups were collected at 0, 24, 48, 72 and 96 hours post inoculation and homogenized in suitable buffers for surface and cell wall protein isolation. The protein samples thus obtained were subjected to isocratic SDS-gel electrophoresis and analyzed. It was observed that the presence of S. fonticola could induce the expression of at least 3 additional cell wall proteins at different time intervals. Surface proteins also showed variation in their expression pattern at different sampling intervals. Further identification of these proteins by MALDI-MS and bioinformatics tools revealed the gene(s) involved in the interaction of S. fonticola with the tomato phylloplane.Keywords: cell wall proteins, human pathogenic bacteria, phylloplane, solanum lycopersicum
Procedia PDF Downloads 228112 Biosorption of Nickel by Penicillium simplicissimum SAU203 Isolated from Indian Metalliferous Mining Overburden
Authors: Suchhanda Ghosh, A. K. Paul
Abstract:
Nickel, an industrially important metal, is not mined in India due to the lack of primary mining resources. However, the chromite deposits occurring in the Sukinda and Baula-Nuasahi regions of Odisha, India, are reported to contain around 0.99% nickel entrapped in the goethite matrix of the lateritic, iron-rich ore. Weathering of the dumped chromite mining overburden often leads to contamination of the ground as well as the surface water with toxic nickel. Microbes inherent to this metal-contaminated environment are reported to be capable of removing as well as detoxifying various metals, including nickel. Nickel-resistant fungal isolates obtained in pure form from the metal-rich overburden were evaluated for their potential to biosorb nickel using their dried biomass. Penicillium simplicissimum SAU203 was the best nickel biosorbent among the 20 fungi tested and was capable of sorbing 16.85 mg Ni/g biomass from a solution containing 50 mg/l of Ni. The identity of the isolate was confirmed using 18S rRNA gene analysis. The sorption capacity of the isolate was further evaluated using the Langmuir and Freundlich adsorption isotherm models, and the results reflected energy-efficient sorption. Fourier-transform infrared spectroscopy studies of the nickel-loaded and control biomass on a comparative basis revealed the involvement of hydroxyl, amine and carboxylic groups in Ni binding. The sorption process was also optimized for several standard parameters, such as initial metal ion concentration, initial sorbent concentration, incubation temperature and pH, presence of additional cations, and pre-treatment of the biomass with different chemicals. Optimization led to significant improvements in nickel biosorption onto the fungal biomass. P. simplicissimum SAU203 could sorb 54.73 mg Ni/g biomass with an initial Ni concentration of 200 mg/l in solution and 21.8 mg Ni/g biomass with an initial biomass concentration of 1 g/l of solution. The optimum temperature and pH for biosorption were recorded as 30°C and 6.5, respectively. The presence of Zn and Fe ions improved the sorption of Ni(II), whereas cobalt had a negative impact. Pre-treatment of the biomass with various chemical and physical agents affected the efficiency of Ni sorption by P. simplicissimum SAU203: autoclaving as well as treatment with 0.5 M sulfuric acid and acetic acid reduced sorption compared to the untreated biomass, whereas biomass treated with NaOH, Na₂CO₃ and Tween 80 (0.5 M) showed augmented metal sorption. Hence, on the basis of the present study, it can be concluded that P. simplicissimum SAU203 has the potential for the removal as well as detoxification of nickel from contaminated environments in general, and particularly from the chromite mining areas of Odisha, India.Keywords: nickel, fungal biosorption, Penicillium simplicissimum SAU203, Indian chromite mines, mining overburden
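A minimal sketch of fitting the Langmuir and Freundlich isotherms mentioned above to equilibrium sorption data; the standard forms q = q_max*b*C/(1 + b*C) and q = K_F*C^(1/n) are textbook models, but the example concentrations and uptakes are hypothetical and not the study's measurements.

```python
# Illustrative sketch only: fit Langmuir and Freundlich isotherms to hypothetical
# equilibrium data (Ce = residual Ni concentration, mg/l; qe = uptake, mg Ni/g biomass).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, b):
    return q_max * b * Ce / (1.0 + b * Ce)

def freundlich(Ce, K_f, n):
    return K_f * Ce ** (1.0 / n)

Ce = np.array([10.0, 25.0, 50.0, 100.0, 150.0, 200.0])
qe = np.array([8.5, 16.0, 27.0, 41.0, 50.0, 54.7])   # hypothetical uptakes

(qmax, b), _ = curve_fit(langmuir, Ce, qe, p0=[60.0, 0.01])
(Kf, n), _ = curve_fit(freundlich, Ce, qe, p0=[2.0, 2.0])
print(f"Langmuir:   q_max = {qmax:.1f} mg/g, b = {b:.3f} l/mg")
print(f"Freundlich: K_F = {Kf:.2f}, n = {n:.2f}")
```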
Procedia PDF Downloads 191111 Improved Functions For Runoff Coefficients And Smart Design Of Ditches & Biofilters For Effective Flow detention
Authors: Thomas Larm, Anna Wahlsten
Abstract:
An international literature study has been carried out to compare commonly used methods for the dimensioning of transport systems and stormwater facilities for flow detention. The focus of the literature study regarding the calculation of design flow and detention has been the widely used Rational method and its underlying parameters. The impact of chosen design parameters such as return time, rain intensity, runoff coefficient, and climate factor has been studied. The parameters used in the calculations have been analyzed with regard to how they can be calculated and within what limits they can be used. Data used within different countries have been specified, e.g., recommended rainfall return times, estimated runoff times, and climate factors used for different cases and time periods. The literature study concluded that the determination of runoff coefficients is the most uncertain step and the one that most affects the calculated flow and the required detention volume. Proposals have been developed for new runoff coefficients, including a new proposed method with equations for calculating runoff coefficients as a function of return time (years) and rain intensity (l/s/ha), respectively. It is also suggested that the use of the Rational method should not be limited to a specific catchment size, contrary to what many design manuals recommend. The proposed relationships between return time or rain intensity and runoff coefficients need further investigation, including quantification of uncertainties. Examples of parameters that have not yet been considered are the influence on the runoff coefficients of different dimensioning rain durations and of the degree of water saturation of green areas, which will be investigated further. The influence of climate effects and design rain on the dimensioning of the stormwater facilities grassed ditches and biofilters (bioretention systems) has been studied, focusing on flow detention capacity. We have investigated how the calculated runoff coefficients, including the climate effect and the influence of an increased return time, affect the inflow to and dimensioning of the stormwater facilities. We have developed a smart design of ditches and biofilters that results in both high treatment and high flow detention effects and compared these with the effects of dry and wet ponds. Previous studies of biofilters have generally focused on the treatment of pollutants; their effect on flow volume, and how their flow detention capability can be improved, has rarely been studied. For both the new type of stormwater ditch and the biofilter, it is necessary to be able to simulate their performance in a model under larger design rains and a future climate, as these conditions cannot be tested in the field. The stormwater model StormTac Web has been used on case studies. The results showed that the new smart design of ditches and biofilters had a flow detention capacity similar to that of dry and wet ponds for the same facility area.Keywords: runoff coefficients, flow detention, smart design, biofilter, ditch
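A minimal sketch of a Rational-method design flow calculation in which the runoff coefficient is treated as a function of rain intensity, in the spirit of the proposed approach; the linear form of C(i), its coefficients, the climate factor, and the example catchment values are assumptions for illustration, not the paper's actual equations.

```python
# Illustrative sketch only: Rational method Q = C * i * A with a runoff coefficient
# that varies with rain intensity. The form and coefficients of C(i) are assumed.
def runoff_coefficient(i_lps_ha: float, c_base: float = 0.60, slope: float = 0.0005) -> float:
    """Hypothetical runoff coefficient increasing mildly with rain intensity (l/s/ha)."""
    return min(0.95, c_base + slope * i_lps_ha)

def design_flow(i_lps_ha: float, area_ha: float, climate_factor: float = 1.25) -> float:
    """Design flow in l/s for a catchment of area_ha hectares, including a climate factor."""
    c = runoff_coefficient(i_lps_ha)
    return c * i_lps_ha * area_ha * climate_factor

# Example: a rain intensity of 250 l/s/ha over a 12 ha catchment.
print(f"Q_design ~ {design_flow(250.0, 12.0):.0f} l/s")
```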
Procedia PDF Downloads 87110 A Systematic Analysis of Knowledge Development Trends in Industrial Maintenance Projects
Authors: Lilian Ogechi Iheukwumere-Esotu, Akilu Yunusa-Kaltungo, Paul Chan
Abstract:
Industrial assets are prone to degradation and eventual failure due to the repetitive loads and harsh environments in which they operate. These failures often lead to costly downtimes, which may involve the loss of critical assets and/or human lives. The rising pressure from stakeholders for optimized system outputs has placed further strain on business organizations. Traditional means of combating such failures are to adopt strategies capable of predicting, controlling, and/or reducing the likelihood of system failures. Turnaround, shutdown, and outage (TSO) projects are popular maintenance management activities conducted over a certain period of time. However, despite the critical and significant cost implications of TSOs, the management of the knowledge interface between academia and industry has, to the best of our knowledge, not been fully explored in comparison with other aspects of industrial operations. This is perhaps one of the reasons for the limited knowledge transfer between academia and industry, which has affected the outcomes of most TSOs. Until now, the study of knowledge development trends as a failure analysis tool in the management of TSO projects has not gained the required level of attention. Hence, this review provides useful references and their implications for future studies in this field. This study aims to harmonize the existing research trends of TSOs through a systematic review of more than 3,000 research articles published over 7 decades (1940 to date), extracted using specific research criteria and later streamlined using nominated inclusion and exclusion parameters. The information obtained from the analysis was then synthesized and coded into 8 parameters, allowing for a transformation into actionable outputs. The study revealed a variety of information, but the most critical findings can be classified into four categories: (1) empirical validation of available conceptual frameworks and models is still rare in practice; (2) traditional project management views for managing uncertainties are still dominant; (3) approaches towards the adoption and promotion of knowledge management systems, which support the creation, transfer, and application of knowledge within and outside the project organization, are inconsistent; and (4) the exploration of social practices in industrial maintenance project environments is under-represented within the existing body of knowledge. Thus, the intention of this study is to show the usefulness of a framework that incorporates findings emanating from careful analysis and illustration of evidence-based results as a suitable approach to tackling recurring failures in industrial maintenance projects.Keywords: industrial maintenance, knowledge management, maintenance projects, systematic review, TSOs
Procedia PDF Downloads 116109 Capacity Building in Dietary Monitoring and Public Health Nutrition in the Eastern Mediterranean Region
Authors: Marisol Warthon-Medina, Jenny Plumb, Ayoub Aljawaldeh, Mark Roe, Ailsa Welch, Maria Glibetic, Paul M. Finglas
Abstract:
Similar to Western countries, the Eastern Mediterranean Region (EMR) also presents major public health issues associated with the increased consumption of sugar, fat, and salt. Therefore, one of the policies of the World Health Organization's (WHO) EMR is to reduce the intake of salt, sugar, and fat (saturated fatty acids, trans fatty acids) to address the risk of non-communicable diseases (e.g. diabetes, cardiovascular disease, cancer) and obesity. The project objective is to assess the current status and to provide training and capacity development in the use of improved standardized methodologies for updated food composition data, dietary intake methods and suitable biomarkers of nutritional value, and to determine health outcomes in low- and middle-income countries (LMIC). Training exchanges have been developed with clusters of countries formed according to regional needs, including Sudan, Egypt and Jordan; Tunisia, Morocco, and Mauritania; and other Middle Eastern countries. This capacity building will lead to the development and sustainability of up-to-date national and regional food composition databases in LMIC for use in dietary monitoring and the assessment of food and nutrient intakes. Workshops were organized to provide training and capacity development in the use of improved standardized methodologies for food composition and food intake. Training needs were identified and short-term scientific missions organized for LMIC researchers, including (1) training and knowledge exchange workshops, (2) short-term exchange of researchers, (3) development and application of protocols and (4) development of strategies to reduce sugar and fat intake. An initial training workshop (Morocco, 2018) was attended by 25 participants from 10 EMR countries to review the current status and support the development of regional food composition data; 4 training exchanges are in progress. The use of improved standardized methodologies for food composition and dietary intake will produce robust measurements that will reinforce dietary monitoring and policy in LMIC. The capacity building from this project will lead to the development and sustainability of up-to-date national and regional food composition databases in EMR countries. Supported by the UK Medical Research Council, Global Challenges Research Fund (MR/R019576/1), and the World Health Organization's Eastern Mediterranean Region.Keywords: dietary intake, food composition, low and middle-income countries, status
Procedia PDF Downloads 161108 Evaluation of Prehabilitation Prior to Surgery for an Orthopaedic Pathway
Authors: Stephen McCarthy, Joanne Gray, Esther Carr, Gerard Danjoux, Paul Baker, Rhiannon Hackett
Abstract:
Background: The Go Well Health (GWH) platform is a web-based programme that allows patients to access personalised care plans and resources aimed at prehabilitation prior to surgery. The online digital platform delivers essential patient education and support for patients prior to undergoing total hip replacements (THR) and total knee replacements (TKR). This study evaluated the impact of an online digital platform (ODP) in terms of functional health outcomes, health-related quality of life, and hospital length of stay (LOS) following surgery. Methods: A retrospective cohort study comparing a cohort of patients who used the ODP to receive patient education and support (PES) prior to undergoing THR and TKR surgery with a cohort of patients who did not access the ODP and received usual care. Routinely collected Patient Reported Outcome Measures (PROMs) data were obtained on 2,406 patients who underwent a knee replacement (n=1,160) or a hip replacement (n=1,246) between 2018 and 2019 in a single surgical centre in the United Kingdom. The Oxford Hip and Knee Scores and the European Quality of Life Five-Dimension tool (EQ-5D-5L) were obtained both pre- and post-surgery (at 6 months), along with hospital LOS. Linear regression was used to estimate the impact of GWH on both health outcomes, and negative binomial regression was used to estimate the impact on LOS. All analyses adjusted for age, sex, Charlson Comorbidity Score and either pre-operative Oxford Hip/Knee scores or pre-operative EQ-5D scores. Fractional polynomials were used to represent potential non-linear relationships between the factors included in the regression model. Findings: For patients who underwent a knee replacement, GWH had a statistically significant impact on Oxford Knee Scores and EQ-5D-5L utility post-surgery (p=0.039 and p=0.002, respectively); GWH did not have a statistically significant impact on hospital LOS. For patients who underwent a hip replacement, GWH had a statistically significant impact on Oxford Hip Scores and EQ-5D-5L utility post-surgery (p<0.001 and p=0.009, respectively); GWH was also associated with a statistically significant reduction in hospital LOS (p<0.001). Conclusion: Health outcomes were higher for patients who used the GWH platform and underwent THR and TKR relative to those who received usual care prior to surgery. Patients who underwent a hip replacement and used GWH also had a reduced hospital LOS. These findings are important for health policy and decision makers, as they suggest that prehabilitation via an ODP can maximise health outcomes for patients following surgery whilst potentially making efficiency savings through reductions in LOS.Keywords: digital prehabilitation, online digital platform, orthopaedics, surgery
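A minimal sketch of the kind of adjusted regression models described above, assuming a tidy data frame of PROMs records; the column names, the simulated data, and the use of statsmodels formulas are assumptions, and the fractional-polynomial terms are approximated here by a simple quadratic age term rather than a formal FP selection.

```python
# Illustrative sketch only: adjusted linear and negative binomial models.
# Column names and data are hypothetical; fractional polynomials are approximated
# with a quadratic age term instead of a formal FP search.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "used_gwh": rng.integers(0, 2, n),
    "age": rng.normal(68, 9, n),
    "sex": rng.integers(0, 2, n),
    "charlson": rng.poisson(1.2, n),
    "oxford_pre": rng.normal(20, 6, n),
})
df["oxford_post"] = (25 + 3 * df.used_gwh + 0.6 * df.oxford_pre
                     - 0.05 * df.age + rng.normal(0, 5, n))
df["los_days"] = rng.poisson(np.exp(1.6 - 0.15 * df.used_gwh + 0.02 * df.charlson))

# Outcome model: post-operative Oxford score, adjusted for covariates.
ols_fit = smf.ols(
    "oxford_post ~ used_gwh + age + I(age**2) + sex + charlson + oxford_pre", data=df
).fit()

# Length-of-stay model: negative binomial regression on the count of days.
nb_fit = smf.glm(
    "los_days ~ used_gwh + age + sex + charlson + oxford_pre",
    data=df, family=sm.families.NegativeBinomial()
).fit()

print(ols_fit.params["used_gwh"], nb_fit.params["used_gwh"])
```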
Procedia PDF Downloads 190107 Evaluation of Cooperative Hand Movement Capacity in Stroke Patients Using the Cooperative Activity Stroke Assessment
Authors: F. A. Thomas, M. Schrafl-Altermatt, R. Treier, S. Kaufmann
Abstract:
Stroke is the main cause of adult disability, and upper limb function in particular is affected in most patients. Recently, cooperative hand movements have been shown to be a promising type of upper limb training in stroke rehabilitation. In these movements, which are frequently found in activities of daily living (e.g. opening a bottle, winding up a blind), the force of one upper limb has to be equally counteracted by the other limb to successfully accomplish a task. The use of standardized and reliable clinical assessments is essential to evaluate the efficacy of therapy and the functional outcome of a patient. Many assessments for upper limb function or impairment are available; however, the evaluation of cooperative hand movement tasks is rarely included in them. Thus, the aim of this study was (i) to develop a novel clinical assessment (CASA - Cooperative Activity Stroke Assessment) for the evaluation of patients' capacity to perform cooperative hand movements and (ii) to test its intra- and inter-rater reliability. Furthermore, CASA scores were compared to current gold-standard assessments of the upper extremity in stroke patients (i.e. Fugl-Meyer Assessment, Box & Blocks Test). The CASA consists of five cooperative activities of daily living: (1) opening a jar, (2) opening a bottle, (3) opening and closing a zip, (4) unscrewing a nut, and (5) opening a clip box. Here, the goal is to accomplish the tasks as fast as possible. In addition to the quantitative rating (i.e. time), which is converted to a 7-point scale, the quality of the movement is also rated on a 4-point scale. To test the reliability of the CASA, fifteen stroke subjects were tested twice within a week by the same two raters. Intra- and inter-rater reliability was calculated using the intraclass correlation coefficient (ICC) for the total CASA score and for the single items. Furthermore, Pearson correlation was used to compare the CASA scores to the scores of the Fugl-Meyer upper limb assessment and the Box & Blocks test, which were assessed in every patient in addition to the CASA. ICCs for the total CASA score indicated excellent reliability, and the single items showed good to excellent intra- and inter-rater reliability. Furthermore, the CASA score was significantly correlated with the Fugl-Meyer and Box & Blocks scores. The CASA provides a reliable assessment of cooperative hand movements, which are crucial for many activities of daily living. Owing to its low-cost setup and easy, fast administration, we suggest that it is well suited for clinical application. In conclusion, the CASA is a useful tool for assessing functional status and therapy-related recovery of cooperative hand movement capacity in stroke patients.Keywords: activities of daily living, clinical assessment, cooperative hand movements, reliability, stroke
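A minimal sketch of an ICC computation of the kind reported above, assuming long-format data with one row per rating; the column names and simulated scores are hypothetical, and the two-way ICC computed via pingouin's intraclass_corr is an illustrative assumption rather than the study's exact model.

```python
# Illustrative sketch only: intraclass correlation for total CASA scores.
# Data are simulated; column names and ICC model choice are assumptions.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n_subjects = 15
true_score = rng.normal(25, 6, n_subjects)

rows = []
for rater in ("rater_A", "rater_B"):
    for subj in range(n_subjects):
        rows.append({
            "subject": subj,
            "rater": rater,
            "casa_total": true_score[subj] + rng.normal(0, 1.5),  # rater noise
        })
ratings = pd.DataFrame(rows)

icc = pg.intraclass_corr(data=ratings, targets="subject",
                         raters="rater", ratings="casa_total")
print(icc[["Type", "ICC", "CI95%"]])
```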
Procedia PDF Downloads 319106 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties
Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier
Abstract:
The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA studies that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; in low strain rate regions, where such data are scarce, this is especially challenging. Including faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may also occur in the background and not only at the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, could rupture during a single fault-to-fault rupture. It is therefore essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool implements a methodology that calculates earthquake rates in a fault system by converting the slip-rate budget of each fault into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with one other frequently used model, to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected, located in an area of moderate to high seismicity (south-east of France) where the fault is assumed to have a low strain rate.Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA
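A minimal sketch of the conversion from a fault's slip rate to earthquake activity rates via a moment-rate budget distributed over a truncated Gutenberg-Richter distribution; the numerical values (fault area, shear modulus, b-value, magnitude bounds) are illustrative assumptions, and the scheme is a generic moment-balancing exercise, not the SHERIFS algorithm itself.

```python
# Illustrative sketch only: balance a fault's seismic moment rate against a
# truncated Gutenberg-Richter magnitude distribution. All values are assumed.
import numpy as np

MU = 3.0e10            # shear modulus (Pa), assumed
AREA = 40e3 * 15e3     # fault area: 40 km x 15 km, assumed
SLIP_RATE = 0.1e-3     # 0.1 mm/yr expressed in m/yr (low strain rate region)

def moment(mw: np.ndarray) -> np.ndarray:
    """Seismic moment (N*m) from moment magnitude (Hanks & Kanamori relation)."""
    return 10.0 ** (1.5 * mw + 9.05)

# Annual moment rate that the fault must release on average.
moment_rate = MU * AREA * SLIP_RATE

# Truncated Gutenberg-Richter shape between Mmin and Mmax; b-value assumed.
b, m_min, m_max = 1.0, 5.0, 7.0
mags = np.arange(m_min, m_max + 0.05, 0.1)
rel_rates = 10.0 ** (-b * mags)          # relative rate of each magnitude bin
rel_rates /= rel_rates.sum()

# Scale so that the summed moment release matches the fault's moment rate.
scale = moment_rate / np.sum(rel_rates * moment(mags))
annual_rates = scale * rel_rates

for m, r in zip(mags[::5], annual_rates[::5]):
    print(f"Mw {m:.1f}: {r:.2e} events/yr  (return period ~ {1.0 / r:,.0f} yr)")
```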
Procedia PDF Downloads 64