Search results for: in-phase component
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2577

387 Communication Skills Training in Continuing Nursing Education: Enabling Nurses to Improve Competency and Performance in Communication

Authors: Marzieh Moattari, Mitra Abbasi, Masoud Mousavinasab, Poorahmad

Abstract:

Background: Nurses in their daily practice need to communicate with patients and their families as well as with health professional team members. Effective communication contributes to patient satisfaction, which is a fundamental outcome of nursing practice. There is some evidence of patients' dissatisfaction with nurses' performance in the communication process. Therefore, improving nurses' communication skills is a necessity for nursing scholars and nursing administrators. Objective: The aim of the present study was to evaluate the effect of a 2-day workshop on nurses' competencies and performance in communication in a central hospital located in the south of Iran. Materials and Method: This is a randomized controlled trial comprising a convenience sample of 70 eligible nurses working in a central hospital. They were randomly divided into experimental and control groups. Nurses' competencies were measured by an Objective Structured Clinical Examination (OSCE), and their performance was measured by asking eligible patients hospitalized in the nurses' work setting during a one-month period to evaluate nurses' communication skills before and 2 months after the intervention. The experimental group participated in a 2-day workshop on communication skills. Content included the importance of communication (verbal and non-verbal) and basic communication skills such as initiating communication, active listening, and questioning techniques. Other subjects were patient teaching, problem solving, decision making, cross-cultural communication, and breaking bad news. Appropriate teaching strategies such as brief didactic sessions, small group discussion, and reflection were applied to enhance participants' learning. The data were analyzed using SPSS 16. Result: Significant between-group differences were found in nurses' communication skills competencies and performance in the posttest. The mean scores of the experimental group were higher than those of the control group in the total OSCE score as well as in all OSCE stations (p < 0.003). Overall posttest mean scores of patient satisfaction with nurses' communication skills and all of its four dimensions differed significantly between the two groups (p < 0.001). Conclusion: This study shows that educating nurses in communication skills improves their competencies and performance. Measurement of nurses' communication skills, as a central component of an efficient nurse-patient relationship, by valid and reliable methods of evaluation is recommended. It is also necessary to integrate the teaching of communication skills into continuing nursing education programs. Trial Registration Number: IRCT201204042621N11

Keywords: communication skills, simulation, performance, competency, objective structured clinical evaluation

Procedia PDF Downloads 218
386 The Effect of Alternative Organic Fertilizer and Chemical Fertilizer on Nitrogen and Yield of Peppermint (Mentha piperita)

Authors: Seyed Ali Mohammad Modarres Sanavy, Hamed Keshavarz, Ali Mokhtassi-Bidgoli

Abstract:

One of the biggest challenges for current and future generations is to produce sufficient food for the world population with the limited water resources available. Peppermint is a specialty crop used for food and medicinal purposes. Its main component is menthol, and it is used predominantly in oral hygiene products, pharmaceuticals, and foods. Although drought stress is considered a negative factor in agriculture, responsible for severe yield losses, medicinal plants grown under semi-arid conditions usually produce higher concentrations of active substances than the same species grown in moderate climates. Nitrogen (N) fertilizer management is central to the profitability and sustainability of forage crop production: sub-optimal N supply results in poor yields, while excess N application can lead to nitrate leaching and environmental pollution. To determine the response of peppermint to drought stress and different fertilizer treatments, a field experiment was conducted in a sandy loam soil at a site of the Agriculture Faculty of Tarbiat Modares University, Tehran, Iran. The experiment used a randomized complete block design with six fertilizer strategies (F1: control, F2: urea, F3: 75% urea + 25% vermicompost, F4: 50% urea + 50% vermicompost, F5: 25% urea + 75% vermicompost, and F6: vermicompost) and three irrigation regimes (S1: 45%, S2: 60%, and S3: 75% FC) with three replications. Traits such as nitrogen, chlorophyll, carotenoids, anthocyanin, flavonoids, and fresh biomass were studied. The results showed that the treatments had a significant effect on the studied traits: drought stress reduced photosynthetic pigment concentration and also reduced the fresh yield of peppermint. The non-stress condition showed greater chlorophyll content and fresh yield than the other irrigation treatments. The highest chlorophyll concentration and fresh biomass were obtained in the F2 fertilizer treatment. Severe water stress (S1) decreased both photosynthetic pigment content and fresh yield of peppermint. Supply of N could improve photosynthetic capacity by enhancing photosynthetic pigment content. Moreover, application of vermicompost appeared to significantly improve the organic carbon and available N, P, and K content in soil over urea fertilization alone. To achieve sustainable production of peppermint, application of vermicompost along with N through synthetic fertilizer is recommended for light-textured sandy loam soils.
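
A minimal sketch of how the factorial analysis described above could be run, assuming a long-format data file and column names (peppermint_trial.csv; block, fertilizer, irrigation, fresh_yield) that are our own placeholders, not the authors':

```python
# Illustrative two-way ANOVA for the 6 fertilizer x 3 irrigation factorial
# in a randomized complete block design (hypothetical column names).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("peppermint_trial.csv")  # columns: block, fertilizer, irrigation, fresh_yield

# block enters as an additive effect; fertilizer x irrigation is the factorial part
model = ols("fresh_yield ~ C(block) + C(fertilizer) * C(irrigation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II sums of squares
```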

Keywords: fresh yield, peppermint, synthetic nitrogen, vermicompost, water stress

Procedia PDF Downloads 217
385 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal ops all the way down to significant damage of communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this, a set of simulation and evaluation efforts is undertaken that processes, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, such that as they are impacted or degraded, the resulting functional availability of the system can be determined. A reinforcement learning agent structurally impacts the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, a structured method is created for evaluating the performance of candidate architectures against each other, yielding a metric that rates their resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
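
The core of the proposed metric can be illustrated with a toy simulation. The sketch below is our own simplification (random node loss instead of a trained reinforcement learning agent), with hypothetical node counts and function-to-node mappings:

```python
# Minimal sketch of the availability metric idea: system functions are
# mapped to hosting nodes, an attacker removes nodes at increasing
# intensity, and functional availability is the fraction of functions
# that still have a surviving host.
import random

nodes = {f"n{i}" for i in range(20)}
# hypothetical mapping: each system function is hosted on a set of 3 nodes
functions = {f"f{j}": set(random.sample(sorted(nodes), k=3)) for j in range(10)}

def functional_availability(alive):
    """Fraction of functions with at least one surviving host node."""
    return sum(1 for hosts in functions.values() if hosts & alive) / len(functions)

def simulate(attack_intensity, trials=1000):
    """Mean availability when a given fraction of nodes is lost."""
    total = 0.0
    for _ in range(trials):
        killed = set(random.sample(sorted(nodes), k=int(attack_intensity * len(nodes))))
        total += functional_availability(nodes - killed)
    return total / trials

# sweep degradation from normal ops down to heavy damage
for intensity in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(f"{intensity:.0%} nodes lost -> availability {simulate(intensity):.2f}")
```

In the proposed process, the random attacker would be replaced by a reinforcement learning agent that chooses which nodes, links, and characteristics (e.g., bandwidth, latency) to degrade.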

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 109
384 Modeling of Anode Catalyst against CO in Fuel Cell Using Material Informatics

Authors: M. Khorshed Alam, H. Takaba

Abstract:

The catalytic properties of a metal usually change when it is alloyed with another metal in polymer electrolyte fuel cells. The Pt-Ru alloy is one of the most widely discussed alloys for enhancing CO oxidation. In this work, we have investigated the CO coverage on the Pt2Ru3 nanoparticle with different atomic conformations of Pt and Ru using a combination of materials informatics and computational chemistry. Density functional theory (DFT) calculations were used to describe the adsorption strength of CO and H for different conformations of the Pt:Ru ratio on the Pt2Ru3 slab surface. Then, through Monte Carlo (MC) simulations, we examined the segregation behaviour of Pt as a function of surface atom ratio, subsurface atom ratio, and particle size of the Pt2Ru3 nanoparticle. We constructed a regression equation so as to reproduce the DFT results from the structural descriptors alone. Descriptors were selected for the regression equation: xa-b indicates the number of bonds between a targeted atom a and a neighboring atom b in the same layer (a, b = Pt or Ru); terms xa-H2 and xa-CO represent the number of atoms a binding H2 and CO molecules, respectively; xa-S is the number of atoms a on the surface; and xa-b- is the number of bonds between atom a and a neighboring atom b located outside the layer. Surface segregation in alloy nanoparticles is influenced by their component elements, composition, crystal lattice, shape, size, and by the nature of the adsorbates and their pressure, temperature, etc. Simulations were performed on nanoparticles of different sizes (2.0 nm, 3.0 nm) mixing Pt and Ru atoms in different conformations at a temperature of 333 K. In addition to the Pt2Ru3 alloy, we also considered pure Pt and Ru nanoparticles to compare the surface coverage by adsorbates (H2, CO). We assumed that the pure and Pt-Ru alloy nanoparticles have fcc crystal structures and a cubo-octahedral shape, bounded by (111) and (100) facets. Simulations were performed for up to 50 million MC steps. The MC results show that, in the presence of gases (H2, CO), the surfaces are occupied by the gas molecules, and in the equilibrium structure the coverage of H and CO depends on the nature of the surface atoms. In the initial structure, the Pt/Ru ratios on the surfaces for the different cluster sizes were in the range of 0.50-0.95. MC simulation was employed with partial pressures of H2 (PH2) of 70 kPa and CO (PCO) of 100-500 ppm. The Pt/Ru ratio decreases as the CO concentration increases, with little exception only for the small nanoparticle. The adsorption strength of CO on the Ru site is higher than on the Pt site, which would be one reason for the decreasing Pt/Ru ratio on the surface. Our study therefore identifies that controlling the nanoparticle size, composition, conformation of the alloying atoms, and the concentration and chemical potential of the adsorbates affects the stability of nanoparticle alloys and, ultimately, their overall catalytic performance during operation.
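
As an illustration of the descriptor-based regression step, a least-squares fit of DFT adsorption energies against bond-count descriptors might look like the following sketch; the file names and array layout are assumptions, not the authors' data:

```python
# Hedged sketch: fit DFT adsorption energies from structural bond-count
# descriptors (x_{a-b}, x_{a-CO}, ...) by ordinary least squares.
import numpy as np

# Each row: [x_PtPt, x_PtRu, x_RuRu, x_Pt_CO, x_Ru_CO, x_Pt_S, x_Ru_S]
X = np.loadtxt("descriptors.txt")   # structural descriptors per site (assumed file)
E = np.loadtxt("dft_energies.txt")  # DFT adsorption energies (assumed file)

A = np.column_stack([X, np.ones(len(X))])  # append intercept column
coef, residuals, rank, _ = np.linalg.lstsq(A, E, rcond=None)

E_pred = A @ coef
r2 = 1 - np.sum((E - E_pred) ** 2) / np.sum((E - E.mean()) ** 2)
print("coefficients:", coef, "R^2:", r2)
```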

Keywords: anode catalysts, fuel cells, material informatics, Monte Carlo

Procedia PDF Downloads 192
383 Enzymatic Hydrolysis of Sugar Cane Bagasse Using Recombinant Hemicellulases

Authors: Lorena C. Cintra, Izadora M. De Oliveira, Amanda G. Fernandes, Francieli Colussi, Rosália S. A. Jesuíno, Fabrícia P. Faria, Cirano J. Ulhoa

Abstract:

Xylan is the main component of hemicellulose, and its complete degradation requires the cooperative action of a system consisting of several enzymes, including endo-xylanases (XYN), β-xylosidases (XYL), and α-L-arabinofuranosidases (ABF). The recombinant hemicellulolytic enzymes used in the hydrolysis tests, an endo-xylanase (HXYN2), a β-xylosidase (HXYLA), and an α-L-arabinofuranosidase (ABF3), are produced by filamentous fungi and had previously been expressed heterologously in Pichia pastoris. The aim of this work was to evaluate the effect of these recombinant hemicellulolytic enzymes on the enzymatic hydrolysis of sugarcane bagasse (SCB). SCB pre-treated by steam explosion was hydrolyzed with different concentrations of HXYN2, HXYLA, and ABF3 in different ratios according to a 2^3 central composite rotational design (CCRD), including six axial points and six central points, totaling 20 assays. The influence of the factors was assessed by analyzing the main effects and the interactions between factors, calculated using Statistica 8.0 software (StatSoft Inc., Tulsa, OK, USA). The Pareto chart was constructed with this software and showed the Student's t-test values for each recombinant enzyme. The response variable was the amount of reducing sugars quantified by DNS (mg/mL). The Pareto chart showed that the recombinant enzyme ABF3 exerted the most significant effect during SCB hydrolysis, both at higher concentrations and at the lowest concentration of this enzyme. Analysis of variance according to Fisher's method (ANOVA) was performed; with the release of reducing sugars (mg/mL) as the response variable, the concentration of ABF3 was significant during SCB hydrolysis. The ANOVA result is in accordance with the analysis based on the Student's t statistic (Pareto chart). The degradation of the xylan main chain by HXYN2 and HXYLA was strongly influenced by the action of ABF3. A model was obtained that describes the performance of the interaction of all three enzymes for the release of reducing sugars and can be used to better explain the results of the statistical analysis. The formulation capable of releasing the highest levels of reducing sugars had the following concentrations: HXYN2 at 600 U/g of substrate, HXYLA at 11.5 U/g, and ABF3 at 0.32 U/g. In conclusion, the recombinant enzyme with the most significant effect during SCB hydrolysis was ABF3. It is noteworthy that the xylan present in SCB is arabinoglucuronoxylan; because of this, debranching enzymes are important to allow access of the enzymes that act on the main chain.
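
For illustration, the second-order response-surface model behind a 2^3 CCRD can be fitted as in the sketch below; the file and column names (ccrd_assays.csv; HXYN2, HXYLA, ABF3, sugars) are hypothetical, not the study's materials:

```python
# Illustrative response-surface fit for the 2^3 CCRD: reducing sugars as a
# quadratic function of the three enzyme loadings (20 assays in the design).
import pandas as pd
from statsmodels.formula.api import ols

df = pd.read_csv("ccrd_assays.csv")  # columns: HXYN2, HXYLA, ABF3, sugars (assumed)

model = ols(
    "sugars ~ HXYN2 + HXYLA + ABF3"
    " + I(HXYN2**2) + I(HXYLA**2) + I(ABF3**2)"
    " + HXYN2:HXYLA + HXYN2:ABF3 + HXYLA:ABF3",
    data=df,
).fit()
print(model.summary())  # the t statistics here are what a Pareto chart plots
```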

Keywords: experimental design, hydrolysis, recombinant enzymes, sugar cane bagasse

Procedia PDF Downloads 229
382 Geological Characteristics and Hydrocarbon Potential of M’Rar Formation Within NC-210, Atshan Saddle Ghadamis-Murzuq Basins, Libya

Authors: Sadeg M. Ghnia, Mahmud Alghattawi

Abstract:

The NC-210 study area is located in the Atshan Saddle between the Ghadamis and Murzuq basins, west Libya. The preserved Palaeozoic successions are predominantly clastics, reaching a thickness of more than 20,000 ft in the northern Ghadamis Basin depocenter. The Carboniferous series consists of interbedded sandstone, siltstone, shale, claystone, and minor limestone deposited in a fluctuating shallow marine to brackish lacustrine/fluviatile environment; it attains a maximum thickness of over 5,000 ft in the area of the Atshan Saddle, and 3,500 ft is recorded in outcrops on the Murzuq Basin flanks. The Carboniferous strata were uplifted and eroded during Late Paleozoic and early Mesozoic time in the northern Ghadamis Basin and Atshan Saddle. The M'rar Formation is Tournaisian to late Serpukhovian in age based on palynological markers and contains about 12 cycles of sandstone and shale deposited in shallow to outer neritic deltaic settings. The hydrocarbons in the M'rar reservoirs were possibly sourced from the Lower Silurian and possibly the Frasnian radioactive hot shales. The lateral, vertical, and thickness distribution of the M'rar Formation is possibly influenced by the reactivation of the Tumarline strike-slip fault and its conjugate faults. Pronounced structural paleohighs and paleolows, trending SE and NW through the Gargaf Saddle, possibly indicate the presence of two sub-basins in the area of the Atshan Saddle. A number of seismic reflectors identified from the existing 2D seismic covering the Atshan Saddle reflect the 12 deltaic sandstone cycles of the M'rar. M'rar7, M'rar9, M'rar10, and M'rar12 are characterized by high-amplitude reflectors, while M'rar2 and M'rar6 are characterized by medium-amplitude reflectors. These horizons are productive reservoirs in the study area. Available seismic data in the study area contributed significantly to the identification of M'rar potential traps, which are predominantly 3-way dip closures against fault zones. The seismic data also indicate the presence of a significant strike-slip component with the development of flower structures. The M'rar Formation hydrocarbon discoveries are concentrated mainly in the Atshan Saddle in the southern Ghadamis Basin, Libya, and in the Illizi Basin in southeast Algeria. Significant additional hydrocarbons may be present in areas adjacent to the Gargaf Uplift, along structural highs, and fringing the Hoggar Uplift, which provide suitable migration pathways.

Keywords: hydrocarbon potential, stratigraphy, Ghadamis basin, seismic, well data integration

Procedia PDF Downloads 74
381 Optimization of Heat Source Assisted Combustion on Solid Rocket Motors

Authors: Minal Jain, Vinayak Malhotra

Abstract:

Solid propellant ignition consists of rapid and complex events comprising heat generation and heat transfer, with flames spreading over the entire burning surface area. Proper combustion, and thus propulsion, depends heavily on the modes of heat transfer and the cavity volume. Fire safety is an integral component of a successful rocket flight; its failure may lead to overall failure of the rocket and to enormous forfeiture of resources, viz., money, time, and labor. When the propellant is ignited, thrust is generated and the casing heats up. This heat adds to the propellant heat, and the casing, if not at the proper orientation, starts burning as well, leading to the whole rocket being completely destroyed. This has necessitated active research emphasizing a comprehensive study of the inter-energy relations involved, for effective utilization of solid rocket motors in better space missions. The present work focuses on one of the major influences on this detrimental burning: the presence of an external heat source in addition to a potential heat source that is already ignited. The study is motivated by the need to ensure better combustion and fire safety, and is presented experimentally as a simplified small-scale model of a rocket carrying a solid propellant inside a cavity. The experimental setup comprises a paraffin wax candle as the pilot fuel and an incense stick as the external heat source. The candle is fixed, and the incense stick position and location are varied to investigate the influence of the pilot heat source. Different configurations of the external heat source at varying separation distances are tested. Regression rates of the pilot thin solid fuel are recorded to fundamentally understand the non-linear heat and mass transfer, which is the governing phenomenon. An attempt is made to understand the phenomenon fundamentally and the mechanism governing it. Results to date indicate non-linear heat transfer accompanied by the occurrence of flaming transition at selected critical distances. As separation distance increases, the effect drops in a non-monotonic trend. The parametric study results are likely to provide useful physical insight into the governing physics and to aid proper testing, validation, material selection, and design of solid rocket motors with enhanced safety.
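
The regression-rate measurement itself reduces to simple arithmetic; a minimal sketch, with dummy numbers standing in for the experimental readings:

```python
# Illustrative computation (not the authors' data): mean regression rate
# of the thin solid fuel as burned length over burn time, tabulated
# against the external heat-source separation distance.
def regression_rate(burned_length_mm: float, burn_time_s: float) -> float:
    """Mean regression rate in mm/s."""
    return burned_length_mm / burn_time_s

# (separation distance in cm, burned length in mm, burn time in s) - dummy values
trials = [(1.0, 40.0, 95.0), (2.0, 40.0, 110.0), (3.0, 40.0, 104.0)]
for distance_cm, length, t in trials:
    print(f"separation {distance_cm} cm -> {regression_rate(length, t):.3f} mm/s")
```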

Keywords: combustion, propellant, regression, safety

Procedia PDF Downloads 161
380 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners

Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani

Abstract:

Speaking skills merit meticulous attention on the side of both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed through conversation, because an erroneous form may adversely alter the content and purpose of the talk. Different types of tasks have served teachers in meeting numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that have a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication, leading the interlocutor to attempt to remedy the gap through talk. Therefore, this study was an attempt to investigate whether there was any significant difference between the effect of reasoning gap tasks and information gap tasks on the frequency of conversational strategies used in negotiation of meaning in classrooms, on the one hand, and on the speaking accuracy of Iranian intermediate EFL learners, on the other. After a pilot study to check the practicality of the treatments, at the outset of the main study the Preliminary English Test (PET) was administered to ensure the homogeneity of 87 out of 107 participants who attended the intact classes of a 15-session term in one control and two experimental groups. The speaking sections of the PET were used as pretest and posttest to examine speaking accuracy. The tests were recorded and transcribed, and speaking accuracy was measured as the percentage of clauses with no grammatical errors out of the total clauses produced. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practiced. Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting the differences, arranging sentences into stories, and a guessing game) were used in the experimental groups during the treatment sessions, and the students were required to practice conversational strategies when doing the speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. Statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Of the two, the reasoning gap tasks had a more significant impact on encouraging negotiation of meaning and increasing the frequency of conversational strategies in every session. The findings also indicated that both task types could help learners significantly improve their speaking accuracy, with the reasoning gap tasks more effective than the information gap tasks in improving learners' speaking accuracy.
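
The accuracy measure reduces to a simple ratio; a minimal sketch of the computation described above, with a hypothetical transcript count:

```python
# Speaking accuracy (%) = error-free clauses / total produced clauses * 100.
def speaking_accuracy(error_free_clauses: int, total_clauses: int) -> float:
    """Percentage of grammatically error-free clauses in a transcript."""
    if total_clauses == 0:
        raise ValueError("transcript contains no clauses")
    return 100.0 * error_free_clauses / total_clauses

print(speaking_accuracy(42, 60))  # -> 70.0 for a hypothetical transcript
```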

Keywords: accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks

Procedia PDF Downloads 309
379 Positive-Negative Asymmetry in the Evaluations of Political Candidates: The Mediating Role of Affect in the Relationship between Cognitive Evaluation and Voting Intention

Authors: Magdalena Jablonska, Andrzej Falkowski

Abstract:

The negativity effect is one of the most intriguing and well-studied psychological phenomena and can be observed in many areas of human life. The aim of the present study is to investigate how valence framing and positive and negative information about political candidates affect judgments of similarity to an ideal and a bad politician. Based on the theoretical framework of features of similarity, it is hypothesized that negative features have a stronger effect on similarity judgments than positive features of comparable value. Furthermore, the mediating role of affect is tested. Method: One hundred sixty-one people took part in an experimental study. Participants were divided into six research conditions that differed in the reference point (positive vs. negative framing) and the number of favourable and unfavourable information items about political candidates (a positive, neutral, and negative candidate profile). In the positive framing condition, the concept of an ideal politician was primed; in the negative condition, participants were to think about a bad politician. The effect of the independent variables on similarity judgments, affective evaluation, and voting intention was tested. Results: In the positive condition, the analysis showed that the negative effect of additional unfavourable features was greater than the positive effect of additional favourable features in judgments of similarity to the ideal candidate. In the negative framing condition, the ANOVA was non-significant, showing that neither additional positive features nor additional negative information had a significant impact on similarity to a bad political candidate. To explain this asymmetry, two mediation analyses were conducted, testing the mediating role of affect in the relationship between similarity judgments and voting intention. In both situations the mediating effect was significant, but a comparison of the two models showed that the mediation was stronger for negative framing. Discussion: The research supports the negativity effect and attempts to explain the psychological mechanism behind the positive-negative asymmetry. The results of the mediation analyses point to a stronger mediating role of affect in the relationship between cognitive evaluation and voting intention. This suggests that negative comparisons, leading to the activation of negative features, give rise to stronger emotions than positive features of comparable strength. The findings are in line with positive-negative asymmetry; however, by adopting Tversky's framework of features of similarity, the study integrates the cognitive mechanism of the negativity effect delineated in the contrast model of similarity with its emotional component resulting from the asymmetrical effect of positive and negative emotions on decision-making.
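
A sketch of the kind of mediation analysis described (the classic two-equation approach with a Sobel test); the data file and column names are our assumptions, not the study's materials:

```python
# Mediation sketch: similarity -> affect -> voting intention.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("candidates.csv")  # columns: similarity, affect, intention (assumed)

m1 = smf.ols("affect ~ similarity", data=df).fit()              # path a
m2 = smf.ols("intention ~ similarity + affect", data=df).fit()  # paths c' and b

a, sa = m1.params["similarity"], m1.bse["similarity"]
b, sb = m2.params["affect"], m2.bse["affect"]
sobel_z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
print(f"indirect effect a*b = {a*b:.3f}, Sobel z = {sobel_z:.2f}")
```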

Keywords: affect, framing, negativity effect, positive-negative asymmetry, similarity judgements

Procedia PDF Downloads 198
378 Time of Death Determination in Medicolegal Death Investigations

Authors: Michelle Rippy

Abstract:

Medicolegal death investigation has historically been a field that does not receive much research attention or advancement, as all of the subjects are deceased. Public health threats, drug epidemics, and contagious diseases are typically recognized in decedents first, and thorough, accurate death investigations can assist epidemiological research and prevention programs. One vital component of medicolegal death investigation is determining the decedent's time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of death in multiple-casualty circumstances, and providing vital facts in civil matters. Popular television portrays an unrealistic forensic ability to provide the exact time of death to the minute for someone found deceased with no witnesses present. In reality, the time of death of an unattended decedent can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, liver temperatures were taken invasively by death investigators to determine the decedent's core temperature, which was entered into an equation to estimate an approximate time of death. Owing to many inconsistencies in the placement of the thermometer and other variables, the accuracy of liver temperatures was dispelled, and this once commonplace practice lost scientific support. Currently, medicolegal death investigators utilize three major post-mortem changes at a death scene. Many factors are considered in the subjective determination of the time of death, including the cooling of the decedent (algor mortis), stiffness of the muscles (rigor mortis), internal settling of blood (livor mortis), clothing, ambient temperature, disease, and recent exercise. Current research is utilizing non-invasive, hospital-grade tympanic thermometers to measure the temperature in each of the decedent's ears. This tool can be used at the scene, and in conjunction with scene indicators it may provide a more accurate time of death. The research is significant for investigations and could bring accuracy to a historically inaccurate area, considerably improving criminal and civil death investigations. The goal of the research is to provide a scientific basis for what is currently an art: time of death determination in unwitnessed deaths. The research is in progress, with expected termination in December 2018. There are currently 15 completed case studies with vital information including the ambient temperature; the decedent's height, weight, sex, and age; layers of clothing; found position; whether medical intervention occurred; and whether the death was witnessed. These data will be analyzed with the multiple variables studied and will be available for presentation in January 2019.
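
For context, the classic rule-of-thumb equation for algor mortis (the Glaister equation) is sketched below. It is one of several scene indicators mentioned above, not the tympanic method under study, and real casework adjusts for ambient temperature, clothing, and body habitus:

```python
# Glaister equation: assumes ~1.5 F of cooling per hour from a normal
# body temperature of 98.4 F. A rough approximation only.
def glaister_hours_since_death(body_temp_f: float, normal_temp_f: float = 98.4) -> float:
    """Approximate post-mortem interval in hours."""
    return (normal_temp_f - body_temp_f) / 1.5

print(glaister_hours_since_death(92.4))  # -> 4.0 hours for a hypothetical reading
```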

Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic

Procedia PDF Downloads 118
377 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery

Authors: Mareen Brösterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebösel, Wolfram T. Knoefel, Tanja Manser

Abstract:

Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, which creates obstacles to the reporting of events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of the quality and safety of care in their department. One such instrument is the Global Trigger Tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. So-called 'triggers' (warning signals) can indicate possible adverse events. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the Global Trigger Tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical departments and one neurosurgery department of three university hospitals in Germany, over a period of two months per department, between January and July 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1000 patient-days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention index. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates of 25.5-72.1 per 1000 patient-days and 25.0-60.0 per 100 admissions across the three departments. 98.1% of identified adverse events were associated with non-permanent harm, either without (Category E, 71.7%) or with (Category F, 26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in the implementation of the tool, such as the need to adapt the Global Trigger Tool to the respective department. Conclusions: The Global Trigger Tool is feasible and an effective instrument for quality measurement when adapted to departmental specifics. Based on our experience, we recommend continuous use of the tool, thereby directly involving clinicians in quality improvement.
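
The reported rates follow from simple arithmetic; a minimal sketch with hypothetical department figures:

```python
# Adverse events per 1000 patient-days and per 100 admissions.
def ae_rates(adverse_events: int, patient_days: int, admissions: int):
    per_1000_pd = 1000 * adverse_events / patient_days
    per_100_adm = 100 * adverse_events / admissions
    return per_1000_pd, per_100_adm

# hypothetical department: 20 events over 500 patient-days and 60 admissions
print(ae_rates(20, 500, 60))  # -> (40.0, 33.33...)
```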

Keywords: adverse events, global trigger tool, patient safety, record review

Procedia PDF Downloads 249
376 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments

Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz

Abstract:

Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, northern Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) efficiency values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to prediction performance. The findings suggest that input sequence length has a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter to each catchment's characteristics. This aligns with the well-known "uniqueness of place" paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle; later, limited systematic tuning using grid search suggested a modification to 270 days. However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
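
To make the hyperparameter concrete, the sketch below shows an LSTM whose training windows are cut to a configurable input sequence length. This is our own minimal illustration (with simulated data and dimensions), not the MTS-LSTM used in the study:

```python
# Sketch: the input sequence length as a tunable hyperparameter.
import numpy as np
import torch
import torch.nn as nn

SEQ_LEN = 270  # the hyperparameter at issue (e.g., 270 vs. the common 365)

def make_windows(forcings, flow, seq_len):
    """Slice (time, features) forcings into (sample, seq_len, features) windows."""
    X = np.stack([forcings[i : i + seq_len] for i in range(len(forcings) - seq_len)])
    y = flow[seq_len:]
    return torch.tensor(X, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)

class StreamflowLSTM(nn.Module):
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, seq_len, hidden)
        return self.head(out[:, -1])   # predict flow at the final time step

forcings = np.random.rand(10_000, 5)   # dummy hourly meteorological inputs
flow = np.random.rand(10_000)          # dummy hourly streamflow
X, y = make_windows(forcings, flow, SEQ_LEN)
model = StreamflowLSTM()
print(model(X[:8]).shape)  # torch.Size([8, 1])
```

Tuning SEQ_LEN per catchment, rather than fixing it at 365 days, is the adjustment the study argues for.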

Keywords: LSTMs, streamflow, hyperparameters, hydrology

Procedia PDF Downloads 70
375 Analyzing Bridge Response to Wind Loads and Optimizing Design for Wind Resistance and Stability

Authors: Abdul Haq

Abstract:

The goal of this research is to better understand how wind loads affect bridges and to develop strategies for designing bridges that are more stable and resistant to wind. The response of bridges to wind is essential to their safety and functionality, especially in areas prone to high wind speeds or violent wind conditions. The study examines the aerodynamic forces and vibrations caused by wind and how they affect bridge construction. The research method begins with the underlying principles governing wind flow near bridges. Computational fluid dynamics (CFD) simulations are used to model and forecast the aerodynamic behaviour of bridges under different wind conditions. These models incorporate several factors, such as wind directionality, wind speed, turbulence intensity, and the influence of nearby structures or topography. The results provide significant new insights into the loads and pressures that wind places on different bridge elements, such as decks, pylons, and connections. Following the determination of the wind loads, the structural response of the bridges is assessed. Finite element analysis (FEA) is used to model the bridge's component parts and simulate their dynamic behavior under wind-induced forces. This work contributes to understanding which areas are at risk of excessive stresses, vibrations, or oscillations due to wind excitation. Because a bridge has inherent modes and frequencies, the study considers both static and dynamic responses. Various strategies are examined to optimize bridge designs against wind: altering the bridge's geometry, adding aerodynamic components, adding dampers or tuned mass dampers to lessen vibrations, and increasing structural rigidity. Through an analysis of several design modifications and their effectiveness, the study aims to offer guidelines and recommendations for wind-resistant bridge design. The numerical simulations and analyses are complemented by experimental studies: to validate the computational models and assess the practicality of the proposed design strategies, scaled bridge models are tested in a wind tunnel. These investigations provide valuable information on wind-induced forces, pressures, and flow patterns, helping to refine the numerical models and improve prediction precision. Using a combination of numerical models, physical testing, and long-term performance evaluation, the project aims to offer practical insights and recommendations for building wind-resistant bridges that are secure, long-lasting, and comfortable for users.
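
As a back-of-envelope companion to the CFD/FEA workflow, the quasi-static along-wind load on a bridge element follows the drag equation F = 0.5 * rho * Cd * A * V^2. The drag coefficient and deck area below are assumed illustrative values, not design figures from the study:

```python
# Quasi-static drag force on a bridge element, in newtons.
RHO_AIR = 1.225  # kg/m^3 at sea level

def along_wind_force(cd: float, area_m2: float, wind_speed_ms: float) -> float:
    """F = 0.5 * rho * Cd * A * V^2."""
    return 0.5 * RHO_AIR * cd * area_m2 * wind_speed_ms**2

# hypothetical deck segment: Cd ~ 1.1, 300 m^2 exposed area, 40 m/s design wind
print(f"{along_wind_force(1.1, 300.0, 40.0) / 1e3:.0f} kN")
```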

Keywords: wind effects, aerodynamic forces, computational fluid dynamics, finite element analysis

Procedia PDF Downloads 66
374 The Effectiveness of a Six-Week Yoga Intervention on Body Awareness, Warnings of Relapse, and Emotion Regulation among Incarcerated Females

Authors: James D. Beauchemin

Abstract:

Introduction: The incarceration of people with mental illness and substance use disorders is a major public health issue with social, clinical, and economic implications. Yoga participation has been associated with numerous psychological benefits; however, there is a paucity of research examining the impact of yoga on incarcerated populations. The purpose of this study was to evaluate the effectiveness of a six-week yoga intervention on several mental health-related variables, including emotion regulation, body awareness, and warnings of substance relapse, among incarcerated females. Methods: This study utilized a pre-post, three-arm design, with participants assigned to intervention, therapeutic community, or general population groups. A between-group analysis of covariance (ANCOVA) was conducted across groups to assess intervention effectiveness using the Difficulties in Emotion Regulation Scale (DERS), the Scale of Body Connection (SBC), and the Warnings of Relapse (AWARE) Questionnaire. Results: ANCOVA results for warnings of relapse (AWARE) revealed significant between-group differences (F(2, 80) = 7.15, p = .001; ηp² = .152), with significant pairwise comparisons between the intervention group and both the therapeutic community (p = .001) and the general population (p = .005) groups. Similarly, significant differences were found for emotion regulation (DERS) (F(2, 83) = 10.521, p < .001; ηp² = .278); pairwise comparisons indicated a significant difference between the intervention and general population groups (p = .01). Finally, significant differences between the intervention and control groups were found for body awareness (SBC) (F(2, 84) = 3.69, p = .029; ηp² = .081); pairwise comparisons indicated significant differences between the intervention group and both the therapeutic community (p = .028) and general population (p = .020) groups. Implications: Study results suggest that yoga may be an effective addition to integrative mental health and substance use treatment for incarcerated women, and they add to the growing evidence that holistic interventions may be an important component of treatment for this population. Specifically, given the prevalence of mental health and substance use disorders, the findings indicate that the changes in body awareness and emotion regulation resulting from yoga participation may be particularly beneficial for incarcerated people with substance use challenges. From a systemic perspective, this proactive approach may have long-term implications for both the physical and psychological well-being of the incarcerated population as a whole, thereby decreasing the need for traditional treatment. By integrating a more holistic, salutogenic model that emphasizes prevention, interventions like yoga may improve the wellness of this population while providing an alternative or complementary treatment option for those with current symptoms.
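
A minimal sketch of the between-group ANCOVA reported above, assuming a data file and column names (yoga_study.csv; group, pre, post) of our own invention:

```python
# ANCOVA: posttest score by group, adjusted for the pretest covariate.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("yoga_study.csv")  # columns: group, pre, post (assumed)
# group: 'yoga', 'therapeutic_community', or 'general_population' (assumed labels)

model = ols("post ~ C(group) + pre", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # covariate-adjusted F test for the group effect
```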

Keywords: wellness, solution-focused coaching, college students, prevention

Procedia PDF Downloads 121
373 Identification of Candidate Gene for Root Development and Its Association With Plant Architecture and Yield in Cassava

Authors: Abiodun Olayinka, Daniel Dzidzienyo, Pangirayi Tongoona, Samuel Offei, Edwige Gaby Nkouaya Mbanjo, Chiedozie Egesi, Ismail Yusuf Rabbi

Abstract:

Cassava (Manihot esculenta Crantz) is a major source of starch for various industrial applications. However, the traditional cultivation and harvesting methods of cassava are labour-intensive and inefficient, limiting the supply of fresh cassava roots for industrial starch production. To achieve improved productivity and quality of fresh cassava roots through mechanized cultivation, cassava cultivars with compact plant architecture and moderate plant height are needed. Plant architecture-related traits, such as plant height, harvest index, stem diameter, branching angle, and lodging tolerance, are critical for crop productivity and suitability for mechanized cultivation. However, the genetics of cassava plant architecture remain poorly understood. This study aimed to identify the genetic bases of the relationships between plant architecture traits and productivity-related traits, particularly starch content. A panel of 453 clones developed at the International Institute of Tropical Agriculture, Nigeria, was genotyped and phenotyped for 18 plant architecture and productivity-related traits at four locations in Nigeria. A genome-wide association study (GWAS) was conducted using the phenotypic data from a panel of 453 clones and 61,238 high-quality Diversity Arrays Technology sequencing (DArTseq) derived Single Nucleotide Polymorphism (SNP) markers that are evenly distributed across the cassava genome. Five significant associations between ten SNPs and three plant architecture component traits were identified through GWAS. We found five SNPs on chromosomes 6 and 16 that were significantly associated with shoot weight, harvest index, and total yield through genome-wide association mapping. We also discovered an essential candidate gene that is co-located with peak SNPs linked to these traits in M. esculenta. A review of the cassava reference genome v7.1 revealed that the SNP on chromosome 6 is in proximity to Manes.06G101600.1, a gene that regulates endodermal differentiation and root development in plants. The findings of this study provide insights into the genetic basis of plant architecture and yield in cassava. Cassava breeders could leverage this knowledge to optimize plant architecture and yield in cassava through marker-assisted selection and targeted manipulation of the candidate gene.
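
The core of a GWAS scan is a per-SNP association test. The toy sketch below (with simulated genotypes and a dummy trait) illustrates the idea, omitting the kinship and population-structure corrections a real cassava pipeline would include:

```python
# Toy per-SNP association scan: linear regression of trait on allele dosage.
import numpy as np
from scipy import stats

n_clones, n_snps = 453, 1000  # sizes echo the abstract; genotypes are simulated
genotypes = np.random.randint(0, 3, size=(n_clones, n_snps))  # 0/1/2 allele dosage
phenotype = np.random.rand(n_clones)                          # e.g., harvest index (dummy)

p_values = np.empty(n_snps)
for j in range(n_snps):
    slope, intercept, r, p, se = stats.linregress(genotypes[:, j], phenotype)
    p_values[j] = p

threshold = 0.05 / n_snps  # Bonferroni correction for multiple testing
print("significant SNPs:", np.where(p_values < threshold)[0])
```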

Keywords: Manihot esculenta Crantz, plant architecture, DArTseq, SNP markers, genome-wide association study

Procedia PDF Downloads 95
372 Subjective Realities of Neoliberalized Social Media Natives: Trading Affect for Effect

Authors: Rory Austin Clark

Abstract:

This primary research represents an ongoing, two-year, inductive mixed-methods project endeavouring to unravel the subjective reality of hyperconnected young adults in Western societies who have come of age with social media and smartphones. It is to be presented, analyzed, and contextualized through a written master's thesis as well as a documentary/mockumentary meshed with a Web 2.0 app providing prosumer, 'audience 2.0' functionality. The media component explores not only thematic issues, via real-life research interviews and fictional narrative, but also technical issues within the format relating to the quest for intimate, authentic connection and to the compelling dissemination of scholarly knowledge in an age of ubiquitous, personalized, daily digital media creation and consumption. The overarching hypothesis is that the aforementioned individuals process and make sense of their world, find shared meaning, and formulate notions-of-self in ways drastically different than pre-2007, via hyper-mediation of self and surroundings. In this pursuit, research questions have progressed from examining how young adult digital natives understand their use of social media to notions relating to the potential functionality of Web 2.0 for prosocial and altruistic engagement, on and offline, through the eyes of these individuals, no longer understood simply as digital natives but as social media natives and, at the conclusion of that phase of research, as 'neoliberalized social media natives' (NSMN). These represent the two most potent macro factors in the paradigmatic shift in NSMN's worldview: they are children not just of social media but also of the palpable shift toward neoliberal ways of thinking and being in Western socio-cultures since the 1980s, two phenomena that have a reflexive, æffective relationship with their perception of figure and ground. This phase also resulted in the working hypothesis of 'social media comparison anxiety' and a nascent understanding of NSMN's habitus and habitation in a subjective reality of fully converged online/offline worlds, where any phenomenon originating in one realm in some way is, or at the very least can be, re-presented or have effect in the other, creating hyperreal reception. This might also be understood through a 'society as symbolic cyborg' model, in which individuals have a 'digital essence': the entirety of online content that references a single person, as an auric, living, breathing cathedral, museum, gallery, and archive of self with infinite permutations and rhizomatic entry and exit points.

Keywords: affect, hyperreal, neoliberalism, postmodernism, social media native, subjective reality, Web 2.0

Procedia PDF Downloads 143
371 Understanding the Dynamics of Human-Snake Negative Interactions: A Study of Indigenous Perceptions in Tamil Nadu, Southern India

Authors: Ramesh Chinnasamy, Srishti Semalty, Vishnu S. Nair, Thirumurugan Vedagiri, Mahesh Ganeshan, Gautam Talukdar, Karthy Sivapushanam, Abhijit Das

Abstract:

Snakes form an integral component of ecological systems. Human population growth and the associated acceleration of habitat destruction and degradation have led to a rapid increase in human-snake encounters. The study aims at understanding the level of awareness, knowledge, and attitude of people towards negative human-snake interactions, and the role of awareness programmes, in the Moyar river valley, Tamil Nadu. The study area is part of the Mudumalai and Sathyamangalam Tiger Reserves, which are significant wildlife corridors between the Western Ghats and the Eastern Ghats in the Nilgiri Biosphere Reserve. The data were collected using a questionnaire covering 644 respondents spread across 18 villages between 2018 and 2019. The study revealed that 86.5% of respondents had strong negative perceptions of snakes, propelled by fear, superstitions, and the threat of snakebite; this attitude was common and did not vary among villages (F = 4.48; p < 0.05) or age groups (χ² = 1.946; p = 0.962). The cobra (27.8%, n = 294) and rat snake (21.3%, n = 225) were the most sighted species, and most snake encounters occurred during the monsoon season, i.e., July 35.6% (n = 218), June 19.1% (n = 117), and August 18.4% (n = 113). At least 1 out of 5 respondents was reportedly bitten by a snake during their lifetime. The species most commonly responsible for snakebite were the saw-scaled viper (32.6%, n = 42), followed by the cobra (17.1%, n = 22). About 21.3% (n = 137) of people reported livestock loss due to pythons and other snakes. Most people preferred medical treatment for snakebite (87.3%), whereas 12.7% still believed in traditional methods. The majority (82.3%) used precautionary measures, keeping traditional items such as garlic, kerosene, and snake plant to ward off snakes. About 30% of the respondents expressed a need for technical and monetary support from the forest department to help reduce human-snake conflict. It is concluded that the general perception in the study area is driven by fear and negative attitudes towards snakes. Though snakes such as the cobra are widely worshiped in the region, widespread myths and misconceptions still lead to the irrational killing of snakes. Awareness and innovative education programs rooted in the local context and language should be integrated at the village level to minimize the risk and associated threat of snakebite among the people. Results from this study will help policymakers devise appropriate conservation measures to reduce human-snake conflicts in India.
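
The chi-square test reported above (e.g., perception by age group) can be reproduced on a contingency table of response counts; the table below is hypothetical, not the survey data:

```python
# Chi-square test of independence on a dummy age-group x perception table.
import numpy as np
from scipy.stats import chi2_contingency

# rows: age groups; columns: negative vs. non-negative perception (dummy counts)
table = np.array([[120, 18],
                  [150, 25],
                  [140, 21],
                  [130, 20]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```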

Keywords: envenomation, health education, human-wildlife conflict, neglected tropical disease, snakebite mitigation, traditional practitioners

Procedia PDF Downloads 227
370 Advantages of Computer Navigation in Knee Arthroplasty

Authors: Mohammad Ali Al Qatawneh, Bespalchuk Pavel Ivanovich

Abstract:

Computer navigation has been introduced in total knee arthroplasty to improve the accuracy of the procedure. Computer navigation improves the accuracy of bone resection in the coronal and sagittal planes. It also normalizes the rotational alignment of the femoral component and allows the deformation of the soft tissues in the coronal plane to be fully assessed and balanced. This work is devoted to the advantages of using computer navigation technology in total knee arthroplasty in 62 patients (11 men and 51 women) suffering from gonarthrosis, aged 51 to 83 years, operated on using a computer navigation system and followed for up to 3 years after surgery. During the examination, the deformity variant was determined and radiometric parameters of the knee joints were measured, using the Knee Society Score (KSS), Functional Knee Society Score (FKSS), and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scales. Functional stress tests were also performed to assess the stability of the knee joint in the frontal plane, along with functional indicators of the range of motion. After surgery, improvement was observed on all scales: first, WOMAC values decreased 5.90-fold, with the median value falling to 11 points (p < 0.001); second, KSS increased 3.91-fold, reaching 86 points (p < 0.001); and third, FKSS increased 2.08-fold, reaching 94 points (p < 0.001). After TKA, axis deviation of the lower limbs of more than 3 degrees was observed in 4 patients (6.5%), and frontal instability of the knee joint in only 2 cases (3.2%). The incidence of sagittal instability of the knee joint after the operation was 9.6%. The range of motion increased 1.25-fold, with the volume of movement averaging 125 degrees (p < 0.001). Computer navigation increases the accuracy of the spatial orientation of the endoprosthesis components in all planes, keeps the deviation of the lower limb axis within ±3°, allows the best results of surgical interventions to be achieved, and can be used to solve most basic tasks, yielding excellent and good outcomes in 100% of cases according to the WOMAC scale. With diaphyseal deformities of the femur and/or tibia, as well as with obstruction of their medullary canal, the use of computer navigation is the method of choice. The use of computer navigation prevents the occurrence of flexion contracture and hyperextension of the knee joint during the distal femoral cut. Using the navigation system achieves high-precision implantation of the endoprosthesis; in addition, it achieves an adequate balance of the ligaments, which contributes to the stability of the joint, reduces pain, and allows a good functional result of the treatment to be achieved.

Keywords: knee joint, arthroplasty, computer navigation, advantages

Procedia PDF Downloads 90
369 Comparison of Extracellular miRNA from Different Lymphocyte Cell Lines and Isolation Methods

Authors: Christelle E. Chua, Alicia L. Ho

Abstract:

The development of a panel of differential gene expression signatures has been of interest in the field of biomarker discovery for radiation exposure. In the absence of exposed human subjects, lymphocyte cell lines have often been used as a surrogate for human whole blood when performing ex vivo irradiation studies. The extent of variation between different lymphocyte cell lines is currently unclear, especially with regard to the expression of extracellular miRNA. This study compares the expression profiles of extracellular miRNA isolated from different lymphocyte cell lines. It also compares the profiles obtained when different exosome isolation kits are used. Lymphocyte cell lines were created using lymphocytes isolated from healthy adult males of similar racial descent (Chinese American and Chinese Singaporean) and immortalised with Epstein-Barr virus. The cell lines were cultured in exosome-free cell culture media for 72 h, and the cell culture supernatant was removed for exosome isolation. Two exosome isolation kits were used: Total Exosome Isolation Reagent (TEIR, ThermoFisher), a polyethylene glycol (PEG)-based exosome precipitation kit, and ExoSpin (ES, Cell Guidance Systems), a PEG-based exosome precipitation kit that includes an additional size exclusion chromatography step. miRNAs from the isolated exosomes were extracted using the miRNeasy Mini Kit (Qiagen) and analysed using the nCounter miRNA assay (NanoString). Principal component analysis (PCA) results suggested that the overall extracellular miRNA expression profile differed between the lymphocyte cell line originating from the Chinese American donor and the cell line originating from the Chinese Singaporean donor. As the gender, age, and racial origins of both donors are similar, this may suggest that other genetic or epigenetic differences account for the variation in extracellular miRNA expression between lymphocyte cell lines. However, statistical analysis showed that only 3 miRNA genes had a fold difference > 2 at p < 0.05, suggesting that the differences may not be great enough to affect overall conclusions drawn from different cell lines. Subsequent analysis using cell lines from other donors will give further insight into the reproducibility of results when different cell lines are used. PCA results also suggested that the method of exosome isolation affects the expression profile: 107 miRNAs had a fold difference > 2 at p < 0.05, suggesting that the additional size exclusion chromatography step altered the subset of extracellular vesicles that was isolated. In conclusion, these results suggest that extracellular miRNA can be isolated and analysed from exosomes derived from lymphocyte cell lines; however, care must be taken in the choice of cell line and the method of exosome isolation used.
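
A sketch of the comparison workflow, fold change plus a per-miRNA t-test followed by PCA, using simulated stand-ins for the nCounter count matrix:

```python
# Illustrative differential-expression and PCA sketch (simulated data).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
group_a = rng.lognormal(5, 1, size=(6, 800))  # replicates x miRNAs, cell line A (dummy)
group_b = rng.lognormal(5, 1, size=(6, 800))  # cell line B (dummy)

log2fc = np.log2(group_b.mean(axis=0) / group_a.mean(axis=0))
_, pvals = ttest_ind(group_a, group_b, axis=0)
hits = np.where((np.abs(log2fc) > 1) & (pvals < 0.05))[0]  # fold difference > 2
print(f"{hits.size} miRNAs pass |FC| > 2 at p < 0.05")

scores = PCA(n_components=2).fit_transform(np.vstack([group_a, group_b]))
print(scores.shape)  # (12, 2): one point per sample for the PCA plot
```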

Keywords: biomarker, extracellular miRNA, isolation methods, lymphocyte cell line

Procedia PDF Downloads 199
368 Subjective Probability and the Intertemporal Dimension of Probability to Correct the Misrelation Between Risk and Return of a Financial Asset as Perceived by Investors. Extension of Prospect Theory to Better Describe Risk Aversion

Authors: Roberta Martino, Viviana Ventre

Abstract:

From a theoretical point of view, the risk associated with an investment and its expected return are directly proportional, in the sense that the market allows a greater result to those who are willing to take a greater risk. However, empirical evidence shows that this relationship is distorted in the minds of investors and is perceived as exactly the opposite. To deepen and understand the discrepancy between the actual actions of the investor and the theoretical predictions, this paper analyzes the essential parameters used for the valuation of financial assets, with greater attention to two elements: probability and the passage of time. Although these may seem at first glance to be two distinct elements, they are closely related. In particular, the error in the theoretical description of the relationship between risk and return lies in the failure to consider the impatience that is generated in the decision-maker when the decision-making context involves events that have not yet happened. In this context, probability loses its objective meaning and, in relation to the psychological aspects of the investor, can only be understood as the degree of confidence that the investor has in the occurrence or non-occurrence of an event. Moreover, the concept of objective probability considers neither the intertemporality that characterizes financial activities nor the limited cognitive capacity of the decision maker. Cognitive psychology has made it possible to understand that the mind compromises between quality and effort when faced with very complex choices. To evaluate an event that has not yet happened, it is necessary to imagine it happening. This projection into the future requires a cognitive effort and is what differentiates choices under conditions of risk from choices under conditions of uncertainty. In fact, since the receipt of the outcome in choices under risk conditions is imminent, the mechanism of self-projection into the future is not necessary to imagine the consequence of the choice, and decision makers can dwell on an objective analysis of the possibilities. Financial activities, on the other hand, develop over time, and objective probability is too static to capture the anticipatory emotions that the self-projection mechanism generates in the investor. Assuming that uncertainty is inherent in valuations of events that have not yet occurred, the focus must shift from risk management to uncertainty management. Only in this way can the intertemporal dimension of the decision-making environment and the haste generated by the financial market be taken into account. The work considers an extension of prospect theory with a temporal component, with the aim of describing the attitude towards risk with respect to the passage of time.
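
For reference, the prospect-theory evaluation that such an extension would build on can be written as follows (a sketch; the hyperbolic discount factor appended at the end is one plausible way to add the temporal component, not necessarily the authors' formulation).

% Prospect value of a lottery (x_i, p_i) in the Tversky-Kahneman form
V = \sum_i w(p_i)\, v(x_i), \qquad
v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda (-x)^{\beta} & x < 0 \end{cases}, \qquad
w(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}

% One possible temporal extension (an assumption): hyperbolically discount
% the prospect value of outcomes received after a delay t
V(t) = \frac{1}{1 + kt} \sum_i w(p_i)\, v(x_i)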

Keywords: impatience, risk aversion, subjective probability, uncertainty

Procedia PDF Downloads 107
367 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages

Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall

Abstract:

Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour's tidal range is 1.5 m, and the spring tides from January 2015 used in the modelling for this study are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated by earthquakes of magnitude 7.5, 8.0, 8.5, and 9.0 Mw from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled for the peak wave to coincide with both a low tide and a high tide. A single wave train, representing a 9.0 Mw earthquake at the Puysegur trench, is modelled for peak waves to coincide with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation, and current speeds. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when the peak wave coincides with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. The maximum current speed hazard is shown to be greater in specific locations such as the Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that the tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.
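
The tide-tsunami phasing experiment can be illustrated with a toy superposition (a sketch only: the study's runs used the hydrodynamic model ANUGA with dynamic tides, whereas the linear superposition below, with illustrative amplitudes and periods, ignores exactly the nonlinear interaction the study resolves).

import numpy as np

# Toy illustration of phasing a tsunami wave train across a 12-hour tide.
t = np.linspace(0, 24 * 3600, 5000)                   # 24 h in seconds
tide = 0.75 * np.sin(2 * np.pi * t / (12.42 * 3600))  # ~1.5 m semidiurnal range

def tsunami(t, t_peak, amp=0.5, period=1200.0):
    """Decaying sinusoidal wave train with its peak arrival at t_peak."""
    envelope = np.exp(-(((t - t_peak) / (3 * period)) ** 2))
    return amp * envelope * np.cos(2 * np.pi * (t - t_peak) / period)

# Shift the peak arrival to each hour across one 12-hour tidal phase.
for hour in range(13):
    eta = tide + tsunami(t, t_peak=hour * 3600)
    print(f"arrival at +{hour:2d} h: max water level {eta.max():.2f} m")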

Keywords: emergency management, Sydney, tide-tsunami interaction, tsunami impact

Procedia PDF Downloads 242
366 Quantification of Lawsone and Adulterants in Commercial Henna Products

Authors: Ruchi B. Semwal, Deepak K. Semwal, Thobile A. N. Nkosi, Alvaro M. Viljoen

Abstract:

The use of Lawsonia inermis L. (Lythraceae), commonly known as henna, has many medicinal benefits, and the plant is used as a remedy for the treatment of diarrhoea, cancer, inflammation, headache, jaundice, and skin diseases in folk medicine. Widely used for hair dyeing and temporary tattooing, henna body art has grown in popularity over the last 15 years, changing from a traditional bridal and festival adornment to an exotic fashion accessory. The naphthoquinone lawsone is one of the main constituents of the plant and is responsible for its dyeing property. Henna leaves typically contain 1.8–1.9% lawsone, which is used as a marker compound for the quality control of henna products. Adulteration of henna with various toxic chemicals such as p-phenylenediamine, p-methylaminophenol, p-aminobenzene, and p-toluenediamine to produce a variety of colours is very common and has resulted in serious health problems, including allergic reactions. This study aims to assess the quality of henna products collected from different parts of the world by determining the lawsone content as well as the concentrations of any adulterants present. Ultra high performance liquid chromatography-mass spectrometry (UPLC-MS) was used to determine the lawsone concentrations in 172 henna products. Separation of the chemical constituents was achieved on an Acquity UPLC BEH C18 column using gradient elution (0.1% formic acid and acetonitrile). The UPLC-MS results revealed that of the 172 henna products, 11 contained 1.0-1.8% lawsone, 110 contained 0.1-0.9% lawsone, whereas 51 samples did not contain detectable levels of lawsone. High performance thin layer chromatography was investigated as a cheaper, more rapid technique for the quality control of henna in relation to the lawsone content. The samples were applied using an automatic TLC Sampler 4 (CAMAG) to pre-coated silica plates, which were subsequently developed with acetic acid, acetone, and toluene (0.5:1.0:8.5 v/v). A Reprostar 3 digital system allowed the images to be captured. The results obtained corresponded to those from the UPLC-MS analysis. Vibrational spectroscopy analysis (MIR or NIR) of the powdered henna, followed by chemometric modelling of the data, indicates that this technique shows promise as an alternative quality control method. Principal component analysis (PCA) was used to investigate the data by observing clustering and identifying outliers. Partial least squares (PLS) multivariate calibration models were constructed for the quantification of lawsone. In conclusion, only a few of the samples analysed contained lawsone in high concentrations, indicating that most are of poor quality. Currently, the presence of adulterants that may have been added to enhance the dyeing properties of the products is being investigated.
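
A PLS calibration of the kind described can be prototyped as below (a minimal sketch assuming a matrix of NIR spectra with reference lawsone values from UPLC-MS; the data are placeholders, and preprocessing and component selection would need proper optimisation).

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical data: rows = henna samples, columns = NIR absorbances;
# y = reference lawsone content (%) from UPLC-MS.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 500))      # placeholder spectra
y = rng.uniform(0.0, 1.8, size=120)  # placeholder lawsone %

pls = PLSRegression(n_components=8)  # component count is an assumption
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.3f} % lawsone")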

Keywords: Lawsonia inermis, paraphenylenediamine, temporary tattooing, lawsone

Procedia PDF Downloads 459
365 Development of an Appropriate Method for the Determination of Multiple Mycotoxins in Pork Processing Products by UHPLC-TCFLD

Authors: Jason Gica, Yi-Hsieng Samuel Wu, Deng-Jye Yang, Yi-Chen Chen

Abstract:

Mycotoxins, harmful secondary metabolites produced by certain fungal species, pose significant risks to animals and humans worldwide. Their chemical stability leads to contamination during grain harvesting, transportation, and storage, as well as in processed food products. The prevalence of mycotoxin contamination has attracted significant attention due to its adverse impact on food safety and global trade. The secondary contamination pathway from animal products has been identified as an important route of exposure, posing health risks for livestock and for humans consuming contaminated products. Pork, one of the most consumed meat products in Taiwan according to the National Food Consumption Database, plays a critical role in the nation's diet and economy. Given its substantial consumption, pork processing products are a significant component of the food supply chain and a potential source of mycotoxin contamination. This study is paramount for formulating effective regulations and strategies to mitigate mycotoxin-related risks in the food supply chain. By establishing a reliable analytical method, this research contributes to safeguarding public health and enhancing the quality of pork processing products. The findings will serve as valuable guidance for policymakers, food industries, and consumers to ensure a safer food supply chain in the face of emerging mycotoxin challenges. An innovative and efficient analytical approach is proposed using ultra-high performance liquid chromatography coupled with a temperature-controlled fluorescence detector (UHPLC-TCFLD) to determine multiple mycotoxins in pork meat samples, owing to its exceptional capacity to detect multiple mycotoxins at very low concentrations, which makes it highly sensitive and reliable for comprehensive mycotoxin analysis. Additionally, its ability to detect multiple mycotoxins simultaneously in a single run significantly reduces the time and resources required for analysis, making it a cost-effective solution for monitoring mycotoxin contamination in pork processing products. The research aims to optimize an efficient QuEChERS mycotoxin extraction method and rigorously validate its accuracy and precision. The results will provide crucial insights into mycotoxin levels in pork processing products.
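
Method validation of the kind planned typically reports spike recoveries and precision; a minimal sketch of that arithmetic is given below (the figures are illustrative, not study data).

import statistics

# Spike-recovery calculation used in method validation (illustrative values).
spike_level = 10.0  # ng/g of a mycotoxin added to a blank pork sample
measured = [9.1, 9.6, 8.8, 9.4, 9.2]  # replicate determinations, ng/g

recoveries = [100 * m / spike_level for m in measured]
mean_rec = statistics.mean(recoveries)
rsd = 100 * statistics.stdev(recoveries) / mean_rec  # relative std. deviation

# Common acceptance criteria: recovery within 70-120%, RSD <= 20%.
print(f"mean recovery {mean_rec:.1f} %, RSD {rsd:.1f} %")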

Keywords: multiple-mycotoxin analysis, pork processing products, QuEChERS, UHPLC-TCFLD, validation

Procedia PDF Downloads 69
364 Designing Form, Meanings, and Relationships for Future Industrial Products. Case Study Observation of PAD

Authors: Elisabetta Cianfanelli, Margherita Tufarelli, Paolo Pupparo

Abstract:

The dialectical mediation between desires and objects, or between mass production and consumption, continues to evolve over time. This relationship is influenced both by the variable geometries of contexts that lie far from the mere design of product form and by aspects rooted in the very definition of industrial design. In particular, the push beyond macro-areas of innovation in the technological, social, cultural, formal, and morphological spheres, supported by recent theories in critical and speculative design, seems to be moving further and further away from the design of the formal dimension of advanced products. The articulated fabric of theories and practices that feeds the definition of 'hyperobjects', no longer objects, describes a common tension in all areas of design and production of industrial products. The latter are increasingly detached from the design of their form and meaning in mass production, thus losing the quality of products capable of social transformation. For years we have been living in a transformative moment regarding the design process in the definition of the industrial product. We are faced with a dichotomy in which there is, on the one hand, a reactionary aversion to new techniques of industrial production and, on the other hand, a sterile adoption of the techniques of mass production that we can now consider traditional. This ambiguity becomes even more evident when we talk about industrial products and realize that we are moving further and further away from the concept of 'form' as the synthesis of a design thought aimed at the aesthetic-emotional component as well as the functional one. The design of forms and their contents, as statutes of social acts, allows us to investigate the tension within mass production that crosses seasons, trends, technicalities, and sterile determinisms. Design culture has always determined the formal qualities of objects as a sum of aesthetic characteristics and functional and structural relationships that define a product as a coherent unit. The contribution proposes a reflection and a series of practical research experiences on the form of advanced products. This form is understood as a kaleidoscope of relationships: the search for an identity, the desire for democratization, and, between these two, the exploration of the aesthetic factor. The study of form also corresponds to the study of production processes, technological innovations, the definition of standards, distribution, advertising, and the vicissitudes of taste and lifestyles. Specifically, we investigate how the genesis of new forms for new meanings introduces changes in the corresponding production techniques. It therefore becomes fundamental to investigate, through the reflections and case studies presented in the contribution, the new techniques for producing and elaborating product forms, as a new immanent and determining element of the design process.

Keywords: industrial design, product advanced design, mass productions, new meanings

Procedia PDF Downloads 122
363 The International Fight against the Financing of Terrorism: Analysis of the Anti-Money Laundering and Combating Financing of Terrorism Regime

Authors: Loukou Amoin Marie Djedri

Abstract:

Financing is important for all terrorists – from the largest organizations in control of territories to the smallest groups – not only to spread fear through attacks but also to finance the expansion of terrorist dogmas. These organizations pose serious threats to the international community. The disruption of terrorist financing aims to create a hostile environment for the growth of terrorism and to considerably limit terrorist groups' capacities. The World Bank (WB), together with the International Monetary Fund (IMF), decided to include in their scope the fight against money laundering and the financing of terrorism, in order to assist Member States in protecting their internal financial systems from use and abuse by terrorists and in reinforcing their legal systems. To do so, they have adopted the Anti-Money Laundering/Combating the Financing of Terrorism (AML/CFT) standards set up by the Financial Action Task Force. This set of standards, recognized as the international standard for anti-money laundering and combating the financing of terrorism, has to be implemented by Member States in order to strengthen their judicial systems and relevant national institutions. However, we noted that, to date, some Member States still have significant AML/CFT deficiencies, which can constitute serious threats not only to a country's economic stability but also to the global financial system. In addition, studies have stressed that countries implement repressive measures more than preventive measures, which could be an important weakness in a state's security system. Furthermore, we noticed that the AML/CFT standards evolve slowly, while the techniques used by terrorist networks keep developing. The goal of the study is to show how to enhance global AML/CFT compliance through the work of the IMF and the WB, to help Member States consolidate their financial systems. To encourage and ensure the effectiveness of these standards, a methodology for assessing compliance with the AML/CFT standards has been created to follow up on the concrete implementation of these standards and to provide accurate technical assistance to countries in need. A risk-based approach has also been adopted as a key component of the implementation of the AML/CFT standards, with the aim of strengthening their efficiency. However, we noted that the assessment is not effective in enhancing AML/CFT measures because it seems to lack adaptation to each country's situation; in other words, internal and external factors are not sufficiently taken into account in a country assessment program. The purpose of this paper is to analyze the AML/CFT regime in the fight against the financing of terrorism and to find lasting solutions to achieve global AML/CFT compliance. The work of all the organizations involved in this fight is imperative to protect the financial network and to lead to the disintegration of terrorist groups in the future.

Keywords: AML/CFT standards, financing of terrorism, international financial institutions, risk-based approach

Procedia PDF Downloads 275
362 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature

Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi

Abstract:

The sintering step in powder metallurgy (P/M) processes is very sensitive, as it determines to a large extent the properties of the final component produced. Spark plasma sintering has been used extensively over the past decade to consolidate a wide range of materials, including metallic alloy powders. This novel, non-conventional sintering method has proven advantageous, offering full densification of materials, high heating rates, low sintering temperatures, and short sintering cycles compared with conventional sintering methods. Ti6Al4V is adjudged the most widely used α+β alloy due to its impressive mechanical performance in service environments, especially in the aerospace and automobile industries, where its low density supports the fuel efficiency these industries need. The P/M route has been a promising method for the fabrication of parts from Ti6Al4V alloy due to its reductions in cost and material loss and its ability to produce near-net and intricate shapes. However, the use of this alloy has been largely limited by its relatively poor hardness and wear properties. The effect of sintering temperature on the densification, hardness, and wear behaviors of spark plasma sintered Ti6Al4V powders was investigated in this study. Sintering of the alloy powders was performed in the 650–850°C temperature range at a constant heating rate, applied pressure, and holding time of 100°C/min, 50 MPa, and 5 min, respectively. Density measurements were carried out according to Archimedes' principle, and microhardness tests were performed on sectioned, as-polished surfaces at a load of 100 gf and a dwell time of 15 s. Dry sliding wear tests were performed at sliding loads of 5, 15, 25, and 35 N using the ball-on-disc tribometer configuration with WC as the counterface material. Microstructural characterization of the sintered samples and wear tracks was carried out using SEM and EDX techniques. The density and hardness of the sintered samples increased with increasing sintering temperature. Near-full densification (99.6% of the theoretical density) and a Vickers micro-indentation hardness of 360 HV were attained at 850°C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all the loading conditions examined, except at 25 N, indicating better mechanical properties at higher sintering temperatures. Worn surface analyses showed the wear mechanism was a synergy of adhesive and abrasive wear, although the former was prevalent.
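
The relative density figure quoted rests on Archimedes' principle; a minimal sketch of the calculation follows (the masses are illustrative, and 4.43 g/cm³ is the commonly cited theoretical density of Ti6Al4V).

# Relative density of a sintered compact via Archimedes' principle
# (illustrative masses; densities in g/cm^3).
m_air = 4.481           # mass of sample in air, g (hypothetical)
m_water = 3.468         # apparent mass suspended in water, g (hypothetical)
rho_water = 0.9978      # water density at ~22 deg C
rho_theoretical = 4.43  # Ti6Al4V theoretical density

rho = m_air / (m_air - m_water) * rho_water
relative = 100 * rho / rho_theoretical
print(f"bulk density {rho:.3f} g/cm^3 -> {relative:.1f}% of theoretical")  # ~99.6%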

Keywords: hardness, powder metallurgy, spark plasma sintering, wear

Procedia PDF Downloads 273
361 Combining Nitrocarburisation and Dry Lubrication for Improving Component Lifetime

Authors: Kaushik Vaideeswaran, Jean Gobet, Patrick Margraf, Olha Sereda

Abstract:

Nitrocarburisation is a surface hardening technique often applied to improve the wear resistance of steel surfaces. It is considered a promising solution in comparison with other processes such as flame spraying, owing to the formation of a diffusion layer which provides mechanical integrity, as well as to its cost-effectiveness. To improve other tribological properties of the surface, such as the coefficient of friction (COF), dry lubricants are utilized. Currently, the lifetime of steel components in many applications using either of these techniques individually is limited by the respective drawbacks of the two: a high COF for nitrocarburized surfaces and low wear resistance for dry lubricant coatings. To this end, the current study involves the creation of a hybrid surface by impregnating a nitrocarburized surface with a dry lubricant. The mechanical strength and hardness of Gerster SA's nitrocarburized surfaces, combined with the impregnation of the porous outermost layer with a solid lubricant, create a hybrid surface possessing outstanding wear resistance, a low friction coefficient, and high adherence to the substrate. Gerster SA has state-of-the-art technology for the surface hardening of various steels. Through their expertise in the field, the nitrocarburizing process parameters (atmosphere, temperature, dwell time) were optimized to obtain samples that have a distinct porous structure (in terms of size, shape, and density), as observed by metallographic and microscopic analyses. The porosity thus obtained is suitable for the impregnation of a dry lubricant. A commercially available dry lubricant with a thermoplastic matrix was employed for the impregnation process, which was optimized to obtain a void-free interface with the surface of the nitrocarburized layer (henceforth called the hybrid surface). In parallel, metallic samples without nitrocarburisation were also impregnated with the same dry lubricant as a reference (henceforth called the reference surface). The reference and nitrocarburized surfaces, with and without the dry lubricant, were tested for their tribological behavior by sliding against a quenched steel ball using a nanotribometer. Without any lubricant, the nitrocarburized surface showed a wear rate 5x lower than the reference metal. In the presence of a thin film of dry lubricant (< 2 micrometers) and under high loads (500 mN, or ~800 MPa), the COF for the reference surface increased from ~0.1 to > 0.3 within 120 m, while the hybrid surface retained a COF < 0.2 for over 400 m of sliding. In addition, the steel ball sliding against the reference surface showed heavy wear, whereas the corresponding ball sliding against the hybrid surface showed very limited wear. Electron microscopy observations of the sliding tracks in the hybrid surface show the presence of the nitrocarburized nodules as well as the lubricant, whereas no traces of lubricant were found in the sliding track on the reference surface. In this manner, the clear advantage of combining nitrocarburisation with the impregnation of a dry lubricant to form a hybrid surface has been demonstrated.
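
The quoted pairing of a 500 mN load with ~800 MPa is consistent with Hertzian contact of a small ball on a flat surface; a quick check follows (the ball radius and elastic constants are assumptions, e.g. a 3 mm diameter quenched steel ball).

import math

# Peak Hertzian contact pressure for a sphere on a flat (assumed geometry).
F = 0.5             # normal load, N (500 mN)
R = 1.5e-3          # ball radius, m (3 mm diameter ball, assumed)
E, nu = 210e9, 0.3  # Young's modulus and Poisson's ratio of steel (assumed)

E_star = E / (2 * (1 - nu**2))  # reduced modulus for two identical bodies
p_max = (6 * F * E_star**2 / (math.pi**3 * R**2)) ** (1 / 3)
print(f"peak Hertzian pressure ~ {p_max / 1e6:.0f} MPa")  # ~830 MPa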

Keywords: dry lubrication, hybrid surfaces, improved wear resistance, nitrocarburisation, steels

Procedia PDF Downloads 122
360 A Comparison and Discussion of Modern Anaesthetic Techniques in Elective Lower Limb Arthroplasties

Authors: P. T. Collett, M. Kershaw

Abstract:

Introduction: The question of which method of anesthesia provides better results for lower limb arthroplasty is a continuing debate. Multiple meta-analyses have been performed with no clear consensus. The current recommendation is to use neuraxial anesthesia for lower limb arthroplasty; however, the evidence to support this decision is weak. The Enhanced Recovery After Surgery (ERAS) society has recommended that either technique can be used as part of a multimodal anesthetic regimen. A local study was performed to see if current anesthetic practice correlates with the current recommendations and to evaluate the efficacy of the different techniques utilized. Method: 90 patients who underwent total hip or total knee replacements at Nevill Hall Hospital between February 2019 and July 2019 were reviewed. Data collected included the anesthetic technique, day one opiate use, pain score, and length of stay. The data were collected from anesthetic charts and the pain team follow-up forms. Analysis: The average age of patients undergoing lower limb arthroplasty was 70. Of these, 83% (n=75) received a spinal anesthetic and 17% (n=15) received a general anesthetic. For patients undergoing knee replacement, the average day one pain score was 2.29 under general anesthetic and 1.94 under spinal anesthetic. For hip replacements, the scores were 1.87 and 1.8, respectively. There was no statistically significant difference between these scores. Day one opiate usage was significantly higher in knee replacement patients who were given a general anesthetic (45.7 mg IV morphine equivalent) than in those who were operated on under spinal anesthetic (19.7 mg). This difference was not noticeable in hip replacement patients. There was no significant difference in length of stay between the two anesthetic techniques. Discussion: There was no significant difference in the day one pain score between patients who received a general or a spinal anesthetic for either knee or hip replacements. The higher pain scores in the knee replacement group overall are consistent with this being a more painful procedure. This is a small patient population, which means any difference between the two groups is unlikely to be representative of a larger population. The pain scale has 4 points, which makes it difficult to identify a significant difference between pain scores. Conclusion: There is currently little standardization between the different anesthetic approaches utilized in Nevill Hall Hospital. This is likely due to the lack of a standardized anesthetic regimen. ERAS recommendations identify a standard anesthetic protocol as a core component of enhanced recovery. The results of this study and the guidance from the ERAS society will support the implementation of a new health-board-wide ERAS protocol.

Keywords: anaesthesia, orthopaedics, intensive care, patient centered decision making, treatment escalation

Procedia PDF Downloads 127
359 Wind Load Reduction Effect of Exterior Porous Skin on Facade Performance

Authors: Ying-Chang Yu, Yuan-Lung Lo

Abstract:

Building envelope design is one of the most popular design fields in the architectural profession nowadays. The main design trend of such systems is to highlight the designer's aesthetic intention in the outlook of the building project. Due to this trend, the building envelope now contains more and more layers of components, such as double skin façades, photovoltaic panels, solar control systems, or even ornamental components. These exterior components are designed for various functional purposes. Most researchers focus on how these exterior elements should be structurally secured. However, few researchers consider that these elements could help to improve the performance of the façade system. When exterior elements are deployed at large scale, they create an additional layer outside the original façade system and act like a porous interface which interferes with the aerodynamics of the façade surface at micro-scale. Standard façade performance consists of 'water penetration, air infiltration rate, operation force, and component deflection ratio', and these key performances are largely driven by the 'design wind load' coded in local regulations. A design wind load is usually determined by the maximum wind pressure occurring on the surface due to the geometry or location of the building in extreme conditions. This research was designed to identify the air damping phenomenon of micro turbulence caused by a porous exterior layer, which leads to a reduction of the surface wind load and thereby improves façade system performance. A series of wind tunnel tests on a dynamic pressure sensor array covered by porous exterior skins of various scales was conducted to verify the wind pressure reduction effect. The testing specimens were designed to simulate a typical building with a two-meter extension offset from the building surface. Multiple porous exterior skins were prepared to replicate various surface opening ratios, which may cause different levels of damping. This research adopted a Pitot static tube, thermal anemometers, and a hot film probe to collect the surface dynamic pressure data behind the porous skin. Turbulence and distributed resistance are the two main aerodynamic factors that would reduce the actual wind pressure. From initial observations, the surface wind pressure readings were effectively reduced behind the porous media. In such a case, an actual building envelope system may benefit from a porous skin through the reduction of surface wind pressure, which may consequently improve the performance of the envelope system.
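
The quantity at stake is the surface dynamic pressure, q = ½ρU²; a minimal sketch of how a damping effect would be expressed from the probe readings follows (the speeds and the resulting reduction factor are illustrative, not measured results).

# Surface dynamic pressure and an illustrative porous-skin reduction factor.
rho = 1.225    # air density, kg/m^3
U_free = 30.0  # free-stream wind speed, m/s (illustrative)
U_skin = 22.0  # speed measured behind the porous skin, m/s (illustrative)

q_free = 0.5 * rho * U_free**2  # dynamic pressure, Pa
q_skin = 0.5 * rho * U_skin**2

reduction = 1 - q_skin / q_free
print(f"q_free = {q_free:.0f} Pa, q_skin = {q_skin:.0f} Pa, "
      f"reduction = {reduction:.0%}")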

Keywords: multi-layer facade, porous media, facade performance, turbulence and distributed resistance, wind tunnel test

Procedia PDF Downloads 219
358 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks

Authors: Nafisa Mahbub, Hajo Ribberink

Abstract:

Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is the 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both to the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging. The investigated materials for charging technology and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, different charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study considered trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery under the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. The material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.
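
The bottom-up accounting can be illustrated as below (a sketch only; the per-unit copper intensities and fleet sizes are invented placeholders, not the study's inventory data).

# Bottom-up copper-use comparison across two charging scenarios.
# All intensities and scenario sizes are hypothetical placeholders.
CU_PER_KWH_BATTERY = 1.0     # kg Cu per kWh of battery capacity (assumed)
CU_PER_KM_CATENARY = 2000.0  # kg Cu per km of two-way catenary (assumed)
CU_PER_CHARGER = 50.0        # kg Cu per stationary fast charger (assumed)

n_trucks, km_eroad, n_chargers = 10_000, 500, 2_000  # assumed scenario sizes

scenarios = {
    # per-truck battery copper + infrastructure copper
    "plug-in": n_trucks * 800 * CU_PER_KWH_BATTERY + n_chargers * CU_PER_CHARGER,
    "catenary": n_trucks * 200 * CU_PER_KWH_BATTERY + km_eroad * CU_PER_KM_CATENARY,
}
for name, cu_kg in scenarios.items():
    print(f"{name:>8}: {cu_kg / 1e6:.1f} kt Cu")  # 1 kt = 1e6 kg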

Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger

Procedia PDF Downloads 51