Search results for: confidence intervals
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1637

227 Human Capital Divergence and Team Performance: A Study of Major League Baseball Teams

Authors: Yu-Chen Wei

Abstract:

The relationship between organizational human capital and organizational effectiveness has been a common topic of interest to organization researchers. Much of this research has concluded that higher human capital predicts greater organizational outcomes. Whereas human capital research has traditionally focused on organizations, the current study turns to human capital at the team level. In addition, there are no known empirical studies assessing the effect of human capital divergence on team performance. Team human capital refers to the sum of knowledge, ability, and experience embedded in team members. Team human capital divergence is defined as the variation of human capital within a team. This study is among the first to assess the role of human capital divergence as a moderator of the effect of team human capital on team performance. From the traditional perspective, team human capital represents the collective ability of all team members to solve problems and reduce operational risk. Hence, the higher the team human capital, the higher the team performance. This study further employs social learning theory to explain the relationship between team human capital and team performance. According to this theory, individuals seek progress by learning from their teammates. They expect to raise their own human capital and, in turn, achieve high productivity, obtain greater rewards, and eventually attain career success. Therefore, an individual has more opportunities to improve his or her capability by learning from peers when the team has higher average human capital. As a consequence, all team members can develop a quick and effective learning path in their work environment, which in turn enhances their knowledge, skill, and experience and leads to higher team performance. This is the first argument of this study. Furthermore, the current study argues that human capital divergence is detrimental to team development. Individuals with lower human capital constantly feel pressure from their outstanding colleagues; under this pressure, they cannot perform their jobs to their full potential and progressively lose confidence. Highly capable members, in turn, may be reluctant to work alongside teammates who are less capable, and they may have less motivation to improve because they already stand out relative to their teammates. Therefore, human capital divergence will moderate the relationship between team human capital and team performance. These two arguments were tested in 510 team-seasons drawn from Major League Baseball (1998-2014). Results demonstrate that there is a positive relationship between team human capital and team performance, which is consistent with previous research. In addition, the variation of human capital within a team weakens this relationship. That is to say, an individual working with teammates of comparable human capital can produce better performance than one working with people whose human capital is markedly higher or lower.
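For illustration only, the moderation argument above can be tested with a regression that includes an interaction between team human capital and its within-team divergence. The sketch below is a minimal, hypothetical example in Python using statsmodels; the column names (team_hc, hc_divergence, performance) and the data file are assumptions, not the authors' actual variables or code.

```python
# Minimal sketch of a moderated regression: team performance on team human
# capital, human capital divergence, and their interaction (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: one row per team-season with mean human capital,
# within-team human capital divergence (e.g., its standard deviation), and
# a performance measure such as season winning percentage.
df = pd.read_csv("team_seasons.csv")  # hypothetical file

model = smf.ols("performance ~ team_hc * hc_divergence", data=df).fit()
print(model.summary())

# A negative, significant coefficient on the team_hc:hc_divergence term would
# be consistent with divergence weakening the human capital-performance link.
```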

Keywords: human capital divergence, team human capital, team performance, team level research

Procedia PDF Downloads 237
226 Correlation Analysis between Sensory Processing Sensitivity (SPS), Meares-Irlen Syndrome (MIS) and Dyslexia

Authors: Kaaryn M. Cater

Abstract:

Students with sensory processing sensitivity (SPS), Meares-Irlen Syndrome (MIS) and dyslexia can become overwhelmed and struggle to thrive in traditional tertiary learning environments. An estimated 50% of tertiary students who disclose learning-related issues are dyslexic. This study explores the relationship between SPS, MIS and dyslexia. Baseline measures will be analysed to establish any correlation between these three minority methods of information processing. SPS is an innate sensitivity trait found in 15-20% of the population and has been identified in over 100 species of animals. Humans with SPS are referred to as Highly Sensitive People (HSP), and the measure of HSP is a 27-item self-test known as the Highly Sensitive Person Scale (HSPS). A 2016 study conducted by the author established baseline data for HSP students in a tertiary institution in New Zealand. The results of that study showed that all participating HSP students believed the knowledge of SPS to be life-changing and useful in managing life and study; in addition, they believed that all tutors and incoming students should be given information on SPS. MIS is a visual processing and perception disorder found in approximately 10% of the population, with a variety of symptoms including visual fatigue, headaches and nausea. One way to ease some of these symptoms is through the use of colored lenses or overlays. Dyslexia is a complex phonologically based information processing variation present in approximately 10% of the population. An estimated 50% of dyslexics are thought to have MIS. The study exploring possible correlations between these minority forms of information processing is due to begin in February 2017. An invitation will be extended to all first-year students enrolled in degree programmes across all faculties and schools within the institution. An estimated 900 students will be eligible to participate in the study. Participants will be asked to complete a battery of online questionnaires including the Highly Sensitive Person Scale, the International Dyslexia Association adult self-assessment and the adapted Irlen indicator. All three scales have been used extensively in the literature and have been validated among many populations. All participants whose scores on any (or some) of the three questionnaires suggest a minority method of information processing will receive an invitation to meet with a learning advisor and will be given access to counselling services if they choose. Meeting with a learning advisor is not mandatory, and some participants may choose not to receive help. Data will be collected using the Question Pro platform, and baseline data will be analysed using correlation and regression analysis to identify relationships and predictors between SPS, MIS and dyslexia. This study forms part of a larger three-year longitudinal study, and participants will be required to complete questionnaires at annual intervals in subsequent years until completion of (or withdrawal from) their degree. At these data collection points, participants will be questioned on any additional support received relating to their minority method(s) of information processing. Data from this study will be available by April 2017.
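As a rough, hypothetical sketch of the planned correlation and regression analysis between the three questionnaire scores, the example below computes pairwise correlations and a simple regression in Python; the file and column names (hsps_score, dyslexia_score, irlen_score) are assumptions, not the study's data or analysis code.

```python
# Minimal sketch of correlation and regression between three questionnaire
# scores (hypothetical column names and data file).
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("baseline_questionnaires.csv")  # hypothetical file

# Pairwise Pearson correlations with p-values.
for a, b in [("hsps_score", "dyslexia_score"),
             ("hsps_score", "irlen_score"),
             ("dyslexia_score", "irlen_score")]:
    r, p = stats.pearsonr(df[a], df[b])
    print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f}")

# Simple regression: do SPS and MIS scores predict the dyslexia score?
model = smf.ols("dyslexia_score ~ hsps_score + irlen_score", data=df).fit()
print(model.summary())
```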

Keywords: dyslexia, highly sensitive person (HSP), Meares-Irlen Syndrome (MIS), minority forms of information processing, sensory processing sensitivity (SPS)

Procedia PDF Downloads 230
225 Strategies for Enhancing Academic Honesty as an Ethical Concern in Electronic Learning (E-learning) among University Students: A Philosophical Perspective

Authors: Ekeh Greg

Abstract:

Learning has been part of human existence from time immemorial. The aim of all learning is to know the truth. In education, it is desirable that true knowledge is imparted and imbibed. For this to be achieved, there is a need for honesty, in this context academic honesty among students, especially in e-learning. This is an ethical issue since honesty concerns human conduct. However, research findings have shown that academic honesty has remained a big challenge to online learners, especially among university students. This is worrisome since university education is the final stage of formal education and a gateway to life in the wider society after schooling. If students practice honesty in their academic life, it is likely that they will practice honesty in society, thereby making positive contributions wherever they find themselves. With this in mind, the significance of this study becomes obvious. On grounds of this significance, this paper focuses on strategies that are adjudged certain to enhance the practice of honesty in e-learning, so as to enable learners to be well equipped to contribute to society through honest means. The aim of the paper is to contribute to the efforts of instilling the consciousness and practice of honesty in the minds and hearts of learners. This will, in turn, promote effective teaching and learning, high academic standards, competence and self-confidence in university education. Philosophical methods of conceptual analysis, clarification, description and prescription are adopted for the study. A philosophical perspective is chosen so as to ground the paper on the basis of rationality rather than emotional sentiments and biases emanating from cultural, religious and ethnic differences and orientations; such sentiments and biases can becloud objective reasoning and sound judgment. A review of related literature is also carried out. The findings show that academic honesty in e-learning is a cherished value, but it is bedeviled by challenges such as a carefree attitude on the part of students and the absence of monitoring. The findings also show that despite the challenges facing academic honesty, strategies such as self-discipline, determination, hard work, and imbibing ethical and philosophical principles, among others, can certainly enhance the practice of honesty in e-learning among university students. The paper therefore concludes that these constitute strategies for enhancing academic honesty among students. Consequently, it is suggested that instructors, school counsellors and other stakeholders should endeavour to see that students are helped to imbibe these strategies and put them into practice. Students themselves are enjoined to cherish honesty in their academic pursuits and avoid shortcuts. Shortcuts can only lead to mediocrity and incompetence on the part of learners, which may have long-term adverse consequences, both for themselves and for others.

Keywords: academic, ethical, philosophical, strategies

Procedia PDF Downloads 69
224 Barriers to Participation in Sport for Children without Disability: A Systematic Review

Authors: S. Somerset, D. J. Hoare

Abstract:

Participation in sport is linked to better mental and physical health in children and adults. Studies have shown that children who participate in sports benefit from improved social skills, self-confidence, communication skills and a better quality of life. Children who participate in sports from a young age are also more likely to continue to have active lifestyles during adulthood. This is an important consideration in a nation where physical activity levels are declining and the incidence of obesity is rising. Getting children active and keeping them active can provide long-term health benefits to the individual, as well as a potential reduction in health costs in the future. This systematic review aims to identify the barriers to participation in sport for children aged up to 18 years and encompasses both qualitative and quantitative studies. The bibliographic databases EMBASE, Medline, CINAHL and SPORTDiscus were searched. Additional hand searches were carried out on review articles found in the searches to identify any studies that may have been missed. Studies involving children up to 18 years without additional needs, focusing on barriers to participation in sport, were included. Randomised controlled trials, policy guidelines, studies with sport as an intervention, and studies focusing on the female athlete triad, tobacco abuse, alcohol abuse, drug abuse, pre-exercise testing, and cardiovascular disease were excluded. Abstract review, full-paper review and quality appraisal were conducted by two researchers. A consensus meeting took place to resolve any differences at the abstract, full-text and data extraction/quality appraisal stages. The CASP qualitative studies appraisal tool and the CASP cohort studies tool (excluding questions 3 and 4, which refer to interventions) were used for quality appraisal in this review. The review identified several salient barriers to participation in sport for children, ranging from the uniform worn during school physical education lessons to the weather during participation. The most commonly identified barriers were parental support, time allocation, location of the activity and the cost of the activity. It would therefore be beneficial for greater provision to be made within the school environment for children to participate in sport. This can reduce the cost and time commitment required from parents to encourage participation, and would help to increase children's activity levels.

Keywords: barrier, children, participation, sport

Procedia PDF Downloads 357
223 Disparities in Language Competence and Conflict: The Moderating Role of Cultural Intelligence in Intercultural Interactions

Authors: Catherine Peyrols Wu

Abstract:

Intercultural interactions are becoming increasingly common in organizations and in life. These interactions are often the site of miscommunication and conflict. In management research, these problems are commonly attributed to cultural differences in values and interactional norms. As a result, the notion that intercultural competence can minimize these challenges is widely accepted. Cultural differences, however, are not the only source of challenge during intercultural interactions. The need to rely on a lingua franca, a common language between people who have different mother tongues, is another important one. In theory, a lingua franca can improve communication and ease coordination. In practice, however, disparities in people's ability and confidence to communicate in that language can exacerbate tensions and generate inefficiencies. In this study, we draw on power theory to develop a model of disparities in language competence and conflict in a multicultural work context. Specifically, we hypothesized that differences in language competence between interaction partners would be positively related to conflict: people would report greater conflict with partners whose level of language competence is more dissimilar to their own and less conflict with partners whose level is more similar. Furthermore, we proposed that cultural intelligence (CQ), an intercultural competence that denotes an individual's capability to be effective in intercultural situations, would weaken the relationship between disparities in language competence and conflict, such that people would report less conflict with partners who have more dissimilar levels of language competence when the partner has high CQ and more conflict when the partner has low CQ. We tested this model with a sample of 135 undergraduate students working in multicultural teams for 13 weeks. We used a round-robin design to examine conflict in 646 dyads nested within 21 teams. Results of analyses using social relations modeling provided support for our hypotheses. Specifically, we found that in intercultural dyads with large disparities in language competence, partners with the lowest level of language competence reported higher levels of interpersonal conflict. However, this relationship disappeared when the partner with higher language competence was also high in CQ. These findings suggest that communication in a lingua franca can be a source of conflict in intercultural collaboration when partners differ in their level of language competence, and that CQ can alleviate these effects during collaboration with partners who have relatively lower levels of language competence. Theoretically, this study underscores the benefits of CQ as a complement to language competence for intercultural effectiveness. Practically, these results further attest to the benefits of investing resources to develop language competence and CQ in employees engaged in multicultural work.
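The abstract reports analyses using social relations modeling; purely as a rough illustration of the core moderation idea (not the authors' actual model), the sketch below fits a simpler mixed-effects regression of dyadic conflict on the language-competence gap, partner CQ, and their interaction, with a random intercept for team. The data file and column names are assumptions.

```python
# Hypothetical sketch of the moderation idea: does partner CQ weaken the
# link between language-competence disparity and reported conflict?
# This is a simplified mixed model, not the social relations model used
# in the study.
import pandas as pd
import statsmodels.formula.api as smf

dyads = pd.read_csv("dyad_reports.csv")  # assumed: one row per directed dyad

# Assumed columns: conflict (rating of partner), lang_gap (absolute difference
# in language competence), partner_cq (partner's cultural intelligence),
# team (team identifier used as the grouping factor).
model = smf.mixedlm("conflict ~ lang_gap * partner_cq",
                    data=dyads, groups=dyads["team"]).fit()
print(model.summary())

# A negative lang_gap:partner_cq coefficient would be consistent with high-CQ
# partners buffering the effect of language-competence disparities.
```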

Keywords: cultural intelligence, intercultural interactions, language competence, multicultural teamwork

Procedia PDF Downloads 161
222 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping

Authors: Emily Rowe

Abstract:

Introduction: Balance assessments can be used to help evaluate a person’s risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the kinematics of the body centre of mass (COM) during pre-initiation. Based on this, the potential to use COM velocity just prior to foot-off and foot placement error as outcome measures of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measuring foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants who each attended two sessions. The trial task was to step onto one of 4 targets (2 for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected using 3D motion capture and a combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated-measures ANOVAs were carried out. Results: Foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability given the nature of the stepping task and brings into question the suitability of average foot placement error as an outcome measure. Additionally, the results suggest the pressure mat is not a valid measure of this parameter, since it was statistically significantly different from, and much less precise than, the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure of anteroposterior and mediolateral COM velocities (AP velocity: p=0.000; ML velocity, targets 1 to 4: p=0.734, 0.001, 0.000 & 0.376). However, it is thought that with further development the validity of the COM velocity measure could be improved. Possible options to investigate include the effect of inertial sensor placement with respect to pelvic marker placement, and more complex data-processing methods to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential to measure COM velocity; however, further development work is needed.
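As a rough illustration of the reliability analysis described above, the sketch below computes ICCs with 95% confidence intervals from long-format test-retest data using the pingouin package; the data layout and column names are assumptions, and the ICC row selected must match the two-way mixed-effects, absolute-agreement, average-measures definition the authors report.

```python
# Sketch of test-retest reliability: ICC with 95% CI for a stepping outcome
# measured in two sessions (long format; hypothetical column names).
import pandas as pd
import pingouin as pg

df = pd.read_csv("stepping_outcomes_long.csv")  # assumed columns below

# Assumed columns: participant, session ("1" or "2"), and com_velocity
# (the outcome averaged across trials within a session).
icc = pg.intraclass_corr(data=df, targets="participant",
                         raters="session", ratings="com_velocity")

# pingouin reports several ICC forms; select the average-measures row that
# corresponds to the two-way, absolute-agreement model reported in the study
# (commonly labelled ICC2k in the Shrout & Fleiss convention).
print(icc[icc["Type"] == "ICC2k"][["ICC", "CI95%"]])
```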

Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables

Procedia PDF Downloads 144
221 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as confidence scores for findings, tuning of false positives and false negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated, intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
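The pipeline described above (code embeddings feeding a downstream vulnerability classifier) can be illustrated with a hypothetical sketch; it assumes function-level embedding vectors have already been produced by a Code2Vec-style model and stored with binary vulnerability labels. The dataset layout, file names and choice of classifier are assumptions, not the study's implementation.

```python
# Sketch: train a classifier on precomputed code embeddings and report the
# accuracy and recall metrics used in the study (hypothetical data layout).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Assumed files: one embedding vector per function plus a 0/1 vulnerable label.
X = np.load("code2vec_embeddings.npy")   # shape (n_functions, embedding_dim)
y = np.load("vulnerable_labels.npy")     # shape (n_functions,)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy:", accuracy_score(y_te, pred))
print("recall  :", recall_score(y_te, pred))  # sensitivity to vulnerable code
```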

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 101
220 Hydrocarbons and Diamondiferous Structures Formation in Different Depths of the Earth Crust

Authors: A. V. Harutyunyan

Abstract:

Laboratory investigations of rocks at high pressures and temperatures have revealed the intervals over which seismic wave velocities and density change, as well as some of the processes taking place in the rocks. In serpentinized rocks, abrupt changes in seismic wave velocities and density have been recorded as a consequence of dehydration. Hydrogen-bearing components are released and combine with carbon-bearing components; as a result, hydrocarbons are formed and the investigated samples melt. Geofluids and hydrocarbons then migrate into the upper horizons of the Earth's crust along deep faults, where they differentiate and accumulate in the jointed rocks of the faults and in layers with reservoir properties. Beneath the majority of hydrocarbon deposits, magmatic centers and deep faults are recorded at a certain depth. The results for serpentinized rocks, together with numerous geological and geophysical data, indicate that hydrocarbons are mainly formed both in the offshore parts of the oceans and at different depths of the continental crust. Experiments have also shown that the dehydration of serpentinized rocks is accompanied by an explosion, with an instantaneous increase in pressure and temperature and melting of the studied rocks. According to numerous publications, hydrocarbons and diamonds are formed in the upper part of the mantle, at depths of 200-400 km, and rise to the upper horizons of the Earth's crust through narrow channels as a consequence of geodynamic processes. However, the genesis of metamorphogenic diamonds, and of the diamonds found in lava streams formed within the Earth's crust, remains unclear. During dehydration, very high pressures and temperatures arise, and it is assumed that diamond crystals are formed from carbon-bearing components present in the dehydration zone. It can also be assumed that, besides the explosion at dehydration, secondary explosions of the released hydrogen take place. The process is naturally accompanied by seismic phenomena, causing earthquakes of different magnitudes at the surface. As for diamondiferous kimberlites, it is well known that the majority of them are located within ancient shields and platforms and are not necessarily connected with deep faults. Kimberlites are formed where dehydrated masses lie at shallow depths in the Earth's crust; they are younger than the ancient host rocks, which contain serpentinized basites and ultrabasites, relicts of the paleo-oceanic crust. Diamonds containing water and hydrocarbons, indicating their simultaneous genesis, are sometimes found. Thus, according to the new concept put forward here, geofluids, hydrocarbons and diamonds are formed simultaneously from serpentinized rocks as a consequence of their dehydration at different depths of the Earth's crust. Based on the proposed concept, we suggest discussing the following: (1) the genesis of gigantic hydrocarbon deposits located in the offshore areas of oceans (North American, Gulf of Mexico, Cuanza-Cameroonian, East Brazilian, etc.) as well as in the continental parts of different mainlands (Canadian-Arctic, Caspian, East Siberian, etc.); (2) the genesis of metamorphogenic diamonds and of diamonds in lava streams (Guinea-Liberian, Kokchetav, Canadian, Kamchatka-Tolbachik, etc.).

Keywords: dehydration, diamonds, hydrocarbons, serpentinites

Procedia PDF Downloads 332
219 An Institutional Mapping and Stakeholder Analysis of ASEAN’s Preparedness for Nuclear Power Disaster

Authors: Nur Azha Putra Abdul Azim, Denise Cheong, S. Nivedita

Abstract:

Currently, there are no nuclear power reactors among the Association of Southeast Asian Nations (ASEAN) member states (AMS), but there are seven operational nuclear research reactors, and Indonesia is about to construct the region's first experimental power reactor by the end of the decade. If successful, the experimental power reactor will lay the foundation for the country's, and the region's, first nuclear power plant. Despite projecting confidence during the period of nuclear power renaissance in the region in the last decade, none of the AMS has committed to a political decision on the use of nuclear energy, largely because of the Fukushima nuclear power accident in 2011. Of the ten AMS, Vietnam, Indonesia and Malaysia have demonstrated the most progress in developing nuclear energy, based on the nuclear power infrastructure development assessments made by the International Atomic Energy Agency. Of these three states, Vietnam came closest to building its first nuclear power plant but decided to delay construction further due to safety and security concerns. Meanwhile, Vietnam, along with Indonesia and Malaysia, continues with its nuclear power infrastructure development, and the remaining Southeast Asian states, with the exception of Brunei and Singapore, continue to build their expertise and capacity for nuclear power. At the current rate of progress, Indonesia is expected to make a national decision on the use of nuclear power by 2023, while Malaysia, the Philippines, and Thailand have included the use of nuclear power in their mid- to long-term power development plans. Vietnam remains open to nuclear power but has not set a timeline. The medium- to short-term power development projections in the region suggest that the use of nuclear energy is a matter of 'when' rather than 'if'. In view of the prospects for nuclear energy in Southeast Asia (SEA), this presentation will review the literature on ASEAN radiological emergency preparedness and response (EPR) plans and examine ASEAN's disaster management and emergency framework. Through a combination of institutional mapping and stakeholder analysis methods, examined in the context of the international EPR and nuclear safety and security regimes, we will identify the issues and challenges in developing a regional radiological EPR framework in SEA. We will conclude with the observation that ASEAN faces serious structural, institutional and governance challenges due to the AMS's inherent political structures and history of interstate conflicts, and propose that ASEAN should either enlarge the existing scope of its disaster management and response framework or establish its radiological EPR framework as a separate entity.

Keywords: nuclear power, nuclear accident, ASEAN, Southeast Asia

Procedia PDF Downloads 145
218 4D Monitoring of Subsurface Conditions in Concrete Infrastructure Prior to Failure Using Ground Penetrating Radar

Authors: Lee Tasker, Ali Karrech, Jeffrey Shragge, Matthew Josh

Abstract:

Monitoring the deterioration of concrete infrastructure is an important assessment task for an engineer, and detecting deterioration within a structure can be difficult. If a failure crack, or fluid seepage through such a crack, is observed from the surface, the source location of the deterioration is often not known. Geophysical methods are used to assist engineers with assessing the subsurface condition of materials. Techniques such as Ground Penetrating Radar (GPR) provide information on the location of buried infrastructure such as pipes and conduits, the positions of reinforcement within concrete blocks, and regions of voids/cavities behind tunnel lining. This experiment underlines the application of GPR as an infrastructure-monitoring tool to highlight and monitor regions of possible deterioration within a concrete test wall due to an increase in the generation of fractures; in particular, during a period of applied load up to and including structural failure. A three-point load was applied to a concrete test wall of dimensions 1700 × 600 × 300 mm in increments of 10 kN until the wall structurally failed at 107.6 kN. At each increment of applied load, the load was kept constant and the wall was scanned using GPR along profile lines across the wall surface. The measured radar amplitude responses of the GPR profiles at each applied load interval were reconstructed into depth-slice grids and presented at fixed depth-slice intervals. The corresponding depth-slices were subtracted from each other to compare the radar amplitude response between datasets and to monitor for changes in the radar amplitude response. At lower values of applied load (i.e., 0-60 kN), few changes were observed in the difference of radar amplitude responses between datasets. At higher values of applied load (i.e., 100 kN), closer to structural failure, larger differences in radar amplitude response between datasets were highlighted in the GPR data; up to a 300% increase in radar amplitude response was observed at some locations between the 0 kN and 100 kN radar datasets. Distinct regions were observed in the 100 kN difference dataset (i.e., 100 kN-0 kN) close to the location of the final failure crack. The key regions observed were a conical feature located between approximately 3.0-12.0 cm depth from the surface and a vertical linear feature located approximately 12.1-21.0 cm depth from the surface. These key regions have been interpreted as locations exhibiting an increased change in pore space due to increased mechanical loading, an increase in the volume of micro-cracks, or the development of a larger macro-crack. The experiment showed that GPR is a useful geophysical monitoring tool to assist engineers with highlighting and monitoring regions of large change in radar amplitude response that may be associated with locations of significant internal structural change (e.g., crack development). GPR is a non-destructive technique that is fast to deploy in a production setting. GPR can assist with reducing risk and costs in future infrastructure maintenance programs by highlighting and monitoring locations within the structure exhibiting large changes in radar amplitude over time.
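As a simple illustration of the depth-slice differencing described above, the sketch below subtracts a baseline (0 kN) amplitude grid from a loaded (100 kN) grid and expresses the change as a percentage; the array files, grid layout and threshold are hypothetical, not the study's processing chain.

```python
# Sketch: difference two GPR depth-slice amplitude grids (loaded minus
# baseline) and flag cells with large relative increases in amplitude.
import numpy as np

# Assumed inputs: 2D amplitude grids for the same depth slice, gridded onto
# identical x-y cells, saved from earlier GPR processing.
base = np.load("depth_slice_0kN.npy")      # baseline scan (0 kN)
loaded = np.load("depth_slice_100kN.npy")  # scan near failure (100 kN)

diff = loaded - base
# Percent change relative to the baseline amplitude (avoid divide-by-zero).
pct_change = 100.0 * diff / np.maximum(np.abs(base), 1e-12)

# Cells exceeding, say, a 300% increase are candidate regions of internal
# structural change such as micro- or macro-crack development.
candidates = np.argwhere(pct_change >= 300.0)
print(f"{len(candidates)} grid cells exceed a 300% amplitude increase")
```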

Keywords: 4D GPR, engineering geophysics, ground penetrating radar, infrastructure monitoring

Procedia PDF Downloads 173
217 Effect of Malnutrition at Admission on Length of Hospital Stay among Adult Surgical Patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia: Prospective Cohort Study, 2022

Authors: Yoseph Halala Handiso, Zewdi Gebregziabher

Abstract:

Background: Malnutrition in hospitalized patients remains a major public health problem in both developed and developing countries. Although malnourished patients are more prone to longer hospital stays, there are limited data regarding the magnitude of malnutrition and its effect on length of stay among surgical patients in Ethiopia, and nutritional assessment is often a neglected component of health service practice. Objective: This study aimed to assess the prevalence of malnutrition at admission and its effect on the length of hospital stay among adult surgical patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia, 2022. Methods: A facility-based prospective cohort study was conducted among 398 adult surgical patients admitted to the hospital. Participants were chosen using a convenience sampling technique. Subjective global assessment (SGA) was used to determine the nutritional status of patients with a minimum stay of 24 hours, assessed within 48 hours of admission. Data were collected using the Open Data Kit (ODK) version 2022.3.3 software, while Stata version 14.1 software was employed for statistical analysis. The Cox regression model was used to determine the effect of malnutrition on the length of hospital stay (LOS) after adjusting for several potential confounders taken at admission. Adjusted hazard ratios (AHR) with 95% confidence intervals were used to show the effect of malnutrition. Results: The prevalence of hospital malnutrition at admission was 64.32% (95% CI: 59%-69%) according to the SGA classification. Adult surgical patients who were malnourished at admission had a higher median LOS (12 days; 95% CI: 11-13) than well-nourished patients (8 days; 95% CI: 8-9); that is, patients who were malnourished at admission had a lower chance of discharge with improvement (prolonged LOS) (AHR: 0.37, 95% CI: 0.29-0.47) compared with well-nourished patients. Presence of comorbidity (AHR: 0.68, 95% CI: 0.50-0.90), polymedication (AHR: 0.69, 95% CI: 0.55-0.86), and history of admission within the previous five years (AHR: 0.70, 95% CI: 0.55-0.87) were found to be significant covariates of the length of hospital stay. Conclusion: The magnitude of hospital malnutrition at admission was found to be high. Malnourished patients at admission had a higher risk of prolonged length of hospital stay compared with well-nourished patients. The presence of comorbidity, polymedication, and history of admission were significant covariates of LOS. All stakeholders should pay attention to reducing the magnitude of malnutrition and its covariates in order to reduce the burden of prolonged LOS.
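For illustration, a Cox model of time to discharge with improvement, adjusted for the covariates named above, could be fitted as in the hypothetical sketch below using the lifelines package; the variable names and coding are assumptions, not the study's Stata analysis.

```python
# Sketch: Cox regression of time to discharge with improvement (hypothetical
# variable names), yielding adjusted hazard ratios with 95% CIs.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("surgical_cohort.csv")  # assumed columns below

# Assumed coding: los_days = length of stay; discharged_improved = 1 if the
# patient was discharged with improvement, 0 if censored; binary covariates
# malnourished, comorbidity, polymedication, prior_admission.
cols = ["los_days", "discharged_improved", "malnourished",
        "comorbidity", "polymedication", "prior_admission"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="los_days", event_col="discharged_improved")
cph.print_summary()  # exp(coef) column gives AHRs; an AHR < 1 for
                     # malnourished means a lower chance of discharge per day,
                     # i.e., a longer stay.
```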

Keywords: effect of malnutrition, length of hospital stay, surgical patients, Ethiopia

Procedia PDF Downloads 59
216 Anaerobic Digestion of Green Wastes at Different Solids Concentrations and Temperatures to Enhance Methane Generation

Authors: A. Bayat, R. Bello-Mendoza, D. G. Wareham

Abstract:

Two major categories of green waste are fruit and vegetable (FV) waste and garden and yard (GY) waste. Although anaerobic digestion (AD) is able to manage FV waste, there is less confidence in the conditions required for AD to handle GY waste (grass, leaves, tree and bush trimmings), mainly because GY waste contains lignin and other recalcitrant organics. GY waste in the dry state (TS ≥ 15%) can be digested at mesophilic temperatures; however, little methane data has been reported under thermophilic conditions, where conceivably better methane yields could be achieved. In addition, it is suspected that at lower solids concentrations the methane yield could be increased. As such, the aim of this research is to find the temperature and solids concentration conditions that produce the most methane, under two different temperature regimes (mesophilic, thermophilic) and three solids states ('dry', 'semi-dry' and 'wet'). Twenty liters of GY waste were collected from a public park located in the northern district of Tehran. The clippings consisted of freshly cut grass as well as dry branches and leaves. The GY waste was chopped before being fed into a mechanical blender that reduced it to a paste-like consistency, giving an initial TS concentration of approximately 38%. Four hundred mL of anaerobic inoculum (average total solids (TS) concentration of 2.03 ± 0.131%, of which 73.4% were volatile solids (VS); soluble chemical oxygen demand (sCOD) of 4.59 ± 0.3 g/L) was mixed with the GY waste substrate paste (along with distilled water) to achieve a TS content of approximately 20%. For comparative purposes, approximately 20 liters of FV waste were ground in the same manner as the GY waste. Since FV waste has a much higher natural water content than GY waste, it was dewatered to obtain a starting TS concentration in the dry solid-state range (TS ≥ 15%). Three samples were dewatered to an average starting TS concentration of 32.71%. The inoculum was added (along with distilled water) to dilute the initial FV TS concentrations down to semi-dry conditions (10-15%) and wet conditions (below 10%). Twelve 1-L batch bioreactors were loaded simultaneously with either GY or FV waste at TS concentrations ranging from 3.85 ± 1.22% to 20.11 ± 1.23%. The reactors were sealed and operated for 30 days while immersed in water baths to maintain a constant temperature of 37 ± 0.5 °C (mesophilic) or 55 ± 0.5 °C (thermophilic). A maximum methane yield of 115.42 L methane/kg VS added was obtained for the GY thermophilic-wet AD combination; this yield was 240% higher than that of the GY waste mesophilic-dry condition. The results confirm that high temperature regimes and low solids concentrations are conditions that enhance methane yield from GY waste, and a similar trend was observed for the anaerobic digestion of FV waste. Furthermore, maximum VS (53%) and sCOD (84%) reductions were achieved during the AD of GY waste under the thermophilic-wet condition.

Keywords: anaerobic digestion, thermophilic, mesophilic, total solids concentration

Procedia PDF Downloads 130
215 Critical Core Skills Profiling in the Singaporean Workforce

Authors: Bi Xiao Fang, Tan Bao Zhen

Abstract:

Soft skills, core competencies, and generic competencies are interchangeable terms often used to represent a similar concept. In the Singapore context, such skills are currently referred to as Critical Core Skills (CCS). In 2019, SkillsFuture Singapore (SSG) reviewed the Generic Skills and Competencies (GSC) framework first introduced in 2016, culminating in the development of the Critical Core Skills (CCS) framework comprising 16 soft skills classified into three clusters. The CCS framework is part of the Skills Framework, whose stated purpose is to create a common skills language for individuals, employers and training providers. It was also developed with the objectives of building deep skills for a lean workforce, enhancing business competitiveness, and supporting employment and employability. This further helps to facilitate skills recognition and support the design of training programs for skills and career development. According to SSG, every job role requires a set of technical skills and a set of Critical Core Skills to perform well at work, where technical skills refer to the skills required to perform the key tasks of the job. There has been an increasing emphasis on soft skills for the future of work. A recent study involving approximately 80 organizations across 28 sectors in Singapore revealed that more enterprises are beginning to recognize that soft skills support their employees' performance and business competitiveness. Though CCS are of high importance for the development of the workforce's employability, little attention has been paid to CCS use and profiling across occupations. A better understanding of how CCS are distributed across the economy will thus significantly enhance SSG's career guidance services as well as training providers' services to graduates and workers, and will guide organizations in their hiring for soft skills. This CCS profiling study sought to understand how CCS are demanded in different occupations. To achieve its research objectives, this study adopted a quantitative method to measure CCS use across different occupations in the Singaporean workforce. Based on the CCS framework developed by SSG, the research team adopted a formative approach to developing the CCS profiling tool, which measures the importance of, and self-efficacy in, the use of CCS among the Singaporean workforce. Drawing on survey results from 2500 participants, the study profiled respondents into seven occupation groups based on differing patterns of importance and confidence in the use of CCS. Each occupation group is labelled according to the most salient and demanded CCS. The CCS within each occupation group that may need further strengthening were also identified. The profiling of CCS use has significant implications for different stakeholders; for example, employers could leverage the profiling results to hire staff with the soft skills demanded by the job.

Keywords: employability, skills profiling, skills measurement, soft skills

Procedia PDF Downloads 90
214 Variability of the X-Ray Sun during Descending Period of Solar Cycle 23

Authors: Zavkiddin Mirtoshev, Mirabbos Mirkamalov

Abstract:

We have analyzed the time series of full-disk integrated soft X-ray (SXR) and hard X-ray (HXR) emission from the solar corona during 2004 January 1 to 2009 December 31, covering the descending phase of solar cycle 23. We employed the daily X-ray index (DXI) derived from X-ray observations from the Solar X-ray Spectrometer (SOXS) mission in four energy bands: 4-5.5 and 5.5-7.5 keV (SXR), and 15-20 and 20-25 keV (HXR). Application of the Lomb-Scargle periodogram technique to the DXI time series observed by the silicon detector in these energy bands reveals several short and intermediate periodicities of the X-ray corona. The DXI explicitly shows periods of 13.6 days, 26.7 days, 128.5 days, 151 days, 180 days, 220 days, 270 days, 1.24 years and 1.54 years in the SXR as well as the HXR energy bands. Although all periods are above the 70% confidence level in all energy bands, they show stronger power in HXR emission than in SXR emission. These periods are distinctly clear in three bands but not unambiguously clear in the 5.5-7.5 keV band. This might be due to the presence of iron (Fe) and iron/nickel (Fe/Ni) line features, which frequently vary with small-scale flares such as micro-flares. The regular 27-day rotation period and the 13.5-day period of sunspots from the invisible side of the Sun are found to be stronger in the HXR band than in the SXR band. The Rieger periods associated with flare activity (150 and 180 days) and the near-Rieger period of 220 days are very strong in HXR emission, which is very much expected. On the other hand, the current study reveals a strong 270-day periodicity in SXR emission, which may be connected with the tachocline, similar to a fundamental rotation period of the Sun. The 1.24-year and 1.54-year periodicities identified in the present work are well observable in both the SXR and HXR channels. These long-term periodicities must also be connected with the tachocline and should be regarded as a consequence of variation in rotational modulation over long time scales. The 1.24-year and 1.54-year periods are also considered to be of great importance and significance in the formation and evolution of life on Earth, and therefore they also have great astrobiological importance. We gratefully acknowledge support by the Indian Centre for Space Science and Technology Education in Asia and the Pacific (CSSTEAP; the Centre is affiliated to the United Nations) and the Physical Research Laboratory (PRL) at Ahmedabad, India. This work was done under the supervision of Prof. Rajmal Jain, and the paper consists of material from the pilot project and research part of the M. Tech program carried out during the Space and Atmospheric Science Course.
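As a rough illustration of the periodogram analysis described above, the sketch below applies the Lomb-Scargle method to a daily X-ray index series using SciPy; the input file, column names and period grid are assumptions, and assessing confidence levels (such as the 70% level quoted above) would require an additional significance test not shown here.

```python
# Sketch: Lomb-Scargle periodogram of a daily X-ray index (DXI) time series.
import numpy as np
import pandas as pd
from scipy.signal import lombscargle

df = pd.read_csv("dxi_hxr_15_20keV.csv")  # assumed columns: day, dxi
t = df["day"].to_numpy(dtype=float)       # time in days since start
y = df["dxi"].to_numpy(dtype=float)
y = y - y.mean()                          # remove the mean before analysis

# Search periods from ~10 days to ~2 years and convert to angular frequencies.
periods = np.linspace(10.0, 700.0, 5000)
omega = 2.0 * np.pi / periods

power = lombscargle(t, y, omega, normalize=True)

# Report the strongest peaks; periods near 13.6 d, 26.7 d, the Rieger range,
# 270 d, 1.24 yr and 1.54 yr would appear as local maxima in this spectrum.
top = periods[np.argsort(power)[-5:]]
print("strongest periods (days):", np.sort(top))
```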

Keywords: corona, flares, solar activity, X-ray emission

Procedia PDF Downloads 339
213 Temporal Variation of Surface Runoff and Interrill Erosion in Different Soil Textures of a Semi-arid Region, Iran

Authors: Ali Reza Vaezi, Naser Fakori Ivand, Fereshteh Azarifam

Abstract:

Interrill erosion is the detachment and transfer of soil particles between rills due to the impact of raindrops and the shear stress of shallow surface runoff. This erosion can be affected by soil properties such as texture, organic matter content and the stability of soil aggregates. Information on the temporal variation of interrill erosion during a rainfall event, and on the effect soil properties have on it, can help in understanding the process of runoff production and soil loss between rills on hillslopes. The importance of this topic is especially great in semi-arid regions, where the soil is weakly aggregated and vegetation cover is mostly poor. Therefore, this research was conducted to investigate the temporal variation of surface flow and interrill erosion, and the effect of soil properties on them, in some semi-arid soils. A field experiment was conducted on eight different soil textures under simulated rainfall of uniform intensity. A total of twenty-four plots were installed for the eight study soils, with three replicates, in a randomized complete block design along the land. The plots were 1.2 m (length) × 1 m (width) and were spaced 3 m apart across the slope. Soil samples were then poured into the plots. The plots were surrounded by a galvanized sheet, and runoff and soil erosion equipment were placed at their outlets. Rainfall simulation experiments were carried out using a purpose-built portable simulator with an intensity of 60 mm per hour for 60 minutes. A plastic cover was used around the rainfall simulator frame to prevent the wind from affecting the free fall of water drops. Runoff production and soil loss were measured over 1 hour at 5-minute intervals. Soil properties such as particle size distribution, aggregate stability, bulk density, exchangeable sodium percentage (ESP) and saturated hydraulic conductivity (Ks) were determined in the laboratory. Correlation and regression analyses were carried out to determine the effect of soil properties on runoff and interrill erosion. Results indicated that the study soils have both low organic matter content and low aggregate stability. Except for the coarse-textured soils, the soils are calcareous and have relatively high ESP. Runoff production and soil loss did not occur in the sand, which was associated with its higher infiltration and drainage rates. In the other study soils, interrill erosion occurred simultaneously with the generation of runoff. A strong relationship was found between interrill erosion and surface runoff (R² = 0.75, p < 0.01). The correlation analysis showed that surface runoff was significantly affected by soil properties including sand, silt, clay, bulk density, gravel, hydraulic conductivity (Ks), lime (calcium carbonate), and ESP. Soils with lower Ks, such as fine-textured soils, produced more surface runoff and more interrill erosion. In these soils, surface runoff production increased over the course of the rainfall and reached a peak after about 25-35 minutes. Time to peak was very short (30 minutes) in fine-textured soils, especially clay, which was related to their lower infiltration rate.

Keywords: erosion plot, rainfall simulator, soil properties, surface flow

Procedia PDF Downloads 56
210 Public Health Impact and Risk Factors Associated with Uterine Leiomyomata (UL) Among Women in Imo State

Authors: Eze Chinwe Catherine, Orji Nkiru Marykate, Anyaegbunam L. C., Igbodika M.C.

Abstract:

Uterine leiomyomata (UL) are the most frequently occurring pelvic and gynaecologic tumors in premenopausal women, occurring globally with a prevalence of 21.4%. UL represents a major public health problem in African women; therefore, this study aimed to reveal the public health impact and risk factors associated with uterine leiomyomata among women in Imo State. A convenience sample of 2965 women was studied for gynaecological cases from October 2020 to March 2021 at the selected study clinics. Eligible women were recruited to participate in a non-interventional descriptive cross-sectional study. Data on sociodemographic and gynaecological characteristics, BMI, parity, age, age at menarche, knowledge, attitudes, and perception were collected using a structured questionnaire, guided interviews, anthropometrics, and haematological tests. These were analysed using SPSS version 23. Associations between continuous variables were analysed appropriately and tested at a 95% confidence level with a standard error of 5%. A total of 652 (22.0%) were diagnosed with uterine leiomyomata, giving an overall prevalence of UL at the clinics/diagnostic centres in Imo State of 22%. A total of 652 women (46.1%) responded. More than half of the women had a parity of zero (1623; 54.8%), 664 (22.4%) had a parity of 1-2, and 491 (16.6%) had a parity of 3-4. The majority (68.6%) indicated that they experience an irregular menstrual cycle, and a similar proportion (67%) experience pelvic pain. Age was found to be significantly associated with uterine fibroids in this study (χ² = 6.158, p = 0.046), with prevalence lowest among women aged 16 to 25 years and highest among women aged 36 to 45 years. The rate of UL was 62.1% among women whose age at menarche was 11 years or less, while it was approximately 18% among women whose age at menarche was at least 14 years. Education (χ² = 13.826, p = 0.003), residency (χ² = 3.372, p = 0.066) and BMI (χ² = 102.36, p = 0.000) were also associated with the risk of UL. Clinical presentations included anaemia, abdominopelvic mass, and infertility. Poor positive perception was obtained for general perception (16.7%) as well as for treatment-seeking behaviour (28%). The study concluded that UL has a significant impact on health-related quality of life among respondents owing to its relatively high prevalence. UL was especially prevalent in women aged 36 to 45 years, nulliparous women, and women with higher BMI. Community enlightenment to enhance knowledge, attitudes, and perceptions of fibroids and their risk factors is necessary to ensure early diagnosis and presentation, including patient-centred treatment options.
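The age association above can be illustrated with a chi-square test of independence on a contingency table of UL status by age group; the sketch below uses hypothetical counts purely to show the computation, not the study's data.

```python
# Sketch: chi-square test of association between age group and UL status.
# The counts below are placeholders, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: age groups (16-25, 26-35, 36-45); columns: UL present, UL absent.
table = np.array([
    [ 30, 270],   # 16-25 years (hypothetical)
    [ 90, 310],   # 26-35 years (hypothetical)
    [120, 280],   # 36-45 years (hypothetical)
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# A p-value below 0.05, as reported in the abstract (chi-square = 6.158,
# p = 0.046), indicates that UL prevalence differs across age groups.
```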

Keywords: fibroids, prevalence, risk factors, body mass index, menarche, anaemia, KAP

Procedia PDF Downloads 154
211 To Compare Norepinephrine and Norepinephrine with Methylene Blue for the Management of Septic Shock

Authors: K. Rajarajeswaran, Krishna Prasad

Abstract:

Introduction: Refractory shock, a shock state that does not improve with standard vasopressor therapy, is a common consequence of sepsis. Methylene blue is a possible adjuvant therapeutic option for treating refractory shock in sepsis. This study looked at the effects of intravenous methylene blue, given as a single bolus infusion in addition to norepinephrine, on mortality and hemodynamic improvement in patients with refractory shock. Methodology: This six-month prospective observational study was carried out at the intensive care unit of a teaching hospital and medical college. It involved 112 patients diagnosed with refractory septic shock who needed vasopressor medication. Group B received norepinephrine 0.01 µg/kg/min infusion alone, while Group A received methylene blue 2 mg/kg IV as a single bolus (fixed dose) in addition to norepinephrine 0.01 µg/kg/min infusion. In both groups, the norepinephrine dose was titrated to reach the target MAP of 60-75 mm Hg. The data gathered included the amount of norepinephrine needed to sustain a MAP of more than 60 mm Hg. Serum lactate, procalcitonin, C-reactive protein, length of stay in the intensive care unit (ICU), sequential organ failure assessment (SOFA) score, duration of mechanical ventilation, incidence of acute kidney injury (AKI), and mortality were compared. Results: A total of 112 patients with refractory shock were included in the study. With the use of IV methylene blue, 36 (59.3%) patients showed significant improvement in MAP within 2 hours (77.12 ± 8.90 vs 74.28 ± 21.84, p = 0.005). Responders were 4.009 times more likely to have vasopressor-free time within 24 hours (19.5% vs 6.1%, p = 0.022; odds ratio 5.017, 95% confidence interval 1.110-14.283). Serum lactate was lower, and urine output higher, in group I than in group II (p < 0.05). Group I had a significantly greater reduction in SOFA score within 12 hours than group II. However, there was no significant difference in mortality, length of ICU stay, ventilator-free days, or incidence of AKI. In the responder group, there was a significant increase in MAP and a decrease in vasopressor requirement from pre- to post-infusion of methylene blue (p < 0.05). Responders had fewer vasopressor-free days compared with non-responders (5.44 vs 6.99, p = 0.007). Conclusion: When administered as adjuvant therapy, a single bolus infusion of methylene blue plus norepinephrine may aid in meeting early resuscitation goals in the management of patients with septic shock. However, mortality, length of ICU stay, ventilator-free days, and incidence of AKI were unchanged.
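The odds ratio and 95% confidence interval reported above can be computed from a 2x2 table of group by vasopressor-free status using the standard Woolf (log) method; the sketch below shows the arithmetic with hypothetical cell counts, not counts reconstructed from the study.

```python
# Sketch: odds ratio with a 95% CI from a 2x2 table using the Woolf method.
# Cell counts are placeholders, not the study's data.
import math

# Columns: outcome achieved / not achieved; rows: group 1 and group 2.
a, b = 12, 50   # group 1: vasopressor-free within 24 h yes / no (hypothetical)
c, d = 3, 47    # group 2: vasopressor-free within 24 h yes / no (hypothetical)

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(or_) - 1.96 * se_log_or)
upper = math.exp(math.log(or_) + 1.96 * se_log_or)

print(f"OR = {or_:.3f}, 95% CI {lower:.3f}-{upper:.3f}")
# An interval that excludes 1 (as in the abstract: 1.110-14.283) indicates a
# statistically significant association at the 5% level.
```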

Keywords: norepinephrine, methylene blue, shock, vasopressor

Procedia PDF Downloads 12
210 New Suspension Mechanism for a Formula Car using Camber Thrust

Authors: Shinji Kajiwara

Abstract:

The basic abilities of a vehicle are to 'run', 'turn' and 'stop'. Safety and comfort when driving on various road surfaces and at various speeds depend on the performance of these basic abilities. Stability and maneuverability are therefore vital in automotive engineering. Stability is the ability of the vehicle to return to a stable state when faced with crosswinds and irregular road conditions during a drive. Maneuverability is the ability of the vehicle to change direction swiftly in response to the driver's steering. Together, stability and maneuverability can be described as the driving stability of the vehicle. Since fossil-fuelled vehicles remain the main form of transportation today, the environmental factor in automotive engineering is also vital: by improving the fuel efficiency of the vehicle, overall carbon emissions are reduced, thereby lessening the contribution of greenhouse gases to global warming. Another main focus of automotive engineering is the safety performance of the vehicle, especially given the worrying increase in vehicle collisions. With better safety performance, every driver can drive with more confidence. This research focuses on the 'turn' ability of a vehicle. By improving this ability, the cornering limit of the vehicle can be raised, thus increasing stability and maneuverability. In order to improve the cornering limit, a study must be conducted to find the balance between the steering system, the stability of the vehicle, higher lateral acceleration and cornering limit detection. The aim of this research is to study and develop a new suspension system that will boost the lateral acceleration of the vehicle and ultimately improve its cornering limit. This research will also examine the environmental and stability factors of the new suspension system. The double wishbone suspension system is widely used in four-wheel vehicles, especially in high-cornering-performance sports cars and racing cars. The double wishbone design allows the engineer to carefully control the motion of the wheel by controlling parameters such as camber angle, caster angle, toe pattern, roll center height, scrub radius, scuff and more. The development of the new suspension system will focus on its ability to optimize camber control and to improve the camber limit during cornering. The research will be carried out using CAE analysis tools: a JSAE Formula machine equipped with the double wishbone system and with the new suspension system will be modelled, and simulations will be conducted to study the performance of both suspension systems.

Keywords: automobile, camber thrust, cornering force, suspension

Procedia PDF Downloads 317
209 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance, and automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, the relevant formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human errors in calculations were minimized when procedures were automated in quality control laboratories. The assay procedure for the formulation was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
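A minimal sketch of the kind of calculation such spreadsheets automate is shown below, assuming 1:1 Zn–EDTA complexation. The standardization and assay formulas and the paired t-test call are illustrative only, not the validated USP worksheet itself, and all numeric values are placeholders.

```python
from scipy import stats

MW_ZN = 65.38  # g/mol, elemental zinc

def edta_molarity(zinc_mass_g, titre_volume_l):
    """Standardize the EDTA titrant against a zinc standard (1:1 Zn-EDTA complexation):
    moles of EDTA consumed equal moles of zinc weighed in."""
    return (zinc_mass_g / MW_ZN) / titre_volume_l

def zinc_per_tablet_mg(titre_volume_l, edta_m, sample_mass_g, avg_tablet_mass_g):
    """Elemental zinc found per average tablet from the assay titration."""
    zn_mg_in_sample = titre_volume_l * edta_m * MW_ZN * 1000.0
    return zn_mg_in_sample * avg_tablet_mass_g / sample_mass_g

# Hypothetical validation step: paired t-test between manually computed
# and spreadsheet-computed assay results (values are placeholders).
manual      = [19.8, 20.1, 19.7, 20.3, 20.0]
spreadsheet = [19.8, 20.1, 19.7, 20.3, 19.9]
print(stats.ttest_rel(manual, spreadsheet))
```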

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 165
208 Collaboration with Governmental Stakeholders in Positioning Reputation on Value

Authors: Zeynep Genel

Abstract:

The concept of reputation in corporate development has come to the fore as one of the most frequently discussed topics in recent years. Many organizations that make worldwide investments make an effort to adapt themselves to the topics within the scope of this concept and to promote the name of the organization through the values that might become prominent. Stakeholder groups are considered the most important actors determining reputation. Even when the effect of stakeholders is not evaluated as a direct factor, the indirect effects of their perceptions on ultimate reputation are very strong. It is foreseen that the parallelism between the projected reputation and the perceived reputation, which is established as a result of the communication experiences of the stakeholders, has an important effect on achieving these objectives. In assessing the efficiency of these efforts, the opinions of stakeholders are widely utilized. In other words, the projected reputation, in which the positive and/or negative reflections of corporate communication play an effective role, is measured through how the stakeholders perceptively position the organization. From this perspective, it is thought that the interaction and cooperation of corporate communication professionals with different stakeholder groups during reputation positioning efforts play a significant role in achieving the targeted reputation and in the sustainability of this value. Governmental stakeholders, which have intense communication with mass stakeholder groups, are among the most influential stakeholder groups of an organization. The most important reason for this is that organizations regarded positively by governmental stakeholders inspire more confidence in mass stakeholders. At this point, organizations carrying out joint projects with governmental stakeholders in line with a sustainable communication approach come to the fore as organizations with a strong reputation, whereas the reputation of organizations that fall behind in this regard, or that cannot establish efficiency in this respect, is thought to be perceived as weak. Similarly, social responsibility campaigns in which governmental stakeholders are involved, and which play an efficient role in strengthening reputation, are thought to draw more attention. From this perspective, the role and effect of governmental stakeholders on reputation positioning are discussed in this study. In parallel with this objective, it is aimed to reveal the perspectives of seven governmental stakeholders towards cooperation in reputation positioning. The sample group representing the governmental stakeholders is examined in light of the results obtained from in-depth interviews with executives of different ministries. It is asserted that this study, which aims to express the importance of stakeholder participation in corporate reputation positioning, especially in Turkey, and the effective role of governmental stakeholders in strong reputation, might provide a new perspective on measuring corporate reputation, as well as an important source contributing to studies in both academic and practical domains.

Keywords: collaborative communications, reputation management, stakeholder engagement, ultimate reputation

Procedia PDF Downloads 221
207 Quality Care from the Perception of the Patient in Ambulatory Cancer Services: A Qualitative Study

Authors: Herlin Vallejo, Jhon Osorio

Abstract:

Quality is a concept that has gained importance in different scenarios over time, especially in the area of health. The nursing staff is one of the actors that contributes most to the care process and to the satisfaction of users in the evaluation of quality. However, until now, there have been few tools to measure the quality of care in specialized performance scenarios. Patients receiving ambulatory cancer treatments can face various problems, which can increase their level of distress, so improving the quality of outpatient care for cancer patients should be a priority for oncology nursing. The experience of the patient in relation to care in these services has been little investigated. The purpose of this study was to understand the perception that patients have of quality care in outpatient chemotherapy services. A qualitative, exploratory, descriptive study was carried out in 9 patients older than 18 years, diagnosed with cancer, who were treated at the Institute of Cancerology in outpatient chemotherapy rooms, had a minimum of three months of treatment with curative intention, and had given their informed consent. The total number of participants was determined by theoretical saturation, and selection was by convenience. Unstructured interviews were conducted, recorded, and transcribed. The analysis of the information was done using the technique of content analysis. Three categories emerged that reflect the perception that patients have regarding quality care: patient-centered care, care with love, and effects of care. Patients highlighted situations showing that care is centered on them, incorporating elements of patient-centered care relating to the institution, the infrastructure, and the qualities of care, in contrast with what, for them, constitutes inappropriate care. Care with love as a perception of quality care means for patients that the nursing staff must have certain qualities; they perceive caring with love as a family affair, with limits on care with love and on the nurse-patient relationship. Quality care has effects on both the patient and the nursing staff. One of the most relevant effects was the confidence that the patient develops towards the nurse, besides transforming unrealistic images about cancer treatment with chemotherapy. On the other hand, care with quality generates a commitment to self-care and is a facilitator in the transit through oncological disease and chemotherapeutic treatment, but from the perception of a healing transit. It is concluded that quality care, from the perception of patients, is a construction that goes beyond structural issues and is related to an institutional culture of quality that is reflected in the attitude of the nursing staff and in acts of care that have positive effects on the experience of chemotherapy and disease. With these results, the study contributes to a better understanding of how quality care is built from the perception of patients and opens a range of possibilities for the future development of an individualized instrument to evaluate the quality of care from the perception of patients with cancer.

Keywords: nursing care, oncology service hospital, quality management, qualitative studies

Procedia PDF Downloads 133
206 A Randomized Active Controlled Clinical Trial to Assess Clinical Efficacy and Safety of Tapentadol Nasal Spray in Moderate to Severe Post-Surgical Pain

Authors: Kamal Tolani, Sandeep Kumar, Rohit Luthra, Ankit Dadhania, Krishnaprasad K., Ram Gupta, Deepa Joshi

Abstract:

Background: Post-operative analgesia remains a clinical challenge, with central and peripheral sensitization playing a pivotal role in treatment-related complications and impaired quality of life. Centrally acting opioids offer a poor risk-benefit profile, with an increased intensity of gastrointestinal or central side effects and a slow onset of clinical analgesia. The objective of this study was to assess the clinical feasibility of induction and maintenance therapy with Tapentadol Nasal Spray (NS) in moderate to severe acute post-operative pain. Methods: This was a Phase III, randomized, active-controlled, non-inferiority clinical trial involving 294 cases who had undergone surgical procedures under general or regional anesthesia. Post-surgery, patients were randomized to receive either Tapentadol NS 45 mg or Tramadol 100 mg IV as a bolus, with subsequent 50 mg or 100 mg doses given over 2-3 minutes. The NS was administered every 4-6 hours. At the end of 24 hours, patients in the tramadol group who had a pain intensity score of ≥4 were switched to oral tramadol immediate-release 100 mg capsules until the pain intensity score fell below 4. All patients who had achieved a pain intensity score of ≤4 were shifted to a lower dose of either Tapentadol NS 22.5 mg or oral tramadol immediate-release 50 mg capsules. The statistical analysis was planned as a non-inferiority comparison with tramadol for the pain intensity difference at 60 minutes (PID60min), the sum of pain intensity differences at 60 minutes (SPID60min), and the Physician Global Assessment at 24 hours (PGA24hrs). Results: The per-protocol analysis involved 255 hospitalized cases undergoing surgical procedures. The median age of patients was 38.0 years. For the primary efficacy variables, Tapentadol NS was non-inferior to inj./oral Tramadol in the relief of moderate to severe post-operative pain. On the basis of SPID60min, no clinically significant difference was observed between Tapentadol NS and Tramadol IV (1.73 ± 2.24 vs. 1.64 ± 1.92, -0.09 [95% CI, -0.43, 0.60]). In the co-primary endpoint PGA24hrs, Tapentadol NS was non-inferior to Tramadol IV (2.12 ± 0.707 vs. 2.02 ± 0.704, -0.11 [95% CI, -0.07, 0.28]). However, on further assessment at 48, 72, and 120 hours, clinically superior pain relief was observed with the Tapentadol NS formulation, statistically significant (p < 0.05) at each of these time intervals. Secondary efficacy measures, including the onset of clinical analgesia and TOTPAR, also showed non-inferiority to tramadol. The safety profile and need for rescue medication were similar in both groups during the treatment period. The most common concomitant medications were anti-bacterials (98.3%). Conclusion: Tapentadol NS is a clinically feasible option for improved compliance as induction and maintenance therapy, offering a sustained and persistent patient response that is clinically meaningful in post-surgical settings.
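As an illustration of the non-inferiority comparison described above, the sketch below computes an approximate 95% confidence interval for the difference of two group means from summary statistics (mean, SD, n) and checks it against a pre-specified margin. The group sizes and the margin are placeholders, not the trial's analysis plan.

```python
import math

def diff_ci(mean1, sd1, n1, mean2, sd2, n2, z=1.96):
    """Approximate 95% CI for mean1 - mean2 using the normal approximation."""
    diff = mean1 - mean2
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return diff, (diff - z * se, diff + z * se)

# Summary statistics resembling an SPID-type endpoint; n per arm is assumed.
diff, (lo, hi) = diff_ci(1.73, 2.24, 128, 1.64, 1.92, 127)
margin = -0.5  # hypothetical non-inferiority margin
print(f"difference = {diff:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
print("non-inferior" if lo > margin else "non-inferiority not shown")
```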

Keywords: tapentadol nasal spray, acute pain, tramadol, post-operative pain

Procedia PDF Downloads 238
205 Family-School-Community Engagement: Building a Growth Mindset

Authors: Michelann Parr

Abstract:

Family-school-community engagement enhances family-school-community well-being, collective confidence, and school climate. While it is often presented in the literature as a positive thing for families, schools, and communities, it does not come without its struggles. While there are numerous things families, schools, and communities do each and every day to enhance engagement, it is often difficult to find our way to true belonging and engagement. Working our way past surface-level barriers is easy; we can provide childcare, transportation, resources, and refreshments. We can even change the environment so that families will feel welcome, valued, and respected. But there are often mindsets and perspectives buried deep below the surface, most often grounded in societal, familial, and political norms, expectations, pressures, and narratives. This work requires the ongoing energy, commitment, and engagement of all stakeholders, including families, schools, and communities. Each and every day, we need to take a reflective and introspective stance toward what is said and done and how it supports the overall goal of family-school-community engagement. And whatever we do must occur within a paradigm of care in addition to one of critical thinking and social justice. Families, and those working with families, must not simply accept all that is given, but should instead ask these types of questions: a) How, and by whom, are the current philosophies and practices of family-school engagement interrogated? b) How might digging below surface-level meanings support understanding of what is being said and done? c) How can we move toward meaningful and authentic engagement that balances knowledge and power between family, school, district, community (local and global), and government? This type of work requires conscious attention and intentional decision-making at all levels, bringing us one step closer to authentic and meaningful partnerships. Strategies useful for building a growth mindset include: a) interrogating and exploring consistencies and inconsistencies by looking at what is done and what is not done through multiple perspectives; b) recognizing that enhancing family engagement and changing mindsets take place at the micro level (e.g., family and school), but also require active engagement and awareness at the macro level (e.g., community agencies, district school boards, government); c) taking action as an advocate or activist: negative narratives about families, schools, and communities should not be maintained, but instead critical and courageous conversations in and out of school should be initiated and sustained; and d) maintaining consistency, simplicity, and steady progress. All involved in engagement need to be aware of the struggles but keep them in check with the many successes. Change may not be observed on a day-to-day basis or even immediately, but stepping back and looking from the outside in might change the view. Working toward a growth mindset will produce better results than a fixed mindset, and this takes time.

Keywords: family engagement, family-school-community engagement, parent engagement, parent involvement

Procedia PDF Downloads 177
204 Affects Associations Analysis in Emergency Situations

Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko

Abstract:

Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations invisible at first glance is a source of new knowledge which can subsequently be used for prediction. We used this data mining technique (an automatic and objective method) to learn about interesting affect associations in a corpus of emergency phone calls. We also made an attempt to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy), (2) valence (negative, neutral, or positive), and (3) intensity (low, typical, alternating, high). Additional information that gives a clue to the speaker's emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words), and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item being a single previously annotated piece of information). To generate the rules in the form of an implication X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used, as it avoids performing needless computations. Then, two basic measures (support and confidence) and several additional symmetric and asymmetric objective measures (e.g., Laplace, Conviction, Interest Factor, Cosine, correlation coefficient) were calculated for each rule. Each applied interestingness measure revealed different rules; we selected some top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions, though there are also strong rules including neutral or even positive emotions. Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone call corpus. The acquired knowledge can be used for prediction, to complete the emotional profile of a new caller. Furthermore, analysis of the possible context of a rule may be a clue to the situation a caller is in.
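A minimal, self-contained sketch of the support and confidence computation behind such rules is given below. The transactions are invented examples, not records from the emergency-call corpus, and a full Apriori implementation would additionally prune infrequent itemsets level by level.

```python
# Each transaction is the set of annotations for one recording (invented examples).
transactions = [
    {"sadness", "anxiety", "fast_speech"},
    {"sadness", "weariness", "stress", "frustration", "anger"},
    {"compassion", "sadness"},
    {"calm", "relief"},
    {"sadness", "anxiety", "repeated_words"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """P(consequent | antecedent) estimated from the transactions."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

# Rule {sadness} -> {anxiety}
rule_x, rule_y = {"sadness"}, {"anxiety"}
print("support:", support(rule_x | rule_y, transactions))
print("confidence:", confidence(rule_x, rule_y, transactions))
```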

Keywords: data mining, emergency phone calls, emotional profiles, rules

Procedia PDF Downloads 403
203 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Control Release of Doxorubicin

Authors: Parisa Shirzadeh

Abstract:

Drug delivery systems in which drugs are administered in the traditional way, in multiple stages and at specified intervals by patients, do not meet the needs of up-to-date drug delivery. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are made with genetic engineering techniques. Most of these drugs are used to treat critical diseases such as cancer. Because of the limitations of the traditional method, researchers sought ways to solve its problems to a large extent. Following these efforts, controlled drug release systems were introduced, which have many advantages: using controlled release of the drug in the body, the concentration of the drug is kept at a certain level, and release can be achieved at a higher rate within a short time. Graphene is a biodegradable, non-toxic, natural material; compared to carbon nanotubes, its price is lower, and it is cost-effective for industrialization. In addition, the highly reactive sites and wide surface area of graphene plates make it more effective to modify graphene than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. In comparison with the initial graphene, the resulting graphene oxide is heavier and carries carboxyl, hydroxyl, and epoxy groups. Therefore, graphene oxide is very hydrophilic, easily dissolves in water, and forms a stable solution. Moreover, because the hydroxyl, carboxyl, and epoxy groups created on the surface are highly reactive, they can react with other functional groups such as amines, esters, and polymers, connecting to them and bringing new features to the surface of graphene. In fact, it can be concluded that the creation of hydroxyl, carboxyl, and epoxy groups, that is, graphene oxidation, is the first step in creating other functional groups on the surface of graphene. Chitosan is a natural polymer and does not cause toxicity in the body. Due to its chemical structure, with OH and NH groups, it is suitable for binding to graphene oxide and increasing its solubility in aqueous solutions. In this work, graphene oxide (GO) was covalently modified by chitosan (CS) and developed for the controlled release of doxorubicin (DOX). GO was produced by the Hummers method under acidic conditions and then chlorinated with oxalyl chloride to increase its reactivity toward amines. After that, in the presence of chitosan, an amidation reaction was performed to form amide linkages, and doxorubicin was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman, TGA, and SEM. The loading and release capability was determined by UV-Visible spectroscopy. The loading results showed a high DOX absorption capacity (99%), and pH-dependent release of DOX from the GO-CS nanosheet was identified at pH 5.3 and 7.4, with a faster release rate under acidic conditions.

Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin

Procedia PDF Downloads 117
202 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today, websites contain very interesting applications, but there are only a few methodologies for analyzing user navigation through a website and determining whether the site is being put to correct use. Web logs are typically only examined after a major attack or malfunction occurs, yet they contain a lot of interesting information about the users of the system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is not easy because of the size and distribution of the logs and the importance of minor details in each entry. Web logs contain very important data about users and the site which have not been put to good use. Retrieving interesting information from the logs gives an idea of what users need, allows grouping users according to their various needs, and helps improve the site so that it is effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and to perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this solution, which is fully automated; expert knowledge is only used in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files, a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, while the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to obtain the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as parameters, these algorithms self-evaluate in order to feed better parameter values into subsequent runs. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Association Rule Learning Module; if it outputs confidence and support of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used in anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
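A compact sketch of the clustering-and-evaluation step described above is shown below, using scikit-learn's DBSCAN and the silhouette coefficient on toy session feature vectors. The features, eps, and min_samples values are placeholders, not the parameters tuned in this work.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

# Toy feature vectors for web sessions, e.g. (requests, distinct URLs, avg dwell time in s).
sessions = np.array([
    [3, 2, 10.0], [4, 3, 12.0], [3, 3, 11.0],      # one behavioural group
    [40, 25, 2.0], [38, 27, 1.5], [42, 24, 2.2],   # another (e.g. crawler-like) group
    [200, 5, 0.1],                                  # potential anomaly
])

labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(sessions)
print("cluster labels:", labels)  # label -1 marks noise / potential anomalies

# The silhouette coefficient is only defined when at least two clusters remain after
# removing noise points; it can guide the next choice of eps and min_samples.
mask = labels != -1
if len(set(labels[mask])) > 1:
    print("silhouette:", silhouette_score(sessions[mask], labels[mask]))
```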

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 280
201 A Deep Dive into the Multi-Pronged Nature of Student Engagement

Authors: Rosaline Govender, Shubnam Rambharos

Abstract:

Universities are, to a certain extent, the source of under-preparedness ideologically, structurally, and pedagogically, particularly since organizational cultures often alienate students by failing to enable epistemological access. This is evident in the unsustainably low graduation rates that characterize South African higher education, which indicate that under 30% of students graduate in minimum time, under two-thirds graduate within 6 years, and one-third have not graduated after 10 years. Although the statistics for the Faculty of Accounting and Informatics at the Durban University of Technology (DUT) in South Africa have improved significantly from 2019 to 2021, the graduation (32%), throughput (50%), and dropout (16%) rates are still a matter for concern, as the graduation rates, in particular, are quite similar to the national statistics. For our students to succeed, higher education should take a multi-pronged approach to ensuring student success, and student engagement is one of the ways to support our students. Student engagement depends not only on students' teaching and learning experiences but, more importantly, on their social and academic integration, their sense of belonging, and their emotional connections with the institution. Such experiences need to challenge students academically and engage their intellect, grow their communication skills, build self-discipline, and promote confidence. The aim of this mixed-methods study is to explore the multi-pronged nature of student success within the Faculty of Accounting and Informatics at DUT, focusing on the enabling and constraining factors of student success. The sources of data were the Mid-year Student Experience Survey (N=60), the Hambisa Student Survey (N=85), and semi-structured focus group interviews with first-, second-, and third-year students of the Faculty of Accounting and Informatics Hambisa program. The Hambisa (“moving forward”) focus area is part of the Siyaphumelela 2.0 project at DUT and seeks to understand the multiple challenges impacting student success that create a large “middle” cohort of students who are stuck in transition within academic programs. Using the lens of the sociocultural influences on student engagement framework, we conducted a thematic analysis of the two surveys and the focus group interviews. Preliminary findings indicate that living conditions, choice of program, access to resources, motivation, institutional support, infrastructure, and pedagogical practices impact student engagement and, thus, student success. It is envisaged that the findings from this project will assist the university in being better prepared to enable student success.

Keywords: social and academic integration, socio-cultural influences, student engagement, student success

Procedia PDF Downloads 68
200 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare hundreds of thousands of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We used fpgrowth as the basic FPM algorithm and built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. That algorithm and ours share some similar properties, but they are also very different in other respects. The main difference is that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
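The sketch below illustrates the core idea as a hedged toy example rather than the authors' framework: count carriers of a two-variant genotype pattern among cases and controls, estimate the confidence P(case | pattern), and assess case enrichment with a label-permutation test. The data are randomly generated, and a real analysis would use fpgrowth to enumerate frequent patterns and correct for the number of patterns tested.

```python
import random

random.seed(0)
n_cases, n_controls, n_variants = 143, 153, 10

# Toy genotypes coded 0/1/2 (copies of the minor allele); element 0 is the case label (1/0).
def make_person(is_case):
    return [is_case] + [random.choice([0, 1, 2]) for _ in range(n_variants)]

people = [make_person(1) for _ in range(n_cases)] + [make_person(0) for _ in range(n_controls)]

def pattern_counts(people, v1, g1, v2, g2):
    """How many cases / controls carry genotype g1 at variant v1 AND g2 at variant v2."""
    carriers = [p[0] for p in people if p[1 + v1] == g1 and p[1 + v2] == g2]
    return sum(carriers), len(carriers) - sum(carriers)

def permutation_p(people, v1, g1, v2, g2, n_perm=1000):
    """P-value for case enrichment of the pattern, permuting case/control labels."""
    observed, _ = pattern_counts(people, v1, g1, v2, g2)
    labels = [p[0] for p in people]
    hits = 0
    for _ in range(n_perm):
        random.shuffle(labels)
        shuffled = [[lab] + p[1:] for lab, p in zip(labels, people)]
        perm_cases, _ = pattern_counts(shuffled, v1, g1, v2, g2)
        hits += perm_cases >= observed
    return (hits + 1) / (n_perm + 1)

# Example: the double-heterozygote pattern at variants 0 and 1.
cases_with, controls_with = pattern_counts(people, 0, 1, 1, 1)
confidence = cases_with / max(cases_with + controls_with, 1)  # estimate of P(case | pattern)
print(cases_with, controls_with, confidence, permutation_p(people, 0, 1, 1, 1))
```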

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 115
199 The Integration of Prosecutorial Discretion in the Anti-Money Laundering Regime in Nigeria: A Focus on Politically Exposed Persons

Authors: Chineduum Okpala

Abstract:

Nigeria, since her independence, has been engulfed in financial crimes of different forms, from embezzlement and conversion of public funds by public servants to stealing, contract inflation, and money laundering. Money laundering in Nigeria, particularly by politically exposed persons, has been an issue of concern since independence. Corruption has been endemic, and Nigeria needs to integrate proactive measures to show the international community that it is ready to move against this vice. This paper discusses the negative effects of corruption and their impact on prosecutorial discretion. It also takes cognisance of the aims of the anti-money laundering (AML) policy as enacted in Nigeria, and takes as valid the assumption that the effective application of the rule of law will improve the efficacy of the Nigerian regime. In this regard, the perspective is internal to the Nigerian regime and its internal policy discourse, which also reflects its policy discourse at the international level. The paper takes notice of the typology of money laundering (ML) offences that most affects Nigeria, which hinges on corruption and abuse of office by a specific type of person, the politically exposed person (PEP). This typology of money laundering offence appears to be the most prevalent in developing nations like Nigeria. The application of essential principles of law provides an opportunity for the internalisation of the rule of law in the anti-money laundering regime in Nigeria, which could aid the successful prosecution of politically exposed persons for money laundering offences. The rule of law, and how well the Nigerian legal system manages the interface between high-level politics and the criminal justice system, cannot be understood from internal sources alone but must be developed as a genuine yet critical account informed by perspectives external to the Nigerian regime. If the efficacy of the regime is to be assessed in view of its notorious failures, an external assessment is needed. Hence, the paper discusses the need to integrate the essential principles of law into the application of prosecutorial discretion in the anti-money laundering regime in Nigeria, particularly with politically exposed persons. The paper highlights jurisdictions where prosecutorial discretion is integrated into the anti-money laundering regime in accordance with the rule of law, which forms a basis for a comparative analysis of the success of the anti-money laundering regime in Nigeria. The paper also discusses why the application of prosecutorial discretion should not be used as a tool to extricate the rich and powerful in society from justice, and argues that the successful prosecution of politically exposed persons will raise the confidence of citizens and the international community in the anti-money laundering regime in Nigeria.

Keywords: money laundering, politically exposed persons, corruption, Nigeria

Procedia PDF Downloads 124
198 Edible Active Antimicrobial Coatings onto Plastic-Based Laminates and Its Performance Assessment on the Shelf Life of Vacuum Packaged Beef Steaks

Authors: Andrey A. Tyuftin, David Clarke, Malco C. Cruz-Romero, Declan Bolton, Seamus Fanning, Shashi K. Pankaj, Carmen Bueno-Ferrer, Patrick J. Cullen, Joe P. Kerry

Abstract:

Prolonging shelf-life is essential in order to address issues such as supplier demands across continents, economic profit, customer satisfaction, and the reduction of food wastage. Smart packaging solutions, in the form of naturally occurring antimicrobially active packaging, may be a solution to these and other issues. A gelatin film-forming solution containing naturally sourced antimicrobials is a promising tool for active smart packaging. The objective of this study was to coat conventional hydrophobic plastic packaging material with a hydrophilic antimicrobial beef gelatin coating and to conduct shelf-life trials on beef sub-primal cuts. The minimal inhibitory concentrations (MIC) of caprylic acid sodium salt (SO) and the commercially available Auranta FV (AFV; a bitter orange extract with a mixture of nutritive organic acids) were found to be 1% and 1.5%, respectively, against the bacterial strains Bacillus cereus, Pseudomonas fluorescens, Escherichia coli, and Staphylococcus aureus and against aerobic and anaerobic beef microflora. Therefore, SO or AFV was incorporated into the beef gelatin film-forming solution at a concentration of two times the MIC, and this was coated onto the inner cold-plasma-treated polyethylene surface of a conventional LDPE/PA film. Beef samples were vacuum packed in this material, stored under chilled conditions, and sampled at weekly intervals during the 42-day shelf-life study. No significant differences (p < 0.05) in cook loss were observed among the different treatments compared to control samples until day 29; only for the AFV-coated beef samples was cook loss 3% higher (37.3%) than the control (34.4%) on day 36. The antimicrobial films did not protect the beef against discoloration. SO-containing packages significantly (p < 0.05) reduced total viable bacterial counts (TVC) compared to the control and AFV samples until day 35. No significant reduction in TVC was observed between SO and AFV films on day 42, but a significant difference was observed compared to control samples, with a 1.40 log reduction in bacteria on day 42. AFV films significantly (p < 0.05) reduced TVC compared to control samples from day 14 until day 42. Control samples reached the set limit of 7 log CFU/g on day 27 of testing, whereas AFV films did not reach this limit until day 35 and SO films until day 42. The antimicrobial AFV- and SO-coated films thus significantly prolonged the shelf-life of beef steaks by 33% or 55% (7 and 14 days, respectively) compared to control film samples. It is concluded that antimicrobial coated films were successfully developed by coating the inner polyethylene layer of conventional LDPE/PA laminated films after plasma surface treatment. The results indicated that the use of antimicrobial active packaging coated with SO or AFV significantly (p < 0.05) increased the shelf life of the beef sub-primals. Overall, AFV- or SO-containing gelatin coatings have the potential to be used as effective antimicrobials in active packaging applications for muscle-based food products.

Keywords: active packaging, antimicrobials, edible coatings, food packaging, gelatin films, meat science

Procedia PDF Downloads 298