Search results for: multi objective particle swarm optimization (MOPSO)

3545 Problems concerning Formation of Institutional Framework for Electronic Democracy in Georgia

Authors: Giorgi Katamadze

Abstract:

Open public service and accountability towards citizens is an important feature of a democratic state based on the rule of law. Effective use of electronic resources simplifies bureaucratic procedures, enables direct communication, helps exchange information, ensures the government's openness and, in general, helps develop electronic/digital democracy. Development of electronic democracy should be a strategic dimension of Georgian governance. Formation of electronic democracy and its functional improvement should become an important dimension of the state's information policy. Electronic democracy is based on electronic governance and implies modern information and communication systems and their adaptation to universal standards. E-democracy needs the involvement of governments, voters, political parties and social groups in an electronic form. In recent years, the process of interaction between the citizen and the state has become simpler. This is achieved by the use of modern technological systems, which give citizens the possibility of using different public services online. For example, the website my.gov.ge makes interaction between the citizen, business and the state simpler, more convenient and more secure. A higher standard of accountability and interaction is being established. Electronic democracy brings new forms of interaction between the state and the citizen: e-engagement – participation of society in state politics via electronic systems; e-consultation – electronic interaction among public officials, citizens and interested groups; e-controllership – electronic rule and control of public expenses and services. Public transparency is one of the milestones of electronic democracy as well as of representative democracy, as democracy can be established only on mutual trust and accountability. In Georgia, institutional changes concerning the establishment and development of electronic democracy are insufficient. Effective planning and implementation of a comprehensive and multi-component e-democracy program (central, regional and local levels) requires telecommunication systems as well as institutional (public service, competencies, logical system) and informational (relevant conditions for public involvement) support. Therefore, a systematic project for the formation of electronic governance should be developed, covering the central, regional and municipal levels and certain aspects of the development of an instrumental basis for electronic governance.

Keywords: e-democracy, e-governance, e-services, information technology, public administration

Procedia PDF Downloads 328
3544 Cooperative Learning Promotes Successful Learning. A Qualitative Study to Analyze Factors that Promote Interaction and Cooperation among Students in Blended Learning Environments

Authors: Pia Kastl

Abstract:

Potentials of blended learning are the flexibility of learning and the possibility to get in touch with lecturers and fellow students on site. By combining face-to-face sessions with digital self-learning units, the learning process can be optimized and learning success increased. To examine whether blended learning outperforms online and face-to-face teaching, a theory-based questionnaire survey was conducted. The results show that interaction and cooperation among students are poorly supported in blended learning, and face-to-face teaching performs better in this respect. The aim of this article is to identify concrete suggestions students have for improving cooperation and interaction in blended learning courses. For this purpose, interviews were conducted with students from various academic disciplines in face-to-face, online, or blended learning courses (N = 60). The questions referred to opinions and suggestions for improvement regarding the course design of the respective learning environment. The analysis was carried out by qualitative content analysis. The results show that students perceive the interaction as beneficial to their learning. They verbalize their knowledge and are exposed to different perspectives. In addition, emotional support is particularly important in exam phases. Interaction and cooperation were primarily enabled in the face-to-face component of the courses studied, while there was very limited contact with fellow students in the asynchronous component. Forums offered were hardly used or not used at all because the barrier to asking a question publicly is too high, and students prefer private channels for communication. This is accompanied by the disadvantage that interaction occurs only among people who already know each other. Creating contacts is not fostered in the blended learning courses. Students see possibilities for optimization as a task of the lecturers in the face-to-face sessions: here, interaction and cooperation should be encouraged through get-to-know-you rounds or group work. It is important to group the participants randomly so that they establish contact with new people. In addition, sufficient time for interaction is desired in the lecture, e.g., in the context of discussions or partner work. In the digital component, students prefer synchronous exchange at a fixed time, for example, in breakout rooms or an MS Teams channel. The results provide an overview of how interaction and cooperation can be implemented in blended learning courses. Positive design possibilities are partly dependent on subject area and course. Future studies could tie in here with a course-specific analysis.

Keywords: blended learning, higher education, hybrid teaching, qualitative research, student learning

Procedia PDF Downloads 65
3543 Applying Lean Six Sigma in an Emergency Department of a Private Hospital

Authors: Sarah Al-Lumai, Fatima Al-Attar, Nour Jamal, Badria Al-Dabbous, Manal Abdulla

Abstract:

Today, many commonly used industrial engineering tools and techniques are being used in hospitals around the world with the goal of producing a more efficient and effective healthcare system. A common quality improvement methodology known as Lean Six Sigma has been successful in manufacturing industries and, more recently, in healthcare. The objective of our project is to use the Lean Six Sigma methodology to reduce waiting time in the Emergency Department (ED) of a local private hospital. Furthermore, a comprehensive literature review was conducted to evaluate the success of Lean Six Sigma in the ED. According to a study conducted at Ibn Sina Hospital in Morocco, the most common problem that patients complain about is waiting time. To ensure patient satisfaction, many hospitals, such as North Shore University Hospital, were able to reduce waiting time by up to 37% by using Lean Six Sigma. Other hospitals, such as Johns Hopkins Medical Center, used Lean Six Sigma successfully to enhance the overall patient flow and ultimately decrease waiting time. Furthermore, it was found that capacity constraints, such as staff shortages and lack of beds, were among the main reasons behind long waiting times. With the use of Lean Six Sigma and bed management, hospitals like Memorial Hermann Southwest Hospital were able to reduce patient delays. Moreover, in order to successfully implement Lean Six Sigma in our project, two common methodologies were considered, DMAIC and DMADV. After assessing both methodologies, it was found that DMAIC was the more suitable approach for our project because it is more concerned with improving an already existing process. Despite its many successes, Lean Six Sigma has limitations, especially in healthcare, but these can be minimized if properly approached.

Keywords: lean six sigma, DMAIC, hospital, methodology

Procedia PDF Downloads 487
3542 Reading the Interior Furnishings of the Houses through Turkish Films in the 1980s

Authors: Dicle Aydın, Tuba Bulbul Bahtiyar, Esra Yaldız

Abstract:

Housing offers a confirmed space for individuals. In the sense of interior decoration design, housing is a kind of typology in which the user's profile and individual preferences are considered the primary determinants. In Turkish society, the transition from traditional residences to apartment buildings brings in its wake changes in interior fittings, depending upon the location of the houses. The social status of the users of the residence and the differences in their everyday life are represented more evidently in these interior fittings. Hence, space becomes a tool that carries information about the users and their acts. From this aspect, space as a concrete tool also enables a multidirectional communication with the cinema, which reflects the social, cultural and economic changes of the society. While space takes a virtual or real part in the cinema, the discipline of architecture has also been influenced by cinematic phenomena in its own practice. The subject of the movie and its content commune with the space; therefore, the design of the space is formed to support the subject. The purpose of this study is to analyze the space through motion pictures that convey information about social life from an objective perspective. In addition, this study aims to determine the space, the fittings and the use of fittings with respect to the social status of the users. Moreover, three 1980s films starring Kemal Sunal, whose scripts reflect society in many ways, are examined in this study. The movie sets are considered in many ways. For instance, in one of these movies, different houses in an apartment building are analyzed from the perspective of the study.

Keywords: housing, interior, furniture, furnishing, user

Procedia PDF Downloads 196
3541 A Novel Method for Isolation of Kaempferol and Quercetin from Podophyllum Hexandrum Rhizome

Authors: S. B. Bhandare, K. S. Laddha

Abstract:

Podophyllum hexandrum, belonging to the family Berberidaceae, has gained attention in phytochemical and pharmacological research as it shows excellent anticancer activity and has been used in the treatment of skin diseases and sunburns and for radioprotection. Chemically, it contains lignans and flavonoids such as kaempferol, quercetin and their glycosides. Objective: To isolate and identify kaempferol and quercetin from Podophyllum rhizome. Method: The powdered rhizome of Podophyllum hexandrum was subjected to Soxhlet extraction with methanol. The methanolic extract was used to obtain podophyllin. Podophyllin was extracted with ethyl acetate, and this extract was then concentrated and subjected to column chromatography to obtain purified kaempferol and quercetin. Result: The isolated kaempferol and quercetin were light yellow and dark yellow in colour, respectively. TLC of the isolated compounds was performed using chloroform:methanol (9:1), which showed single bands on the silica plate at Rf 0.6 and 0.4 for kaempferol and quercetin, respectively. UV spectrometric studies showed UV maxima (methanol) at 259, 360 nm and 260, 370 nm, which are identical to those of standard kaempferol and quercetin, respectively. Both IR spectra exhibited prominent absorption bands for free phenolic OH at 3277 and 3296.2 cm-1 and for conjugated C=O at 1597 and 1659.7 cm-1, respectively. The mass spectra of kaempferol and quercetin showed (M+1) peaks at m/z 287 and 303.09, respectively. 1H NMR analysis of both isolated compounds exhibited the typical four-peak pattern of two doublets at δ 6.86 and δ 8.01, which were assigned to H-3',5' and H-2',6', respectively. The absence of signals below δ 6.81 in the 1H NMR spectrum supported the aromatic nature of the compounds. Kaempferol and quercetin showed 98.1% and 97% purity by HPLC at UV 370 nm. Conclusion: An easy and simple method for the isolation of kaempferol and quercetin was developed, and their structures were confirmed by UV, IR, NMR and mass spectrometry studies. The method showed good reproducibility, yield and purity.

Keywords: flavonoids, kaempferol, podophyllum rhizome, quercetin

Procedia PDF Downloads 300
3540 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging

Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati

Abstract:

Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research in adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build neural networks upon them, integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = arg min_q D(y, ỹ), (1) where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network but also in a classic way via the resolution of a PDE with given input coefficients (forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder-type networks. This procedure forms priors of the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = arg min_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints (Fig. 1) in the overall optimization procedure (Fig. 1). 3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
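As a rough illustration of the regularized formulation q̂* = arg min_q D(y, ỹ) + R(q) discussed above, the following Python sketch solves a toy linear inverse problem by gradient descent with a Tikhonov penalty. The linear surrogate forward operator, the noise level and the regularization weight are illustrative assumptions; they stand in for the PDE forward model and the learned, network-based priors used in the actual work.

```python
# Minimal sketch of q* = argmin_q D(y, y~(q)) + R(q) for a toy linear problem.
# A random linear operator A and a Tikhonov penalty stand in for the DOT forward model
# and the learned regularizer described in the abstract (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 80, 200                          # observations y, optical parameters q
A = rng.normal(size=(n_obs, n_par))             # surrogate linearized forward model
q_true = np.zeros(n_par)
q_true[90:110] = 1.0                            # ground-truth inclusion
y = A @ q_true + 0.01 * rng.normal(size=n_obs)  # noisy measurements

lam, lr = 1e-1, 1e-3                            # regularization weight and step size
q = np.zeros(n_par)
for _ in range(5000):
    residual = A @ q - y                        # data-fidelity term D(y, y~)
    grad = A.T @ residual + lam * q             # gradient of 0.5*||Aq - y||^2 + 0.5*lam*||q||^2
    q -= lr * grad

print("relative reconstruction error:", np.linalg.norm(q - q_true) / np.linalg.norm(q_true))
```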

Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization

Procedia PDF Downloads 70
3539 To Allow or to Forbid: Investigating How Europeans Reason about Endorsing Rights to Minorities: A Vignette Methodology Based Cross-Cultural Study

Authors: Silvia Miele, Patrice Rusconi, Harriet Tenenbaum

Abstract:

An increasingly multi-ethnic Europe has been pushing citizens' boundaries on who should be entitled, and to what extent, to practise their own diversity. Indeed, according to a Standard Eurobarometer survey conducted in 2017, immigration is seen by Europeans as the most serious issue facing the EU, and a third of respondents reported they do not feel comfortable interacting with migrants from outside the EU. Many of these migrants come from Muslim countries; Muslims accounted for 4.9% of Europe's population in 2016, and the figure is projected to rise to 14% by 2050. Additionally, political debates have increasingly focused on Muslim immigrants, who are frequently portrayed as difficult to integrate, while nationalist parties across Europe have fostered the idea of insuperable cultural differences, creating an atmosphere of hostility. Using a 3 x 3 x 2 between-subjects design, we investigated how people reason about endorsing religious and non-religious rights to minorities. An online survey was administered to university students in three different countries (Italy, Spain and the UK) via Qualtrics, presenting hypothetical scenarios through a vignette methodology. Each respondent was randomly allocated to one of the three following conditions: Christian, Muslim or non-religious (vegan) target. Each condition entailed three questions about children's self-determination rights to exercise some control over their own lives and three questions about children's nurturance rights of care and protection. Moreover, participants were required to elaborate further on their answers via free-text entries, were asked about their contact with the three targets and the quality of that contact, and were asked to self-report religious, national and ethnic identification. Answers were recorded on a Likert scale of 1-5, 1 being "not at all" and 5 being "very much". A two-way ANCOVA will be used to analyse answers to closed-ended questions, while free-text answers will be coded and the data will be dichotomised based on Social Cognitive Domain Theory for four categories: moral, social conventional and psychological reasons, and analysed via ANCOVAs. This study's findings aim to contribute to the implementation of educational interventions and speak to the introduction of governmental policies on human rights.

Keywords: children's rights, Europe, migration, minority

Procedia PDF Downloads 124
3538 A Focused, High-Intensity Spread-Spectrum Ultrasound Solution to Prevent Biofouling

Authors: Alan T. Sassler

Abstract:

Biofouling is a significant issue for ships, especially those based in warm-water ports. Biofouling damages hull coatings, degrades platform hydrodynamics, blocks cooling water intakes and returns, reduces platform range and speed, and increases fuel consumption. Although platforms are protected to some degree by antifouling paints, these paints are much less effective on stationary platforms, and problematic biofouling can occur on antifouling-paint-protected stationary platforms in some environments in as little as a few weeks. Remediation hull cleaning operations are possible, but they are very expensive, sometimes result in damage to the vessel's paint or hull, and are generally not completely effective. Ultrasound with sufficient intensity focused on specific frequency ranges can be used to prevent the growth of biofouling organisms. The use of ultrasound to prevent biofouling is not new, but systems to date have focused on protecting platforms by shaking the hull using internally mounted transducers similar to those used in ultrasonic cleaning machines. While potentially effective, this methodology does not scale well to large platforms, and there are significant costs associated with installing and maintaining these systems, which dwarf the initial purchase price. An alternative approach has been developed which uses highly directional pier-mounted transducers to project high-intensity spread-spectrum ultrasonic energy into the water column, focused near the surface. This focused energy has been shown to prevent biofouling at ranges of up to 50 meters from the source. Spreading the energy over a multi-kilohertz band makes the system both more effective and more environmentally friendly. This system has been shown to be both effective and inexpensive in small-scale testing and is now being characterized on a larger scale in selected marinas. To date, test results collected in Florida marinas suggest that this approach can be used to keep ensonified areas of thousands of square meters free from biofouling, although care must be taken to minimize shaded areas.

Keywords: biofouling, ultrasonic, environmentally friendly antifoulant, marine protection, antifouling

Procedia PDF Downloads 52
3537 The Femoral Eversion Endarterectomy Technique with Transection: Safety and Efficacy

Authors: Hansraj Riteesh Bookun, Emily Maree Stevens, Jarryd Leigh Solomon, Anthony Chan

Abstract:

Objective: This was a retrospective cross-sectional study evaluating the safety and efficacy of femoral endarterectomy using the eversion technique with transection, as opposed to the conventional endarterectomy technique with either vein or synthetic patch arterioplasty. Methods: Between 2010 and mid-2017, 19 patients with a mean age of 75.4 years underwent eversion femoral endarterectomy with transection by a single surgeon. There were 13 males (68.4%), and the comorbid burden was as follows: ischaemic heart disease (53.3%), diabetes (43.8%), stage 4 kidney impairment (13.3%) and current or ex-smoking (73.3%). The indications were claudication (45.5%), rest pain (18.2%) and tissue loss (36.3%). Results: The technical success rate was 100%. One patient required a blood transfusion following bleeding from intraoperative losses. Two patients required blood transfusions for low postoperative haemoglobin concentrations, one of them in the context of myelodysplastic syndrome. There were no unexpected returns to theatre. The mean length of stay was 11.5 days, with two patients having inpatient stays of 36 and 50 days, respectively, due to the need for rehabilitation. There was one death unrelated to the operation. Conclusion: The eversion technique with transection is safe and effective, with low complication rates and a normally expected length of stay. It offers the advantage of not requiring a synthetic patch. This technique features minimal extraneous dissection, as there is no need to harvest vein for a patch. Additionally, future endovascular interventions can be performed by puncturing the native vessel. There is no change to the femoral bifurcation anatomy after this technique. We posit that this is a useful adjunct to the surgeon's panoply of vascular surgical techniques.

Keywords: endarterectomy, eversion, femoral, vascular

Procedia PDF Downloads 190
3536 Eye Tracking Syntax in Language Education

Authors: Marcus Maia

Abstract:

The present study reports and discusses the use of qualitative eye-tracking data in reading workshops in Brazilian middle and high schools and in Generative Syntax and Sentence Processing courses at the undergraduate and graduate levels at the Federal University of Rio de Janeiro, respectively. Both endeavors take the sentential level as the proper object to be metacognitively explored in language education (cf. Chomsky, Gallego & Ott, 2019) to develop the innate science-forming capacity and knowledge of language. In both projects, non-discrepant qualitative eye-tracking data collected and quantitatively analyzed in experimental syntax and psycholinguistic studies carried out in Lapex (Experimental Psycholinguistics Laboratory of the Federal University of Rio de Janeiro) were displayed to students as a point of departure, triggering discussions. Classes would generally start with the display of videos showing eye-tracking data, such as gaze plots and heatmaps from several studies in Psycholinguistics and Experimental Syntax that we had already developed in our laboratory. The videos usually triggered discussions with students about linguistic and psycholinguistic issues, such as the reading of sentences for gist, garden-path sentences, syntactic and semantic anomalies, the filled-gap effect, island effects, direct and indirect cause, and recursive constructions, among other topics. Active, problem-solving-based methodologies were employed with the objective of stimulating student participation. The communication also discusses the importance of developing full literacy, epistemic vigilance and intellectual self-defense in an infodemic world, along the lines of Maia (2022).

Keywords: reading, educational psycholinguistics, eye-tracking, active methodology

Procedia PDF Downloads 55
3535 Valorisation of Waste Chicken Feathers: Electrospun Antibacterial Nanoparticles-Embedded Keratin Composite Nanofibers

Authors: Lebogang L. R. Mphahlele, Bruce B. Sithole

Abstract:

Chicken meat is the most consumed meat in South Africa, with a per capita consumption of >33 kg yearly. Hence, South Africa produces over 250 million kg of waste chicken feathers each year, the majority of which is landfilled or incinerated. The discarded feathers cause environmental pollution and the waste of a natural protein resource. Therefore, the valorisation of waste chicken feathers is regarded as a more environmentally friendly and cost-effective treatment. Feathers contain 91% protein, the main component being beta-keratin, a fibrous and insoluble structural protein extensively cross-linked by disulfide bonds. Keratin is usually converted into nanofibers via electrospinning for a variety of applications. Keratin nanofiber composites have many potential biomedical applications owing to their attractive features, such as a high surface-to-volume ratio and very high porosity. The application of nanofibers in biomedical wound dressings requires materials with antimicrobial properties. One approach is incorporating inorganic nanoparticles, among which silver nanoparticles have served as an important alternative antibacterial agent and have been studied against many types of microbes. The objective of this study is to combine a synthetic polymer, chicken feather keratin, and antibacterial nanoparticles to develop novel electrospun antibacterial nanofibrous composites for possible wound dressing applications. Furthermore, this study will convert a two-dimensional electrospun nanofiber membrane into three-dimensional fiber networks that resemble the structure of the extracellular matrix (ECM).

Keywords: chicken feather keratin, nanofibers, nanoparticles, nanocomposites, wound dressing

Procedia PDF Downloads 122
3534 Identification Algorithm of Critical Interface, Modelling Perils on Critical Infrastructure Subjects

Authors: Jiří J. Urbánek, Hana Malachová, Josef Krahulec, Jitka Johanidisová

Abstract:

The paper deals with the investigation and modelling of crisis situations within critical infrastructure organizations. Every crisis situation originates in the occurrence of an emergency event, especially in organizations of the energy critical infrastructure. Emergency events can be either expected events, for which crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities to cope with them, or unexpected events (the Black Swan effect), for which no pre-prepared scenario exists and which require operational coping with the crisis situation. The forms, characteristics, behaviour and utilization of crisis scenarios have various qualities, depending on the prevention and training processes of the real critical infrastructure organization. The aim is always to obtain better organizational security and continuity. The objective of this paper is to find and investigate critical/crisis zones and functions in models of critical situations of a critical infrastructure organization. The DYVELOP (Dynamic Vector Logistics of Processes) method is able to identify problematic critical zones and functions, displaying critical interfaces among the actors of crisis situations on DYVELOP maps named Blazons. First, to realize this ability, it is necessary to derive and create an identification algorithm for critical interfaces. The locations of critical interfaces flag crisis situations in a real critical infrastructure organization. In conclusion, the model of the critical interface will be demonstrated for a real Czech energy critical infrastructure subject in a blackout peril environment. The Blazons require a live PowerPoint presentation for better comprehension of this paper's mission.

Keywords: algorithm, crisis, DYVELOP, infrastructure

Procedia PDF Downloads 403
3533 Characterization of Practices among Pig Smallholders in Cambodia and Implications for Disease Risk

Authors: Phalla Miech, William Leung, Ty Chhay, Sina Vor, Arata Hidano

Abstract:

Smallholder pig farms (SPFs) are prevalent in Cambodia but are vulnerable to disease impacts, as evidenced by the recent incursion of African swine fever into the region. As part of the 'PigFluCam+' project, we sought to provide an updated picture of pig husbandry and biosecurity practices among SPFs in south-central Cambodia. A multi-stage sampling design was adopted to select study districts and villages within four provinces: Phnom Penh, Kandal, Takeo, and Kampong Speu. Structured interviews were conducted between October 2020 and May 2021 among all consenting households keeping pigs in 16 target villages. Recruited SPFs (n=176) kept 6.8 pigs on average (s.d.=7.7), with most (88%) keeping cross-bred varieties of sows (77%), growers/finishers (39%), and piglets/weaners (22%), and few keeping boars (5%). Chickens (83%) and waterfowl (56%) were commonly raised and could usually contact pigs directly (79%). Pigs were the primary source of household income for 28% of participants. While pigs tended to be housed individually (40%) or in groups (33%), 13% kept pigs free-ranging/tethered. Pigs were commonly fed agricultural by-products (80%), commercial feed (60%), and, notably, household waste (59%). Under half of SPFs vaccinated their pigs (e.g., against classical swine fever, Aujeszky's disease, and pasteurellosis, although the target disease was often unknown). Among 20 SPFs who experienced pig morbidities/mortalities within the past 6 months, only 3 (15%) reported to animal health workers, and disease etiology was rarely known. Common biosecurity measures included nets covering pig pens (62%) and restricting access to the site/pens (46%). Boot dips (0.6%) and PPE (1.2%) were rarely used. Pig smallholdings remain an important contributor to rural livelihoods. Current practices and biosecurity challenges increase risk pathways for a range of disease threats of both local and global concern. Ethnographic studies are needed to better understand local determinants and develop context-appropriate strategies.

Keywords: smallholder production, swine, biosecurity practices, Cambodia, African swine fever

Procedia PDF Downloads 175
3532 Effect of Perceived Importance of a Task in the Prospective Memory Task

Authors: Kazushige Wada, Mayuko Ueda

Abstract:

In the present study, we reanalyzed lapse errors in the last phase of a job by re-counting near-lapse errors and increasing the number of participants. We also examined the results of this study from the perspective of prospective memory (PM), which concerns future actions. This study was designed to investigate whether perceiving the importance of PM tasks caused lapse errors in the last phase of a job and to determine whether such errors could be explained from the perspective of PM processing. Participants (N = 34) conducted a computerized clicking task, in which they clicked on 10 figures that they had learned in advance, in 8 blocks of 10 trials. Participants were requested to click the check box in the start display of a block and to click the check-off box in the finishing display. This check-off task was a PM task. As a measure of PM performance, we counted the number of omission errors caused by forgetting to check off in the finishing display, which was defined as a lapse error. Perceived importance was manipulated by different instructions. Half of the participants, in the highly important task condition, were instructed that checking off was very important because equipment would be overloaded if it were not done. The other half, in the not important task condition, were instructed only about the location and procedure for checking off. Furthermore, we controlled workload and the emotion of surprise to confirm the effect of demand capacity and attention. To manipulate emotions during the clicking task, we suddenly presented a photo of a traffic accident and the sound of a skidding car followed by an explosion. Workload was manipulated by requesting participants to press the 0 key in response to a beep. Results indicated too few forgetting-induced lapse errors to be analyzed. However, there was a weak main effect of the perceived importance of the check task on near-lapse errors, in which the mouse moved to the "END" button before moving to the check box in the finishing display. In particular, the highly important task group showed more such near-lapse errors than the not important task group. Neither surprise nor workload affected the occurrence of near-lapse errors. These results imply that high perceived importance of PM tasks impairs task performance. On the basis of the multiprocess framework of PM theory, we suggest that PM task performance in this experiment relied not on monitoring the PM task, but on spontaneous retrieval.

Keywords: prospective memory, perceived importance, lapse errors, multi-process framework of prospective memory

Procedia PDF Downloads 440
3531 Use of Simulation in Medical Education: Role and Challenges

Authors: Raneem Osama Salem, Ayesha Nuzhat, Fatimah Nasser Al Shehri, Nasser Al Hamdan

Abstract:

Background: Recently, most medical schools around the globe have been using simulation for teaching and assessing students' clinical skills and competence. There are many obstacles that could face students and faculty when simulation sessions are introduced into the undergraduate curriculum. Objective: The aim of this study is to obtain the opinion of undergraduate medical students and our faculty regarding the role of simulation in the undergraduate curriculum, the simulation modalities used, and perceived barriers in implementing simulation sessions. Methods: To address the role of simulation, the modalities used, and perceived challenges to the implementation of simulation sessions, a self-administered, pilot-tested questionnaire with 18 items using a 5-point Likert scale was distributed. Participants included undergraduate male medical students (n=125) and female students (n=70) as well as faculty members (n=14). Results: Various learning outcomes are achieved and improved through the technology-enhanced simulation sessions, such as communication skills, diagnostic skills, procedural skills, self-confidence, and the integration of basic and clinical sciences. The use of high-fidelity simulators, simulated patients and task trainers was considered more desirable by our students and faculty for teaching and learning as well as for evaluation. According to most of the students, institutional support in terms of resources, staff and duration of sessions was adequate. However, motivation to participate in the sessions and the provision of adequate feedback by the staff were constraints. Conclusion: The use of the simulation laboratory is of great benefit to the students and a great teaching tool for the staff to ensure students' learning of the various skills.

Keywords: simulators, medical students, skills, simulated patients, performance, challenges, skill laboratory

Procedia PDF Downloads 400
3530 Dynamic Conformal Arc versus Intensity Modulated Radiotherapy for Image Guided Stereotactic Radiotherapy of Cranial Lesion

Authors: Chor Yi Ng, Christine Kong, Loretta Teo, Stephen Yau, FC Cheung, TL Poon, Francis Lee

Abstract:

Purpose: Dynamic conformal arc (DCA) and intensity modulated radiotherapy (IMRT) are two treatment techniques commonly used for stereotactic radiosurgery/radiotherapy of cranial lesions. IMRT plans usually give better dose conformity, while DCA plans have better dose fall-off. Rapid dose fall-off is preferred for radiotherapy of cranial lesions, but dose conformity is also important. For certain lesions, DCA plans have good conformity, while for some lesions the conformity is just unacceptable with DCA plans, and IMRT has to be used. The choice between the two may not be apparent until each plan is prepared and the dose indices compared. We describe a deviation index (DI), which is a measure of the deviation of the target shape from a sphere, and test its functionality for choosing between the two techniques. Method and Materials: From May 2015 to May 2017, our institute performed stereotactic radiotherapy for 105 patients, treating a total of 115 lesions (64 DCA plans and 51 IMRT plans). Patients were treated with the Varian Clinac iX with HDMLC. The Brainlab ExacTrac system was used for patient setup. Treatment planning was done with Brainlab iPlan RT Dose (Version 4.5.4). DCA plans were found to give better dose fall-off in terms of R50% (R50%(DCA) = 4.75 vs R50%(IMRT) = 5.242), while IMRT plans have better conformity in terms of treatment volume ratio (TVR) (TVR(DCA) = 1.273 vs TVR(IMRT) = 1.222). The deviation index (DI) is proposed to better facilitate the choice between the two techniques. DI is the ratio of the volume of a 1 mm shell of the PTV and the volume of a 1 mm shell of a sphere of identical volume. DI will be close to 1 for a near-spherical PTV, while a large DI will imply a more irregular PTV. To study the functionality of DI, 23 cases were chosen with PTV volumes ranging from 1.149 cc to 29.83 cc and DI ranging from 1.059 to 3.202. For each case, we did a nine-field IMRT plan with one-pass optimization and a five-arc DCA plan. Then the TVR and R50% of each case were compared and correlated with the DI. Results: For the 23 cases, the TVRs and R50% of the DCA and IMRT plans were examined. The conformity of IMRT plans is better than that of DCA plans, with the majority of the TVR(DCA)/TVR(IMRT) ratios > 1, values ranging from 0.877 to 1.538, while the dose fall-off is better for DCA plans, with the majority of the R50%(DCA)/R50%(IMRT) ratios < 1. Their correlations with DI were also studied. A strong positive correlation was found between the ratio of TVRs and DI (correlation coefficient = 0.839), while the correlation between the ratio of R50% values and DI was insignificant (correlation coefficient = -0.190). Conclusion: The results suggest DI can be used as a guide for choosing the planning technique. For DI greater than a certain value, we can expect the conformity of DCA plans to become unacceptably poor, and IMRT will be the technique of choice.
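To make the DI definition above concrete, the following Python sketch computes it from a binary PTV mask on an isotropic voxel grid. The distance-transform construction of the 1 mm shell, with a half-voxel offset to approximate the distance from voxel centres to the PTV surface, is an assumption, since the abstract does not specify how the shell is evaluated in the planning system.

```python
# Hedged sketch of the deviation index (DI): the volume of a 1 mm shell around the PTV
# divided by the volume of a 1 mm shell around a sphere of identical volume. The binary
# mask input, the distance-transform shell and the grid resolution are assumptions.
import numpy as np
from scipy import ndimage

def deviation_index(ptv_mask, voxel_mm=1.0):
    ptv_volume = ptv_mask.sum() * voxel_mm**3                        # PTV volume in mm^3
    # outer 1 mm shell of the PTV: background voxels within ~1 mm of the PTV surface
    dist = ndimage.distance_transform_edt(~ptv_mask, sampling=voxel_mm)
    shell_ptv = np.count_nonzero((dist > 0) & (dist <= 1.0 + 0.5 * voxel_mm)) * voxel_mm**3
    # analytical 1 mm outer shell of a sphere of the same volume
    r = (3.0 * ptv_volume / (4.0 * np.pi)) ** (1.0 / 3.0)
    shell_sphere = 4.0 / 3.0 * np.pi * ((r + 1.0) ** 3 - r ** 3)
    return shell_ptv / shell_sphere

# A near-spherical PTV gives DI near 1; an elongated PTV gives a larger DI
zz, yy, xx = np.mgrid[-40:41, -40:41, -40:41]
sphere = xx**2 + yy**2 + zz**2 <= 15**2
ellipsoid = xx**2 / 30**2 + yy**2 / 10**2 + zz**2 / 10**2 <= 1.0
print(deviation_index(sphere), deviation_index(ellipsoid))
```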

Keywords: cranial lesions, dynamic conformal arc, IMRT, image guided radiotherapy, stereotactic radiotherapy

Procedia PDF Downloads 236
3529 Simplified Modelling of Visco-Elastic Fluids for Use in Recoil Damping Systems

Authors: Prasad Pokkunuri

Abstract:

Visco-elastic materials combine the stress response properties of both solids and fluids and have found use in a variety of damping applications, both vibrational and acoustic. Defense and automotive applications, in particular, are subject to high impact and shock loading – for example, aircraft landing gear, firearms, and shock absorbers. Field-responsive fluids – a class of smart materials – are the preferred choice of energy absorbents because of their controllability. These fluids' stress response can be controlled by the application of a magnetic or electric field in a closed loop. Their rheological properties – elasticity, plasticity, and viscosity – can be varied all the way from those of a liquid such as water to those of a hard solid. This work presents a simplified model to study the impulse response behavior of such fluids for use in recoil damping systems. The well-known Burgers' equation, in conjunction with various visco-elastic constitutive models, is used to represent fluid behavior. The Kelvin-Voigt, Upper Convected Maxwell (UCM), and Oldroyd-B constitutive models are implemented in this study. Using these models in a one-dimensional framework eliminates additional complexities due to geometry, pressure, body forces, and other source terms. Using a finite difference formulation to numerically solve the governing equation(s), the response to an initial impulse is studied. The disturbance is confined within the problem domain with no-inflow, no-outflow boundary conditions, and its decay characteristics are studied. Visco-elastic fluids typically involve a time-dependent stress relaxation, which gives rise to interesting behavior when subjected to an impulsive load. For particular values of viscous damping and elastic modulus, the fluid settles into a stable oscillatory state, absorbing and releasing energy without much decay. The simplified formulation enables a comprehensive study of different modes of system response by varying relevant parameters. Using the insights gained from this study, extension to a more detailed multi-dimensional model is considered.
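A minimal numerical counterpart to the formulation above is the 1-D viscous Burgers equation, u_t + u u_x = nu u_xx, solved by explicit finite differences for an initial impulse with no-inflow/no-outflow boundaries. The full visco-elastic stress coupling (Kelvin-Voigt, UCM, Oldroyd-B) is omitted in this sketch, and the grid size, viscosity and Gaussian impulse are illustrative assumptions.

```python
# Sketch: explicit finite-difference solution of the 1-D viscous Burgers equation for an
# initial impulse, with u = 0 (no-inflow/no-outflow) boundaries. The visco-elastic stress
# models discussed in the abstract are not included; parameters are illustrative only.
import numpy as np

nx, L, nu = 201, 1.0, 5e-3
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)
u = np.exp(-((x - 0.5 * L) ** 2) / (2 * 0.02 ** 2))   # initial impulse (Gaussian)
dt = 0.2 * min(dx / u.max(), dx ** 2 / (2 * nu))      # stability-limited time step

for _ in range(2000):
    un = u.copy()
    # first-order upwind advection + central-difference diffusion on interior nodes
    adv = un[1:-1] * (un[1:-1] - un[:-2]) / dx
    diff = nu * (un[2:] - 2 * un[1:-1] + un[:-2]) / dx ** 2
    u[1:-1] = un[1:-1] - dt * adv + dt * diff
    u[0] = u[-1] = 0.0                                # no-inflow / no-outflow boundaries

print("peak amplitude after decay:", u.max())
```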

Keywords: Burgers' equation, impulse response, recoil damping systems, visco-elastic fluids

Procedia PDF Downloads 287
3528 Application of Thermal Dimensioning Tools to Consider Different Strategies for the Disposal of High-Heat-Generating Waste

Authors: David Holton, Michelle Dickinson, Giovanni Carta

Abstract:

The principle of geological disposal is to isolate higher-activity radioactive wastes deep inside a suitable rock formation to ensure that no harmful quantities of radioactivity reach the surface environment. To achieve this, wastes will be placed in an engineered underground containment facility – the geological disposal facility (GDF) – which will be designed so that natural and man-made barriers work together to minimise the escape of radioactivity. Internationally, various multi-barrier concepts have been developed for the disposal of higher-activity radioactive wastes. High-heat-generating wastes (HLW, spent fuel and Pu) present a number of technical challenges different from those associated with the disposal of low-heat-generating waste. Thermal management of the disposal system must be taken into consideration in GDF design; temperature constraints might apply to the wasteform, container, buffer and host rock. Of these, the temperature limit placed on the buffer component of the engineered barrier system (EBS) can be the most constraining factor. The heat must therefore be managed such that the properties of the buffer are not compromised to the extent that it cannot deliver the required level of safety. The maximum temperature of a buffer surrounding a container at the centre of a fixed array of heat-generating sources arises from heat diffusing from neighbouring heat-generating wastes, which incrementally contributes to the temperature of the EBS. A range of strategies can be employed for managing heat in a GDF, including the spatial arrangements or patterns of those containers; different geometrical configurations can influence the overall thermal density in a disposal facility (or an area within a facility) and therefore the maximum buffer temperature. A semi-analytical thermal dimensioning tool and methodology have been applied at a generic stage to explore a range of strategies for managing the disposal of high-heat-generating waste. A number of examples, including different geometrical layouts and chequer-boarding, are illustrated to demonstrate how these tools can be used to consider safety margins and inform strategic disposal options when faced with uncertainty at a generic stage of the development of a GDF.
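The kind of semi-analytical superposition calculation described above can be sketched briefly in Python: the temperature increment at a central disposal position is the sum of contributions from neighbouring containers, each treated as an exponentially decaying point heat source in an infinite host rock. The rock properties, heat output, decay constant and array layout below are purely illustrative assumptions, not values from the study.

```python
# Hedged sketch: temperature increment at a central position from an n x n array of decaying
# point heat sources (superposition of point-source Green's functions in infinite rock).
# All property values and the layout are illustrative assumptions.
import numpy as np

k, rho, cp = 2.5, 2600.0, 900.0            # assumed rock conductivity (W/m/K), density, heat capacity
alpha = k / (rho * cp)                     # thermal diffusivity (m^2/s)
P0, half_life = 500.0, 30.0                # assumed output per container (W) and decay half-life (yr)
lam = np.log(2.0) / (half_life * 3.15e7)   # decay constant (1/s)
pitch, n = 8.0, 5                          # assumed container spacing (m) and array size

def delta_T(r, t, n_steps=4000):
    """Temperature rise at distance r, time t, from one exponentially decaying point source."""
    tp = np.linspace(0.0, t, n_steps, endpoint=False)    # source times t' (exclude t' = t)
    dtp = t / n_steps
    tau = t - tp
    green = np.exp(-r**2 / (4.0 * alpha * tau)) / (rho * cp * (4.0 * np.pi * alpha * tau) ** 1.5)
    return np.sum(P0 * np.exp(-lam * tp) * green) * dtp

t = 50.0 * 3.15e7                                        # evaluate 50 years after emplacement
centre = (n // 2, n // 2)
rise = 0.0
for i in range(n):
    for j in range(n):
        if (i, j) == centre:
            continue                                     # the central container's own near-field
        r = pitch * np.hypot(i - centre[0], j - centre[1])   # needs the detailed EBS model instead
        rise += delta_T(r, t)
print(f"buffer temperature increment from neighbours after 50 years: {rise:.1f} K")
```

Changing the pitch or switching to a chequer-board pattern (skipping alternate positions) in this sketch shows directly how the layout controls the increment from neighbouring containers.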

Keywords: buffer, geological disposal facility, high-heat-generating waste, spent fuel

Procedia PDF Downloads 278
3527 Assessment of the Association between Serum Thrombospondin-1 Levels at the Time of Admission and the Severity of Neurological Deficit in Patients with Ischemic Stroke

Authors: A. Alhusban, M. Alqawasmeh, F. Alfawares

Abstract:

Introduction: Despite improvements in stroke management, stroke remains the leading cause of disability worldwide. It has been suggested that enhancing brain angiogenesis after stroke will improve stroke outcome. Promoting post-stroke angiogenesis requires the upregulation of angiogenic factors with a simultaneous reduction of anti-angiogenic factors. Thrombospondin-1 is the main anti-angiogenic protein in living cells. Counterintuitively, it has been shown that animals with Thrombospondin-1 knockdown have better stroke outcomes. Data about the clinical significance of Thrombospondin-1 levels at the time of admission are still lacking. The objective of this work is to assess the association between serum Thrombospondin-1 levels measured at the time of admission and baseline neurologic severity after stroke. Patients and Methods: Blood samples were collected from patients admitted to the King Abdullah University Hospital (KAUH) with ischemic stroke at the time of admission, and serum Thrombospondin-1 levels were measured using ELISA. Patients' neurologic severity was evaluated using the National Institutes of Health Stroke Scale (NIHSS). Results: Samples from 50 patients admitted between January 2016 and December 2016 were collected. The median age of participants was 68 years, and the median NIHSS was 3. Multinomial regression identified serum Thrombospondin-1 as an independent predictor of stroke outcome (p=0.003). Baseline serum Thrombospondin-1 was negatively associated with NIHSS at the time of admission (Spearman's rho correlation coefficient = 0.272, p=0.032). Conclusion: Serum Thrombospondin-1 at the time of admission may be a useful marker of stroke severity, predicting a more severe neurologic deficit.

Keywords: thrombospondin, stroke, neuroprotection, biomarkers

Procedia PDF Downloads 130
3526 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
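The feature-extraction and classification pipeline described above can be illustrated with a short Python sketch (the original work used MATLAB): first-order and GLCM texture features, sequential forward selection of 5 features, then k-NN and one-versus-one SVM classifiers. The synthetic ROIs, grey-level quantisation and reduced feature list are assumptions for illustration, not the authors' 51-feature set.

```python
# Illustrative sketch (not the authors' pipeline): FOS + GLCM texture features from tumour
# ROIs, sequential forward selection of 5 features, then k-NN and one-vs-one SVM.
# Synthetic ROIs and the small feature set are assumptions; requires scikit-image >= 0.19.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def roi_features(roi, levels=32):
    """First-order statistics plus GLCM descriptors for one quantised ROI."""
    q = np.floor(roi / roi.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    fos = [roi.mean(), roi.std(), ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3]
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(fos + glcm_feats)

# Synthetic stand-in for 42 patient ROIs with binary labels (e.g. ADC vs SqCC)
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=42)
X = np.vstack([roi_features(rng.gamma(2.0 + lab, 1.0, size=(32, 32))) for lab in labels])
y = labels

svm = SVC(kernel="rbf", decision_function_shape="ovo")
sfs = SequentialFeatureSelector(svm, n_features_to_select=5, direction="forward", cv=5)
X_sel = sfs.fit_transform(X, y)
for clf in (KNeighborsClassifier(n_neighbors=3), svm):
    acc = cross_val_score(clf, X_sel, y, cv=5).mean()
    print(type(clf).__name__, f"cross-validated accuracy ~ {acc:.2f}")
```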

Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis

Procedia PDF Downloads 321
3525 Impact of Humans on Birds of Prey in North West Rajasthan

Authors: Dau Lal Bohra, Sradha Vyas

Abstract:

Bird species are already showing climate-related changes in the dates they migrate and breed and in the timing of other key life-history events. Despite threats to their feeding and management, raptors have performed important ecological, traditional and aesthetic functions throughout the Indian subcontinent. The declines in India result from elevated adult and juvenile mortality and low breeding success. The widespread and rapid pattern of declines, i.e. in all areas irrespective of habitat or protection status, suggests that persecution through shooting or poisoning, whilst important at a local scale, is unlikely to have caused the declines. A mass killing of several species of vultures in the Indian subcontinent over the last two decades is largely blamed on the presence of a drug. Veterinary diclofenac caused an unprecedented decline in South Asia's Gyps vulture populations, with some species declining by more than 97% between 1992 and 2007. Veterinary diclofenac causes renal failure in vultures and killed tens of millions of such birds in the Indian subcontinent. The drug was finally banned there for veterinary purposes in 2006. This drug is now 'a global problem' threatening many vulnerable birds of prey. Recently, steppe eagles have also been shown to be susceptible to veterinary diclofenac, effectively increasing the potential threat level and the risks for European biodiversity. Steppe eagles are closely related to golden eagles (Aquila chrysaetos), imperial eagles (Aquila heliaca) and Spanish imperial eagles (Aquila adalberti), and all these species scavenge opportunistically on carcasses throughout their range. The Spanish imperial eagle, considered Vulnerable at the global level, is now particularly at risk due to the availability of diclofenac in Spain. These findings strengthen the case for banning veterinary diclofenac across Europe. From 2011 to 2014, more than 300 birds died at Jorbeer, Bikaner. Now, with unequivocal evidence that this veterinary drug can have a much wider impact on Europe's biodiversity, it is time for action: human formulations of diclofenac in multi-dose vials should also be banned from the market.

Keywords: mortality, birds of prey, diclofenac, Rajasthan

Procedia PDF Downloads 366
3524 An Application of Self-Health Risk Assessment among Populations Living in the Vicinity of a Fiber-Cement Roofing Factory

Authors: Phayong Thepaksorn

Abstract:

The objective of this study was to assess whether living in proximity to a roofing fiber cement factory in southern Thailand was associated with the physical, mental, social, and spiritual health domains measured in a self-reported health risk assessment (HRA) questionnaire. A cross-sectional study was conducted among community members divided into two groups: a near population (living within 0-2 km of the factory) and a far population (living within 2-5 km of the factory) (N=198). A greater proportion of those living far from the factory (65.34%) reported physical health problems than the near group (51.04%) (p=0.032). This study demonstrated that the near population group had a higher proportion of participants with positive ratings on mental assessment (30.34%) and social health impacts (28.42%) than the far population group (10.59% and 16.67%, respectively) (p<0.001). The near population group (29.79%) had a similar proportion of participants with positive ratings on spiritual health impacts compared with the far population group (27.08%). Among females, but not males, this study demonstrated that a higher proportion of the near population had a positive summative score for the self-HRA, which included all four health domains, compared to the far population (p<0.001 for females; p=0.154 for males). In conclusion, this self-HRA of the physical, mental, social, and spiritual health domains reflected the risk perceptions of populations living in the vicinity of the roofing fiber cement factory. This type of tool can bring attention to population concerns and complaints in the factory's surrounding community. Our findings may contribute to the future development of self-HRA for the HIA development procedure in Thailand.

Keywords: cement dust, health impact assessment, risk assessment, walk-through survey

Procedia PDF Downloads 370
3523 Does the Implementation of a Mindfulness-Based Intervention Affect Stress and Burnout in Nursing?

Authors: Jennifer Foss, DNP, RN-BC, NEA-BC

Abstract:

Stress and burnout in the bedside registered nurse have deleterious consequences for registered nurses, patients, and the hospitals that employ them. The objective of this study was to determine whether a sixty-minute mindfulness workshop was effective in reducing perceived levels of stress and increasing mindfulness in registered nurses working in the acute care setting. Registered nurses at a community hospital in the Northeast part of the country were recruited through e-mail and flyers in breakrooms. Participants completed the Perceived Stress Scale (PSS) and the Mindful Attention Awareness Scale (MAAS) two weeks prior to taking part in the intervention and two weeks post intervention. Of the twenty-three registered nurses who completed the baseline questionnaires, 91% were female, with an average age between 30 and 39 years. Sixty-five percent of subjects completed the questionnaires two weeks post intervention. Two weeks post intervention, registered nurses reported a decrease in the perception of stress (pre- to post-intervention PSS difference of .133), which was not significant (t=1.293, df=14, p=.217). Likewise, an increase in mindful attention of .325 was reported two weeks post intervention, indicating a favorable tendency to enter a mindful state; this finding was also not significant (t=-1.990, df=14, p=.066). In this study, nurses reported decreases in perceived stress and increases in mindfulness after attending a sixty-minute mindfulness workshop. Further research is needed to determine the long-term impact of mindfulness-based training on nurses' stress and mindfulness skills. The results of this study add to the body of literature that supports the benefits of mindfulness-based interventions in the healthcare setting.

Keywords: stress, burnout, nursing, acute care nursing

Procedia PDF Downloads 64
3522 Study on Aerosol Behavior in Piping Assembly under Varying Flow Conditions

Authors: Anubhav Kumar Dwivedi, Arshad Khan, S. N. Tripathi, Manish Joshi, Gaurav Mishra, Dinesh Nath, Naveen Tiwari, B. K. Sapra

Abstract:

In a nuclear reactor accident scenario, a large number of fission products may be released into the piping system of the primary heat transport system. The released fission products, mostly in the form of aerosol, get deposited on the inner surface of the piping system, mainly due to gravitational settling and thermophoretic deposition. The removal processes in the complex piping system are controlled to a large extent by thermal-hydraulic conditions like temperature, pressure, and flow rates. These parameters generally vary with time and therefore must be carefully monitored to predict the aerosol behavior in the piping system. The removal of aerosol depends on the size of the particles, which determines how many particles get deposited or travel across the bends and reach the other end of the piping system. The released aerosol gets deposited onto the inner surface of the piping system by various mechanisms such as gravitational settling, Brownian diffusion, thermophoretic deposition, and other deposition mechanisms. To obtain a correct estimate of deposition, the identification and understanding of the aforementioned deposition mechanisms are of great importance. These mechanisms are significantly affected by different flow and thermodynamic conditions. Thermophoresis also plays a significant role in particle deposition. In the present study, a series of experiments were performed in the piping system of the National Aerosol Test Facility (NATF), BARC, using metal aerosols (zinc) in dry environments to study the spatial distribution of particle mass and number concentration and their depletion due to various removal mechanisms in the piping system. The experiments were performed at two different carrier gas flow rates. The commercial CFD software FLUENT was used to determine the distribution of temperature, velocity, pressure, and turbulence quantities in the piping system. In addition to the in-built models for turbulence, heat transfer and flow in the commercial CFD code (FLUENT), a new sub-model, PBM (population balance model), was used to describe the coagulation process and to compute the number concentration along with the size distribution at different sections of the piping. In the sub-model, coagulation kernels are incorporated through a user-defined function (UDF). The experimental results are compared with the CFD-modelled results. It is found that most of the Zn particles (more than 35%) deposit near the inlet of the plenum chamber, and a low deposition is obtained in the piping sections. The MMAD decreases along the length of the test assembly, which shows that large particles get deposited or removed in the course of the flow, and only fine particles travel to the end of the piping system. The effect of a bend is also observed, and it is found that the relative loss in mass concentration at bends is greater in the case of the high flow rate. The simulation results show that the thermophoretic and depositional effects are more dominant for the small and large sizes than for the intermediate particle sizes. Both SEM and XRD analyses of the collected samples show that the particles are highly agglomerated, non-spherical, and composed mainly of ZnO. The coupled model framed in this work could be used as an important tool for predicting the size distribution and concentration of other aerosols released during a reactor accident scenario.
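As a standalone illustration of the population balance approach mentioned above, the following Python sketch integrates the discrete Smoluchowski coagulation equations with a constant collision kernel and checks the total number concentration against the known analytical result. The kernel value, initial monodisperse concentration and bin count are assumptions; in the study itself the kernels are supplied to FLUENT's PBM through a UDF rather than solved standalone like this.

```python
# Sketch of a discrete population balance (Smoluchowski coagulation) with a constant kernel,
# advanced by explicit Euler. Kernel, initial concentration and bin count are assumptions.
import numpy as np

K = 1e-15            # constant coagulation kernel (m^3/s), illustrative
N0 = 1e12            # initial number concentration of monomers (#/m^3)
n_bins, dt, steps = 60, 0.5, 2000

n = np.zeros(n_bins)                  # n[k] = concentration of (k+1)-mer clusters
n[0] = N0
for _ in range(steps):
    gain = np.zeros(n_bins)
    for k in range(1, n_bins):        # birth of (k+1)-mers from smaller pairs i + j = k + 1
        i = np.arange(0, k)
        gain[k] = 0.5 * K * np.sum(n[i] * n[k - 1 - i])
    loss = K * n * n.sum()            # loss of k-mers by collision with any other cluster
    n = n + dt * (gain - loss)

N_total = n.sum()
N_analytic = N0 / (1.0 + 0.5 * K * N0 * dt * steps)   # exact result for a constant kernel
print(f"numerical N(t) = {N_total:.3e}, analytic N(t) = {N_analytic:.3e}")
```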

Keywords: aerosol, CFD, deposition, coagulation

Procedia PDF Downloads 137
3521 Ragging and Sludging Measurement in Membrane Bioreactors

Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd

Abstract:

Membrane bioreactor (MBR) technology is challenged by the tendency for the membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors impact on costs more significantly than membrane surface fouling, which, unlike clogging, is largely mitigated by chemical cleaning. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and clogging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can form within 24-36 hours from dispersed < 5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred both for a cotton wool standard and for samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin (lint from laundering operations formed no rags) and on the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat-sheet MBR. Sludge samples were provided from two local MBRs, one treating municipal and the other industrial effluent. The bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ). The fouling and sludging propensity of the sludge was determined using the test cell, fouling being quantified as the rate of pressure increase against flux via the flux-step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but being less shear-thinning than the municipal sludge. Fouling, as manifested by the pressure increase Δp/Δt as a function of flux in classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples from both sludge origins, the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contributions of fouling and clogging were appraised by adjusting the clogging propensity via increasing the MLSS, both with and without a commensurate increase in the COD. Results indicated that, whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in the sludging propensity (or cake formation); the clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging is not related to fouling.
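
The image-based sludging measure described above can be sketched very simply. The snippet below assumes that darker pixels in the channel photograph correspond to sludge-covered membrane area; the threshold value and the commented-out file name are illustrative assumptions, not the study's actual processing pipeline.

import numpy as np
from PIL import Image

def clogged_fraction(image_path, threshold=100):
    """Estimate the fraction of a membrane channel photograph that is clogged.
    The image is converted to greyscale and pixels darker than `threshold`
    are treated as sludge-covered; the clogged-to-unclogged ratio follows as
    f / (1 - f)."""
    grey = np.asarray(Image.open(image_path).convert("L"), dtype=float)
    clogged = grey < threshold        # boolean mask of 'clogged' pixels
    return clogged.mean()             # fraction of channel area clogged

# Example use (hypothetical file name):
# f = clogged_fraction("channel_photo.png")
# print(f"clogged fraction: {f:.2%}, clogged/unclogged ratio: {f / (1 - f):.2f}")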

Keywords: clogging, membrane bioreactors, ragging, sludge

Procedia PDF Downloads 172
3520 The Value of Serum Procalcitonin in Patients with Acute Musculoskeletal Infections

Authors: Mustafa Al-Yaseen, Haider Mohammed Mahdi, Haider Ali Al-Zahid, Nazar S. Haddad

Abstract:

Background: Early diagnosis of musculoskeletal infections is of vital importance to avoid devastating complications. There is no single laboratory marker that is both sensitive and specific in diagnosing these infections accurately. White blood cell count, erythrocyte sedimentation rate, and C-reactive protein are not specific, as they can also be elevated in conditions other than bacterial infections, and culture and sensitivity testing is not a true gold standard owing to its variable positivity rates. Serum procalcitonin (PCT) is one of the newer laboratory markers of pyogenic infection. The objective of this study is to assess the value of PCT in the diagnosis of soft tissue, bone, and joint infections. Patients and Methods: Patients of all age groups (seventy-four patients) with a diagnosis of musculoskeletal infection were prospectively included in this study. All patients underwent white blood cell count, erythrocyte sedimentation rate, C-reactive protein, and serum procalcitonin measurements. A healthy, non-infected outpatient group (twenty-two subjects) was taken as a control group and underwent the same evaluation steps as the study group. Results: The study group showed a mean procalcitonin level of 1.3 ng/ml. Procalcitonin, at a cut-off of 0.5 ng/ml, was 42.6% sensitive and 95.5% specific in diagnosing musculoskeletal infections, with a positive predictive value of 87.5%, a negative predictive value of 48.3%, a positive likelihood ratio of 9.3, and a negative likelihood ratio of 0.6. Conclusion: Serum procalcitonin, at a cut-off of 0.5 ng/ml, is a specific but not sensitive marker in the diagnosis of musculoskeletal infections, and it can be used effectively to rule in the diagnosis of infection but not to rule it out.
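
The reported likelihood ratios follow directly from the sensitivity and specificity. The short sketch below recomputes them from the published 42.6% sensitivity and 95.5% specificity; minor differences from the quoted figures reflect rounding in the published values.

def likelihood_ratios(sensitivity: float, specificity: float):
    lr_pos = sensitivity / (1.0 - specificity)   # how much a positive test raises the odds of infection
    lr_neg = (1.0 - sensitivity) / specificity   # how much a negative test lowers the odds of infection
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.426, 0.955)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.1f}")  # approximately 9.5 and 0.6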

Keywords: procalcitonin, infection, laboratory markers, musculoskeletal

Procedia PDF Downloads 157
3519 Health Hazards Among Health Care Workers and Associated Factors in Public Hospitals, Sana'a-Yemen

Authors: Makkia Ahmad Ali Al-Falahi, Abdullah Abdelaziz Muharram

Abstract:

Background: Healthcare workers (HCWs) in Yemen are exposed to a myriad of occupational health hazards, including biological, physical, ergonomic, chemical, and psychosocial hazards, and operate in an environment considered to be one of the most hazardous occupational settings. Objective: To assess the prevalence of occupational health hazards among healthcare workers and the associated risk factors in public hospitals in Sana'a City, Yemen. Method: A descriptive cross-sectional design was used; out of a total of 5,443 HCWs in the public hospitals of Sana'a City, Yemen, 396 were selected using a multistage sampling technique. Results: More than half (60.6%) of the HCWs were aged between 20 and 30 years, 50.8% were male, 56.3% were married, 45.5% held a diploma qualification, and 65.2% had less than 6 years of experience. The overall prevalence of occupational hazards was 99%, comprising ergonomic hazards (93.4%), biological hazards (87.6%), psychosocial hazards (86.65%), physical hazards (83.3%), and chemical hazards (73.5%). There were no statistically significant differences between demographic characteristics and the prevalence of occupational hazards (p > 0.05). Conclusion and recommendations: The study showed a high overall prevalence of occupational hazards. The most prevalent biological hazard was exposure to sharps-related injury, the most prevalent physical hazard was slips, trips, and falls, the most prevalent ergonomic hazard was back or neck pain during work, the most prevalent chemical hazard was allergy to medical glove powder, and the most prevalent psychosocial hazard was verbal and physical harassment. The study recommends raising awareness among HCWs by conducting training courses to prevent occupational hazards.
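
The association between demographic characteristics and hazard prevalence was assessed with the chi-square test. The sketch below shows the general form of such a test on a hypothetical 2x2 table; the counts are illustrative only and are not the study's data.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = sex (male, female),
# columns = reported a given hazard (yes, no). Counts are illustrative.
table = np.array([[150, 51],
                  [145, 50]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # p > 0.05 implies no significant association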

Keywords: health workers, occupational hazards, risk factors, prevalence

Procedia PDF Downloads 71
3518 Convertible Lease, Risky Debt and Financial Structure with Growth Option

Authors: Ons Triki, Fathi Abid

Abstract:

The objective of this paper is twofold. The first aim is to design a contingent convertible lease contract that can ensure the financial stability of a company and recover the losses of the parties to the lease in the event of default. The second is to compare the convertible lease contract with other financing policies in terms of the inefficiencies resulting from the debt-overhang problem and asset substitution. From this perspective, this paper highlights the interaction between investment and financing policies in a dynamic model with assets in place and a growth option, where the investment cost is financed by a contingent convertible lease and equity. We explore the impact of the contingent convertible lease on the capital structure and check the reliability and effectiveness of the convertible lease contract as a means of financing. The findings show that a convertible lease contract with a sufficiently high conversion ratio entails less severe inefficiencies arising from risk-shifting and debt overhang than risky debt and pure-equity financing. The underinvestment problem pointed out by Mauer and Ott (2000) and the overinvestment problem mentioned by Hackbarth and Mauer (2012) may both be reduced under contingent convertible lease financing. Our findings predict that firm value under contingent convertible lease financing increases with asset volatility rather than decreasing with business risk. The study reveals that convertible lease contracts can provide a reliable solution for protecting the lessee and quickly recovering the losses of the counterparties to the lease upon default.
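
The result that firm value can increase with asset volatility reflects the option-like component of the firm's claims. The sketch below is a generic real-options illustration, not the paper's contingent convertible lease model: it values a growth option as a European call on project value under standard Black-Scholes assumptions (all parameter values are assumptions chosen for demonstration) and shows the option value rising with asset volatility.

from math import log, sqrt, exp
from statistics import NormalDist

def growth_option_value(V, I, r, sigma, T):
    """Black-Scholes-style value of an option to invest I in a project worth V,
    exercisable at horizon T, with risk-free rate r and asset volatility sigma."""
    N = NormalDist().cdf
    d1 = (log(V / I) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return V * N(d1) - I * exp(-r * T) * N(d2)

for sigma in (0.2, 0.3, 0.4):   # higher asset volatility ...
    print(sigma, round(growth_option_value(V=100, I=90, r=0.05, sigma=sigma, T=5), 2))
    # ... yields a larger growth-option value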

Keywords: contingent convertible lease, growth option, debt overhang, risk-shifting, capital structure

Procedia PDF Downloads 65
3517 Maize Farmers’ Perception of Sharp Practices among Agro-Input Dealers in Ibadan/Ibarapa Agricultural Zone, Oyo State

Authors: Ademola A. Ladele, Peace I. Aburime

Abstract:

Fake and substandard agricultural inputs pose a serious obstacle to farm productivity and, in turn, to improved livelihoods. There is, therefore, a need to pave the way for sustainable agriculture and self-sufficiency in food production by proffering solutions to this challenge. Maize farmers' perception of sharp practices among agro-input dealers in the Ibadan/Ibarapa agricultural zone of Oyo State was therefore investigated. A multi-stage random sampling technique was used to select registered maize farmers in the Ibadan/Ibarapa agricultural zone of the Oyo State Agricultural Development Programme (OYSADEP). A structured questionnaire was used to collect information on the perception of sharp practices and their effects. A total of seventy-five maize farmers were interviewed, and a focus group discussion was organized to identify ways of curbing sharp practices, complementing the survey. Data were analyzed using descriptive statistics, Chi-square, and Pearson Product Moment Correlation (PPMC). The forms of sharp practices indicated were sales of expired fertilizers, expired pesticides, expired herbicides, underweight fertilizers, adulterated fertilizers, adulterated herbicides, packs containing broken seeds, infested seeds, lack of truth in labeling/wrong labels, manipulation of measuring scales, and false declaration of hectarages covered by tractor operators. The majority had an unfavorable perception of agro-input dealers with regard to sharp practices. A significant relationship was observed between respondents' level of education and their perception of sharp practices, whereas there were no significant relationships between respondents' sex, marital status, or religion and their perception of sharp practices. A significant correlation exists between the forms of sharp practices and their perceived effect on agricultural production. It is concluded that the perceived effect of sharp practices is critical and that an endemic culture of sharp practices prevails in agro-input dealing in the Ibadan/Ibarapa agricultural zone. A standard regulatory system that certifies and monitors the quality of inputs should be put in place.

Keywords: agricultural productivity, agro-input dealers, maize farmers, sharp practices

Procedia PDF Downloads 190
3516 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are fragmented; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also outlines solutions to optimization problems and the benefits for Big Data and computational biology, and illustrates the current state of the art and future generations of HPC computing with Big Data.
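
As a concrete illustration of the all-to-all comparison workload mentioned above, the sketch below distributes pairwise sequence comparisons across CPU cores. The toy similarity measure (shared 3-mer count) and the short sequences are illustrative assumptions; production pipelines would use real alignment tools on HPC clusters.

from itertools import combinations
from multiprocessing import Pool

def kmer_set(seq, k=3):
    # Set of all overlapping k-mers in a sequence
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def compare(pair):
    # Toy similarity: number of 3-mers shared between two sequences
    (i, a), (j, b) = pair
    return i, j, len(kmer_set(a) & kmer_set(b))

if __name__ == "__main__":
    seqs = ["ACGTACGTGA", "ACGTTTGCAA", "TTGCAACGTA", "GATTACAGAT"]
    pairs = list(combinations(enumerate(seqs), 2))   # all n*(n-1)/2 pairs
    with Pool() as pool:                             # spread pairs over CPU cores
        for i, j, score in pool.map(compare, pairs):
            print(f"seq{i} vs seq{j}: {score} shared 3-mers")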

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 359