Search results for: incomplete count data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 276


66 Surge in U.S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale

Authors: Marco Sewald

Abstract:

Compared to the past, the number of Americans renouncing U.S. citizenship has risen. Although these numbers are small compared to immigration figures, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how large the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. In the absence of a single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to survey their reasons, and the U.S. government does not comment on the reasons given during the expatriation process, the chosen methodology is Structural Equation Modeling (SEM), in the first step by re-using surveys recently conducted by different researchers among U.S. citizens residing abroad, surveys questioning their personal situation in the context of tax, compliance, citizenship, and likelihood to repatriate to the U.S. In general, SEM allows: (1) representing, estimating, and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; (4) modeling measurement error, i.e., the degree to which observable variables describe latent variables. Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered highly correlated items, making it impossible to identify the distinct effect of each indicator on the latent variable, which was one of the desired results. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.
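
To make the two-part structure concrete, the following is a minimal sketch of how such a model could be specified in Python with the semopy package, using lavaan-style syntax. The latent constructs, item names (tax1-tax3, comp1-comp3, expat1-expat2), and data file are hypothetical illustrations, not the study's actual instruments or model.

```python
# Minimal SEM sketch with semopy (lavaan-style syntax).
# First three lines: measurement (outer) model -- observed items load on latents.
# Last line: structural (inner) model -- latent predictors of expatriation intent.
# All item names and the data file are hypothetical.
import pandas as pd
import semopy

model_desc = """
TaxBurden =~ tax1 + tax2 + tax3
Compliance =~ comp1 + comp2 + comp3
ExpatIntent =~ expat1 + expat2
ExpatIntent ~ TaxBurden + Compliance
"""

data = pd.read_csv("expat_survey.csv")  # hypothetical pooled survey data
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # loadings, path coefficients, standard errors
```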

Keywords: expatriation of U.S. citizens, SEM, structural equation modeling, validating

Procedia PDF Downloads 185
65 Influence of Surface Wettability on Imbibition Dynamics of Protein Solution in Microwells

Authors: Himani Sharma, Amit Agrawal

Abstract:

The stability of the Cassie and Wenzel wetting states depends on the intrinsic contact angle and on geometric surface features, properties that have been exploited to capture biofluids in microwells. However, the mechanism of imbibition of biofluids into microwells is not well understood in terms of substrate wettability. In this work, we experimentally demonstrated the filling dynamics of protein solutions in hydrophilic and hydrophobic microwells. Towards this, we utilized a lotus leaf as a mold to fabricate microwells on a polydimethylsiloxane (PDMS) surface. The lotus leaf's micrometer-sized, blunt-conical pillars, with heights of 8-15 µm and diameters of 3-8 µm, were transferred onto the PDMS. Furthermore, the PDMS surface was treated with oxygen plasma to render it hydrophilic. 10 µL droplets containing fluorescein isothiocyanate (FITC)-labelled bovine serum albumin (BSA) were rested on both hydrophobic (θa = 108o, where θa is the apparent contact angle) and hydrophilic (θa = 60o) PDMS surfaces. Time-dependent fluorescence microscopy was conducted on these modified PDMS surfaces by recording the fluorescence intensity over a 5-minute period. It was observed that initially (at t=1 min) FITC-BSA accumulated on the periphery of both hydrophilic and hydrophobic microwells due to incomplete penetration of the liquid-gas meniscus. This peripheral deposition of FITC-BSA did not change with time on hydrophobic surfaces, whereas complete filling occurred in hydrophilic microwells (at t=5 min). This is attributed to a gradual movement of the three-phase contact line along the vertical surface of the hydrophilic microwells, as compared to stable pinning in the hydrophobic microwells, as confirmed by Surface Evolver simulations. In addition, if cavities are present on hydrophobic surfaces, air bubbles are trapped inside the cavities once an aqueous solution is placed over them, resulting in the Cassie-Baxter wetting state. This condition hinders the trapping of proteins inside the microwells. Thus, it is necessary to impart hydrophilicity to the microwell surfaces so as to induce the Wenzel state, such that the entire solution is in full contact with the walls of the microwells. The imbibition of microwells by protein solutions was analyzed in terms of fluorescence intensity versus time. The present work underlines the importance of microwell geometry and substrate wettability in wetting and in the effective capture of solid sub-phases in biofluids.
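
For reference, the Wenzel and Cassie-Baxter states invoked above are commonly described by the standard textbook relations below; these are general equations, not values fitted by the authors.

```latex
% Wenzel state: liquid fully penetrates the roughness (r >= 1 is the roughness ratio)
\cos\theta^{*} = r \cos\theta
% Cassie-Baxter state: liquid sits on a solid area fraction f with air trapped beneath
\cos\theta^{*} = f(\cos\theta + 1) - 1
% \theta   : intrinsic (Young) contact angle on the flat material
% \theta^* : apparent contact angle on the structured surface
```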

Keywords: BSA, microwells, surface evolver, wettability

Procedia PDF Downloads 169
64 Overview of Environmental and Economic Theories of the Impact of Dams in Different Regions

Authors: Ariadne Katsouras, Andrea Chareunsy

Abstract:

The number of large hydroelectric dams in the world increased from almost 6,000 in the 1950s to over 45,000 in 2000. Dams are often built to increase the economic development of a country. This can occur in several ways. Large dams take many years to build, so the construction process employs many people for a long time, and the increased production and income can flow into other sectors of the economy. Additionally, the provision of electricity can help raise people's living standards, and if the electricity is sold to another country, the money can be used to provide other public goods for the residents of the country that owns the dam. Dams are also built to control flooding and provide irrigation water; most dams are of these types. This paper gives an overview of the environmental and economic theories of the impact of dams in different regions of the world. The degree of environmental and economic impact differs with the varying climates and the varying social and political factors of the regions. Production of greenhouse gases from a dam's reservoir, for instance, tends to be higher in tropical areas than in Nordic environments. However, there are also common impacts due to the construction of the dam itself, such as the flooding of land to create the reservoir and the displacement of local populations. Economically, the local population tends to benefit least from the construction of the dam. Additionally, if a foreign company owns the dam, or the government subsidises the cost of electricity to businesses, then the funds from electricity production do not benefit the residents of the country the dam is built in. So, in the end, a dam can benefit a country economically, but the varying factors related to its construction, and how these are dealt with, determine the level of benefit, if any. Some of the theories or practices used to evaluate the potential value of a dam include cost-benefit analysis, environmental impact assessments, and regressions; systems analysis is also a useful method. While these approaches have value, there are also possible shortcomings. Cost-benefit analysis converts all costs and benefits to dollar values, which can be problematic. Environmental impact assessments, likewise, can be incomplete, especially if the assessment does not include feedback effects, that is, if it only considers the initial impact. Finally, regression analysis is dependent on the available data and again would not necessarily include feedbacks. Systems analysis is a method that allows more complex modelling of the environment and the economic system; it allows a clearer picture of the impacts to emerge and can accommodate a long time frame.
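
As a hedged illustration of the cost-benefit step mentioned above, the short Python sketch below computes a net present value for a dam project; all cash-flow figures and the discount rate are invented for illustration and do not come from the paper.

```python
# Hypothetical cost-benefit sketch: net present value (NPV) of a dam project.
# All figures are invented for illustration.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Discount a series of yearly net benefits (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

construction = [-500.0, -400.0, -300.0]   # build years: costs only (millions)
operation = [120.0] * 30                  # operating years: net benefits (millions)
flows = construction + operation

print(f"NPV at a 5% discount rate: {npv(0.05, flows):.1f} M")
# A positive NPV suggests benefits outweigh costs -- but, as the abstract notes,
# monetizing environmental and social impacts is itself problematic.
```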

Keywords: comparison, economics, environment, hydroelectric dams

Procedia PDF Downloads 167
63 Effects of Using a Recurrent Adverse Drug Reaction Prevention Program on Safe Use of Medicine among Patients Receiving Services at the Accident and Emergency Department of Songkhla Hospital, Thailand

Authors: Thippharat Wongsilarat, Parichat Tuntilanon, Chonlakan Prataksitorn

Abstract:

Recurrent adverse drug reactions harm patients, with consequences ranging from mild illness to death, and affect not only patients but also their relatives and organizations. The aim was to compare the safe use of medicine among patients before and after using the recurrent adverse drug reaction prevention program. This was quasi-experimental research with a target population of 598 patients with a drug allergy history. Data were collected through an observation form tested for validity by three experts (IOC = 0.87) and analyzed with descriptive statistics (percentages). The research was conducted jointly with a multidisciplinary team to analyze and determine the weak and strong points of the recurrent adverse drug reaction prevention system during the past three years, in which 546, 329, and 498 incidences, respectively, were found. Of these, 379, 279, and 302 incidences, or 69.4, 84.80, and 60.64 percent of the patients with drug allergy history, respectively, were found to have been caused by an incomplete warning system. In addition, differences in practice in caring for patients with a drug allergy history were found that did not cover all the steps of the patient care process, especially a lack of repeated checking and a lack of communication between the multidisciplinary team members. Therefore, the recurrent adverse drug reaction prevention program was developed with complete warning points in the information technology system, a repeated checking step, and communication among the related multidisciplinary team members, starting from the hospital identity card room, patient history recording officers, nurses, the physicians who prescribe the drugs, and pharmacists. Included in the system were surveillance, nursing, recording, and linking the data to referring units. There was also training on adverse drug reactions by pharmacists, monthly meetings to explain the process to practice personnel, creation of a safety culture, random checking of practice, motivational encouragement, supervision, control, follow-up, and evaluation of the practice. The rate of prescribing drugs to which patients were allergic was 0.08 per 1,000 prescriptions, and the incidence rate of recurrent drug reactions was 0 per 1,000 prescriptions. Surveillance of recurrent adverse drug reactions covering all service points can ensure the safe use of medicine for patients.

Keywords: recurrent drug, adverse reaction, safety, use of medicine

Procedia PDF Downloads 414
62 Tommy: Communication in Education about Disability

Authors: Karen V. Lee

Abstract:

The background and significance of this study involve communication in education by a faculty advisor exploring story and music that inform others about a disabled teacher. Social issues draw deep reflection about the emotional turmoil. As becoming a teacher is a passionate yet complex endeavor for a musician, the faculty advisor shares a poetic but painful story about a disabled teacher being inducted into the teaching profession. The qualitative research method, as theoretical framework, draws on an autoethnography of music and story in which the faculty advisor approaches a professor for advice. His musicianship shifts her forward, backward, and sideways through feelings that evoke and provoke curriculum to remove communication barriers in education. They discover that they do not transfer knowledge from educational method classes; instead, the autoethnography embeds musical language as a metaphorical conduit for removing communication barriers in teacher education. Sub-themes involve communication barriers and educational technologies to ensure teachers receive social, emotional, physical, spiritual, and intervention disability resources that evoke visceral, emotional responses from the audience. The major findings of the study show how autoethnography of music and story brings the authors to understand wider political issues of the practicum internship for teachers with disabilities. An epiphany reveals the irony of living in a culture of both uniformity and diversity. They explore the constructs of secrecy, ideology, abnormality, and marginalization by evoking visceral and emotional responses from the audience. As the voices harmonize plot, climax, characterization, and denouement, they dramatize meaning that is episodic yet incomplete to highlight the circumstances surrounding the disabled protagonist's life. In conclusion, the qualitative research method argues for embracing storied experiences that depict communication in education. The scholarly significance lies in embracing personal thoughts and feelings as a way of understanding social phenomena while highlighting the importance of removing communication barriers in education. The circumstance of a teacher with a disability is not uncommon in society. Thus, the authors resolve to remove barriers in education by using stories to transform the personal and cultural influences that provoke new ways of thinking about the curriculum for a disabled teacher.

Keywords: communication in education, communication barriers, autoethnography, teaching

Procedia PDF Downloads 212
61 The Missing Link in Holistic Health Care: Value-Based Medicine in Entrustable Professional Activities for Doctor-Patient Relationship

Authors: Ling-Lang Huang

Abstract:

Background: Holistic health care should ideally cover the physical, mental, spiritual, and social aspects of a patient. With the very constrained time in the current clinical practice system, medical decisions often tip the balance in favor of evidence-based medicine (EBM) over the patient's personal values. Even in the era of competence-based medical education (CBME), when scrutinizing the items of entrustable professional activities (EPAs), we found that the EPAs for establishing the doctor-patient relationship remained incomplete or even missing. This phenomenon prompted this project, which aims to advocate value-based medicine (VBM), emphasizing the importance of the patient's values in medical decisions. A true and effective doctor-patient communication and relationship should be a well-balanced harmony of EBM and VBM. By constructing VBM into current EPAs, we can further promote genuine shared decision making (SDM) and fix the missing link in holistic health care. Methods: In this project, we are going to identify the EPA elements crucial for establishing an ideal doctor-patient relationship through three distinct pairs of doctor-patient relationships: patients with pulmonary arterial hypertension (relatively young but with grave disease), patients undergoing surgery (facing critical medical decisions), and patients with terminal diseases (facing forthcoming death). We will search for important EPA elements through the following steps: 1. A narrative approach to delineate patients' values in the three distinct groups. 2. Hermeneutics-based interviews: semi-structured interviews will be conducted with both patients and physicians, followed by qualitative analysis of the collected information by compiling, disassembling, reassembling, interpreting, and concluding. 3. Preliminary construction of those VBM elements into EPAs for doctor-patient relationships in the three groups. Expected Outcomes: The results of this project will provide invaluable information regarding the impact of patients' values on the final medical decision when facing different medical situations. The competence to blend and balance patients' values with evidence from the clinical sciences is the missing link in holistic health care and should be established in future EPAs to enable effective SDM.

Keywords: value-based medicine, shared decision making, entrustable professional activities, holistic health care

Procedia PDF Downloads 90
60 Manufacturing and Calibration of Material Standards for Optical Microscopy in Industrial Environments

Authors: Alberto Mínguez-Martínez, Jesús De Vicente Y Oliva

Abstract:

We seem to live in a world in which the trend in industrial environments is the miniaturization of systems and materials and the fabrication of parts at the micro- and nano-scale. The problem arises when manufacturers want to assess the quality of their production. This capability is becoming crucial due to the evolution of industry and the development of Industry 4.0. As Industry 4.0 is based on digital models of production and processes, having accurate measurements becomes essential. At this point, the field of metrology plays an important role, as it is a powerful tool for ensuring more stable production, reducing scrap, and lowering the cost of non-conformities. The most widespread measuring instruments that allow accurate measurements at these scales are optical microscopes, whether traditional, confocal, or focus-variation microscopes, profile projectors, or any other similar measurement system. However, the accuracy of measurements depends on their traceability to the SI unit of length (the meter). Providing adequate traceability for 2D and 3D dimensional measurements at the micro- and nano-scale in industrial environments is a problem still under study, and it does not have a unique answer. In addition, considering commercial material standards for the micro- and nano-scale, two main problems arise. On the one hand, those material standards that could be considered complete and very interesting do not provide traceability of dimensional measurements; on the other hand, their calibration is very expensive. This situation implies that these kinds of standards will not succeed in industrial environments and, as a result, industry will work in the absence of traceability. To solve this problem in industrial environments, it becomes necessary to have material standards that are easy to use, agile, adaptable to different forms, cheap to manufacture and, of course, traceable to the definition of the meter through simple methods. By using these 'customized standards', it would be possible to adapt and design measuring procedures for each application, and manufacturers would work with some traceability. It is important to note that, although this traceability is clearly incomplete, this situation is preferable to working in the absence of it. Recently, the versatility and utility of laser technology and other additive manufacturing (AM) technologies for manufacturing customized material standards have been demonstrated. In this paper, the authors propose manufacturing a customized material standard using an ultraviolet laser system, together with a method to calibrate it. To conclude, the results of the calibration carried out in an accredited dimensional metrology laboratory are presented.

Keywords: industrial environment, material standards, optical measuring instrument, traceability

Procedia PDF Downloads 94
59 Automatic and Highly Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated, and parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems, this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Therefore, such models might not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, so the model must be adapted manually. This paper therefore describes an approach that generates models that overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data from real systems. The approach is more general, since it generates models for any system, detached from the scientific background. Additionally, it can be used in a broader sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise representation of causal correlations. The basis and explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g., time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has been successfully tested in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with a precision error of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. In summary, the approach described above is able to efficiently compute highly precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods are illustrated with different examples.
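
A minimal sketch of the core idea, multivariate regression over products of variables in the spirit of a truncated series expansion, is shown below using scikit-learn on synthetic data; the paper's actual algorithm, including its real-time adaptation, is not reproduced, and the sensor names are hypothetical.

```python
# Sketch: multivariate regression including products of variables,
# in the spirit of a truncated series expansion. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))      # three sensor signals x1, x2, x3
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + 0.01 * rng.normal(size=500)

# Degree-2 expansion adds x1*x2, x1*x3, x2*x3, x1^2, ... as candidate terms
expansion = PolynomialFeatures(degree=2, include_bias=False)
Z = expansion.fit_transform(X)

model = LinearRegression().fit(Z, y)
names = expansion.get_feature_names_out(["x1", "x2", "x3"])
for name, coef in zip(names, model.coef_):
    if abs(coef) > 0.05:
        print(f"{name}: {coef:.3f}")   # recovers x1 and the x2*x3 product term
```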

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 375
58 Estimation of Morbidity Level of Industrial Labour Conditions at Zestafoni Ferroalloy Plant

Authors: M. Turmanauli, T. Todua, O. Gvaberidze, R. Javakhadze, N. Chkhaidze, N. Khatiashvili

Abstract:

Background: Mining processes have a significant influence on human health and quality of life. In recent years, events in Georgia have been reflected in industrial working processes; in particular, minimal labor safety requirements, workplace hygiene standards, and the regime of work and rest are not observed. This situation is often caused by a lack of responsibility, awareness, and knowledge among both workers and employers. The control and protection of working conditions have worsened in many industries. Materials and Methods: To evaluate the current situation, a prospective epidemiological study using the face-to-face interview method was conducted at the Georgian 'Manganese Zestafoni Ferroalloy Plant' in 2011-2013. 65.7% of employees (1,428 bulletins) were surveyed, and the incidence rates of temporary disability days were studied. Results: The average length of a single temporary disability episode was studied, both by sex group and for the whole cohort. According to the classes of harmfulness, the following results were received: class 2.0, 10.3%; 3.1, 12.4%; 3.2, 35.1%; 3.3, 12.1%; 3.4, 17.6%; 4.0, 12.5%. Among the employees, 47.5% and 83.1% were tobacco and alcohol consumers, respectively. By age group and years of work, the data for those aged ≥50 and those with ≥21 years of work prevailed, respectively. The obtained data revealed a morbidity rate that increased with age and years of work. It was found that diseases of the bone and articular system and connective tissue, aggravation of chronic respiratory diseases, ischemic heart disease, hypertension, and cerebral blood discirculation were the leading diseases. High morbidity prevalence was observed in workplaces with unsatisfactory labor conditions from the hygienic point of view. Conclusion: According to the received data, the causes of morbidity are the following: unsafe labor conditions; incomplete preventive medical examinations (preliminary and periodic); lack of access to appropriate health care services; and deficient gathering, recording, and analysis of morbidity data. This epidemiological study was conducted at the JSC 'Manganese Ferroalloy Plant' according to the State program 'Prevention of Occupational Diseases' (program code 35 03 02 05).

Keywords: occupational health, mining process, morbidity level, cerebral blood discirculation

Procedia PDF Downloads 403
57 Selection of Qualitative Research Strategy for Bullying and Harassment in Sport

Authors: J. Vveinhardt, V. B. Fominiene, L. Jeseviciute-Ufartiene

Abstract:

Relevance of Research: Qualitative research is still regarded as highly subjective and not sufficiently scientific to achieve objective research results. However, it is agreed that a qualitative study allows revealing the hidden motives of research participants, creating new theories, and highlighting the problem field. Enough research has been done to reveal these aspects of qualitative research. However, each research area has its own specificity, and sport is unique due to the image of its participants, who are understood as strong and invincible. Therefore, a sport participant might have personal difficulty recognizing himself as a victim in the context of bullying and harassment. Accordingly, the researcher faces a dilemma in getting a victim in sport to speak at all. Thus, the ethical aspects of qualitative research become relevant. The many fields of sport also make determining the research sample size a problem. The corresponding problem of this research is therefore which qualitative research strategies are most suitable for revealing the phenomenon of bullying and harassment in sport, and why. The object of research is qualitative research strategy for bullying and harassment in sport. The purpose of the research is to analyze strategies of qualitative research and select a suitable one for bullying and harassment in sport. The research method was an analysis of scientific research on the application of qualitative research to bullying and harassment. Research Results: Four main strategies are applied in qualitative research: inductive, deductive, retroductive, and abductive. Inductive and deductive strategies are commonly used when researching bullying and harassment in sport. The inductive strategy is applied as quantitative research in order to reveal and describe the prevalence of bullying and harassment in sport. The deductive strategy is used through qualitative methods in order to explain the causes of bullying and harassment and to predict the actions of the participants of bullying and harassment in sport and the possible consequences of these actions. The most commonly used qualitative methods for researching bullying and harassment in sport are semi-structured interviews, oral and written. However, these methods may restrict the openness of participants when recording on a dictaphone, or yield incomplete answers when a participant responds in writing, because it is not possible to refine the answers. Qualitative research is becoming more prevalent in terms of technology-mediated research data. For example, focus group research in a closed forum allows participants to interact freely with each other because of the confidentiality afforded to the selected participants. The moderator can purposefully formulate and submit problem-solving questions to the participants. Hence, the application of intelligent technology in in-depth qualitative research can help discover new and specific information on bullying and harassment in sport. Acknowledgement: This research is funded by the European Social Fund according to the activity 'Improvement of researchers' qualification by implementing world-class R&D projects' of Measure No. 09.3.3-LMT-K-712.

Keywords: bullying, focus group, harassment, narrative, sport, qualitative research

Procedia PDF Downloads 151
56 The Concurrent Effect of Autistic and Schizotypal Traits on Convergent and Divergent Thinking

Authors: Ahmad Abu-Akel, Emilie De Montpellier, Sophie Von Bentivegni, Lyn Luechinger, Alessandro Ishii, Christine Mohr

Abstract:

Convergent and divergent thinking are two main components of creativity that have been viewed as complementary. While divergent thinking refers to the fluency and flexibility of generating new ideas, convergent thinking refers to the ability to systematically apply rules and knowledge to arrive at the optimal solution or idea. These creativity components have been shown to be susceptible to variation in subclinical expressions of autistic and schizotypal traits within the general population. Research, albeit inconclusive, has mainly linked positive schizotypal traits with divergent thinking and autistic traits with convergent thinking. However, cumulative evidence suggests that these trait dimensions co-occur in the same individual more often than would be expected by chance, and that their concurrent effect can be diametric and even interactive. The current study aimed to investigate the concurrent effect of these trait dimensions on tasks assessing convergent and divergent thinking abilities. We predicted that individuals with high positive schizotypal traits alone would perform particularly well on the divergent thinking task, whilst those with high autistic traits alone would perform particularly well on the convergent thinking task. Crucially, we also predicted that individuals who are high on both autistic and positive schizotypal traits would perform particularly well on both the divergent and convergent thinking tasks. This was investigated in a non-clinical sample of 142 individuals (males = 45%; mean age = 21.45, SD = 2.30), sufficient to minimally observe an effect size f² ≥ .10. Divergent thinking was evaluated using the Alternative Uses Task, and convergent thinking with the Anagrams Task. Autistic and schizotypal traits were respectively assessed with the Autism Quotient Questionnaire (AQ) and the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE). Regression analyses revealed that the positive association of autistic traits with convergent thinking scores was qualified by an interaction with positive schizotypal traits. Specifically, positive schizotypal traits were negatively associated with convergent thinking scores when AQ scores were relatively low, but this trend was reversed when AQ scores were high. Conversely, the positive effect of AQ scores on convergent thinking progressively increased with increasing positive schizotypal traits. The results of the divergent thinking task are currently being analyzed and will be reported at the conference. The association of elevated autistic and positive schizotypal traits with convergent thinking may represent a unique profile of creative thinkers who are able to simultaneously draw on trait-specific advantages conferred by autistic and positive schizotypal traits, such as local and global processing. This suggests that main-effect models tell an incomplete story regarding the effect of autistic and positive schizotypal traits on creativity-related processes. Future creativity research should consider their interaction and the benefits conferred by their co-presence.
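
A hedged sketch of the reported moderation analysis, convergent-thinking scores regressed on autistic traits, positive schizotypy, and their interaction, is given below using statsmodels; the variable names and the synthetic data are hypothetical stand-ins for the AQ and O-LIFE scores.

```python
# Sketch: regression with an interaction term, as in the reported analysis.
# Data are synthetic; aq and schizotypy stand in for AQ and O-LIFE scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 142
df = pd.DataFrame({
    "aq": rng.normal(20, 6, n),            # autistic traits (AQ)
    "schizotypy": rng.normal(10, 3, n),    # positive schizotypal traits (O-LIFE)
})
# Synthetic outcome with a positive aq x schizotypy interaction built in
df["convergent"] = (0.3 * df["aq"] - 0.2 * df["schizotypy"]
                    + 0.05 * df["aq"] * df["schizotypy"]
                    + rng.normal(0, 2, n))

# 'aq:schizotypy' adds the product term; a significant coefficient means the
# effect of schizotypy on convergent thinking depends on the level of AQ.
fit = smf.ols("convergent ~ aq + schizotypy + aq:schizotypy", data=df).fit()
print(fit.summary().tables[1])
```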

Keywords: autism, schizotypy, convergent thinking, divergent thinking, comorbidity

Procedia PDF Downloads 151
55 Equity, Bonds, Institutional Debt and Economic Growth: Evidence from South Africa

Authors: Ashenafi Beyene Fanta, Daniel Makina

Abstract:

Economic theory predicts that finance promotes economic growth. Although the finance-growth link is among the most researched areas in financial economics, our understanding of the link between the two is still incomplete. This is caused by, among other things, wrong econometric specifications, the use of weak proxies for financial development, and an inability to address the endogeneity problem. Studies on the finance-growth link in South Africa consistently report economic growth driving financial development. Early studies found that economic growth drives financial development in South Africa, and recent studies have confirmed this using different econometric models. However, the monetary aggregate (i.e., M2) utilized in these studies is considered a weak proxy for financial development. Furthermore, the fact that the models employed do not address the endogeneity problem in the finance-growth link casts doubt on the validity of the conclusions. For this reason, the current study examines the finance-growth link in South Africa using data for the period 1990 to 2011, employing a generalized method of moments (GMM) technique that is capable of addressing endogeneity, simultaneity, and omitted-variable bias problems. Unlike previous cross-country and country case studies that have used the same technique, our contribution is that we account for the development of bond markets and non-bank financial institutions rather than being limited to stock market and banking sector development. We find that bond market development affects economic growth in South Africa, and no similar effect is observed for bank and non-bank financial intermediaries or the stock market. Our findings show that examining the individual elements of the financial system is important for understanding the unique effect of each on growth. The observation that bond markets, rather than private credit or stock market development, promote economic growth in South Africa raises an intriguing question as to what unique roles bond markets play that intermediaries and equity markets are unable to play. Crucially, our results support observations in the literature that using appropriate measures of financial development is critical for policy advice. They also support the suggestion that the individual elements of the financial system need to be studied separately to consider their unique roles in advancing economic growth. We believe that understanding the channels through which bond markets contribute to growth would be fertile ground for future research.
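
As a rough illustration of a GMM estimator that instruments an endogenous financial-development regressor, the sketch below uses the Python package linearmodels on synthetic data; the variables, instruments, and specification are assumptions for illustration, and the paper's exact estimator is not reproduced.

```python
# Sketch: instrumental-variables GMM to address endogeneity in a
# finance-growth regression. Data and instruments are hypothetical.
import numpy as np
import pandas as pd
from linearmodels.iv import IVGMM

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "growth": rng.normal(size=n),      # GDP growth (dependent variable)
    "bond_dev": rng.normal(size=n),    # bond market development (endogenous)
    "inflation": rng.normal(size=n),   # exogenous control
    "lag_bond": rng.normal(size=n),    # instrument: lagged bond development
    "legal": rng.normal(size=n),       # instrument: institutional quality
})
df["const"] = 1.0

# GMM with instruments for the endogenous regressor
res = IVGMM(
    dependent=df["growth"],
    exog=df[["const", "inflation"]],
    endog=df[["bond_dev"]],
    instruments=df[["lag_bond", "legal"]],
).fit()
print(res.summary)
```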

Keywords: bond market, finance, financial sector, growth

Procedia PDF Downloads 388
54 A Professional Learning Model for Schools Based on School-University Research Partnering That Is Underpinned and Structured by a Micro-Credentialing Regime

Authors: David Lynch, Jake Madden

Abstract:

There exists a body of literature reporting on the many benefits of partnerships between universities and schools, especially in terms of teaching improvement and school reform. This is because such partnerships can build significant teaching capital by deepening and expanding the skillsets and mindsets needed to create the connections that support ongoing, embedded teacher professional development and career goals. At the same time, this literature is critical of such initiatives when the partnership outcomes are short-term or one-sided, misaligned with fundamental problems, or not expressly focused on building the desired teaching capabilities. In response to this situation, research conducted by Professor David Lynch and his TeachLab research team has begun to shed light on the strengths and limitations of school/university partnerships via the identification of key conceptual elements that appear to act as critical partnership success factors. These elements are theorised as an interplay between professional knowledge acquisition, readiness, talent management, and organisational structure. However, knowledge of how these elements are established, and how they manifest within the school and its teaching workforce as an overall system, remains incomplete. Research designed to more clearly delineate these elements in relation to their impact on school/university partnerships is thus required. It is within this context that this paper reports on the development and testing of a professional learning (PL) model for schools and their teachers that incorporates school-university research partnering within a systematic, whole-of-school PL strategy underpinned and structured by a micro-credentialing (MC) regime. MC involves earning a narrowly focused certificate (a micro-credential) in a specific topic area (e.g., 'How to Differentiate Instruction for English as a Second Language Students'), embedded in the teacher's day-to-day teaching work. The use of MC is viewed as important to the efficacy and sustainability of teacher PL because it (1) provides an evidence-based framework for teacher learning, (2) has the ability to promote teacher social capital, and (3) engenders lifelong learning by keeping professional skills current in a manner embedded in and seamless with work. The associated research is centred on a primary school in Australia (P-6) that acted as an arena to co-develop, test, investigate, and report on outcomes for teacher PL that uses MC to support a whole-of-school partnership with a university.

Keywords: teaching improvement, teacher professional learning, talent management, education partnerships, school-university research

Procedia PDF Downloads 50
53 Evaluation of the Role of Simulation and Virtual Reality as High-Yield Adjuncts to Paediatric Education

Authors: Alexandra Shipley

Abstract:

Background: Undergraduate paediatric teaching must overcome two major challenges: 1) balancing patient safety with active student engagement and 2) exposing students to a comprehensive range of pathologies within a relatively short clinical placement. Whilst lectures and shadowing on paediatric wards constitute the mainstay of learning, Simulation and Virtual Reality (VR) are emerging as effective teaching tools, which - immune to the unpredictability and seasonal variation of hospital presentations - could expose students to the entire syllabus more reliably, efficiently, and independently. We aim to evaluate the potential utility of Simulation and VR in addressing gaps within the traditional paediatric curriculum from the perspective of medical students. Summary of Work: Exposure to and perceived utility of various learning opportunities within the Paediatric and Emergency Medicine courses were assessed through a questionnaire completed by 5th year medical students (n=23). Summary of Results: Students reported limited exposure to several common acute paediatric presentations, such as bronchiolitis (41%), croup (32%) or pneumonia (14%), and to clinical emergencies, including cardiac/respiratory arrests or trauma calls (27%). Across all conditions, average self-reported confidence in assessment and management to the level expected of an FY1 is greater amongst those who observed at least one case (e.g. 7.6/10 compared with 3.6/10 for croup). Students rated exposure through Simulation or VR to be of similar utility to witnessing a clinical scenario on the ward. In free text responses, students unanimously favoured being ‘challenged’ through ‘hands-on’ patient interaction over passive shadowing, where it is ‘easy to zone out.’ In recognition of the fact that such independence is only appropriate in certain clinical situations, many students reported wanting more Simulation and VR teaching. Importantly, students raised the necessity of ‘proper debriefs’ after these sessions to maximise educational value. Discussion and Conclusion: Our questionnaire elicited several student-perceived challenges in paediatric education, including incomplete exposure to common pathologies and limited opportunities for active involvement in patient care. Indeed, these experiences seem to be important predictors of confidence. Quantitative and qualitative feedback suggests that VR and Simulation satisfy students’ self-reported appetite for independent engagement with authentic clinical scenarios. Take-aways: Our findings endorse further development of VR and Simulation as high-yield adjuncts to paediatric education.

Keywords: paediatric emergency education, simulation, virtual reality, medical education

Procedia PDF Downloads 48
52 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study Utilizing Supervised Machine Learning

Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz

Abstract:

Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, there is still an incomplete understanding of its underlying pathophysiology. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. In this study, we extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS to create a total sample size of 18,000. We selected the codes for comorbidities of these cases from two years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machines (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS. In our case, we used XGBoost feature importance as the feature selection method and applied different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression, and XGBoost algorithms yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular diseases), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety). This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms in predicting the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger, more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting an IBS diagnosis.
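
A minimal sketch of the described pipeline, XGBoost feature importance for feature selection followed by model comparison, is shown below; the synthetic binary comorbidity matrix stands in for UK BioBank codes, and no real data or results are reproduced.

```python
# Sketch: comorbidity-based IBS classification with XGBoost feature
# importance as the feature-selection step. Data are synthetic stand-ins
# for binary comorbidity codes; no UK BioBank data are used.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(18_000, 500)).astype(float)  # 500 comorbidity codes
y = rng.integers(0, 2, size=18_000)                        # IBS vs. non-IBS label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 1: rank features by XGBoost importance, keep the top 10% (50 features)
ranker = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_tr, y_tr)
top = np.argsort(ranker.feature_importances_)[::-1][:50]

# Step 2: refit candidate models on the selected features, compare accuracy
for name, model in [("XGBoost", XGBClassifier(eval_metric="logloss")),
                    ("LogisticRegression", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr[:, top], y_tr)
    acc = accuracy_score(y_te, model.predict(X_te[:, top]))
    print(f"{name}: accuracy = {acc:.3f}")
```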

Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics

Procedia PDF Downloads 86
51 Emergency Surgery in the Elderly: What Particularities?

Authors: Mekroud Amel

Abstract:

Introduction: The rate of use of emergency departments, operating rooms, and intensive care units by the elderly has increased worldwide. Emergency surgery is a context where evaluation is often insufficient and information gathering incomplete. The aim of this work is to shed light on the frequent recourse of the elderly to emergency surgery and its characteristics, as well as on the lack of geriatric assessment scores in the emergency room. Materials: A prospective, observational, descriptive, monocentric study. Patients aged 65 and over admitted to the operating room for emergency surgery were counted, covering visceral surgery, urology, traumatology, and neurosurgery. Parameters studied: patient characteristics, degree of autonomy, type of surgical pathology, operative management times, preoperative evaluation, and postoperative outcome. Results: 192 patients were identified over 12 months, from 09.01.2017 to 08.31.2018. Ages ranged from 65 to 101 years (79.81 ± 8.38 years), with a predominance of the 65-75 year age group (41.1%) and a female predominance (sex ratio = 0.81). Elderly subjects with total motor autonomy were in the majority (57.8%); subjects without a pathological history represented 12.5% of cases; and 36.9% were on only one type of medication or no treatment at all. Discussion: Emergency operative care of the elderly patient for a surgical or traumatological pathology is characterized by many specificities. These are linked first to the emergency context, where evaluation is often insufficient, and second to the fact that the elderly patient has particularities requiring admission to centers with experience in the care of this category of patient or, failing that, to a center that uses at least the simplified geriatric evaluation scores designed for emergency departments. In our hospital, this evaluation has not yet become routine in the emergency room, a delay directly attributable to the COVID-19 pandemic. Beyond the standard preoperative assessment, only 43.2% of patients were assessed preoperatively by an anesthesiologist. Traumatological emergencies came first (68.2%), followed by visceral emergencies (19.2%, including proctological and urological emergencies), neurosurgical emergencies (7.8%), and finally peripheral emergency surgery, all acts combined (4.7%). Hospital stay was 9.6 ± 16.8 days, with an average time to surgery of 4.5 ± 3 days. The death rate was 7.29%. Conclusion: This work demonstrates the major impact of emergency surgery, most of which remains curable, on elderly patients, even those with total motor and cognitive autonomy preoperatively. Improving preoperative evaluation, reducing the time to surgery, and enhancing recovery after surgery with personalized protocols are the only guarantees for the recovery of preoperative autonomy in these patients.

Keywords: emergency surgery, elderly patients, preoperative geriatric scores, curable emergency surgical pathologies

Procedia PDF Downloads 47
50 Ending the Gender Gap in Educational Leadership: A U.S. Goal for a Balanced Administration by 2030

Authors: S. Dodd

Abstract:

This presentation examines the gender gap in leadership positions at colleges and universities within the United States. Despite the fact that women now outnumber men in earning doctorate degrees, women continue to hold far fewer positions of educational leadership and still earn less money than men at every level. Considering the lack of female representation in positions of leadership, there are clearly outside variables preventing women from attaining these positions despite their educational attainment. Following this study, the American Council on Education (ACE) set a goal of achieving an equal percentage of women holding college presidency positions by the year 2030. This goal is particularly ambitious, especially when considering the gender disparity at all ranks in higher education. Men still hold nearly 70% of all full professorships at degree-granting institutions. Even when women are equally represented in numbers, men typically hold higher rank and are more likely to be tenured. Across all four-year colleges and universities in the United States, men earn more money than women at every rank and in every discipline. There are over twice as many men as women represented on governing boards, which help form and uphold campus policies. The fact that the low percentage of female presidents has remained static for many years deepens the challenge for the ACE. Although emphasizing the need to create greater opportunities for women in educational administration is admirable, it is difficult to simplify the social forces that create and uphold the status quo of male leadership. When aiming to ensure 'women' hold 50% of all college presidency positions, it is important to consider how the intersections of race, social class, and other factors also correlate with lower job status. This presentation explores how gendered notions of leadership begin in a child's early years and are carried into future careers, and how these conceptualizations impact the creation and upholding of educational policies at every academic level. Current research emphasizing the importance of establishing a bottom-up approach to a gender equity infrastructure for children early in their educational careers will be discussed. A top-down approach starting with female college presidents is incomplete and insufficient if the mindsets of the youth who will one day enter those institutions of higher education are not also taken into consideration. Although ACE has established this lofty goal for female college presidencies by the year 2030, a road map for how this will ensue has not yet been provided. The talent pool of women who are educated and experienced enough for such positions is vast, but acknowledging the social barriers facing women in these positions will be crucial to making the changes necessary for these leadership opportunities to be long-lasting and successful.

Keywords: equity, higher education, leadership, women

Procedia PDF Downloads 156
49 Deep Learning for Image Correction in Sparse-View Computed Tomography

Authors: Shubham Gogri, Lucia Florescu

Abstract:

Medical diagnosis and radiotherapy treatment planning using computed tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data result in lower quality reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of sparse-view CT images. A first approach employed Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks along with the Charbonnier loss; however, this approach was computationally demanding. Subsequently, a specialized generative adversarial network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based generator and a discriminator based on convolutional neural networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of a perceptual loss, calculated from feature vectors extracted by a VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including the Wasserstein, Charbonnier, and perceptual losses. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) between the corrected images and the ground truth. Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
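
For concreteness, minimal PyTorch definitions of the Charbonnier loss and the PSNR metric named above are sketched below; the network architectures (U-Net generator, CNN discriminator) and training loop are omitted, and the tensors shown are toy placeholders.

```python
# Sketch: Charbonnier loss and PSNR metric, as used in GAN-based
# sparse-view CT correction. Minimal PyTorch definitions only.
import torch

def charbonnier_loss(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-3) -> torch.Tensor:
    """Smooth L1-like loss: sqrt((x - y)^2 + eps^2), robust to outliers."""
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

def psnr(pred: torch.Tensor, target: torch.Tensor,
         max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# Toy check on random 'corrected' vs. 'ground truth' images
corrected = torch.rand(1, 1, 64, 64)
ground_truth = corrected + 0.01 * torch.randn_like(corrected)
print(charbonnier_loss(corrected, ground_truth))               # small loss
print(psnr(corrected.clamp(0, 1), ground_truth.clamp(0, 1)))   # high PSNR (dB)
```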

Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net

Procedia PDF Downloads 115
48 The Roman Fora in North Africa: Towards a Supportive Protocol for the Decision on Morphological Restitution

Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda

Abstract:

This research delves into the fundamental question of the morphological restitution of built archaeology, placing it in its paradigmatic context and seeking answers to it. Indeed, understanding the object of study, analyzing it, and methodically solving the morphological problem posed are manageable only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. In this stream, the crisis of natural reasoning in archaeology has generated multiple changes in the field, ranging from the use of new tools to the integration of archaeological information systems, where urbanization involves the interplay of several disciplines. The built archaeological object is also an architectural and morphological object: a set of articulated elementary data whose understanding can be approached from a logicist point of view. Morphological restitution is no exception to the rule, and the exchange between the different disciplines uses the capacity of each to frame reflection on the incomplete elements of a given architecture, or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base, which contains the rules for the architectural construction of the object. The knowledge base built from the archaeological literature also provides a reference that enters into the search for forms and articulations. The choice of the Roman forum in North Africa is justified by the great urban and architectural characteristics of this entity. Research on the forum involves a fairly large knowledge base and also provides the researcher with material to study, from a morphological and architectural point of view, from the scale of the city down to the architectural detail. The experimentation of the knowledge deduced at the paradigmatic level, as well as the deduction of an analysis model, is then carried out on the basis of a well-defined context, which frames the experimentation from the elaboration of the morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence has allowed us first to question the aspects already known in order to measure the credibility of our system, which remains above all a decision-support tool for the morphological restitution of Roman fora in North Africa. This paper presents a first experimentation of the model elaborated during this research, a model framed by a paradigmatic discussion that seeks to position the research in relation to existing paradigmatic and experimental knowledge on the issue.

Keywords: classical reasoning, logicist reasoning, archaeology, architecture, roman forum, morphology, calculation

Procedia PDF Downloads 116
47 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners

Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani

Abstract:

Speaking skills merit meticulous attention on the part of both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed through conversation, because an erroneous change may adversely alter the content and purpose of the talk. Different types of tasks have served teachers in meeting numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that have a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication, leading the interlocutor to attempt to remedy the gap through talk. Therefore, this study investigated whether there was any significant difference between the effects of reasoning gap tasks and information gap tasks on, first, the frequency of conversational strategies used in negotiation of meaning in classrooms and, second, the speaking accuracy of Iranian intermediate EFL learners. After a pilot study to check the practicality of the treatments, at the outset of the main study, the Preliminary English Test was administered to ensure the homogeneity of 87 out of 107 participants who attended the intact classes of a 15-session term in one control and two experimental groups. Speaking sections of the PET were also used as the pretest and posttest to examine speaking accuracy. The tests were recorded and transcribed, and speaking accuracy was measured as the percentage of clauses with no grammatical errors out of the total clauses produced. In all groups, the grammatical points of accuracy were taught and the use of conversational strategies was practiced. Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting the differences, arranging sentences into stories, and a guessing game) were employed in the experimental groups during the treatment sessions, and the students were required to practice conversational strategies when doing the speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of the statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Of the two, the reasoning gap tasks had the more significant impact on encouraging negotiation of meaning and increasing the frequency of conversational strategies in each session. The findings also indicated that both task types helped learners significantly improve their speaking accuracy, with the reasoning gap tasks again more effective than the information gap tasks.
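
As a concrete illustration of the accuracy measure described, the percentage of error-free clauses among all produced clauses, here is a minimal sketch with invented clause counts:

```python
# Sketch: the speaking-accuracy measure from the abstract; counts are invented.
def speaking_accuracy(error_free_clauses: int, total_clauses: int) -> float:
    """Percentage of grammatically error-free clauses among all clauses produced."""
    return 100.0 * error_free_clauses / total_clauses

# e.g., a transcript with 34 error-free clauses out of 50 produced
print(f"{speaking_accuracy(34, 50):.1f}%")  # -> 68.0%
```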

Keywords: accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks

Procedia PDF Downloads 281
46 Renewable Natural Gas Production from Biomass and Applications in Industry

Authors: Sarah Alamolhoda, Kevin J. Smith, Xiaotao Bi, Naoko Ellis

Abstract:

For millennia, biomass has been the most important source of fuel used to produce energy. Energy derived from biomass is renewable through the regrowth of biomass. Various technologies are used to convert biomass into potential renewable products, including combustion, gasification, pyrolysis and fermentation. Gasification is the incomplete combustion of biomass in a controlled environment that results in valuable products such as syngas, bio-oil and biochar. Syngas is a combustible gas consisting of hydrogen (H₂), carbon monoxide (CO), carbon dioxide (CO₂), and traces of methane (CH₄) and nitrogen (N₂). Cleaned syngas can be used as a turbine fuel to generate electricity, as a raw material for hydrogen and synthetic natural gas production, or as the anode gas of solid oxide fuel cells. In this work, syngas produced by woody biomass gasification in British Columbia, Canada, was introduced into two consecutive fixed bed reactors to perform a catalytic water gas shift reaction followed by a catalytic methanation reaction (see the reactions written out below). The water gas shift reaction is a well-established industrial process used to increase the hydrogen content of the syngas before the methanation process. Catalysts were used since both reactions are reversible and exothermic, thermodynamically preferred at lower temperatures while kinetically favored at elevated temperatures. The water gas shift reactor and the methanation reactor were packed with a Cu-based catalyst and a Ni-based catalyst, respectively. Simulated syngas with different percentages of CO, H₂, CH₄, and CO₂ was fed to the reactors to investigate the effect of operating conditions in the unit. The water gas shift experiments were run at temperatures of 150 °C to 200 °C and pressures of 550 kPa to 830 kPa. Similarly, methanation experiments were run at temperatures of 300 °C to 400 °C and pressures of 2340 kPa to 3450 kPa. The methanation reaction reached 98% CO conversion at 340 °C and 3450 kPa, at which point more than half of the CO was converted to CH₄. Increasing the reaction temperature reduced the CO conversion and increased the CH₄ selectivity. The process was designed to be renewable and to release low greenhouse gas emissions. Syngas is a clean-burning fuel; nevertheless, the water gas shift reaction removes toxic CO and produces hydrogen as a green fuel. Moreover, in the methanation process the syngas energy is transformed into a fuel with higher energy density (per volume), reducing the amount of fuel that must flow through the equipment and improving process efficiency. Natural gas is about 3.5 times more energy dense (energy/volume) than hydrogen and is easier to store and transport. Where modification of existing infrastructure is not practical, the partial blending of renewable hydrogen into natural gas (up to 15% hydrogen content) preserves this efficiency while reducing the greenhouse gas emission footprint.
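
For reference, the two catalytic steps and the conventional performance metrics implied above can be written out as follows. This is textbook stoichiometry with standard definitions in terms of molar flow rates F; the enthalpy values are literature values, not results of this work.

\begin{align*}
\mathrm{CO + H_2O} &\rightleftharpoons \mathrm{CO_2 + H_2} \qquad (\Delta H^{\circ} \approx -41\ \mathrm{kJ/mol},\ \text{water gas shift})\\
\mathrm{CO + 3H_2} &\rightleftharpoons \mathrm{CH_4 + H_2O} \qquad (\Delta H^{\circ} \approx -206\ \mathrm{kJ/mol},\ \text{methanation})\\
X_{\mathrm{CO}} &= \frac{F_{\mathrm{CO,in}} - F_{\mathrm{CO,out}}}{F_{\mathrm{CO,in}}}, \qquad S_{\mathrm{CH_4}} = \frac{F_{\mathrm{CH_4,out}}}{F_{\mathrm{CO,in}} - F_{\mathrm{CO,out}}}
\end{align*}

Both steps being exothermic explains the design choice noted above: equilibrium conversion favors low temperature, so catalysts are needed to recover acceptable rates there.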

Keywords: renewable natural gas, methane, hydrogen, gasification, syngas, catalysis, fuel

Procedia PDF Downloads 73
45 Pharmacovigilance in Hospitals: Retrospective Study at the Pharmacovigilance Service of UHE-Oran, Algeria

Authors: Nadjet Mekaouche, Hanane Zitouni, Fatma Boudia, Habiba Fetati, A. Saleh, A. Lardjam, H. Geniaux, A. Coubret, H. Toumi

Abstract:

Medicines have undeniably played a major role in prolonging life expectancy and improving quality of life. While the efficacy of a drug remains a lever for innovation, its benefit/risk balance is not always assured and it does not always have the expected effects. Prior to marketing, knowledge about adverse drug reactions is incomplete. Once on the market, phase IV drug studies begin: over the years, the drug is prescribed under less controlled conditions to a large number of very heterogeneous patients, often in combination with other drugs. It is at this point that previously unknown adverse effects may appear, hence the need for a pharmacovigilance system. Pharmacovigilance encompasses all methods for detecting, evaluating, communicating and preventing the risks of adverse drug reactions. The most severe adverse events occur frequently in hospital, and a significant proportion of adverse events result in hospitalizations. In addition, the consequences of hospital adverse events in terms of length of stay, mortality and costs are considerable. It therefore appears necessary to develop ‘hospital pharmacovigilance’ aimed at reducing the incidence of adverse reactions in hospitals. The most widely used monitoring method in pharmacovigilance is spontaneous notification. However, underreporting of adverse drug reactions is common in many countries and is a major obstacle to pharmacovigilance assessment. It is in this context that this study describes the experience of the pharmacovigilance service at the University Hospital of Oran (EHUO). This is a retrospective study extending from 2011 to 2017, carried out on archived records of declarations collected at the EHUO pharmacovigilance department. Reports were collected by two methods: ‘spontaneous notification’ and ‘active pharmacovigilance’ targeting certain clinical services. We counted 217 declarations, involving 56% female patients and 46% male patients. Age ranged from 5 to 78 years, with an average of 46 years. The most common adverse reaction was drug-induced skin eruption (toxiderma). The drugs in question were, according to the ATC classification, essentially anti-infectives, followed by anticancer drugs. As regards the evolution of declarations by year, a low rate of notification was noted in 2011; we therefore set up an active approach in some services, in which a designated resident attended staff meetings every week. This resulted in an increase in the number of reports, which came essentially from the services where the active approach was installed. This highlights the need for ongoing communication between all relevant health actors to stimulate reporting and secure drug treatments.
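
A minimal sketch of the kind of retrospective tabulation described above - counting archived declarations by year, collection method, and implicated drug class. The records and column names below are hypothetical, not the service's actual data or software.

# Illustrative sketch of a retrospective ADR tabulation. The records are
# hypothetical, not data from the EHUO pharmacovigilance service.

import pandas as pd

reports = pd.DataFrame([
    {"year": 2011, "method": "spontaneous", "atc_class": "anti-infectives"},
    {"year": 2015, "method": "active",      "atc_class": "anticancer"},
    {"year": 2016, "method": "active",      "atc_class": "anti-infectives"},
    # ... one row per archived declaration
])

# Declarations per year, split by collection method:
by_year = reports.pivot_table(index="year", columns="method",
                              aggfunc="size", fill_value=0)
print(by_year)

# Most frequently implicated ATC classes:
print(reports["atc_class"].value_counts())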

Keywords: adverse drug reactions, hospital, pharmacovigilance, spontaneous notification

Procedia PDF Downloads 140
44 The One, the Many, and the Doctrine of Divine Simplicity: Variations on Simplicity in Essentialist and Existentialist Metaphysics

Authors: Mark Wiebe

Abstract:

One of the tasks contemporary analytic philosophers have focused on (e.g., Wolterstorff, Alston, Plantinga, Hasker, and Crisp) is the analysis of certain medieval metaphysical frameworks. This growing body of scholarship has helped clarify and prevent distorted readings of medieval and ancient writers. However, as scholars like Dolezal, Duby, and Brower have pointed out, these analyses have been incomplete or inaccurate in some instances, e.g., with regard to analogical speech or the doctrine of divine simplicity (DDS). Additionally, contributors to this work frequently express opposing claims or fail to note substantial differences between ancient and medieval thinkers. This is the case regarding the comparison between Thomas Aquinas and others. Anton Pegis and Étienne Gilson have argued along this line that Thomas’ metaphysical framework represents a fundamental shift. Gilson describes Thomas’ metaphysics as a turn from a form of “essentialism” to “existentialism.” It can be argued that this shift distinguishes Thomas from many analytic philosophers as well as from other classical defenders of the DDS. Moreover, many of the objections analytic philosophers make against Thomas presume the same metaphysical principles undergirding the above-mentioned form of essentialism, which weakens their force against Thomas’ positions. In order to demonstrate these claims, it will be helpful to consider Thomas’ metaphysical outlook alongside that of two other prominent figures: Augustine and Ockham. One area of their thinking which brings their differences to the surface has to do with how each relates to Platonic and Neo-Platonic thought. More specifically, it is illuminating to consider whether and how each distinguishes or conceives essence and existence. It is also useful to see how each approaches the Platonic conflicts between essence and individuality, unity and intelligibility. In both of these areas, Thomas stands out from Augustine and Ockham. Although Augustine and Ockham diverge in many ways, both ultimately identify being with particularity and pit particularity against both unity and intelligibility. By contrast, Thomas argues that being is distinct from and prior to essence. Being (i.e., Being in itself) rather than essence or form must therefore serve as the ground and ultimate principle for the existence of everything in which being and essence are distinct. Additionally, since change, movement, and addition improve and give definition to finite being, multitude and distinction are principles of being rather than of non-being. Consequently, each creature imitates and participates in God’s perfect Being in its own way; the perfection of each genus exists pre-eminently in God without being at odds with God’s simplicity; God has knowledge, power, and will; and these and the many other terms assigned to God refer truly to the being of God without being either meaningless or synonymous. The existentialist outlook at work in these claims distinguishes Thomas in a noteworthy way from his contemporaries and predecessors as much as it does from many of the analytic philosophers who have objected to his thought. This suggests that at least these kinds of objections do not apply to Thomas’ thought.

Keywords: theology, philosophy of religion, metaphysics, philosophy

Procedia PDF Downloads 43
43 Analytical Tools for Multi-Residue Analysis of Some Oxygenated Metabolites of PAHs (Hydroxylated, Quinones) in Sediments

Authors: I. Berger, N. Machour, F. Portet-Koltalo

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are toxic and carcinogenic pollutants produced mainly by incomplete combustion processes in industrialized and urbanized areas. After being emitted into the atmosphere, these persistent contaminants are deposited in soils or sediments. Although persistent, some can be partially degraded (photodegradation, biodegradation, chemical oxidation), yielding oxygenated metabolites (oxy-PAHs) that can be more toxic than their parent PAH. Oxy-PAHs are measured less often than PAHs in sediments, and this study compares different analytical tools for extracting and quantifying a mixture of four hydroxylated PAHs (OH-PAHs) and four carbonyl PAHs (quinones) in sediments. Methodologies: Two analytical systems – HPLC with on-line UV and fluorescence detectors (HPLC-UV-FLD) and GC coupled to a mass spectrometer (GC-MS) – were compared for the separation and quantification of oxy-PAHs, and microwave assisted extraction (MAE) was optimized to extract oxy-PAHs from sediments. Results: First, OH-PAHs and quinones were analyzed by HPLC with on-line UV and fluorimetric detection: OH-PAHs were detected with the sensitive FLD, while the non-fluorescent quinones were detected by UV. The limits of detection (LODs) obtained were in the range (2-3)×10⁻⁴ mg/L for OH-PAHs and (2-3)×10⁻³ mg/L for quinones. Second, although GC-MS is not well suited to the analysis of the thermally degradable OH-PAHs and quinones without a derivatization step, it was used because of the advantages of the detector in terms of identification and of GC in terms of efficiency. Without derivatization, only two of the four quinones were detected in the range 1-10 mg/L (LODs = 0.3-1.2 mg/L), nor were the LODs for the four OH-PAHs very satisfactory (0.18-0.6 mg/L). Two derivatization processes were therefore optimized against the literature: one for silylation of OH-PAHs, one for acetylation of quinones. Silylation using BSTFA/TMCS 99/1 was enhanced by using a mixture of catalyst solvents (pyridine/ethyl acetate) and finding the appropriate reaction duration (5-60 minutes). Acetylation was optimized at different steps of the process, including the initial volume of compounds to derivatize, the added amounts of Zn (0.1-0.25 g), the nature of the derivatization product (acetic anhydride, heptafluorobutyric acid…) and the liquid/liquid extraction at the end of the process. After derivatization, LODs were decreased by a factor of 3 for OH-PAHs and by a factor of 4 for quinones, with all four quinones now detected. Thereafter, quinones and OH-PAHs were extracted from spiked sediments using MAE followed by GC-MS analysis. Several solvent mixtures, volumes (10-25 mL) and extraction temperatures (80-120 °C) were tested to obtain the best recovery yields. Satisfactory recoveries could be obtained for quinones (70-96%) and for OH-PAHs (70-104%). Temperature was a critical factor that had to be controlled to avoid oxy-PAH degradation during the MAE extraction process. Conclusion: Even though MAE-GC-MS proved satisfactory for analyzing these oxy-PAHs, MAE optimization must be continued to find a more appropriate extraction solvent mixture, allowing direct injection into the HPLC-UV-FLD system, which is more sensitive than GC-MS and does not require a long prior derivatization step.
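
By way of illustration, the two figures of merit discussed above - limit of detection and extraction recovery - are conventionally computed as follows. The numbers are invented, and the 3σ/slope LOD rule is one common convention, not necessarily the definition used by the authors.

# Illustrative computation of the two figures of merit discussed above.
# Values are invented; the 3*sigma/slope LOD rule is one common convention
# and is not necessarily the definition used by the authors.

def lod_from_calibration(sd_blank: float, slope: float) -> float:
    """Limit of detection (mg/L) as 3 * (blank signal SD) / calibration slope."""
    return 3.0 * sd_blank / slope

def recovery_percent(measured_mg_kg: float, spiked_mg_kg: float) -> float:
    """Extraction recovery yield as a percentage of the spiked amount."""
    return 100.0 * measured_mg_kg / spiked_mg_kg

print(f"LOD: {lod_from_calibration(sd_blank=0.002, slope=25.0):.1e} mg/L")   # 2.4e-04
print(f"recovery: {recovery_percent(measured_mg_kg=0.84, spiked_mg_kg=1.0):.0f}%")  # 84%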

Keywords: derivatizations for GC-MS, microwave assisted extraction, on-line HPLC-UV-FLD, oxygenated PAHs, polluted sediments

Procedia PDF Downloads 259
42 Liquid Food Sterilization Using Pulsed Electric Field

Authors: Tanmaya Pradhan, K. Midhun, M. Joy Thomas

Abstract:

Increasing the shelf life and improving the quality of packaged liquid foods are important objectives for the industry. One method by which this can be achieved is deactivating the micro-organisms present in the liquid food through pasteurization. Pasteurization is done by heating, but it has serious disadvantages, such as reductions in food quality, flavour, taste and colour, which has led to the development of alternative methods such as treatment using UV radiation, high pressure, ionizing radiation and pulsed electric fields. In recent years, the use of the pulsed electric field (PEF) for inactivating the microbial content of food has gained popularity. PEF applies a very high electric field for a short time to inactivate microorganisms, which requires a high-voltage pulsed power source. Pulsed power sources used for PEF treatment typically range from 5 kV to 50 kV. Different pulse shapes are used, such as exponentially decaying and square wave pulses. Exponentially decaying pulses are generated by high-power switches with only turn-on capability, which therefore discharge the total energy stored in the capacitor bank. These pulses have a sudden onset and hence a high rate of rise, but a very slow decay, which deposits extra heat that is ineffective for microbial inactivation. Square pulses can be produced by the incomplete discharge of a capacitor with the help of a switch having both on and off control, or by using a pulse-forming network. In this work, a pulsed-power-based system was designed with high-voltage capacitors and solid-state switches (IGBTs) for the inactivation of pathogenic micro-organisms in liquid foods such as fruit juices. The high-voltage generator is based on the Marx generator topology, which can produce variable amplitude, frequency, and pulse width according to the requirements. The liquid food is treated in a chamber where the pulsed electric field is produced between stainless steel electrodes using the pulsed output voltage of the supply. Preliminary bacterial inactivation tests were performed on orange juice inoculated with Escherichia coli. With the help of the developed pulsed power source and the chamber, the inoculated orange juice was PEF-treated. The voltage was varied to reach a peak electric field of up to 15 kV/cm (a worked numerical example follows below). For a total treatment time of 200 µs, a 30% reduction in the bacterial count was observed. Detailed results and analysis will be presented in the final paper.
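
As a quick numerical check, the peak field and the reported 30% count reduction translate as follows. The 1 cm electrode gap is an assumption made for illustration - the abstract reports only the field - and the uniform-field relation E = V/d is an approximation.

# Quick numerical sketch for the PEF figures quoted above. The 1 cm electrode
# gap is assumed for illustration; the abstract reports only the peak field.
import math

voltage_kV = 15.0        # peak pulse voltage (assumed, to match 15 kV/cm)
gap_cm = 1.0             # hypothetical electrode spacing
field_kV_per_cm = voltage_kV / gap_cm   # uniform-field approximation E = V/d
print(f"peak field: {field_kV_per_cm:.0f} kV/cm")

# A 30% reduction in bacterial count expressed as a log10 reduction:
surviving_fraction = 0.70
log_reduction = -math.log10(surviving_fraction)
print(f"log10 reduction: {log_reduction:.2f}")   # about 0.15 log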

Keywords: Escherichia coli bacteria, high voltage generator, microbial inactivation, pulsed electric field, pulsed forming line, solid-state switch

Procedia PDF Downloads 149
41 Direct Assessment of Cellular Immune Responses to Ovalbumin with a Secreted Luciferase Transgenic Reporter Mouse Strain IFNγ-Lucia

Authors: Martyna Chotomska, Aleksandra Studzinska, Marta Lisowska, Justyna Szubert, Aleksandra Tabis, Jacek Bania, Arkadiusz Miazek

Abstract:

Objectives: Assessing antigen-specific T cell responses is of utmost importance for the pre-clinical testing of prototype vaccines against intracellular pathogens and tumor antigens. Two types of in vitro assay are mainly used for this purpose: 1) enzyme-linked immunospot (ELISpot) and 2) intracellular cytokine staining (ICS). Both are time-consuming, relatively expensive, and require manual dexterity. Here, we assess whether straightforward detection of luciferase activity in blood samples of transgenic reporter mice expressing a secreted Lucia luciferase under the transcriptional control of the IFN-γ promoter parallels the sensitivity of the IFN-γ ELISpot assay. Methods: The IFN-γ-LUCIA mouse strain, carrying multiple copies of the Lucia luciferase transgene under the transcriptional control of the IFN-γ minimal promoter, was generated by pronuclear injection of linear DNA. The specificity of transgene expression and mobilization was assessed in vitro using transgenic splenocytes exposed to various mitogens. The IFN-γ-LUCIA mice were immunized with 50 mg of ovalbumin (OVA) emulsified in incomplete Freund’s adjuvant three times, every two weeks, by subcutaneous injection. Blood samples were collected before and five days after each immunization, and luciferase activity was assessed in blood serum. Peripheral blood mononuclear cells were separated and assessed for the frequency of OVA-specific IFN-γ-secreting T cells. Results: We show that in vitro cultured splenocytes of IFN-γ-LUCIA mice respond with 2- and 3-fold increases in secreted luciferase activity to the T cell mitogens concanavalin A and phorbol myristate acetate, respectively, but fail to respond to B cell-stimulating E. coli lipopolysaccharide. Immunization of IFN-γ-LUCIA mice with OVA leads to an over 4-fold increase in serum luciferase activity five days post-immunization, with a barely detectable increase in OVA-specific IFN-γ-secreting T cells by ELISpot. Second and third immunizations further increase the luciferase activity and coincidently also increase the frequency of OVA-specific T cells by ELISpot. Conclusions: We conclude that minimally invasive monitoring of secreted luciferase in the blood serum of IFN-γ-LUCIA mice constitutes a sensitive method for evaluating primary and memory Th1 responses to protein antigens. As such, this method may complement existing methods for rapid immunogenicity assessment of prototype vaccines.
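
The fold-inductions quoted above are simple ratios to the pre-immunization baseline; a minimal sketch follows, with invented luminometer readings rather than data from the study.

# Minimal sketch of the fold-induction measure used above: serum luciferase
# activity relative to the pre-immunization baseline. Readings are invented.

def fold_induction(sample_rlu: float, baseline_rlu: float) -> float:
    """Fold change in luciferase activity (relative light units)."""
    return sample_rlu / baseline_rlu

baseline = 1200.0                  # hypothetical pre-immunization RLU
day5_post = 5200.0                 # hypothetical day-5 serum reading
print(f"fold induction: {fold_induction(day5_post, baseline):.1f}x")  # ~4.3x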

Keywords: ELISpot, immunogenicity, interferon-gamma, reporter mice, vaccines

Procedia PDF Downloads 139
40 Development of Intellectual Property Information Services in Zimbabwe’s University Libraries: Assessing the Current Status and Mapping the Future Direction

Authors: Jonathan Munyoro, Takawira Machimbidza, Stephen Mutula

Abstract:

The study investigates the current status of Intellectual Property (IP) information services in Zimbabwe's university libraries. Specifically, the study assesses the current IP information services offered in Zimbabwe’s university libraries, identifies challenges to the development of comprehensive IP information services in Zimbabwe’s university libraries, and suggests solutions for the development of IP information services in Zimbabwe’s university libraries. The study is born out of a realisation that research on IP information services in university libraries has received little attention, especially in developing country contexts, despite the fact that there are calls for heightened participation of university libraries in IP information services. In Zimbabwe, the launch of the National Intellectual Property Policy and Implementation Strategy 2018-2022 and the introduction of the Education 5.0 concept are set to significantly change the IP landscape in the country. Education 5.0 places more emphasis on innovation and industrialisation (in addition to teaching, community service, and research), and has the potential to shift the focus and level of IP output produced in higher and tertiary education institutions beyond copyrights and more towards commercially exploited patents, utility models, and industrial designs. The growing importance of IP commercialisation in universities creates a need for appropriate IP information services to assist students, academics, researchers, administrators, start-ups, entrepreneurs, and inventors. The critical challenge for university libraries is to reposition themselves and remain relevant in the new trajectory. Designing specialised information services to support increased IP generation and commercialisation appears to be an opportunity for university libraries to stay relevant in the knowledge economy. However, IP information services in Zimbabwe’s universities appear to be incomplete and focused mostly on assisting with research publications and copyright-related activities. Research on the existing status of IP services in university libraries in Zimbabwe is therefore necessary to help identify gaps and provide solutions in order to stimulate the growth of new forms of such services. The study employed a quantitative approach. An online questionnaire was administered to 57 academic librarians from 15 university libraries. Findings show that the current focus of the surveyed institutions is on providing scientific research support services (15); disseminating/sharing university research output (14); and copyright activities (12). More specialised IP information services such as IP education and training, patent information services, IP consulting services, IP online service platforms, and web-based IP information services are largely unavailable in Zimbabwean university libraries. Results reveal that the underlying challenge in the development of IP information services in Zimbabwe's university libraries is insufficient IP knowledge among academic librarians, which is exacerbated by inadequate IP management frameworks in university institutions. The study proposes a framework for the entrenchment of IP information services in Zimbabwe's university libraries.

Keywords: academic libraries, information services, intellectual property, IP knowledge, university libraries, Zimbabwe

Procedia PDF Downloads 116
39 Effect of Internet Addiction on Dietary Behavior and Lifestyle Characteristics among University Students

Authors: Hafsa Kamran, Asma Afreen, Zaheer Ahmed

Abstract:

Internet addiction, a mental health disorder that has emerged over the last two decades, is manifested by an inability to control internet use, leading to academic, social, physiological and/or psychological difficulties. The present study aimed to assess the levels of internet addiction among university students in Lahore and to explore the effects of internet addiction on their dietary behavior and lifestyle. It was an analytical cross-sectional study. Data were collected from October to December 2016 from students of four universities selected through a two-stage sampling method. There were 500 participants, of whom 13 questionnaires were rejected due to incomplete information. Levels of internet addiction (IA) were calculated using Young’s Internet Addiction Test (YIAT). Data were also collected on students’ demographics, lifestyle factors and dietary behavior using a self-reported questionnaire. Data were analyzed using SPSS (version 21), with the chi-square test applied to evaluate relationships between variables (illustrated below). The results revealed that 10% of the population had severe internet addiction, while moderate internet addiction was present in 42%. Higher prevalence was found among males (11% vs. 8%), private sector university students (p = 0.008) and engineering students (p = 0.000). The lifestyle habits of internet addicts were of significantly poorer quality than those of normal users (p = 0.05). Internet addiction was associated with being less physically active (p = 0.025), shorter duration of physical activity (p = 0.016), more disorganized sleep patterns (p = 0.023), shorter duration of sleep (p = 0.019), reporting being more tired and sleepy in class (p = 0.033), and spending more time on the internet compared to normal users. Severe and moderate internet addicts were also found to be more overweight and obese than normal users (p = 0.000). The dietary behavior of internet addicts was significantly poorer than that of normal users. Internet addicts skipped breakfast more often than normal users (p = 0.039); common reasons for meal skipping were lack of time and snacking between meals (p = 0.000). They also had increased meal sizes (p = 0.05) and a habit of snacking while using the internet (p = 0.027). Fast food (p = 0.016) and fried items (p = 0.05) were the most consumed snacks, while carbonated beverages (p = 0.019) were the most consumed beverages among internet addicts. Internet addicts consumed fewer than the recommended daily servings of dairy (p = 0.008) and fruits (p = 0.000), and more servings of the meat group (p = 0.025), than their non-addicted counterparts. In conclusion, this study demonstrated that internet addicts have unhealthy dietary behavior and inappropriate lifestyle habits. University students should be educated regarding the importance of a balanced diet and healthy lifestyle, which are critical for the effective primary prevention of numerous chronic degenerative diseases. Furthermore, it is necessary to raise awareness among youth and their parents concerning the adverse effects of internet addiction.
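
The associations above rest on chi-square tests of independence between addiction status and each lifestyle or dietary variable. A minimal sketch follows; the 2x2 contingency table is invented, not the study's data.

# Minimal sketch of the chi-square test of independence used in the study.
# The 2x2 contingency table below is invented, not the study's data.

from scipy.stats import chi2_contingency

#                skips breakfast   eats breakfast
table = [[60, 40],   # internet addicts (hypothetical counts)
         [45, 85]]   # normal users     (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")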

Keywords: dietary behavior, internet addiction, lifestyle, university students

Procedia PDF Downloads 175
38 Making Meaning, Authenticity, and Redefining a Future in Former Refugees and Asylum Seekers Detained in Australia

Authors: Lynne McCormack, Andrew Digges

Abstract:

Since 2013, the Australian government has enforced mandatory detention of anyone arriving in Australia without a valid visa, including those subsequently identified as refugees or seeking asylum. While consistent with the increased use of immigration detention internationally, Australia’s use of offshore processing facilities both during and after refugee status determination has until recently remained a unique feature of its program of deterrence. The commonplace detention of refugees and asylum seekers following displacement is a significant and independent source of trauma and a contributory factor in adverse psychological outcomes. Officially, these individuals have no prospect of resettlement in Australia, are barred from applying for substantive visas, and are frequently and indefinitely detained in closed facilities such as immigration detention centres or alternative places of detention, including hotels. It is also important to note that the limited access researchers are granted to Australia’s immigration detention population often means that data available for secondary analysis are incomplete or delayed in release. Further, studies of the lived experience of refugees and asylum seekers are typically cross-sectional and convenience-sampled, employ a variety of designs and research methodologies that limit comparability, and focus on the immediacy of the individual’s experience. Consequently, how former detainees make sense of their experience, redefine their future trajectory upon release, and recover a sense of authenticity and purpose is unknown. The present study therefore sought the positive and negative subjective interpretations of six participants in Australia regarding their lived experiences as refugees and asylum seekers within Australia’s immigration detention system and its impact on their future sense of self. It made use of interpretative phenomenological analysis (IPA), a qualitative research methodology interested in how individuals make sense of, and ascribe meaning to, their unique lived experiences of phenomena. Underpinned by phenomenology, hermeneutics, and critical realism, this idiographic study aimed to explore both positive and negative subjective interpretations of former refugees and asylum seekers held in detention in Australia: how they make sense of their experiences, how detention has impacted their overall journey as displaced persons, and how they have moved forward in the aftermath of protracted detention. Examining these unique lived experiences may inform the future development of theoretical models of posttraumatic growth among this vulnerable population, thereby informing the delivery of future mental health and resettlement services.

Keywords: mandatory detention, refugee, asylum seeker, authenticity, Interpretative phenomenological analysis

Procedia PDF Downloads 65
37 Monitoring of Wound Healing Through Structural and Functional Mechanisms Using Photoacoustic Imaging Modality

Authors: Souradip Paul, Arijit Paramanick, M. Suheshkumar Singh

Abstract:

Traumatic injury is a leading worldwide health problem. Annually, millions of surgical wounds are created in the course of routine medical care. The healing of these wounds is usually monitored by visual inspection, yet maximal restoration of tissue functionality remains a significant concern of clinical care. Although minor injuries heal well with proper care and medical treatment, large injuries are subject to adverse factors (vascular insufficiency, tissue coagulation) and heal poorly. Demographically, the number of people suffering from severe wounds and impaired healing is burdensome for both human health and the economy. An incomplete understanding of the functional and molecular mechanisms of tissue healing often leads to a lack of proper therapies and treatment; strong and promising medical guidance is therefore necessary for monitoring tissue regeneration processes. Photoacoustic imaging (PAI) is a non-invasive, hybrid imaging modality that can provide a suitable solution in this regard. Light combined with sound offers structural, functional and molecular information from greater penetration depths, so the molecular and structural mechanisms of tissue repair are observable in PAI both in the superficial layer and in the deep tissue region. Blood vessel formation and growth are essential components of tissue repair: these vessels supply nutrition and oxygen to the cells in the wound region, and angiogenesis (the formation of new capillaries from existing blood vessels) contributes to new blood vessel formation during tissue repair. Improved tissue healing depends directly on angiogenesis. Other optical microscopy techniques can visualize angiogenesis at micron-scale penetration depths but are unable to provide deep tissue information. PAI overcomes this barrier: it is ideally suited to deep tissue imaging and provides the rich optical contrast generated by hemoglobin in blood vessels (the standard contrast relation is given below). Hence, early detection of angiogenesis by PAI supports monitoring of the medical treatment of the wound. Along with functional properties, mechanical properties also play a key role in tissue regeneration. A wound heals through a dynamic series of physiological events such as coagulation, granulation tissue formation, and extracellular matrix (ECM) remodeling; the resulting changes in tissue elasticity can be identified using non-contact photoacoustic elastography (PAE). In a nutshell, angiogenesis and biomechanical properties are both critical parameters of tissue healing, and both can be characterized with a single imaging modality (PAI).
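
For context, the optical-contrast mechanism PAI exploits is conventionally summarized by the initial-pressure relation of standard photoacoustic theory (a textbook relation, not a result of this work):

\[
p_0 = \Gamma\,\mu_a\,F, \qquad \Gamma = \frac{\beta c^2}{C_p}
\]

where \(p_0\) is the initial photoacoustic pressure, \(\Gamma\) the Grüneisen parameter (\(\beta\) the thermal expansion coefficient, \(c\) the speed of sound, \(C_p\) the specific heat at constant pressure), \(\mu_a\) the optical absorption coefficient - high for hemoglobin, which is what makes blood vessels bright in PAI - and \(F\) the local optical fluence.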

Keywords: PAT, wound healing, tissue coagulation, angiogenesis

Procedia PDF Downloads 73