Search results for: word association task
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4974

714 Risk of Fatal and Non-Fatal Coronary Heart Disease and Stroke Events among Adult Patients with Hypertension: Basic Markov Model Inputs for Evaluating Cost-Effectiveness of Hypertension Treatment: Systematic Review of Cohort Studies

Authors: Mende Mensa Sorato, Majid Davari, Abbas Kebriaeezadeh, Nizal Sarrafzadegan, Tamiru Shibru, Behzad Fatemi

Abstract:

Markov models, such as the cardiovascular disease (CVD) policy model based simulation, are used for evaluating the cost-effectiveness of hypertension treatment. Stroke, angina, myocardial infarction (MI), cardiac arrest, and all-cause mortality were included in this model. Hypertension is a risk factor for a number of vascular and cardiac complications and CVD outcomes. Objective: This systematic review was conducted to evaluate the comprehensiveness of this model across different regions globally. Methods: We searched articles written in the English language from PubMed/Medline, Ovid/Medline, Embase, Scopus, Web of Science, and Google Scholar with a systematic search query. Results: Thirteen cohort studies involving a total of 2,165,770 adults (1,666,554 with hypertension and 499,226 with treatment-resistant hypertension) were included in this review. Hypertension is clearly associated with coronary heart disease (CHD) and stroke mortality, unstable angina, stable angina, MI, heart failure (HF), sudden cardiac death, transient ischemic attack, ischemic stroke, subarachnoid hemorrhage, intracranial hemorrhage, peripheral arterial disease (PAD), and abdominal aortic aneurysm (AAA). The association between HF and hypertension is variable across regions. Treatment-resistant hypertension is associated with a higher relative risk of developing major cardiovascular events and all-cause mortality when compared with non-resistant hypertension; however, it is not included in the previous CVD policy model. Conclusion: The CVD policy model can be used in most regions for the evaluation of the cost-effectiveness of hypertension treatment. However, hypertension is highly associated with HF in Latin America, the Caribbean, Eastern Europe, and Sub-Saharan Africa. Therefore, it is important to consider HF in the CVD policy model for evaluating the cost-effectiveness of hypertension treatment in these regions. We do not suggest the inclusion of PAD and AAA in the CVD policy model for evaluating the cost-effectiveness of hypertension treatment due to a lack of sufficient evidence. Researchers should consider the effect of treatment-resistant hypertension either by including it in the basic model or when setting the model assumptions.
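As a rough illustration of the kind of Markov cohort model the abstract discusses, the sketch below steps a hypothetical cohort of hypertensive adults through a few of the named health states. The state list is abbreviated and the yearly transition probabilities are invented placeholders, not values from the review.

```python
# Minimal Markov cohort model sketch for hypertension outcomes.
# State names follow the abstract; the one-year transition
# probabilities below are hypothetical, not values from the review.

STATES = ["well", "post_stroke", "post_mi", "dead"]

# P[i][j] = probability of moving from STATES[i] to STATES[j] per cycle.
P = [
    [0.94, 0.02, 0.02, 0.02],  # well
    [0.00, 0.85, 0.00, 0.15],  # post-stroke
    [0.00, 0.00, 0.88, 0.12],  # post-MI
    [0.00, 0.00, 0.00, 1.00],  # dead (absorbing state)
]

def step(cohort):
    """Advance the cohort distribution by one one-year cycle."""
    return [sum(cohort[i] * P[i][j] for i in range(len(STATES)))
            for j in range(len(STATES))]

cohort = [1.0, 0.0, 0.0, 0.0]  # everyone starts in "well"
for year in range(10):
    cohort = step(cohort)

print({s: round(c, 3) for s, c in zip(STATES, cohort)})
```

A cost-effectiveness analysis would attach a cost and a utility weight to each state and sum them over the cycles; adding HF as a state, as the abstract recommends for some regions, just means one more row and column in the matrix.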

Keywords: cardiovascular disease policy model, cost-effectiveness analysis, hypertension, systematic review, twelve major cardiovascular events

Procedia PDF Downloads 70
713 Factors Affecting Cesarean Section among Women in Qatar Using Multiple Indicator Cluster Survey Database

Authors: Sahar Elsaleh, Ghada Farhat, Shaikha Al-Derham, Fasih Alam

Abstract:

Background: Cesarean section (CS) delivery is one of the major concerns in both developing and developed countries. The rate of CS deliveries is on the rise globally, and especially in Qatar. Many socio-economic, demographic, clinical, and institutional factors play an important role in cesarean sections. This study aims to investigate factors affecting the prevalence of CS among women in Qatar using the UNICEF Multiple Indicator Cluster Survey (MICS) 2012 database. Methods: The study focused on the women’s questionnaire of the MICS, which was successfully distributed to 5699 participants. Following the study’s inclusion and exclusion criteria, a final sample of 761 women aged 19-49 years who had given birth at least once before the survey was included. A number of socio-economic, demographic, clinical, and institutional factors, identified through a literature review and available in the data, were considered for the analyses. Bivariate and multivariate logistic regression models, along with multi-level modeling to investigate clustering effects, were undertaken to identify the factors that affect CS prevalence in Qatar. Results: The bivariate analyses showed that a number of categorical factors are statistically significantly associated with the dependent variable (CS). When identifying the factors from a multivariate logistic regression, the study found that only three categorical factors, ‘age of women’, ‘place of delivery’, and ‘baby weight’, appeared to significantly affect CS among women in Qatar. Although the MICS dataset is based on a cluster survey, an exploratory multi-level analysis did not show any clustering effect, i.e., no significant variation in results at the higher level (households), suggesting that all analyses at the lower level (individual respondents) are valid without any significant bias in results. Conclusion: The study found a statistically significant association between the dependent variable (CS delivery) and age of women, frequency of TV watching, assistance at birth, and place of birth. These results need to be interpreted cautiously; however, they can serve as an evidence base for further research on cesarean section delivery in Qatar.
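For readers unfamiliar with the bivariate step described above, the sketch below computes an odds ratio and an approximate 95% confidence interval from a 2×2 table of CS delivery against a hypothetical binary factor (facility type). The counts are invented for illustration and are not MICS data.

```python
# Bivariate association sketch: odds ratio from a 2x2 table of
# CS vs. vaginal delivery by facility type. Counts are invented.
import math

#                 CS    vaginal
table = {"private": (120, 180),
         "public":  ( 90, 371)}

a, b = table["private"]   # exposed group: CS, non-CS
c, d = table["public"]    # unexposed group: CS, non-CS

odds_ratio = (a * d) / (b * c)

# Approximate 95% CI on the log-odds scale (Woolf's method).
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A confidence interval that excludes 1 indicates a statistically significant bivariate association; the multivariate logistic regression the study describes then adjusts such estimates for the other factors simultaneously.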

Keywords: cesarean section, factors, multiple indicator cluster survey, MICS database, Qatar

Procedia PDF Downloads 116
712 Targeting T6SS of Klebsiella pneumoniae for Assessment of Immune Response in Mice for Therapeutic Lead Development

Authors: Sweta Pandey, Samridhi Dhyani, Susmita Chaudhuri

Abstract:

Klebsiella pneumoniae is a global threat to human health due to an increase in multi-drug resistance among strains. The hypervirulent strains of Klebsiella pneumoniae are a major concern due to their association with life-threatening infections in a healthy population. One of the major virulence factors of hypervirulent strains of Klebsiella pneumoniae is the T6SS (type VI secretion system), which is majorly involved in microbial antagonism and mediates interaction with the host eukaryotic cells during infections. T6SS mediates some of the crucial factors for establishing infection by the bacteria, such as cell adherence, invasion, and subsequent in vivo colonisation. The antibacterial activity and the cell invasion property of the T6SS are major requirements for the establishment of K. pneumoniae infections within the gut, so the T6SS can be an appropriate target for developing therapeutics. The T6SS consists of an inner tube comprising hexamers of Hcp (haemolysin co-regulated protein); at the top of this tube sits VgrG (valine-glycine repeat protein G), and the tip of the machinery consists of PAAR-domain-containing proteins, which act as a delivery system for bacterial effectors. For this study, an immune response to recombinant VgrG protein was generated to establish this protein as a potential immunogen for the development of therapeutic leads. The immunogenicity of the selected protein was determined by predicting the B cell epitopes with the BCEP analysis tool. The gene sequence for multiple domains of the VgrG protein (phage_base_V, T6SS_Vgr, DUF2345) was selected and cloned into the pMAL vector in E. coli. The construct was subcloned and expressed as a 203-residue fusion protein with a maltose-binding protein (MBP) tag to enhance the solubility and purification of this protein. The purified recombinant VgrG fusion protein was used for mice immunisation. The antiserum showed reactivity with the recombinant VgrG in ELISA and western blot. The immunised mice were then challenged with K. pneumoniae and showed bacterial clearance. The recombinant VgrG protein can further be used for studying downstream signalling of VgrG in mice during infection and for therapeutic mAb development to eradicate K. pneumoniae infections.

Keywords: immune response, Klebsiella pneumoniae, multi-drug resistance, recombinant protein expression, T6SS, VgrG

Procedia PDF Downloads 102
711 Music Responsiveness and Cultural Practice: Tarok Ethnic Group of Plateau State in Focus

Authors: Johnson-Egemba Helen Amaka

Abstract:

Music is emotional in the sense that it influences people’s feelings. The way and manner people react to music at a given time depend on the type of music that is playing. Music can make someone march or dance, cry or laugh, be happy or sad, fight or make peace, and so on. It can therefore make someone exhibit certain behaviours, either positive or negative. Even dangerous animals have been found to be controlled by music. In psychiatric homes, patients are often found dancing to music. During funeral ceremonies, music, singing, and dancing are sources of comfort to the bereaved. As a background to the study, the Tarok ethnic group in Plateau State was used. The Tarok area comprises the Langtang North and South Local Government Areas. The Tarok ethnic group integrates music into almost all the activities of their lives. A total of six (6) types of folk songs were identified. These songs cover marriages, funerals, royalty, togetherness, war, rituals, festivals, and farming. This paper points out the significance of the basic responsiveness of the Tarok people towards the folk songs and their reactions generally, whether positive or negative. The methods of data collection employed in this work include an oral interview approach, the recording of various types of Tarok folk songs, and the consultation of journals, magazines, and textbooks. The researcher used oral interviews as her primary source of information, found to be the most effective procedure for carrying out this task. The songs were textually analyzed with a view to unveiling their meanings and thought processes and to conveying their direction and functions within the context of their rendition. The major findings of the study are that music in Tarok culture covers physical, mental, emotional, and social experiences. The physical aspect concerns motor skills, which include dancing and the demonstration of the songs. The mental experiences are at the intellectual level and include the construction and manufacture of musical instruments, the composition of songs, teaching and learning, etc. Furthermore, this research documented, in addition to musical activities, the literature, history, and culture of the Tarok communities.

Keywords: cultural, music, practice, responsiveness

Procedia PDF Downloads 296
710 Project Work with Design Thinking and Blended Learning: A Practical Report from Teaching in Higher Education

Authors: C. Vogeler

Abstract:

Change processes such as individualization and digitalization have an impact on higher education. Graduates are expected to cooperate in creative work processes in their professional lives. During their studies, they need to be prepared accordingly. This includes modern learning scenarios that integrate the benefits of digital media. Therefore, design thinking and blended learning have been combined in the project-based seminar conception introduced here. The presented seminar conception has been realized and evaluated with students of information sciences since September 2017. Within the seminar, the students learn to work on a project and apply the methods in a problem-based learning scenario. The task of the case study is to arrange a conference on the topic of gaming in libraries. In order to collaboratively develop creative possibilities of realization within the group of students, the design thinking method was chosen. Design thinking is a method used to create user-centric, problem-solving, need-driven innovation through creative collaboration in multidisciplinary teams. Its central characteristics are the openness of the approach to work results and the visualization of ideas. This approach is now also accepted in the field of higher education. Especially in problem-based learning scenarios, the method offers clearly defined process steps for creative ideas and their realization. The creative process can be supported by digital media, such as search engines and tools for the documentation of brainstorming, the creation of mind maps, project management, etc. Because the students have to do two-thirds of the workload in private study, design thinking has been combined with a blended learning approach. This supports students’ preparation and follow-up of the joint work in workshops (flipped classroom scenario) as well as communication and collaboration during the entire project work phase. For this purpose, learning materials are provided on a Moodle-based learning platform, as well as various tools that support the design thinking process as described above. In this paper, the seminar conception combining design thinking and blended learning is described, and the potential and limitations of the chosen strategy for the development of a course with a multimedia approach in higher education are reflected upon.

Keywords: blended learning, design thinking, digital media tools and methods, flipped classroom

Procedia PDF Downloads 197
709 Distribution of Dynamical and Energy Parameters in Axisymmetric Air Plasma Jet

Authors: Vitas Valinčius, Rolandas Uscila, Viktorija Grigaitienė, Žydrūnas Kavaliauskas, Romualdas Kėželis

Abstract:

Determination of the integral dynamical and energy characteristics of high-temperature gas flows is a very important task of gas dynamics for hazardous-substance destruction systems. These characteristics are also always necessary for the investigation of high-temperature turbulent flow dynamics and heat and mass transfer. It is well known that the distribution of dynamical and thermal characteristics of high-temperature flows and jets is strongly related to the heat flux variation over an imposed heating area. As is visible from numerous experiments and theoretical considerations, the fundamental properties of an isothermal jet are well investigated. However, the establishment of regularities under high-temperature conditions meets certain specific behavior compared with moderate-temperature jets and flows. Their structures have not been thoroughly studied yet, especially in the case of a plasma ambient. It is well known that the distribution of local jet parameters in high-temperature and isothermal jets and flows may significantly differ. A high-temperature axisymmetric air jet generated by an atmospheric-pressure DC arc plasma torch was investigated employing an enthalpy probe of 3.8×10⁻³ m diameter. Distributions of velocities and temperatures were established in different cross-sections of the plasma jet outflowing from a 42×10⁻³ m diameter pipe at a mean velocity of 700 m·s⁻¹ and an average temperature of 4000 K. It has been found that gas heating only fractionally influences the shape and values of the dimensionless profiles of velocity and temperature in the main zone of the plasma jet but has a significant influence in the initial zone. The width of the initial zone of the plasma jet has been found to be smaller than in the case of isothermal flow. The relation between the dynamical thickness and the turbulent Prandtl number has been established along the jet axis. Experimental results were generalized in dimensionless form. The presence of convective heating shows that heat transfer in a moving high-temperature jet also occurs through transport by the moving particles of the jet. In this case, the intensity of convective heat transfer is proportional to the instantaneous value of the flow velocity at a given point in space. Consequently, the configuration of the temperature field in moving jets and flows essentially depends on the configuration of the velocity field.
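The proportionality stated in the last paragraph can be sketched numerically: the heat carried per unit area by the moving gas scales linearly with the local velocity. The property values below are rough illustrative figures for hot air, not measured data from the experiment.

```python
# Sketch of convective heat transport per unit area,
# q = rho * cp * u * (T - T_ref), which is linear in velocity u.
# All property values are illustrative placeholders.

rho = 0.088      # kg/m^3, air density at ~4000 K (illustrative)
cp = 1300.0      # J/(kg K), specific heat of hot air (illustrative)
T_ref = 300.0    # K, ambient reference temperature

def convective_flux(u, T):
    """Heat carried by the moving gas, in W/m^2."""
    return rho * cp * u * (T - T_ref)

# Doubling the velocity at fixed temperature doubles the flux:
q1 = convective_flux(350.0, 4000.0)
q2 = convective_flux(700.0, 4000.0)
print(q1, q2)
```

This linear coupling between the velocity and temperature fields is why, as the abstract concludes, the temperature field of the jet essentially depends on the configuration of the velocity field.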

Keywords: plasma jet, plasma torch, heat transfer, enthalpy probe, turbulent Prandtl number

Procedia PDF Downloads 182
708 Television, Internet, and Internet Social Media Direct-To-Consumer Prescription Medication Advertisements: Intention and Behavior to Seek Additional Prescription Medication Information

Authors: Joshua Fogel, Rivka Herzog

Abstract:

Although direct-to-consumer prescription medication advertisements (DTCA) are viewed or heard in many venues, there does not appear to be any research on internet social media DTCA. We study the association of traditional media DTCA and digital media DTCA, including the internet social media of YouTube, Facebook, and Twitter, with three different outcomes: one intentions outcome and two behavior outcomes. The intentions outcome was the agreement level for seeking additional information about a prescription medication after seeing a DTCA. One behavior outcome was the agreement level for obtaining additional information about a prescription medication after seeing a DTCA; the other was the frequency level for doing so. Surveys were completed by 635 college students. Predictors included demographic variables, theory of planned behavior variables, health variables, and advertisements seen or heard. In the behavior analyses, additional predictors of intentions and sources for seeking additional prescription drug information were included. Multivariate linear regression analyses were conducted. We found that increased age was associated with increased behavior, women were associated with increased intentions, and Hispanic race/ethnicity was associated with decreased behavior. For the theory of planned behavior variables, increased attitudes were associated with increased intentions, increased social norms were associated with increased intentions and behavior, and increased intentions were associated with increased behavior. Very good perceived health was associated with increased intentions. Advertisements seen in spam mail were associated with decreased intentions. Advertisements seen on traditional or cable television were associated with decreased behavior, while advertisements seen on television watched on the internet were associated with increased behavior. The source of seeking additional information by reading internet print content was associated with increased behavior. No internet social media advertisements were associated with either intentions or behavior. In conclusion, pharmaceutical brand managers and marketers should consider these findings when tailoring their DTCA campaigns and directing their DTCA advertising budget towards young adults such as college students. They should reconsider the current approach to traditional television DTCA and consider dedicating a larger advertising budget toward internet television DTCA. Although internet social media is a popular place to advertise, the financial expenditures do not appear worthwhile for DTCA when targeting young adults such as college students.
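As a minimal sketch of the multivariate linear regression analyses described above, the snippet below fits an intentions score on attitude and social-norm scores by ordinary least squares. The data are synthetic (generated from known coefficients), so only the modeling step mirrors the study; none of the numbers are from the survey.

```python
# Multivariate linear regression sketch via least squares.
# Synthetic data: intentions = 0.5*attitude + 0.3*norms + noise.
import numpy as np

rng = np.random.default_rng(0)
n = 200
attitude = rng.normal(3.5, 1.0, n)   # hypothetical 5-point-scale scores
norms = rng.normal(3.0, 1.0, n)
intentions = 0.5 * attitude + 0.3 * norms + rng.normal(0.0, 0.5, n)

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n), attitude, norms])
coef, *_ = np.linalg.lstsq(X, intentions, rcond=None)
print(dict(zip(["intercept", "attitude", "norms"], coef.round(2))))
```

The recovered coefficients land near the generating values (0.5 and 0.3), illustrating how "increased attitudes" and "increased social norms" translate into positive regression coefficients on intentions.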

Keywords: brand managers, direct-to-consumer advertising, internet, social media

Procedia PDF Downloads 265
707 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing

Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou

Abstract:

The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoCs) contain a coarse-granularity multi-core CPU (central processing unit) and a mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture coupling these three processors. The CPU and GPU offload partial computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. Therefore, a systematic approach is needed that takes advantage of write-once-run-anywhere portability and the high execution performance of modules mapped to various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for the abstraction of the cooperation of the heterogeneous processors; it supports task partition, communication, and synchronization. At its first run, the intermediate language, represented by the data flow diagram, can generate the executable code of the target processor or can be converted into high-level programming languages. The instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping of two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system achieves performance similar to the pure-FPGA implementation with less than 35% of the resources and comparable energy efficiency.
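Contrast stretching, the case-study kernel named above, is a purely data-parallel per-pixel rescaling, which is why it maps naturally onto any of the three processors. A pure-Python reference version is sketched below for clarity; the pixel values and output range are illustrative.

```python
# Reference sketch of contrast stretching: linearly rescale pixel
# values so they span the full output range. Each output pixel
# depends only on its input pixel plus two global extrema, making
# the loop trivially parallel (GPU kernel, CPU threads, or FPGA pipeline).

def contrast_stretch(pixels, out_min=0, out_max=255):
    """Linearly rescale pixel values to span [out_min, out_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round((p - lo) * scale) + out_min for p in pixels]

print(contrast_stretch([50, 100, 150, 200]))  # -> [0, 85, 170, 255]
```

The min/max pass is a reduction and the rescaling pass is a map; how those two phases are partitioned across the CPU, GPU, and FPGA is exactly the kind of decision the paper's instantiation parameters would control.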

Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation

Procedia PDF Downloads 118
706 NFTs, between Opportunities and Absence of Legislation: A Study on the Effect of the Rulings of the OpenSea Case

Authors: Andrea Ando

Abstract:

The development of the blockchain has been a major innovation in the technology field. It opened the door to the creation of novel cyberassets and currencies. In more recent times, non-fungible tokens (NFTs) have started to be at the centre of media attention. Their popularity has been increasing since 2021, and they represent the latest development in the world of distributed ledger technologies and cryptocurrencies. It seems more and more likely that NFTs will play an important role in our online interactions; they are indeed increasingly taking part in the arts and technology sectors. Their impact on society and the market is still very difficult to define, but it is very likely that they will mark a turning point in the world of digital assets. There are some examples of their peculiar behaviour and effect in our contemporary tech market: the former CEO of the social media site Twitter sold an NFT of his first tweet for around £2.1 million ($2.5 million), and the National Basketball Association has created a platform to sell unique moments and memorabilia from the history of basketball through non-fungible token technology. Their growth, as might be imagined, paved the way for civil disputes, mostly regarding their position under the current intellectual property law of each jurisdiction. In April 2022, the High Court of England and Wales ruled in the OpenSea case that non-fungible tokens can be considered property. The judge, indeed, concluded that the cryptoasset had all the indicia of property under common law (National Provincial Bank v. Ainsworth). The research has demonstrated that the ruling of the High Court does not provide enough answers to the dilemma of whether minting an NFT is a violation of intellectual property and/or property rights. Indeed, if, on the one hand, the technology follows the framework set by the case law (e.g., the four criteria of Ainsworth), on the other hand, the question that arises is what is effectively protected and owned by both the creator and the purchaser. A further question is whether a person has ownership of the cryptographic code, which is indeed definable, identifiable, intangible, distinct, and possessed of a degree of permanence, or of what is attached to this blockchain, hence even a physical object or piece of art. Indeed, a simple code would not have any financial importance if it were not attached to something that is widely recognised as valuable. This was demonstrated first through an analysis of the expectations of intellectual property law. Then, after having laid the foundation, the paper examined the OpenSea case, and finally, it analysed whether the expectations were met or not.

Keywords: technology, technology law, digital law, cryptoassets, NFTs, NFT, property law, intellectual property law, copyright law

Procedia PDF Downloads 89
705 A Corpus-Based Analysis of "MeToo" Discourse in South Korea: Coverage Representation in Korean Newspapers

Authors: Sun-Hee Lee, Amanda Kraley

Abstract:

The “MeToo” movement is a social movement against sexual abuse and harassment. Though the hashtag went viral in 2017 following different cultural flashpoints in different countries, the initial response was quiet in South Korea. This radically changed in January 2018, when a high-ranking senior prosecutor, Seo Ji-hyun, gave a televised interview discussing being sexually assaulted by a colleague. Acknowledging public anger, particularly among women, at the long-existing problems of sexual harassment and abuse, the South Korean media have focused on several high-profile cases. Analyzing the media representation of these cases is a window into the evolving South Korean discourse around “MeToo.” This study presents a linguistic analysis of “MeToo” discourse in South Korea utilizing a corpus-based approach. The term corpus (pl. corpora) refers to electronic language data, that is, any collection of recorded instances of spoken or written language. A “MeToo” corpus was collected for this analysis by extracting newspaper articles containing the keyword “MeToo” from BIGKinds, a big-data analysis service, and Nexis Uni, an online academic database search engine. The corpus analysis explores how Korean media represent accusers and the accused, victims and perpetrators. The extracted data include 5,885 articles from four broadsheet newspapers (Chosun, JoongAng, Hangyore, and Kyunghyang) and 88 articles from two Korea-based English newspapers (Korea Times and Korea Herald) between January 2017 and November 2020. The analysis combines basic keyword-frequency and network analysis with refined examinations of select corpus samples through naming strategies, semantic relations, and pragmatic properties. Along with the exponential increase in the number of articles containing the keyword “MeToo,” from 104 articles in 2017 to 3,546 articles in 2018, the network and keyword analysis highlights ‘US,’ ‘Harvey Weinstein,’ and ‘Hollywood’ as keywords for 2017, with articles in 2018 highlighting ‘Seo Ji-hyun,’ ‘politics,’ ‘President Moon,’ ‘An Ui-jeong,’ ‘Lee Yoon-taek’ (the names of perpetrators), and ‘(Korean) society.’ This outcome demonstrates the shift of media focus from international affairs to domestic cases. Another crucial finding is that the word ‘defamation’ is widely distributed in the “MeToo” corpus. This relates to the South Korean legal system, in which a person who defames another by publicly alleging information detrimental to their reputation, whether factual or fabricated, is punishable by law (Article 307 of the Criminal Act of Korea). If the defamation occurs on the internet, it is subject to aggravated punishment under the Act on Promotion of Information and Communications Network Utilization and Information Protection. These laws, in particular, have been used against accusers who have publicly come forward in the wake of “MeToo” in South Korea, adding an extra dimension of risk. This corpus analysis of “MeToo” newspaper articles contributes to the analysis of the media representation of the “MeToo” movement and sheds light on the shifting landscape of gender relations in the public sphere in South Korea.
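The keyword-frequency step in such a corpus analysis can be sketched in a few lines: count how often terms occur per year across article titles, then compare years. The four-article mini-corpus below is invented for illustration and is not from the study's data.

```python
# Sketch of per-year keyword-frequency counting over a corpus.
# The mini-corpus is invented; a real analysis would load thousands
# of articles exported from BIGKinds or Nexis Uni.
from collections import Counter

articles = [
    (2017, "Harvey Weinstein scandal rocks Hollywood"),
    (2017, "MeToo hashtag spreads in US"),
    (2018, "Seo Ji-hyun interview ignites MeToo in Korean society"),
    (2018, "MeToo and politics: President Moon responds"),
]

by_year = {}
for year, title in articles:
    by_year.setdefault(year, Counter()).update(title.lower().split())

# Frequency of "metoo" per year in this toy corpus:
print(by_year[2017]["metoo"], by_year[2018]["metoo"])  # -> 1 2
```

Real corpus work would tokenize Korean text with a morphological analyzer rather than whitespace splitting, but the counting logic that produces year-over-year keyword shifts is the same.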

Keywords: corpus linguistics, MeToo, newspapers, South Korea

Procedia PDF Downloads 223
704 Assessment of the State of Hygiene in a Tunisian Hospital Kitchen: Interest of Mycological and Parasitological Samples from Food Handlers and Environment

Authors: Bouchekoua Myriam, Aloui Dorsaf, Trabelsi Sonia

Abstract:

Introduction: Food hygiene in hospitals is important, particularly for patients who may be more vulnerable than healthy subjects to microbiological and nutritional risks. The consumption of contaminated food may be responsible for foodborne diseases, which can be severe among hospitalized patients, especially the immunocompromised. The aim of our study was to assess the state of hygiene in the internal catering department of a Tunisian hospital. Methodology and major results: A prospective study was conducted for one year in the Parasitology-Mycology laboratory of Charles Nicolle Hospital. Samples were taken from the kitchen staff, worktops, and cooking utensils used in the internal catering department. Thirty-one employees underwent stool examinations and scotch-tape tests in order to evaluate the degree of parasitic infestation. 35% of stool exams were positive. Protozoa were the only parasites detected. Blastocystis sp. was the species most frequently found, detected in nine food handlers; its role as a human pathogen is still controversial. Pathogenic protozoa were detected in two food handlers (Giardia intestinalis in one person and Dientamoeba fragilis in the other). Non-pathogenic protozoa were found in two cases; among them, only one had digestive symptoms, without a statistically significant association with the carriage of intestinal parasites. Moreover, samples were taken from the hands of the staff in order to search for fungal carriage. Thus, 25 employees (81%) were colonized by fungi, including molds. Besides, mycological examination of food handlers with suspected dermatomycosis, performed for diagnostic confirmation, concluded foot onychomycosis in 32% of cases and interdigital intertrigo in 26%; only one person had hand onychomycosis. Among the 17 samples taken from worktops and kitchen utensils, fungal contamination was detected at 13 sites. Hot and cold equipment were the most contaminated. Molds were mainly identified as belonging to five different genera, with Cladosporium sp. predominant. Conclusion: In view of the prevalence of intestinal parasites among food handlers, the intensity of fungal hand carriage among these employees, and the high level of fungal contamination of worktops and kitchen utensils, a reinforcement of hygiene measures is essential in order to minimize the risk of food contamination.

Keywords: hospital kitchen, environment, intestinal parasitosis, fungal carriage, fungal contamination

Procedia PDF Downloads 116
703 Financial Policies in the Process of Global Crisis: Case Study Kosovo

Authors: Shpetim Rezniqi

Abstract:

The current crisis has swept the world, affecting above all the most developed countries, those which produce most of the world's gross product and enjoy a high standard of living. Even non-experts can describe the visible consequences of the crisis, but how far this crisis will go is impossible to predict. Even the biggest experts can only conjecture, with large divergence among them, but they agree on one thing: the devastating effects of this crisis will be more severe than ever before and cannot be predicted. For a long time, the world was dominated by the economic theory of free market laws, with the belief that the market is the regulator of all economic problems. The market, like river water, will flow to find the best course and will find the best necessary solution. Hence fewer market barriers, less state intervention, and a self-regulating market economy. The free market economy became the model of global economic development and progress; it transcended national barriers and became the law of the development of the entire world economy. Globalization and global market freedom were the principles of development and international cooperation. All international organizations, such as the World Bank, and the economically powerful states laid their development and cooperation principles on the free market economy and the elimination of state intervention. The less state intervention, the more freedom of action: this was the leading international principle. We live in an era of financial tragedy. Financial markets, and banking in particular, are in a dire state: US stock markets fell about 40%; in other words, this has been one of the darkest moments since 1920. 
Ahead of it in rank one can place only the collapse of the Wall Street stock market in 1929, the technological collapse of 2000, the crisis of 1973 after the Yom Kippur war, when the price of oil quadrupled, and the famous collapse of 1937/'38, when Europe was entering World War II. In 2000, even though it seemed like the end of the world was around the corner, the world economy survived almost intact. Of course, there were small recessions in the United States, Europe, and Japan. The situation was much more difficult in the crises of the '30s and '70s; however, the world pulled through. The recent financial crisis, by contrast, has all the signs of being much sharper and of carrying more consequences. The decline in stock prices is more a byproduct of what is really happening. Financial markets began their dance of death with the credit crisis, which came as a result of the large increase in real estate prices and household debt. It is these last two phenomena that can be matched very well with the excesses of the '20s, a period during which people spent as if there were no tomorrow. The word recession is no longer avoided; it is now a plain fact. But the more the financial markets melt, the greater is the risk of a problematic economy for years to come. Thus, for example, the banking crisis in Japan proved to be much more severe than initially expected, partly because the assets on which most loans were based, especially land, kept falling in value. The price of land in Japan has continued to fall for about 15 years. (Adri Nurellari, published in the newspaper "Classifieds".) At this moment, it is still difficult to assess to what extent the crisis has affected the economy and what the consequences of the crisis will be. What we know is that many banks will need more time before they reduce the granting of credit; but since granting credit is banks' primary function, this means huge losses.

Keywords: globalisation, finance, crisis, recommendation, bank, credits

Procedia PDF Downloads 389
702 A Systematic Review on Factors/Predictors and Outcomes of Parental Distress in Childhood Acute Lymphoblastic Leukemia

Authors: Ana Ferraz, Martim Santos, M. Graça Pereira

Abstract:

Distress among parents of children with acute lymphoblastic leukemia (ALL) is common during treatment and can persist several years post-diagnosis, impacting the adjustment of children and of parents themselves. Up-to-date evidence is needed to examine the scope and nature of parental distress in childhood ALL. This review focused on associated variables, predictors, and outcomes of parental distress following their child's ALL diagnosis. PubMed, Web of Science, and PsycINFO databases were searched for English and Spanish papers published from 1983 to 2021. The PRISMA statement was followed, and papers were evaluated with a standardized methodological quality assessment tool (NHLBI). Of the 28 papers included, 16 were evaluated as fair, eight as good, and four as poor. Regarding results, 11 papers reported subgroup differences, and 15 found potential predictors of parental distress, including sociodemographic, psychosocial, psychological, family, health, and ALL-specific variables. Significant correlations were found between parental distress, social support, illness cognitions, and resilience, as well as contradictory results regarding the impact of sociodemographic variables on parental distress. Family cohesion and caregiver burden were associated with distress, and the use of healthy coping strategies was associated with less anxiety. Caregiver strain contributed to distress, and the overall impact of illness positively predicted anxiety in mothers and somatization in fathers. Differences in parental distress were found regarding risk group, time since diagnosis, and treatment phases. Thirteen papers explored the consequences of parental distress on psychological, family, health, and social/educational outcomes. Parental distress was the most important predictor of family strain. Significant correlations were found between parental distress at diagnosis and the further psychological adjustment of parents themselves and their children.
Most papers reported correlations between parental distress and children's adjustment and quality of life, although a few studies reported no association. Correlations between maternal depression and children's participation in education and social life were also found. Longitudinal studies are needed to better understand parental distress and its consequences, on health outcomes in particular. Future interventions should focus on distress reduction and psychological adjustment in both parents and children over time.

Keywords: childhood acute lymphoblastic leukemia, family, parental distress, psychological adjustment, quality of life

Procedia PDF Downloads 108
701 Time and Energy Saving Kitchen Layout

Authors: Poonam Magu, Kumud Khanna, Premavathy Seetharaman

Abstract:

The two important resources of any worker performing any type of work at any workplace are time and energy. These are important inputs of the worker and need to be utilised in the best possible manner. The kitchen is an important workplace where the homemaker performs many essential activities, and its layout should be designed so that optimum use of her resources can be achieved. Ideally, the shape of the kitchen, as determined by the physical space enclosed by the four walls, can be square, rectangular or irregular. But it is the shape of the arrangement of the counter that one normally refers to when talking of the layout of the kitchen. The arrangement can be along a single wall, along two opposite walls, L-shaped, U-shaped or even an island. A study was conducted in 50 kitchens belonging to middle-income group families. These were DDA-built kitchens located in North, South, East and West Delhi. The study was conducted in three phases. In the first phase, 510 non-working homemakers were interviewed. Data related to the personal characteristics of the homemakers were collected, along with additional information about the kitchens: their size, shape, etc. The homemakers were also questioned about various aspects of meal preparation: the people performing the task, the number of items cooked, the areas used for meal preparation, etc. In the second phase, a suitable technique was designed for conducting a time and motion study in the kitchen while a meal was being prepared. This technique was called the Path Process Chart. The final phase was carried out in 50 kitchens. The criterion for selection was that all items for a meal should be cooked at the same time. All the meals were cooked by the homemakers in their own kitchens, and the meal preparation was studied using the Path Process Chart technique. The data collected were analysed and conclusions drawn.
It was found that, of all the shapes, the kitchen with an L-shaped arrangement was the one in which, on average, a homemaker spent the minimum time on meal preparation and also travelled the minimum distance. The average distance travelled in an L-shaped layout was 131.1 m, compared with 181.2 m in a U-shaped layout. Similarly, the average time spent on meal preparation was 48 minutes in an L-shaped layout, compared with 53 minutes in a U-shaped one. The L-shaped layout was thus more time- and energy-saving than the U-shaped layout.

Keywords: kitchen layout, meal preparation, path process chart technique, workplace

Procedia PDF Downloads 206
700 Code Mixing and Code-Switching Patterns in Kannada-English Bilingual Children and Adults Who Stutter

Authors: Vasupradaa Manivannan, Santosh Maruthy

Abstract:

Background/Aims: Preliminary evidence suggests that code-switching and code-mixing may act as voluntary coping behaviors to avoid stuttering characteristics in children and adults; however, less is known about the types and patterns of code-mixing (CM) and code-switching (CS), and it is not known how these differ between children and adults who stutter. This study aimed to identify and compare the CM and CS patterns of Kannada-English bilingual children and adults who stutter. Method: A standard group comparison was made between five children who stutter (CWS) in the age range of 9-13 years and five adults who stutter (AWS) in the age range of 20-25 years. Participants proficient in Kannada (first language, L1) and English (second language, L2) were considered for the study. Both groups were given two tasks: a) a general conversation (GC) with 10 random questions, and b) a narration task (NAR) (a story or a general topic, for example, a memorable life event) in three different conditions: Mono Kannada (MK), Mono English (ME), and Bilingual (BIL). The children and adults were assessed online (via Zoom sessions) with a high-quality internet connection. Audio and video samples of the full assessment session were auto-recorded and manually transcribed. The recorded samples were analyzed for the percentage of dysfluencies using SSI-4, and the CM and CS exhibited by each participant were analyzed using the Matrix Language Frame (MLF) model parameters. The obtained data were analyzed using the Statistical Package for the Social Sciences (SPSS) software (Version 20.0). Results: Mean, median, and standard deviation values were obtained for the percentage of dysfluencies (%SS) and the frequency of CM and CS in Kannada-English bilingual children and adults who stutter for the various parameters obtained through the MLF model.
The inferential results indicated that %SS varied significantly between populations (AWS vs CWS), languages (L1 vs L2), and tasks (GC vs NAR), but not across free (BIL) and bound (MK, ME) conditions. It was also found that the frequency of CM and CS patterns varies between CWS and AWS. The AWS had a lower %SS but greater use of CS patterns than CWS, which may reflect their more developed coping skills. Language mixing patterns were observed more in L1 than in L2, and this was significant for most of the MLF parameters. However, %SS was significantly higher (p<0.05) in L2 than in L1. The CM and CS patterns were more frequent in conditions 1 and 3 than in condition 2, which may be due to higher proficiency in L2 than in L1. Conclusion: The findings highlight the importance of assessing CM and CS behaviors, their patterns, and the frequency of CM and CS between CWS and AWS on MLF parameters in two different tasks across three conditions. The results help us to understand CM and CS strategies in bilingual persons who stutter.

Keywords: bilinguals, code mixing, code switching, stuttering

Procedia PDF Downloads 78
699 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' of cognitive tasks, while consecutive interpreting (CI) does not require sharing processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demand and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech and original speech texts, with a total running-word count of 321,960. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with a high cognitive load, our findings show that CI may tax cognitive resources differently, and possibly more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize sequences from the source text into the output in different ways, each minimizing the cognitive load in its own manner. We interpret these results within the framework that cognitive demand is exerted on both the maintaining and the coordinating components of working memory.
On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI allows the interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, and they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
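Two of the lexical measures named above, type-token ratio and lexical density, are straightforward to compute from a tokenized text. A minimal sketch follows; the tiny function-word list is an illustrative assumption, not the study's actual wordlist or pipeline.

```python
# Minimal sketch (not the authors' pipeline): type-token ratio and a
# rough lexical density for a tokenized text. The function-word list
# below is an illustrative subset, not the study's wordlist.

FUNCTION_WORDS = {
    "the", "a", "an", "of", "to", "in", "and", "is", "that", "it",
}

def type_token_ratio(tokens):
    """Number of distinct word forms divided by total tokens."""
    return len(set(tokens)) / len(tokens)

def lexical_density(tokens):
    """Share of tokens that are content (non-function) words."""
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content) / len(tokens)

tokens = "the interpreter keeps a small chunk of the message in focus".split()
print(round(type_token_ratio(tokens), 3))
print(round(lexical_density(tokens), 3))
```

A lower type-token ratio and lower lexical density in one interpreting mode would, on this view, indicate a more simplified output.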

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 178
698 Central Line Stock and Use Audit in Adult Patients: A Quality Improvement Project on Central Venous Catheter Standardisation Across Hospital Departments

Authors: Gregor Moncrieff, Ursula Bahlmann

Abstract:

A number of incident reports were filed from the intensive care unit regarding adult patients admitted following operations who had had a central venous catheter of the incorrect length for the relevant anatomical site inserted whilst in theatre, or a catheter not compatible with pressurised injection. Incorrect catheter length can lead to a variety of complications, and pressurised injection is a requirement for contrast-enhanced computerised tomography scans. This led to several patients undergoing a repeat procedure to insert a catheter of the correct length that was also compatible with pressurised injection. This project aimed to identify the types of central venous catheters used in theatres and to ensure that the correct equipment would be stocked and used in future cases, in accordance with existing Association of Anaesthetists of Great Britain and Ireland guidelines. A questionnaire was sent to the whole anaesthetic department in our hospital to determine which types of central venous catheters anaesthetists preferred to use and why. We also explored any concerns regarding the introduction of standardised, pressure-injectable central venous catheters to the theatre department; these were already in use in other parts of the hospital and in keeping with national guidance. A total of 56 responses were collected. 64% of respondents routinely used a central venous catheter significantly shorter than national guidance recommends, with a further 4 different types of central venous catheters in use that differed from those in other areas of the hospital and were not pressure-injectable. 75% of respondents agreed to the standardised introduction of pressure-injectable catheters of the recommended length in accordance with national guidance. The reasons why 25% of respondents were opposed to the introduction of these catheters were explored and discussed.
We successfully introduced the standardised central catheters to the theatre department following a presentation at the local anaesthetic quality and safety meeting. The reasons against introduction of the catheters were discussed, and a compromise was reached: the existing catheters would continue to be stocked but would be available only on request, with a focus on encouraging use of the standardised catheters. Additional changes achieved included removing redundant catheters from the theatre stock. Data are being collected on an ongoing basis to analyse positive and negative feedback from use of the introduced catheters.

Keywords: central venous catheter, medical equipment, medical safety, quality improvement

Procedia PDF Downloads 117
697 Cadmium Accumulation and Depuration Characteristics through Food Source of Cage-Cultivated Fish after Accidental Pollution in Longjiang River

Authors: Qianli Ma, Xuemin Zhao, Lingai Yao, Zhencheng Xu, Li Wang

Abstract:

Heavy metal pollution accidents, which have happened frequently in the last decade in China, severely threaten aquatic ecosystems and the economy. In January 2012, a basin-scale accidental Cd pollution event occurred in the Longjiang River in southwest China. Although water quality was recovered within a short period by emergency treatment with flocculants, a large amount of contaminated cage-cultivated fish remained, leaving the task of preventing or mitigating Cd contamination of the fish. In this study, unpolluted Ctenopharyngodon idellus were fed Cd-contaminated macrophytes to assess Cd accumulation through food exposure, and contaminated C. idellus were fed Cd-free macrophytes to assess the ability for Cd depuration. The on-site cultivation experiments were conducted at two sites in the Longjiang River: Lalang (S1, where the accidental Cd pollution originated) and Sancha (S2, where a large amount of flocculants was added to accelerate Cd precipitation). Results showed that the Cd content in fish muscle presented an increasing trend in the accumulation experiment. At S1, the Cd content of fish muscle rose sharply from day 8 to day 18, with higher average Cd content in macrophytes and sediment, and remained in the range of 0.208-0.308 mg/kg afterward. At S2, the Cd content of fish muscle rose gradually throughout the experiment and reached a maximum level of 0.285 mg/kg on day 76. The results of the depuration experiment showed that the Cd content in fish muscle decreased, with significant changes observed in the first half of the experiment. Meanwhile, fish with lower initial Cd content presented a higher elimination constant. At S1, the Cd content of fish decreased significantly from 0.713 to 0.304 mg/kg in 18 days and kept decreasing to 0.110 mg/kg by the end, so that 84.6% of the Cd content was eliminated. At S2, there was a sharp decrease in the Cd content of fish over days 0-8, from 0.355 mg/kg to 0.069 mg/kg. The total elimination percentage was 93.8%, 80.6% of which occurred in days 0-8.
The elimination constant of fish at S2 was 0.03, higher than the 0.02 at S1. Collectively, our results showed that Cd can be absorbed through food exposure and accumulate in fish muscle, and that the Cd accumulated in fish muscle can be excreted once the fish are isolated from the polluted food sources. This knowledge allows managers to assess the health risk of Cd-contaminated fish and minimize aquaculture losses when considering fish cultivation after accidental pollution.
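Elimination constants of this kind are commonly estimated from a first-order depuration model, C(t) = C0·exp(−k·t). The sketch below illustrates that calculation using the S2 depuration figures quoted above; the single-exponential model is an illustrative assumption, not the authors' stated fitting procedure.

```python
import math

# Minimal sketch: estimating a first-order elimination constant k from
# two tissue-concentration measurements, assuming C(t) = C0 * exp(-k*t).
# The single-exponential model is an assumption for illustration; the
# inputs are the S2 depuration values quoted in the abstract.

def elimination_constant(c0, ct, days):
    """k (per day) such that ct = c0 * exp(-k * days)."""
    return math.log(c0 / ct) / days

def residual_fraction(k, days):
    """Fraction of the initial burden remaining after `days` days."""
    return math.exp(-k * days)

k = elimination_constant(0.355, 0.069, 8)      # S2, days 0-8, mg/kg
print(round(k, 3))                             # early-phase constant
print(round(1 - residual_fraction(k, 8), 3))   # fraction eliminated by day 8
```

Note that the early-phase constant computed this way is much larger than the whole-period constants (0.02-0.03) reported above, consistent with the fast initial depuration described in the abstract.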

Keywords: accidental pollution, cadmium accumulation and depuration, cage-cultivated fish, environmental management, river

Procedia PDF Downloads 253
696 Non Destructive Ultrasound Testing for the Determination of Elastic Characteristics of AlSi7Zn3Cu2Mg Foundry Alloy

Authors: A. Hakem, Y. Bouafia

Abstract:

Characterization of the materials used for various mechanical components is of great importance in their design. Several studies have been conducted by various authors in order to improve their physical and/or chemical properties in general, and their mechanical or metallurgical properties in particular, although the results proposed by different authors can sometimes be contradictory. The foundry alloy AlSi7Zn3Cu2Mg is one of the main materials used in components for various mechanisms in industrial applications and projects, and obtaining a reliable product is not an easy task. Due to their high mechanical characteristics, these alloys are widely used in engineering. Silicon improves casting properties, and magnesium allows heat treatment. It is thus possible to obtain various degrees of hardening and therefore an interesting compromise between tensile strength and yield strength, on the one hand, and elongation, on the other. These mechanical characteristics can be further enhanced by a series of mechanical or heat treatments. Thanks to their light weight coupled with high mechanical characteristics, aluminium alloys are much used in the car and aircraft industries. The present study is focused on the influence of heat treatments, which cause significant microstructural changes (usually hardening), with annealing temperatures varied in increments of 10°C and 20°C, on the evolution of the main elastic characteristics, the resistance, the ductility and the structural characteristics of the AlSi7Zn3Cu2Mg foundry alloy cast in sand by gravity. The elastic properties are determined in three directions for each specimen of dimensions 200×150×20 mm³ by the ultrasonic method, based on acoustic or elastic waves. The hardness, the microhardness and the structural characteristics are evaluated by a non-destructive method. The aim of this work is to study the hardening ability of the AlSi7Zn3Cu2Mg alloy by considering ten states.
To improve on the mechanical properties obtained with the raw casting, heat treatment for structural hardening should be used; the addition of magnesium is necessary to increase the sensitivity to this specific heat treatment. The treatment starts with homogenization, which generates a diffusion of atoms into a substitutional solid solution, inside a hardening furnace at 500°C for 8 h, followed immediately by quenching in water at room temperature (20 to 25°C), then ageing for 17 h at room temperature and annealing at different temperatures (150, 160, 170, 180, 190, 200, 220 and 240°C) for 20 h in an annealing oven. The specimens were then allowed to cool inside the oven.
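The ultrasonic method mentioned above derives the elastic constants of an isotropic solid from the measured longitudinal and transverse wave velocities and the density, using standard textbook relations. A minimal sketch follows; the numeric inputs are illustrative aluminium-alloy-like values, not measurements from this study.

```python
# Minimal sketch of the ultrasonic determination of elastic constants
# for an isotropic solid from longitudinal (vl) and transverse (vt)
# wave velocities and density rho, using the standard relations:
#   G  = rho * vt^2
#   nu = (vl^2 - 2*vt^2) / (2*(vl^2 - vt^2))
#   E  = 2 * G * (1 + nu)
# The inputs below are illustrative values, not data from the study.

def elastic_constants(rho, vl, vt):
    """Return (E, G, nu): Young's modulus, shear modulus, Poisson's ratio."""
    g = rho * vt ** 2
    nu = (vl ** 2 - 2 * vt ** 2) / (2 * (vl ** 2 - vt ** 2))
    e = 2 * g * (1 + nu)
    return e, g, nu

# rho in kg/m^3, velocities in m/s (typical aluminium-alloy magnitudes).
e, g, nu = elastic_constants(2700.0, 6320.0, 3130.0)
print(round(e / 1e9, 1), "GPa")   # Young's modulus, ~70 GPa for Al alloys
print(round(nu, 3))               # Poisson's ratio
```

Measuring the velocities along three directions of each specimen, as the authors describe, allows the elastic constants to be checked for anisotropy.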

Keywords: aluminum, foundry alloy, magnesium, mechanical characteristics, silicon

Procedia PDF Downloads 264
695 Particle Deflection in a PDMS Microchannel Caused by a Plane Travelling Surface Acoustic Wave

Authors: Florian Keipert, Hagen Schmidt

Abstract:

The size-selective separation of different species in a microfluidic system is a topical task in biological and medical research. Former works dealt with the utilisation of the acoustic radiation force (ARF) caused by a plane travelling surface acoustic wave (tSAW). In the literature, the ARF is described by a dimensionless parameter κ, depending on the wavelength and the particle diameter. To our knowledge, research has been done for values 0.2 < κ < 5.8, showing that the ARF dominates the acoustic streaming force (ASF) for κ > 1.2. As a consequence, particle separation is limited by κ. In addition, the dependence on the electrical power level has been examined, but only for κ > 1, pointing out an increased particle deflection for higher electrical power levels. Nevertheless, a detailed study of the ASF and ARF, especially for κ < 1, is still missing. In our setup, we used a tSAW with a wavelength λ = 90 µm and 3 µm PS particles, corresponding to κ = 0.3. With this, the influence of the applied electrical power level on the particle deflection in a polydimethylsiloxane (PDMS) microchannel was investigated. Our results show an increased particle deflection for an increased electrical power level, which coincides with the reported results for κ > 1. Therefore, in contrast to the literature, particle separation is also possible for lower κ values, and the experimental setup can generally be simplified by matching the electrical power level to the specific particle size. Furthermore, this raises the question of whether this particle deflection is caused only by the ARF, as assumed so far, or by the ASF, or by the sum of both forces. To investigate this, a 0%-24% saline solution was used, and thus the mismatch between the compressibility of the PS particles and the working fluid could be changed. It is therefore possible to change the relative strength between the ARF and the ASF and consequently the particle deflection.
We observed a decrease in the particle deflection for increased NaCl content up to a 12% saline solution and, subsequently, an increase in the particle deflection. Our observation can be explained by the acoustic contrast factor Φ, which depends on the compressibility mismatch. The compressibility of water is changed by the NaCl, and the range of a 0%-24% saline solution covers the PS particle compressibility. Hence the particle deflection reaches a minimum value when the compressibilities of the PS particles and the saline solution coincide. This minimum value can be estimated as the particle deflection caused by the ASF alone. Knowing the particle deflection due to the ASF, the particle deflection caused by the ARF can be calculated, and thus finally the relation between both forces. Concluding, the particle deflection, and therefore size-selective particle separation generated by a tSAW, can be achieved for values κ < 1, simplifying existing setups by adjusting the electrical power level. Beyond this, we studied for the first time the relative strength between the ARF and the ASF to characterise the particle deflection in a microchannel.
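The dependence of the radiation force on the compressibility mismatch can be illustrated with the classic monopole/dipole acoustic contrast factor for a small compressible sphere, Φ = (5r − 2)/(2r + 1) − k, with r the particle/fluid density ratio and k the compressibility ratio (up to a conventional prefactor of 1/3). This is a standard textbook expression, not the authors' model, and the material values below are typical literature numbers, not data from this study.

```python
# Illustrative sketch (not the authors' model): the classic acoustic
# contrast factor Phi = (5*r - 2)/(2*r + 1) - k for a small compressible
# sphere, with r = rho_p/rho_f and k = kappa_p/kappa_f. Material values
# are typical literature numbers, not measurements from the study.

def contrast_factor(rho_p, rho_f, kappa_p, kappa_f):
    r = rho_p / rho_f
    k = kappa_p / kappa_f
    return (5 * r - 2) / (2 * r + 1) - k

# Polystyrene in pure water (SI units: kg/m^3 and 1/Pa).
phi_water = contrast_factor(1050.0, 998.0, 2.16e-10, 4.48e-10)
print(round(phi_water, 3))  # positive contrast in plain water

# Raising the NaCl content lowers the fluid compressibility toward that
# of PS, shrinking the compressibility term and hence the contrast.
phi_matched = contrast_factor(1050.0, 1090.0, 2.16e-10, 2.16e-10)
print(round(phi_matched, 3))
```

The sign change between the two cases mirrors the deflection minimum reported above: near the compressibility match, the radiation-force contribution nearly vanishes, leaving mainly the streaming-driven deflection.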

Keywords: ARF, ASF, particle separation, saline solution, tSAW

Procedia PDF Downloads 258
694 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning for each person. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, data mining improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require given the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks are used for predicting diseases and analyzing medical images. Clustering algorithms such as k-means grouped patients, and association rule mining identified connections between treatments and patient responses. Time series techniques aided resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment.
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions and treat patients more efficiently. It comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
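Of the methods named above, k-means is the simplest to illustrate. Below is a minimal, dependency-free sketch of one-dimensional k-means on invented heart-rate-like values; the data, the initial centroids, and k = 2 are all illustrative assumptions, not the project's dataset or pipeline.

```python
# Minimal 1-D k-means sketch (illustrative; not the project's pipeline).
# Alternates assignment and centroid-update steps until the centroids
# stop moving, grouping scalar measurements into k clusters.

def kmeans_1d(values, centroids, iters=100):
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for v in values:
            idx = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Update step: each centroid moves to its cluster's mean.
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Invented resting heart rates with two plausible groups (~62 and ~95 bpm).
rates = [58, 60, 62, 64, 66, 90, 92, 95, 98, 100]
centroids, clusters = kmeans_1d(rates, centroids=[60.0, 100.0])
print(centroids)  # final cluster means
print(clusters)   # the two patient groups
```

The same alternation of assignment and update generalizes directly to multi-dimensional patient records, which is the setting the abstract describes.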

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 76
693 A Local Tensor Clustering Algorithm to Annotate Uncharacterized Genes with Many Biological Networks

Authors: Paul Shize Li, Frank Alber

Abstract:

A fundamental task of clinical genomics is to unravel the functions of genes and their associations with disorders. Although experimental biology has made efforts to discover and elucidate the molecular mechanisms of individual genes over the past decades, still about 40% of human genes have unknown functions, not to mention the diseases they may be related to. For biologists who are interested in a particular gene with unknown functions, a powerful computational method tailored to inferring the functions and disease relevance of uncharacterized genes is strongly needed. Studies have shown that genes strongly linked to each other in multiple biological networks are more likely to have similar functions. This indicates that the densely connected subgraphs in multiple biological networks are useful for the functional and phenotypic annotation of uncharacterized genes. Therefore, in this work, we have developed an integrative network approach to identify frequent local clusters, defined as those densely connected subgraphs that occur frequently in multiple biological networks and contain the query gene, i.e., a gene with few or no disease or function annotations. This is a local clustering algorithm that models multiple biological networks sharing the same gene set as a three-dimensional matrix, the so-called tensor, and employs a tensor-based optimization method to efficiently find the frequent local clusters. Specifically, massive public gene expression data sets that comprehensively cover dynamic, physiological, and environmental conditions are used to generate hundreds of gene co-expression networks. By integrating these gene co-expression networks, for a given uncharacterized gene of interest to a biologist, the proposed method can be applied to identify the frequent local clusters that contain this uncharacterized gene. Finally, those frequent local clusters are used for the function and disease annotation of this uncharacterized gene.
This local tensor clustering algorithm outperformed a competing tensor-based algorithm in both module discovery and running time. We also demonstrated the use of the proposed method on real data comprising hundreds of gene co-expression networks and showed that it can comprehensively characterize the query gene. This study therefore provides a new tool for annotating uncharacterized genes and has great potential to assist clinical genomic diagnostics.
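As a rough illustration of the tensor framing (and emphatically not the paper's optimization method), the sketch below stacks several small networks over a shared gene set and keeps the neighbors that co-occur with a query gene in most networks. All gene names and edges are invented for the example.

```python
# Rough illustration of the multi-network framing (not the paper's
# algorithm): stack per-network edge sets over a shared gene list and
# keep neighbors that co-occur with the query gene in at least
# `min_nets` networks. Gene names and edges here are invented.

GENES = ["Q", "A", "B", "C", "D"]  # "Q" stands in for the query gene

# Three toy co-expression networks as edge sets over the shared genes.
NETWORKS = [
    {("Q", "A"), ("Q", "B"), ("A", "B"), ("C", "D")},
    {("Q", "A"), ("Q", "B"), ("B", "C")},
    {("Q", "A"), ("A", "D"), ("Q", "C")},
]

def frequent_neighbors(query, networks, min_nets):
    """Genes linked to `query` in at least `min_nets` of the networks."""
    counts = {}
    for edges in networks:
        for u, v in edges:
            if query in (u, v):
                other = v if u == query else u
                counts[other] = counts.get(other, 0) + 1
    return sorted(g for g, n in counts.items() if n >= min_nets)

print(frequent_neighbors("Q", NETWORKS, min_nets=2))
```

The genes retained this way approximate the "frequent local cluster" around the query gene; annotating the query then amounts to borrowing the known functions of these recurring neighbors.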

Keywords: local tensor clustering, query gene, gene co-expression network, gene annotation

Procedia PDF Downloads 168
692 Assessing the Lifestyle Factors, Nutritional and Socioeconomic Status Associated with Peptic Ulcer Disease: A Cross-Sectional Study among Patients at the Tema General Hospital of Ghana

Authors: Marina Aferiba Tandoh, Elsie Odei

Abstract:

Peptic ulcer disease (PUD) is among the commonest gastrointestinal problems requiring emergency treatment in order to preserve life. The prevalence of PUD is increasing within the Ghanaian population, heightening the need to identify factors associated with its occurrence. This cross-sectional study assessed the nutritional status and the socioeconomic and lifestyle factors associated with PUD among patients attending the Out-Patient Department of the Tema General Hospital of Ghana. A food frequency questionnaire and a three-day, 24-hour recall were used to assess the dietary intakes of study participants. A standardized questionnaire was used to obtain information on the participants' socio-demographic characteristics, lifestyle, and medical history. The data were analyzed using SPSS version 22. The mean age of study participants was 32.8±15.41 years. There were significantly more females (61.4%) than males (38.6%) (p < 0.001). All participants had received some form of education, with tertiary education being the most common (52.6%). The majority managed their condition with medications only (86%), while 10.5% managed it with a combination of medications and diet; the rest used dietary counseling only (1.8%) or surgery only (1.8%). Herbal medicines were used by 29.3%, either made at home (7.2%) or bought from a medical store (10.8%). A substantial proportion of participants (42.1%) had experienced a recurrence of the disease. For those who had experienced recurrences, these happened when they ate acidic foods (1.8%), ate bigger portions (1.8%), starved themselves (1.8%), or were stressed (1.8%); others had triggers when they took certain medications (1.8%) or ate too much pepper (1.8%). About 49% of the participants were either overweight or obese with a recurrence of PUD (p>0.05), and obese patients had the highest rate of PUD recurrence (41%). Drinking alcohol was significantly associated with the recurrence of PUD (χ²=5.243, p=0.026).
Other lifestyle factors, such as marijuana smoking, fasting, and use of herbal medicines and NSAIDs, did not have any significant association with disease recurrence. There was no significant correlation between the various dietary patterns and anthropometric parameters, except for dietary pattern one (salty snacks, regular soft drinks, milk, sweetened yogurt, ice cream, and cooked vegetables), which correlated positively with weight (p = 0.002) and BMI (p = 0.038). PUD patients should pursue weight reduction and reduced alcohol intake as measures to control recurrence of the disease. Nutrition education in this population should be promoted to minimize the recurrence of PUD.
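The alcohol–recurrence association above (χ² = 5.243, p = 0.026) is a standard chi-squared test of independence on a 2×2 contingency table. A minimal stdlib sketch, using hypothetical counts since the abstract reports only the statistic, not the underlying table:

```python
# Chi-squared test of independence on a 2x2 table (hypothetical counts;
# the abstract reports only the statistic, not the underlying table).

def chi_squared_2x2(table):
    """Return the chi-squared statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical: rows = drinks alcohol (yes/no), cols = PUD recurrence (yes/no)
stat = chi_squared_2x2([(15, 10), (9, 23)])
print(round(stat, 3))
```

The statistic is then compared against the chi-squared distribution with one degree of freedom to obtain the p-value, as SPSS does internally.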

Keywords: dietary patterns, lifestyle factors, nutritional status, peptic ulcer disease

Procedia PDF Downloads 81
691 Predicting Aggregation Propensity from Low-Temperature Conformational Fluctuations

Authors: Hamza Javar Magnier, Robin Curtis

Abstract:

There have been rapid advances in the upstream processing of protein therapeutics, which have shifted the bottleneck to downstream purification and formulation. Finding liquid formulations with shelf lives of up to two years is increasingly difficult for some of the newer therapeutics, which have been engineered for activity but whose formulations are often viscous, can phase separate, and have a high propensity for irreversible aggregation. We explore means to develop improved predictive ability from a better understanding of how protein-protein interactions depend on formulation conditions (pH, ionic strength, buffer type, presence of excipients) and how these impact the initial steps in protein self-association and aggregation. In this work, we study the initial steps in the aggregation pathways using a minimal protein model based on square-well potentials and discontinuous molecular dynamics. The effect of model parameters, including the range of interaction, stiffness, chain length, and chain sequence, implies that protein models fold according to various pathways. By reducing the range of interactions, the folding and collapse transitions come together, and the model follows a single-step folding pathway from the denatured to the native state. After parameterizing the model interaction parameters, we developed an understanding of low-temperature conformational properties and fluctuations and their correlation to the folding transition of proteins in isolation. The model fluctuations increase with temperature. We observe a low-temperature point below which large fluctuations are frozen out. This implies that fluctuations at low temperature can be correlated to the folding transition at the melting temperature. Because proteins “breathe” at low temperatures, defining a native state as a single structure with conserved contacts and a fixed three-dimensional structure is misleading. 
Rather, we introduce a new definition of a native-state ensemble based on our understanding of core conservation, which takes into account the native fluctuations at low temperatures. This approach permits the study of the large range of length and time scales needed to link the molecular interactions to the macroscopically observed behaviour. In addition, the models studied are parameterized by fitting to experimentally observed protein-protein interactions characterized in terms of osmotic second virial coefficients.
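The square-well pair potential underlying such minimal models has a simple form: a hard core at contact distance σ and an attractive well of depth ε extending to λσ. A sketch with illustrative parameter values (the abstract reports no specific numbers):

```python
# Square-well pair potential used in minimal protein models: hard core at
# sigma, attractive well of depth epsilon out to lambda*sigma. Parameter
# values here are illustrative, not taken from the study.
import math

def square_well(r, sigma=1.0, lam=1.5, epsilon=1.0):
    """Energy of two beads a distance r apart under a square-well potential."""
    if r < sigma:
        return math.inf      # hard-core overlap: forbidden
    if r < lam * sigma:
        return -epsilon      # inside the attractive well
    return 0.0               # beyond the interaction range

# Narrowing the well (smaller lam) shortens the interaction range, which
# the abstract links to the folding and collapse transitions merging.
print(square_well(1.2), square_well(2.0))
```

Because the potential is piecewise constant, forces act only at the discontinuities, which is what makes event-driven (discontinuous) molecular dynamics applicable.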

Keywords: protein folding, native-ensemble, conformational fluctuation, aggregation

Procedia PDF Downloads 361
690 An Experimental Exploration of the Interaction between Consumer Ethics Perceptions, Legality Evaluations, and Mind-Sets

Authors: Daphne Sobolev, Niklas Voege

Abstract:

During the last three decades, consumer ethics perceptions have attracted the attention of a large number of researchers. Nevertheless, little is known about the effect of the cognitive and situational contexts of the decision on ethics judgments. In this paper, the interrelationship between consumers’ ethics perceptions, legality evaluations, and mind-sets is explored. Legality evaluations represent the cognitive context of the ethical judgments, whereas mind-sets represent their situational context. Drawing on moral development theories and priming theories, it is hypothesized that both factors are significantly related to consumer ethics perceptions. To test this hypothesis, 289 participants were allocated to three mind-set experimental conditions and a control group. Participants in the mind-set conditions were primed for aggressiveness, politeness, or awareness of the negative legal consequences of breaking the law. Mind-sets were induced using a sentence-unscrambling task in which target words were embedded. Ethics and legality judgments were assessed using consumer ethics and internet ethics questionnaires. All participants were asked to rate the ethicality and legality of consumer actions described in the questionnaires. The results showed that consumer ethics and legality perceptions were significantly correlated. Moreover, including legality evaluations as a variable in ethics judgment models increased the predictive power of the models. In addition, inducing aggressiveness in participants reduced their sensitivity to ethical issues; priming awareness of negative legal consequences increased their sensitivity to ethics when uncertainty about the legality of the judged scenario was high. Furthermore, the correlation between ethics and legality judgments was significant across all mind-set conditions. 
However, the results revealed conflicts between ethics and legality perceptions: consumers considered 10%-14% of the presented behaviors unethical but legal, or ethical but illegal. In 10%-23% of the questions, participants indicated that they did not know whether the described action was legal or not. In addition, an asymmetry between the effects of aggressiveness and politeness priming was found. The results show that legality judgments and mind-sets interact with consumer ethics perceptions. Thus, they portray consumer ethical judgments as dynamic processes that are inseparable from other cognitive processes and situational variables. They highlight that legal and ethical education, as well as adequate situational cues at the place of service, could have a positive effect on consumer ethics perceptions. Theoretical contributions are discussed.
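The reported ethics–legality correlation is a standard Pearson correlation across rated scenarios. A minimal stdlib sketch with hypothetical ratings (the abstract reports only that the correlation was significant):

```python
# Pearson correlation between ethics and legality ratings. The rating
# vectors below are hypothetical illustrations, not study data.
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two rating vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ethics = [2, 3, 5, 6, 7, 4]    # hypothetical unethicality ratings per scenario
legality = [1, 3, 4, 6, 7, 5]  # hypothetical illegality ratings, same scenarios
print(round(pearson_r(ethics, legality), 3))
```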

Keywords: consumer ethics, legality judgments, mind-set, priming, aggressiveness

Procedia PDF Downloads 297
689 Investigating Suicide Cases in Attica, Greece: Insight from an Autopsy-Based Study

Authors: Ioannis N. Sergentanis, Stavroula Papadodima, Maria Tsellou, Dimitrios Vlachodimitropoulos, Sotirios Athanaselis, Chara Spiliopoulou

Abstract:

Introduction: The aim of this study is to investigate the characteristics of suicide, as documented in autopsies performed during a five-year interval in the greater area of Attica, including the city of Athens. This could reveal possible protective or aggravating factors for suicide risk during a period strongly associated with the Greek debt crisis. Materials and Methods: Data were obtained by registering suicide cases among autopsies performed in the Forensic Medicine and Toxicology Department, School of Medicine, National and Kapodistrian University of Athens, Greece, from January 2011 to December 2015. Anonymity and medical confidentiality were respected. A series of demographic and social factors, in addition to specific characteristics of the suicides, were entered into a specially established pre-coded database. These factors include social data as well as psychiatric background and certain autopsy characteristics. Data analysis was performed using descriptive statistics and Fisher’s exact test. The software used was STATA/SE 13 (Stata Corp., College Station, TX, USA). Results: A total of 162 cases were studied, 128 men and 34 women. Age ranged from 14 to 97 years, with an average of 53 years, presenting two peaks around 40 and 60 years. Fifty-six percent of cases were single, divorced, or widowed; 25% of cases occurred during the weekend, and 66% occurred in the home. A predominance of hanging as the leading method of suicide (41.4%), followed by jumping from a height (22.8%) and firearms (19.1%), was noted. Statistical analysis showed an association between suicide method and gender (P < 0.001, Fisher’s exact test); specifically, no woman used a firearm, while only one man used a medication overdose (versus four women). Discussion: Greece has historically been one of the countries with the lowest suicide rates in Europe. 
Given a possible change in suicide trends during the financial crisis, further research seems necessary in order to establish risk factors. According to our study, suicide is most frequent among unmarried men and most often occurs in the home. Gender seems to be a factor affecting the method of suicide. These results are in accordance with the international literature. The stronger-than-expected predominance of male suicide may be associated with failure to live up to social and family expectations for financial reasons.
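The method-by-gender association (P < 0.001, Fisher's exact test) can be sketched for a single 2×2 contrast using only the standard library. The counts below are hypothetical, derived loosely from the abstract's figures (about 31 firearm cases, all men), not the study's actual table:

```python
# Two-sided Fisher's exact test for a 2x2 table (pure stdlib). The counts
# are hypothetical illustrations, not the study's actual cross-tabulation.
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """P-value: sum of probabilities of all tables at least as extreme
    (i.e., with hypergeometric probability <= the observed table's)."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def prob(x):  # hypergeometric probability of x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical: rows = gender (men/women), cols = used a firearm (yes/no)
print(fisher_exact_2x2(31, 97, 0, 34))
```

Fisher's exact test is preferred over chi-squared here because several method-by-gender cells (e.g., zero women using firearms) are too sparse for the chi-squared approximation.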

Keywords: autopsy, Greece, risk factors, suicide

Procedia PDF Downloads 220
688 Children’s Perception of Conversational Agents and Their Attention When Learning from Dialogic TV

Authors: Katherine Karayianis

Abstract:

Children with Attention Deficit Hyperactivity Disorder (ADHD) have trouble learning in traditional classrooms. These children miss out on important developmental opportunities in school, which leads to challenges starting in early childhood that persist throughout their adult lives. Despite receiving supplemental support in school, children with ADHD still perform below their non-ADHD peers. Thus, there is a great need to find better ways of facilitating learning in children with ADHD. Evidence has shown that children with ADHD learn best through interactive engagement, but this is not always possible in schools, given classroom constraints and large student-to-teacher ratios. Redesigning classrooms may not be feasible, so informal learning opportunities provide a possible alternative. One popular informal learning opportunity is educational TV shows like Sesame Street. These shows can teach children foundational skills taught in pre-K and early elementary school. One downside is the lack of interactive dialogue between the TV characters and the child viewers. Pseudo-interaction is often deployed, but its benefits are limited if the characters can neither understand nor contingently respond to the child. AI technology has become highly advanced and is now common in many electronic devices that both children and adults can access. AI has been successfully used to create interactive dialogue in children’s educational TV shows, and results show that this enhances children’s learning and engagement, especially when children perceive the character as a reliable teacher. Children with ADHD, whose minds may otherwise wander, may especially benefit from this type of interactive technology, to an extent that may depend on their perception of the animated dialogic agent. 
To investigate this issue, I have begun examining the moderating role of inattention in children’s learning from an educational TV show with different types of dialogic interactions. Preliminary results have shown that when character interactions are neither immediate nor accurate, children who are more easily distracted have greater difficulty learning from the show, but contingent interactions with a TV character seem to buffer these negative effects of distractibility by keeping the child engaged. To extend this line of work, the child’s perception of the dialogic agent as a reliable teacher will be examined as a moderator of the association between children’s attention and the type of dialogic interaction in the TV show. As such, the current study will investigate this moderated moderation.

Keywords: attention, dialogic TV, informal learning, educational TV, perception of teacher

Procedia PDF Downloads 84
687 An Examination of the Moderating Effect of Team Identification on Attitude and Buying Intention of Jersey Sponsorship

Authors: Young Ik Suh, Taewook Chung, Glaucio Scremin, Tywan Martin

Abstract:

In May 2016, the Philadelphia 76ers announced that StubHub, the ticket resale company, would advertise on the team’s jerseys beginning in the 2017-18 season. The 76ers and the National Basketball Association (NBA) thus became the first team and league to embrace jersey sponsorship among the four major U.S. professional sports. Even though many professional teams and leagues in Europe, Asia, Africa, and South America have actively adopted jersey sponsorship, the phenomenon is relatively new in America. While jersey sponsorship provides economic gains for professional leagues and franchises, sport fans can view the phenomenon quite differently. For instance, since many sport fans in the U.S. are not familiar with ads on jerseys, the move could provoke negative reactions, such as decreases in ticket and merchandise sales. Fans may also be concerned that small jersey ads will grow into larger ones, as in the English Premier League (EPL). However, some sport fans seem not to mind jersey sponsorship, because ads on a jersey do not affect their loyalty and fanship. Therefore, this study assumed that sport fans’ reactions to jersey sponsorship can differ, especially across levels of team identification and sizes of ads on the jersey. Unlike general sponsorship in the sport industry, jersey sponsorship has received little attention regarding its potential impact on sport fans’ attitudes and buying intentions. Thus, the current study sought to identify how various levels of team identification influence brand attitude and buying intention with respect to jersey sponsorship. In particular, this study examined the effect of team identification on brand attitude and buying intention when there are no ads, small ads, or large ads on the jersey. 
A 3 (ad size: large, small, none) × 3 (team identification: high, moderate, low) between-subjects factorial design was conducted on attitude toward the brand and buying intention regarding jersey sponsorship. Ads on the Philadelphia 76ers jersey were used as stimuli. The sample for this study was drawn from message board users of different sports websites (i.e., forums.realgm.com and phillysportscentral.com). A total of 275 respondents participated by completing an online survey questionnaire. The results showed significant differences between fans with high identification and fans with low identification. The findings of this study are expected to make theoretical and practical contributions by extending the research and literature on the relationship between team identification and brand strategy.

Keywords: brand attitude, buying intention, jersey sponsorship, team identification

Procedia PDF Downloads 248
686 Tumour-Associated Tissue Eosinophilia as a Prognosticator in Oral Squamous Cell Carcinoma

Authors: Karen Boaz, C. R. Charan

Abstract:

Background: The infiltration of tumour stroma by eosinophils, Tumour-Associated Tissue Eosinophilia (TATE), is known to modulate the progression of Oral Squamous Cell Carcinoma (OSCC). Eosinophils have direct tumoricidal activity through the release of cytotoxic proteins; indirectly, they enhance permeability into tumour cells, enabling penetration of tumoricidal cytokines. Eosinophils may also promote tumour angiogenesis by producing several angiogenic factors. Identification of eosinophils in the inflammatory stroma has proven to be an important prognosticator in cancers of the mouth, oesophagus, larynx, pharynx, breast, lung, and intestine. Therefore, the study aimed to correlate TATE with clinical and histopathological variables and blood eosinophil counts to assess the role of TATE as a prognosticator in OSCC. Methods: Seventy-two biopsy-proven cases of OSCC formed the study cohort. Blood eosinophil counts and TNM stage were obtained from the medical records. Tissue sections (5 µm thick) were stained with Haematoxylin and Eosin. The eosinophils were quantified at the invasive tumour front (ITF) in 10 high-power fields (40× magnification) with an ocular grid. Bryne’s grading of the ITF was also performed. A subset of thirty cases was also assessed for the association of TATE with recurrence and involvement of lymph nodes and surgical margins. Results: 1) No statistically significant correlation was found between TATE and TNM stage, blood eosinophil counts, or most parameters of Bryne’s grading system. 2) Intense TATE was significantly associated with the absence of distant metastasis, increased lympho-plasmacytic response, and increased survival (disease-free and overall) of OSCC patients. 3) In the subset of 30 cases, tissue eosinophil counts were higher in cases with lymph node involvement, decreased survival, uninvolved margins, and no recurrence. 
Conclusion: While the role of eosinophils in mediating immune responses seems ambiguous, as eosinophils support cell-mediated tumour immunity in early stages while inhibiting it in advanced stages, TATE may be used as a surrogate marker for determining prognosis in oral squamous cell carcinoma.

Keywords: tumour-associated tissue eosinophilia, oral squamous cell carcinoma, prognosticator, tumoral immunity

Procedia PDF Downloads 250
685 Influence of Pretreatment Magnetic Resonance Imaging on Local Therapy Decisions in Intermediate-Risk Prostate Cancer Patients

Authors: Christian Skowronski, Andrew Shanholtzer, Brent Yelton, Muayad Almahariq, Daniel J. Krauss

Abstract:

Prostate cancer has the third highest incidence rate and is the second leading cause of cancer death for men in the United States. Of the diagnostic tools available for intermediate-risk prostate cancer, magnetic resonance imaging (MRI) provides superior soft tissue delineation, serving as a valuable tool for both diagnosis and treatment planning. Currently, there are minimal data regarding the practical utility of MRI in the evaluation of intermediate-risk prostate cancer. As such, the National Comprehensive Cancer Network’s (NCCN) guidelines list MRI as optional in intermediate-risk prostate cancer evaluation. This project aims to elucidate whether MRI affects radiation treatment decisions for intermediate-risk prostate cancer. This was a retrospective study evaluating 210 patients with intermediate-risk prostate cancer treated with definitive radiotherapy at our institution between 2019 and 2020. NCCN risk stratification criteria were used to define intermediate-risk prostate cancer. Patients were divided into two groups: those with a pretreatment prostate MRI and those without. We compared the use of external beam radiotherapy, brachytherapy alone, brachytherapy boost, and androgen deprivation therapy between the two groups. Inverse probability of treatment weighting was used to match the two groups for age, comorbidity index, American Urological Association symptom index, pretreatment PSA, grade group, and percent core involvement on prostate biopsy. Wilcoxon rank-sum and chi-squared tests were used to compare continuous and categorical variables. Of the patients who met the study’s eligibility criteria, 133 had a prostate MRI and 77 did not. Following propensity matching, there were no differences in baseline characteristics between the two groups. 
There were no statistically significant differences in treatments pursued between the two groups: 42% vs. 47% were treated with brachytherapy alone, 40% vs. 42% with external beam radiotherapy alone, 18% vs. 12% with external beam radiotherapy plus a brachytherapy boost, and 24% vs. 17% received androgen deprivation therapy in the non-MRI and MRI groups, respectively. This analysis suggests that pretreatment MRI does not significantly impact radiation therapy or androgen deprivation therapy decisions in patients with intermediate-risk prostate cancer. A pretreatment prostate MRI should be ordered judiciously and pursued only to answer a specific question whose answer is likely to impact the treatment decision. Further follow-up is needed to correlate MRI findings with their impact on specific oncologic outcomes.
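The inverse probability of treatment weighting used to balance the MRI and non-MRI groups reduces to a simple weight formula: 1/p for treated patients and 1/(1-p) for untreated, where p is the propensity score. A minimal sketch in which the propensity scores are given directly; in practice they would come from a logistic regression on the baseline covariates listed above:

```python
# Inverse probability of treatment weighting (IPTW). Propensity scores are
# supplied directly here for illustration; in the study they would be
# estimated from baseline covariates (age, comorbidity index, PSA, etc.).

def iptw_weights(treated, propensity):
    """Weight = 1/p for treated patients, 1/(1-p) for untreated."""
    return [1.0 / p if t else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

# Hypothetical patients: 1 = had pretreatment MRI, 0 = did not
treated = [1, 1, 0, 0]
propensity = [0.8, 0.5, 0.5, 0.2]
print(iptw_weights(treated, propensity))
```

Weighting each patient this way creates a pseudo-population in which the measured covariates are balanced across the two groups, so treatment comparisons are less confounded by them.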

Keywords: magnetic resonance imaging, prostate cancer, definitive radiotherapy, Gleason score 7

Procedia PDF Downloads 89