Search results for: common bile duct exploration
457 Active Development of Tacit Knowledge: Knowledge Management, High Impact Practices and Experiential Learning
Authors: John Zanetich
Abstract:
Due to their positive associations with student learning and retention, certain undergraduate opportunities are designated ‘high-impact.’ High-Impact Practices (HIPs), such as learning communities, community-based projects, research, internships, study abroad, and culminating senior experiences, share several traits in common: they demand considerable time and effort; learning occurs outside the classroom; they require meaningful interactions between faculty and students; they encourage collaboration with diverse others; and they provide frequent and substantive feedback. As a result of the experiential learning embedded in these practices, participation can be life-changing. High-impact learning helps individuals locate tacit knowledge and build mental models that support the accumulation of knowledge. Ongoing learning from experience and knowledge conversion provides the individual with a way to implicitly organize and share knowledge over a lifetime. Knowledge conversion is a knowledge management component which focuses on the explication of the tacit knowledge that exists in the minds of students and the knowledge embedded in the processes and relationships of the classroom educational experience. Knowledge conversion is required when working with tacit knowledge and when a learner must align deeply held beliefs with the cognitive dissonance created by new information. Knowledge conversion and tacit knowledge result from the fact that an individual's way of knowing, that is, their core belief structure, is generalized and tacit rather than explicit and specific. As a phenomenon, tacit knowledge is not readily available to the learner for explicit description unless evoked by an external source.
The development of knowledge-related capabilities such as Active Development of Tacit Knowledge (ADTK) can be used in experiential educational programs to enhance knowledge, foster behavioral change, and improve decision making and overall performance. ADTK allows students in HIPs to use their existing knowledge to evaluate and, where necessary, modify their core construct of reality in order to assimilate new information. Based on the Lewin/Schein change theory, learners reach for tacit knowledge as a stabilizing mechanism when challenged by new information that puts them slightly off balance. As in word-association drills, the important concept is the first thought: the reactionary response to an experience is the programmed, tacit memory and knowledge of the core belief structure. ADTK is a way to help teachers design their own methods and activities to unfreeze, create new learning, and then refreeze the core constructs upon which future learning in a subject area is built. This paper explores the use of ADTK as a technique for knowledge conversion in the classroom in general and in HIP programs specifically. It focuses on knowledge conversion in curriculum development and proposes the use of one-time educational experiences, multi-session experiences, and sequential program experiences focusing on tacit knowledge in educational programs.
Keywords: tacit knowledge, knowledge management, college programs, experiential learning
Procedia PDF Downloads 263
456 The Role of Structural Poverty in the Know-How and Moral Economy of Doctors in Africa: An Anthropological Perspective
Authors: Isabelle Gobatto
Abstract:
Based on an anthropological approach, this paper explores the medical profession and the construction of medical practices by considering the multiform articulations between structural poverty and the production of care in Burkina Faso, a low-resource francophone West African country. This country is considered exemplary of the culturally differentiated countries of the African continent that share the same situation of structural poverty. The objective is to expose the effects of structural poverty on the ways of constructing professional knowledge and of thinking about the sense of the medical profession. Doctors in Southern and Western countries are trained to have the same capacities, namely to treat and save lives whatever the cultural context in which medicine is practiced; yet the ways they invest their role and deal with this context of action fracture the homogeneity of the medical profession. In line with the anthropology of biomedicine, this paper outlines the complex effects of structural poverty on health care, care relations, and the moral economy of doctors. The materials analyzed are drawn from an ethnography spanning two periods thirty years apart (1990-1994 and 2020-2021), based on long-term observation of care practices in healthcare institutions and interviews coupled with physicians' life histories. The findings reveal that the obstacles doctors face in delivering care are interpreted as policy gaps, but physicians also consider them constitutive of patients' social and cultural characteristics, which shape patients' capacity or incapacity to accompany caregivers in the production of care. These perceptions have effects on know-how, which becomes structured around the need to act even when diagnoses have not been made, so as not to see patients desert health structures when the costs of care are too high for them.
But these interpretations, by strongly individualizing these difficulties, place part of the blame on patients for the difficulty of applying learned knowledge and delivering effective care. These situations challenge the ethics of caregivers, but also of ethnologists. Firstly, because these interpretations prevent caregivers from considering the vulnerabilities of care as a common condition shared with their patients in these health systems, affecting both in identical ways though at different places in the production of care. Correlatively, these results underline that these professional conceptions prevent the emergence of a figure of the victim that could be shared between patients and caregivers, who together endure working and care conditions at the limit of the acceptable. This dimension directly involves politics. Secondly, structural poverty and its effects on care challenge the ethics of the anthropologist who observes caregivers producing, without intent to harm, experiences of care marked by ordinary violence, by not giving patients the care they need. It is worth asking how anthropologists could get doctors to think in this light in West African societies.
Keywords: Africa, care, ethics, poverty
Procedia PDF Downloads 69
455 The Incidence of Inferior Alveolar Nerve Dysfunction Following Bilateral Sagittal Split Osteotomies: A Single Centre Retrospective Audit in the United Kingdom
Authors: Krupali Mukeshkumar, Jinesh Shah
Abstract:
Background: Bilateral Sagittal Split Osteotomy (BSSO), used for the correction of mandibular deformities, is a common oral and maxillofacial surgical procedure. Inferior alveolar nerve dysfunction is commonly reported post-operatively by patients as paresthesia or anesthesia. The current literature lacks a consensus on the incidence of inferior alveolar nerve dysfunction, as patients are not routinely assessed pre- and post-operatively with an objective assessment. The reported incidence ranges from 9% to 85% of patients, with some authors arguing that 100% of patients experience nerve dysfunction immediately post-surgery. Systematic reviews have shown a difference between incidence rates at different follow-up periods using objective and subjective methods. Aim: To identify the incidence of inferior alveolar nerve dysfunction following BSSO. Gold standard: Nerve dysfunction incidence rates similar to or lower than the current literature: 83% at day one post-operatively and 18.4% at one-year follow-up. Setting: A retrospective cross-sectional audit of patients treated between 2017 and 2019 at the Royal Stoke University Hospital, Maxillofacial and Orthodontic departments. Sample: All patients who underwent a BSSO (with or without Le Fort I osteotomy) between 2017 and 2019 were identified from the database. Patients with pre-existing neurosensory disturbance, those who had a genioplasty at the same time, and those with no follow-up were excluded. The sample consisted of 121 patients, 37 males and 84 females, aged 17-50 years at the time of surgery. Methods: Clinical records of the 121 cases were reviewed to assess the age, sex, type of mandibular osteotomy, status of the nerve during the surgical procedure, type of bony split, and incidence of nerve dysfunction at follow-up appointments. The surgical procedure was carried out by three maxillofacial surgeons, and follow-up appointments were carried out in the Orthodontic and Oral and Maxillofacial departments.
Results: 120 patients were treated to correct a mandibular deformity and 1 patient was treated for sleep apnoea. 17 patients had a mandibular setback and 104 patients had a mandibular advancement. 68 patients reported inferior alveolar nerve dysfunction at one week following their surgery. 76 patients had temporary paresthesia between 2 weeks and 12 months post-surgery. 13 patients had persistent nerve dysfunction at 12 months, of which 1 had a bad bony split during the BSSO. The incidence of nerve dysfunction post-operatively was 6.6% after 1 day, 56.1% at 1 week, 62.8% at 2 weeks, 59.5% between 3-6 weeks, 43.0% between 8-16 weeks and 10.7% at 1 year. Conclusions: The results of this audit show a similar incidence rate to the research gold standard at the one-year follow-up. Future Recommendations: No changes to surgical procedure or technique are indicated, but improved documentation and a standardized approach to the assessment of post-operative nerve dysfunction would be beneficial.
Keywords: bilateral sagittal split osteotomy, inferior alveolar nerve, mandible, nerve dysfunction
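The incidence figures above are simple proportions of the 121-patient sample. As a quick illustration (counts taken from the abstract; the denominator is assumed to be the full sample at every follow-up, which may not hold exactly), they can be recomputed as:

```python
def incidence_pct(cases: int, total: int = 121) -> float:
    """Incidence as a percentage of the audited sample, one decimal place."""
    return round(100 * cases / total, 1)

# Counts reported in the abstract
print(incidence_pct(13))  # persistent dysfunction at 12 months -> 10.7
print(incidence_pct(68))  # dysfunction reported at one week
```

Note that 68/121 gives 56.2%, marginally different from the reported 56.1%, which may reflect a slightly different denominator at that follow-up.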
Procedia PDF Downloads 239
454 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ
Authors: Lalita, Niladri Sarkar, Subhasis Ghosh
Abstract:
Since the discovery of high-temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal-state resistivity over a wide range of temperature, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is the signature of a strongly correlated metallic state, known as a “strange metal”, attributed to non-Fermi liquid (NFL) physics. The proximity of superconductivity to LITR suggests that there may be a common underlying origin. The LITR has been attributed to an unknown dissipative phenomenon, bounded by quantum mechanics and commonly known as “Planckian dissipation”, a term first coined by Zaanen; the associated inelastic scattering time τ is given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. Several striking issues remain to be resolved if we are to find, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) Universality of α ~ 1: recently, doubts have been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if proximity to quantum criticality is important, then Planckian dissipation should also be observed in optimally doped and marginally underdoped cuprates; the link between Planckian dissipation and quantum criticality remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high-temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, with x = 0.16 (optimally doped) and x = 0.145 (marginally underdoped), have been used for this investigation.
It is found that steady-state photo-excitation converts magnetic Cu2+ ions to nonmagnetic Cu1+ ions, which reduces the superconducting transition temperature (Tc) by suppressing the superfluid density. In Bi-2223, one would expect the maximum suppression of Tc at the charge-transfer gap; we observe that the suppression of Tc sets in at 2 eV, which is the charge-transfer gap in Bi-2223. We attribute this transition to Cu-3d9 (Cu2+) → Cu-3d10 (Cu+), known as the d9 − d10 L transition; photo-excitation turns some Cu ions in the CuO2 planes into spinless, non-magnetic potential perturbations, as Zn2+ does in the CuO2 plane in Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photo-excitation, and superconductivity can be destroyed completely by introducing ≈ 2% of Cu1+ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation in underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that Tc can be varied dynamically and reversibly, so that LITR and the associated Planckian dissipation can be studied over wide ranges of Tc without changing the doping chemically.
Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal
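For concreteness, the Planckian bound 1/τ = αkBT/ℏ fixes the scattering time once α and T are chosen. A minimal sketch with CODATA constant values and α = 1 assumed:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def planckian_tau(temperature_k: float, alpha: float = 1.0) -> float:
    """Inelastic scattering time tau = hbar / (alpha * kB * T), in seconds."""
    return HBAR / (alpha * KB * temperature_k)

# At 100 K with alpha = 1, the bound gives tau of order 1e-13 s
print(planckian_tau(100.0))
```

Measured values of α are extracted by comparing the experimental slope of the linear-in-T resistivity with this bound.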
Procedia PDF Downloads 62
453 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem for supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term for a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These studies analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist slurs, etc.), and thus context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious without context (i.e., covert cases) or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). As for the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
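The architectural idea, encoding each utterance separately and pooling the conversational context before classifying the target, can be sketched as follows. This is a toy illustration with random embeddings and weights, not the authors' trained model; the encoder here is a simple mean of word embeddings standing in for a learned one:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 100, 16
emb = rng.normal(size=(VOCAB, DIM))        # toy word embeddings
W = rng.normal(size=(2 * DIM,))            # toy classifier weights
b = 0.0

def encode(utterance):
    """Utterance encoder: mean of word embeddings (stand-in for a trained encoder)."""
    return emb[utterance].mean(axis=0)

def classify(target, context):
    """Hierarchical scheme: encode each context utterance separately,
    pool them, then concatenate with the target encoding."""
    ctx = np.mean([encode(u) for u in context], axis=0) if context else np.zeros(DIM)
    feats = np.concatenate([encode(target), ctx])
    score = feats @ W + b
    return 1.0 / (1.0 + np.exp(-score))    # probability of "toxic"

# Token id lists stand in for the target tweet and two preceding tweets
p = classify([3, 17, 42], context=[[5, 8], [9, 1, 4]])
print(0.0 <= p <= 1.0)  # True
```

A conversation-agnostic baseline would instead flatten target and context into one token sequence, losing the utterance boundaries this scheme preserves.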
Procedia PDF Downloads 171
452 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂
Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine
Abstract:
Rubber waste disposal is an environmental problem. In particular, much research is centered on the management of discarded tires. Despite all the different ways of handling used tires, the most common is to deposit them in a landfill, creating stocks of tires. These stocks pose a fire hazard and provide habitat for rodents, mosquitoes and other pests, causing health and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a standing technological challenge. The technique that breaks down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process in which the poly-, di-, and mono-sulfidic bonds formed during vulcanization are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization medium, because it is chemically inactive, nontoxic, nonflammable and inexpensive. Its critical point is easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing the pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin-screw extruder under diverse operating conditions, with supercritical CO₂ added in different quantities to promote the devulcanization. Temperature, screw speed and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percentage and its crosslink density, determined by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as extruder temperature and speed increase and, as expected, that the soluble fraction increases with both parameters.
The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases; the values reached were in good correlation (R = 0.96) with the soluble fraction. To determine whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. The results showed that all tests fall on the curve corresponding to sulfur bond scission, which indicates that devulcanization happened successfully, without degradation of the rubber. In the FTIR spectra, none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected because, due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer rubber (EPDM) and natural rubber (NR).
Keywords: devulcanization, recycling, rubber, waste
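Crosslink density from equilibrium swelling in toluene is commonly estimated with the Flory-Rehner equation. The abstract does not state the exact calculation used, so the sketch below is a generic version under stated assumptions: χ ≈ 0.39 (a value often quoted for natural rubber/toluene) and the molar volume of toluene ≈ 106.3 cm³/mol.

```python
import math

def flory_rehner(v_r: float, chi: float = 0.39, v1: float = 106.3) -> float:
    """Crosslink density (mol/cm^3) from the rubber volume fraction v_r in
    the swollen gel, via Flory-Rehner. chi: polymer-solvent interaction
    parameter (assumed); v1: molar volume of the solvent in cm^3/mol."""
    numerator = -(math.log(1.0 - v_r) + v_r + chi * v_r**2)
    denominator = v1 * (v_r ** (1.0 / 3.0) - v_r / 2.0)
    return numerator / denominator

# Less swelling (higher v_r) corresponds to a more tightly crosslinked network
print(flory_rehner(0.2) < flory_rehner(0.3))  # True
```

The devulcanization percentage is then typically read off a Horikx plot of relative soluble fraction against relative crosslink density.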
Procedia PDF Downloads 389
451 Impact of Material Chemistry and Morphology on Attrition Behavior of Excipients during Blending
Authors: Sri Sharath Kulkarni, Pauline Janssen, Alberto Berardi, Bastiaan Dickhoff, Sander van Gessel
Abstract:
Blending is a common process in the production of pharmaceutical dosage forms, where high shear is used to obtain a homogeneous mixture. The shear required can lead to uncontrolled attrition of excipients and can affect APIs. This has an impact on the performance of the formulation, as it can alter the structure of the mixture; it is therefore important to understand the driving mechanisms of attrition. The aim of this study was to increase the fundamental understanding of the attrition behavior of excipients. Attrition behavior was evaluated in a high-shear blender (Procept Form-8, Zele, Belgium). Twelve pure excipients were tested, with morphologies ranging from crystalline (sieved) to granulated to spray-dried (round to fibrous). The materials included lactose, microcrystalline cellulose (MCC), di-calcium phosphate (DCP), and mannitol. The rotational speed of the blender was set at 1370 rpm to obtain the highest shear, with a Froude (Fr) number of 9, and blending times of 2-10 min were used. After blending, the excipients were analyzed for changes in particle size distribution (PSD), determined (n = 3) by dry laser diffraction (Helos/KR, Sympatec, Germany). Attrition was found to be a surface phenomenon that occurs in the first minutes of the high-shear blending process; increasing the blending time beyond 2 min produced no further change in particle size distribution. Material chemistry was identified as a key driver of the differences in attrition behavior between excipients. This is mainly related to the proneness to fragmentation, which is known to be higher for materials such as DCP and mannitol than for lactose and MCC. Secondly, morphology was identified as a driver of the degree of attrition. Granular products with irregular surfaces showed the largest reduction in particle size, due to the weak solid bonds created between the primary particles during the granulation process.
Granular DCP and mannitol show a reduction of 80-90% in x₁₀ (µm), compared to a 20-30% drop for granular lactose (monohydrate and anhydrous). Apart from granular lactose, all the remaining lactose morphologies (spray-dried round, sieved tomahawk, milled) show little change in particle size. Similar observations were made for spray-dried fibrous MCC. These morphologies have few irregular or sharp surfaces and are thereby less prone to fragmentation. Products containing brittle materials such as mannitol and DCP are therefore more prone to fragmentation when exposed to shear; granular products with irregular surfaces show increased attrition, while spherical, crystalline, or fibrous morphologies are less affected by high-shear blending. These changes in size will affect functional attributes of the formulation such as flow, API homogeneity, tableting, and dust formation. It is therefore important for formulators to fully understand the excipients in order to make the right choices.
Keywords: attrition, blending, continuous manufacturing, excipients, lactose, microcrystalline cellulose, shear
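The Froude number quoted above relates centrifugal to gravitational acceleration. Under one common convention for high-shear mixers (definitions vary between authors, and the impeller radius below is an assumption for illustration, not a value from the study):

```python
import math

def froude_number(rpm: float, radius_m: float, g: float = 9.81) -> float:
    """Fr = omega^2 * r / g, with omega the angular speed in rad/s
    (one common convention for high-shear mixers)."""
    omega = rpm * 2.0 * math.pi / 60.0
    return omega**2 * radius_m / g

# Fr scales with the square of the rotational speed
print(froude_number(1370, 0.005) / froude_number(685, 0.005))  # 4.0
```

Running all materials at a fixed Fr keeps the shear regime comparable across the twelve excipients, so differences in attrition can be attributed to material chemistry and morphology rather than process intensity.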
Procedia PDF Downloads 112
450 Identification of Text Domains and Register Variation through the Analysis of Lexical Distribution in a Bangla Mass Media Text Corpus
Authors: Mahul Bhattacharyya, Niladri Sekhar Dash
Abstract:
The present research paper is an experimental attempt to investigate the nature of register variation in three major text domains, namely social, cultural, and political texts, collected from a corpus of Bangla printed mass media texts. The study uses a moderately sized corpus of Bangla mass media text containing nearly one million words, collected from different media sources such as newspapers, magazines, advertisements, and periodicals. Analysis of the corpus data reveals that each text has certain lexical properties that not only control its identity but also mark its uniqueness across domains. First, the subject domains of the texts are classified according to two parameters, namely 'Genre' and 'Text Type'. Next, empirical investigations are made to understand how the domains vary from each other in terms of lexical properties, considering both function and content words. The method of comparative-cum-contrastive matching of lexical load across domains is invoked through word frequency counts, to track how domain-specific words and terms may be marked as decisive indicators when specifying textual contexts and subject domains. The study shows that the common lexical stock that percolates across all text domains is unreliable for this purpose, as its lexicological identity has no bearing on the act of specifying subject domains. It therefore becomes necessary for language users to anchor on certain domain-specific lexical items to recognize a text as belonging to a specific text domain. The eventual findings of this study confirm that texts belonging to different subject domains in the Bangla news text corpus clearly differ on the parameters of lexical load, lexical choice, lexical clustering, and lexical collocation.
In fact, based on these parameters, along with some statistical calculations, it is possible to classify mass media texts into different types and mark the domains to which they actually belong. The advantage of this analysis lies in the proper identification of the linguistic factors at work, which gives language users better insight into the methods they employ in text comprehension and provides a systematic frame for designing text identification strategies for language learners. The availability of a large amount of Bangla media text data makes it possible to reach accurate conclusions with a fair degree of reliability and authenticity. This kind of corpus-based analysis is particularly relevant for a resource-poor language like Bangla, as no attempt has previously been made to understand how the structure and texture of Bangla mass media texts vary due to the linguistic and extra-linguistic constraints that operate on specific text domains. Since mass media language is assumed to be the most 'recent representation' of actual language use, this study is expected to show how Bangla news texts reflect the thoughts of the society and leave a strong impact on the thought processes of the speech community.
Keywords: Bangla, corpus, discourse, domains, lexical choice, mass media, register, variation
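The comparative-cum-contrastive matching of lexical load can be illustrated with a toy example: count word frequencies per domain and treat words confined to a single domain as candidate domain indicators, while words shared across domains (largely function words) are discounted. The two miniature "domains" below are invented English stand-ins for illustration; the actual study works on a ~1-million-word Bangla corpus:

```python
from collections import Counter

# Toy stand-ins for two text domains
domains = {
    "political": "the minister said the election bill passed the house".split(),
    "cultural":  "the festival opened with a dance and a song for the crowd".split(),
}

counts = {d: Counter(toks) for d, toks in domains.items()}

# Words occurring in every domain (mostly function words) carry no
# domain signal; words confined to one domain act as indicators.
shared = set(counts["political"]) & set(counts["cultural"])
indicators = {d: set(c) - shared for d, c in counts.items()}
print(sorted(indicators["political"]))
# ['bill', 'election', 'house', 'minister', 'passed', 'said']
```

On a real corpus, the raw counts in `counts` would additionally feed frequency-based statistics to rank indicators rather than just set membership.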
Procedia PDF Downloads 174
449 The Relationship between Wasting and Stunting in Young Children: A Systematic Review
Authors: Susan Thurstans, Natalie Sessions, Carmel Dolan, Kate Sadler, Bernardette Cichon, Shelia Isanaka, Dominique Roberfroid, Heather Stobagh, Patrick Webb, Tanya Khara
Abstract:
For many years, wasting and stunting have been viewed as separate conditions, without clear evidence supporting this distinction. In 2014, the Emergency Nutrition Network (ENN) examined the relationship between wasting and stunting and published a report highlighting the evidence for linkages between the two forms of undernutrition. This systematic review aimed to update the evidence generated since that 2014 report, to better understand the implications for improving child nutrition, health and survival. Following PRISMA guidelines, the review was conducted using search terms describing the relationship between wasting and stunting. Studies of children under five from low- and middle-income countries that assessed both ponderal growth/wasting and linear growth/stunting, as well as the association between the two, were included. Risk of bias was assessed in all included studies using SIGN checklists. 45 studies met the inclusion criteria: 39 peer-reviewed studies, 1 manual chapter, 3 pre-print publications and 2 published reports. The review found a strong association between the two conditions, whereby episodes of wasting contribute to stunting and, to a lesser extent, stunting leads to wasting. Possibly interconnected physiological processes and common risk factors drive an accumulation of vulnerabilities. Peak incidence of both wasting and stunting was found to be between birth and three months. A significant proportion of children experience concurrent wasting and stunting: country-level data suggest that up to 8% of children under 5 may be both wasted and stunted at the same time, and global estimates translate to around 16 million children. Children with concurrent wasting and stunting have an elevated risk of mortality compared to children with one deficit alone; these children should therefore be considered a high-risk group in the targeting of treatment.
Wasting, stunting and concurrent wasting and stunting appear to be more prevalent in boys than girls, and concurrent wasting and stunting appears to peak between 12 and 30 months of age, with younger children the most affected. Seasonal patterns in the prevalence of both wasting and stunting are seen in longitudinal and cross-sectional data; in particular, season of birth has been shown to influence a child's subsequent experience of wasting and stunting. Evidence suggests that mid-upper-arm circumference combined with weight-for-age Z-score might effectively identify the children most at risk of near-term mortality, including those concurrently wasted and stunted. Wasting and stunting frequently occur in the same child, either simultaneously or at different moments through their life course. Evidence suggests there is a process of accumulation of nutritional deficits, and therefore of risk, over the life course of a child, which demonstrates the need for a more integrated approach to prevention and treatment strategies to interrupt this process. To achieve this, undernutrition policies, programmes, financing and research must become more unified.
Keywords: concurrent wasting and stunting, review, risk factors, undernutrition
Procedia PDF Downloads 127
448 Coordinative Remote Sensing Observation Technology for a High Altitude Barrier Lake
Authors: Zhang Xin
Abstract:
Barrier lakes form when water is impounded in valleys or riverbeds after blockage by a landslide, earthquake, debris flow, or other event, and they pose great potential safety hazards. When enough water has accumulated, the dam may burst under a strong earthquake or rainstorm, and the lake water overflows, resulting in large-scale flood disasters. To ensure the safety of people's lives and property downstream, it is essential to monitor barrier lakes. However, manual monitoring of barrier lakes in high-altitude areas is difficult and time-consuming due to the harsh climate and steep terrain. With the development of Earth observation technology, remote sensing has become one of the main ways to obtain observational data. Compared with a single satellite, coordinated multi-satellite remote sensing observation has clear advantages: its spatial coverage is extensive, observation is continuous in time, imaging types and bands are abundant, it can monitor and respond quickly to emergencies, and it can complete complex monitoring tasks. Monitoring with multi-temporal, multi-platform remote sensing satellites yields a variety of observational data in time, provides key information such as the water level and water storage capacity of the barrier lake, supports a scientific judgement of the lake's condition, and enables a reasonable prediction of its future development trend. This study selects as its research area Lake Sarez, formed on February 18, 1911, in the central Pamir when a landslide, triggered by a strong earthquake of magnitude 7.4 and intensity 9, blocked the Murgab River valley. Since its formation, Lake Sarez has aroused widespread international concern about its safety. At present, mechanical methods are common in international analyses of the safety of Lake Sarez, and remote sensing methods are seldom used.
This study combines remote sensing data with field observation data and uses 'space-air-ground' joint observation technology to study the changes in water level and water storage capacity of Lake Sarez in recent decades and to evaluate its safety. A potential collapse is simulated, and the future development trend of Lake Sarez is predicted. The results show that: 1) in recent decades, the water level of Lake Sarez has not changed much and has remained stable; 2) unless there is a strong earthquake or heavy rain, Lake Sarez is unlikely to breach under normal conditions; 3) Lake Sarez will remain stable in the future, but it is necessary to establish a remote sensing early warning system in the Lake Sarez area; 4) coordinative remote sensing observation technology is feasible for a high-altitude barrier lake such as Sarez.
Keywords: coordinative observation, disaster, remote sensing, geographic information system, GIS
Procedia PDF Downloads 128
447 The Role of Building Information Modeling as a Design Teaching Method in Architecture, Engineering and Construction Schools in Brazil
Authors: Aline V. Arroteia, Gustavo G. Do Amaral, Simone Z. Kikuti, Norberto C. S. Moura, Silvio B. Melhado
Abstract:
Despite the significant advances made by the construction industry in recent years, the persistent lack of integration between the design and construction phases is still an evident and costly problem in building construction. Globally, the construction industry has sought to adopt collaborative practices through new technologies to mitigate the impacts of this fragmented process and to optimize its production. In this new technological business environment, professionals are required to develop new methodologies based on the notion of collaboration and integration of information throughout the building lifecycle. This scenario also reflects the industry's reality in developing nations, and the increasing need for overall efficiency has demanded new educational alternatives at the undergraduate and post-graduate levels. In countries like Brazil, it is commonly understood that Architecture, Engineering and Building Construction educational programs are required to review traditional design pedagogy to promote a comprehensive notion of integration and simultaneity between the phases of a project. In this context, the coherent inclusion of computational design in all segments of the educational programs of construction-related professionals represents a significant research topic that can, in fact, affect industry practice. Thus, the main objective of the present study was to comparatively measure the effectiveness of the Building Information Modeling courses offered by the University of Sao Paulo, the most important academic institution in Brazil, at the Schools of Architecture and Civil Engineering, and the courses offered in well-recognized BIM research institutions, such as the School of Design in the College of Architecture of the Georgia Institute of Technology, USA, to evaluate the dissemination of BIM knowledge among students at the postgraduate level.
The qualitative research methodology was based on the analysis of the programs and activities proposed by two BIM courses offered in each of the above-mentioned institutions, which were used as case studies. The data collection instruments were a student questionnaire, semi-structured interviews, participatory evaluation, and pedagogical practices. The results revealed broad heterogeneity among the students regarding their professional experience, hours dedicated to training, and especially their general knowledge of BIM technology and its applications. The research observed that BIM is mostly understood as an operational tool and not as a methodological approach to project development, relevant to the whole building life cycle. In its conclusion, the present research offers an assessment of the importance of incorporating BIM, efficiently and in its totality, as a teaching method in undergraduate and graduate courses in Brazilian architecture, engineering and building construction schools.
Keywords: building information modeling (BIM), BIM education, BIM process, design teaching
Procedia PDF Downloads 154
446 Prevention of Preterm Birth and Management of Uterine Contractions with Traditional Korean Medicine: Integrative Approach
Authors: Eun-Seop Kim, Eun-Ha Jang, Rana R. Kim, Sae-Byul Jang
Abstract:
Objective: Preterm labor is the most common antecedent of preterm birth (PTB); it is characterized by regular uterine contractions before 37 weeks of pregnancy and cervical change. In acute preterm labor, tocolytics are administered as the first-line medication to suppress uterine contractions but rarely delay pregnancy to 37 weeks of gestation. On the other hand, according to Traditional Korean Medicine, PTB is caused by a deficiency of Qi and unnecessary energy in the body of the mother. The aim of this study was to demonstrate the benefit of Traditional Korean Medicine as an adjuvant therapy in the management of early uterine contractions and the prevention of PTB. Methods: This is a case report of a 38-year-old woman (0-0-6-0) hospitalized for irregular uterine contractions and cervical change at 33+3/7 weeks of gestation. Her past history includes chemical pregnancies achieved by Assisted Reproductive Technology (ART), one stillbirth (at 7 weeks), and laparoscopic surgery for endometriosis. After seven trials of IVF and artificial insemination, she had succeeded in conception via in-vitro fertilization (IVF) with the help of Traditional Korean Medicine (TKM) treatments. Due to irregular uterine contractions and cervical changes, two TKM preparations were prescribed: Gami-Dangguisan and Antae-eum, known to nourish blood and clear away heat. From 31 August 2013 to 28 November 2013, 120 ml of Gami-Dangguisan was given twice a day, morning and evening, along with the same amount of Antae-eum once a day. A tocolytic (ritodrine) was administered as first aid for maintenance of the pregnancy. Information regarding progress until delivery was collected during the patient's visits. Results: On admission, a cervix 15 mm in length and a cervical os dilated 0.5 cm were observed via ultrasonography. 50% cervical effacement was also detected on physical examination. Tocolysis was temporarily maintained.
As a supportive therapy, the TKM herbal preparations (Gami-Dangguisan and Antae-eum) were concomitantly given. At 34+2/7 weeks of gestation, however, intermittent uterine contractions appeared (every 5-12 min) on cardiotocography, and vaginal bleeding was also observed at 34+3/7 weeks. Nevertheless, enhanced tocolytics and continuous administration of the herbal medicine sustained the pregnancy to term. At 37+2/7 weeks, no sign of labor, with restored cervical length, was confirmed. The woman gave birth at term to a healthy infant via vaginal delivery at 39+3/7 gestational weeks. Conclusions: This is the first reported case of a preterm labor patient administered conventional tocolytic agents together with TKM herbal decoctions in whom delivery was successfully delayed to term. This case deserves attention considering it is rare to maintain gestation to term with tocolytic intervention alone. Our report implies the potential of herbal medicine as an adjuvant therapy for preterm labor. Further studies are needed to assess the safety and efficacy of TKM herbal medicine as a therapeutic alternative for preventing preterm birth.
Keywords: preterm labor, traditional Korean medicine, herbal medicine, integrative treatment, complementary and alternative medicine
Procedia PDF Downloads 372
445 Enhancing Students' Utilization of Written Corrective Feedback through Teacher-Student Writing Conferences: A Case Study in English Writing Instruction
Authors: Tsao Jui-Jung
Abstract:
Previous research findings have shown that most students do not fully utilize the written corrective feedback provided by teachers (Stone, 2014). This common phenomenon results in the ineffective utilization of teachers' written corrective feedback. As Ellis (2010) points out, the effectiveness of written corrective feedback depends on the level of student engagement with it. Therefore, it is crucial to understand how students utilize the written corrective feedback from their teachers. Previous studies have confirmed the positive impact of teacher-student writing conferences on students' engagement in the writing process and their writing abilities (Hum, 2021; Nosratinia & Nikpanjeh, 2019; Wong, 1996; Yeh, 2016, 2019). However, due to practical constraints such as time limitations, this instructional activity is not fully utilized in writing classrooms (Alfalagg, 2020). Therefore, to address this research gap, the purpose of this study was to explore several aspects of teacher-student writing conferences: the frequency of meaning negotiation (i.e., comprehension checks, confirmation checks, and clarification checks) and of teacher scaffolding techniques (i.e., feedback, prompts, guidance, explanations, and demonstrations) in the conferences; students' self-assessment of their writing strengths and weaknesses in post-conference journals and their experiences with the conferences (i.e., interaction styles, communication levels, how teachers addressed errors, and overall perspectives on the conferences); and insights gathered from their responses to open-ended questions in the final stage of the study (i.e., their preferences and reasons for the different written corrective feedback techniques used by teachers and their perspectives and suggestions on teacher-student writing conferences).
Data collection methods included transcripts of audio recordings of teacher-student writing conferences, students' post-conference journals, and open-ended questionnaires. The participants of this study were sophomore students enrolled in an English writing course for one school year. Key research findings are as follows. Firstly, in terms of meaning negotiation, students attempted to clearly understand the corrective feedback provided by the teacher-researcher twice as often as the teacher-researcher attempted to clearly understand the students' writing content. Secondly, the most commonly used scaffolding technique in the conferences was prompting (indirect feedback). Thirdly, the majority of participants believed that teacher-student writing conferences had a positive impact on their writing abilities. Fourthly, most students preferred direct feedback from the teacher-researcher, as it directly pointed out their errors and saved them time in revision; however, some students still preferred indirect feedback, as they believed it encouraged them to think and self-correct. Based on the research findings, this study proposes effective teaching recommendations for English writing instruction aimed at optimizing teaching strategies and enhancing students' writing abilities.
Keywords: written corrective feedback, student engagement, teacher-student writing conferences, action research
Procedia PDF Downloads 79
444 Examining the Drivers of Engagement in Social Media Brand Communities
Authors: Rania S. Hussein
Abstract:
This research mainly focuses on examining engagement in social media brand communities. Engagement in social media has become a main focus in the literature, affirming that the role of social media in our daily lives is growing (Akman and Mishra, 2017; Prado-Gascó et al., 2017). Social media has also become a key medium for brand communication and for building brand relationships (Frimpong and McLean, 2018; Dimitriu and Guesalaga, 2017). Engagement on social media has become a main focus of many researchers who have tried to understand this concept further and draw a link between engagement and various social media activities (Cvijikj and Michahelles, 2013; Andre, 2015; Wang et al., 2015). According to Felix et al. (2017), the internet and social media have provided better digital resources to improve brand loyalty and customer interactions, thus leading to social media engagement within brand communities. The aim of this research is to highlight the importance of social media and why it is important to maintain engagement within social media. While the term 'engagement' is widely used in the scholarly literature, there is no common consensus about what the term exactly entails, according to Kidd (2011). On one hand, it has been seen as something that includes factors such as participation, activation, empowerment, devotion, trust, and productivity (Zhang and Benyoucef, 2016). Other scholars hold different viewpoints. For example, Lim et al. (2015) chose to break engagement down into three types: operational engagement, emotional engagement, and relational engagement. Chandler and Lusch (2015) further studied engagement as a means to measure commitment to a brand. Fernandes and Remelhe (2016) had a more technical view, measuring engagement through comments, following, subscribing, sharing, enjoying, writing, etc., in the social media context.
Customer engagement has become a research focus for understanding how consumer relationships are developed, retained, and improved within a digital context. Based on previous literature, it is evident that many customer engagement studies are limited to the interaction between firms and consumers on social media. There is a clear gap in the literature regarding consumer-to-consumer interaction and user-generated content and its significance. While some researchers, such as Alversia et al. (2016), touched upon the importance of customer-based engagement, a gap still remains: there is no consistent and well-tested method for defining the factors that affect consumer interaction. Moreover, only a few scholarly papers (Case, 2019; Riley, 2020; Habibi, 2014) have been offered to help businesses understand their customers' interaction habits as well as the best ways to develop customer loyalty. Additionally, the majority of research on brand pages has concentrated on the drivers of consumer engagement, with just a few studies, for example, Lamberton (2016), Poorrezaei (2016), and Jayasingh (2019), looking into the implications. This study focuses on understanding the concept of engagement and its importance, specifically engagement within social media brand communities. It examines drivers as well as consequences of engagement, including brand knowledge, brand trust, entertainment, and brand page interactivity. Brand engagement is also expected to affect brand loyalty and word of mouth.
Keywords: engagement, social media, brand communities, drivers
Procedia PDF Downloads 161
443 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life
Authors: Sandra Young
Abstract:
The biodiversity literature is vast and heterogeneous. In today's data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to the literature across biodiversity domains for research and forecasting purposes. Ontologies are being used increasingly to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially the problem is a conceptual one: biological taxonomies are formed on the basis of specific, physical specimens, yet nomenclatural rules are used to provide the labels that describe these physical objects, and these labels are ambiguous representations of the physical specimen. An example is the genus name Melpomene, which serves as the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders. The physical specimens for each of these are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of the taxonomic concept versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to see the conceptual plurality or singularity of the use of these species' names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate for exploring this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts).
It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation, so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one of the ways to identify concepts for ontology building because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond words themselves. In this sense it could potentially be used to identify whether the hierarchical structures present within the empirical body of literature match those that have been identified in the ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species' names, common names and more general names as classes, which will be the focus of this paper. The next step in the research focuses on a larger corpus in which specific words can be analysed and then compared with existing ontological structures covering the same material, to evaluate the methods from an alternative perspective. This research aims to provide evidence as to the validity of current methods in knowledge representation for biological entities, and also to shed light on the way scientific nomenclature is used within the literature.
Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics
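As an illustration of the collocation counts this approach relies on (a minimal sketch, not the authors' actual pipeline; the two-sentence toy corpus stands in for the biodiversity literature), counting the words that co-occur with a node word within a fixed window looks roughly like this:

```python
from collections import Counter

def collocates(tokens, node, window=2):
    """Count words co-occurring with `node` within +/- `window` tokens."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == node.lower():
            lo = max(0, i - window)
            hi = min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j].lower()] += 1
    return counts

# Toy corpus: the same genus name used for a fern and a spider.
corpus = ("the fern genus Melpomene grows in cloud forests ; "
          "the spider genus Melpomene was described earlier").split()
print(collocates(corpus, "Melpomene").most_common(3))
```

Here the shared collocate "genus" surfaces with the highest frequency, while "fern" and "spider" each appear once, hinting at the two distinct concepts hiding behind one nomenclatural label.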
Procedia PDF Downloads 138
442 Role of Empirical Evidence in Law-Making: Case Study from India
Authors: Kaushiki Sanyal, Rajesh Chakrabarti
Abstract:
In India, on average, about 60 Bills are passed every year across both Houses of Parliament, the Lok Sabha and the Rajya Sabha (calculated from information on the websites of both Houses). These are debated in both the Lok Sabha (House of the People) and the Rajya Sabha (Council of States) before they are passed. However, lawmakers rarely use empirical evidence to make a case for a law. Most of the time, they support a law on the basis of anecdote, intuition, and common sense. While these do play a role in law-making, without the necessary empirical evidence, laws often fail to achieve their desired results. The quality of legislative debates is an indicator of the efficacy of the legislative process through which a Bill is enacted. However, the study of legislative debates has not received much attention either in India or worldwide, due to the difficulty of objectively measuring the quality of a debate. Broadly, three approaches have emerged in the study of legislative debates. The rational-choice or formal approach shows that speeches vary based on different institutional arrangements, intra-party politics, and the political culture of a country. The discourse approach focuses on the underlying rules and conventions and how they impact the content of the debates. The deliberative approach posits that legislative speech can be reasoned, respectful, and informed. This paper aims to (a) develop a framework to judge the quality of debates using the deliberative approach; (b) examine the legislative debates on three Bills passed in different periods as a demonstration of the framework; and (c) examine the broader structural issues that disincentivize MPs from scrutinizing Bills. The framework includes qualitative and quantitative indicators to judge a debate. The idea is that the framework will provide useful insights into legislators' knowledge of the subject, the depth of their scrutiny of Bills, and their inclination toward evidence-based research.
The three Bills that the paper plans to examine are as follows: 1. The Narcotic Drugs and Psychotropic Substances Act, 1985: This act was passed to curb drug trafficking and abuse. However, it mostly failed to fulfill its purpose. Consequently, it was amended thrice, but without much impact on the ground. 2. The Criminal Law (Amendment) Act, 2013: This act amended the Indian Penal Code to add a section on human trafficking. The purpose was to curb trafficking and penalise traffickers, pimps, and middlemen. However, the crime rate remains high while the conviction rate is low. 3. The Surrogacy (Regulation) Act, 2021: This act bans commercial surrogacy, allowing only relatives to act as surrogates as long as there is no monetary payment. Experts fear that instead of preventing commercial surrogacy, it will drive the activity underground, and the consequences will be borne by the surrogate, who will not be protected by law. The purpose of the paper is to objectively analyse the quality of parliamentary debates, gain insights into how MPs understand evidence, and deliberate on steps to incentivise them to use empirical evidence.
Keywords: legislature, debates, empirical, India
Procedia PDF Downloads 88
441 Women's Perceptions of Zika Virus Prevention Recommendations: A Tale of Two Cities within Fortaleza, Brazil
Authors: Jeni Stolow, Lina Moses, Carl Kendall
Abstract:
Zika virus (ZIKV) reemerged as a global threat in 2015, with Brazil at its epicenter. Brazilians have a long history of combatting Aedes aegypti mosquitoes, as the species is a common vector for dengue, chikungunya, and yellow fever. As a response to the epidemic, public health authorities promoted ZIKV prevention behaviors such as mosquito bite prevention, reproductive counseling for women who are pregnant or contemplating pregnancy, pregnancy avoidance, and condom use. Most prevention efforts in Brazil focused on the mosquito vector, utilizing recycled dengue approaches without acknowledging the context in which women were able to adhere to these prevention messages. This study used qualitative methods to explore how women in Fortaleza, Brazil perceive ZIKV, the Brazilian authorities' ZIKV prevention recommendations, and the feasibility of adhering to these recommendations. A core study aim was to examine how women's perceptions of their physical, social, and natural environment affect their ability to adhere to ZIKV prevention behaviors. A Rapid Anthropological Assessment (RAA) containing observations, informational interviews, and semi-structured in-depth interviews was utilized for data collection. The study utilized Grounded Theory as the systematic inductive method of analyzing the data collected. Interviews were conducted with 35 women of reproductive age (15-39 years old) who primarily utilize the public health system. It was found that women's self-identified economic class was associated with how strongly women felt they could prevent ZIKV. All of the women interviewed technically belong to the C-class, the middle economic class. Although all were members of the same economic class, there was a divide amongst participants as to who perceived themselves as higher C-class versus lower C-class. How women saw their economic status was dictated by how they perceived their physical, social, and natural environment.
Women further associated their environment and their economic class with their likelihood of contracting ZIKV, their options for preventing ZIKV, their ability to prevent ZIKV, and their willingness to attempt to prevent ZIKV. Women's perceived economic status was found to relate to their structural environment (housing quality, sewage, and access to supplies), social environment (family and peer norms), and natural environment (wetland areas, natural mosquito breeding sites, and the cyclical nature of vectors). Findings from this study suggest that women's perceived environment and economic status shape the perceived feasibility of, and their desire to attempt, behaviors to prevent ZIKV. Although ZIKV has receded from epidemic to endemic status, it is expected that the virus will return in cyclical outbreaks like those seen with similar arboviruses such as dengue and chikungunya. As the next ZIKV epidemic approaches, it is essential to understand how women perceive themselves, their abilities, and their environments to best aid the prevention of ZIKV.
Keywords: Aedes aegypti, environment, prevention, qualitative, zika
Procedia PDF Downloads 134
440 The Role of Intraluminal Endoscopy in the Diagnosis and Treatment of Fluid Collections in Patients With Acute Pancreatitis
Authors: A. Askerov, Y. Teterin, P. Yartcev, S. Novikov
Abstract:
Introduction: Acute pancreatitis (AP) is a socially significant problem for public health and continues to be one of the most common causes of hospitalization of patients with pathology of the gastrointestinal tract. It is characterized by high mortality rates, which reach 62-65% in infected pancreatic necrosis. Aims & Methods: The study group included 63 patients who underwent transluminal drainage (TLD) of fluid collections (FC). All patients underwent transabdominal ultrasound, computed tomography of the abdominal cavity and retroperitoneal organs, and endoscopic ultrasound (EUS) of the pancreatobiliary zone. EUS was used as the final diagnostic method to determine the characteristics of the FC. The indications for TLD were: a distance between the wall of the hollow organ and the FC of not more than 1 cm, the absence of large vessels (more than 3 mm) on the puncture trajectory, and a size of the formation of more than 5 cm. When a homogeneous cavity with clear, even contours was detected, a plastic stent with rounded ends ('double pigtail') was installed. The indication for the installation of a fully covered self-expanding stent was the detection of a nonhomogeneous anechoic FC with hyperechoic inclusions and cloudy purulent contents. In patients with necrotic forms, after drainage of the purulent cavity, a cystonasal drain with a diameter of 7 Fr was installed in its lumen under X-ray control to sanitize the cavity with a 0.05% aqueous solution of chlorhexidine. Endoscopic necrectomy was performed every 24-48 hours. The plastic stent was removed 6 months, and the fully covered self-expanding stent 1 month, after the patient was discharged from the hospital. Results: Endoscopic TLD was performed in 63 patients. FC corresponding to interstitial edematous pancreatitis were detected in 39 (62%) patients, who underwent TLD with the installation of a plastic stent with rounded ends.
In 24 (38%) patients with necrotic forms of FC, a fully covered self-expanding stent was placed. Communication with the ductal system of the pancreas was found in 5 (7.9%) patients, who underwent pancreaticoduodenal stenting. A complicated postoperative course was noted in 4 (6.3%) cases, manifested by bleeding from the zone of pancreatogenic destruction. In 2 (3.1%) cases this required angiography and endovascular embolization of the gastroduodenal artery (a. gastroduodenalis); in 1 (1.6%) case, endoscopic hemostasis was performed by filling the cavity with 4 ml of Hemoblock hemostatic solution; and the combination of both methods was used in 1 (1.6%) patient. There was no evidence of recurrent bleeding in these patients. A lethal outcome occurred in 4 patients (6.3%): in 3 (4.7%) patients the cause of death was multiple organ failure, and in 1 (1.6%) it was severe nosocomial pneumonia that developed on the 32nd day after drainage. Conclusions: 1. EUS is not only the most important method for diagnosing FC in AP but also allows further tactics for their intraluminal drainage to be determined. 2. Endoscopic intraluminal drainage of fluid zones was, in 45.8% of cases, the final minimally invasive method of surgical treatment of large-focal pancreatic necrosis. Disclosure: Nothing to disclose.
Keywords: acute pancreatitis, fluid collection, endoscopy surgery, necrectomy, transluminal drainage
Procedia PDF Downloads 110
439 Assessment of Occupational Health and Safety Conditions of Health Care Workers in Barangay Health Centers in a Selected City in Metro Manila
Authors: Deinzel R. Uezono, Vivien Fe F. Fadrilan-Camacho, Bianca Margarita L. Medina, Antonio Domingo R. Reario, Trisha M. Salcedo, Luke Wesley P. Borromeo
Abstract:
The environment of health care workers is considered one of the most hazardous work settings due to the nature of the work. In developing countries, and especially in the Philippines, this continues to be overlooked in terms of programs and services on occupational health and safety (OHS). One possible reason for this is the existing information gap on OHS, which limits data comparability and impairs effective monitoring and assessment of interventions. To address this gap, there is a need to determine the current conditions of Filipino health care workers in their workplace. This descriptive cross-sectional study assessed the occupational health and safety conditions of health care workers in barangay health centers in a selected city in Metro Manila, Philippines by: (1) determining the hazards present in the workplace; (2) determining the most common self-reported medical problems; and (3) describing the elements of an OHS system based on the six building blocks of a health system. Assessment was done through a walkthrough survey, a self-administered questionnaire, and key informant interviews. Data analysis was done using Epi Info 7 and NVivo 11. Results revealed different health hazards present in the workplace, particularly biological hazards (exposure to sick patients and infectious specimens), physical hazards (inadequate space and/or lighting), chemical hazards (toxic reagents and flammable chemicals), and ergonomic hazards (activities requiring repetitive motion and awkward posture). Additionally, safety hazards (improper capping of syringes and lack of fire safety provisions) were also observed. Meanwhile, the most commonly self-reported chronic diseases among health care workers (N=336) were hypertension (20.24%, n=68) and diabetes (12.50%, n=42). The most commonly self-reported symptoms were colds (66.07%, n=222), coughs (63.10%, n=212), headache (55.65%, n=187), and muscle pain (50.60%, n=170), while other diseases were influenza (16.96%, n=57) and UTI (15.48%, n=52).
In terms of the elements of the OHS system, a general policy on occupational health and safety was found to be lacking and, in effect, there was no health and safety committee overseeing the implementation and monitoring of such a policy. The absence of a separate budget specific to OHS programs and services was also found to be a limitation. As a result, no OHS personnel or trainings/seminars were identified, and no established information system for OHS was in place. In conclusion, health and safety hazards were observed to be present across the barangay health centers visited in a selected city in Metro Manila. The medical conditions most commonly self-reported were hypertension and diabetes among chronic diseases; colds, coughs, headache, and muscle pain among medical symptoms; and influenza and UTI among other diseases. As for the elements of the occupational health and safety system, there was a lack in the general components of the six building blocks of the health system.
Keywords: health hazards, occupational health and safety, occupational health and safety system, safety hazards
Procedia PDF Downloads 187
438 Time Travel Testing: A Mechanism for Improving Renewal Experience
Authors: Aritra Majumdar
Abstract:
While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and also showcases how successful an organization is in holding on to its customers. It is a well-established observation that the lion's share of profit comes from existing customers, so seamless management of renewal journeys across different channels goes a long way in building trust in the brand. From a quality assurance standpoint, time travel testing gives both business and technology teams an approach to enhancing the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. Along with that, it will call out some of the best practices and common accelerator implementation ideas which are generic across verticals like healthcare, insurance, etc. In this abstract, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done and, most importantly, planning for automated testing during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover the required customer segments and narrowing it down to multiple offer sequences based on defined parameters are keys to successful time travel testing.
Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will describe the steps necessary for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will cover the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the author’s first-hand experience with time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight the key learnings which will help other teams implement time travel testing successfully.
Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas
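One common way to realize the planning pillar without resetting server clocks is to inject "today" into the renewal logic, so tests can travel forward or backward freely. The sketch below is a minimal, hypothetical illustration in Python; the offer stages and the 60-day window are invented for the example, not taken from any program described in the abstract.

```python
from datetime import date, timedelta

def renewal_offer(policy_start: date, term_days: int, today: date) -> str:
    """Return the renewal stage of a policy given an injected 'today'."""
    renewal_date = policy_start + timedelta(days=term_days)
    days_left = (renewal_date - today).days
    if days_left > 60:
        return "no-offer"        # too early for a renewal offer
    if days_left > 0:
        return "renewal-offer"   # inside the hypothetical offer window
    return "lapsed"              # past the renewal date

# Time travel: instead of changing the system clock, pass the target date.
start = date(2024, 1, 1)
assert renewal_offer(start, 365, date(2024, 6, 1)) == "no-offer"
assert renewal_offer(start, 365, date(2024, 12, 1)) == "renewal-offer"
assert renewal_offer(start, 365, date(2025, 2, 1)) == "lapsed"
```

Designing the code this way makes "time traveling" a matter of test data rather than environment manipulation, which also simplifies the enterprise-automation pillar.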
Procedia PDF Downloads 160
437 The Readaptation of the Subscale 3 of the NLit-IT (Nutrition Literacy Assessment Instrument for Italian Subjects)
Authors: Virginia Vettori, Chiara Lorini, Vieri Lastrucci, Giulia Di Pisa, Alessia De Blasi, Sara Giuggioli, Guglielmo Bonaccorsi
Abstract:
The design of the Nutrition Literacy Assessment Instrument (NLit) responds to the need for a tool that adequately assesses the construct of nutrition literacy (NL), which is strictly connected to the quality of the diet and nutritional health status. The NLit was originally developed and validated in the US context, and it was recently validated for Italian people too (NLit-IT), involving a sample of N = 74 adults. The results of the cross-cultural adaptation of the tool confirmed its validity, since it was established that the level of NL contributed to predicting the level of adherence to the Mediterranean Diet (convergent validity). Additionally, the results proved that the internal consistency and reliability of the NLit-IT were good (Cronbach’s alpha (ρT) = 0.78; 95% CI, 0.69–0.84; Intraclass Correlation Coefficient (ICC) = 0.68; 95% CI, 0.46–0.85). However, Subscale 3 of the NLit-IT, “Household Food Measurement”, showed lower values of ρT and ICC (ρT = 0.27; 95% CI, 0.1–0.55; ICC = 0.19; 95% CI, 0.01–0.63) than the entire instrument. Subscale 3 includes nine items, each consisting of a written question and a corresponding picture of a meal. In particular, items 2, 3, and 8 of Subscale 3 had the lowest rates of correct answers. The purpose of the present study was to identify the factors that influenced the internal consistency and reliability of Subscale 3 of the NLit-IT using a focus group methodology. A panel of seven experts was formed, comprising professionals in the fields of public health nutrition, dietetics, and health promotion, all of whom were trained in the concepts of nutrition literacy and food appearance. A member of the group moderated the discussion, which focused on identifying the reasons for the low levels of reliability and internal consistency. The members of the group discussed the level of comprehension of the items and how they could be readapted.
From the discussion, it emerged that the written questions were clear and easy to understand, but it was observed that the representations of the meals needed to be improved. Firstly, it was decided to introduce a fork or a spoon as a size reference to make the dimensions of the food portion easier to judge (items 1, 4, and 8). Additionally, the flat plate of items 3 and 5 should be substituted with a soup plate because, in the Italian national context, it is common to eat pasta or rice from this kind of plate. Secondly, specific measures should be considered for some kinds of food, such as a brick of yogurt instead of a cup of yogurt (items 1 and 4). Lastly, it was decided to retake the photographs of the meals using professional photographic techniques. In conclusion, we noted that the graphical representation of the items strongly influenced the level of participants’ comprehension of the questions; moreover, the research group agreed that the level of knowledge about nutrition and food portion sizes is low in the general population.
Keywords: nutritional literacy, cross-cultural adaptation, misinformation, food design
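The internal consistency statistic reported above, Cronbach's alpha, follows the standard formula α = k/(k−1) · (1 − Σσ²ᵢ/σ²_total), where k is the number of items, σ²ᵢ the variance of item i, and σ²_total the variance of respondents' total scores. The sketch below uses only the Python standard library and synthetic binary item scores (1 = correct); the data are invented for illustration, not the NLit-IT data.

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, aligned across respondents."""
    k = len(items)
    item_vars = [statistics.pvariance(it) for it in items]   # per-item variance
    totals = [sum(scores) for scores in zip(*items)]         # per-respondent total
    total_var = statistics.pvariance(totals)
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Synthetic scores for a 3-item subscale answered by 5 respondents.
item_a = [1, 0, 1, 1, 0]
item_b = [1, 0, 1, 0, 0]
item_c = [1, 1, 1, 1, 0]
alpha = cronbach_alpha([item_a, item_b, item_c])

# Sanity check: perfectly consistent (identical) items give alpha == 1.
assert abs(cronbach_alpha([item_a, item_a, item_a]) - 1.0) < 1e-9
```

A low alpha, as observed for Subscale 3, indicates that item responses do not covary, which is exactly what poor meal depictions would produce.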
Procedia PDF Downloads 172
436 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections
Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz
Abstract:
In long pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from ultra short pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in long pulse drilling of metals is identified; a model for the description of the asymptotic hole shape is numerically implemented, tested, and clearly confirmed by comparison with experimental data. The model assumes a robust process in the sense that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is identified as a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within a robust process regime, the melt flow characteristics can be captured by only one model parameter, namely the intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape follows from a threshold for the absorbed laser fluence), it is demonstrated that in the case of robust long pulse ablation the asymptotic shape forms such that the absorbed heat flux density equals the intensity threshold along the whole contour. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops, or even smart devices.
Resulting hole shapes can be calculated within seconds, a clear advantage over other simulations presented in the literature in the context of everyday industrial usage. Against this background, the software is additionally equipped with a user-friendly GUI which allows intuitive usage. Individual parameters can be adjusted using sliders while the simulation result appears immediately in an adjacent window. Platform-independent development allows flexible usage: the operator can use the tool to adjust the process in a very convenient manner on a tablet, while the developer can run the tool in the office in order to design new processes. Furthermore, to the best of the authors' knowledge, AsymptoticDrill is the first simulation which allows the import of measured real beam distributions and thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper, the emphasis is placed on investigating the effect of multiple reflections on the asymptotic hole shape, which gains in importance when drilling holes with large aspect ratios.
Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process
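The threshold condition described above can be sketched numerically. Assuming a Gaussian beam I(r) = I₀·exp(−2r²/w²) and single absorption (multiple reflections, the paper's actual focus, are neglected here), the condition that the absorbed flux density on the wall equals the intensity threshold, I(r)·sin(wall angle) = I_th, gives the wall slope dz/dr = sqrt((I(r)/I_th)² − 1). The following is an illustrative reconstruction under these assumptions, not the AsymptoticDrill implementation.

```python
import math

def asymptotic_profile(i0, i_th, w, n=200):
    """Depth profile z(r) where absorbed flux I(r)*sin(wall angle) == i_th.
    Single-reflection sketch; multiple reflections are neglected."""
    # Hole edge: radius where the on-axis-aligned intensity drops to i_th.
    r_edge = w * math.sqrt(math.log(i0 / i_th) / 2)
    rs = [r_edge * k / n for k in range(n + 1)]

    def slope(r):
        ratio = (i0 / i_th) * math.exp(-2 * r * r / (w * w))
        return math.sqrt(max(ratio * ratio - 1.0, 0.0))

    # Integrate dz/dr from r to r_edge (trapezoid rule); deepest at r = 0.
    depth = [0.0] * (n + 1)
    for k in range(n - 1, -1, -1):
        dr = rs[k + 1] - rs[k]
        depth[k] = depth[k + 1] + 0.5 * (slope(rs[k]) + slope(rs[k + 1])) * dr
    return rs, depth
```

The profile is deepest on the axis and reaches zero depth exactly at the radius where the intensity falls to the threshold, which is the defining property of the asymptotic shape in the single-reflection limit.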
Procedia PDF Downloads 214
435 Generic Early Warning Signals for Program Student Withdrawals: A Complexity Perspective Based on Critical Transitions and Fractals
Authors: Sami Houry
Abstract:
Complex systems exhibit universal characteristics as they near a tipping point. Among them are common generic early warning signals which precede critical transitions. These signals include: critical slowing down, in which the rate of recovery from perturbations decreases over time; an increase in the variance of the state variable; an increase in the skewness of the state variable; an increase in the autocorrelations of the state variable; flickering between different states; and an increase in spatial correlations over time. The presence of the signals has management implications, as identifying the signals near the tipping point could allow management to identify intervention points. Despite the applications of generic early warning signals in various scientific fields, such as fisheries, ecology, and finance, a review of the literature did not identify any applications that address the program student withdrawal problem at undergraduate distance universities. This area could benefit from the application of generic early warning signals, as the program withdrawal rate amongst distance students is higher than that at face-to-face conventional universities. This research specifically assessed the generic early warning signals through an intensive case study of undergraduate program student withdrawal at a Canadian distance university. The university is non-cohort based due to its system of continuous course enrollment, where students can enroll in a course at the beginning of every month. The assessment of the signals was achieved through comparing the incidences of generic early warning signals among students who withdrew or simply became inactive in their undergraduate program of study (the true positives) to the incidences of the generic early warning signals among graduates (the false positives). This was achieved through significance testing.
Research findings showed support for the signal pertaining to the rise in flickering, which is represented by the increase in a student’s non-pass rates prior to withdrawing from a program; moderate support for the signal of critical slowing down, as reflected in the increase in the time a student spends in a course; and moderate support for the signals of increased autocorrelation and increased variance in the grade variable. The findings did not support the signal on the increase in skewness of the grade variable. The research also proposes a new signal based on the fractal-like characteristic of student behavior. The research also sought to extend knowledge by investigating whether the emergence of a program withdrawal status is self-similar or fractal-like at multiple levels of observation, specifically the program level and the course level: in other words, whether the act of withdrawal at the program level is also present at the course level. The findings moderately supported self-similarity as a potential signal. Overall, the assessment suggests that the signals, with the exception of the increase in skewness, could be utilized as a predictive management tool, and that the fractal-like characteristic of withdrawal could serve as an additional signal in addressing the student program withdrawal problem.
Keywords: critical transitions, fractals, generic early warning signals, program student withdrawal
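The variance, autocorrelation, and skewness signals assessed above are typically computed over a sliding window of the state variable, here a student's grade series. The sketch below is a generic, standard-library illustration of these indicators on invented data, not the study's actual analysis.

```python
import statistics

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a (non-constant) window."""
    m = statistics.fmean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

def skewness(xs):
    """Population skewness of a (non-constant) window."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

def rolling_signals(series, window):
    """(variance, lag-1 autocorrelation, skewness) over a sliding window."""
    out = []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        out.append((statistics.pvariance(w), lag1_autocorr(w), skewness(w)))
    return out

# Hypothetical grade series that starts to 'flicker' before withdrawal:
grades = [70, 72, 71, 69, 73, 60, 75, 55, 80, 50]
signals = rolling_signals(grades, window=5)   # variance rises toward the end
```

A sustained rise in these rolling statistics, compared against the distribution observed for graduates, is what the significance testing in the study would then assess.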
Procedia PDF Downloads 185
434 Management of Dysphagia after Supra Glottic Laryngectomy
Authors: Premalatha B. S., Shenoy A. M.
Abstract:
Background: Rehabilitation of swallowing is as vital as speech in surgically treated head and neck cancer patients to maintain nutritional support, enhance wound healing, and improve quality of life. Aspiration following supraglottic laryngectomy is very common, and rehabilitation of the same is crucial, requiring involvement of the speech therapist in close contact with the head and neck surgeon. Objectives: To examine swallowing outcomes after intensive therapy in supraglottic laryngectomy. Materials: Thirty-nine supraglottic laryngectomees participated in the study. Of them, 36 subjects were males and 3 were females, in the age range of 32-68 years. Eighteen subjects had undergone standard supraglottic laryngectomy (Group 1) for supraglottic lesions, whereas 21 underwent extended supraglottic laryngectomy (Group 2) for base of tongue and lateral pharyngeal wall lesions. Prior to surgery, a visit by the speech pathologist was mandatory to assess suitability for surgery and rehabilitation. Dysphagia rehabilitation started after decannulation of the tracheostoma, focusing on orientation to the anatomy and physiological variation before and after surgery, tailor-made for each individual based on the type and extent of surgery. A supraglottic diet - soft solids with the supraglottic swallow method - was advocated to prevent aspiration. The success of the intervention was documented as the number of sessions taken to swallow different food consistencies, and also as the percentage of subjects who achieved satisfactory swallow in terms of number of weeks, in both groups. Results: Statistical data were computed in two ways in both groups: 1) the percentage (%) of subjects who swallowed satisfactorily within a time frame of less than 3 weeks to more than 6 weeks; 2) the number of sessions taken to swallow without aspiration as far as food consistency was concerned.
The study indicated that among Group 1 subjects (standard supraglottic laryngectomy), 61% (n=11) were successfully rehabilitated, but their swallowing normalcy was delayed to an average of the 29th post-operative day (3-6 weeks). Thirty-three percent (33%, n=6) of the subjects could swallow satisfactorily without aspiration even before 3 weeks, and only 5% (n=1) needed more than 6 weeks to achieve normal swallowing ability. Among Group 2 subjects (extended SGL), only 47% (n=10) achieved satisfactory swallow by 3-6 weeks, and 24% (n=5) achieved normal swallowing ability before 3 weeks. Around 4% (n=1) needed more than 6 weeks, and as many as 24% (n=5) continued to be supplemented with nasogastric feeding even 8-10 months post-operatively, as they exhibited severe aspiration. As far as types of food consistency were concerned, Group 1 subjects were able to swallow all types without aspiration much earlier than Group 2 subjects. Group 1 needed only 8 swallowing therapy sessions for thickened soft solids and 15 sessions for liquids, whereas Group 2 required 14 sessions for soft solids and 17 sessions for liquids to achieve swallowing normalcy without aspiration. Conclusion: The study highlights the importance of dysphagia intervention by the speech pathologist in supraglottic laryngectomees.
Keywords: dysphagia management, supraglottic diet, supraglottic laryngectomy, supraglottic swallow
Procedia PDF Downloads 232
433 A Mother’s Silent Adversary: A Case of Pregnant Woman with Cervical Cancer
Authors: Paola Millare, Nelinda Catherine Pangilinan
Abstract:
Background and Aim: Cervical cancer is the most commonly diagnosed gynecological malignancy during pregnancy. Owing to the rarity of the disease and the complexity of all the factors that have to be taken into consideration, standardization of treatment is very difficult. Cervical cancer is the second most common malignancy among women. The treatment of cancer during pregnancy is most challenging in the case of cervical cancer, since the pregnant uterus itself is affected. This report aims to present a case of cervical cancer in a pregnant woman, its management, and the several issues accompanying it. Methods: This is a case of a 28-year-old, Gravida 4 Para 2 (1111), who presented with vaginal discharge that was watery to mucoid, whitish, non-foul smelling, and increasing in amount. Internal examination revealed normal external genitalia and a parous outlet; the cervix was transformed into a fungating mass measuring 5x4 cm, with left parametrial involvement; the body of the uterus was enlarged to 24 weeks' size; there was no adnexal mass or tenderness. She had a cervical punch biopsy, which revealed well-differentiated adenocarcinoma of cervical tissue. Standard management for stage 2B cervical carcinoma is to start radiation or perform radical hysterectomy. In patients diagnosed with cervical cancer while pregnant, this kind of management would result in fetal loss. The patient declined the said management and opted to delay treatment, wait for her baby to reach at least term, and proceed to cesarean section as the route of delivery. Results: The patient underwent an elective cesarean section at 37 weeks age of gestation, with the outcome of a term, live baby boy, APGAR score 7,9, birthweight 2600 grams. One month postpartum, the patient followed up and completed radiotherapy, chemotherapy, and brachytherapy. She was advised to return after 6 months for monitoring.
On her last check-up, an internal examination was done, which revealed normal external genitalia; the vagina admits 2 fingers with ease; and there is a palpable fungating mass at the cervix measuring 2x2 cm. A repeat gynecologic oncologic ultrasound was done, revealing an endophytic cervical mass, grade 1 color score, with 35% stromal invasion, post-radiation reactive lymph nodes, and intact paracolpium, pericervical, and parametrial regions. The patient was then advised to undergo a pelvic boost and close monitoring of the cervical mass. Conclusion: Cervical cancer in pregnancy is rare but is a dilemma for women and their physicians. Treatment should be multidisciplinary and individualized following careful counseling. In this case, the treatment was clearly on the side of preventing the progression of cervical cancer while she was pregnant; however, for ethical reasons, management deferred to the right of the patient to decide for her own health and that of her unborn child. The collaborative collection of data relating to treatment and outcome is strongly encouraged.
Keywords: cancer, cervical, ethical, pregnancy
Procedia PDF Downloads 245
432 Seismic Assessment of Flat Slab and Conventional Slab System for Irregular Building Equipped with Shear Wall
Authors: Muhammad Aji Fajari, Ririt Aprilin Sumarsono
Abstract:
Particular instability of a building structure under lateral load (e.g., earthquake) will arise due to irregularity in the vertical and horizontal directions, as stated in SNI 03-1726-2012. The conventional slab has been considered to contribute little to increasing the stability of the structure, unless a special slab system such as a flat slab is taken into account. In this paper, the flat slab system at Sequis Tower, located in South Jakarta, is analyzed to assess its performance under earthquake. The building consists of 6 basement floors where the flat slab system is applied. The flat slab system is the main focus of this paper, and its performance under earthquake is compared with that of a conventional slab system. Regarding the floor plan of the Sequis Tower basement, the re-entrant corner ratio for this building is 43.21%, which exceeds the allowable 15% stated in ASCE 7-05. Based on that, horizontal irregularity is a concern for the analysis, whereas vertical irregularity does not exist for this building. A flat slab system is one in which the slabs are supported by drop panels with shear heads instead of beams. Major advantages of flat slab application are a decreased structural dead load, the removal of beams so that the clear height can be maximized, and the provision of lateral resistance under lateral load. However, deflection at the middle strip and punching shear are problems that must be considered in detail. Torsion usually appears when the dimensions of structural members under flexure, such as beams or columns, are improperly proportioned. Considering the flat slab as an alternative slab system helps keep collapse due to torsion down. A common seismic load resisting system applied in buildings is the shear wall. Installation of shear walls makes the structural system stronger and stiffer, resulting in reduced displacement under earthquake.
The eccentricity of the shear wall locations in this building resolves the instability due to horizontal irregularity so that the earthquake load can be absorbed. Performing linear dynamic analyses such as response spectrum and time history analysis under earthquake load is suitable when irregularity arises, so that the performance of the structure can be properly observed. The response spectrum data for South Jakarta, with a PGA of 0.389g, is the basis for the idealization of the earthquake load in the several load combinations stated in SNI 03-1726-2012. The analysis will result in basic seismic parameters such as the period, displacement, and base shear of the system; in addition, the internal forces of the critical members will be presented. The predicted period of the structure under earthquake load is 0.45 second, but as different slab systems are applied in the analysis, the period will take different values. The flat slab system will probably show better performance in terms of displacement compared to the conventional slab system, due to its higher stiffness contribution to the whole building system. In line with displacement, the deflection of the slab will be smaller for the flat slab than for the conventional slab. Hence, the shear wall is more effective in strengthening the conventional slab system than the flat slab system.
Keywords: conventional slab, flat slab, horizontal irregularity, response spectrum, shear wall
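For context, the response-spectrum idealization used in codes like SNI 03-1726-2012 and ASCE 7 maps a structure's period T to a spectral acceleration Sa, from which an equivalent static base shear V = (Sa/(R/Ie))·W follows. The sketch below is generic; the parameter values (SDS, SD1, R, Ie, W) are hypothetical placeholders, not the Sequis Tower values.

```python
def design_spectrum(t, sds, sd1):
    """ASCE 7-style design response spectrum Sa(T) (also adopted by SNI 1726).
    sds, sd1: design spectral accelerations at short periods and at 1 s."""
    t0 = 0.2 * sd1 / sds
    ts = sd1 / sds
    if t < t0:
        return sds * (0.4 + 0.6 * t / t0)   # rising branch
    if t <= ts:
        return sds                          # plateau
    return sd1 / t                          # descending branch

def base_shear(sa, r, ie, weight):
    """Equivalent static base shear V = (Sa / (R / Ie)) * W."""
    return sa / (r / ie) * weight

# With assumed sds = 1.0g and sd1 = 0.6g, a 0.45 s period (as predicted
# above) lies on the plateau of the spectrum: Sa = sds.
sa = design_spectrum(0.45, sds=1.0, sd1=0.6)
```

In a full analysis the modal responses from this spectrum would be combined (e.g., SRSS or CQC) and factored into the code load combinations; the single-value computation here only illustrates the mapping.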
Procedia PDF Downloads 191
431 Production of Functional Crackers Enriched with Olive (Olea europaea L.) Leaf Extract
Authors: Rosa Palmeri, Julieta I. Monteleone, Antonio C. Barbera, Carmelo Maucieri, Aldo Todaro, Virgilio Giannone, Giovanni Spagna
Abstract:
In recent years, considerable interest has been shown in the functional properties of foods, and a relevant role has been played by phenolic compounds, which are able to scavenge free radicals. A more sustainable agriculture has to emerge to guarantee food supply over the next years. Wheat, corn, and rice are the most commonly cultivated cereals, but other cereal species, such as barley, can be appreciated for their peculiarities. Barley (Hordeum vulgare L.) is a C3 winter cereal that shows high resistance to drought and salt stresses. There is growing interest in barley as an ingredient for the production of functional foods due to its high content of phenolic compounds and beta-glucans. In this respect, the possibility of separating specific functional fractions from food industry by-products looks very promising. Olive leaves represent a quantitatively significant by-product of olive grove farming and are an interesting source of phenolic compounds. In particular, oleuropein, which provides important nutritional benefits, is the main phenolic compound in olive leaves and ranges from 17% to 23%, depending upon the cultivar and growing season. Together with oleuropein and its derivatives (e.g., dimethyloleuropein, oleuropein diglucoside), olive leaves further contain tyrosol, hydroxytyrosol, and a series of secondary metabolites structurally related to them: verbascoside, ligstroside, hydroxytyrosol glucoside, tyrosol glucoside, oleuroside, oleoside-11-methyl ester, and nuzhenide. Several flavonoids, flavonoid glycosides, and phenolic acids have also been described in olive leaves. The aim of this work was the production of functional foods with a higher content of polyphenols and the evaluation of their shelf life. Organic durum wheat and barley grains, which contain higher levels of phenolic compounds, were used for the production of crackers. Olive leaf extract (OLE) was obtained from cv. ‘Biancolilla’ by an aqueous extraction method.
Two baking trials were performed with both organic durum wheat and barley flours, adding olive leaf extract. Control crackers, made for comparison, were produced with the same formulation, replacing OLE with water. Total phenolic compounds, moisture content, water activity, and textural properties at different storage times were determined to evaluate the shelf life of the products. Our preliminary results showed that the enriched crackers had higher phenolic content and antioxidant activity than the control. Olive leaf extracts could thus be a good candidate as a functional ingredient in cracker production, because bakery items are consumed daily and have a long shelf life.
Keywords: barley, functional foods, olive leaf, polyphenols, shelf life
Procedia PDF Downloads 306
430 Peripheral Neuropathy after Locoregional Anesthesia
Authors: Dalila Chaid, Bennameur Fedilli, Mohammed Amine Bellelou
Abstract:
The study focuses on the experience of lower-limb amputees, who face both physical and psychological challenges due to their disability. Chronic neuropathic pain and various types of limb pain are common in these patients. They often require orthopaedic interventions for issues such as dressings, infection, ulceration, and bone-related problems. Research Aim: The aim of this study is to determine the most suitable anaesthetic technique for lower-limb amputees, which can provide them with the greatest comfort and prolonged analgesia. The study also aims to demonstrate the effectiveness and cost-effectiveness of ultrasound-guided local regional anaesthesia (LRA) in this patient population. Methodology: The study is an observational analytical study conducted over a period of eight years, from 2010 to 2018. It includes a total of 955 cases of revisions performed on lower limb stumps. The parameters analyzed in this study include the effectiveness of the block and the use of sedation, the duration of the block, the post-operative visual analog scale (VAS) scores, and patient comfort. Findings: The study findings highlight the benefits of ultrasound-guided LRA in providing comfort by optimizing post-operative analgesia, which can contribute to psychological and bodily repair in lower-limb amputees. Additionally, the study emphasizes the use of alpha2 agonist adjuvants with sedative and analgesic properties, long-acting local anaesthetics, and larger volumes for better outcomes. Theoretical Importance: This study contributes to the existing knowledge by emphasizing the importance of choosing an appropriate anaesthetic technique for lower-limb amputees. It highlights the potential of ultrasound-guided LRA and the use of specific adjuvants and local anaesthetics in improving post-operative analgesia and overall patient outcomes. 
Data Collection and Analysis Procedures: Data for this study were collected through the analysis of medical records and relevant documentation related to the 955 cases included in the study. The effectiveness of the anaesthetic technique, the duration of the block, post-operative pain scores, and patient comfort were analyzed using statistical methods. Question Addressed: The study addresses the question of which anaesthetic technique is most suitable for lower-limb amputees to provide them with optimal comfort and prolonged analgesia. Conclusion: The study concludes that ultrasound-guided LRA, along with the use of alpha2 agonist adjuvants, long-acting local anaesthetics, and larger volumes, can be an effective approach to providing comfort and improving post-operative analgesia for lower-limb amputees. This technique can potentially contribute to the psychological and bodily repair of these patients. The findings of this study have implications for clinical practice in the management of lower-limb amputees, highlighting the importance of personalized anaesthetic approaches for better outcomes.
Keywords: neuropathic pain, ultrasound-guided peripheral nerve block, DN4 quiz, EMG
Procedia PDF Downloads 79
429 The Rise and Effects of Social Movement on Ethnic Relations in Malaysia: The Bersih Movement as a Case Study
Authors: Nur Rafeeda Daut
Abstract:
The significance of this paper is to provide insight into the role of social movements in building stronger ethnic relations in Malaysia. In particular, it focuses on how the BERSIH movement has been able to bring together the different ethnic groups in Malaysia to resist the present political administration, which is seen to manipulate the electoral process and suppress Malaysians' basic freedom of expression. Attention is given to how and why this group emerged and its mobilisation strategies. Malaysia, a multi-ethnic and multi-religious society, gained its independence from the British in 1957. Like many other new nations, it faces the challenges of nation building and governance. From economic issues to racial and religious tension, Malaysia is experiencing high levels of corruption and income disparity among the different ethnic groups. The political parties in Malaysia are also divided along ethnic lines. BERSIH, which translates as ‘clean’, is a movement which seeks to reform the current electoral system in Malaysia to ensure equality, justice, and free and fair elections. It was originally formed in 2007 as a joint committee comprising leaders from political parties, civil society groups, and NGOs. In April 2010, the coalition developed into an entirely civil society movement unaffiliated with any political party. BERSIH claims that the electoral roll in Malaysia has been marred by fraud and other irregularities. In 2015, the BERSIH movement organised its biggest rally in Malaysia, alongside 38 other rallies held internationally. Supporters of BERSIH who participated in the demonstration came from all the different ethnic groups in Malaysia. In this paper, two social movement theories are used - resource mobilization theory and political opportunity structure - to explain the emergence and mobilization of the BERSIH movement in Malaysia.
Based on these two theories, corruption, which is believed to have contributed to the income disparity among Malaysians, generated the development of this movement. The rise of re-Islamisation values propagated by certain groups in Malaysia and the shift in political leadership have also created political opportunities for this movement to emerge. In line with political opportunity structure theory, the BERSIH movement will continue to create more opportunities for the empowerment of civil society and the unity of ethnic relations in Malaysia. A comparison is made of the degree of ethnic unity in the country before and after BERSIH was formed. This includes analysing the level of re-Islamisation values and also the level of corruption in relation to economic income under the premiership of the former Prime Minister Mahathir and the present Prime Minister Najib Razak. The country has never seen an uprising like BERSIH, where ethnic groups which have for years been divided by ethnically based political parties and economic disparity joined together with a common goal of equality and fair elections. As such, the BERSIH movement is a unique case that illustrates the changing political landscape, ethnic relations, and civil society in Malaysia.
Keywords: ethnic relations, Malaysia, political opportunity structure, resource mobilization theory, social movement
Procedia PDF Downloads 349
428 The Lived Experiences and Coping Strategies of Women with Attention Deficit and Hyperactivity Disorder (ADHD)
Authors: Oli Sophie Meredith, Jacquelyn Osborne, Sarah Verdon, Jane Frawley
Abstract:
PROJECT OVERVIEW AND BACKGROUND: Over one million Australians are affected by ADHD, at an economic and social cost of over $20 billion per annum. Despite their health outcomes being significantly worse than men's, women have historically been overlooked in ADHD diagnosis and treatment. While research suggests physical activity and other non-prescription options can help with ADHD symptoms, the frontline response to ADHD remains expensive stimulant medications that can have adverse side effects. By interviewing women with ADHD, this research will examine women's self-directed approaches to managing symptoms, including alternatives to prescription medications. It will investigate barriers and affordances to potentially helpful approaches and identify any concerning strategies pursued in lieu of diagnosis. SIGNIFICANCE AND INNOVATION: Despite the economic and societal impact of ADHD on women, research investigating how women manage their symptoms is scant. This project is significant because, although women's ADHD symptoms are markedly different from those of men, mainstream treatment has been based on the experiences of men. Further, it is thought that in developing nuanced coping strategies, women may have masked their symptoms. Thus, this project will highlight strategies which women deem effective in ‘thriving’ rather than just ‘hiding’. By investigating the health service use, self-care, and physical activity of women with ADHD, this research aligns with priority research areas identified by the November 2023 senate ADHD inquiry report. APPROACH AND METHODS: Semi-structured interviews will be conducted with up to 20 women with ADHD. Interviews will be conducted in person and online to capture experience across rural and metropolitan Australia. Participants will be recruited in partnership with the peak representative body, ADHD Australia. The research will use an intersectional framework, and data will be analysed thematically.
This project is led by an interdisciplinary and cross-institutional team of women with ADHD. Reflexive interviewing skills will be employed to help interviewees feel more comfortable disclosing their experiences, especially where they share common ground. ENGAGEMENT, IMPACT AND BENEFIT: This research will benefit women with ADHD by increasing knowledge of strategies and alternative treatments to prescription medications, reducing the social and economic burden of ADHD on Australia and on individuals. It will also benefit women by identifying risks involved with some self-directed approaches pursued in lieu of medical advice. The project has an accessible impact plan to directly benefit end-users, which includes the development of a podcast and a PDF resource translating findings. These resources will reach a wide audience through ADHD Australia's extensive national networks. We will collaborate with Charles Sturt's Accessibility and Inclusion Division of Safety, Security and Well-being to create a targeted resource for students with ADHD.
Keywords: ADHD, women's health, self-directed strategies, health service use, physical activity, public health
Procedia PDF Downloads 76