Search results for: open failure
299 Genome-Wide Analysis Identifies Locus Associated with Parathyroid Hormone Levels
Authors: Antonela Matana, Dubravka Brdar, Vesela Torlak, Marijana Popovic, Ivana Gunjaca, Ozren Polasek, Vesna Boraska Perica, Maja Barbalic, Ante Punda, Caroline Hayward, Tatijana Zemunik
Abstract:
Parathyroid hormone (PTH) plays a critical role in the regulation of bone mineral metabolism and calcium homeostasis. Higher PTH levels are associated with heart failure, hypertension, coronary artery disease, cardiovascular mortality and poorer bone health. A twin study estimated that 60% of the variation in PTH concentrations is genetically determined. Only one GWAS of PTH concentration has been reported to date. The identified loci explained 4.5% of the variance in circulating PTH, suggesting that additional genetic variants remain undiscovered. Therefore, the aim of this study was to identify novel genetic variants associated with PTH levels in a general population. We performed a GWAS meta-analysis on 2596 individuals originating from three Croatian cohorts, the City of Split and the Islands of Korčula and Vis, within the large-scale “10,001 Dalmatians” project. A total of 7 411 206 variants, imputed using the 1000 Genomes reference panel, with minor allele frequency ≥ 1% and Rsq ≥ 0.5, were analyzed for association. The GWAS within each data set was performed under an additive model, controlling for age, gender and relatedness. The meta-analysis was conducted using the inverse-variance fixed-effects method. Furthermore, to identify sex-specific effects, we conducted GWAS meta-analyses analyzing males and females separately. In addition, we performed a biological pathway analysis. Four SNPs, representing one locus, reached genome-wide significance. The most significant SNP was rs11099476 on chromosome 4 (P = 1.15×10⁻⁸), which explained 1.14% of the variance in PTH. The SNP is located near the protein-coding gene RASGEF1B. Additionally, we detected suggestive associations with rs77178854, located on chromosome 2 in the DPP10 gene (P = 2.46×10⁻⁷), and rs481121, located on chromosome 1 (P = 3.58×10⁻⁷) near the GRIK1 gene. One of the top hits detected in the main meta-analysis, the intron variant rs77178854 located within the DPP10 gene, reached genome-wide significance in females (P = 2.21×10⁻⁹). No single locus was identified in the meta-analysis in males. Fifteen biological pathways were functionally enriched at P < 0.01, with muscle contraction, ion homeostasis and cardiac conduction as the most significant pathways. RASGEF1B is a guanine nucleotide exchange factor known to be associated with height, bone density, and hip. DPP10 encodes a membrane protein that is a member of the serine protease family, which binds specific voltage-gated potassium channels and alters their expression and biophysical properties. In conclusion, we identified two novel loci associated with PTH levels in a general population, providing further insights into the genetics of this complex trait.
Keywords: general population, genome-wide association analysis, parathyroid hormone, single nucleotide polymorphisms.
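The meta-analysis step described above combines per-cohort effect estimates with inverse-variance fixed-effects weighting. A minimal sketch of that combination for a single variant follows; the function name and the per-cohort numbers are illustrative assumptions, not values from the study.

```python
import math

def inverse_variance_meta(betas, ses):
    """Fixed-effects meta-analysis of per-cohort effect sizes.

    Each cohort contributes an effect estimate (beta) and its standard
    error; weights are the inverse of the squared standard errors.
    """
    weights = [1.0 / se ** 2 for se in ses]
    beta_meta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se_meta = math.sqrt(1.0 / sum(weights))
    z = beta_meta / se_meta
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value, normal approximation
    return beta_meta, se_meta, p

# Hypothetical per-cohort estimates for one SNP (Split, Korcula, Vis)
beta, se, p = inverse_variance_meta([0.21, 0.18, 0.25], [0.06, 0.07, 0.09])
print(f"meta beta = {beta:.3f}, se = {se:.3f}, P = {p:.2e}")
```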
298 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ
Authors: Lalita, Niladri Sarkar, Subhasis Ghosh
Abstract:
Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal-state resistivity over a wide range of temperatures, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is an indication of a strongly correlated metallic state, known as the “strange metal”, attributed to non-Fermi liquid (NFL) behavior. The proximity of superconductivity to LITR suggests that there may be a common underlying origin. The LITR has been attributed to a dissipative mechanism, restricted by quantum mechanics and commonly known as “Planckian dissipation”, a term first coined by Zaanen; the associated inelastic scattering time τ is given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. Several striking issues remain to be resolved if we are to find, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates: (i) the universality of α ~ 1, about which doubts have recently been raised in some cases; (ii) so far, Planckian dissipation has been demonstrated in overdoped cuprates, but if the proximity to quantum criticality is important, then Planckian dissipation should also be observed in optimally doped and marginally underdoped cuprates, and the link between Planckian dissipation and quantum criticality still remains an open problem; (iii) the validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi₂Sr₂Ca₂Cu₃O₁₀₊δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, with x = 0.16 (optimally doped) and x = 0.145 (marginally underdoped), have been used for this investigation. It is realized that steady-state photo-excitation converts magnetic Cu²⁺ ions to nonmagnetic Cu⁺ ions, which reduces the superconducting transition temperature (Tc) by suppressing the superfluid density. In Bi-2223, one would expect the maximum suppression of Tc to occur at the charge transfer gap. We have observed that the suppression of Tc starts at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this to the transition from Cu-3d⁹ (Cu²⁺) to Cu-3d¹⁰ (Cu⁺), known as the d⁹−d¹⁰L transition; photoexcitation turns some Cu ions in the CuO₂ planes into spinless, non-magnetic potential perturbations, as Zn²⁺ does in the CuO₂ plane of Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photoexcitation. Superconductivity can be destroyed completely by introducing ≈ 2% of Cu⁺ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation in underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that we could vary Tc dynamically and reversibly, so that the LITR and the associated Planckian dissipation can be studied over a wide range of Tc without changing the doping chemically.
Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal
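The Planckian bound quoted above, 1/τ = αkBT/ℏ, fixes the scattering time once the temperature and the dimensionless constant α are chosen. Below is a minimal sketch of that evaluation, assuming α = 1; the temperatures are arbitrary illustrative values.

```python
HBAR = 1.054571817e-34  # reduced Planck constant (J s)
K_B = 1.380649e-23      # Boltzmann constant (J/K)

def planckian_tau(temperature_k, alpha=1.0):
    """Inelastic scattering time from 1/tau = alpha * k_B * T / hbar."""
    return HBAR / (alpha * K_B * temperature_k)

for t in (50, 100, 300):  # temperatures in kelvin
    print(f"T = {t:3d} K  ->  tau ≈ {planckian_tau(t):.2e} s")
```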
297 The Importance of Dialogue, Self-Respect, and Cultural Etiquette in Multicultural Society: An Islamic and Secular Perspective
Authors: Julia A. Ermakova
Abstract:
In today's multicultural societies, dialogue, self-respect, and cultural etiquette play a vital role in fostering mutual respect and understanding. Whether viewed from an Islamic or secular perspective, the importance of these values cannot be overstated. Firstly, dialogue is essential in multicultural societies as it allows individuals from different cultural backgrounds to exchange ideas, opinions, and experiences. To engage in dialogue, one must be open and willing to listen, understand, and respect the views of others. This requires a level of self-awareness, where individuals must know themselves and their interlocutors to create a productive and respectful conversation. Secondly, self-respect is crucial for individuals living in multicultural societies (McLarney). One must have adequately high self-esteem and self-confidence to interact with others positively. By valuing oneself, individuals can create healthy relationships and foster mutual respect, which is essential in diverse communities. Thirdly, cultural etiquette is a way of demonstrating the beauty of one's culture by exhibiting good temperament (Al-Ghazali). Adab, a concept that encompasses good manners, praiseworthy words and deeds, and the pursuit of what is considered good, is highly valued in Islamic teachings. By adhering to Adab, individuals can guard against making mistakes and demonstrate respect for others. Islamic teachings provide etiquette for every situation in life, making up the way of life for Muslims. In the Islamic view, an elegant Muslim woman has several essential qualities, including cultural speech and erudition, speaking style, awareness of how to greet, the ability to receive compliments, lack of desire to argue, polite behavior, avoiding personal insults, and having good intentions (Al-Ghazali). The Quran highlights the inclination of people towards arguing, bickering, and disputes (Qur'an, 4:114). Therefore, it is imperative to avoid useless arguments and disputes, for they are poison that poisons our lives. The Prophet Muhammad, peace and blessings be upon him, warned that the most hateful person to Allah is an irreconcilable disputant (Al-Ghazali). By refraining from such behavior, individuals can foster respect and understanding in multicultural societies. From a secular perspective, respecting the views of others is crucial to engage in productive dialogue. The rule of argument emphasizes the importance of showing respect for the other person's views, allowing for the possibility of error on one's part, and avoiding telling someone they are wrong (Atamali). By exhibiting polite behavior and having respect for everyone, individuals can create a welcoming environment and avoid conflict. In conclusion, the importance of dialogue, self-respect, and cultural etiquette in multicultural societies cannot be overstated. By engaging in dialogue, respecting oneself and others, and adhering to cultural etiquette, individuals can foster mutual respect and understanding in diverse communities. Whether viewed from an Islamic or secular perspective, these values are essential for creating harmonious societies.Keywords: multiculturalism, self-respect, cultural etiquette, adab, ethics, secular perspective
296 New Media and the Personal Vote in General Elections: A Comparison of Constituency Level Candidates in the United Kingdom and Japan
Authors: Sean Vincent
Abstract:
Within the academic community, there is a consensus that political parties in established liberal democracies are facing a myriad of organisational challenges as a result of falling membership, weakening links to grass-roots support and rising voter apathy. During this same period of party decline and growing public disengagement, political parties have become increasingly professionalised. The professionalisation of political parties owes much to changes in technology, with television becoming the dominant medium for political communication. In recent years, however, it has become clear that a new medium of communication is being utilised by political parties and candidates – New Media. New Media, a term hard to define but related to internet-based communication, offers a potential revolution in political communication. It can be utilised by anyone with access to the internet, and its most widely used platforms of communication, such as Facebook and Twitter, are free to use. The advent of Web 2.0 has dramatically changed what can be done with the Internet. Websites now allow candidates at the constituency level to fundraise, organise and set out personalised policies. Social media allows them to communicate with supporters and potential voters practically cost-free. As such, candidate dependency on the national party for resources and image now lies open to debate. Arguing that greater candidate independence may be a natural next step in light of the contemporary challenges faced by parties, this paper examines how New Media is being used by candidates at the constituency level to increase their personal vote. The paper will present findings from research carried out during two elections – the Japanese Lower House election of 2014 and the UK general election of 2015. During these elections, a sample of 150 candidates from the three biggest parties in each country was selected, and their new media output, specifically candidate websites, Twitter and Facebook, was subjected to content analysis. The analysis examines how candidates are using new media to become more independent from the national party, both functionally, through fundraising and volunteer mobilisation, and politically, through the promotion of personal/local policies. In order to validate the results of the content analysis, this paper will also present evidence from interviews carried out with 17 candidates who stood in the 2014 Japanese Lower House election or the 2015 UK general election. With a combination of statistical analysis and interviews, several conclusions can be made about the use of New Media at the constituency level. The findings show not just a clear difference in the way candidates from each country are using New Media but also differences within countries based upon the particular circumstances of each constituency. While it has not yet replaced traditional methods of fundraising and activist mobilisation, New Media is becoming increasingly important in campaign organisation, and the general consensus amongst candidates is that its importance will continue to grow as politics in both countries becomes more diffuse.
Keywords: political campaigns, elections, new media, political communication
295 Application of Self-Efficacy Theory in Counseling Deaf and Hard of Hearing Students
Authors: Nancy A. Delich, Stephen D. Roberts
Abstract:
This case study explores using self-efficacy theory in counseling deaf and hard of hearing students in one California school district. Self-efficacy is described as the confidence a student has for performing a set of skills required to succeed at a specific task. When students need to learn a skill, self-efficacy can be a major factor in influencing behavioral change. Self-efficacy is domain specific, meaning that students can have high confidence in their abilities to accomplish a task in one domain, while at the same time having low confidence in their abilities to accomplish another task in a different domain. The communication isolation experienced by deaf and hard of hearing children and adolescents can negatively impact their belief about their ability to navigate life challenges. There is a need to address issues that impact deaf and hard of hearing students’ social-emotional development. Failure to address these needs may result in depression, suicidal ideation, and anxiety among other mental health concerns. Self-efficacy training can be used to address these socio-emotional developmental issues with this population. Four sources of experiences are applied during an intervention: (a) enactive mastery experience, (b) vicarious experience, (c) verbal persuasion, and (d) physiological and affective states. This case study describes the use of self-efficacy training with a coed group of 12 deaf and hard of hearing high school students who experienced bullying at school. Beginning with enactive mastery experience, the counselor introduced the topic of bullying to the group. The counselor educated the students about the different types of bullying while teaching them the terminology, signs and their meanings. The most effective way to increase self-efficacy is through extensive practice. To better understand these concepts, the students practiced through role-playing with the goal of developing self-advocacy skills. Vicarious experience is the perception that students have about their capabilities. Viewing other students advocating for themselves, cognitively rehearsing what actions they will and will not take, and teaching each other how to stand up against bullying can strengthen their belief in successfully overcoming bullying. The third source of self-efficacy beliefs is verbal persuasion. It occurs when others express belief in the capabilities of the student. Didactic training and pedagogic materials on bullying were employed as part of the group counseling sessions. The fourth source of self-efficacy appraisals is physiological and affective states. Students expect positive emotions to be associated with successful skilled performance. When students practice new skills, the counselor can apply several strategies to enhance self-efficacy while reducing and controlling emotional and physical states. The intervention plan incorporated all four sources of self-efficacy training during several interactive group sessions regarding bullying. There was an increased understanding around the issues of bullying, resulting in the students’ belief of their ability to perform protective behaviors and deter future occurrences. The outcome of the intervention plan resulted in a reduction of reported bullying incidents. 
In conclusion, self-efficacy training can be an effective counseling and teaching strategy for addressing and enhancing the social-emotional functioning of deaf and hard of hearing adolescents.
Keywords: counseling, self-efficacy, bullying, social-emotional development, mental health, deaf and hard of hearing students
294 Teaching English as a Foreign Language: Insights from the Philippine Context
Authors: Arlene Villarama, Micol Grace Guanzon, Zenaida Ramos
Abstract:
This paper provides insights into teaching English as a Foreign Language in the Philippines. The authors reviewed relevant theories and literature and provide an analysis of the issues in teaching English in the Philippine setting in the light of these theories. The authors conducted an investigation at Bagong Barrio National High School (BBNHS), a public school in Caloocan City. The institution has a population of nearly 3,000 students. The performances of 365 randomly chosen respondents were scrutinised. Findings regarding the success of teaching English as a foreign language to Filipino children were highlighted. These include the respondents’ family background, surroundings, way of living, and their behavior and understanding regarding education. The results show that there is a significant relationship between the demonstrative, communal, and logical areas that affect the efficacy of introducing English as a foreign language. Filipino children, by nature, are adventurous and naturally joyful, even about little things. They are born with natural skills and capabilities to discover new things. They value activities and work that ignite their curiosity. They love to be recognised and are inspired the most when given the assurance of acceptance and belongingness. Fun is the appealing influence that ignites and motivates learning. The magic word is excitement. The study reveals the many facets of the accumulation and transmission of erudition in the introduction and administration of English as a foreign language; it runs and passes through different channels of diffusion. Along the way, there are elements that act as obstructions in the protocols through which knowledge is to be gathered. Data gained from the respondents reveal a reality that is beyond one’s imagination. One significant factor behind the inefficacy of understanding and using English as a foreign language is an erroneous outlook gained from an old belief handed down from generation to generation. This accepted perception about the power and influence of the use of language gives novices either a negative or a positive notion. The investigation shows that a higher number of dislikes in the use of English can be traced back to the belief in the story of how the English language came into existence. The belief that only the great and the influential have the right to use English as a means of communication kills the joy of acceptance. This notion has to be examined so as to provide a solution to, if not eradicate, the misconceptions that lie behind the matter. The result of the authors’ research depicts a substantial correlation between the emotional (demonstrative), social (communal), and intellectual (logical) areas. The focus of this paper is to bring out the right notions and disclose the misconceptions with regard to teaching English as a foreign language. It concentrates on the emotional, social, and intellectual areas of Filipino learners and how these areas affect the transmittance and accumulation of learning. The authors’ aim is to formulate logical ways and techniques that would open up new beginnings in the understanding and acceptance of the subject matter.
Keywords: accumulation, behaviour, facets, misconceptions, transmittance
293 Indeterminacy: An Urban Design Tool to Measure Resilience to Climate Change, a Caribbean Case Study
Authors: Tapan Kumar Dhar
Abstract:
How well are our city forms designed to adapt to climate change and its resulting uncertainty? What urban design tools can be used to measure and improve resilience to climate change, and how would they do so? In addressing these questions, this paper considers indeterminacy, a concept originated in the resilience literature, to measure the resilience of built environments. In the realm of urban design, ‘indeterminacy’ can be referred to as built-in design capabilities of an urban system to serve different purposes which are not necessarily predetermined. An urban system, particularly that with a higher degree of indeterminacy, can enable the system to be reorganized and changed to accommodate new or unknown functions while coping with uncertainty over time. Underlying principles of this concept have long been discussed in the urban design and planning literature, including open architecture, landscape urbanism, and flexible housing. This paper argues that the concept indeterminacy holds the potential to reduce the impacts of climate change incrementally and proactively. With regard to sustainable development, both planning and climate change literature highly recommend proactive adaptation as it involves less cost, efforts, and energy than last-minute emergency or reactive actions. Nevertheless, the concept still remains isolated from resilience and climate change adaptation discourses even though the discourses advocate the incremental transformation of a system to cope with climatic uncertainty. This paper considers indeterminacy, as an urban design tool, to measure and increase resilience (and adaptive capacity) of Long Bay’s coastal settlements in Negril, Jamaica. Negril is one of the popular tourism destinations in the Caribbean highly vulnerable to sea-level rise and its associated impacts. This paper employs empirical information obtained from direct observation and informal interviews with local people. While testing the tool, this paper deploys an urban morphology study, which includes land use patterns and the physical characteristics of urban form, including street networks, block patterns, and building footprints. The results reveal that most resorts in Long Bay are designed for pre-determined purposes and offer a little potential to use differently if needed. Additionally, Negril’s street networks are found to be rigid and have limited accessibility to different points of interest. This rigidity can expose the entire infrastructure further to extreme climatic events and also impedes recovery actions after a disaster. However, Long Bay still has room for future resilient developments in other relatively less vulnerable areas. In adapting to climate change, indeterminacy can be reached through design that achieves a balance between the degree of vulnerability and the degree of indeterminacy: the more vulnerable a place is, the more indeterminacy is useful. This paper concludes with a set of urban design typologies to increase the resilience of coastal settlements.Keywords: climate change adaptation, resilience, sea-level rise, urban form
292 Injunctions, Disjunctions, Remnants: The Reverse of Unity
Authors: Igor Guatelli
Abstract:
The universe of aesthetic perception entails impasses about sensitive divergences that each text or visual object may be subjected to. If approached through intertextuality that is not based on the misleading notion of kinships or similarities a priori admissible, the possibility of anachronistic, heterogeneous - and non-diachronic - assemblies can enhance the emergence of interval movements, intermediate, and conflicting, conducive to a method of reading, interpreting, and assigning meaning that escapes the rigid antinomies of the mere being and non-being of things. In negative, they operate in a relationship built by the lack of an adjusted meaning set by their positive existences, with no remainders; the generated interval becomes the remnant of each of them; it is the opening that obscures the stable positions of each one. Without the negative of absence, of that which is always missing or must be missing in a text, concept, or image made positive by history, nothing is perceived beyond what has been already given. Pairings or binary oppositions cannot lead only to functional syntheses; on the contrary, methodological disturbances accumulated by the approximation of signs and entities can initiate a process of becoming as an opening to an unforeseen other, transformation until a moment when the difficulties of [re]conciliation become the mainstay of a future of that sign/entity, not envisioned a priori. A counter-history can emerge from these unprecedented, misadjusted approaches, beginnings of unassigned injunctions and disjunctions, in short, difficult alliances that open cracks in a supposedly cohesive history, chained in its apparent linearity with no remains, understood as a categorical historical imperative. Interstices are minority fields that, because of their opening, are capable of causing opacity in that which, apparently, presents itself with irreducible clarity. Resulting from an incomplete and maladjusted [at the least dual] marriage between the signs/entities that originate them, this interval may destabilize and cause disorder in these entities and their own meanings. The interstitials offer a hyphenated relationship: a simultaneous union and separation, a spacing between the entity’s identity and its otherness or, alterity. One and the other may no longer be seen without the crack or fissure that now separates them, uniting, by a space-time lapse. Ontological, semantic shifts are caused by this fissure, an absence between one and the other, one with and against the other. Based on an improbable approximation between some conceptual and semantic shifts within the design production of architect Rem Koolhaas and the textual production of the philosopher Jacques Derrida, this article questions the notion of unity, coherence, affinity, and complementarity in the process of construction of thought from these ontological, epistemological, and semiological fissures that rattle the signs/entities and their stable meanings. Fissures in a thought that is considered coherent, cohesive, formatted are the negativity that constitutes the interstices that allow us to move towards what still remains as non-identity, which allows us to begin another story.Keywords: clearing, interstice, negative, remnant, spectrum
291 Supermarket Shoppers Perceptions to Genetically Modified Foods in Trinidad and Tobago: Focus on Health Risks and Benefits
Authors: Safia Hasan Varachhia, Neela Badrie, Marsha Singh
Abstract:
Genetic modification of food is an innovative technology that offers a host of benefits and advantages to consumers. Consumer attitudes towards GM food and GM technologies can be identified as a major determinant in conditioning market forces and in encouraging policy makers and regulators to recognize the significance of consumer influence on the market. This study aimed to investigate and evaluate the extent of consumer awareness, knowledge, perception and acceptance of GM foods and their associated health risks and benefits in Trinidad and Tobago, West Indies. The specific objectives of this study were to determine consumer awareness of GM foods, ascertain their perspectives on the health and safety risks and ethical issues associated with GM foods, and determine whether labeling of GM foods and ingredients would influence consumers’ willingness to purchase GM foods. A survey comprising a questionnaire of 40 questions, both open-ended and close-ended, was administered to 240 shoppers in small, medium and large-scale supermarkets throughout Trinidad between April and May 2015 using convenience sampling. This survey investigated consumer awareness, knowledge, perception and acceptance of GM foods and their associated health risks/benefits. The data were analyzed using SPSS 19.0 and Minitab 16.0. One-way ANOVA investigated the effects of supermarket category and knowledge scores on shoppers’ awareness, knowledge, perception and acceptance of GM foods. Linear regression tested whether demographic variables (category of supermarket, age of consumer, level of education) were useful predictors of consumers’ knowledge of GM foods. More than half of respondents (64.3%) were aware of GM foods and GM technologies, 28.3% of consumers indicated the presence of GM foods in local supermarkets and 47.1% claimed to be knowledgeable about GM foods. Furthermore, significant associations (P < 0.05) were observed between demographic variables (age, income, and education) and consumer knowledge of GM foods. Also, significant differences (P < 0.05) were observed between demographic variables (education, gender, and income) and consumer knowledge of GM foods. In addition, age, education, gender and income (P < 0.05) were useful predictors of consumer knowledge of GM foods. There was a contradiction: whilst 35% of consumers considered GM foods safe for consumption, 70% of consumers were wary of their unknown health risks. About two-thirds of respondents (67.5%) considered the creation of GM foods morally wrong and unethical. Regarding GM food labeling preferences, 88% of consumers preferred mandatory labeling of GM foods and 67% of consumers specified that any food product containing a trace of GM food ingredients should require mandatory GM labeling. Also, despite the declaration of GM food ingredients on food labels and the reassurance of their safety for consumption by food safety and regulatory institutions, the majority of consumers (76.1%) still preferred conventionally produced foods over GM foods. The study revealed the need to inform shoppers of the presence of GM foods and technologies, to present the scientific evidence on the benefits and risks, and to develop a policy on labeling so that informed choices can be made.
Keywords: genetically modified foods, income, labeling, consumer awareness, ingredients, morality and ethics, policy
290 Biotechnology Approach: A Tool of Enhancement of Sticky Mucilage of Pulicaria Incisa (Medicinal Plant) for Wounds Treatment
Authors: Djamila Chabane, Asma Rouane, Karim Arab
Abstract:
Depending on the chemical substances responsible for the pharmacological effects, a future therapeutic drug might be produced by extraction from whole plants or from calluses initiated from some of their parts. Optimized callus culture protocols now offer the possibility of using cell culture techniques for vegetative propagation and open the way for further studies on secondary metabolites and drug development. In Algerian traditional medicine, Pulicaria incisa (Asteraceae) is used in the treatment of daily troubles (stomachache, headache, cold, sore throat and rheumatic arthralgia). Field findings revealed that many healers use fresh parts (leaves, flowers) of this plant to treat skin wounds. This study aims to evaluate the healing efficiency of an artisanal cream, prepared from the sticky mucilage isolated from calluses, on dermal wounds in animal models. Callus cultures were initiated from reproductive explants (young inflorescences) excised from adult plants, transferred to an MS basal medium supplemented with growth regulators and maintained in the dark for four months. Many callus types were obtained, with various colors and aspects (friable, compact). Several subcultures of calli were performed to enhance mucilage accumulation. After extraction, the mucilage extracts were tested on animal models as follows. The wound healing potential was studied by inducing dermal wounds (1 cm diameter) on the dorsolumbar region of Rattus norvegicus; after hair removal, different samples of the cream were applied to groups of three rats each, comprising two control groups (one treated with Vaseline and one without any treatment) and two experimental groups (experimental group 1, treated with the reference ointment Madecassol®, and experimental group 2, treated with the callus mucilage cream) for a period of seventeen days. The evolution of the healing activity was estimated by calculating the percentage reduction of the wound area for all tested treatments compared to the controls, using AutoCAD software. The healing effect of the cream prepared from callus mucilage was 99.79%, compared to 99.76% for Madecassol®. Regarding the treatment time, significant healing activity was observed after 17 days, comparable to that of the reference pharmaceutical product, without any wound infection. Madecassol® is effective because it stimulates and regulates the production of collagen, a fibrous matrix essential for wound healing. The mucilage extracts also showed a high capacity to heal the skin without any infection. Given this pharmacological activity, we suggest using calluses produced by in vitro culture to produce new compounds for skin care and treatment.
Keywords: calluses, Pulicaria incisa, mucilage, wounds
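The healing activity above is reported as the percentage reduction of wound area relative to the initial wound. A minimal sketch of that calculation; the area values are hypothetical placeholders, not measurements from the study.

```python
def wound_area_reduction_percent(initial_area_cm2, current_area_cm2):
    """Percent reduction of wound area relative to the initial wound."""
    return (initial_area_cm2 - current_area_cm2) / initial_area_cm2 * 100.0

# Hypothetical areas for a 1 cm diameter wound (~0.785 cm^2) followed to day 17
print(f"{wound_area_reduction_percent(0.785, 0.002):.2f} % reduction")
```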
289 Auto Surgical-Emissive Hand
Authors: Abhit Kumar
Abstract:
The world is full of master-slave telemanipulators in which the doctor masters the console and the surgical arm performs the operations; these robots are passive robots. What the world needs to recognize is that, in using these passive robots, we still require doctors to operate the consoles, so the concept of robotics is not yet fully utilized; hence the focus should be on active robots. The Auto Surgical-Emissive Hand uses the concept of active robotics: this anthropomorphic hand focuses on autonomous surgical, emissive and scanning operation, enabled by a three-way emission system (laser beam / icy steam between -5°C and 5°C / TIC) embedded in the palm of the anthropomorphic hand and structured in the form of a three-way disc. The fingers of the AS-EH (Auto Surgical-Emissive Hand), as it is called, will have tactile, force and pressure sensors embedded in them so that the mechanical sensing of force, pressure and physical contact with the external subject can be maintained. Our main focus, however, is on the concept of “emission”. The question arises of how the three unrelated methods will work together once merged in a single programmed hand: all three methods are utilized according to the needs of the external subject. The laser, when used, is emitted via a pin-sized outlet; this radiation is channelized through a thin channel that connects internally to the palm of the surgical hand and leads to the pin-sized outlet. Here the laser is used to emit radiation sufficient to cut open the skin for the removal of metal scrap or any other foreign material while the patient is under anesthesia, keeping the complexity of the operation very low. At the same time, the TIC, fitted with an accurate temperature compensator (ATC), provides a real-time feed of the surgery in the form of a heat image; this gives us the chance to analyze the level, and the ATC helps determine elevated body temperature while the operation is proceeding. The thermal imaging camera is rooted internally in the AS-EH while also being connected externally to real-time software to provide live feedback. The icy steam provides a cooling effect before and after the operation. To appreciate this concept, consider a simple situation: if a finger remains in icy water for a long time, it freezes, blood flow stops and the portion becomes numb and isolated; even if you try to pinch it, there is no sensation, because the nerve impulse is not coordinated with the brain and the sensory receptors are not activated, which means no sense of touch is observed. Utilizing the same principle, the icy steam can be emitted via a pin-sized hole onto the area of concern at a temperature below 273 K, which will frost the area, after which the operation can be done; this steam can also be used to desensitize the pain while the operation is in progress. The mathematical calculations, algorithms and programming governing the working and movement of this hand will be installed in the system prior to the procedure. Since the AS-EH is a programmable hand, it comes with limitations; hence, this AS-EH robot will perform surgical processes of low complexity only.
Keywords: active robots, algorithm, emission, icy steam, TIC, laser
288 Efficient Treatment of Azo Dye Wastewater with Simultaneous Energy Generation by Microbial Fuel Cell
Authors: Soumyadeep Bhaduri, Rahul Ghosh, Rahul Shukla, Manaswini Behera
Abstract:
The textile industry consumes a substantial amount of water throughout the processing and production of textile fabrics. This water eventually turns into wastewater, where it becomes a damaging nuisance due to its dye content. Wastewater streams contain a percentage ranging from 2.0% to 50.0% of the total weight of dye used, depending on the dye class. The management of dye effluent in textile industries presents a formidable challenge to global sustainability. The current focus is on implementing wastewater treatment technologies that enable the recycling of wastewater, reduce energy usage and offset carbon emissions. A microbial fuel cell (MFC) is a device that utilizes microorganisms as biocatalysts to effectively treat wastewater while also producing electricity. The MFC harnesses the chemical energy present in wastewater by oxidizing organic compounds in the anodic chamber and reducing an electron acceptor in the cathodic chamber, thereby generating electricity. This research investigates the potential of MFCs to tackle the challenge of azo dye removal while simultaneously generating electricity. Although MFCs are well-established for wastewater treatment, their application in dye decolorization with concurrent electricity generation remains relatively unexplored. This study aims to address this gap by assessing the effectiveness of MFCs as a sustainable solution for treating wastewater containing azo dyes. By harnessing microorganisms as biocatalysts, MFCs offer a promising avenue for environmentally friendly dye effluent management. The performance of MFCs in treating azo dyes and generating electricity was evaluated by optimizing the Chemical Oxygen Demand (COD) and Hydraulic Retention Time (HRT) of the influent. COD and HRT values ranged from 1600 mg/L to 2400 mg/L and 5 to 9 days, respectively. Results showed that the maximum open circuit voltage (OCV) reached 648 mV at a COD of 2400 mg/L and an HRT of 5 days. Additionally, a maximum COD removal of 98% and a maximum color removal of 98.91% were achieved at a COD of 1600 mg/L and an HRT of 9 days. Furthermore, the study observed a maximum power density of 19.95 W/m³ at a COD of 2400 mg/L and an HRT of 5 days. Electrochemical analyses, including linear sweep voltammetry (LSV), cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS), were performed to determine the response current and internal resistance of the system. To optimize pH and dye concentration, pH values were varied from 4 to 10, and dye concentrations ranged from 25 mg/L to 175 mg/L. The highest voltage output of 704 mV was recorded at pH 7, while a dye concentration of 100 mg/L yielded the maximum output of 672 mV. This study demonstrates that MFCs offer an efficient and sustainable solution for treating azo dyes in textile industry wastewater while concurrently generating electricity. These findings suggest the potential of MFCs to contribute to environmental remediation and sustainable development efforts on a global scale.
Keywords: textile wastewater treatment, microbial fuel cell, renewable energy, sustainable wastewater treatment
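Two of the performance figures quoted above, COD removal efficiency and volumetric power density (W/m³), follow from routine calculations. A minimal sketch of both, assuming standard definitions; the numeric inputs are illustrative placeholders rather than measurements from this study.

```python
def cod_removal_percent(cod_in_mg_l, cod_out_mg_l):
    """COD removal efficiency (%) from influent and effluent COD."""
    return (cod_in_mg_l - cod_out_mg_l) / cod_in_mg_l * 100.0

def power_density_w_per_m3(voltage_v, current_a, anode_volume_l):
    """Volumetric power density (W/m^3) referred to the anode chamber volume."""
    return voltage_v * current_a / (anode_volume_l / 1000.0)

print(f"COD removal: {cod_removal_percent(1600, 32):.1f} %")                     # ~98 %
print(f"Power density: {power_density_w_per_m3(0.45, 0.012, 0.27):.1f} W/m^3")   # ~20 W/m^3
```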
287 Locating the Role of Informal Urbanism in Building Sustainable Cities: Insights from Ghana
Authors: Gideon Abagna Azunre
Abstract:
Informal urbanism is perhaps the most ubiquitous urban phenomenon in sub-Saharan Africa (SSA) and Ghana specifically. Estimates suggest that about two-fifths of urban dwellers (37.9%) in Ghana live in informal settlements, while two-thirds of the working labour force are within the informal economy. This makes Ghana invariably an ‘informal country.’ Informal urbanism involves economic and housing activities that are – in law or in practice – not covered (or insufficiently covered) by formal regulations. Many urban folks rely on informal urbanism as a survival strategy due to limited formal waged employment opportunities or rising home prices in the open market. In an era of globalizing neoliberalism, this struggle to survive in cities resonates with several people globally. For years now, there have been intense debates on the utility of informal urbanism – both its economic and housing dimensions – in developing sustainable cities. While some scholars believe that informal urbanism is beneficial to the sustainable city development agenda, others argue that it generates unbearable negative consequences and it symbolizes lawlessness and squalor. Consequently, the main aim of this research was to dig below the surface of the narratives to locate the role of informal urbanism in the quest for sustainable cities. The research geographically focused on Ghana and its burgeoning informal sector. Also, both primary and secondary data were utilized for the analysis; Secondary data entailed a synthesis of the fragmented literature on informal urbanism in Ghana, while primary data entailed interviews with informal stakeholders (such as informal settlement dwellers), city authorities, and planners. These two data sets were weaved together to discover the nexus between informal urbanism and the tripartite dimensions of sustainable cities – economic, social, and environmental. The results from the research showed a two-pronged relationship between informal urbanism and the three dimensions of sustainable city development. In other words, informal urbanism was identified to both positively and negatively affect the drive for sustainable cities. On the one hand, it provides employment (particularly to women), supplies households’ basic needs (shelter, health, water, and waste management), and enhances civic engagement. However, on the other hand, it perpetuates social and gender inequalities, insecurity, congestion, and pollution. The research revealed that a ‘black and white’ interpretation and policy approach is incapable of capturing the complexities of informal urbanism. Therefore, trying to eradicate or remove it from the urbanscape because it exhibits some negative consequences means cities will lose their positive contributions. The inverse also holds true. A careful balancing act is necessary to maximize the benefits and minimize the costs. Overall, the research presented a de-colonial theorization of informal urbanism and thus followed post-colonial scholars’ clarion call to African cities to embrace the paradox of informality and find ways to integrate it into the city-building process.Keywords: informal urbanism, sustainable city development, economic sustainability, social sustainability, environmental sustainability, Ghana
286 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays
Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal
Abstract:
Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have become widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in the configuration memory of SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on electronics used in space are much higher than on Earth. Thus, developing fault-tolerant techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximation-tolerant applications. This vulnerability estimation is essential for trading off the overhead introduced by fault tolerance techniques against system robustness. We study applications in which the exact final output value is not necessarily always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. In contrast to conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, in our proposed method a threshold margin is considered depending on the use-case application. Given the proposed threshold margin in our model, a failure occurs only when the difference between the erroneous output value and the expected output value is more than this margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating the ACMVF is implemented on the Zynq-7000 FPGA platform. This system makes use of the Single Event Mitigation (SEM) IP core to inject SEUs into configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when a 1% to 10% deviation from the correct output is tolerated, the number of counted failures is reduced by 41% to 59% compared with the number of failures counted by conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% in the case that a 10% deviation in the output results is acceptable. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
Keywords: fault tolerance, FPGA, single event upset, approximate computing
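The ACMVF defined above counts an injected SEU as a failure only when the circuit's output deviates from the expected value by more than the chosen margin, and then divides by the total number of injections. A minimal sketch of that counting rule; the function name, the 10% margin and the sample outputs are assumptions for illustration, not data from the paper.

```python
def acmvf(expected_outputs, observed_outputs, rel_margin=0.10):
    """Approximate-based Configuration Memory Vulnerability Factor.

    A fault injection counts as a failure only when the observed output
    deviates from the expected output by more than the relative margin.
    """
    failures = 0
    for expected, observed in zip(expected_outputs, observed_outputs):
        deviation = abs(observed - expected) / abs(expected) if expected else abs(observed)
        if deviation > rel_margin:
            failures += 1
    return failures / len(expected_outputs)

# Hypothetical 32-bit adder outputs for 8 SEU injections, expected result 100
expected = [100] * 8
observed = [100, 101, 250, 100, 92, 100, 40, 100]
print(acmvf(expected, observed))  # 0.25 -> only the 250 and 40 outputs exceed the 10 % margin
```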
285 Organic Rankine Cycles (ORC) for Mobile Applications: Economic Feasibility in Different Transportation Sectors
Authors: Roberto Pili, Alessandro Romagnoli, Hartmut Spliethoff, Christoph Wieland
Abstract:
Internal combustion engines (ICE) are today the most common energy system to drive vehicles and transportation systems. Numerous studies state that 50-60% of the fuel energy content is lost to the ambient as sensible heat. ORC offers a valuable alternative to recover such waste heat from ICE, leading to fuel energy savings and reduced emissions. In contrast, the additional weight of the ORC affects the net energy balance of the overall system and the ORC occupies additional volume that competes with vehicle transportation capacity. Consequently, a lower income from delivered freight or passenger tickets can be achieved. The economic feasibility of integrating an ORC into an ICE and the resulting economic impact of weight and volume have not been analyzed in open literature yet. This work intends to define such a benchmark for ORC applications in the transportation sector and investigates the current situation on the market. The applied methodology refers to the freight market, but it can be extended to passenger transportation as well. The economic parameter X is defined as the ratio between the variation of the freight revenues and the variation of fuel costs when an ORC is installed as a bottoming cycle for an ICE with respect to a reference case without ORC. A good economic situation is obtained when the reduction in fuel costs is higher than the reduction of revenues for the delivered freight, i.e. X<1. Through this constraint, a maximum allowable change of transport capacity for a given relative reduction in fuel consumption is determined. The specific fuel consumption is influenced by the ORC in two ways. Firstly because the transportable freight is reduced and secondly because the total weight of the vehicle is increased. Note, that the generated electricity of the ORC influences the size of the ICE and the fuel consumption as well. Taking the above dependencies into account, the limiting condition X = 1 results in a second order equation for the relative change in transported cargo. The described procedure is carried out for a typical city bus, a truck of 24-40 t of payload capacity, a middle-size freight train (1000 t), an inland water vessel (Va RoRo, 2500 t) and handysize-like vessel (25000 t). The maximum allowable mass and volume of the ORC are calculated in dependence of its efficiency in order to satisfy X < 1. Subsequently, these values are compared with weight and volume of commercial ORC products. For ships of any size, the situation appears already highly favorable. A different result is obtained for road and rail vehicles. For trains, the mass and the volume of common ORC products have to be reduced at least by 50%. For trucks and buses, the situation looks even worse. The findings of the present study show a theoretical and practical approach for the economic application of ORC in the transportation sector. In future works, the potential for volume and mass reduction of the ORC will be addressed, together with the integration of an economic assessment for the ORC.Keywords: ORC, transportation, volume, weight
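The feasibility criterion described above compares the freight revenue lost to the ORC's added weight and volume with the fuel cost savings it delivers; X < 1 marks a favourable case. A minimal sketch of that check; the yearly figures are invented placeholders, not results from the study.

```python
def orc_feasibility_x(lost_freight_revenue, fuel_cost_savings):
    """Economic parameter X: change in freight revenue over change in fuel costs."""
    return abs(lost_freight_revenue) / abs(fuel_cost_savings)

# Illustrative yearly figures for a cargo vessel retrofit (currency units per year)
x = orc_feasibility_x(lost_freight_revenue=12_000, fuel_cost_savings=85_000)
print(f"X = {x:.2f} -> {'favourable (X < 1)' if x < 1 else 'unfavourable'}")
```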
284 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units
Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz
Abstract:
Sepsis is a syndrome of physiological and biochemical abnormalities induced by severe infection and carries high mortality and morbidity; therefore, the severity of the patient's condition must be assessed quickly. After a patient's admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques and the data of a population that shares a common characteristic could lead to the development of customized mortality prediction scores with better performance. This study presents the development of a score for one-year mortality prediction in patients admitted to an ICU with a sepsis diagnosis. A total of 5650 ICU admissions extracted from the MIMIC-III database were evaluated and divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics and clinical information from the first 24 hours after ICU admission were used to develop the mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (Stochastic Gradient Boosting) variable importance methodologies were used to select the set of variables that make up the score; each of these variables was dichotomized at a cut-off point that divides the population into two groups with different mean mortalities; if the patient is in the group with the higher mortality, a one is assigned to the particular variable, otherwise a zero is assigned. These binary variables were used in a logistic regression (LR) model, and its coefficients were rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by each binary variable and summed. The one-year mortality probability was estimated using the score as the only variable in an LR model. The predictive power of the score was evaluated using the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS) and Simplified Acute Physiology Score II (SAPS II) on the same validation subset. Observed and predicted mortality rates within deciles of estimated probability were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality; it was also observed that the number of events (deaths) indeed increases from the decile with the lowest probabilities to the decile with the highest probabilities. Sepsis is a syndrome that carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians to quickly and accurately predict a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting
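The scoring procedure above dichotomizes the selected variables, fits a logistic regression, and rounds the coefficients to integers that become point values; a second regression on the summed score yields the one-year mortality probability. The sketch below illustrates those steps on synthetic data; the scikit-learn tooling, the variable names and all numbers are assumptions for illustration, not the MIMIC-III-derived model itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

np.random.seed(0)

# Synthetic dichotomized predictors (1 = high-risk side of each cut-off),
# e.g. age > 65, lactate > 2 mmol/L, MAP < 65 mmHg, mechanical ventilation
X = np.random.randint(0, 2, size=(500, 4))
y = (X.sum(axis=1) + np.random.rand(500) > 2.5).astype(int)  # synthetic one-year mortality

model = LogisticRegression().fit(X, y)
points = np.rint(model.coef_[0]).astype(int)   # rounded coefficients become point values
print("points per variable:", points)

# The total score is the sum of points; a second LR on the score gives a probability
scores = X @ points
risk_model = LogisticRegression().fit(scores.reshape(-1, 1), y)
print("predicted one-year mortality for a score of 3:",
      risk_model.predict_proba([[3]])[0, 1])
```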
283 Assessing the Outcomes of Collaboration with Students on Curriculum Development and Design on an Undergraduate Art History Module
Authors: Helen Potkin
Abstract:
This paper presents a practice-based case study of a project in which the student group designed and planned the curriculum content, classroom activities and assessment briefs in collaboration with the tutor. It focuses on the co-creation of the curriculum within a history and theory module, Researching the Contemporary, which runs for BA (Hons) Fine Art and Art History and for BA (Hons) Art Design History Practice at Kingston University, London. The paper analyses the potential of collaborative approaches to engender students’ investment in their own learning and to encourage reflective and self-conscious understandings of themselves as learners. It also addresses some of the challenges of working in this way, attending to the risks involved and feelings of uncertainty produced in experimental, fluid and open situations of learning. Alongside this, it acknowledges the tensions inherent in adopting such practices within the framework of the institution and within the wider of context of the commodification of higher education in the United Kingdom. The concept underpinning the initiative was to test out co-creation as a creative process and to explore the possibilities of altering the traditional hierarchical relationship between teacher and student in a more active, participatory environment. In other words, the project asked about: what kind of learning could be imagined if we were all in it together? It considered co-creation as producing different ways of being, or becoming, as learners, involving us reconfiguring multiple relationships: to learning, to each other, to research, to the institution and to our emotions. The project provided the opportunity for students to bring their own research and wider interests into the classroom, take ownership of sessions, collaborate with each other and to define the criteria against which they would be assessed. Drawing on students’ reflections on their experience of co-creation alongside theoretical considerations engaging with the processual nature of learning, concepts of equality and the generative qualities of the interrelationships in the classroom, the paper suggests that the dynamic nature of collaborative and participatory modes of engagement have the potential to foster relevant and significant learning experiences. The findings as a result of the project could be quantified in terms of the high level of student engagement in the project, specifically investment in the assessment, alongside the ambition and high quality of the student work produced. However, reflection on the outcomes of the experiment prompts a further set of questions about the nature of positionality in connection to learning, the ways our identities as learners are formed in and through our relationships in the classroom and the potential and productive nature of creative practice in education. Overall, the paper interrogates questions of what it means to work with students to invent and assemble the curriculum and it assesses the benefits and challenges of co-creation. Underpinning it is the argument that, particularly in the current climate of higher education, it is increasingly important to ask what it means to teach and to envisage what kinds of learning can be possible.Keywords: co-creation, collaboration, learning, participation, risk
Procedia PDF Downloads 123
282 [Keynote Talk]: New Generations and Employment: An Exploratory Study about Tensions between the Psycho-Social Characteristics of the Generation Z and Expectations and Actions of Organizational Structures Related with Employment (CABA, 2016)
Authors: Esteban Maioli
Abstract:
Generational studies have an important research tradition in social and human sciences. On the one hand, the speed of social change in the context of globalization imposes the need to research the transformations identified both in the subjectivity of the agents involved and in their inclusion in the institutional matrix, specifically employment. Generation Z (generally considered the population group born after 1995) has unique psycho-social characteristics. Gen Z is characterized by a different set of values, beliefs, attitudes and ambitions that impact their concrete action in organizational structures. On the other hand, managers often have to deal with generational differences in the workplace. Organizations have members who belong to different generations; they had never before faced the challenge of having such a diverse group of members. The members of each historical generation are characterized by a different set of values, beliefs, attitudes and ambitions that are manifest in their concrete action in organizational structures. Gen Z is the only one that can fully be considered "global", since its members were born into the consolidated context of globalization. Some salient features of Generation Z can be summarized as follows. They are the first generation born fully into a digital world. Social networks and technology are integrated into their lives. They are concerned about the challenges of the modern world (poverty, inequality, climate change, among others). They are self-expressive, more liberal and open to change. They often bore easily, with short attention spans. They do not like routine tasks. They want to achieve a good life-work balance, and they are interested in a flexible work environment, as opposed to a traditional work schedule. They are critical thinkers who come with innovative and creative ideas to help. The research design considered methodological triangulation. Data were collected with two techniques: a self-administered survey with multiple-choice questions and attitudinal scales, applied over a non-probabilistic sample selected by reasoned decision. In line with the multi-method strategy, in-depth interviews were also conducted. Organizations constantly face new challenges. One of the biggest is learning to manage a multi-generational workforce. While Gen Z has not yet been fully incorporated (it is expected to be within five years or so), many organizations have already begun to implement a series of changes in recruitment and development. The main obstacle to retaining young talent is the gap between the expectations of iGen applicants and what companies offer. Members of the iGen expect not only a good salary and job stability but also a clear career plan. Generation Z needs immediate feedback on their tasks. However, many organizations have yet to improve both motivation and monitoring practices. It is essential for companies to review organizational practices anchored in the culture of the organization. Keywords: employment, expectations, generation Z, organizational culture, organizations, psycho-social characteristics
Procedia PDF Downloads 203
281 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators
Authors: N. Naz, A. D. Domenico, M. N. Huda
Abstract:
Soft Robotics is a rapidly growing multidisciplinary field where robots are fabricated using highly deformable materials, motivated by bioinspired designs. Their high dexterity and adaptability to the external environment during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart-material actuation. Among them, the Soft Pneumatic Actuator, also known as SPA, remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. However, at present, most SPA fabrication is still based on traditional molding and casting techniques, in which a mold is 3D printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming and involves intensive manual labour, with limited repeatability and design accuracy. Recent advancements in direct 3D printing of soft materials can significantly reduce repetitive manual tasks, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research work is to design and analyse the Soft Pneumatic Actuator (SPA) utilizing both conventional casting and modern direct 3D printing technologies. The mold of the SPA for traditional casting is 3D printed using fused deposition modeling (FDM) with polylactic acid (PLA) thermoplastic wire. Hyperelastic soft materials such as Ecoflex-0030/0050 are cast into the mold and consolidated in a lab oven. The bending behaviour is observed experimentally at different air compressor pressures to ensure uniform bending without failure. For direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used. The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration and strain distribution of different hyperelastic materials after pressurization. FEM analysis is carried out using Ansys Workbench software with a Yeoh 2nd-order hyperelastic material model. The FEM accounts for large deformations, contact between surfaces, and the influence of gravity. For mesh generation, quadratic tetrahedron, hybrid, and constant-pressure meshes are used. The SPA is connected to a baseplate that is in connection with the air compressor. A fixed boundary is applied to the baseplate, and static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated results from FEM are compared with the experimental results. The experiments are performed in a laboratory set-up where the developed SPA is connected to a compressed air source with a pressure gauge. A comparison study based on performance analysis is done between the FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D-printed SPAs have been used to develop a three-finger soft pneumatic gripper, which has been tested for handling delicate objects. Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator
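The Yeoh form used in the FEM analysis depends only on the first strain invariant, which makes a quick sanity check of the silicone's stress-stretch response easy to script. Below is a minimal sketch, assuming an incompressible material under uniaxial extension; the coefficients C10 and C20 are illustrative placeholders, not fitted values for Ecoflex from this study.

```python
# Illustrative sketch (not the authors' code): evaluating a Yeoh 2nd-order
# strain energy density for an incompressible silicone under uniaxial extension.
# The coefficients below are assumed placeholder values, not fitted material data.

import numpy as np

def yeoh_energy(stretch, c10, c20):
    """Strain energy density W for incompressible uniaxial extension."""
    # Principal stretches: lambda_1 = stretch, lambda_2 = lambda_3 = 1/sqrt(stretch)
    i1 = stretch**2 + 2.0 / stretch          # first strain invariant
    return c10 * (i1 - 3.0) + c20 * (i1 - 3.0) ** 2

def yeoh_uniaxial_stress(stretch, c10, c20):
    """Nominal (engineering) stress from dW/d(stretch)."""
    i1 = stretch**2 + 2.0 / stretch
    dW_dI1 = c10 + 2.0 * c20 * (i1 - 3.0)
    dI1_dstretch = 2.0 * stretch - 2.0 / stretch**2
    return dW_dI1 * dI1_dstretch

if __name__ == "__main__":
    c10, c20 = 0.012e6, 0.002e6  # Pa, assumed values for illustration only
    for lam in np.linspace(1.0, 2.0, 6):
        print(f"stretch={lam:.2f}  W={yeoh_energy(lam, c10, c20):9.1f} J/m^3  "
              f"P={yeoh_uniaxial_stress(lam, c10, c20):9.1f} Pa")
```

Such a one-dimensional check is useful for verifying that the material constants fed to Ansys reproduce the expected soft response before running the full 3D model.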
Procedia PDF Downloads 90
280 Lentiviral-Based Novel Bicistronic Therapeutic Vaccine against Chronic Hepatitis B Induces Robust Immune Response
Authors: Mohamad F. Jamiluddin, Emeline Sarry, Ana Bejanariu, Cécile Bauche
Abstract:
Introduction: Over 360 million people are chronically infected with hepatitis B virus (HBV), of whom 1 million die each year from HBV-associated liver cirrhosis or hepatocellular carcinoma. Current treatment options for chronic hepatitis B depend on interferon-α (IFNα) or nucleos(t)ide analogs, which control virus replication but rarely eliminate the virus. Treatment with PEG-IFNα leads to a sustained antiviral response in only one third of patients. After withdrawal of the drugs, a rebound of viremia is observed in the majority of patients. Furthermore, long-term treatment is associated with the appearance of drug-resistant HBV strains, which is often the cause of therapy failure. Among the new therapeutic avenues being developed, therapeutic vaccines aimed at inducing immune responses similar to those found in resolvers are of growing interest. The high prevalence of chronic hepatitis B necessitates the design of better vaccination strategies capable of eliciting a broad spectrum of cell-mediated immunity (CMI) and humoral immune responses that can control chronic hepatitis B. Induction of HBV-specific T cells and B cells by therapeutic vaccination may be an innovative strategy to overcome virus persistence. Lentiviral vectors developed and optimized by THERAVECTYS, due to their ability to transduce non-dividing cells, including dendritic cells, and to induce CMI responses, have demonstrated their effectiveness as vaccination tools. Method: To develop an HBV therapeutic vaccine that can induce a broad but specific immune response, we generated a recombinant lentiviral vector carrying an IRES (Internal Ribosome Entry Site)-containing bicistronic construct which allows the co-expression of two vaccine products, namely an HBV T-cell epitope vaccine and an HBV virus-like particle (VLP) vaccine. The HBV T-cell epitope vaccine consists of an immunodominant cluster of CD4 and CD8 epitopes with spacers in between, the epitopes being derived from HBV surface protein, HBV core, HBV X and polymerase. The HBV VLP vaccine is an HBV core protein-based chimeric VLP displaying surface protein B-cell epitopes. In order to evaluate immunogenicity, mice were immunized with the lentiviral constructs by intramuscular injection. The T-cell and antibody responses to the two vaccine products were analyzed using an IFN-γ ELISpot assay and ELISA, respectively, to quantify the adaptive response to HBV antigens. Results: Following a single administration in mice, the lentiviral construct elicited robust antigen-specific IFN-γ responses to the encoded antigens. The HBV T-cell epitope vaccine demonstrated significantly higher T-cell immunogenicity than the HBV VLP vaccine. Importantly, we demonstrated by ELISA that antibodies are induced against both HBV surface protein and HBV core protein when mice are injected with the vaccine construct (p < 0.05). Conclusion: Our results highlight that THERAVECTYS lentiviral vectors may represent a powerful platform for immunization strategies against chronic hepatitis B. Our data suggest that the novel bicistronic lentiviral construct warrants further study, in combination with drugs or as a standalone antigen, as a therapeutic lentiviral HBV vaccine. The THERAVECTYS bicistronic HBV vaccine will be further evaluated in animal efficacy studies. Keywords: chronic hepatitis B, lentiviral vectors, therapeutic vaccine, virus-like particle
Procedia PDF Downloads 335
279 Autophagy Promotes Vascular Smooth Muscle Cell Migration in vitro and in vivo
Authors: Changhan Ouyang, Zhonglin Xie
Abstract:
In response to proatherosclerotic factors such as oxidized lipids, or to therapeutic interventions such as angioplasty, stents, or bypass surgery, vascular smooth muscle cells (VSMCs) migrate from the media to the intima, resulting in intimal hyperplasia, restenosis, graft failure, or atherosclerosis. These proatherosclerotic factors also activate autophagy in VSMCs. However, the functional role of autophagy in vascular health and disease remains poorly understood. In the present study, we determined the role of autophagy in the regulation of VSMC migration. Autophagy activity in cultured human aortic smooth muscle cells (HASMCs) and mouse carotid arteries was measured by Western blot analysis of microtubule-associated protein 1 light chain 3 B (LC3B) and P62. VSMC migration was determined by scratch wound assay and transwell migration assay. Ex vivo smooth muscle cell migration was determined using an aortic ring assay. In vivo SMC migration was examined by staining carotid artery sections for smooth muscle alpha actin (alpha SMA) after carotid artery ligation. To examine the relationship between autophagy and neointimal hyperplasia, C57BL/6J mice were subjected to carotid artery ligation. Seven days after injury, protein levels of Atg5, Atg7, Beclin1, and LC3B drastically increased and remained higher in the injured arteries three weeks after the injury. In parallel with the activation of autophagy, vascular injury induced neointimal hyperplasia, as estimated by an increased intima/media ratio. En face staining of the carotid artery showed that vascular injury enhanced alpha SMA staining in the intimal cells as compared with the sham operation. Treatment of HASMCs with platelet-derived growth factor (PDGF), one of the major factors for vascular remodeling in response to vascular injury, increased Atg7 and LC3 II protein levels and enhanced autophagosome formation. In addition, an aortic ring assay demonstrated that PDGF-treated aortic rings displayed an increase in neovessel formation compared with control rings. Whole-mount staining for CD31 and alpha SMA in PDGF-treated neovessels revealed that the neovessel structures were stained by alpha SMA but not CD31. In contrast, pharmacological and genetic suppression of autophagy inhibits VSMC migration. In particular, gene silencing of Atg7 inhibited VSMC migration induced by PDGF. Furthermore, three weeks after ligation, markedly decreased neointimal formation was found in mice treated with chloroquine, an inhibitor of autophagy. Quantitative morphometric analysis of the injured vessels revealed a marked reduction in the intima/media ratio in the mice treated with chloroquine. Conclusion: Autophagy activation increases VSMC migration, while autophagy suppression inhibits it. These findings suggest that autophagy suppression may be an important therapeutic strategy for atherosclerosis and intimal hyperplasia. Keywords: autophagy, vascular smooth muscle cell, migration, neointimal formation
Procedia PDF Downloads 314
278 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management for the war field, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of war-field data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models that enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Although modern technologies such as LiDAR and stereo sensors can generate 3D point clouds, monocular depth values estimated by deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images, introducing scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, together with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks: one network is trained with radar and optical bands, while the other is trained with DEM features, and together they compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on unannotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and distribute resources proficiently. Keywords: depth, deep learning, geovisualisation, satellite images
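To make the fusion idea concrete, the following is a minimal PyTorch sketch of the kind of two-branch encoder-decoder described above. The layer sizes, band counts and fusion-by-concatenation choice are assumptions for illustration, not the architecture actually trained in the study.

```python
# Illustrative sketch (assumed architecture, not the authors' released code):
# two small encoder-decoder branches, one for optical/radar bands and one for
# DEM-derived features, whose feature maps are fused before a dense depth head.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class EncoderDecoder(nn.Module):
    """Minimal U-shaped branch producing a feature map at input resolution."""
    def __init__(self, in_ch, feat_ch=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, feat_ch)
        self.enc2 = conv_block(feat_ch, feat_ch * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(feat_ch * 2 + feat_ch, feat_ch)

    def forward(self, x):
        e1 = self.enc1(x)                 # full resolution
        e2 = self.enc2(self.pool(e1))     # half resolution
        d = self.up(e2)                   # back to full resolution
        return self.dec(torch.cat([d, e1], dim=1))

class FusionDepthNet(nn.Module):
    """Fuses image-branch and DEM-branch features and regresses dense depth."""
    def __init__(self, image_bands=5, dem_bands=1, feat_ch=32):
        super().__init__()
        self.image_branch = EncoderDecoder(image_bands, feat_ch)
        self.dem_branch = EncoderDecoder(dem_bands, feat_ch)
        self.head = nn.Sequential(
            conv_block(feat_ch * 2, feat_ch),
            nn.Conv2d(feat_ch, 1, 1),
            nn.ReLU(inplace=True),        # depth/elevation is non-negative
        )

    def forward(self, image, dem):
        fused = torch.cat([self.image_branch(image), self.dem_branch(dem)], dim=1)
        return self.head(fused)

if __name__ == "__main__":
    # Assumed band counts: 4 optical + 1 radar channel, 1 DEM channel.
    model = FusionDepthNet(image_bands=5, dem_bands=1)
    image = torch.randn(2, 5, 128, 128)
    dem = torch.randn(2, 1, 128, 128)
    print(model(image, dem).shape)        # torch.Size([2, 1, 128, 128])
```

In practice each branch would be deeper and trained against the LiDAR-derived elevation targets, but the fusion point, concatenating the two branches' feature maps before the depth head, is the element the abstract emphasizes.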
Procedia PDF Downloads 13
277 Development of DNDC Modelling Method for Evaluation of Carbon Dioxide Emission from Arable Soils in European Russia
Authors: Olga Sukhoveeva
Abstract:
Carbon dioxide (CO2) is the main component of the carbon biogeochemical cycle and one of the most important greenhouse gases (GHG). Agriculture, and arable soils in particular, is one of the largest sources of GHG emissions to the atmosphere, including CO2. Models may be used for the estimation of GHG emissions from agriculture if they can be adapted to the conditions of different countries. The only model officially used at the national level for this purpose, in the United Kingdom and China, is DNDC (DeNitrification-DeComposition). In our research, the DNDC model is proposed for the estimation of GHG emissions from arable soils in Russia. The aim of our research was to create a method of using DNDC for the evaluation of CO2 emissions in Russia based on official statistical information. The target territory was the European part of Russia, where many field experiments are located. In the first step of the research, a database on climate, soil and cropping characteristics for the target region was created from governmental, statistical, and literature sources. The All-Russia Research Institute of Hydrometeorological Information – World Data Centre provides open daily data on average meteorological and climatic conditions. Spatial average values of maximum and minimum air temperature and of precipitation must be calculated over the region. Spatial average values of soil characteristics (soil texture, bulk density, pH, soil organic carbon content) can be determined on the basis of the Union State Register of Soil Resources of Russia. Cropping technologies are published by agricultural research institutes and departments. We propose to define cropping system parameters (annual information about crop yields, amounts and types of fertilizers and manure) on the basis of Federal State Statistics Service data. The carbon content of plant biomass may be calculated via formulas developed and published by the Ministry of Natural Resources and Environment of the Russian Federation. In the second step, CO2 emissions from soil in this region were calculated with DNDC. The modelled data were compared with empirical and literature data and good results were obtained: modelled values were equivalent to the measured ones. It was revealed that the DNDC model may be used to evaluate and forecast CO2 emissions from arable soils in Russia based on official statistical information. It can also be used to create a programme for decreasing GHG emissions from arable soils to the atmosphere. Financial Support: fundamental scientific research theme 0148-2014-0005 No 01201352499 ‘Solution of fundamental problems of analysis and forecast of Earth climatic system condition’ for 2014-2020; fundamental research program of the Presidium of RAS No 51 ‘Climate change: causes, risks, consequences, problems of adaptation and regulation’ for 2018-2020. Keywords: arable soils, carbon dioxide emission, DNDC model, European Russia
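As a concrete illustration of the first step, the short sketch below aggregates daily station records into the regional spatial averages of maximum/minimum temperature and precipitation that DNDC takes as climate input. The file layout, column names and the simplified tab-separated output format are assumptions for illustration, not the project's actual data pipeline.

```python
# Illustrative sketch (assumed file layout and column names): aggregating daily
# station records into regional spatial averages of Tmax, Tmin and precipitation
# for use as a simplified DNDC-style climate input.

import pandas as pd

def regional_daily_climate(csv_path):
    """Average daily Tmax, Tmin and precipitation over all stations in a region."""
    df = pd.read_csv(csv_path, parse_dates=["date"])
    # One row per station per day; assumed columns: station_id, date,
    # tmax_c, tmin_c, precip_mm.
    daily = (df.groupby("date")[["tmax_c", "tmin_c", "precip_mm"]]
               .mean()
               .rename(columns={"tmax_c": "tmax", "tmin_c": "tmin",
                                "precip_mm": "precip"}))
    return daily

def write_climate_table(daily, out_path):
    """Write a simple day-number / tmax / tmin / precip table."""
    out = daily.reset_index()
    out["jday"] = out["date"].dt.dayofyear
    out[["jday", "tmax", "tmin", "precip"]].to_csv(
        out_path, sep="\t", index=False, header=False)

if __name__ == "__main__":
    daily = regional_daily_climate("stations_daily.csv")   # hypothetical file
    write_climate_table(daily, "site_climate.txt")
```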
Procedia PDF Downloads 192
276 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model
Authors: Danjuma Bawa
Abstract:
This paper aimed at exploring the capabilities of the Location-Allocation model in complementing the strides of existing physical planning models in the location and distribution of facilities for regional consumption. The paper was designed to provide a blueprint for the Nigerian government and other donor agencies, especially the federal government's Fertilizer Distribution Initiative (FDI), for the revitalization of the terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcased how the Location-Allocation Model (L-AM), alongside Central Place Theory (CPT), was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, to examine their physical and economic interrelationships, and to explore their hierarchical and opportunistic influences. The study was purely spatial qualitative research which largely used secondary data such as the spatial location and distribution of settlements, population figures of settlements, the network of roads linking them, and other landform features. These were sourced from government ministries and open-source consortia. GIS was used as a tool for processing and analyzing such spatial features within the dictum of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements that could stand as service centers to other hinterlands; this was accomplished using the query syntax in ArcMap. The ArcGIS Network Analyst was used in conducting location-allocation analysis for apportioning groups of settlements around such service centers within a given threshold distance. Most of the techniques and models used by utility planners have been centered on straight-line (Euclidean) distances to settlements. Such models neglect impedance cutoffs and the routing capabilities of networks. CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa state. Four (4) existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the giant strides of the federal government of Nigeria by providing a blueprint for ensuring the proper distribution of these public goods, in the spirit of bringing succor to the terrorism-ravaged populace. It will at the same time help boost agricultural activities, thereby reducing food shortages and raising per capita income, as espoused by the government. Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics
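The core of the allocation step can be expressed compactly outside a GIS as well. The sketch below, on a hypothetical toy road network, selects service centers by a population threshold and assigns each remaining settlement to the nearest center within a 2 km network impedance cutoff; the settlement names, populations, edge lengths and the greedy nearest-center rule are assumptions for illustration, not the ArcGIS Network Analyst workflow used in the study.

```python
# Illustrative sketch (hypothetical data): threshold-based center selection and
# network allocation of settlements within a 2 km impedance cutoff.

import networkx as nx

def select_centers(settlements, threshold):
    """Settlements whose population meets the central-place threshold."""
    return [name for name, pop in settlements.items() if pop >= threshold]

def allocate(graph, settlements, centers, cutoff_m=2000):
    """Assign each non-center settlement to its nearest center within the cutoff."""
    allocation = {}
    for name in settlements:
        if name in centers:
            continue
        best = None
        for center in centers:
            try:
                d = nx.shortest_path_length(graph, name, center, weight="length")
            except nx.NetworkXNoPath:
                continue
            if d <= cutoff_m and (best is None or d < best[1]):
                best = (center, d)
        if best:
            allocation[name] = best
    return allocation

if __name__ == "__main__":
    # Hypothetical road network: edges carry a 'length' impedance in metres.
    roads = nx.Graph()
    roads.add_edge("A", "B", length=900)
    roads.add_edge("B", "C", length=800)
    roads.add_edge("C", "D", length=1500)
    roads.add_edge("A", "D", length=2500)
    population = {"A": 5200, "B": 700, "C": 450, "D": 3100}

    centers = select_centers(population, threshold=3000)   # ['A', 'D']
    print(allocate(roads, population, centers))
    # e.g. {'B': ('A', 900), 'C': ('D', 1500)}
```

The same logic, applied to the real road network and population figures, is what distinguishes a network location-allocation from a simple Euclidean buffer around each depot.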
Procedia PDF Downloads 148
275 Microbial Fuel Cells: Performance and Applications
Authors: Andrea Pietrelli, Vincenzo Ferrara, Bruno Allard, Francois Buret, Irene Bavasso, Nicola Lovecchio, Francesca Costantini, Firas Khaled
Abstract:
This paper aims to show some applications of microbial fuel cells (MFCs), an energy harvesting technique, as a clean power source to supply low-power devices, for applications such as wireless sensor networks (WSNs) for environmental monitoring. Furthermore, an MFC can be used directly as a biosensor to analyse parameters like pH and temperature, or arranged in clusters to be used as a small power plant. An MFC is a bioreactor that converts the energy stored in the chemical bonds of organic matter into electrical energy, through a series of reactions catalysed by microorganisms. We have developed a lab-scale terrestrial microbial fuel cell (TMFC), based on soil that acts as the source of bacteria and the flow of nutrients, and a lab-scale waste-water microbial fuel cell (WWMFC), where waste water acts as the flow of nutrients and bacteria. We performed a large series of tests to explore their capability as biosensors. The pH value has a strong influence on the open circuit voltage (OCV) delivered by TMFCs. We analyzed three conditions: tests A and B were filled with the same soil but with the pH changed from 6 to 6.63, while test C was prepared using a different soil with a pH value of 6.3. Experimental results clearly show that a higher pH value produces a higher OCV; the reactors are influenced by the pH value, which increases the voltage until the best pH value of 7 is achieved. The influence of pH on the OCV of lab-scale WWMFCs was analyzed at pH values of 6.5, 7, 7.2, 7.5 and 8. WWMFCs are influenced by temperature more than TMFCs. We tested the power performance of WWMFCs at four imposed ambient temperatures. Results show that power performance increases proportionally with temperature, the output power doubling from 20°C to 40°C. The best power value produced by our lab-scale TMFC was 310 μW using peaty soil, at 1 kΩ, corresponding to a current of 0.5 mA. A TMFC can supply proper energy to the low-power devices of a WSN by means of a three-stage energy management system, which adapts the voltage level of the TMFC to that required by a WSN node, e.g. 3.3 V. Using a commercial DC/DC boost converter that needs an input voltage of 700 mV, the 0.5 mA current source charges a 6.8 mF capacitor until it has accumulated a charge corresponding to 700 mV, in a time of about 10 s. The output stage includes a switch that closes the circuit after 10 s + 1.5 ms, because the converter can boost the voltage from 0.7 V to 3.3 V in 1.5 ms. Furthermore, we tested clusters of up to 20 WWMFCs connected in series and obtained a high output voltage, around 10 V, but a low current. An MFC can therefore be considered a suitable clean energy source to supply low-power devices such as a WSN node, or to be used directly as a biosensor. Keywords: energy harvesting, low power electronics, microbial fuel cell, terrestrial microbial fuel cell, waste-water microbial fuel cell, wireless sensor network
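The charge-time figure quoted for the input stage can be checked with a few lines of arithmetic. The sketch below assumes ideal constant-current charging and neglects leakage and converter losses, so it is a sanity check rather than a model of the actual converter.

```python
# Back-of-the-envelope check (assuming ideal constant-current charging and
# neglecting leakage and converter losses) of the energy-management input stage:
# a 0.5 mA source charging a 6.8 mF capacitor to 700 mV.

C = 6.8e-3      # storage capacitor, farads
V = 0.7         # boost converter start-up voltage, volts
I = 0.5e-3      # TMFC output current, amperes

charge = C * V                  # coulombs accumulated at 700 mV
t_charge = charge / I           # seconds at constant current
energy = 0.5 * C * V ** 2       # joules stored in the capacitor

print(f"charge  = {charge * 1e3:.2f} mC")     # ~4.76 mC
print(f"t       = {t_charge:.1f} s")          # ~9.5 s, consistent with the ~10 s quoted
print(f"energy  = {energy * 1e3:.2f} mJ")     # ~1.67 mJ stored per cycle
```

Neglecting converter losses, roughly 1.7 mJ is then available at the 3.3 V output for the WSN node in each charge cycle.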
Procedia PDF Downloads 207
274 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System
Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski
Abstract:
Electrokinetic disintegration is one of the high-voltage electric methods. The design of such systems is exceptionally simple. Biomass flows through a system of pipes with electrodes mounted alongside that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which acts as the second pole and closes the circuit. The voltage ranges from 10 to 100 kV. The electrodes are supplied by normal single-phase grid current (230 V, 50 Hz), which is converted to 24 V direct current in modules serving the particular electrodes; this current directly feeds the electrodes. The installation is completely safe because the generated current does not exceed 250 mA and the conductors are grounded. Therefore, there is no risk of electric shock to the personnel, even in the case of failure or incorrect connection. The low current means that the energy consumption of each electrode is extremely low – only 35 W per electrode – compared to other methods of disintegration. The pipes with electrodes, of DN150 diameter, are made of acid-proof steel and connected on both sides with 90° elbows ending in flanges. The available S and U types of pipes enable very convenient fitting of the system into existing installations and rooms, or facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically, askew, on special stands or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, type of biomass, dry matter content, method of disintegration (single pass or circulatory), mounting site, etc. The most effective method involves pre-treatment of the substrate, which may be pumped through the disintegration system on the way to the fermentation tank or recirculated through a buffered intermediate tank (substrate mixing tank). The destruction of biomass structure in the process of electrokinetic disintegration shortens the substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at technical scale, with the greatest increase in biogas production reaching 18%. A secondary effect, highly significant for the energy balance, is a tangible decrease in the energy input of the agitators in the tanks. It is due to the reduced viscosity of the biomass after disintegration, and may result in energy savings reaching 20-30% of the previously noted consumption. Other observed phenomena include a reduction in the layer of surface scum, a reduced tendency of the sewage to foam, and a successive decrease in the quantity of bottom sludge banks. Considering the above, the system for electrokinetic disintegration seems a very interesting and valuable solution among the specialist equipment offered for the processing of plant biomass, including Virginia fanpetals, before methane fermentation. Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals
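An order-of-magnitude energy budget for the electrical side follows directly from the 35 W per electrode figure. The sketch below assumes a hypothetical line of 10 electrodes running continuously; the electrode count and duty cycle are illustrative assumptions, not parameters reported for the installations described above.

```python
# Rough energy budget for a disintegration line (illustrative assumptions:
# 10 electrodes, continuous 24 h operation; only the 35 W/electrode figure
# comes from the text above).

P_ELECTRODE_W = 35          # reported consumption per electrode
N_ELECTRODES = 10           # assumed size of the pipe line
HOURS_PER_DAY = 24          # assumed continuous operation

power_kw = P_ELECTRODE_W * N_ELECTRODES / 1000
energy_kwh_day = power_kw * HOURS_PER_DAY

print(f"total electrode power: {power_kw:.2f} kW")          # 0.35 kW
print(f"daily electrical input: {energy_kwh_day:.1f} kWh")  # 8.4 kWh/day
```

Whether this input pays off then hinges on the reported gains (up to 18% more biogas and 20-30% lower agitator consumption) for the specific plant's throughput.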
Procedia PDF Downloads 377
273 Impact of Sufism on Indian Cinema: A New Cultural Construct for Mediating Conflict
Authors: Ravi Chaturvedi, Ghanshyam Beniwal
Abstract:
Without going much into the detail of the long history of Sufism in the world or the etymological definition of the word ‘Sufi’, it is sufficient to underline that the concept of Sufism was to focus mystic power on the spiritual dimension of Islam, with a view to shielding believers from the outward and unrealistic dogma of the faith. Sufis adopted a rather liberal view in propagating the religious order of Islam, suited to the cultural and social environment of the land. It is, in fact, a mission of a higher religious order of any faith, which disdains strife and conflict in any form. The joy of self-realization, being the essence of religion, is experienced after long spiritual practice. India had Sufi and Bhakti (devotion) traditions in Islam and Hinduism, respectively. Both the Sufi and Bhakti traditions were based on respect for different religions. The poorer and lower-caste Hindus and Muslims were greatly influenced by these traditions. Unlike the Ulemas and Brahmans, the Sufi and Bhakti saints were highly tolerant and open to the truth in other faiths. They never adopted sectarian attitudes and were never involved in power struggles; they kept away from power structures. Sufism has been integrated with Indian cinema since its earliest days. In the earliest Bollywood movies, Sufism was represented in the form of qawwali, which made its way from dargahs (shrines). Mixing it with pop influences, Hindi movies began using Sufi music in a big way only in the current decade. However, of late, songs with Sufi influences have become de rigueur in almost every film being released these days, irrespective of genre, whether it is a romantic Gangster or a cerebral Corporate. 'Sufi is in the DNA of the Indian sub-continent', according to several contemporary filmmakers, critics, and spectators. The inherent theatricality moves the performer of Sufi rituals toward dramatic behavior. The theatrical force of these stages of Sufi practice is so powerful that even the spectator cannot help being moved. In a multi-cultural country like India, mediating streams have acquired a multi-layered importance in recent history. The second half of the Indian post-colonial era has witnessed a regular chain of conflicting religio-political waves arising from various sectarian camps in the country, which have compelled counter-forces to mobilize to keep the spirit of the composite cultural ethos alive. The study has revealed that Sufi practice methodology is also being adapted for the inclusion of spirituality in life, on a par with Yoga practice. This paper, part of a research study, is an attempt to establish that Sufism in Indian cinema is one such mediating voice, very active and alive throughout the length and breadth of the country, continuously bridging the gap between various religious and social factions, and that it has a significant role to play in the future as well. Keywords: Indian cinema, mediating voice, Sufi, yoga practice
Procedia PDF Downloads 497
272 Achieving Sustainable Development through Transformative Pedagogies in Universities
Authors: Eugene Allevato
Abstract:
Developing a responsible personal worldview is central to sustainable development, but how to achieve quality education that promotes transformative learning for sustainability is thus far poorly understood. Most programs involving education for sustainable development rely on changing behavior rather than attitudes. The emphasis is on the scientific and utilitarian aspects of sustainability, with negligible importance given to the intrinsic value of nature. Campus sustainability projects include building sustainable gardens and implementing energy-efficient upgrades, instead of focusing on educating for sustainable development through the exploration of students’ values and beliefs. Even though green technology adoption may be the right thing to do, most schools are not targeting the root cause of the environmental crisis; they are just providing palliative measures. This study explores the under-examined factors that lead to pro-environmental behavior by investigating the environmental perceptions of both college business students and personnel of green organizations. A mixed research approach of qualitative (structured interviews) and quantitative instruments was developed, including interviews with 30 college-level students and 40 staff members of green organizations involved in sustainable activities. The interviews were tape-recorded and transcribed for analysis. Categorization of the responses to the open-ended questions was conducted with the purpose of identifying the main types of factors influencing attitudes and correlating them with behaviors. Overall, the findings of this study indicated a lack of appreciation for nature and an inability to understand interconnectedness and apply critical thinking. The results of the survey conducted on undergraduate students indicated that the responses of business and liberal arts students differed significantly by independent t-test (p = 0.03). While liberal arts students showed an understanding of human interdependence with nature and its delicate balance, business students seemed to believe that humans were meant to rule over the rest of nature. This result is intriguing given that business students will be defining markets, influencing society, and controlling and managing businesses that, in the face of climate change, are supposed to implement sustainable activities. These alarming results led to the focus on green businesses in order to better understand their motivation to engage in sustainable activities. Additionally, a probit model revealed that childhood exposure to nature has a significantly positive impact on pro-environmental attitudes across most of the New Ecological Paradigm scales. Based on these findings, this paper discusses educators including Socrates, John Dewey and Paulo Freire in the implementation of eco-pedagogy and transformative learning, following a curriculum with an emphasis on critical and systems thinking, which are deemed key ingredients in quality education for sustainable development. Keywords: eco-pedagogy, environmental behavior, quality education for sustainable development, transformative learning
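For readers who want to reproduce the style of analysis, the sketch below runs the two quantitative pieces mentioned above on synthetic data: an independent t-test between the two student groups and a probit regression of a pro-environmental attitude indicator on childhood exposure to nature. The group means, sample sizes and variable names are assumptions, not the study's data.

```python
# Illustrative sketch on hypothetical data: t-test of NEP-style scores between
# business and liberal arts students, plus a probit model of a binary
# pro-environmental attitude on childhood exposure to nature.

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical NEP scale scores (higher = more ecological worldview)
business = rng.normal(3.2, 0.5, 40)
liberal_arts = rng.normal(3.6, 0.5, 40)
t_stat, p_val = stats.ttest_ind(business, liberal_arts)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Probit: binary pro-environmental attitude vs. childhood exposure (0/1)
exposure = rng.integers(0, 2, 120)
latent = -0.4 + 1.1 * exposure + rng.normal(0, 1, 120)
attitude = (latent > 0).astype(int)
model = sm.Probit(attitude, sm.add_constant(exposure)).fit(disp=False)
print(model.params)     # positive exposure coefficient in this toy data
```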
Procedia PDF Downloads 312
271 Negotiating Communication Options for Deaf-Disabled Children
Authors: Steven J. Singer, Julianna F. Kamenakis, Allison R. Shapiro, Kimberly M. Cacciato
Abstract:
Communication and language are topics frequently studied among deaf children. However, there is limited research that focuses specifically on the communication and language experiences of Deaf-Disabled children. In this ethnography, researchers investigated the language experiences of six sets of parents with Deaf-Disabled children who chose American Sign Language (ASL) as the preferred mode of communication for their child. Specifically, the researchers were interested in the factors that influenced the parents’ decisions regarding their child’s communication options, educational placements, and social experiences. Data collection in this research included 18 hours of semi-structured interviews, 20 hours of participant observations, over 150 pages of reflexive journals and field notes, and a 2-hour focus group. The team conducted constant comparison qualitative analysis using NVivo software and an inductive coding procedure. The four researchers each read the data several times until they were able to chunk it into broad categories about communication and social influences. The team compared the various categories they developed, selecting ones that were consistent among researchers and redefining categories that differed. Continuing to use open inductive coding, the research team refined the categories until they were able to develop distinct themes. Two team members developed each theme through a process of independent coding, comparison, discussion, and resolution. The research team developed three themes: 1) early medical needs provided time for the parents to explore various communication options for their Deaf-Disabled child, 2) without intervention from medical professionals or educators, ASL emerged as a prioritized mode of communication for the family, 3) atypical gender roles affected familial communication dynamics. While managing the significant health issues of their Deaf-Disabled child at birth, families and medical professionals were so fixated on tending to the medical needs of the child that the typical pressures of determining a mode of communication were deprioritized. This allowed the families to meticulously research various methods of communication, resulting in an informed, rational, and well-considered decision to use ASL as the primary mode of communication with their Deaf-Disabled child. It was evident that having a Deaf-Disabled child meant an increased amount of labor and responsibility for parents. This led to a shift in the roles of the family members. During the child’s development, the mother transformed from fulfilling the stereotypical roles of nurturer and administrator to those of administrator and champion. The mother facilitated medical proceedings and educational arrangements, while the father became the caretaker and nurturer of their Deaf-Disabled child in addition to the traditional role of earning the family’s primary income. Ultimately, this research led to a deeper understanding of the critical role that time plays in parents’ decision-making process regarding communication methods with their Deaf-Disabled child. Keywords: American Sign Language, deaf-disabled, ethnography, sociolinguistics
Procedia PDF Downloads 122
270 Exploring the Impact of Eye Movement Desensitization and Reprocessing (EMDR) And Mindfulness for Processing Trauma and Facilitating Healing During Ayahuasca Ceremonies
Authors: J. Hash, J. Converse, L. Gibson
Abstract:
Plant medicines are of growing interest for addressing mental health concerns. Ayahuasca, a traditional plant-based medicine, has established itself as a powerful way of processing trauma and precipitating healing and mood stabilization. Eye Movement Desensitization and Reprocessing (EMDR) is another treatment modality that aids in the rapid processing and resolution of trauma. We investigated group EMDR therapy, G-TEP, as a preparatory practice before Ayahuasca ceremonies to determine whether the combination of these modalities supports participants in their journeys of letting go of past experiences negatively impacting mental health, thereby accentuating the healing of the plant medicine. We surveyed 96 participants (51 experimental G-TEP, 45 control grounding prior to their ceremony; age M=38.6, SD=9.1; F=57, M=34; white=39, Hispanic/Latinx=23, multiracial=11, Asian/Pacific Islander=10, other=7) in a pre-post, mixed-methods design. Participants were surveyed for demographic characteristics, symptoms of PTSD and cPTSD (International Trauma Questionnaire, ITQ), depression (Beck Depression Inventory, BDI), and stress (Perceived Stress Scale, PSS) before the ceremony and at the end of the ceremony weekend. Open-ended questions also inquired about their expectations of the ceremony and their outcomes at its end. Overall, participants reported a decrease in meeting the threshold for PTSD symptoms (p<0.01); surprisingly, the control group reported significantly fewer thresholds met for symptoms of affective dysregulation, χ²(1)=6.776, p<.01, negative self-concept, χ²(1)=7.122, p<.01, and disturbance in relationships, χ²(1)=9.804, p<.01, on subscales of the ITQ as compared to the experimental group. All participants also experienced a significant decrease in scores on the BDI, t(94)=8.995, p<.001, and PSS, t(91)=6.892, p<.001. Similar to the pattern of PTSD symptoms, the control group reported significantly lower scores on the BDI, t(65.115)=-2.587, p<.01, and a trend toward lower PSS, t(90)=-1.775, p=.079 (significant with a one-sided test at p<.05), compared to the experimental group following the ceremony. Qualitative interviews revealed a potential explanation for these relatively higher levels of depression and stress in the experimental group following the ceremony. Many participants reported needing more time to process their experience in order to understand the effects of the Ayahuasca medicine. Others reported a sense of hopefulness and an understanding of the sources of their trauma and the necessary steps to heal moving forward. This suggests increased introspection and openness to processing trauma, thereby making them more receptive to their emotions. The integration process of an Ayahuasca ceremony is a weeks- to months-long process that was not accessible at this stage of research, yet it is integral to understanding the full effects of the Ayahuasca medicine following the closure of a ceremony. Our future research aims to assess participants weeks into their integration process to determine the effectiveness of EMDR, and whether the higher levels of depression and stress indicate an initial reaction to greater awareness of trauma and receptivity to healing. Keywords: ayahuasca, EMDR, PTSD, mental health
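The group comparisons above are simple contingency-table tests. The sketch below shows the form of such a chi-square comparison on hypothetical counts; the numbers are placeholders chosen only to illustrate the calculation, not the study's data.

```python
# Illustrative sketch (hypothetical counts, not the study's raw data) of a
# chi-square comparison: whether the proportion of participants meeting an
# ITQ symptom threshold after the ceremony differs between the control and
# experimental (G-TEP) groups.

import numpy as np
from scipy.stats import chi2_contingency

# Rows: control, experimental; columns: met threshold, did not meet threshold.
table = np.array([[10, 35],
                  [23, 28]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.4f}")
print("expected counts under independence:\n", expected.round(1))
```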
Procedia PDF Downloads 66