Search results for: concept of God
141 Supporting a Moral Growth Mindset Among College Students
Authors: Kate Allman, Heather Maranges, Elise Dykhuis
Abstract:
Moral Growth Mindset (MGM) is the belief that one has the capacity to become a more moral person, as opposed to a fixed conception of one’s moral ability and capacity (Han et al., 2018). Building from Dweck’s work in incremental implicit theories of intelligence (2008), Moral Growth Mindset (Han et al., 2020) extends growth mindsets into the moral dimension. The concept of MGM has the potential to help researchers understand how both mindsets and interventions can impact character development, and it has even been shown to have connections to voluntary service engagement (Han et al., 2018). Understanding the contexts in which MGM might be cultivated could help to promote the further cultivation of character, in addition to prosocial behaviors like service engagement, which may, in turn, promote larger scale engagement in social justice-oriented thoughts, feelings, and behaviors. In particular, college may be a place to intentionally cultivate a growth mindset toward moral capacities, given the unique developmental and maturational components of the college experience, including contextual opportunity (Lapsley & Narvaez, 2006) and independence requiring the constant consideration, revision, and internalization of personal values (Lapsley & Woodbury, 2016). In a semester-long, quasi-experimental study, we examined the impact of a pedagogical approach designed to cultivate college student character development on participants’ MGM. With an intervention (n=69) and a control group (n=97; Pre-course: 27% Men; 66% Women; 68% White; 18% Asian; 2% Black; <1% Hispanic/Latino), we investigated whether college courses that intentionally incorporate character education pedagogy (Lamb, Brant, Brooks, 2021) affect a variety of psychosocial variables associated with moral thoughts, feelings, identity, and behavior (e.g. moral growth mindset, honesty, compassion, etc.). 
The intervention group consisted of 69 undergraduate students (Pre-course: 40% Men; 52% Women; 68% White; 10.5% Black; 7.4% Asian; 4.2% Hispanic/Latino) who voluntarily enrolled in five undergraduate courses that encouraged students to engage with key concepts and methods of character development through the application of research-based strategies and personal reflection on goals and experiences. Moral Growth Mindset was measured using the four-item Moral Growth Mindset scale (Han et al., 2020), with items such as "You can improve your basic morals and character considerably," rated on a six-point Likert scale from 1 (strongly disagree) to 6 (strongly agree). Higher MGM scores indicate a stronger belief that one can become a more moral person with personal effort. Reliability at Time 1 was Cronbach's α = .833, and at Time 2 Cronbach's α = .772. An Analysis of Covariance (ANCOVA) was conducted to explore whether post-course MGM scores differed between the intervention and control groups when controlling for pre-course MGM scores. The ANCOVA indicated significant differences in MGM between groups post-course, F(1,163) = 8.073, p = .005, R² = .11, with descriptive statistics indicating that intervention scores were higher than control scores post-course. Results indicate that intentional character development pedagogy can be leveraged to support the development of Moral Growth Mindset and related capacities in undergraduate settings.
Keywords: moral personality, character education, incremental theories of personality, growth mindset
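The ANCOVA described above can be sketched as a comparison of nested least-squares models: post-course score regressed on pre-course score alone versus pre-course score plus a group indicator. The sketch below uses synthetic data with the abstract's group sizes (97 control, 69 intervention); the effect size and data-generating process are invented for illustration, not the study's data.

```python
import random

def ols_rss(X, y):
    """Fit y = X b by normal equations (Gaussian elimination); return residual sum of squares."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    c = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):  # elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return sum((yi - sum(bi * xi for bi, xi in zip(b, row))) ** 2
               for row, yi in zip(X, y))

random.seed(0)
n_ctrl, n_int = 97, 69  # group sizes from the abstract
pre = [random.gauss(4.5, 0.6) for _ in range(n_ctrl + n_int)]
group = [0] * n_ctrl + [1] * n_int  # 1 = intervention
# Hypothetical data-generating process: post tracks pre, plus a small group effect
post = [0.7 * p + 0.3 * g + random.gauss(0, 0.4) for p, g in zip(pre, group)]

X_reduced = [[1.0, p] for p in pre]                 # post ~ pre
X_full = [[1.0, p, g] for p, g in zip(pre, group)]  # post ~ pre + group
rss_r, rss_f = ols_rss(X_reduced, post), ols_rss(X_full, post)
df_resid = len(post) - 3
F = (rss_r - rss_f) / (rss_f / df_resid)  # F(1, 163) in the study's notation
print(f"F(1, {df_resid}) = {F:.2f}")
```

Note that the residual degrees of freedom work out to 166 − 3 = 163, matching the F(1,163) reported in the abstract.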
Procedia PDF Downloads 145
140 Overcoming the Challenges of Subjective Truths in the Post-Truth Age Through a Critical-Ethical English Pedagogy
Authors: Farah Vierra
Abstract:
Following the 2016 US presidential election and the advancement of the Brexit referendum, the concept of "post-truth," defined by the Oxford Dictionary as "relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief," came into prominent use in public, political and educational circles. What this essentially entails is that in this age, individuals are increasingly confronted with subjective perpetuations of truth in their discourse spheres that are informed by beliefs and opinions as opposed to any correspondence with the reality of those whom these truth claims concern. In principle, a subjective delineation of truth is progressive and liberating, especially considering its potential to provide marginalised groups in the diverse communities of our globalised world with the voice to articulate truths that are representative of themselves and their experiences. However, any form of human flourishing that seems to be promised here collapses as the tenets of subjective truths initially in place to liberate have been distorted through post-truth to allow individuals to purport selective and individualistic truth claims that further oppress and silence certain groups within society without due accountability. Evidence of this is prevalent in terms such as "alternative facts" and "fake news," which individuals declare when their problematic truth claims are questioned.
Considering the pervasiveness of post-truth and the ethical issues that accompany it, educators and scholars alike have increasingly noted the need to adapt educational practices and pedagogies to account for the diminishing objectivity of truth in the twenty-first century, especially because students, as digital natives, find themselves in the firing line of post-truth, engulfed in digital societies that proliferate post-truth through the surge of truth claims allowed in various media sites. In an attempt to equip students with the vital skills to navigate the post-truth age and oppose its proliferation of social injustices, English educators find themselves having to contend with a complex question: how can the teaching of English equip students with the ability to critically and ethically scrutinise truth claims whilst also mediating the subjectivity of truth in a manner that does not undermine the voices of diverse communities? In order to address this question, this paper will first examine the challenges that confront students as a result of post-truth. Following this, the paper will elucidate the role English education can play in helping students overcome the complex demands of the post-truth age. Scholars have consistently touted the affordances of literary texts in providing students with imagined spaces to explore societal issues through a critical discernment of language and an ethical engagement with its narrative developments. Therefore, this paper will explain and demonstrate how literary texts, when used alongside a critical-ethical post-truth pedagogy that equips students with interpretive strategies informed by literary traditions such as literary and ethical criticism, can be effective in helping students develop the pertinent skills to comprehensively examine truth claims and overcome the challenges of the post-truth age.
Keywords: post-truth, pedagogy, ethics, english, education
Procedia PDF Downloads 66
139 PolyScan: Comprehending Human Polymicrobial Infections for Vector-Borne Disease Diagnostic Purposes
Authors: Kunal Garg, Louise Theusen Hermansan, Kanoktip Puttaraska, Oliver Hendricks, Heidi Pirttinen, Leona Gilbert
Abstract:
The Germ Theory (one infectious determinant is equal to one disease) has unarguably evolved our capability to diagnose and treat infectious diseases over the years. Nevertheless, the advent of technology, climate change, and volatile human behavior has brought about drastic changes in our environment, leading us to question the relevance of the Germ Theory in our day, i.e., will vector-borne disease (VBD) sufferers produce multiple immune responses when tested for multiple microbes? Vector-diseased patients producing multiple immune responses to different microbes would evidently suggest human polymicrobial infections (HPI). Current diagnostic tools have not kept pace with research findings that would aid in diagnosing patients with polymicrobial infections. This shortcoming has caused misdiagnosis at very high rates, consequently diminishing patients' quality of life due to inadequate treatment. Equipped with state-of-the-art scientific knowledge, PolyScan intends to address the pitfalls in current VBD diagnostics. PolyScan is a multiplex and multifunctional enzyme-linked immunosorbent assay (ELISA) platform that can test for numerous VBD microbes and allow simultaneous screening for multiple types of antibodies. To validate PolyScan, Lyme Borreliosis (LB) and spondyloarthritis (SpA) patient groups (n = 54 each) were tested for Borrelia burgdorferi, Borrelia burgdorferi Round Body (RB), Borrelia afzelii, Borrelia garinii, and Ehrlichia chaffeensis against IgM and IgG antibodies. LB serum samples were obtained from Germany and SpA serum samples were obtained from Denmark under relevant ethical approvals. The SpA group represented the chronic LB stage because reactive arthritis (an SpA subtype) in the form of Lyme arthritis links to LB. It was hypothesized that patients from both groups would produce multiple immune responses, which would in turn suggest HPI.
It was also hypothesized that the proportion of multiple immune responses in the SpA patient group would be significantly larger than in the LB patient group across both antibodies. It was observed that 26% of LB patients and 57% of SpA patients produced multiple immune responses, in contrast to 33% of LB patients and 30% of SpA patients who produced solitary immune responses, when tested against IgM. Similarly, 52% of LB patients and an astounding 73% of SpA patients produced multiple immune responses, in contrast to 30% of LB patients and 8% of SpA patients who produced solitary immune responses, when tested against IgG. Interestingly, IgM immune dysfunction in both patient groups was also recorded. Atypically, 6% of the 18% of LB patients unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Similarly, 12% of the 19% of SpA patients unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Thus, the results not only supported the hypothesis but also suggested that IgM may atypically prevail longer than IgG. The PolyScan concept will aid clinicians in detecting early, persistent, late, polymicrobial, and immune dysfunction conditions linked to different VBDs. PolyScan provides a paradigm shift for the VBD diagnostic industry to follow, one that will drastically shorten patients' time to receive adequate treatment.
Keywords: diagnostics, immune dysfunction, polymicrobial, TICK-TAG
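The stated hypothesis, that the SpA group shows a larger share of multiple immune responses than the LB group, can be illustrated with a standard two-proportion z-test. The sketch below plugs in the IgG percentages reported above (52% of 54 LB vs. 73% of 54 SpA patients); this is an illustrative re-analysis, not the statistical procedure the authors report using.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for H0: p1 == p2, using the pooled-proportion standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# IgG multiple-response counts implied by the reported percentages (n = 54 per group)
lb_multi = round(0.52 * 54)   # LB patients with multiple responses
spa_multi = round(0.73 * 54)  # SpA patients with multiple responses
z = two_proportion_z(lb_multi, 54, spa_multi, 54)
print(f"z = {z:.2f}")  # a positive z favours the SpA > LB direction
```

A z-statistic above roughly 1.96 corresponds to p < .05 (two-tailed), consistent with the abstract's claim of a significantly larger SpA proportion.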
Procedia PDF Downloads 327
138 An Odyssey to Sustainability: The Urban Archipelago of India
Authors: B. Sudhakara Reddy
Abstract:
This study provides a snapshot of the sustainability of selected Indian cities by employing 70 indicators in four dimensions to develop an overall city sustainability index. In recent years, the concept of 'urban sustainability' has become prominent due to its complexity. Urban areas propel growth and at the same time pose many ecological, social and infrastructural problems and risks. In developing countries, high population density and continuous in-migration pose the highest risk of natural and man-made disasters. These issues, combined with the inability of policy makers to provide basic services, make cities unsustainable. To assess whether any given policy is moving towards or against urban sustainability, it is necessary to consider the relationships among its various dimensions. Hence, in recent years, while preparing the sustainability index, an integral approach involving indicators of different dimensions such as 'economic', 'environmental' and 'social' is being used. It is also important for urban planners, social analysts and other related institutions to identify and understand the relationships in this complex system. The objective of the paper is to develop a city performance index (CPI) to measure and evaluate urban regions in terms of sustainable performance. The objectives include: i) objective assessment of a city's performance, ii) setting achievable goals, iii) prioritising relevant indicators for improvement, iv) learning from leaders, v) assessing the effectiveness of programmes that result in achieving high indicator values, and vi) strengthening stakeholder participation. Using the benchmark approach, a conceptual framework is developed for evaluating 25 Indian cities. We develop a City Sustainability Index (CSI) in order to rank cities according to their level of sustainability. The CSI is composed of four dimensions: Economic, Environment, Social, and Institutional.
Each dimension is further composed of multiple indicators: (1) economic, which considers growth, access to electricity, and telephone availability; (2) environmental, which includes wastewater treatment and carbon emissions; (3) social, which includes equity and infant mortality; and (4) institutional, which includes voting share of the population and urban regeneration policies. The CSI's four dimensions disaggregate into 12 categories and ultimately into 70 indicators. The data are obtained from public and non-governmental organizations, as well as from city officials and experts. By ranking a sample of diverse cities on a set of specific dimensions, the study can serve as a baseline of current conditions and a marker for referencing future results. The benchmarks and indices presented in the study provide a unique resource for the government and the city authorities to learn about the positive and negative attributes of a city and prepare plans for sustainable urban development. As a result of our conceptual framework, the set of criteria we suggest is somewhat different from any already in the literature. The scope of our analysis is intended to be broad. Although illustrated with specific examples, it should be apparent that the principles identified are relevant to any monitoring that is used to inform decisions involving decision variables. These indicators are policy-relevant and hence a useful tool for decision-makers and researchers.
Keywords: benchmark, city, indicator, performance, sustainability
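The dimension-to-index aggregation described above can be sketched as a conventional composite-index pipeline: min-max normalise each indicator across cities, average indicators into dimension scores, then average dimensions into the overall index used for ranking. The cities, indicator names, and values below are invented for illustration; the paper's actual weighting scheme may differ.

```python
def minmax(values):
    """Normalise raw indicator values to [0, 1] across cities."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in values]

# Hypothetical raw data: {dimension: {indicator: [value per city]}}
cities = ["City A", "City B", "City C"]
data = {
    "economic":      {"growth_pct": [6.1, 4.2, 7.3], "electricity_access_pct": [98, 91, 88]},
    "environmental": {"wastewater_treated_pct": [55, 70, 40]},
    "social":        {"infant_mortality_per_1000": [28, 35, 22]},  # lower is better
    "institutional": {"voter_turnout_pct": [61, 48, 57]},
}
lower_is_better = {"infant_mortality_per_1000"}

dim_scores = {}
for dim, indicators in data.items():
    per_indicator = []
    for name, raw in indicators.items():
        norm = minmax(raw)
        if name in lower_is_better:
            norm = [1 - v for v in norm]  # invert so higher always means more sustainable
        per_indicator.append(norm)
    # Dimension score = unweighted mean of its indicators, per city
    dim_scores[dim] = [sum(vals) / len(vals) for vals in zip(*per_indicator)]

# CSI = unweighted mean of the four dimension scores; rank cities by it
csi = [sum(scores) / len(dim_scores) for scores in zip(*dim_scores.values())]
ranking = sorted(zip(cities, csi), key=lambda t: -t[1])
for city, score in ranking:
    print(f"{city}: {score:.3f}")
```

Equal weighting of indicators and dimensions is an assumption here; a real index would typically justify weights via expert judgment or statistical methods.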
Procedia PDF Downloads 269
137 Detection of Triclosan in Water Based on Nanostructured Thin Films
Authors: G. Magalhães-Mota, C. Magro, S. Sério, E. Mateus, P. A. Ribeiro, A. B. Ribeiro, M. Raposo
Abstract:
Triclosan [5-chloro-2-(2,4-dichlorophenoxy) phenol], belonging to the class of Pharmaceuticals and Personal Care Products (PPCPs), is a broad-spectrum antimicrobial agent and bactericide. Because of its antimicrobial efficacy, it is widely used in personal health and skin care products, such as soaps, detergents, hand cleansers, cosmetics, toothpastes, etc. However, it has been considered to disrupt the endocrine system, for instance, thyroid hormone homeostasis and possibly the reproductive system. Considering the widespread use of triclosan, it is expected that environmental and food safety problems regarding triclosan will increase dramatically. Triclosan has been found in river water samples in both North America and Europe and is likely widely distributed wherever triclosan-containing products are used. Although significant amounts are removed in sewage plants, considerable quantities remain in the sewage effluent, initiating widespread environmental contamination. Triclosan undergoes bioconversion to methyl-triclosan, which has been demonstrated to bioaccumulate in fish. In addition, triclosan has been found in human urine samples from persons with no known industrial exposure and in significant amounts in samples of mother's milk, demonstrating its presence in humans. The action of sunlight in river water is known to turn triclosan into dioxin derivatives and raises the possibility of pharmacological dangers not envisioned when the compound was originally utilized. The aim of this work is to detect low concentrations of triclosan in an aqueous complex matrix through the use of a sensor array system, following the electronic tongue concept based on impedance spectroscopy. To achieve this goal, we selected appropriate sensor molecules with a high affinity for triclosan, whose sensitivity ensures the detection of concentrations down to at least the nanomolar range.
Thin films of organic molecules and oxides have been produced by the layer-by-layer (LbL) technique and sputtered onto glass solid supports already covered by gold interdigitated electrodes. By submerging the films in complex aqueous solutions with different concentrations of triclosan, resistance and capacitance values were obtained at different frequencies. The preliminary results showed that an array of interdigitated electrode sensors, coated or uncoated with different LbL films, can be used to detect triclosan (TCS) traces in aqueous solutions over a wide concentration range, from 10⁻¹² to 10⁻⁶ M. The PCA method was applied to the measured data in order to differentiate the solutions with different concentrations of TCS. Moreover, it was also possible to trace a calibration curve, a plot of the logarithm of resistance versus the logarithm of concentration, which allowed us to fit the plotted data points with a decreasing straight line with a slope magnitude of 0.022 ± 0.006, which corresponds to the best sensitivity of our sensor. To find the sensor resolution near the smallest concentration used (Cs = 1 pM), the minimum change which can be measured with resolution is 0.006, so ΔlogC = 0.006/0.022 ≈ 0.27 and, therefore, C - Cs ≈ 0.9 pM. This leads to a sensor resolution of about 0.9 pM at the smallest concentration used, 1 pM. This attained detection limit is lower than the values obtained in the literature.
Keywords: triclosan, layer-by-layer, impedance spectroscopy, electronic tongue
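The resolution arithmetic reported above can be reproduced directly: with a calibration slope of 0.022 in log(resistance) vs. log(concentration) and a minimum resolvable change of 0.006, the smallest distinguishable concentration step above Cs = 1 pM works out to roughly 0.9 pM. The sketch below only redoes that calculation; all the numbers come from the abstract.

```python
slope = 0.022     # |d log10(R) / d log10(C)| from the calibration line
min_step = 0.006  # smallest resolvable change in log10(resistance)
Cs = 1e-12        # reference concentration, 1 pM

delta_logC = min_step / slope  # ≈ 0.27 decades in concentration
C = Cs * 10 ** delta_logC      # nearest concentration distinguishable from Cs
resolution = C - Cs            # ≈ 0.9 pM, as stated in the abstract
print(f"ΔlogC = {delta_logC:.3f}, resolution ≈ {resolution * 1e12:.2f} pM")
```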
Procedia PDF Downloads 250
136 Rendering Religious References in English: Naguib Mahfouz in the Arabic as a Foreign Language Classroom
Authors: Shereen Yehia El Ezabi
Abstract:
The transition from the advanced to the superior level of Arabic proficiency is widely known to pose considerable challenges for English speaking students of Arabic as a Foreign Language (AFL). Apart from the increasing complexity of the grammar at this juncture, together with the sprawling vocabulary, to name but two of those challenges, there is also the somewhat less studied hurdle along the way to superior level proficiency, namely, the seeming opacity of many aspects of Arab/ic culture to such learners. This presentation tackles one specific dimension of such issues: religious references in literary texts. It illustrates how carefully constructed translation activities may be used to expand and deepen students’ understanding and use of them. This is shown to be vital for making the leap to the desired competency, given that such elements, as reflected in customs, traditions, institutions, worldviews, and formulaic expressions lie at the very core of Arabic culture and, as such, pervade all modes and levels of Arabic discourse. A short story from the collection “Stories from Our Alley”, by preeminent novelist Naguib Mahfouz is selected for use in this context, being particularly replete with such religious references, of which religious expressions will form the focus of the presentation. As a miniature literary work, it provides an organic whole, so to speak, within which to explore with the class the most precise denotation, as well as the subtlest connotation of each expression in an effort to reach the ‘best’ English rendering. The term ‘best’ refers to approximating the meaning in its full complexity from the source text, in this case Arabic, to the target text, English, according to the concept of equivalence in translation theory. The presentation will show how such a process generates the sort of thorough discussion and close text analysis which allows students to gain valuable insight into this central idiom of Arabic. 
A variety of translation methods will be highlighted, gleaned from the presenter's extensive work with advanced/superior students in the Center for Arabic Study Abroad (CASA) program at the American University in Cairo. These begin with the literal rendering of expressions, with the purpose of reinforcing vocabulary learning and practicing the rules of derivational morphology as they form each word, since the larger context remains that of an AFL class, as opposed to a translation skills program. However, departures from the literal approach are subsequently explored by degrees, moving along the spectrum of functional and pragmatic freer translations in order to transmit the 'real' meaning in readable English to the target audience, no matter how culture/religion-specific the expression, while remaining faithful to the original. Samples from students' work pre- and post-discussion will be shared, demonstrating how class consensus is formed as to the final English rendering, proposed as the closest match to the Arabic and shown to be the result of the above activities. Finally, a few examples of translation work which students have gone on to publish will be shared to corroborate the effectiveness of this teaching practice.
Keywords: superior level proficiency in Arabic as a foreign language, teaching Arabic as a foreign language, teaching idiomatic expressions, translation in foreign language teaching
Procedia PDF Downloads 198
135 Correlation of Hyperlipidemia with Platelet Parameters in Blood Donors
Authors: S. Nishat Fatima Rizvi, Tulika Chandra, Abbas Ali Mahdi, Devisha Agarwal
Abstract:
Introduction: Blood components are an unexplored area prone to numerous discoveries which influence patient care. Experiments at different levels will further change the present concept of blood banking. Hyperlipidemia is a condition of elevated plasma levels of low-density lipoprotein (LDL) as well as decreased plasma levels of high-density lipoprotein (HDL). Studies show that platelets play a vital role in the progression of atherosclerosis and thrombosis, a major cause of death worldwide. They are activated by many triggers, such as elevated LDL in the blood, resulting in aggregation and formation of plaques. Hyperlipidemic platelets are frequently transfused to patients with various disorders. Screening random donor platelets for hyperlipidemia and correlating the condition with other donor criteria, such as lipid-rich diet, oral contraceptive pill intake, weight, alcohol intake, smoking, sedentary lifestyle, and family history of heart disease, will inform the exclusion criteria for donor selection. This will help to make transfusion safer for patients and the donor deferral criteria more stringent, improving the quality of the blood supply. Technical evaluation and assessment will enable blood bankers to supply safe blood and improve the guidelines for blood safety. Thus, we studied the correlation of hyperlipidemic platelets with platelet parameters, weight, and specific history of the donors. Methodology: This case-control study included 100 blood donor samples; only 30 samples were found to be hyperlipidemic and were included as cases, while the rest were taken as controls. Lipid profiles were measured on a fully automated analyzer (Cobas c311, Roche Diagnostics) using the TRIGL (triglycerides), LDL-C (LDL-Cholesterol plus 2nd generation), CHOL2 (Cholesterol Gen 2), and HDLC3 (HDL-Cholesterol plus 3rd generation) assays, and platelet parameters were analyzed by the Sysmex KX21 automated hematology analyzer.
Results: A significant correlation was found with hyperlipidemic levels in first-time donors: 80% of donors had a history of heart disease, 66.66% had a sedentary lifestyle, 83.3% were smokers, 50% were alcoholic, and 63.33% had consumed a lipid-rich diet. Active physical activity was reported by 40% of donors. We divided the donor samples into two groups based on body weight. In group 1 (hyperlipidemic samples), platelet parameters were normal in 75% and abnormal in 25% of donors weighing >70 kg, while among donors weighing 50-70 kg, 90% were normal and 10% abnormal. In group 2 (non-hyperlipidemic samples), platelet parameters were normal in 95% and abnormal in 5% of donors weighing >70 kg, while among donors weighing 50-70 kg, 66.66% were normal and 33.33% abnormal. Conclusion: The findings indicate that the hyperlipidemic status of donors may affect platelet parameters and can be distinguished on history by weight, smoking, alcohol intake, sedentary lifestyle, physical activity, lipid-rich diet, oral contraceptive pill intake, and family history of heart disease. However, further studies with larger sample sizes are needed to confirm this finding.
Keywords: blood donors, hyperlipidemia, platelet, weight
Procedia PDF Downloads 313
134 Creation of a Trust-Wide, Cross-Speciality, Virtual Teaching Programme for Doctors, Nurses and Allied Healthcare Professionals
Authors: Nelomi Anandagoda, Leanne J. Eveson
Abstract:
During the COVID-19 pandemic, the surge in in-patient admissions across the medical directorate of a district general hospital necessitated the implementation of an incident rota. Conscious of the impact on training and professional development, the idea of developing a virtual teaching programme was conceived. The programme initially aimed to provide junior doctors, specialist nurses, pharmacists, and allied healthcare professionals from medical specialties and those re-deployed from other specialties (e.g., ophthalmology, GP, surgery, psychiatry) the knowledge and skills to manage the deteriorating patient with COVID-19. The programme was later developed to incorporate the general internal medicine curriculum. To facilitate continuing medical education whilst maintaining social distancing during this period, a virtual platform was used to deliver teaching to junior doctors across two large district general hospitals and two community hospitals. Teaching sessions were recorded and uploaded to a common platform, providing a resource for participants to catch up on and re-watch teaching sessions, making strides towards reducing discrimination against the professional development of less-than-full-time trainees and thus creating a learning environment which is inclusive and accessible to adult learners in a self-directed manner. The negative impact of the pandemic on the well-being of healthcare professionals is well documented. To support the multi-disciplinary team, the virtual teaching programme evolved to include sessions on well-being, resilience, and work-life balance. Providing teaching for learners across the multi-disciplinary team (MDT) has been an eye-opening experience. By challenging the concept that learners should only be taught within their own peer groups, the authors have fostered a greater appreciation of the strengths of the MDT and showcased the immense wealth of expertise available within the trust.
The inclusive nature of the teaching and the ease of joining a virtual teaching session have facilitated the dissemination of knowledge across the MDT, thus improving patient care on the frontline. The weekly teaching programme has been running for over eight months, with ongoing engagement, interest, and participation. As described above, the teaching programme has evolved to accommodate the needs of its learners. It has received excellent feedback, with an appreciation of its inclusive, multi-disciplinary, and holistic nature. The COVID-19 pandemic provided a catalyst to rapidly develop novel methods of working and training and widened access and exposure to the virtual technologies available to large organisations. By merging pedagogical expertise and technology, the authors have created an effective online learning environment. Although the authors do not propose to replace face-to-face teaching altogether, this model of virtual, multidisciplinary, cross-site teaching has proven to be a great leveller. It has made high-quality teaching accessible to learners of different confidence levels, grades, specialties, and working patterns.
Keywords: cross-site, cross-speciality, inter-disciplinary, multidisciplinary, virtual teaching
Procedia PDF Downloads 168
133 Assessment of Biofilm Production Capacity of Industrially Important Bacteria under Electroinductive Conditions
Authors: Omolola Ojetayo, Emmanuel Garuba, Obinna Ajunwa, Abiodun A. Onilude
Abstract:
Introduction: Biofilm is a functional community of microorganisms that are associated with a surface or an interface. These adherent cells become embedded within an extracellular matrix composed of polymeric substances, i.e., biofilms refer to biological deposits consisting of both microbes and their extracellular products on biotic and abiotic surfaces. Despite their detrimental effects in medicine, biofilms as natural cell immobilization have found several applications in biotechnology, such as in the treatment of wastewater, bioremediation and biodegradation, desulfurization of gas, and conversion of agro-derived materials into alcohols and organic acids. The means of enhancing immobilized cells have been chemical-inductive, and this affects the medium composition and final product. Physical factors, including electrical, magnetic, and electromagnetic flux, have shown potential for enhancing biofilms depending on the bacterial species, the nature and intensity of emitted signals, the duration of exposure, and the substratum used. However, the concept of cell immobilisation by electrical and magnetic induction is still underexplored. Methods: To assess the effects of physical factors on biofilm formation, six American Type Culture Collection (ATCC) strains (Acetobacter aceti ATCC 15973, Pseudomonas aeruginosa ATCC 9027, Serratia marcescens ATCC 14756, Gluconobacter oxydans ATCC 19357, Rhodobacter sphaeroides ATCC 17023, and Bacillus subtilis ATCC 6633) were used. Standard culture techniques for bacterial cells were adopted. Natural autoimmobilisation potentials of the test bacteria were assessed by simple biofilm ring formation in tubes, while crystal violet binding assay techniques were adopted for the characterisation of biofilm quantity.
Electroinduction of bacterial cells was carried out by direct current (DC) application in cell broth, static magnetic field exposure, and electromagnetic flux, and autoimmobilisation of cells in a biofilm pattern was determined on the various substrata tested, including wood, glass, steel, polyvinylchloride (PVC), and polyethylene terephthalate. The Biot-Savart law was used to quantify magnetic field intensity, and statistical analyses of the data obtained were carried out using analysis of variance (ANOVA) as well as other statistical tools. Results: Biofilm formation by the selected test bacteria was enhanced by the physical factors applied. Electromagnetic induction had the greatest effect on biofilm formation, with magnetic induction producing the least effect across all substrata used. Microbial cell-cell communication could be a possible means via which physical signals affected the cells in a polarisable manner. Conclusion: The enhancement of biofilm formation by bacteria using physical factors has shown that their inherent capability as a cell immobilization method can be further optimised for industrial applications. A possible relationship between the presence of voltage-dependent channels, mechanosensitive channels, and bacterial biofilms could shed more light on this phenomenon.
Keywords: bacteria, biofilm, cell immobilization, electromagnetic induction, substrata
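The abstract mentions using the Biot-Savart law to quantify magnetic field intensity. As a hedged illustration of that law (not the authors' actual coil geometry, which is not described), the field at the centre of a single circular current loop reduces to B = μ₀·I / (2R):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

def loop_center_field(current_a, radius_m):
    """Biot-Savart result for the field magnitude (tesla) at the centre of a circular loop."""
    return MU_0 * current_a / (2 * radius_m)

# Hypothetical exposure setup: 1 A through a 10 cm radius loop
B = loop_center_field(1.0, 0.10)
print(f"B = {B:.3e} T")
```

For more complex coil geometries the full Biot-Savart integral would be evaluated numerically over the wire path.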
Procedia PDF Downloads 187
132 Contact Zones and Fashion Hubs: From Circular Economy to Circular Neighbourhoods
Authors: Tiziana Ferrero-Regis, Marissa Lindquist
Abstract:
Circular Economy (CE) is increasingly seen as the reorganisation of production and consumption, and cities are acknowledged as the sources of many ecological and social problems; at the same time, they can be re-imagined through an ecologically and socially resilient future. The concept of the CE has received pointed critiques for its techno-deterministic orientation and its focus on science and transformation by policy. At the heart of our local re-imagining of the CE into circularity through contact zones is the acknowledgment of collective, spontaneous and shared imaginations of alternative and sustainable futures through the creation of networks of community initiatives that are transformative, creating opportunities that simultaneously make cities rich and enrich humans. This paper presents a mapping project of the fashion and textile ecosystem in Brisbane, Queensland, Australia. Brisbane is currently the most aspirational city in Australia, as its population growth rate is the highest in the country. Yet, Brisbane is considered the least "fashion city" in the country. In contrast, the project revealed a greatly enhanced picture of distinct fashion and textile clusters across greater Brisbane and the adjacency of key services that may act to consolidate CE community contact zones. Clusters to the north of Brisbane and several locales to the south are zones of a greater mix between public/social amenities, walkable zones and local transport networks with educational precincts, community hubs, and concentrations of small enterprises, designers, artisans and waste recovery centres, which will help to establish knowledge of key infrastructure networks and support enmeshing these zones together. The paper presents two case studies of independent designers who work on new and re-designed clothing through recovering pre-consumer textiles and who operate from within creative precincts.
The first case is designer Nelson Molloy, who recently returned to the inner-city suburb of West End with their Chasing Zero Design project. The area was known in the 1980s and 1990s for its alternative lifestyle, with creative independent production, thrifty clothing shops, alternative fashion and a socialist agenda. After 30 years of progressive gentrification of the suburb, which has dislocated many of the artists, designers and artisans, West End is seeing the return and amplification of clusters of artisans, artists, designers and architects. The other case study is Practice Studio, located in a new zone of creative growth, Bowen Hills, north of the CBD. Practice Studio combines retail with a workroom and offers repair and remaking services, becoming a point of reference for young and emerging Australian designers and artists. The paper demonstrates the spatial politics of the CE and the way in which new cultural capital is produced thanks to cultural specificities and resources. It argues for the recognition of contact zones that are created by local actors, communities and knowledge networks, whose grass-roots agency is fundamental for the co-production of the CE’s systems of local governance.
Keywords: contact zones, circular cities, fashion and textiles, circular neighbourhoods, Australia
131 Cultivating Concentration and Flow: Evaluation of a Strategy for Mitigating Digital Distractions in University Education
Authors: Vera G. Dianova, Lori P. Montross, Charles M. Burke
Abstract:
In the digital age, the widespread and frequently excessive use of mobile phones amongst university students is recognized as a significant distractor that interferes with their ability to enter a deep state of concentration during studies and diminishes their prospects of experiencing the enjoyable and instrumental state of flow, as defined and described by psychologist M. Csikszentmihalyi. This study targeted 50 university students with the aim of teaching them to cultivate their ability to engage in deep work and to attain the state of flow, fostering more effective and enjoyable learning experiences. Prior to the start of the intervention, all participating students completed a comprehensive survey based on a variety of validated scales assessing their inclination toward lifelong learning, frequency of flow experiences during study, frustration tolerance, sense of agency, as well as their love of learning and daily time devoted to non-academic mobile phone activities. Several days after this initial assessment, students received a 90-minute lecture on the principles of flow and deep work, accompanied by a critical discourse on the detrimental effects of excessive mobile phone usage. They were encouraged to practice deep work and strive for frequent flow states throughout the semester. Subsequently, students submitted weekly surveys, including the 10-item CORE Dispositional Flow Scale and a 3-item agency scale, and disclosed their average daily hours spent on non-academic mobile phone usage. As a final step, at the end of the semester, students engaged in reflective report writing, sharing their experiences and evaluating the intervention's effectiveness. They considered alterations in their love of learning, reflected on the implications of their mobile phone usage, contemplated improvements in their tolerance for boredom and perseverance in complex tasks, and pondered the concept of lifelong learning.
Additionally, students assessed whether they actively took steps towards managing their recreational phone usage and towards improving their commitment to becoming lifelong learners. Employing a mixed-methods approach, our study offers insights into the dynamics of concentration, flow, mobile phone usage and attitudes towards learning among undergraduate and graduate university students. The findings of this study aim to promote profound contemplation, on the part of both students and instructors, on the rapidly evolving digital-age higher education environment. In an era defined by digital and AI advancements, the ability to concentrate, to experience the state of flow, and to love learning has never been more crucial. This study underscores the significance of addressing mobile phone distractions and providing strategies for cultivating deep concentration. The insights gained can guide educators in shaping effective learning strategies for the digital age. By nurturing a love for learning and encouraging lifelong learning, educational institutions can better prepare students for a rapidly changing labor market, where adaptability and continuous learning are paramount for success in a dynamic career landscape.
Keywords: deep work, flow, higher education, lifelong learning, love of learning
130 Modeling the Present Economic and Social Alienation of the Working Class in South Africa in the Musical Production ‘From Marikana to Mahagonny’ at Durban University of Technology (DUT)
Authors: Pamela Tancsik
Abstract:
The stage production in 2018, titled ‘From Marikana to Mahagonny’, began with a prologue in the form of the award-winning documentary ‘Miners Shot Down’ by Rehad Desai, followed by Brecht/Weill’s song play or scenic cantata ‘Mahagonny’, premièred in Baden-Baden in 1927. The central directorial concept of the DUT musical production ‘From Marikana to Mahagonny’ was to show a connection between the socio-political alienation of mineworkers in present-day South Africa and Brecht’s alienation effect in his scenic cantata ‘Mahagonny’. Marikana is a mining town about 50 km west of South Africa’s capital, Pretoria. Mahagonny is a fantasy name for a utopian mining town in the United States. The characters, setting, and lyrics refer to America with songs like ‘Benares’ and ‘Moon of Alabama’ and the use of typical American inventions such as dollars, saloons, and the telephone. The six singing characters in ‘Mahagonny’ all have typical American names: Charlie, Billy, Bobby, and Jimmy, and the two girls they meet later are called Jessie and Bessie. The four men set off to seek Mahagonny. For them, it is the ultimate dream destination promising the fulfilment of all their desires, such as girls, alcohol, and dollars – in short, materialistic goals. Instead of finding a paradise, they experience how money, the practice of exploitative capitalism, and the lack of any morals and humanity destroy their lives. In the end, Mahagonny is demolished by a hurricane, an event which happened in 1926 in the United States. ‘God’ in person arrives disillusioned and bitter, complaining about violent and immoral mankind. In the end, he sends them all to hell. Charlie, Billy, Bobby, and Jimmy reply that this punishment does not mean anything to them because they have already been in hell for a long time – hell on earth is a reality, so the threat of hell after life is meaningless.
Human life was also taken during the stand-off between striking mineworkers and the South African police on 16 August 2012. Miners from the Lonmin Platinum Mine went on an illegal strike, equipped with bush knives and spears. They were striking because their living conditions had never improved; they still lived in muddy shacks with no running water and electricity. Wages were as low as R4,000 (South African Rands), equivalent to just over 200 Euro per month. By August 2012, the negotiations between Lonmin management and the mineworkers’ unions, asking for a minimum wage of R12,500 per month, had failed. Police were sent in by the Government, and when the miners did not withdraw, the police shot at them. 34 were killed, some by bullets in their backs while running away and trying to hide behind rocks. In the musical play ‘From Marikana to Mahagonny’, audiences in South Africa are confronted with a documentary about Marikana, followed by Brecht/Weill’s scenic cantata, highlighting the tragic parallels between the Mahagonny story and characters from 1927 America and the Lonmin workers today in South Africa, showing that in 95 years, capitalism has not changed.
Keywords: alienation, Brecht/Weill, Mahagonny, Marikana/South Africa, musical theatre
129 Method of Nursing Education: History Review
Authors: Cristina Maria Mendoza Sanchez, Maria Angeles Navarro Perán
Abstract:
Introduction: Nursing as a profession, from its initial formation and after its development in practice, has been built and identified mainly from its technical competence and professionalization within the positivist approach of the XIX century, which provides a conception of the disease built on the basis of the biomedical paradigm, where the care provided is more focused on the physiological processes and the disease than on the suffering person understood as a whole. The main issue in need of study here is a review of the nursing profession's history, to understand what the profession was like before the XIX century. It is unclear if there were organizations or people with knowledge about looking after others or if many people survived by chance. Holistic care, in which the appearance of disease directly affects all dimensions of the person (physical, emotional, cognitive, social and spiritual), is not a concept from the 21st century. It has been common practice, most probably since life was established in this world, with the final purpose of covering all these perspectives through quality care. Objective: In this paper, we describe and analyze the history of nursing education, reviewing and analysing the theoretical foundations of clinical teaching and learning in nursing, with the final purpose of determining and describing the development of the nursing profession throughout history. Method: We have carried out a descriptive systematic review, systematically searching for manuscripts and articles in the following health science databases: Pubmed, Scopus, Web of Science, Temperamentvm and CINAHL. The selection of articles has been made according to PRISMA criteria, with a critical reading of the full texts using the CASPe method.
To complement this, we have read a range of historical and contemporary sources to support the review, such as the manuals of Florence Nightingale and John of God, as primary manuscripts to establish the origin of modern nursing and its professionalization. We have considered and applied ethical considerations of data processing. Results: After applying inclusion and exclusion criteria in our search of Pubmed, Scopus, Web of Science, Temperamentvm and CINAHL, we obtained 51 research articles. We have analyzed them, distinguishing them by year of publication and type of study. With the articles obtained, we can see the importance of our background as a profession in public health before modern times, and the value of reviewing our past to face challenges in the near future. Discussion: The important influence of key figures other than Nightingale has been overlooked, and it emerges that nursing management and the development of the professional body have a longer and more complex history than is generally accepted. Conclusions: There is a paucity of studies on the subject of this review, making it difficult to extract precise evidence and recommendations about nursing before modern times. Even so, an increase in research on nursing history has been observed. In light of the aspects analyzed, the need for new research in the history of nursing emerges from this perspective, in order to germinate studies of the historical construction of care before the XIX century and of the theories created then. We can affirm that knowledge and ways of caring were taught before the XIX century, but they were not called theories, as these concepts were created in modern times.
Keywords: nursing history, nursing theory, Saint John of God, Florence Nightingale, learning, nursing education
128 Achieving Flow at Work: An Experience Sampling Study to Comprehend How Cognitive Task Characteristics and Work Environments Predict Flow Experiences
Authors: Jonas De Kerf, Rein De Cooman, Sara De Gieter
Abstract:
For many decades, scholars have aimed to understand how work can become more meaningful by both maximizing potential and enhancing feelings of satisfaction. One of the largest contributions towards such positive psychology was made with the introduction of the concept of ‘flow,’ which refers to a condition in which people feel intense engagement and effortless action. Since then, valuable research on work-related flow has indicated that this state of mind is related to positive outcomes for both organizations (e.g., social, supportive climates) and workers (e.g., job satisfaction). Yet, scholars still do not fully comprehend how such deep involvement at work is obtained, given that flow is considered a short-term, complex, and dynamic experience. Most research neglects that people who experience flow ought to be optimally challenged so that intense concentration is required. Because attention is at the core of this enjoyable state of mind, this study aims to comprehend how elements that affect workers’ cognitive functioning impact flow at work. Research on cognitive performance suggests that working on mentally demanding tasks (e.g., information processing tasks) requires workers to concentrate deeply, in turn leading to flow experiences. Based on social facilitation theory, working on such tasks in an isolated environment eases concentration. Prior research has indicated that working at home (instead of working at the office) or in a closed office (rather than in an open-plan office) impacts employees’ overall functioning in terms of concentration and productivity. Consequently, we advance such knowledge and propose an interaction by combining cognitive task characteristics and work environments among part-time teleworkers.
Hence, we not only aim to shed light on the relation between cognitive tasks and flow but also provide empirical evidence that workers performing such tasks achieve the highest states of flow while working either at home or in closed offices. In July 2022, an experience-sampling study will be conducted that uses a semi-random signal schedule to understand how task and environment predictors together impact part-time teleworkers’ flow. More precisely, about 150 knowledge workers will fill in multiple surveys a day for two consecutive workweeks to report their flow experiences, cognitive tasks, and work environments. Preliminary results from a pilot study indicate that at the between-person level, tasks high in information processing go along with high self-reported fluent productivity (i.e., making progress). As expected, evidence was found for higher fluency in productivity for workers performing information processing tasks both at home and in a closed office, compared to those performing the same tasks at the office or in open-plan offices. This study expands the current knowledge on work-related flow by looking at task and environmental predictors that enable workers to obtain such a peak state. While doing so, our findings suggest that practitioners should strive for ideal alignments between tasks and work locations to work with both deep involvement and gratification.
Keywords: cognitive work, office layout, work location, work-related flow
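The semi-random signal schedule mentioned in the abstract is typically implemented as a stratified draw: the workday is split into equal blocks and one prompt time is sampled at random within each block, with a minimum gap so prompts do not cluster at block boundaries. The study's actual scheduling tool is not named, so the sketch below is illustrative; the block count, gap, and workday times are assumptions.

```python
import random
from datetime import datetime, timedelta

def semi_random_schedule(day_start, day_end, n_signals, min_gap_minutes=30, seed=None):
    """Split the workday into n_signals equal blocks and draw one random
    prompt per block, enforcing a minimum gap between consecutive prompts."""
    rng = random.Random(seed)
    total = (day_end - day_start).total_seconds()
    block = total / n_signals
    assert min_gap_minutes * 60 < block, "gap must fit inside one block"
    offsets = []
    for i in range(n_signals):
        # resample within the block until the gap from the previous prompt holds
        while True:
            t = rng.uniform(i * block, (i + 1) * block)
            if not offsets or t - offsets[-1] >= min_gap_minutes * 60:
                offsets.append(t)
                break
    return [day_start + timedelta(seconds=t) for t in offsets]

start = datetime(2022, 7, 4, 9, 0)   # assumed 9:00-17:00 workday
end = datetime(2022, 7, 4, 17, 0)
schedule = semi_random_schedule(start, end, n_signals=5, seed=1)
```

Stratification keeps prompt times unpredictable to participants while still covering the whole workday, which is what distinguishes a semi-random schedule from a purely random one.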
127 Characteristics of Plasma Synthetic Jet Actuator in Repetitive Working Mode
Authors: Haohua Zong, Marios Kotsonis
Abstract:
Plasma synthetic jet actuator (PSJA) is a new concept of zero-net-mass-flow actuator which utilizes pulsed arc/spark discharge to rapidly pressurize gas in a small cavity under constant-volume conditions. The unique combination of high exit jet velocity (>400 m/s) and high actuation frequency (>5 kHz) provides a promising solution for high-speed, high-Reynolds-number flow control. This paper focuses on the performance of the PSJA in repetitive working mode, which is more relevant to future flow control applications. A two-electrode PSJA (cavity volume: 424 mm³, orifice diameter: 2 mm) together with a capacitive discharge circuit (discharge energy: 50 mJ-110 mJ) is designed to enable repetitive operation. A Time-Resolved Particle Image Velocimetry (TR-PIV) system working at 10 kHz is exploited to investigate the influence of discharge frequency on the performance of the PSJA. In total, seven cases are tested, covering a wide range of discharge frequencies (20 Hz-560 Hz). The pertinent flow features (shock wave, vortex ring and jet) remain the same for single-shot mode and repetitive working mode. A shock wave is issued prior to jet eruption. Two distinct vortex rings are formed in one cycle. The first one is produced by the starting jet, whereas the second one is related to the shock-wave reflection in the cavity. A sudden pressure rise is induced at the throat inlet by the reflection of the primary shock wave, promoting the shedding of the second vortex ring. In one cycle, the jet exit velocity first increases sharply, then decreases almost linearly. Afterwards, an alternate occurrence of multiple jet stages and refresh stages is observed. By monitoring the dynamic evolution of exit velocity in one cycle, some integral performance parameters of the PSJA can be deduced. As frequency increases, the jet intensity in the steady phase decreases monotonically.
In the investigated frequency range, jet duration time drops from 250 µs to 210 µs and peak jet velocity decreases from 53 m/s to approximately 39 m/s. The jet impulse and the expelled gas mass (0.69 µN∙s and 0.027 mg at 20 Hz) decline by 48% and 40%, respectively. However, the electro-mechanical efficiency of the PSJA, defined as the ratio of jet mechanical energy to capacitor energy, does not show a significant difference (on the order of 0.01%). Fourier transformation of the temporal exit velocity signal indicates two dominant frequencies. One corresponds to the discharge frequency, while the other accounts for the alternation frequency of the jet stage and refresh stage in one cycle. The alternation period (approximately 300 µs) is independent of discharge frequency, and possibly determined intrinsically by the actuator geometry. A simple analytical model is established to interpret the alternation of jet stage and refresh stage. Results show that the dynamic response of exit velocity to a small-scale disturbance (a jump in cavity pressure) can be treated as a second-order under-damped system. The oscillation frequency of the exit velocity, namely the alternation frequency, is positively proportional to exit area, but inversely proportional to cavity volume and throat length. The theoretical value of the alternation period (305 µs) agrees well with the experimental value.
Keywords: plasma, synthetic jet, actuator, frequency effect
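The reported scaling of the alternation frequency (growing with exit area, falling with cavity volume and throat length) matches a Helmholtz-resonator approximation of the cavity-throat system. The sketch below uses that approximation, which is not necessarily the authors' analytical model; the throat length (2 mm) and room-temperature sound speed are assumed, the cavity volume and orifice diameter are taken from the abstract.

```python
import math

def helmholtz_period(orifice_d, cavity_vol, throat_len, c=343.0):
    """Oscillation period of a Helmholtz resonator:
    f = (c / 2*pi) * sqrt(A / (V * L)), T = 1 / f.
    This reproduces the stated scaling: f grows with exit area A
    and falls with cavity volume V and throat length L."""
    area = math.pi * (orifice_d / 2.0) ** 2
    f = (c / (2.0 * math.pi)) * math.sqrt(area / (cavity_vol * throat_len))
    return 1.0 / f

# Geometry from the abstract (SI units); throat length is an assumption.
T = helmholtz_period(orifice_d=2e-3, cavity_vol=424e-9, throat_len=2e-3)
# ~3.0e-4 s for this geometry, i.e. on the order of the reported ~305 us
```

With plausible dimensions the approximation lands near the reported period, which supports the abstract's claim that the alternation is set intrinsically by actuator geometry rather than by discharge frequency.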
126 The 10,000-Fold Effect of Retrograde Neurotransmission, a New Concept for Stroke Revival: Use of Intracarotid Sodium Nitroprusside
Authors: Vinod Kumar
Abstract:
Background: Tissue Plasminogen Activator (tPA) showed a level 1 benefit in acute stroke (within 3-6 hrs). Intracarotid sodium nitroprusside (ICSNP) has been studied in this context for its wide treatment window, fast recovery and affordability. This work proposes two mechanisms for acute cases and one mechanism for chronic cases, which are interrelated, for physiological recovery. a) Retrograde Neurotransmission (acute cases): 1) Normal excitatory impulse: at the synaptic level, glutamate activates NMDA receptors, with nitric oxide synthetase (NOS) on the postsynaptic membrane, for further propagation by the calcium-calmodulin complex. Nitric oxide (NO, produced by NOS) travels backward across the chemical synapse and binds the axon-terminal NO receptor/sGC of a presynaptic neuron, regulating anterograde neurotransmission (ANT) via retrograde neurotransmission (RNT). Heme is the ligand-binding site of the NO receptor/sGC. Heme exhibits >10,000-fold higher affinity for NO than for oxygen (the 10,000-fold effect), and binding is completed within 20 msec. 2) Pathological conditions: normal synaptic activity, including both ANT and RNT, is absent. A NO donor (SNP) releases NO from NOS in the postsynaptic region. NO travels backward across the chemical synapse to bind to the heme of a NO receptor in the axon terminal of a presynaptic neuron, generating an impulse, as under normal conditions. b) Vasospasm (acute cases): Perforators show vasospastic activity. NO vasodilates the perforators via the NO-cAMP pathway. c) Long-Term Potentiation (LTP) (chronic cases): The NO-cGMP pathway plays a role in LTP at many synapses throughout the CNS and at the neuromuscular junction. LTP has been reviewed both generally and with respect to brain regions specific for memory/learning.
Aims/Study Design: The principles of “generation of impulses from the presynaptic region to the postsynaptic region by very potent RNT (the 10,000-fold effect)” and “vasodilation of arteriolar perforators” are the basis of the authors’ hypothesis to treat stroke cases. Case-control prospective study. Materials and Methods: The experimental population included 82 stroke patients (10 patients were given control treatments without superfusion or with 5% dextrose superfusion, and 72 patients comprised the ICSNP group). The mean time for superfusion was 9.5 days post-stroke. Pre- and post-ICSNP status was monitored by NIHSS, MRI and TCD. Results: After 90 seconds in the ICSNP group, the mean change in the NIHSS score was a decrease of 1.44 points, or 6.55%; after 2 h, there was a decrease of 1.16 points; after 24 h, there was an increase of 0.66 points, or 2.25%, compared to the control-group increase of 0.7 points, or 3.53%; at 7 days, there was an 8.61-point decrease, or 44.58%, compared to the control-group increase of 2.55 points, or 22.37%; at 2 months in the ICSNP group, there was a 6.94-point decrease, or 62.80%, compared to the control-group decrease of 2.77 points, or 8.78%. TCD findings were documented and improvements were noted. Conclusions: ICSNP is a swift-acting drug in the treatment of stroke, acting within 90 seconds on day 9.5 post-stroke, with a small setback at 24 hours from which patients recover quickly.
Keywords: brain infarcts, intracarotid sodium nitroprusside, perforators, vasodilations, retrograde transmission, the 10,000-fold effect
125 Practice-Based Approach to the Development of Family Medicine Residents’ Educational Environment
Authors: Lazzat M. Zhamaliyeva, Nurgul A. Abenova, Gauhar S. Dilmagambetova, Ziyash Zh. Tanbetova, Moldir B. Ahmetzhanova, Tatyana P. Ostretcova, Aliya A. Yegemberdiyeva
Abstract:
Introduction: There are many reasons for the weak training of family doctors in Kazakhstan: the unified national educational program is not focused on competencies, the role of the general practitioner (GP) is not clear, funding for the health care and education system is poor, teaching and assessment methods are outdated, and management is inefficient. We highlight two issues in particular. Firstly, academic teachers of family medicine (FM) in Kazakhstan do not practice as family doctors; most of them are narrow specialists (pediatricians, therapists, surgeons, etc.), and they usually hold one-time consultations; clinical mentors from practical healthcare (non-academic teachers) do not have teaching competences, and the vast majority of them are also narrow specialists. Secondly, clinical sites (polyclinics) are unprepared for general practice and do not follow the principles of family medicine; residents do not like to be in primary health care (PHC) settings due to the chaos that occurs there, as well as the lack of the equipment needed for mastering and consolidating practical skills. Aim: We present the concept of the family physicians’ training office (FPTO), which is being created as a friendly learning environment for young general practitioners and for the involvement of academic teachers of family medicine in the practical work and innovative development of PHC. Methodology: In developing the conceptual framework and identifying practical activities, we drew on the literature, expert input, and interviews. Results: The goal of the FPTO is to create a favorable educational and clinical environment for the development of FM residents’ competencies, in which residents, together with academic teachers and clinical mentors, can understand and accept the principles of family medicine, improve clinical knowledge and skills, and gain experience in improving the quality of their practice on a scientific basis.
The three main areas of office activity are providing primary care to patients, improving educational services for FM residents and other medical workers, and promoting research and innovation in PHC. The office arranges for residents to see outpatients at least 50% of the time, and teachers of FM departments conduct general medical appointments alongside residents for at least a quarter of their working time. Taking into account the educational and scientific workload, the population attached to one GP does not exceed 500 persons. The equipment of the office allows FPTO workers to perform invasive and other manipulations without sending patients to other clinics. In the office, training for residents is focused on their needs and aimed at achieving the required level of competence. International methodologies and assessment tools are adapted to local conditions and evaluated for their effectiveness and acceptability. Residents and their faculty actively conduct research in the field of family medicine. Conclusions: We propose to change the learning environment in order to create teams of like-minded people, uniting residents and teachers even more closely for the development of family medicine. The offices will also invest resources in developing and maintaining young doctors' interest in family medicine.
Keywords: educational environment, family medicine residents, family physicians’ training office, primary care research
124 Entrepreneurial Venture Creation through Anchor Event Activities: Pop-Up Stores as On-Site Arenas
Authors: Birgit A. A. Solem, Kristin Bentsen
Abstract:
Scholarly attention in entrepreneurship is currently directed towards understanding entrepreneurial venture creation as a process: the journey of new economic activities from nonexistence to existence, often studied through flow or network models. To complement existing research on entrepreneurial venture creation with more interactivity-based research on organized activities, this study examines two pop-up stores as anchor events involving the on-site activities of fifteen participating entrepreneurs launching their new ventures. The pop-up stores were arranged in two middle-sized Norwegian cities and contained different brand stores that brought together actors of sub-networks and communities executing venture creation activities. The pop-up stores became on-site arenas for the entrepreneurs to create, maintain, and rejuvenate their networks, at the same time as becoming venues for the temporal coordination of activities involving existing and potential customers in their venture creation. In this work, we apply a conceptual framework based on frequently addressed dilemmas within entrepreneurship theory (discovery/creation, causation/effectuation) to further shed light on the broad range of on-site anchor event activities and their venture creation outcomes. The dilemma-based concepts are applied as an analytic toolkit to pursue answers regarding the nature of anchor event activities typically found within entrepreneurial venture creation and how these anchor event activities affect entrepreneurial venture creation outcomes. Our study combines researcher participation with 200 hours of observation and twenty in-depth interviews. Data analysis followed established guidelines for hermeneutic analysis and was intimately intertwined with ongoing data collection. Data were coded and categorized in NVivo 12 software, and the coding was iterated several times as patterns steadily developed.
Our findings suggest that the core anchor event activities typically found within entrepreneurial venture creation are: concept and product experimentation with visitors; arrangements to socialize (evening specials, auctions, and exhibitions); store-in-store concepts; arranged meeting places for peers; and close connection with the municipality and property owners. Further, this work points to four main entrepreneurial venture creation outcomes derived from the core anchor event activities: (1) venture attention, (2) venture idea-realization, (3) venture collaboration, and (4) venture extension. Our findings show that, depending on which anchor event activities are applied, the outcomes vary. Theoretically, this study offers two main implications. First, anchor event activities are both discovered and created, following the logic of causation, at the same time as being experimental, based on the “learning by doing” principles of effectuation during execution. Second, our research enriches prior studies on venture creation as a process. In this work, entrepreneurial venture creation activities and outcomes are understood through pop-up stores as on-site anchor event arenas, particularly suitable for the interactivity-based research requested by the entrepreneurship field. This study also reveals important managerial implications, such as that entrepreneurs should allow themselves to find creative physical venture creation arenas (e.g., pop-up stores, showrooms), as well as collaborate with partners when discovering and creating concepts and activities based on new ideas. In this way, they allow themselves both to strategically plan for and to continually experiment with their venture.
Keywords: anchor event, interactivity-based research, pop-up store, entrepreneurial venture creation
123 Readout Development of an LGAD-Based Hybrid Detector for Microdosimetry (HDM)
Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincenzo Monaco, Boscardin Maurizio, La Tessa Chiara
Abstract:
Clinical outcomes collected over the past three decades have suggested that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome therapy uncertainties and optimize treatment outcome, the best possible description of radiation quality is of paramount importance for linking radiation physical dose to biological effects. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of lineal energy y, defined as the energy deposition in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called “Mean Chord Length” (MCL) approximation, and it is related to the detector geometry. To improve the characterization of the radiation field quality, we define a new quantity that replaces the MCL with the actual particle track length inside the microdosimeter. In order to measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC detector records the energy deposition in a region equivalent to 2 µm of tissue, while the LGADs are very suitable for particle tracking because of their thickness, which can be thinned down to tens of micrometers, and their fast response to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations.
Currently, a dedicated readout is under development. This two-stage detector requires two different systems whose complementary information must be joined for each event: the energy deposition in the TEPC and the respective track length recorded by the LGAD tracker. This challenge is being addressed by implementing SoC (System on Chip) technology, relying on Field-Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three different signal amplification legs and is carried out by means of three ADCs mounted on an FPGA board. The signal of each activated LGAD strip is processed by dedicated chips, and the activated strip is then stored, again relying on FPGA-based solutions. In this work, we will provide a detailed description of the HDM geometry and the SoC solutions that we are implementing for the readout.
Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry
Procedia PDF Downloads 174
122 University Climate and Psychological Adjustment: African American Women’s Experiences at Predominantly White Institutions in the United States
Authors: Faheemah N. Mustafaa, Tamarie Macon, Tabbye Chavous
Abstract:
A major concern of university leaders worldwide is how to create environments where students from diverse racial/ethnic, national, and cultural backgrounds can thrive. Over the past decade or so in the United States, African American women have done exceedingly well in terms of college enrollment, academic performance, and completion. However, the relative academic successes of African American women in higher education have in some ways overshadowed social challenges many Black women continue to encounter on college campuses in the United States. Within predominantly White institutions (PWIs) in particular, there is consistent evidence that many Black students experience racially hostile climates. However, research studies on racial climates within PWIs have mostly focused on cross-sectional comparisons of minority and majority group experiences, and few studies have examined campus racial climate in relation to short- and longer-term well-being. One longitudinal study reported that African American women’s psychological well-being was positively related to their comfort in cross-racial interactions (a concept closely related to campus climate). Thus, our primary research question was: Do African American women’s perceptions of campus climate (tension and positive association) during their freshman year predict their reports of psychological distress and well-being (self-acceptance) during their sophomore year? Participants were part of a longitudinal survey examining African American college students’ academic identity development, particularly in Science, Technology, Engineering, and Mathematics (STEM) fields. The final subsample included 134 self-identified African American/Black women enrolled in PWIs.
Accounting for background characteristics (mother’s education, family income, interracial contact, and prior levels of outcomes), we employed hierarchical regression to examine relationships between campus racial climate during freshman year and psychological adjustment one year later. Both regression models significantly predicted African American women’s psychological outcomes (for distress, F(7,91)= 4.34, p < .001; and for self-acceptance, F(7,90)= 4.92, p < .001). Although none of the controls were significant predictors, perceptions of racial tension on campus were associated with both distress and self-acceptance. Greater perceived tension was related to African American women’s greater psychological distress the following year (B= 0.22, p= .01). Additionally, racial tension predicted later self-acceptance in the expected direction: Higher first-year reports of racial tension were related to less positive attitudes toward the self during the sophomore year (B= -0.16, p= .04). However, perceptions that it was normative for Black and White students to socialize on campus (or positive association scores) were unrelated to psychological distress or self-acceptance. Findings highlight the relevance of examining multiple facets of campus racial climate in relation to psychological adjustment, with possible emphasis on the import of racial tension on African American women’s psychological adjustment. Results suggest that negative dimensions of campus racial climate may have lingering effects on psychological well-being, over and above more positive aspects of climate. Thus, programs targeted toward improving student relations on campus should consider addressing cross-racial tensions.
Keywords: higher education, psychological adjustment, university climate, university students
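The F-change logic behind a hierarchical regression of this kind can be sketched as follows. This is a minimal illustration on synthetic data, not the study's dataset or code; the sample size, variable names, and effect sizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-in data: four background controls and two freshman-year
# climate predictors (tension, positive association).
controls = rng.normal(size=(n, 4))
tension = rng.normal(size=n)
positive = rng.normal(size=n)
# Distress driven by tension plus noise, per the reported direction of effect.
distress = 0.4 * tension + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept, via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_reduced = r_squared(controls, distress)            # step 1: controls only
X_full = np.column_stack([controls, tension, positive])
r2_full = r_squared(X_full, distress)                 # step 2: add climate block

# F-change for adding k=2 predictors on top of p=4 controls (+ intercept).
k, p = 2, 4
f_change = ((r2_full - r2_reduced) / k) / ((1.0 - r2_full) / (n - p - k - 1))
print(round(f_change, 2))
```

A significant F-change indicates that the climate block explains variance beyond the controls, which is the comparison the reported models rest on.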
Procedia PDF Downloads 384
121 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications
Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino
Abstract:
The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability to each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building’s damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II.
ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope wall and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses
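The basic vulnerability-hazard integration that such a tool performs can be sketched numerically. The snippet below is a toy illustration, not ERMESS code: both the hazard curve and the vulnerability model are invented placeholders, and the expected annual loss is obtained by summing losses against the occurrence rates of each intensity bin.

```python
import numpy as np

# Illustrative wind hazard: annual rate of exceeding gust speed v (m/s),
# a toy exponential model, NOT site-specific data.
v = np.linspace(20.0, 80.0, 121)
rate = 0.5 * np.exp(-(v - 20.0) / 12.0)          # lambda(v), decreasing in v

# Illustrative vulnerability model: mean loss ratio vs gust speed
# (smooth curve between a lower and an upper threshold).
loss_ratio = np.clip((v - 30.0) / 40.0, 0.0, 1.0) ** 2

# Expected annual loss ratio: sum L(v_i) times the occurrence rate of each
# intensity bin, d_lambda_i = lambda_i - lambda_{i+1} (last bin closed at 0).
d_rate = -np.diff(np.append(rate, 0.0))
eal = float(np.sum(loss_ratio * d_rate))
print(eal)
```

In the component-based variant, `loss_ratio` would itself be assembled from component fragilities and a consequence model, via the combination rules the abstract mentions.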
Procedia PDF Downloads 180
120 Typology of Fake News Dissemination Strategies in Social Networks in Social Events
Authors: Mohadese Oghbaee, Borna Firouzi
Abstract:
The emergence of the Internet and, more specifically, the formation of social media has provided the ground for new types of content dissemination to receive attention. In recent years, social media users have shared information, communicated with others, and exchanged opinions on social events in this space. Much of the information published in this space is suspicious and produced with the intention of deceiving others. Such content is often called "fake news". By mixing with correct information and misleading public opinion, fake news can endanger the security of countries and deprive audiences of the basic right of free access to real information; competing governments, opposition elements, profit-seeking individuals and even rival organizations, aware of this capacity, act to distort and overturn facts in the virtual space of target countries and communities on a large scale and to steer public opinion towards their goals. This extensive de-truthing of societies' information space has created a wave of harm and worry all over the world. These concerns have opened a new path of research into the timely containment and reduction of the destructive effects of fake news on public opinion. In addition, the expansion of this phenomenon has the potential to create serious problems for societies; its impact on events such as the 2016 American elections, Brexit, the 2017 French elections and the 2019 Indian elections has raised concerns and led to the adoption of approaches to deal with it. In recent years, a simple look at the growth trend of research in Scopus shows a sharp increase in studies with the keyword "false information", peaking at 524 publications in 2020, whereas in 2015 only 30 scientific-research publications appeared in this field.
Considering that one of the capabilities of social media is to create a context for the dissemination of news and information, both true and false, this article classifies the strategies for spreading fake news in social networks during social events. To achieve this goal, the thematic analysis research method was chosen. First, an extensive library study of global sources was conducted. Then, in-depth interviews were conducted with 18 well-known specialists and experts in the field of news and media in Iran, selected by purposive sampling. By analyzing the data using the thematic analysis method, strategies were obtained. The strategies identified so far (the research is in progress) include unrealistically strengthening/weakening the speed and content of the event, stimulating psycho-media movements, targeting emotional audiences such as women, teenagers and young people, fomenting public hatred, framing reactions to events as legitimate/illegitimate, incitement to physical conflict, trivialization of violent protests, and the targeted publication of images and interviews.
Keywords: fake news, social network, social events, thematic analysis
Procedia PDF Downloads 63
119 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit
Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili
Abstract:
Metamaterials cross over classic physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Denis Gabor’s invention: holography. But the major difficulty here is the lack of a suitable recording medium. So some enhancements were essential, and the 2D version of bulk metamaterials, the so-called metasurface, has been introduced. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by means of solving Maxwell’s equations. In this context, integral methods are emerging as an important tool to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. But solving this kind of equation tends to become more complicated and time-consuming as the structural complexity increases. Here, the equivalent circuit method offers the most scalable way to develop an integral method formulation. In fact, to ease the resolution of Maxwell’s equations, the method of Generalised Equivalent Circuit was proposed to carry the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electric image of the studied structure using the discontinuity plane paradigm, taking into account its environment. The electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy.
The environmental effects are included by the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements, which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at Terahertz frequencies. The metasurface’s building block consists of a thin gold film, a SiO₂ dielectric spacer and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene’s chemical potential on the unit cell input impedance. It was found that the variation of the complex conductivity of graphene allows controlling the phase and amplitude of the reflection coefficient at each element of the array. From the results obtained here, we determined that the phase modulation is realized by adjusting graphene’s complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers full 2π reflection phase control.
Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain
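The tuning mechanism described above rests on the dependence of graphene's surface conductivity on its chemical potential. The sketch below uses the standard Drude-like intraband (Kubo) approximation commonly applied at THz frequencies; the relaxation time, temperature, and chemical potential values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Physical constants (SI).
e = 1.602176634e-19      # elementary charge, C
kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def sigma_intraband(omega, mu_c_ev, tau=1e-13, T=300.0):
    """Drude-like intraband surface conductivity of graphene (Kubo model),
    a standard approximation at THz frequencies (e^{j*omega*t} convention)."""
    mu_c = mu_c_ev * e
    x = mu_c / (kB * T)
    bracket = x + 2.0 * np.log(1.0 + np.exp(-x))
    return (e**2 * kB * T / (np.pi * hbar**2)) * bracket / (1.0 / tau + 1j * omega)

omega = 2.0 * np.pi * 1e12          # 1 THz
for mu in (0.1, 0.3, 0.5):          # chemical potential in eV (set by gate bias)
    s = sigma_intraband(omega, mu)
    print(mu, abs(s), np.angle(s))  # magnitude and phase shift with mu_c
```

Because both the magnitude and phase of the conductivity shift with the chemical potential, the reflection coefficient of each graphene patch can be tuned electrically rather than geometrically, which is the point the abstract makes.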
Procedia PDF Downloads 175
118 The 10,000 Fold Effect of Retrograde Neurotransmission: A New Concept for Cerebral Palsy Revival by the Use of Nitric Oxide Donors
Authors: V. K. Tewari, M. Hussain, H. K. D. Gupta
Abstract:
Background: Nitric Oxide Donors (NODs) (intrathecal sodium nitroprusside (ITSNP) and oral tadalafil 20mg post ITSNP) have been studied in this context in cerebral palsy patients for fast recovery. This work proposes two mechanisms for acute cases and one mechanism for chronic cases, which are interrelated, for physiological recovery. a) Retrograde Neurotransmission (acute cases): 1) Normal excitatory impulse: at the synaptic level, glutamate activates NMDA receptors, with nitric oxide synthetase (NOS) on the postsynaptic membrane, for further propagation by the calcium-calmodulin complex. Nitric oxide (NO, produced by NOS) travels backward across the chemical synapse and binds the axon-terminal NO receptor/sGC of a presynaptic neuron, regulating anterograde neurotransmission (ANT) via retrograde neurotransmission (RNT). Heme is the ligand-binding site of the NO receptor/sGC. Heme exhibits > 10,000-fold higher affinity for NO than for oxygen (the 10,000-fold effect), and binding is completed in 20 msec. 2) Pathological conditions: normal synaptic activity, including both ANT and RNT, is absent. A NO donor (SNP) releases NO from NOS in the postsynaptic region. NO travels backward across a chemical synapse to bind to the heme of a NO receptor in the axon terminal of a presynaptic neuron, generating an impulse, as under normal conditions. b) Vasospasm (acute cases): Perforators show vasospastic activity. NO vasodilates the perforators via the NO-cAMP pathway. c) Long-Term Potentiation (LTP) (chronic cases): The NO–cGMP pathway plays a role in LTP at many synapses throughout the CNS and at the neuromuscular junction. LTP has been reviewed both generally and with respect to brain regions specific for memory/learning.
Aims/Study Design: The principles of “generation of impulses from the presynaptic region to the postsynaptic region by very potent RNT (the 10,000-fold effect)” and “vasodilation of arteriolar perforators” form the basis of the authors’ hypothesis for treating cerebral palsy. Case-control prospective study. Materials and Methods: The experimental population included 82 cerebral palsy patients (10 patients were given control treatments without NOD or with 5% dextrose superfusion, and 72 patients comprised the NOD group). The mean time of superfusion was 5 months post-cerebral palsy. Pre- and post-NOD status was monitored by the Gross Motor Function Classification System for Cerebral Palsy (GMFCS), MRI, and TCD studies. Results: After 7 days in the NOD group, the mean change in the GMFCS score was an increase of 1.2 points; after 3 months, the mean increase was 3.4 points, compared to the control-group increase of 0.1 points at 3 months. MRI and TCD documented the improvements. Conclusions: NOD therapy (ITSNP boosts the recovery, and oral tadalafil maintains it at the desired level) acts swiftly in the treatment of CP, acting within 7 days even when started a mean of 5 months post-cerebral palsy, via any of the three proposed mechanisms.
Keywords: cerebral palsy, intrathecal sodium nitroprusside, oral tadalafil, perforators, vasodilation, retrograde transmission, the 10,000-fold effect, long-term potentiation
Procedia PDF Downloads 361
117 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples
Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges
Abstract:
Soils are at the crossroads of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, when they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary covariates” that come from other available spatial products. The model is then generalized on grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at national and global scales and the increase of available spatial covariates, national and continental DSM initiatives are continuously increasing.
This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and to education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and also promising for providing tools to improve and monitor soil quality at national, EU and global levels.
Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review
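The calibrate-predict-validate workflow at the core of DSM can be sketched in a few lines. The example below is a deliberately simple stand-in (a linear model on synthetic covariates) rather than the ML or PTF pipelines used in the reviewed initiatives; all variable names and values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a DSM exercise (NOT real soil data): covariates at
# observed point locations, e.g. elevation, slope, a climate grid value.
n_obs = 300
covariates = rng.normal(size=(n_obs, 3))
# "True" soil property (e.g. topsoil organic carbon) as a function of the
# soil-forming covariates plus unexplained variability.
soc = 2.0 + 1.5 * covariates[:, 0] - 0.8 * covariates[:, 1] \
      + rng.normal(scale=0.5, size=n_obs)

# 1) Calibrate the model on part of the observations ...
train, test = slice(0, 240), slice(240, None)
X_train = np.column_stack([np.ones(240), covariates[train]])
beta, *_ = np.linalg.lstsq(X_train, soc[train], rcond=None)

# 2) ... predict at held-out locations (a stand-in for the prediction grid) ...
X_test = np.column_stack([np.ones(n_obs - 240), covariates[test]])
pred = X_test @ beta

# 3) ... validate: RMSE against the withheld observations.
rmse = float(np.sqrt(np.mean((pred - soc[test]) ** 2)))
baseline = float(np.std(soc[test]))  # RMSE of simply predicting the mean
print(rmse, baseline)
```

Real initiatives replace the linear model with random forests or similar learners, use cross-validation or probability sampling for validation, and additionally quantify prediction uncertainty, but the three-step structure is the same.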
Procedia PDF Downloads 183
116 A Qualitative Investigation into Street Art in an Indonesian City
Authors: Michelle Mansfield
Abstract:
Introduction: This paper uses the work of Deleuze and Guattari to consider the street art practice of youth in the Indonesian city of Yogyakarta, a hub of arts and culture in Central Java. Around the world young people have taken to city streets to populate the new informal exhibition spaces outside the galleries of official art institutions. However, rarely is the focus outside the urban metropolis of the ‘Global North.' This paper looks at these practices in a ‘Global South’ Asian context. Space and place are concepts central to understanding youth cultural expression as it emerges on the streets. Deleuze and Guattari’s notion of assemblage enriches understanding of this complex spatial and creative relationship. Yogyakarta street art combines global patterns and motifs with local meanings, symbolism, and language to express local youth voices that convey a unique sense of place on the world stage. Street art has developed as a global urban youth art movement and is theorised as a way in which marginalised young people reclaim urban space for themselves. Methodologies: This study utilised a variety of qualitative methodologies to collect and analyse data. This project took a multi-method approach to data collection, incorporating the qualitative social research methods of ethnography, nongkrong (deep hanging out), participatory action research, online research, in-depth interviews and focus group discussions. Both interviews and focus groups employed photo-elicitation methodology to stimulate rich data gathering. To analyse collected data, rhizoanalytic approaches incorporating discourse analysis and visual analysis were utilised. Street art practice is a fluid and shifting phenomenon, adding to the complexity of inquiry sites. A qualitative approach to data collection and analysis was the most appropriate way to map the components of the street art assemblage and to draw out complexities of this youth cultural practice in Yogyakarta. 
Major Findings: The rhizoanalytic approach devised for this study proved a useful way of examining the street art assemblage. It illustrated the ways in which the street art assemblage is constructed, especially how the interaction of inspiration, materials, creative techniques, audiences, and spaces operates in the creation of artworks. The study also exposed the generational tensions between senior arts practitioners, the established art world, and the young artists. Conclusion: In summary, within the spatial processes of the city, street art is inextricably linked with its audience, its striving artistic community and everyday life in the smooth, rather than the striated, worlds of the state and the official art world. In this way, the anarchic rhizomatic art practice of nomadic urban street crews can be described not only as ‘becoming-artist’ but as constituting ‘nomos’, a way of arranging elements that does not depend on a structured, hierarchical organisation practice. The site, streets, crews, neighbourhood and passers-by can all be examined with the concept of assemblage. The assemblage effectively brings into focus the complexity, dynamism, and flows of desire that are a feature of street art practice by young people in Yogyakarta.
Keywords: assemblage, Indonesia, street art, youth
Procedia PDF Downloads 181
115 Comparison of On-Site Stormwater Detention Policies in Australian and Brazilian Cities
Authors: Pedro P. Drumond, James E. Ball, Priscilla M. Moura, Márcia M. L. P. Coelho
Abstract:
In recent decades, On-site Stormwater Detention (OSD) systems have been implemented in many cities around the world. In Brazil, urban drainage source control policies were created in the 1990s and were mainly based on OSD. The concept of this technique is to detain the additional stormwater runoff caused by impervious areas, in order to maintain pre-urbanization peak flow levels. In Australia, OSD was first adopted in the early 1980s by the Ku-ring-gai Council in Sydney’s northern suburbs and by Wollongong City Council. Many papers on the topic were published at that time. However, source control techniques related to stormwater quality have since come to the forefront, and OSD has been relegated to the background. In order to evaluate the effectiveness of the current regulations regarding OSD, the existing policies were compared in Australian cities, located in a country considered experienced in the use of this technique, and in Brazilian cities, where OSD adoption has been increasing. The cities selected for analysis were Wollongong and Belo Horizonte, the first municipalities to adopt OSD in their respective countries, and Sydney and Porto Alegre, cities whose policies are local references. The Australian and Brazilian cities are located in the Southern Hemisphere, and similar rainfall intensities can be observed, especially in storm bursts longer than 15 minutes. Regarding technical criteria, the Brazilian cities have a site-based approach, analyzing only on-site system drainage. This approach is criticized for not evaluating impacts on urban drainage systems, and in rare cases it may cause an increase in peak flows downstream. The city of Wollongong and most of the Sydney councils adopted a catchment-based approach, requiring the use of Permissible Site Discharge (PSD) and Site Storage Requirement (SSR) values based on the analysis of entire catchments via hydrograph-producing computer models.
Based on the premise that OSD should be designed to attenuate a storm of 100-year Average Recurrence Interval (ARI), the values of PSD and SSR in these four municipalities were compared. In general, the Brazilian cities presented low values of PSD and high values of SSR. This can be explained by the site-based approach and the low runoff coefficient value adopted for pre-development conditions. The results clearly show the differences between the approaches and methodologies adopted in OSD designs in the Brazilian and Australian municipalities, especially with regard to PSD values, which lie on opposite ends of the scale. However, the lack of research regarding the real performance of constructed OSD does not allow determining which is best. It is necessary to investigate OSD performance in real situations, assessing the damping provided throughout its useful life, maintenance issues, debris blockage problems and the parameters related to rainfall-runoff methods. Acknowledgments: The authors wish to thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (Chamada Universal – MCTI/CNPq Nº 14/2014), FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais, and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior for their financial support.
Keywords: on-site stormwater detention, source control, stormwater, urban drainage
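The relationship between PSD and SSR can be sketched numerically. The snippet below is a simplistic first approximation on an invented inflow hydrograph, not any municipality's design method: it accumulates inflow in excess of the PSD to estimate the required storage, ignoring the stage-discharge behaviour of a real outlet.

```python
import numpy as np

# Toy 100-year-ARI inflow hydrograph from a developed site (assumed values,
# NOT design data): triangular, 5-min steps over 60 minutes, peak 120 L/s.
dt = 300.0                                           # time step, s
t = np.arange(0.0, 3600.0, dt)
q_in = np.interp(t, [0.0, 1200.0, 3600.0], [0.0, 120.0, 0.0])   # L/s

def required_storage(q_in_ls, psd_ls, dt_s):
    """Simplistic SSR estimate for a given Permissible Site Discharge:
    accumulate every inflow excess above PSD. Returns storage in m^3."""
    excess = np.clip(q_in_ls - psd_ls, 0.0, None)    # L/s above PSD
    return float(np.sum(excess) * dt_s / 1000.0)     # L -> m^3

for psd in (20.0, 50.0, 80.0):                       # candidate PSD, L/s
    print(psd, round(required_storage(q_in, psd, dt), 1))
```

The inverse relationship the abstract reports, low PSD paired with high SSR, drops straight out of this balance: the less the site is allowed to discharge, the more of the hydrograph it must store.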
Procedia PDF Downloads 179
114 Accessing Motional Quotient for All Round Development
Authors: Zongping Wang, Chengjun Cui, Jiacun Wang
Abstract:
The concept of intelligence has been widely used to assess an individual's cognitive abilities to learn, form concepts, understand, apply logic, and reason. According to the multiple intelligence theory, there are eight distinguished types of intelligence. One of them is bodily-kinaesthetic intelligence, which links to the capacity of an individual to control his body and work with objects. Motor intelligence, on the other hand, reflects the capacity to understand, perceive and solve functional problems through motor behavior. Both bodily-kinaesthetic intelligence and motor intelligence refer directly or indirectly to bodily capacity. Inspired by these two intelligence concepts, this paper introduces motional intelligence (MI). MI is two-fold. (1) Body strength, which is the capacity of various organ functions manifested by muscle activity under the control of the central nervous system during physical exercise. It can be measured by the magnitude of muscle contraction force, the frequency of repeating a movement, the time to complete a movement or change of body position, the duration for which muscles can be kept in a working status, etc. Body strength reflects the objective side of MI. (2) The level of psychological willingness toward physical events. It is subjective, determined by an individual’s self-consciousness of physical events and resistance to fatigue. As such, we call it subjective MI. Subjective MI can be improved through education and appropriate social events. The improvement of subjective MI can lead to that of objective MI. A quantitative score of an individual’s MI is the motional quotient (MQ). MQ is affected by several factors, including genetics, physical training, diet and lifestyle, family and social environment, and personal awareness of the importance of physical exercise. Genes determine one’s body strength potential. Physical training, in general, makes people stronger, faster and swifter. Diet and lifestyle have a direct impact on health.
Family and social environment largely affect one’s passion for physical activities, as does personal awareness of the importance of physical exercise. The key to the success of the MQ study is developing an acceptable and efficient system that can be used to assess MQ objectively and quantitatively. Different assessment systems should be applied to different groups of people according to their ages and genders. Field tests, laboratory tests and questionnaires are among the essential components of MQ assessment. A scientific interpretation of the MQ score is also part of an MQ assessment system, as it will help an individual improve his MQ. IQ (intelligence quotient) and EQ (emotional quotient) and their tests have been studied intensively. We argue that IQ and EQ study alone is not sufficient for an individual’s all-round development. The significance of MQ study is that it complements IQ and EQ study. MQ reflects an individual’s mental as well as bodily level of intelligence in physical activities. It is well known that the American Springfield College seal includes the Luther Gulick triangle with the words “spirit,” “mind,” and “body” written within it. MQ, together with IQ and EQ, echoes this education philosophy. Since its inception in 2012, MQ research has spread rapidly in China. By now, six prestigious universities in China have established research centers on MQ and its assessment.
Keywords: motional intelligence, motional quotient, multiple intelligence, motor intelligence, all-round development
Procedia PDF Downloads 161
113 Assessing the Experiences of South African and Indian Legal Profession from the Perspective of Women Representation in Higher Judiciary: The Square Peg in a Round Hole Story
Authors: Sricheta Chowdhury
Abstract:
To require a woman to choose between her work and her personal life is the most acute form of discrimination that can be meted out against her. No woman should be forced to choose between motherhood and a career at the Bar, yet that is precisely the discrimination that has been taking place at the Indian Bar, and which no one has questioned so far. The falling number of women in practice is a reality that garners little attention: the number of women studying law has risen sharply, yet many are unable to continue in the profession. Moving from a colonial misogynist whim to the post-colonial façade of a "new-age construct of the Indian woman," the policymakers of the Indian judiciary have done nothing so far to decolonize their rudimentary understanding of 'equality of gender' in the legal profession. Therefore, at a time when Indian jurisprudence was (and is) swayed by the sweeping effect of transformative constitutionalism in its understanding of equality as enshrined in the Indian Constitution, one cannot help but ask why the legal profession remained untouched by this drive toward substantive equality. The airline industry's discriminatory policies were not spared from criticism, nor were policies restricting women's employment in establishments serving liquor (the Anuj Garg case), but judicial practice did not question stereotypical gender bias and unequal structural practices until recently. This necessitates examining the existing Bar policies and the steps taken by the regulatory bodies, and assessing which conditions favor and which work against the furthering of women's issues in present-day India. From a comparative feminist standpoint, South Africa's pro-women Bar policies are worth assessing for their applicability and their extent in promoting inclusivity at the Bar.
This article intends to tap these two countries' potential for carving a niche in which women have an equal platform to play a substantive role in designing governance policies through the judiciary. The article analyses the current gender composition of the legal profession while endorsing the concept of substantive equality as a requisite in designing an appropriate process for appointing judges. It studies the theoretical framework on gender equality, examines the international and regional instruments, and analyses the scope of welfare policies that Indian legal and regulatory bodies can undertake as a transformative initiative in re-modeling the judiciary into a more diverse and inclusive institution. The methodology employs a comparative and analytical reading of doctrinal resources, making quantitative use of secondary data and qualitative use of primary data collected to determine the present status of Indian women legal practitioners and judges. For the quantitative data, statistics on the representation of women as judges, chief justices, and senior advocates, taken from the official websites from 2018 to the present, are utilized. For the qualitative data, the results of structured interviews with open- and close-ended questions, conducted with retired women judges of the higher judiciary and senior advocates of the Supreme Court of India contacted through snowball sampling, are utilized.
Keywords: gender, higher judiciary, legal profession, representation, substantive equality
Procedia PDF Downloads 82
112 Avoidance of Brittle Fracture in Bridge Bearings: Brittle Fracture Tests and Initial Crack Size
Authors: Natalie Hoyer
Abstract:
Bridges in both roadway and railway systems depend on bearings to ensure an extended service life and proper functionality. These bearings distribute loads from the superstructure to the substructure while permitting controlled movement of the superstructure. The design of bridge bearings according to DIN EN 1337 and the relevant parts of DIN EN 1993 increasingly requires the use of thick plates, especially for long-span bridges. However, these plate thicknesses exceed the limits specified in the National Annex to DIN EN 1993-2. Furthermore, compliance with the DIN EN 1993-1-10 rules on material toughness and through-thickness properties necessitates further modifications. Consequently, these standards cannot be applied directly to the selection of bearing materials without supplementary guidance and design rules. In this context, a recommendation was developed in 2011 to regulate the selection of appropriate steel grades for bearing components. Prior to the research project underlying this contribution, this recommendation had been available only as a technical bulletin; since July 2023, it has been integrated into Guideline 804 of the German railway. However, recent findings indicate that certain bridge-bearing components are exposed to high fatigue loads, which must be considered in structural design, material selection, and calculations. The German Centre for Rail Traffic Research therefore commissioned a research project with the objective of proposing an extension of the current standards so that steel materials for bridge bearings can be chosen to avoid brittle fracture, even for thick plates and components subjected to specific fatigue loads. The results obtained from theoretical considerations, such as finite element simulations and analytical calculations, are validated through large-scale component tests.
Additionally, the experimental observations are used to calibrate the calculation models and to adjust the input parameters of the design concept. In the large-scale component tests, brittle failure is induced artificially in a bearing component. For this purpose, an initial defect is introduced into the specimen by spark erosion at a previously defined hotspot. A dynamic load is then applied to initiate a crack, producing realistic conditions in the form of a sharp notch similar to a fatigue crack; this initiation process continues until the crack reaches a predetermined length. Afterward, the actual test begins: the specimen is cooled with liquid nitrogen to a temperature at which brittle fracture is expected, and the component is then subjected to a quasi-static tensile test until it fails in a brittle manner. The proposed paper will present the latest research findings, including the results of the component tests and the definition of the initial crack size in bridge bearings derived from them.
Keywords: bridge bearings, brittle fracture, fatigue, initial crack size, large-scale tests
Procedia PDF Downloads 42