Search results for: term spread
401 The Role of Anti-corruption Clauses in the Fight Against Corruption in Petroleum Sector
Authors: Azar Mahmoudi
Abstract:
Despite the rise of global anti-corruption movements and the strong emergence of international and national anti-corruption laws, corrupt practices remain prevalent in most places, and countries still struggle to translate these laws into practice. Moreover, in most countries, political and economic elites oppose anti-corruption reforms. In such a situation, the role of external actors, such as other States, international organizations, and transnational actors, becomes essential. Among them, Transnational Corporations [TNCs] can develop their own regime-like framework to govern their internal activities and, through this, contribute to the regimes established by State actors to solve transnational issues. Among various regimes, TNCs may choose to comply with the transnational anti-corruption legal regime to avoid the cost of non-compliance with anti-corruption laws. As a result, they decide to strengthen their anti-corruption compliance as they expand into new overseas markets. Such a decision extends anti-corruption standards among their employees and third-party agents and within their projects across countries. To better address the challenges posed by corruption, TNCs have adopted a comprehensive anti-corruption toolkit. Among its various instruments, anti-corruption clauses have become one of the most widely used anti-corruption means in international commercial agreements. Anti-corruption clauses, acting as a due diligence tool, can protect TNCs against the engagement of third-party agents in corrupt practices and further promote anti-corruption standards among businesses operating across countries. An anti-corruption clause allows parties to create a contractual commitment to exclude corrupt practices during the term of their agreement, including all levels of negotiation and implementation.
Such a clause offers companies a mechanism to reduce the risk of potential corruption in their dealings with third parties while avoiding civil and administrative penalties. There have been few attempts to examine the role of anti-corruption clauses in the fight against corruption; therefore, this paper aims to fill this gap and examine anti-corruption clauses in a specific sector where corrupt practices are widespread and endemic, i.e., the petroleum industry. This paper argues that anti-corruption clauses are a positive step in ensuring that the petroleum industry operates in an ethical and transparent manner, helping to reduce the risk of corruption and promote integrity in this sector. Contractual anti-corruption clauses vary in terms of the types of commitment, so parties have a wide range of options to choose from for the clauses incorporated within their contracts. This paper intends to propose a categorization of anti-corruption clauses in the petroleum sector. It particularly examines the anti-corruption clauses incorporated in transnational hydrocarbon contracts published by the Resource Contract Portal, an online repository of extractive contracts. Then, this paper offers a quantitative assessment of anti-corruption clauses according to the type of contract, the date of conclusion, and the geographical distribution.
Keywords: anti-corruption, oil and gas, transnational corporations, due diligence, contractual clauses, hydrocarbon, petroleum sector
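The quantitative assessment the abstract describes (counts of clauses by contract type, conclusion date, and geography) can be sketched as a simple tabulation over clause records. A minimal sketch in Python, assuming hypothetical records and category labels; none of these fields or values come from the Resource Contract Portal itself:

```python
from collections import Counter

# Hypothetical, simplified clause records; labels are illustrative assumptions.
clauses = [
    {"contract_type": "production sharing", "year": 2008, "region": "Sub-Saharan Africa"},
    {"contract_type": "concession", "year": 2012, "region": "Latin America"},
    {"contract_type": "production sharing", "year": 2015, "region": "Sub-Saharan Africa"},
    {"contract_type": "service agreement", "year": 2017, "region": "MENA"},
]

def tabulate(records, key_fn):
    """Count clause occurrences along one dimension of the assessment."""
    return Counter(key_fn(r) for r in records)

by_type = tabulate(clauses, lambda r: r["contract_type"])    # type of contract
by_decade = tabulate(clauses, lambda r: r["year"] // 10 * 10)  # date of conclusion
by_region = tabulate(clauses, lambda r: r["region"])          # geographical distribution
```

Each `Counter` gives one marginal of the assessment; cross-tabulations follow by keying on tuples, e.g. `lambda r: (r["contract_type"], r["region"])`.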
Procedia PDF Downloads 130
400 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India
Authors: Vinu Elias Jacob, Manoj Kumar Kini
Abstract:
Disasters of all types are occurring more frequently and are becoming more costly than ever due to various man-made factors, including climate change. A better utilisation of the concepts of governance and management within disaster risk reduction is indispensable and of utmost importance. There is a need to explore the role of pre- and post-disaster public policies. The role of urban planning/design in shaping the recovery opportunities of households, individuals, and, collectively, settlements has to be explored. Governance strategies that can better support the integration of disaster risk reduction and management have to be examined. The main aim is thereby to build the resilience of individuals and communities and thus of the states too. Resilience is a term usually linked to the fields of disaster management and mitigation, but today it has become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts, through improved mitigation, preparedness, response, and recovery. The growing population of the world has resulted in the inflow and use of resources, creating pressure on various natural systems and inequity in the distribution of resources. This makes cities vulnerable to multiple attacks by both natural and man-made disasters. Each urban area needs elaborate studies and study-based strategies to proceed in the discussed direction. Cochin, in Kerala, is the state's largest and fastest-growing city, with a population of more than 26 lakhs (2.6 million). The main concern addressed in this paper is making cities resilient by designing a framework of strategies, based on urban design principles, for an immediate response system, focussing especially on the city of Cochin, Kerala, India. The paper discusses the spatial transformations due to disasters and the role of spatial planning in the context of significant disasters.
The paper also aims to develop a model that takes into consideration various factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density, and ecology, and that can be implemented in any city of any context. Guidelines are made, using the tool of urban design, for the smooth evacuation of people through hassle-free transport networks, protecting vulnerable areas in the city, providing adequate open spaces for shelters and gatherings, making basic amenities available to the affected population within reachable distance, etc. Strategies at the city level and neighbourhood level have been developed with inferences from vulnerability analysis and case studies.
Keywords: disaster management, resilience, spatial planning, spatial transformations
Procedia PDF Downloads 296
399 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements
Authors: Mohammad R. Bhuyan, Mohammad J. Khattak
Abstract:
Deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe. The service life of flexible pavement diminishes significantly because of these reflective cracks. Highway agencies have struggled for decades to prevent or mitigate these cracks in order to increase pavement service lives. The root cause of reflective cracking is the shrinkage cracking that occurs in soil-cement bases during the cement hydration process. The primary factor that causes the shrinkage is the cement content of the soil-cement mixture. With increasing cement content, the soil-cement base gains the strength and durability necessary to withstand traffic loads, but at the same time, higher cement content creates more shrinkage, resulting in more reflective cracks in pavements. Historically, various US states have used soil-cement bases to construct flexible pavements. The State of Louisiana (USA) has used 8 to 10 percent cement content to manufacture soil-cement bases. Such traditional soil-cement bases yield 2.0 MPa (300 psi) 7-day compressive strength and are termed cement stabilized design (CSD). As these CSD bases generate significant reflective cracking, another soil-cement base design, called cement treated design (CTD), has been utilized by adding 4 to 6 percent cement content, which yields 1.0 MPa (150 psi) 7-day compressive strength. The reduced cement content in the CTD base is expected to minimize shrinkage cracking, thus increasing pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases with respect to CSD bases used in flexible pavements. The Pavement Management System of the state of Louisiana was utilized to select flexible pavement projects with CSD and CTD bases that had good historical records and time-series distress performance data.
It should be noted that the state collects roughness and distress data for each 1/10th-mile section every 2-year period. In total, 120 CSD and CTD projects were analyzed in this research, where more than 145 miles (CTD) and 175 miles (CSD) of roadway data were accepted for performance evaluation and benefit-cost analyses. Here, the service life extension and the benefit area based on distress performance were considered as benefits. It was found that CTD bases added 1 to 5 years of pavement service life based on transverse cracking as compared to CSD bases. On the other hand, the service lives based on longitudinal and alligator cracking, rutting, and roughness index remained the same. Hence, CTD bases provide some service life extension (2.6 years, on average) for the controlling distress, transverse cracking, while being less expensive due to their lower cement content. Consequently, CTD bases are about 20% more cost-effective than the traditional CSD bases when both bases are compared by the net benefit-cost ratio obtained from all distress types.
Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement
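The benefit-cost comparison described above can be illustrated with a toy calculation. All figures below are placeholder assumptions, not the study's data, and the study's area-based benefit measure is simplified here to service life per unit cost:

```python
def net_bc_ratio(service_life_yr: float, cost_per_lane_mile: float) -> float:
    """Net benefit-cost ratio: years of service life obtained per unit cost."""
    return service_life_yr / cost_per_lane_mile

# Assumed inputs: CTD adds ~2.6 years to transverse-cracking service life
# (the study's reported average) and costs less because of its lower cement
# content; the base service life and costs are made-up placeholders.
csd = net_bc_ratio(service_life_yr=15.0, cost_per_lane_mile=100_000)
ctd = net_bc_ratio(service_life_yr=15.0 + 2.6, cost_per_lane_mile=95_000)

advantage_pct = (ctd / csd - 1) * 100  # relative cost-effectiveness of CTD
```

Under these placeholder numbers, the CTD ratio comes out roughly 20-25% higher than CSD, consistent in direction with the study's finding.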
Procedia PDF Downloads 166
398 Using Balanced Scorecard Performance Metrics in Gauging the Delivery of Stakeholder Value in Higher Education: the Assimilation of Industry Certifications within a Business Program Curriculum
Authors: Thomas J. Bell III
Abstract:
This paper explores the value of assimilating certification training within a traditional course curriculum. This innovative approach is believed to increase stakeholder value within the Computer Information System program at Texas Wesleyan University. Stakeholder value is obtained from increased job marketability and the critical thinking skills that create employment-ready graduates. This paper views value as first developing the capability to earn an industry-recognized certification, which gives the student greater job placement compatibility while allowing the use of critical thinking skills in a liberal arts business program. Graduates with industry-based credentials are often given preference in the hiring process, particularly in the information technology sector. Without a pioneering curriculum that better prepares students for an ever-changing employment market, a program's educational value is open to question. Since certifications are trending in the hiring process, academic programs should explore the viability of incorporating certification training into teaching pedagogy and course curricula. This study will examine the use of the balanced scorecard across four performance dimensions (financial, customer, internal process, and innovation) to measure the stakeholder value of certification training within a traditional course curriculum. The balanced scorecard as a strategic management tool may provide insight for leveraging resource prioritization and the decisions needed to achieve various curriculum objectives and long-term value while meeting the needs of multiple stakeholders, such as students, universities, faculty, and administrators.
The research methodology will consist of quantitative analysis that includes (1) surveying over one hundred students in the CIS program to learn what factor(s) contributed to their certification exam success or failure, (2) interviewing representatives from the Texas Workforce Commission to identify the employment needs and trends in the North Texas (Dallas/Fort Worth) area, (3) reviewing notable Workforce Innovation and Opportunity Act publications on training trends across several local business sectors, and (4) analyzing control variables to determine whether a correlation exists between industry alignment and job placement. These findings may provide helpful insight into impactful pedagogical teaching techniques and curricula that positively contribute to certification credentialing success. And should these industry-certified students land industry-related jobs that correlate with their certification credential value, arguably, stakeholder value has been realized.
Keywords: certification exam teaching pedagogy, exam preparation, testing techniques, exam study tips, passing certification exams, embedding industry certification and curriculum alignment, balanced scorecard performance evaluation
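For binary variables (certified vs. not, placed vs. not), the correlation step in item (4) could be computed as a phi coefficient over a 2x2 contingency table. A minimal sketch with made-up counts, not survey data from this study:

```python
from math import sqrt

def phi_coefficient(a: int, b: int, c: int, d: int) -> float:
    """Phi coefficient for a 2x2 table:
        a = certified & placed,      b = certified & not placed,
        c = not certified & placed,  d = not certified & not placed.
    Ranges from -1 to 1; 0 means no association."""
    denom = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Illustrative counts (assumed): certification is positively associated
# with placement in this fabricated table.
phi = phi_coefficient(a=40, b=10, c=25, d=25)
```

A positive phi here would support the claimed link between credentialing and placement; significance would still need a chi-square test on the same table.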
Procedia PDF Downloads 108
397 Comparison of Gestational Diabetes Influence on the Ultrastructure of Rectus Abdominis Muscle in Women and Rats
Authors: Giovana Vesentini, Fernanda Piculo, Gabriela Marini, Debora Damasceno, Angelica Barbosa, Selma Martheus, Marilza Rudge
Abstract:
Problem statement: Skeletal muscle is highly adaptable; muscle fiber composition and size can respond to a variety of stimuli, both physiological, such as pregnancy, and metabolic, such as diabetes mellitus. This study aimed to analyze the effects of pregnancy-associated diabetes on the rectus abdominis muscle (RA) and to compare these changes between rats and women. Methods: Female Wistar rats were maintained under controlled conditions and distributed into Pregnant (P) and Long-term mild pregnant diabetic (LTMd) groups (n=3 rats/group). Diabetes in rats was induced by streptozotocin (100 mg/Kg, sc) on the first day of life, producing a hyperglycemic state between 120-300 mg/dL in adult life. Female rats were mated overnight; at day 21 of pregnancy, they were anesthetized and killed for the harvesting of the maternal RA. Pregnant women who attended the Diabetes Prenatal Care Clinic of Botucatu Medical School were distributed into Pregnant non-diabetic (Pnd) and Gestational Diabetic (GDM) groups (n=3 women/group). The diagnosis of GDM was established according to the ADA's criteria (2016). The RA was harvested during the cesarean section. Transversal cross-sections of the RA of both women and rats were analyzed by transmission electron microscopy. All procedures were approved by the Ethics Committee on Animal Experiments of the Botucatu Medical School (Protocol Number 1003/2013) and by the Botucatu Medical School Ethical Committee for Human Research in Medical Sciences (CAAE: 41570815.0.0000.5411). Results: The photomicrographs of the RA of rats revealed disorganized Z lines, thinning sarcomeres, and a usual quantity of intermyofibrillar mitochondria in the P group. The LTMd group showed swollen sarcoplasmic reticulum, dilated T tubules, and areas with sarcomere disruption. The ultrastructural analysis of the RA of Pnd non-diabetic women showed well-organized myofibrils forming intact sarcomeres, organized Z lines, and a normal distribution of intermyofibrillar mitochondria.
The GDM group revealed an increase in intermyofibrillar mitochondria, areas with sarcomere disruption, and increased lipid droplets. Conclusion: Pregnancy and diabetes induce adaptations in the ultrastructure of the rectus abdominis muscle in both women and rats, changing the architectural design of these tissues. However, in rats these changes are more severe, perhaps because, besides the high blood glucose levels, the quadrupedal animal may experience excessive gravity-induced mechanical tension during pregnancy. These findings may suggest that such alterations are a risk factor contributing to the development of muscle dysfunction in women with GDM and may motivate treatment strategies in these patients.
Keywords: gestational diabetes, muscle dysfunction, pregnancy, rectus abdominis
Procedia PDF Downloads 292
396 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions
Authors: Pirta Palola, Richard Bailey, Lisa Wedding
Abstract:
Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. 
For example, ecological resilience value may, in some cases, be best understood as a binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; and 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights into the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
Keywords: economics of biodiversity, environmental valuation, natural capital, value function
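The binary resilience case and the Monte Carlo treatment of parameter uncertainty described above can be sketched as follows. The threshold location, steepness, and their sampling distributions are illustrative assumptions, not values from the paper:

```python
import math
import random

def logistic_value(q: float, q_mid: float, steepness: float) -> float:
    """Logistic value function: value rises from 0 to 1 around the threshold
    q_mid, approximating the binary 'resilience exists / is lost' case."""
    return 1.0 / (1.0 + math.exp(-steepness * (q - q_mid)))

def monte_carlo_value(q: float, n: int = 10_000, seed: int = 42) -> float:
    """Propagate parameterization uncertainty by sampling q_mid and steepness
    from assumed priors, returning the mean value estimate at quantity q."""
    rng = random.Random(seed)
    samples = [
        logistic_value(q, q_mid=rng.gauss(0.5, 0.05), steepness=rng.gauss(25, 5))
        for _ in range(n)
    ]
    return sum(samples) / n

v_low = monte_carlo_value(q=0.3)   # well below the threshold: value near 0
v_high = monte_carlo_value(q=0.8)  # well above the threshold: value near 1
```

Values here are expressed on a 0-1 scale relative to the current level, matching the framework's change-relative (rather than absolute) estimation.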
Procedia PDF Downloads 194
395 Consensual A-Monogamous Relationships: Challenges and Ways of Coping
Authors: Tal Braverman Uriel, Tal Litvak Hirsch
Abstract:
Background and Objectives: Little or only partial emphasis has been placed on exploring the complexity of consensual non-monogamous relationships. The term "polyamory" refers to consensual non-monogamy and is defined as having emotional and/or sexual relations simultaneously with two or more people, with the consent and knowledge of all the partners concerned. Managing multiple romantic relationships with different people evokes more emotions, leads to more emotional conflicts arising from different interests, and demands practical strategies. An individual's transition from a monogamous lifestyle to a consensual non-monogamous lifestyle yields new challenges, accompanied by stress, uncertainty, and open questions, as do other life-changing events, such as divorce or the transition to parenthood. The study examines both the process of transition and adaptation to a consensually non-monogamous relationship and the coping mechanisms involved in the daily conduct of this lifestyle. The research focuses on understanding the consequences, challenges, and coping methods from a personal, marital, and familial point of view and centers on 40 middle-aged individuals (20 men and 20 women, aged 40-60). The research sheds light on a way of life that has not been previously studied in Israel and is still considered unacceptable. Theories of crisis (e.g., Folkman and Lazarus) were applied, and as a result, a deeper understanding of the subject was reached, all while focusing on multiple aspects of dealing with stress. The basic research question examines the consequences of entering a polyamorous life from a personal point of view as an individual, partner, and parent, and the ways of coping with these consequences. Method: The research was conducted with a narrative qualitative approach in the interpretive paradigm, including semi-structured in-depth interviews. The method of analysis is thematic.
Results: The findings indicate that in most cases, an individual's motivation to open the relationship is mainly a longing for better sexuality and for an added layer of excitement in their lives. Most of the interviewees were assisted by their spouses in the process, as well as by social networks and podcasts on the subject. Some also found therapeutic professionals in the field helpful. It also clearly emerged that even among those who experienced acute emotional crises with the primary partner or painful separations from secondary partners, all believed polyamory to be the right way of life for them. Finally, a key resource for managing tension and stress is the ability to share and communicate with the primary partner. Conclusions: The study points to the challenges and benefits of a non-monogamous lifestyle as well as the use of coping mechanisms and resources that are consistent with the existing theory and research in the field in the context of life changes. The study indicates the need to expand the research canvas in the future in the context of parenting and the consequences for children.
Keywords: a-monogamy, consent, family, stress, tension
Procedia PDF Downloads 76
394 Characterization of a Three-Electrodes Bioelectrochemical System from Mangrove Water and Sediments for the Reduction of Chlordecone in Martinique
Authors: Malory Jonata
Abstract:
Chlordecone (CLD) is an organochlorine pesticide used between 1971 and 1993 in both Guadeloupe and Martinique for the control of the banana black weevil. The bishomocubane structure that characterizes this chemical compound leads to high stability in organic matter and high persistence in the environment. Recently, researchers found that CLD can be degraded by isolated bacterial consortia and, particularly, by bacteria such as Citrobacter sp 86 and Desulfovibrio sp 86. To date, six families of CLD transformation products are known. Moreover, the latest discovery showed that CLD was disappearing faster than first predicted in highly contaminated soil in Guadeloupe. However, the toxicity of the transformation products is still unknown, and knowledge has to be deepened on the degradation pathways and chemical characteristics of chlordecone and its transformation products. Microbial fuel cells (MFC) are electrochemical systems that can convert organic matter into electricity thanks to electroactive bacteria. These bacteria can exchange electrons through their membranes with solid surfaces or molecules. MFC have proven their efficiency as bioremediation systems in water and soils. They are already used for the bioremediation of several chlorinated compounds such as perchlorate, trichlorophenol, or hexachlorobenzene. In this study, a three-electrode system, inspired by MFC, is used to try to degrade chlordecone using bacteria from a mangrove swamp in Martinique. As is known, some mangrove bacteria are electroactive. Furthermore, the CLD rate seems to decline in mangrove swamp sediments. This study aims to prove that electroactive bacteria from a mangrove swamp in Martinique can degrade CLD in a three-electrode bioelectrochemical system. To achieve this goal, the three-electrode assembly was connected to a potentiostat.
The substrate used is mangrove water and sediments sampled in the mangrove swamp of La Trinité, a coastal city in Martinique, where CLD contamination has already been studied. Electroactive biofilms were formed by imposing a potential relative to the Saturated Calomel Electrode using chronoamperometry, and their behavior was studied using cyclic voltammetry. The biofilms were studied under different imposed potentials, several substrate conditions, and with or without CLD. In order to quantify the evolution of CLD rates in the system's substrate, gas chromatography coupled with mass spectrometry (GC-MS) was performed on pre-treated samples of water and sediments after short-, medium-, and long-term contact with the electroactive biofilms. Results showed that between -0.8 V and -0.2 V, the three-electrode system was able to reduce the chemical in the substrate solution. The first GC-MS analysis of samples spiked with CLD seems to reveal a decrease in CLD concentration over time. In conclusion, the designed bioelectrochemical system can provide the necessary conditions for chlordecone degradation. However, it is necessary to improve the three-electrode control settings in order to increase degradation rates. The biological pathways are yet to be elucidated by biological analysis of the electroactive biofilms formed in this system. Moreover, the electrochemical study of the mangrove substrate gives new information on the potential use of this substrate for bioremediation, but further studies are needed for a better understanding of the electrochemical potential of this environment.
Keywords: bioelectrochemistry, bioremediation, chlordecone, mangrove swamp
Procedia PDF Downloads 80
393 Training for Search and Rescue Teams: Online Training for SAR Teams to Locate Lost Persons with Dementia Using Drones
Authors: Dalia Hanna, Alexander Ferworn
Abstract:
This research provides detailed proposed training modules for public safety teams and, specifically, the SAR teams responsible for search and rescue operations related to finding lost persons with dementia. Finding a lost person alive is the goal of this training, and time matters if a lost person is to be found alive. Finding lost people living with dementia is quite challenging, as they are unaware they are lost and will not seek help. Even a small improvement to SAR operations could help save a life. SAR operations will always require expert professionals and human volunteers. However, we can reduce their time, save lives, and reduce costs by providing practical training that is based on real-life scenarios. The content of the proposed training is based on the researchers' prior work in this area. This research has demonstrated that, based on utilizing drones, the algorithmic approach could support a successful search outcome. Understanding the behavior of the lost person, learning where they may be found, predicting their survivability, and automating the search are all contributions of this work, founded in theory and demonstrated in practice. In crisis management, human behavior constitutes a vital aspect of responding to the crisis; the speed and efficiency of the response are often affected by the difficulty of the context of the operation. Therefore, training in this area plays a significant role in preparing the crisis manager to manage the emotional aspects that lead to decision-making in these critical situations. Since it is crucial to gain high-level strategic choices and the ability to apply crisis management procedures, simulation exercises become central in training crisis managers to gain the skills needed to respond critically to these events.
The training will enhance the responders' ability to make decisions and anticipate the possible consequences of their actions through flexible and revolutionary reasoning, responding to the crisis efficiently and quickly. As adult learners, search and rescue teams will approach training and learning by taking responsibility for the learning process, appreciating flexible learning, and contributing to the teaching and learning happening during that training. These are all characteristics of adult learning theories: the learner self-reflects, gathers information, collaborates with others, and is self-directed. One of the learning strategies associated with adult learning is effective elaboration. It helps learners remember information in the long term and use it in situations where it might be appropriate. It is also a strategy that can be taught easily and used with learners of different ages. Designers must design reflective activities to improve the student's intrapersonal awareness.
Keywords: training, OER, dementia, drones, search and rescue, adult learning, UDL, instructional design
Procedia PDF Downloads 108
392 Chemical vs Visual Perception in Food Choice Ability of Octopus vulgaris (Cuvier, 1797)
Authors: Al Sayed Al Soudy, Valeria Maselli, Gianluca Polese, Anna Di Cosmo
Abstract:
Cephalopods are considered model organisms with a rich behavioral repertoire. Sophisticated behaviors have been widely studied and described in different species such as Octopus vulgaris, which has evolved the largest and most complex nervous system among invertebrates. In O. vulgaris, cognitive abilities in problem-solving tasks and learning abilities are associated with long-term memory and spatial memory, mediated by highly developed sensory organs. They are equipped with sophisticated eyes, able to discriminate colors even with a single photoreceptor type, a vestibular system, a 'lateral line analogue', a primitive 'hearing' system, and olfactory organs. They can recognize chemical cues either through direct contact with odor sources using their suckers or at a distance through the olfactory organs. Cephalopods are able to detect widespread waterborne molecules via the olfactory organs. However, many volatile odorant molecules are insoluble or have a very low solubility in water and must be perceived by direct contact. O. vulgaris, equipped with many chemosensory neurons located in its suckers, exhibits a peculiar behavior that can be provocatively described as 'smell by touch'. The aim of this study is to establish the priority given to chemical vs. visual perception in food choice. Materials and methods: Three different types of food (anchovies, clams, and mussels) were used, and all sessions were recorded with a digital camera. During the acclimatization period, octopuses were exposed to the three types of food to test their natural food preferences. Later, to verify whether food preference was maintained, food was provided in transparent screw-jars with pierced lids to allow both visual and chemical recognition of the food inside. Subsequently, we alternately tested octopuses with food in sealed transparent screw-jars and food in blind screw-jars with pierced lids. As a control, we used blind sealed jars with the same lid color to verify a random choice among food types.
Results and discussion: During the acclimatization period, O. vulgaris showed a higher preference for anchovies (60%), followed by clams (30%) and then mussels (10%). After acclimatization, using the transparent and pierced screw-jars, the octopuses' food choices were split 50-50 between anchovies and clams, avoiding mussels. Later, guided by the visual sense alone, with transparent but unpierced jars, their food preference was 100% anchovies. With pierced but opaque jars, anchovies were the first food choice (100%), with clams as the second choice (33.3%). With no possibility of selecting food by either vision or chemoreception, the results were 20% anchovies, 20% clams, and 60% mussels. We conclude that O. vulgaris uses both the chemical and visual senses in an integrative way in food choice, but if we exclude one of them, it appears clear that its food preference relies on the chemical sense more than on visual perception.
Keywords: food choice, Octopus vulgaris, olfaction, sensory organs, visual sense
Procedia PDF Downloads 220
391 Fashion Utopias: The Role of Fashion Exhibitions and Fashion Archives to Defining (and Stimulating) Possible Future Fashion Landscapes
Authors: Vittorio Linfante
Abstract:
Utopìa is a term that, since its first appearance in 1516 in Thomas More's work, has taken on different meanings and forms in various fields: social studies, politics, art, creativity, and design. Utopias, although short-lived and apparently impossible, have been able to give shape to the future, laying the foundations for our present and for the generations to come. The twentieth century was a historical period crossed by many changes, and it saw the greatest number of utopias, not only social, political, and scientific but also artistic and architectural, in design, in communication, and, last but not least, in fashion. Over the years, fashion has been able to interpret various utopian impulses, giving form to the most futuristic visions: from the Manifesto del Vestito by Giacomo Balla, through the functional experiments that led to the Tuta by Thayaht and the Varst by Aleksandr Rodčenko and Varvara Stepanova, through the Space Age visions of Rudi Gernreich, Paco Rabanne, and Pierre Cardin, to the Archizoom group's political actions and their fashion project Vestirsi è facile. These experiments have continued to the present day through the (sometimes) excessive visions of Hussein Chalayan, Alexander McQueen, and Gareth Pugh, or those more anchored to the market (but no less innovative and visionary) by Prada, Chanel, and Raf Simons. If, as Bauman states, we have entered a phase of Retrotopia characterized by the inability to imagine new forms of the future, it is necessary, more than ever, to redefine the role of history, of its narration, and of its mise en scène within the contemporary creative process: a process that increasingly requires in-depth knowledge of the past for the definition of a renewed discourse about design, a discourse in which words like archive, exhibition, curating, revival, vintage, and costume take on new meanings.
The paper aims to investigate, through case studies, research, and professional projects, the renewed role of curating and preserving fashion artefacts: a role that, in an era of Retrotopia, museums, exhibitions, and archives can (and must) assume in order to contribute to the definition of new design paradigms capable of overcoming the traditional categories of revival or costume in favour of a more contemporary “mash-up” approach, in which past and present, craftsmanship and new technologies, revival and experimentation merge seamlessly. In this perspective, dresses (as well as fashion accessories) should be considered not only as finished products but as artefacts capable of talking about the past while producing untold new stories. Archives, exhibitions (academic and not), and museums thus become powerful sources of inspiration for fashion: places and projects capable of generating innovation, becoming active protagonists of contemporary fashion design processes.Keywords: heritage, history, costume and fashion interface, performance, language, design research
Procedia PDF Downloads 114
390 Barriers to Entry: The Pitfall of Charter School Accountability
Authors: Ian Kingsbury
Abstract:
The rapid expansion of charter schools (public schools that receive government funding but do not face the same regulations as traditional public schools) over the preceding two decades has raised concerns over the potential for graft and fraud. These concerns are largely justified: incidents of financial crime and mismanagement are not unheard of, and the charter sector has become a darling of hedge fund managers. In response, several states have strengthened their charter school regulatory regimes. Imposing regulations and attempting to increase accountability seem like sensible measures, and perhaps they are necessary. However, increased regulation may come at the cost of imposing barriers to entry. Specifically, increased regulation often requires applicants to show evidence of a high likelihood of fiscal solvency, which presupposes short-term access to capital and may therefore systematically preclude Black or Hispanic applicants from opening charter schools. Moreover, increased regulation necessarily entails more red tape: the institutional wherewithal and the number of hours required to complete an application to open a charter school might favor those who have partnered with an education service provider, specifically a charter management organization (CMO) or education management organization (EMO). These potential barriers to entry pose a significant policy concern. Just as policymakers hope to increase the share of minority teachers and principals, they should sensibly care whether the individuals who open charter schools look like the students in those schools. Moreover, they might be concerned if successful applications in states with stringent regulations are overwhelmingly affiliated with education service providers. One of the original missions of charter schools was to serve as a laboratory of innovation.
Approving only those applications affiliated with education service providers (in effect establishing a parallel network of schools rather than a diverse marketplace of schools) undermines that mission. Data and methods: The analysis examines more than 2,000 charter school applications from 15 states. It compares the outcomes of applications in states with a strong regulatory environment (those with high scores from NACSA, the National Association of Charter School Authorizers) to applications in states with a weak regulatory environment (those with a low NACSA score). If the hypothesis is correct, applicants not affiliated with an education service provider (ESP) are more likely to be rejected in high-regulation states than those affiliated with an ESP, and minority candidates not affiliated with an ESP are particularly likely to be rejected. Initial returns indicate that the hypothesis holds. More applications in low-NACSA-scoring Arizona come from individuals not associated with an ESP, and those individuals are as likely to be accepted as those affiliated with an ESP. On the other hand, applicants in high-NACSA-scoring Indiana and Ohio are more than 20 percentage points more likely to be accepted if they are affiliated with an ESP, and the effect is particularly pronounced for minority candidates. These findings should spur policymakers to consider the drawbacks of charter school accountability and to consider accountability regimes that do not impose barriers to entry.Keywords: accountability, barriers to entry, charter schools, choice
Procedia PDF Downloads 159
389 Teachers of the Pandemic: Retention, Resilience, and Training
Authors: Theoni Soublis
Abstract:
The COVID-19 pandemic created a severe interruption in teaching and learning in K-12 schools. It is essential that educational researchers, teachers, and administrators understand the long-term effects that COVID-19 had on a variety of stakeholders in education. This investigation analyzes the research conducted since the beginning of the pandemic that focuses specifically on teacher retention, resilience, and training. The results will help to inform future research on how the institution of education can prepare for future significant shifts in the modalities of instruction. The results of this analysis will directly impact the field of education by broadening the scope of understanding of how COVID-19 affected teaching and learning. The themes that emerge from the data analysis will directly inform policymakers, administrators, and researchers about how best to implement training and curriculum design to support teacher effectiveness in the classroom. Educational researchers have written about how teacher morale plummeted and how many teachers reported early burnout and higher stress levels. Teachers' stress and anxiety soared during the COVID-19 pandemic, but so have their resilience and dedication to the field of education. This research aims to understand how public-school teachers overcame the teaching obstacles presented to them during COVID-19. Research has been conducted to identify a variety of information regarding the impact the pandemic has had on K-12 teachers, students, and families. This research aims to understand how teachers continued to pursue their teaching objectives without significant training in effective online instruction methods; not many educators had even heard of the video conferencing platform Zoom before the spring of 2020.
Researchers are interested in understanding how teachers used their expertise, prior knowledge, and training to institute immediate and effective online learning environments; what types of relationships teachers built with students while teaching 100% remotely; how those relationships changed during remote teaching; and whether the teacher-student relationship propelled teachers' resolve to succeed while teaching during a pandemic. Recent world events have significantly impacted the field of public-school teaching. The pandemic forced teachers to shift their paradigm about how to maintain high academic expectations, meet state curriculum standards, and assess students' learning gains to make data-informed decisions, while simultaneously adapting modes of instruction through multiple outlets with little to no training on remote, synchronous, asynchronous, virtual, and hybrid teaching. While it would be very interesting to study how teaching positively impacted students' learning during the pandemic, I am more interested in understanding how teachers stayed the course and maintained their mental health while dealing with the stress and pressure of teaching during COVID-19.Keywords: teacher retention, COVID-19, teacher education, teacher morale
Procedia PDF Downloads 85
388 Integration of a Protective Film to Enhance the Longevity and Performance of Miniaturized Ion Sensors
Authors: Antonio Ruiz Gonzalez, Kwang-Leong Choy
Abstract:
The measurement of electrolytes has high value in the clinical routine. Ions are present in all body fluids at variable concentrations and are involved in multiple pathologies such as heart failure and chronic kidney disease. In the case of dissolved potassium, although a high blood concentration (hyperkalemia) is relatively uncommon in the general population, it is one of the most frequent acute electrolyte abnormalities. In recent years, the integration of thin-film technologies in this field has allowed the development of highly sensitive biosensors with ultra-low limits of detection for the assessment of metals in liquid samples. However, despite current efforts in the miniaturization of sensitive devices and their integration into portable systems, only a limited number of successful examples are used commercially. This can be attributed to the high cost involved in their production and the sustained degradation of the electrodes over time, which causes signal drift in the measurements. Thus, there is an unmet need for low-cost and robust sensors for the real-time monitoring of analyte concentrations in patients, allowing the early detection and diagnosis of diseases. This paper reports a thin-film ion-selective sensor for the evaluation of potassium ions in aqueous samples. Aerosol-assisted chemical vapor deposition (AACVD) was applied as the fabrication method owing to its cost-effectiveness and fine control over film deposition; the technique does not require vacuum and is suitable for coating large surface areas and structures with complex geometries. This approach allowed the fabrication of highly homogeneous surfaces with well-defined microstructures on 50 nm thin gold layers.
The degradation processes of the ubiquitously employed poly(vinyl chloride) membranes in contact with an electrolyte solution were studied, including polymer leaching, mechanical desorption of nanoparticles, and chemical degradation over time. Rational design of a protective coating based on an organosilicon material in combination with cellulose was then carried out to improve the long-term stability of the sensors, showing an improvement in performance after 5 weeks. The antifouling properties of the coating were assessed using a quartz crystal microbalance sensor, allowing quantification of adsorbed proteins in the nanogram range. A correlation between the microstructural properties of the films, the surface energy, and biomolecule adhesion was then found and used to optimize the protective film.Keywords: hyperkalemia, drift, AACVD, organosilicon
Procedia PDF Downloads 123
387 Preventing Discharge to No Fixed Address-Youth (NFA-Y)
Authors: Cheryl Forchuk, Sandra Fisman, Steve Cordes, Dan Catunto, Katherine Krakowski, Melissa Jeffrey, John D’Oria
Abstract:
The discharge of youth aged 16-25 from hospital into homelessness is a prevalent issue, despite research indicating social, safety, health, and economic detriments to both the individual and the community. Lack of stable housing for youth discharged into homelessness results in long-term consequences, including exacerbation of health problems, costly health care service use, and hospital readmission: people experiencing homelessness are four times more likely to be readmitted within one month of discharge, and hospitals must spend $2,559 more per client. Finding safe housing for these individuals is therefore imperative to their recovery and transition back to the community. Youth are the fastest-growing subgroup of people experiencing homelessness in Canada, and their needs are unique, including supports related to education, employment opportunities, and age-related service barriers. This study aims to identify the needs of youth at risk of homelessness by evaluating the efficacy of the “Preventing Discharge to No Fixed Address – Youth” (NFA-Y) program, which aims to prevent youth from being discharged from hospital into homelessness. The program connects youth aged 16-25 who are inpatients at London Health Sciences Centre and St. Joseph’s Health Care London to housing and financial support. Supports are offered through collaboration with community partners: Youth Opportunities Unlimited, Canadian Mental Health Association Elgin Middlesex, City of London Coordinated Access, Ontario Works, and the Salvation Army’s Housing Stability Bank. This study was reviewed and approved by Western University’s Research Ethics Board. A series of interviews is being conducted with approximately ninety-three youth participants at three time points: baseline (pre-discharge), six, and twelve months post-discharge.
Focus groups with participants, health care providers, and community partners are being conducted at the same three time points. In addition, administrative data from service providers will be collected and analyzed. Since homelessness has a detrimental effect on recovery, client and community safety, and healthcare expenditure, locating safe housing for psychiatric patients has a positive impact on treatment, rehabilitation, and the system as a whole. If successful, the findings of this project will offer safe policy alternatives for the prevention of homelessness among at-risk youth, help set them up for success in their future years, and mitigate the growth of the homeless youth population in Canada.Keywords: youth homelessness, no-fixed address, mental health, homelessness prevention, hospital discharge
Procedia PDF Downloads 103
386 Gender Differences in the Impact and Subjective Interpretation of Childhood Sexual Abuse Survivors
Authors: T. Borja-Alvarez, V. Jiménez-Borja, M. Jiménez Borja, C. J. Jiménez-Mosquera
Abstract:
Research on child sexual abuse has predominantly focused on female survivors. As a result, less research has examined the particular context in which this abuse takes place for boys and the impact it may have on male survivors. The aim of this study is to examine the sex and age of the perpetrators of child sexual abuse and to explore gender differences in the impact, along with the subjective interpretation that survivors attribute to these experiences. The data for this study were obtained from Ecuadorian university students (230 male, 293 female) who reported sexual abuse on the ISPCAN Child Abuse Screening Tool, Retrospective version (ICAST-R). Participants completed Horowitz's Impact of Event Scale (IES) and were also asked to choose among neutral, positive, and negative adjectives to describe these experiences. The results indicate that, in the case of males, perpetrators were both males (adults = 27%, peers = 20%, relatives = 10.3%, cousins = 7.4%) and young females (girlfriends or ex-girlfriends = 25.6%, neighborhood = 20.7%, school = 16.7%, cousins = 15.3%, strangers = 12.8%). In contrast, almost all females reported that adult males were the perpetrators (relatives = 29.6%, neighborhood = 11.9%, strangers = 19.9%, family friends = 9.7%). Regarding the impact of these events, significant gender differences emerged: more females (50%) than males (20%) presented symptoms of post-traumatic stress disorder (PTSD). Gender differences also surfaced in the way survivors interpret their experiences. Almost half of the male participants selected the word “consensual”, followed by the words “normal”, “helped me to mature”, “shameful”, “confusing”, and “traumatic”. In contrast, almost all females chose the word “non-consensual”, followed by the words “shameful”, “traumatic”, “scary”, and “confusing”.
In conclusion, the findings of this study suggest that young females and adult males were the most common perpetrators of sexually abused boys whereas adult males were the most common perpetrators of sexually abused girls. The impact and the subjective interpretation of these experiences were more negative for girls than for boys. The factors that account for the gender differences in the impact and the interpretation of these experiences need further exploration. It is likely that the cultural expectations of sexual behaviors for boys and girls in Latin American societies may partially explain the differential impact in the way these childhood sexual abuse experiences are interpreted in adulthood. In Ecuador, as is the case in other Latin American countries, the machismo culture not only accepts but encourages early sexual behaviors in boys and negatively judges premature sexual behavior in females. The result of these different sexual expectations may be that sexually abused boys may re-define these experiences as “consensual” and “normal” in adulthood, even though these were not consensual at the time of occurrence. Future studies are needed to more deeply understand the different contexts of sexual abuse for boys and girls in order to analyze the long-term impact of these experiences.Keywords: abuse, child, gender differences, sexual
Procedia PDF Downloads 102
385 Implementing the WHO Air Quality Guideline for PM2.5 Worldwide can Prevent Millions of Premature Deaths Per Year
Authors: Despina Giannadaki, Jos Lelieveld, Andrea Pozzer, John Evans
Abstract:
Outdoor air pollution by fine particles ranks among the top ten global health risk factors that can lead to premature mortality. Epidemiological cohort studies, mainly conducted in the United States and Europe, have shown that long-term exposure to PM2.5 (particles with an aerodynamic diameter of less than 2.5 μm) is associated with increased mortality from cardiovascular and respiratory diseases and lung cancer. Fine particulates can cause health impacts even at very low concentrations; previously, no concentration level had been defined below which health damage can be fully prevented. The World Health Organization ambient air quality guidelines suggest an annual mean PM2.5 concentration limit of 10 μg/m3. Populations in large parts of the world, especially in East and Southeast Asia and in the Middle East, are exposed to levels of fine particulate pollution that far exceed the World Health Organization guidelines. The aim of this work is to evaluate the implementation of recent air quality standards for PM2.5 in the EU, the US, and other countries worldwide and to estimate what measures would be needed to substantially reduce premature mortality. We investigated premature mortality attributable to fine particulate matter (PM2.5) among adults ≥ 30 years and children < 5 years, applying a high-resolution global atmospheric chemistry model combined with epidemiological concentration-response functions. The latter are based on the methodology of the Global Burden of Disease for 2010, assuming a ‘safe’ annual mean PM2.5 threshold of 7.3 μg/m3. We estimate global premature mortality from PM2.5 at 3.15 million/year in 2010. China is the leading country with about 1.33 million, followed by India with 575 thousand and Pakistan with 105 thousand. For the European Union (EU) we estimate 173 thousand and for the United States (US) 52 thousand in 2010.
Based on sensitivity calculations, we tested the gains from PM2.5 control by applying the air quality guidelines (AQG) and standards of the World Health Organization (WHO), the EU, the US, and other countries. To estimate potential reductions in mortality rates, we take into account the deaths that cannot be avoided after the implementation of PM2.5 upper limits, owing to the contribution of natural sources (mainly airborne desert dust) to total PM2.5 and therefore to mortality. The annual mean EU limit of 25 μg/m3 would reduce global premature mortality by 18%, while within the EU the effect is negligible, indicating that the standard is largely met and that stricter limits are needed. The new US standard of 12 μg/m3 would reduce premature mortality by 46% worldwide, 4% in the US, and 20% in the EU. Implementing the WHO AQG of 10 μg/m3 would reduce global premature mortality by 54%, by 76% in China, and by 59% in India; in the EU and US, mortality would be reduced by 36% and 14%, respectively. Hence, following the WHO guideline would prevent 1.7 million premature deaths per year. Sensitivity calculations indicate that even small changes to the lower PM2.5 standards can have major impacts on global mortality rates.Keywords: air quality guidelines, outdoor air pollution, particulate matter, premature mortality
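The threshold-based attribution logic described in this abstract (a concentration-response function applied only to exposure above a 'safe' level) can be sketched in a few lines. This is a minimal illustration under stated assumptions: the log-linear form, the coefficient beta = 0.008, and the baseline death count are placeholders for illustration, not the Global Burden of Disease functions or data actually used in the study.

```python
# Sketch of threshold-based attributable-mortality accounting, in the
# spirit of the GBD 2010 approach described in the abstract.
# ASSUMPTIONS: the log-linear form, beta = 0.008, and the baseline
# death count are illustrative placeholders, not the epidemiological
# concentration-response functions used in the study.

import math

def relative_risk(pm25, threshold=7.3, beta=0.008):
    """Relative risk for exposure above the 'safe' annual-mean threshold."""
    excess = max(pm25 - threshold, 0.0)
    return math.exp(beta * excess)

def attributable_deaths(pm25, baseline_deaths):
    """Premature deaths attributable to PM2.5 above the threshold."""
    rr = relative_risk(pm25)
    attributable_fraction = 1.0 - 1.0 / rr
    return baseline_deaths * attributable_fraction

# Compare standards: annual-mean PM2.5 capped at the EU limit (25 ug/m3)
# or the WHO guideline (10 ug/m3), for a hypothetical polluted region.
current = attributable_deaths(60.0, 1_000_000)
capped_eu = attributable_deaths(25.0, 1_000_000)
capped_who = attributable_deaths(10.0, 1_000_000)
print(f"mortality reduction at 25 ug/m3: {1 - capped_eu / current:.0%}")
print(f"mortality reduction at 10 ug/m3: {1 - capped_who / current:.0%}")
```

The key structural point the sketch captures is that deaths associated with exposure below the threshold (here including the natural, e.g. desert-dust, contribution) are treated as unavoidable, so tightening a standard toward the threshold yields diminishing but still large absolute gains.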
Procedia PDF Downloads 310
384 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity
Authors: Yuri Laevsky, Tatyana Nosova
Abstract:
The phenomenon of filtration gas combustion (FGC) was discovered experimentally in the early 1980s. It has a number of important applications in areas such as chemical technologies, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction through a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front exhibits different modes; this investigation focuses on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation, and its computation encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, together with the mass conservation law for the relative concentration of the reacting component of the gas mixture. The homogenization of the model is performed using the two-temperature approach, in which at each point of the continuous medium we specify solid and gas phases with Newtonian heat exchange between them. The construction of a computational scheme is based on the principles of the mixed finite element method on a regular mesh; the approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to determining the combustion front propagation velocity. Direct computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term ‘front propagation velocity’ makes sense for settled motion, when certain analytical formulae linking the velocity and the equilibrium temperature hold.
The numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm was applied in a subsequent numerical investigation of the FGC process, in which the dependence of the main characteristics of the process on various physical parameters was studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity was investigated. It was also reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow combustion front propagation to rapid propagation; approximate boundaries of this interval were calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave allows a semi-Lagrangian approach to the solution of the problem to be considered.Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation
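The instability of computing the front velocity as a straight grid derivative can be illustrated with a small sketch. The study's own stable formula (linking velocity to the equilibrium temperature) is not reproduced here; instead, the sketch shows a generic alternative, a least-squares slope over a window of interpolated front positions, purely to contrast a noisy pointwise derivative with a smoothed estimate. The temperature profile, noise level, and all parameters are synthetic assumptions.

```python
# Minimal illustration of why a raw grid derivative of the front
# position is noisy, and one generic smoothing alternative: a
# least-squares slope over a sliding window of front positions.
# NOTE: this is NOT the analytical velocity formula used in the study;
# all profiles and parameters below are synthetic assumptions.

import numpy as np

def front_position(x, T, T_ignition):
    """Locate the front by linear interpolation at the first grid
    cell where the temperature falls below the ignition threshold."""
    idx = np.argmax(T < T_ignition)          # first cell behind the front
    if idx == 0:
        return x[0]
    x0, x1 = x[idx - 1], x[idx]
    T0, T1 = T[idx - 1], T[idx]
    return x0 + (T0 - T_ignition) * (x1 - x0) / (T0 - T1)

def velocity_lsq(times, positions, window=5):
    """Least-squares slope over the last `window` samples."""
    t = np.asarray(times)[-window:]
    s = np.asarray(positions)[-window:]
    return np.polyfit(t, s, 1)[0]

# Synthetic settled front moving at 1e-3 m/s with grid-induced jitter
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 101)
s = 1e-3 * t + 1e-4 * rng.standard_normal(t.size)  # jittered positions
raw = np.diff(s) / np.diff(t)        # pointwise grid derivative: noisy
smooth = velocity_lsq(t, s, window=20)
print(f"raw derivative std: {raw.std():.2e}")
print(f"windowed estimate:  {smooth:.2e}")
```

With jitter of the order of a tenth of the grid-step displacement, the pointwise derivative fluctuates by more than the velocity itself, while the windowed estimate recovers the settled value closely; the analytical velocity-temperature relation used in the study avoids differentiating the position at all.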
Procedia PDF Downloads 299
383 Re-Presenting the Egyptian Informal Urbanism in Films between 1994 and 2014
Authors: R. Mofeed, N. Elgendy
Abstract:
Cinema constructs mind-spaces that reflect inherent human thoughts and emotions. As a representational art, Cinema introduces comprehensive images of life phenomena in different ways. The term “represent” carries a variety of meanings: to bring into presence, to replace, or to typify. In that sense, Cinema may present a phenomenon through direct embodiment, introduce a substitute image that replaces the original phenomenon, or typify it by relating the produced image to a more general category through a process of abstraction. This research questions the type of images that Egyptian Cinema presents of informal urbanism and how these images were conditioned and reshaped over the last twenty years. The informalities/slums phenomenon first appeared in Egypt, and particularly Cairo, in the early sixties; however, it was largely ignored by the state and society until the eighties, and its evident representation in Cinema came only by the mid-nineties. The Informal City comprises illegal housing developments and is a fast-growing form of urbanization in Cairo. Yet this expanding phenomenon is still depicted as minor, exceptional, and marginal through the cinematic lens. This paper aims to trace the forms of representation of urban informalities in Egyptian Cinema between 1994 and 2014 and to examine how they affected the popular mind and its perception of these areas. The paper follows two main lines of inquiry. The first traces the phenomenon through a chronological and geographical mapping of how informal urbanism has been portrayed in films, based on academic research work at Cairo University in Fall 2014. This visual tracing through maps and timelines allowed a reading of the phases of ignorance, presence, typifying, and repetition in the representation of this huge sector of the city across the more than 50 films investigated.
The analysis clearly revealed the “portrayed image” of informality in Cinema through the examined period. The second part of the paper explores the “perceived image”: a questionnaire was designed and applied to highlight the main features of the image perceived by both inhabitants of informalities and other Cairenes, based on watching selected films. The questionnaire covers the different images of informalities proposed in the Cinema, whether in a comic or a melodramatic register, and highlights the descriptive terms used, to see which of them resonate with mass perceptions and affect mental images. The two images, “portrayed” and “perceived”, are then confronted to reflect on issues of repetition, stereotyping, and reality. The formulated stereotype of informal urbanism is finally outlined and justified in relation to both the production and consumption mechanisms of films and the State's official vision of informalities.Keywords: cinema, informal urbanism, popular mind, representation
Procedia PDF Downloads 296
382 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students
Authors: Majed A. Alsalem
Abstract:
In our contemporary world, literacy is an essential skill that enables students to manage efficiently the many assignments they receive that require understanding and knowledge of the world around them. In addition, literacy enhances student participation in society, improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and sharing of knowledge. Therefore, literacy needs to be studied and understood in its full range of contexts: it should be seen as a set of social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers will be the central focus of this study. Teachers' main job is to increase students' performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them to their instruction. In a wider view, literacy means the acquisition of adequate and relevant reading skills that enable progression in one's career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy has changed the traditional meaning of literacy, the ability to read and write: new literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term new literacy has received considerable attention in the education field over the last few years.
New literacy provides multiple ways of engagement, especially for those with disabilities and other diverse learning needs. For example, using online tools in the classroom provides students with disabilities new ways to engage with the content, take in information, and express their understanding of it. This study will provide teachers with high-quality training sessions to meet the needs of DHH students and thereby increase their literacy levels. It will build a platform between regular instructional designs and digital materials that students can interact with. The intervention applied in this study is to train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM). Based on the power analysis conducted for this study, 98 teachers need to be included. Teachers will be chosen randomly to increase internal and external validity and to provide a representative sample of the population this study aims to measure, providing a base for further studies. This study is still in progress, and the initial results are promising, showing how students have engaged with digital books.Keywords: deaf and hard of hearing, digital books, literacy, technology
Procedia PDF Downloads 487
381 The Digital Divide: Examining the Use and Access to E-Health Based Technologies by Millennials and Older Adults
Authors: Delana Theiventhiran, Wally J. Bartfay
Abstract:
Background and Significance: As the Internet becomes the epitome of modern communications, there are many pragmatic reasons why the digital divide matters for accessing and using E-health based technologies. With the global rise of technology usage, older adults may not be as familiar and comfortable with technology and are thus put at a disadvantage compared to other generations, such as millennials, when examining and using E-health based platforms and technology. Currently, little is known about how older adults and millennials access and use E-health based technologies. Methods: A systematic review of the literature was undertaken employing three databases: (i) PubMed, (ii) ERIC, and (iii) CINAHL, with the search term 'digital divide and generations' used to identify potential articles. To extract the required data from the studies, a data abstraction tool was created to obtain the following information: (a) author, (b) year of publication, (c) sample size, (d) country of origin, (e) design/methods, and (f) major findings/outcomes. Inclusion criteria were publication dates between Jan 2009 and Aug 2018, written in the English language, target populations of older adults aged 65 and above and millennials, and peer-reviewed quantitative studies only. Major Findings: PubMed provided 505 potential articles, of which 23 met the inclusion criteria. ERIC provided 53 potential articles, of which none met the criteria following data extraction. CINAHL provided 14 potential articles, of which eight met the criteria following data extraction. Conclusion: Practically speaking, identifying how newer E-health based technologies can be integrated into society, and why there is a gap in digital technology use, will help reduce the impact on generations and individuals who are not as familiar with technology and Internet usage.
The largest concern of all is how to prepare older adults for new and emerging E-health technologies. There is currently a dearth of literature in this area because it is a newer field of research and little is known about it; the benefits and consequences of technology integrated into daily living are only beginning to be investigated. Several of the examined articles (N=11) indicated that age is one of the larger factors contributing to the digital divide. Similarly, several articles (N=5) identified privacy concerns as one of the main deterrents of technology usage for individuals aged 65 and above. The older adult generation regards privacy as a major concern, especially with regard to how data are collected, used, and possibly sold to third-party groups by various websites. Additionally, access to technology, the Internet, and infrastructure plays a large part in how individuals receive and use information. Lastly, a change in the way healthcare is currently used, received, and distributed would also help ensure that no generation is left behind in a technologically advanced society.
Keywords: digital divide, e-health, millennials, older adults
Procedia PDF Downloads 172
380 The Participation of Experts in the Criminal Policy on Drugs: The Proposal of a Cannabis Regulation Model in Spain by the Cannabis Policy Studies Group
Authors: Antonio Martín-Pardo
Abstract:
Regarding the context in which this paper is inserted, it is noteworthy that the current criminal policy model in which we find ourselves immersed, termed by part of the doctrine the citizen security model, is characterized by a marked tendency toward the discredit of expert knowledge. This type of technical knowledge has been displaced by common sense and the daily experience of the people at the time of legislative drafting, as well as by excessive attention to the short-term political effects of the law. Despite this adverse criminal-policy scene, we still find valuable efforts on the part of experts to bring some rationality to legislative development. This is the case of the proposal for a new cannabis regulation model in Spain carried out by the Cannabis Policy Studies Group (hereinafter referred to as 'GEPCA'). The GEPCA is a multidisciplinary group composed of authors with different orientations, trajectories, and interests, but with a common minimum objective: the conviction that the current situation regarding cannabis is unsustainable and that a rational legislative solution must be given to the growing social pressure for the regulation of its consumption and production. This paper details the main lines through which this technical proposal is developed, with the purpose of its dissemination and discussion in the Congress. The basic methodology of the proposal is inductive-expository. First, we will offer a brief but solid contextualization of the situation of cannabis in Spain. This contextualization will touch on issues such as the national regulatory situation and its relationship with the international context; the criminal, judicial, and penitentiary impact of the supply and consumption of cannabis; and the therapeutic use of the substance, among others. Second, we will turn to the substance of the proposal by detailing the three main cannabis access channels that are proposed.
Namely: the regulated market, the associations of cannabis users, and personal self-cultivation. For each of these options, especially the first two, special attention will be paid both to the production and processing of the substance and to the necessary administrative control of the activity. Finally, in a third block, some notes will be given on a series of subjects that surround the different access options mentioned above and that give fullness and coherence to the proposal. Among these related issues are the consumption and tenure of the substance; the advertising and promotion of cannabis; consumption in areas of special risk (e.g., work or driving); the tax regime; and the need to articulate evaluation instruments for the entire process. The main conclusion drawn from the analysis of the proposal is the unsustainability of the current repressive system, clearly unsuccessful, and the need to develop new access routes to cannabis that guarantee both public health and the rights of people who have freely chosen to consume it.
Keywords: cannabis regulation proposal, cannabis policies studies group, criminal policy, expertise participation
Procedia PDF Downloads 119
379 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European States. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of the energy refurbishment of a university building in the heating-dominated climate of South Italy. More in detail, the importance of using validated models will be examined exhaustively through an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying and adapting predefined profiles, and the design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest variables during energy modelling and when interpreting calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system, so the occupants cannot interact with it.
More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request. Then the different entries of consumption are analyzed, and for the most interesting cases the calibration indexes are also compared. Moreover, the same simulations are carried out for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is evidenced. This parametric study aims to underline the effect of modelling assumptions on the evaluation of performance indexes during the description of thermal zones.
Keywords: energy simulation, modelling calibration, occupant behavior, university building
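The calibration indexes are not named in the abstract; in building energy model calibration they are commonly the NMBE and CV(RMSE) defined in ASHRAE Guideline 14. A minimal sketch of how a schedule scenario could be scored against metered data, using invented monthly values (Guideline 14 divides by n - p; p = 0 is assumed here for brevity):

```python
import math

def nmbe(measured, simulated):
    """Normalized mean bias error, in percent (p = 0 adjustment for simplicity)."""
    n = len(measured)
    mean_m = sum(measured) / n
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / (n * mean_m)

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, in percent."""
    n = len(measured)
    mean_m = sum(measured) / n
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / mean_m

# Hypothetical monthly electricity use (MWh): metered vs. one occupancy scenario.
metered  = [10.0, 10.0, 10.0, 10.0]
scenario = [9.0, 11.0, 10.0, 10.0]
print(nmbe(metered, scenario), cv_rmse(metered, scenario))
```

A scenario can show near-zero bias (NMBE) while still deviating month by month, which is why both indexes are usually reported together.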
Procedia PDF Downloads 140
378 Acute Antihyperglycemic Activity of a Selected Medicinal Plant Extract Mixture in Streptozotocin Induced Diabetic Rats
Authors: D. S. N. K. Liyanagamage, V. Karunaratne, A. P. Attanayake, S. Jayasinghe
Abstract:
Diabetes mellitus is an ever-increasing global health problem which causes disability and untimely death. Current treatments using synthetic drugs have caused numerous adverse effects as well as complications, leading research efforts toward safe and effective alternative treatments for diabetes mellitus. Although there are effective traditional Ayurvedic remedies, for lack of scientific exploration they have not been proven beneficial for common use. Hence, the aim of this study is to evaluate a traditional remedy made of a mixture of plant components, namely leaves of Murraya koenigii L. Spreng (Rutaceae), cloves of Allium sativum L. (Amaryllidaceae), fruits of Garcinia quaesita Pierre (Clusiaceae), and seeds of Piper nigrum L. (Piperaceae), used for the treatment of diabetes. We report herein the preliminary results of an in vivo study of the anti-hyperglycaemic activity of extracts of the above plant mixture in Wistar rats. A mixture made of equal weights (100 g) of the above-mentioned medicinal plant parts was extracted into cold water, hot water (3 h reflux), and a water:acetone mixture (1:1) separately. Male Wistar rats were divided into six groups that received different treatments. Diabetes mellitus was induced by intraperitoneal administration of streptozotocin at a dose of 70 mg/kg in groups two through six. Group one (n=6) served as the healthy untreated control and group two (n=6) as the diabetic untreated control; both groups received distilled water. The cold water, hot water, and water:acetone plant extracts were orally administered to the diabetic rats in groups three, four, and five, respectively, at doses of 0.5 g/kg (n=6), 1.0 g/kg (n=6), and 1.5 g/kg (n=6) for each group. Glibenclamide (0.5 mg/kg) was administered to the diabetic rats in group six (n=6), which served as the positive control.
The acute anti-hyperglycemic effect was evaluated over a four-hour period using the total area under the curve (TAUC) method, and the results of the test groups were compared with the diabetic untreated control. The TAUC values of healthy and diabetic rats were 23.16 ± 2.5 mmol/L.h and 58.31 ± 3.0 mmol/L.h, respectively. A significant dose-dependent improvement in acute anti-hyperglycaemic activity was observed for the water:acetone extract (25%), the hot water extract (20%), and the cold water extract (15%) compared to the diabetic untreated control rats in terms of glucose tolerance (P < 0.05). The results therefore suggest that the plant mixture has a potent antihyperglycemic effect, validating its use in Ayurvedic medicine for the management of diabetes mellitus. Future studies will focus on the long-term in vivo anti-diabetic mechanisms and the isolation of the bioactive compounds responsible for the anti-diabetic activity.
Keywords: acute antihyperglycemic activity, herbal mixture, oral glucose tolerance test, Sri Lankan medicinal plant extracts
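The abstract applies the TAUC method without spelling out the computation; the standard approach is trapezoidal integration of the glucose-time curve over the observation window. A sketch with hypothetical readings (not the study's data):

```python
def tauc(times_h, glucose_mmol_per_l):
    """Total area under the glucose-time curve (mmol/L.h) by the trapezoidal rule."""
    return sum((t1 - t0) * (g0 + g1) / 2.0
               for t0, t1, g0, g1 in zip(times_h, times_h[1:],
                                         glucose_mmol_per_l, glucose_mmol_per_l[1:]))

# Hypothetical 4-hour glucose profile after an oral glucose load (mmol/L).
times   = [0.0, 1.0, 2.0, 3.0, 4.0]
glucose = [5.0, 8.0, 7.0, 6.0, 5.0]
print(tauc(times, glucose))  # mmol/L.h over the 4 h window
```

A treated group whose curve returns toward baseline faster yields a smaller TAUC, which is how the percentage improvements above would be read.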
Procedia PDF Downloads 179
377 (Re)connecting to the Spirit of the Language: Decolonizing from Eurocentric Indigenous Language Revitalization Methodologies
Authors: Lana Whiskeyjack, Kyle Napier
Abstract:
The spirit of the language embodies the motivation for Indigenous people to connect with the Indigenous language of their lineage. While the concept of the spirit of the language is often woven into discussion by Indigenous language revitalizationists, particularly those who are Indigenous, there are few tangible terms in academic research conceptually actualizing it. Through collaborative work with Indigenous language speakers, elders, and learners, this research sets out to identify the spirit of the language, the catalysts of disconnection from the spirit of the language, and the sources of reconnection to it. This work fundamentally addresses the terms of engagement around collaboration with Indigenous communities, itself inviting a decolonial approach to community outreach and individual relationships. As Indigenous researchers, this means beginning, maintaining, and closing this work in ceremony while being transparent with community members throughout the project’s duration and in related publishing. Decolonizing this approach also requires maintaining explicit ongoing consent by the elders, knowledge keepers, and community members when handling their ancestral and Indigenous knowledge. The handling of this knowledge is regarded in this work as stewardship, both of digital materials and of ancestral Indigenous knowledge. This work draws on recorded conversations in both nêhiyawêwin and English, resulting from 10 semi-structured interviews with fluent nêhiyawêwin speakers as well as three structured dialogue circles with fluent and emerging speakers. The words were transcribed by a speaker fluent in both nêhiyawêwin and English. The results of the interviews were categorized thematically to conceptually actualize the spirit of the language, the catalysts of disconnection from the spirit of the language, and community-voiced methods of reconnection to the spirit of the language.
Results of these interviews largely indicate that the spirit of the language is drawn from the land. Although nêhiyawêwin is the focus of this work, Indigenous languages are by nature inherently related to the land; this is further reaffirmed by the Indigenous language learners and speakers who expressed having ancestries and lineages from multiple Indigenous communities. Several other key elements embody the spirit of the language, including ceremony and spirituality, as well as the semantic worldviews tied to the polysynthetic, verb-oriented morphophonemics most often found in Indigenous languages, nêhiyawêwin among them. The catalysts of disconnection from the spirit of the language are those forces whose histories have severed connections between Indigenous Peoples and the spirit of their languages, or that have affected relationships with the land, ceremony, and ways of thinking. This research and its literature review identify the three most ubiquitously damaging interdependent catalysts of disconnection from the spirit of the language: colonization, capitalism, and Christianity. As voiced by the Indigenous language learners, this work necessitates addressing means of reconnecting to the spirit of the language. Interviewees noted that the process of reconnection involves a whole relationship with the land, the practice of reciprocal-relational methodologies for language learning, and Indigenous-protected and -governed learning. This work concludes in support of those reconnection methodologies.
Keywords: indigenous language acquisition, indigenous language reclamation, indigenous language revitalization, nêhiyawêwin, spirit of the language
Procedia PDF Downloads 143
376 Research on Reducing Food Losses by Extending the Date of Minimum Durability on the Example of Cereal Products
Authors: Monika Trzaskowska, Dorota Zielinska, Anna Lepecka, Katarzyna Neffe-Skocinska, Beata Bilska, Marzena Tomaszewska, Danuta Kolozyn-Krajewska
Abstract:
Microbiological quality and food safety are important food characteristics. Regulation (EU) No 1169/2011 of the European Parliament and of the Council on the provision of food information to consumers introduces the obligation to provide information on the 'use-by' date or the date of minimum durability (DMD). The second term is the date until which a properly stored or transported foodstuff retains its physical, chemical, microbiological, and organoleptic properties. The date should be preceded by 'best before' and is used for durable products, e.g., pasta. In relation to reducing food losses, the question may be asked whether products retain quality and safety beyond their currently declared date of minimum durability. The aim of the study was to assess the sensory quality and microbiological safety of selected cereal products, i.e., pasta and millet, after the DMD. The scope of the study was to determine markers of microbiological quality, i.e., the total viable count (TVC), the number of bacteria from the Enterobacteriaceae family, and the total yeast and mold count (TYMC), on the last day of the DMD and after 1 and 3 months of storage. In addition, the presence of Salmonella and Listeria monocytogenes was examined on the last day of the DMD. The sensory quality of the products was assessed by quantitative descriptive analysis (QDA); the intensity of 14 attributes and the overall quality were defined and determined. In the tested samples of millet and pasta, the pathogenic bacteria Salmonella and Listeria monocytogenes were not found. The selected quality and microbiological safety indicators on the last day of the DMD were in the range of about 1-3 log cfu/g, which demonstrates the good microbiological quality of the tested food. Comparing the products, a higher number of microorganisms was found in the samples of millet. After 3 months of storage, the TVC decreased in millet, while in pasta it increased.
In both products, the number of bacteria from the Enterobacteriaceae family decreased. In contrast, the TYMC increased in the millet samples and decreased in pasta. The intensity of the sensory characteristics varied over the studied period, remaining at a similar level or increasing. In millet, the odor and flavor of 'cooked porridge' intensified 3 months after the DMD; similarly, in the pasta, the smell and taste of 'cooked pasta' became more intense. To sum up, the researched products on the last day of the date of minimum durability were characterized by very good microbiological and sensory quality, which was maintained for 3 months after this date. Based on these results, the date of minimum durability of the tested products could be extended. The publication was financed on the basis of an agreement with the National Center for Research and Development No. Gospostrateg 1/385753/1/NCBR/2018 for the implementation and financing of the project under the strategic research and development program 'social and economic development of Poland in the conditions of globalizing markets – GOSPOSTRATEG - acronym PROM'.
Keywords: date of minimum durability, food losses, food quality and safety, millet, pasta
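The counts above are reported in log cfu/g. As an illustration with invented plate-count figures (not the study's data), the conversion from a colony count on a diluted plate to log cfu/g is:

```python
import math

def cfu_per_g(colonies, dilution, plated_volume_ml):
    """cfu per gram from a plate count.

    dilution: dilution factor of the plated suspension (e.g. 1e-2).
    Assumes 1 g of sample per 1 mL of the initial suspension (a common convention;
    an assumption here, not a detail taken from the study)."""
    return colonies / (dilution * plated_volume_ml)

# Hypothetical plate: 150 colonies from the 10^-2 dilution, 0.1 mL plated.
count = cfu_per_g(colonies=150, dilution=1e-2, plated_volume_ml=0.1)
print(round(math.log10(count), 2))  # log cfu/g
```

Reporting on the log scale makes changes during storage comparable across products whose absolute counts differ by orders of magnitude.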
Procedia PDF Downloads 161
375 Analysis of Influencing Factors on Infield-Logistics: A Survey of Different Farm Types in Germany
Authors: Michael Mederle, Heinz Bernhardt
Abstract:
The management of machine fleets and autonomous vehicle control will considerably increase efficiency in future agricultural production. Entire process chains in particular, e.g. harvesting complexes with several interacting combine harvesters, grain carts, and removal trucks, offer a lot of optimization potential. Organization and pre-planning make these efficiency reserves accessible, and one way to achieve this is to optimize infield path planning. Autonomous machinery especially requires precise specifications of infield logistics in order to navigate effectively and run process-optimized in the fields, individually or in machine complexes. In the past, a lot of theoretical optimization has been done regarding infield logistics, mainly based on field geometry. However, there are reasons why farmers often do not apply the infield strategy suggested by mathematical route planning tools. To make computational optimization more useful for farmers, this study focuses on these influencing factors through expert interviews. As a result, practice-oriented navigation not only to the field but also within the field will become possible. The survey study is intended to cover the entire range of German agriculture: rural mixed farms with simple technical equipment are considered as well as large agricultural cooperatives which farm thousands of hectares using track guidance and various other electronic assistance systems. First results show that farm managers using guidance systems increasingly attune their infield logistics to direction-giving obstacles such as power lines. In consequence, they can avoid inefficient boom flips while applying plant protection with the sprayer. Livestock farmers rather focus on the application of organic manure, with its specific requirements concerning road conditions, landscape terrain, and field access points.
The cultivation of sugar beets makes great demands on infield patterns because of its particularities, such as the row crop system and high logistics demands. Furthermore, several machines working in the same field simultaneously influence each other, regardless of whether they are of the same type. Specific infield strategies are always based on the interaction of several different influences and decision criteria. Single working steps like tillage, seeding, plant protection, or harvest mostly cannot be considered individually; the entire production process has to be taken into consideration to determine the right infield logistics. One long-term objective of this examination is to integrate the identified influences on infield strategies as decision criteria into an infield navigation tool. In this way, path planning will become more practical for farmers, which is a basic requirement for automatic vehicle control and increased process efficiency.
Keywords: autonomous vehicle control, infield logistics, path planning, process optimizing
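Geometry-based route planners of the kind discussed above typically start by laying out parallel working tracks from the implement's working width, driven in an alternating (boustrophedon) pattern. A deliberately simplified sketch that ignores headlands, obstacles such as power lines, and the practical factors this survey highlights; all names and values are illustrative:

```python
import math

def parallel_tracks(field_width_m, working_width_m):
    """Centre-line offsets and driving directions for a boustrophedon pattern
    on an idealized rectangular field."""
    n_tracks = math.ceil(field_width_m / working_width_m)
    tracks = []
    for i in range(n_tracks):
        offset = (i + 0.5) * working_width_m        # track centre line
        direction = "up" if i % 2 == 0 else "down"  # alternate to reduce turning
        tracks.append((offset, direction))
    return tracks

# 30 m wide field, 6 m working width -> 5 alternating tracks.
print(parallel_tracks(30.0, 6.0))
```

The gap between such idealized plans and real driving decisions (obstacles, manure logistics, row crops) is exactly what the expert interviews are meant to capture.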
Procedia PDF Downloads 233
374 Controlled Synthesis of Pt₃Sn-SnOx/C Electrocatalysts for Polymer Electrolyte Membrane Fuel Cells
Authors: Dorottya Guban, Irina Borbath, Istvan Bakos, Peter Nemeth, Andras Tompos
Abstract:
One of the greatest challenges in the implementation of polymer electrolyte membrane fuel cells (PEMFCs) is to find active and durable electrocatalysts. The cell performance is always limited by the oxygen reduction reaction (ORR) on the cathode, since it is at least 6 orders of magnitude slower than the hydrogen oxidation on the anode; therefore, a high Pt loading is required. Catalyst corrosion is also more significant on the cathode, especially in mobile applications, where rapid load changes have to be tolerated. Pt-Sn bulk alloys and SnO2-decorated Pt3Sn nanostructures are among the most studied bimetallic systems for fuel cell applications. Exclusive formation of supported Sn-Pt alloy phases with different Pt/Sn ratios can be achieved by using controlled surface reactions (CSRs) between hydrogen adsorbed on Pt sites and tetraethyl tin. In this contribution, our results for commercial and home-made 20 wt.% Pt/C catalysts modified by tin anchoring via CSRs are presented. The parent Pt/C catalysts were synthesized by a modified NaBH4-assisted ethylene-glycol reduction method using ethanol as a solvent, which resulted either in dispersed and highly stable Pt nanoparticles or in evenly distributed raspberry-like agglomerates, depending on the chosen synthesis parameters. The 20 wt.% Pt/C catalysts prepared in this way showed improved electrocatalytic performance in the ORR and better stability than the commercial 20 wt.% Pt/C catalysts. Then, in order to obtain Sn-Pt/C catalysts with a Pt/Sn = 3 ratio, the Pt/C catalysts were modified with tetraethyl tin (SnEt4) using three and five consecutive tin anchoring periods. According to in situ XPS studies, in the case of catalysts with highly dispersed Pt nanoparticles, pre-treatment in hydrogen even at 170°C resulted in complete reduction of the ionic tin to Sn0; no evidence of the presence of a SnO2 phase was found by XRD and EDS analysis.
These results demonstrate that the method of CSRs is a powerful tool to create Pt-Sn bimetallic nanoparticles exclusively, without tin deposition onto the carbon support. In contrast, the XPS results revealed that the tin-modified catalysts with raspberry-like Pt agglomerates always contained a fraction of non-reducible tin oxide. At the same time, they showed higher activity and long-term stability in the ORR than Pt/C, which was assigned to the presence of SnO2 in close proximity/contact with the Pt-Sn alloy phase. It has been demonstrated that the content and dispersion of the fcc Pt3Sn phase within the electrocatalysts can be controlled by tuning the reaction conditions of the CSRs. The bimetallic catalysts displayed an outstanding performance in the ORR, and the preparation of a highly dispersed 20Pt/C catalyst makes it possible to decrease the Pt content without a relevant decline in the electrocatalytic performance of the catalysts.
Keywords: anode catalyst, cathode catalyst, controlled surface reactions, oxygen reduction reaction, PtSn/C electrocatalyst
Procedia PDF Downloads 234
373 Liquid Illumination: Fabricating Images of Fashion and Architecture
Authors: Sue Hershberger Yoder, Jon Yoder
Abstract:
“The appearance does not hide the essence, it reveals it; it is the essence.”—Jean-Paul Sartre, Being and Nothingness Three decades ago, transarchitect Marcos Novak developed an early form of algorithmic animation he called “liquid architecture.” In that project, digitally floating forms morphed seamlessly in cyberspace without claiming to evolve or improve. Change itself was seen as inevitable. And although some imagistic moments certainly stood out, none was hierarchically privileged over another. That project challenged longstanding assumptions about creativity and artistic genius by posing infinite parametric possibilities as inviting alternatives to traditional notions of stability, originality, and evolution. Through ephemeral processes of printing, milling, and projecting, the exhibition “Liquid Illumination” destabilizes the solid foundations of fashion and architecture. The installation is neither worn nor built in the conventional sense, but—like the sensual art forms of fashion and architecture—it is still radically embodied through the logics and techniques of design. Appearances are everything. Surface pattern and color are no longer understood as minor afterthoughts or vapid carriers of dubious content. Here, they become essential but ever-changing aspects of precisely fabricated images. Fourteen silk “colorways” (a term from the fashion industry) are framed selections from ongoing experiments with intricate pattern and complex color configurations. Whether these images are printed on fabric, milled in foam, or illuminated through projection, they explore and celebrate the untapped potentials of the surficial and superficial. Some components of individual prints appear to float in front of others through stereoscopic superimpositions; some figures appear to melt into others due to subtle changes in hue without corresponding changes in value; and some layers appear to vibrate via moiré effects that emerge from unexpected pattern and color combinations. 
The liturgical atmosphere of Liquid Illumination is intended to acknowledge that, like the simultaneously sacred and superficial qualities of rose windows and illuminated manuscripts, artistic and religious ideologies are also always malleable. The intellectual provocation of this paper pushes the boundaries of current thinking about viable applications for fashion print designs and architectural images, challenging traditional distinctions between fine art and design. The opportunistic installation of digital printing, CNC milling, and video projection mapping in a gallery normally reserved for fine art exhibitions raises important questions about cultural/commercial display, mass customization, digital reproduction, and the increasing prominence of surface effects (color, texture, pattern, reflection, saturation, etc.) across a range of artistic practices and design disciplines.
Keywords: fashion, print design, architecture, projection mapping, image, fabrication
Procedia PDF Downloads 88
372 Cross-Language Variation and the ‘Fused’ Zone in Bilingual Mental Lexicon: An Experimental Research
Authors: Yuliya E. Leshchenko, Tatyana S. Ostapenko
Abstract:
Language variation is a widespread linguistic phenomenon which can affect different levels of a language system: phonological, morphological, lexical, syntactic, etc. It is obvious that the scope of possible standard alternations within a particular language is limited by a variety of its norms and regulations, which set more or less clear boundaries for what is and is not possible for the speakers. The possibility of lexical variation (alternate usage of lexical items within the same contexts) is based on the fact that the meanings of words are not clearly and rigidly defined in the consciousness of the speakers. Therefore, lexical variation is usually connected with an unstable relationship between words and their referents: a case when a particular lexical item refers to different types of referents, or when a particular referent can be named by various lexical items. We assume that the scope of lexical variation in bilingual speech is generally wider than that observed in monolingual speech due to the fact that, besides 'lexical item - referent' relations, it involves the possibility of cross-language variation of L1 and L2 lexical items. We use the term 'cross-language variation' to denote a case when two equivalent words of different languages are treated by a bilingual speaker as freely interchangeable within the common linguistic context. As distinct from code-switching, traditionally defined as the conscious use of more than one language within one communicative act, in the case of cross-language lexical variation the speaker does not perceive the alternate lexical items as belonging to different languages and, therefore, does not realize the change of language code. In this paper, the authors present research on lexical variation among adult Komi-Permyak–Russian bilingual speakers.
The two languages co-exist on the territory of the Komi-Permyak District in Russia (Komi-Permyak as the ethnic language and Russian as the official state language), are usually acquired from birth in a natural linguistic environment and, according to the data of sociolinguistic surveys, are both identified by the speakers as coordinate mother tongues. The experimental research demonstrated that the alternation of Komi-Permyak and Russian words within one utterance/phrase is highly frequent in both speech perception and production. Moreover, our participants rated cross-language word combinations like ‘маленькая /Russian/ нывка /Komi-Permyak/’ (‘a little girl’) or ‘мунны /Komi-Permyak/ домой /Russian/’ (‘go home’) as regular/habitual, containing no violation of any linguistic rules, and as equally possible in speech as the equivalent intra-language word combinations (‘учöтик нывка’ /Komi-Permyak/ or ‘идти домой’ /Russian/). All facts considered, we claim that the constant concurrent use of the two languages results in a large number of their words tending to be intuitively interpreted by the speakers as lexical variants not only related to the same referent, but also referring to both languages or, more precisely, to none of them in particular. Consequently, we can suppose that the bilingual mental lexicon includes an extensive ‘fused’ zone of lexical representations that provides the basis for cross-language variation in bilingual speech.
Keywords: bilingualism, bilingual mental lexicon, code-switching, lexical variation
Procedia PDF Downloads 148