Search results for: composite action
259 Mathematical Modelling of Bacterial Growth in Products of Animal Origin in Storage and Transport: Effects of Temperature, Use of Bacteriocins and pH Level
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cordova
Abstract:
Pathogen growth in animal-source foods is a common problem in the food industry, causing monetary losses due to the spoiling of products or food intoxication outbreaks in the community. In this sense, the quality of the product is reflected in the population of deteriorating agents present in it, which are mainly bacteria. The factors most likely associated with freshness in animal-source foods are temperature and processing, storage, and transport times. However, the level of deterioration of products depends, in turn, on the characteristics of the bacterial population causing the decomposition or spoiling, such as pH level and toxins. Knowing the growth dynamics of the agents involved in product contamination allows monitoring for more efficient processing. This means better quality and reasonable costs, along with a better estimation of the time and temperature intervals necessary for transport and storage in order to preserve product quality. The objective of this project is to design a secondary model that allows measuring the impact of temperature on bacterial growth, together with the competition for pH adequacy and the release of bacteriocins, in order to describe this phenomenon and, thus, estimate food product half-life with the least possible risk of deterioration or spoiling. To achieve this objective, the authors propose the analysis of a three-dimensional ordinary differential system which includes: logistic bacterial growth extended by the inhibitory action of bacteriocins and including the effect of the medium pH; change in the medium pH levels through an adaptation of the Luedeking-Piret kinetic model; and bacteriocin concentration modeled similarly to pH levels. All three dimensions are influenced by temperature at all times. This differential system is then expanded, taking into consideration variable temperature and the concentration of pulsed bacteriocins, which represent characteristics inherent to the modeled scenarios, such as transport and storage, as well as the incorporation of substances that inhibit bacterial growth. The main results show that a temperature increase in an early stage of transport raises the bacterial population significantly more than the same increase during the final stage. On the other hand, the incorporation of bacteriocins, as in other investigations, proved to be efficient only in the short and medium term: although the population of bacteria decreased, once the bacteriocins were depleted or degraded over time, the bacteria eventually returned to their regular growth rate. The efficacy of the bacteriocins at low temperatures decreased slightly, which is consistent with the fact that their natural degradation rate also decreased. In summary, the implementation of the mathematical model allowed the simulation of a set of possible bacteria present in animal-based products, along with their properties, in various transport and storage situations, which leads to the conclusion that, for inhibiting bacterial growth, the optimum is a combination of low constant temperatures and the initial use of bacteriocins.
Keywords: bacterial growth, bacteriocins, mathematical modelling, temperature
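The three-dimensional system described above lends itself to a compact numerical sketch. The snippet below is a minimal illustration of that model class: logistic growth with a bacteriocin inhibition term, plus Luedeking-Piret-type pH and bacteriocin dynamics. Every functional form and parameter value here is an assumption chosen for demonstration, not the authors' calibrated model.

```python
# Minimal sketch of a bacteria / pH / bacteriocin ODE system under constant temperature.
# All rate constants and the temperature dependence are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, T):
    N, pH, B = y                                   # bacteria [CFU/ml], medium pH, bacteriocin conc.
    r = 0.5 * np.exp(0.06 * (T - 4))               # assumed temperature-dependent growth rate
    K = 1e9                                        # assumed carrying capacity
    f_pH = 1.0 / (1.0 + np.exp(-(pH - 5.0)))       # assumed pH modulation of growth
    growth = r * N * (1 - N / K) * f_pH
    dN = growth - 0.02 * B * N                     # bacteriocin inhibition term
    dpH = -(0.3 * growth + 0.01 * N) / K           # Luedeking-Piret-type acidification
    dB = (0.2 * growth + 0.005 * N) / K - 0.05 * B # production minus natural degradation
    return [dN, dpH, dB]

# 72 h of storage at a constant 10 degC, starting from 1e4 CFU/ml, pH 6.5, no bacteriocin.
sol = solve_ivp(model, (0, 72), [1e4, 6.5, 0.0], args=(10.0,), dense_output=True)
print(sol.y[:, -1])  # state at the end of the storage period
```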
Procedia PDF Downloads 137
258 Photography as a Medium of Communication within the Campaign for Raising Awareness of Controlled Consumption of Television Contents
Authors: Jelena Kovačević Vorgučin, Sibila Petenji Arbutina
Abstract:
The postmodern age brings a rapid development of technology, which inevitably leads to man's need to adapt to the modern lifestyle. On the one hand, technological achievements have made human life easier, but there are numerous risks involved. Moreover, man's awareness and perception are changing and adapting unconsciously to the world we live in, while communication in the 21st century is predominantly based on the consumption of images. This paper presents the sociological aspects of a community confined by turbulent political-economic circumstances and their impact on the development of media literacy in Serbia. Previous research led to the conclusion that media culture is at an extremely low level and that it can have a strong influence on the general development of society, starting from the youngest segment of the population. Our aim is to use conceptual authorial photographs inspired by the obtained research results to emphasize the impact visual art has in delivering the message, and its role in education and in raising awareness of universal social problems. The paper presents a number of stages involved in the conceptual project, which is designed to last over a longer period of time in order to facilitate dissemination of information. First, a survey was carried out in several preschool institutions. This resulted in obtaining the necessary information on the habitual use of the medium of television by children and their carers (parents). The second stage focused on the relationship between the parent and the child in TV consumption. Further, an overview of the visual part of the project was made, which consisted of photographs in various dimensions, ranging from miniature to large formats, and following various exhibition principles in both gallery and alternative spaces. This stage of the project placed particular emphasis on non-standard exhibiting formats and alternative exhibition principles, which are increasingly present in all kinds of visual art, aimed at achieving a higher level of noticing and memorizing the information. The motif of the authorial photographs is children's portraits taken while they are watching different television contents, with emphasis on their emotional response. The importance of the medium of TV is particularly emphasized due to the fact that its consumption is the highest, even though there are newer and more advanced information-technological achievements. The already realized part of the project was used for an analysis of the results in the last stage of the project, which led to the conclusion that the response to the entire visual expression campaign was extremely positive and the action as such very useful. The results obtained speak in favour of widening and continuing the project, both at a greater number of sites locally and in other communities in Serbia, with the aim of guiding people towards meaningful consumption of the television medium.
Keywords: alternative space exhibiting, children and TV, conceptual portrait photography, media literacy
Procedia PDF Downloads 258
257 Linguistic Cyberbullying, a Legislative Approach
Authors: Simona Maria Ignat
Abstract:
Bullying online has been an increasingly studied topic in recent years. Different approaches, whether psychological, linguistic, or computational, have been applied. To the best of our knowledge, an internationally agreed definition and set of characteristics of the phenomenon, serving as a common framework, are still lacking. Thus, the objectives of this paper are the identification of bullying utterances on Twitter and the formulation of their algorithms. This research paper is focused on the identification of words or groups of words, categorized as "utterances", with bullying effect on the Twitter platform, extracted according to a set of legislative criteria. This set is the result of analysis followed by synthesis of law documents on (online) bullying from the United States of America, the European Union, and Ireland. The outcome is a linguistic corpus with approximately 10,000 entries. The methods applied to the first objective were the following. Discourse analysis was applied in the identification of keywords with bullying effect in texts from the Google search engine, Images link. Transcription and anonymization were applied to texts grouped in CL1 (Corpus Linguistics 1). The keyword search method and the legislative criteria were used for identifying bullying utterances from Twitter. The texts with at least 30 representations on Twitter were grouped; they form the second linguistic corpus, Bullying Utterances from Twitter (CL2). The entries were identified by using the legislative criteria on the bag-of-words (BoW) method principle. BoW is a method of extracting words or groups of words with the same meaning in any context. The methods applied for reaching the second objective are the conversion of parts of speech to alphabetical and numerical symbols and the writing of the bullying utterances as algorithms. The converted form of parts of speech was chosen on the criterion of relevance within the bullying message. The inductive reasoning approach was applied in sampling and identifying the algorithms. The results are groups with interchangeable elements. The outcomes convey two aspects of bullying: the form and the content or meaning. The form conveys the intentional intimidation against somebody, expressed at the level of the texts by grammatical and lexical marks. This outcome has applicability in forensic linguistics for establishing the intentionality of an action. Another outcome of form is a complex of graphemic variations essential in detecting harmful texts online. This research enriches the lexicon already known on the topic. The second aspect, the content, revealed topics like threat, harassment, assault, or suicide. They are subcategories of a broader harmful content which is a constant concern for task forces and legislators at national and international levels. These topics, as outcomes of the dataset, are a valuable source for detection. The analysis of content revealed algorithms and lexicons which could be applied to other harmful contents. A third outcome of content concerns stylistics, which is a rich source for discourse analysis of social media platforms. In conclusion, this linguistic corpus is structured on legislative criteria and could be used in various fields.
Keywords: corpus linguistics, cyberbullying, legislation, natural language processing, twitter
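To make the keyword-search / bag-of-words step concrete, the sketch below screens tweets against a small lexicon of utterances of the kind derived from legislative criteria. The lexicon entries and tweets are hypothetical placeholders, not items from the paper's 10,000-entry corpus, and scikit-learn's CountVectorizer merely stands in for the counting step.

```python
# Minimal sketch: flag tweets that contain at least one lexicon utterance.
from sklearn.feature_extraction.text import CountVectorizer

lexicon = ["nobody wants you here", "you should disappear", "everyone hates you"]  # hypothetical entries
tweets = ["honestly everyone hates you, just leave",
          "great match last night, congrats to the team"]

# ngram_range up to 4 tokens so multi-word utterances can be matched as units.
vectorizer = CountVectorizer(vocabulary=lexicon, ngram_range=(1, 4), lowercase=True)
counts = vectorizer.transform(tweets)     # rows: tweets, columns: lexicon entries
flagged = counts.sum(axis=1).A1 > 0       # tweets with one or more lexicon hits
print(list(zip(tweets, flagged)))
```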
Procedia PDF Downloads 86
256 (De)Motivating Mitigation Behavior: An Exploratory Framing Study Applied to Sustainable Food Consumption
Authors: Youval Aberman, Jason E. Plaks
Abstract:
This research provides initial evidence that self-efficacy of mitigation behavior – the belief that one's action can make a difference to the environment – can be implicitly inferred from the way numerical information is presented in environmental messages. The scientific community sees climate change as a pressing issue, but the general public tends to construe climate change as an abstract phenomenon that is psychologically distant. As such, a main barrier to pro-environmental behavior is that individuals often believe that their own behavior makes little to no difference to the environment. When it comes to communicating how the behavior of billions of individuals affects global climate change, it might appear valuable to aggregate those billions and present the shocking enormity of the resources individuals consume. This research provides initial evidence that, in fact, this strategy is ineffective; presenting large-scale aggregate data dilutes the contribution of the individual and impedes individuals' motivation to act pro-environmentally. The high-impact, underrepresented behavior of eating a sustainable diet was chosen for the present studies. US participants (total N = 668) were recruited online for a study on 'meat and the environment' and received information about some of the resources used in meat production – water, CO2e, and feed – with numerical information that varied in its frame of reference. A 'Nation' frame of reference discussed the resources used in the beef industry, such as the billions of CO2e released daily by the industry, while a 'Meal' frame of reference presented the resources used in the production of a single beef dish. Participants completed measures of pro-environmental attitudes and behavioral intentions, either immediately (Study 1) or two days (Study 2) after reading the information. In Study 2 (n = 520), participants also indicated whether they consumed less or more meat than usual. Study 2 included an additional control condition that contained no environmental data. In Study 1, participants who read about meat production at a national level, compared to at a meal level, reported lower motivation to make ecologically conscious dietary choices and lower behavioral intention to change their diet. In Study 2, a similar pattern emerged, with the added insight that the Nation condition, but not the Meal condition, deviated from the control condition. Participants across conditions, on average, reduced their meat consumption over the duration of Study 2, except those in the Nation condition, whose consumption remained unchanged. Presenting nation-wide consequences of human behavior is a double-edged sword: framing on a large scale might reveal the relationship between collective actions and environmental issues, but it hinders the belief that individual actions make a difference.
Keywords: climate change communication, environmental concern, meat consumption, motivation
Procedia PDF Downloads 159
255 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays
Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner
Abstract:
Early immobility affects motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology that would provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots in early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm where non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment is developed where both the child and the robot interact and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform, 2) automatic functions that perceive the child's actions through novel activity recognition algorithms and decide appropriate actions for the robot, and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested by bringing in a two-year-old boy with Down syndrome for eight sessions. The child presented delays throughout his motor development, with the current delay being in the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g., climbing an inclined platform and/or staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or was at its end point 'signaling' for interaction. From these sessions, information was gathered to develop algorithms to automate the perception of the activities on which the robot bases its actions. A Markov Decision Process (MDP) is used to model the intentions of the child. A 'smoothing' technique is used to help identify the model's parameters, which is a critical step when dealing with small data sets such as in this paradigm. The child engaged in all activities and socially interacted with the robot across sessions. With time, the child's mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g., taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction. Smoothing facilitates learning MDP parameters from small data sets. This paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation
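The role of smoothing in this setting can be illustrated with a tiny example: estimating MDP transition probabilities from only a handful of observed child-robot interaction episodes, with additive (Laplace) smoothing so that transitions never seen in the small data set still keep a non-zero probability. The states, action, counts, and smoothing strength below are hypothetical, not the study's actual model.

```python
# Minimal sketch of additive smoothing for MDP transition estimates from a small data set.
from collections import Counter

states = ["idle", "looking_at_robot", "moving_toward_goal"]
observed = [("idle", "robot_signals", "looking_at_robot"),
            ("looking_at_robot", "robot_signals", "moving_toward_goal"),
            ("looking_at_robot", "robot_signals", "moving_toward_goal")]

alpha = 1.0                   # smoothing strength (assumed)
counts = Counter(observed)

def transition_prob(s, a, s_next):
    """P(s_next | s, a) with additive smoothing over the observed episodes."""
    total = sum(counts[(s, a, sp)] for sp in states)
    return (counts[(s, a, s_next)] + alpha) / (total + alpha * len(states))

# An unseen transition still receives a small, non-zero probability.
print(transition_prob("idle", "robot_signals", "moving_toward_goal"))
```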
Procedia PDF Downloads 294
254 Immobilization of Superoxide Dismutase Enzyme on Layered Double Hydroxide Nanoparticles
Authors: Istvan Szilagyi, Marko Pavlovic, Paul Rouster
Abstract:
Antioxidant enzymes are the most efficient defense systems against reactive oxygen species, which cause severe damage in living organisms and industrial products. However, their supplementation is problematic due to their high sensitivity to environmental conditions. Immobilization on carrier nanoparticles is a promising research direction towards the improvement of their functional and colloidal stability. In that way, their applications in biomedical treatments and in manufacturing processes in the food, textile and cosmetic industries can be extended. The main goal of the present research was to prepare and formulate antioxidant bionanocomposites composed of superoxide dismutase (SOD) enzyme, anionic clay (layered double hydroxide, LDH) nanoparticles and heparin (HEP) polyelectrolyte. To characterize the structure and the colloidal stability of the obtained compounds in suspension and in the solid state, electrophoresis, dynamic light scattering, transmission electron microscopy, spectrophotometry, thermogravimetry, X-ray diffraction, infrared and fluorescence spectroscopy were used as experimental techniques. The LDH-SOD composite was synthesized by enzyme immobilization on the clay particles via electrostatic and hydrophobic interactions, which resulted in strong adsorption of the SOD on the LDH surface, i.e., no enzyme leakage was observed once the material was suspended in aqueous solutions. However, the LDH-SOD showed only limited resistance against salt-induced aggregation, and large, irregularly shaped clusters formed within a short time interval even at lower ionic strengths. Since sufficiently high colloidal stability is a key requirement in most of the applications mentioned above, the nanocomposite was coated with HEP polyelectrolyte to develop highly stable suspensions of primary LDH-SOD-HEP particles. HEP is a natural anticoagulant with one of the highest negative line charge densities among known macromolecules. The experimental results indicated that it strongly adsorbed on the oppositely charged LDH-SOD surface, leading to charge inversion and to the formation of negatively charged LDH-SOD-HEP. The obtained hybrid materials formed stable suspensions even under extreme conditions, where classical colloid chemistry theories predict rapid aggregation of the particles and unstable suspensions. Such a stabilization effect originated from electrostatic repulsion between particles of the same sign of charge as well as from steric repulsion due to the osmotic pressure arising during the overlap of the polyelectrolyte chains adsorbed on the surface. In addition, the SOD enzyme kept its structural and functional integrity during the immobilization and coating processes, and hence the LDH-SOD-HEP bionanocomposite possessed excellent activity in the decomposition of superoxide radical anions, as revealed in biochemical test reactions. In conclusion, due to the improved colloidal stability and the good efficiency in scavenging superoxide radical ions, the developed enzymatic system is a promising antioxidant candidate for biomedical or other manufacturing processes wherever the aim is to decompose reactive oxygen species in suspensions.
Keywords: clay, enzyme, polyelectrolyte, formulation
Procedia PDF Downloads 268
253 Urban Seismic Risk Reduction in Algeria: Adaptation and Application of the RADIUS Methodology
Authors: Mehdi Boukri, Mohammed Naboussi Farsi, Mounir Naili, Omar Amellal, Mohamed Belazougui, Ahmed Mebarki, Nabila Guessoum, Brahim Mezazigh, Mounir Ait-Belkacem, Nacim Yousfi, Mohamed Bouaoud, Ikram Boukal, Aboubakr Fettar, Asma Souki
Abstract:
The seismic risk to which urban centres are increasingly exposed has become a worldwide concern. Co-operation on an international scale is necessary for an exchange of information and experience on prevention and on the implementation of action plans in countries prone to this phenomenon. For that purpose, the 1990s were designated as the 'International Decade for Natural Disaster Reduction (IDNDR)' by the United Nations, whose aim was to promote the capacity to resist various natural, industrial and environmental disasters. Within this framework, the RADIUS project (Risk Assessment Tools for Diagnosis of Urban Areas Against Seismic Disaster) was launched in 1996; its main objective is to mitigate seismic risk in developing countries through the development of a simple and fast methodological and operational approach that allows the evaluation of vulnerability as well as of socio-economic losses under probable earthquake scenarios in the exposed urban areas. In this paper, we present the adaptation and application of this methodology to the Algerian context for seismic risk evaluation in urban areas potentially exposed to earthquakes. This application consists of performing an earthquake scenario in the urban centre of Constantine city, located in the north-east of Algeria, which allows the estimation of building seismic damage in this city. For that, an inventory of 30,706 building units was carried out by the National Earthquake Engineering Research Centre (CGS). These buildings were digitized in a database comprising their technical information using a Geographical Information System (GIS) and were then classified according to the RADIUS methodology. The study area was subdivided into 228 meshes of 500 m per side and ten (10) sectors, each containing a group of meshes. The results of this earthquake scenario highlight that the ratio of likely damage is about 23%. This severe damage results from the high concentration of old buildings and unfavourable soil conditions. This simulation of the probable seismic damage to the buildings, together with the generated GIS damage maps, provides a predictive evaluation of the damage which could occur in a potential earthquake near Constantine city. These theoretical forecasts are important for decision makers in order to take adequate preventive measures and to develop suitable strategies, prevention and emergency management plans to reduce these losses. They can also help in taking adequate emergency measures in the most impacted areas in the early hours and days after an earthquake occurs.
Keywords: seismic risk, mitigation, RADIUS, urban areas, Algeria, earthquake scenario, Constantine
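As a purely illustrative aside, the mesh-and-sector bookkeeping described above boils down to aggregating per-mesh building counts and expected damage into sector and city-wide ratios. The snippet below sketches that aggregation; the per-mesh figures are hypothetical, not the CGS inventory data.

```python
# Minimal sketch: aggregate a mesh-level building inventory into damage ratios.
mesh_inventory = [
    # (sector, mesh_id, buildings, expected_damaged_buildings) -- hypothetical values
    (1, "M001", 180, 62),
    (1, "M002", 95, 12),
    (2, "M003", 140, 31),
]

total = sum(b for _, _, b, _ in mesh_inventory)
damaged = sum(d for _, _, _, d in mesh_inventory)
print(f"city-wide likely damage ratio: {damaged / total:.1%}")

# per-sector ratios, e.g. as input for thematic GIS damage maps
for s in sorted({sec for sec, *_ in mesh_inventory}):
    blds = sum(b for sec, _, b, _ in mesh_inventory if sec == s)
    dmg = sum(d for sec, _, _, d in mesh_inventory if sec == s)
    print(f"sector {s}: {dmg / blds:.1%}")
```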
Procedia PDF Downloads 262
252 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations
Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso
Abstract:
Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences all begins with loss of containment, starting with an oil and gas release from leakage or spillage from primary containment, resulting in pool fire, jet fire and even explosion when the release meets various ignition sources in the operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action prior to a potential loss of containment. The value of the detection system increases when applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, accurately detecting loss of containment in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting the traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points in the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive and hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results in predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy. While the supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets to justify the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and highlights the opportunities for the use of data analytics tools and mathematical modeling in the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
Keywords: pipeline, leakage, detection, AI
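The simulation-then-classify idea can be sketched very compactly. In the snippet below, a toy random-number generator stands in for the CFD/transient model that would produce pressure, flow and temperature (PVT) features with and without a leak, and a standard supervised classifier is trained on that synthetic data. All feature definitions, leak signatures and noise levels are illustrative assumptions, not the paper's model.

```python
# Minimal sketch: supervised leak classification on simulation-style PVT features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate(n, leak):
    """Stand-in for the mathematical model: a leak adds extra pressure drop and
    an inlet-outlet flow imbalance, both with noise (assumed magnitudes)."""
    dp = rng.normal(2.0 + (0.8 if leak else 0.0), 0.2, n)    # pressure drop [bar]
    dq = rng.normal(0.0 + (0.05 if leak else 0.0), 0.01, n)  # flow imbalance
    temp = rng.normal(35.0, 1.5, n)                          # temperature [degC]
    return np.column_stack([dp, dq, temp])

X = np.vstack([simulate(500, leak=False), simulate(500, leak=True)])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```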
Procedia PDF Downloads 193
251 The Role of Structural Poverty in the Know-How and Moral Economy of Doctors in Africa: An Anthropological Perspective
Authors: Isabelle Gobatto
Abstract:
Based on an anthropological approach, this paper explores the medical profession and the construction of medical practices by considering the multiform articulations between structural poverty and the production of care in a low-resource francophone West African country, Burkina Faso. This country is considered as exemplary of the culturally differentiated countries of the African continent that share the same situation of structural poverty. The objective is to expose the effects of structural poverty on the ways of constructing professional knowledge and of thinking about the meaning of the medical profession. While doctors in Southern and Western countries are trained to have the same capacities, namely to treat and save lives whatever the cultural context in which medicine is practised, the ways of investing their role and of dealing with this context of action fracture the homogenization of the medical profession. In line with the anthropology of biomedicine, this paper outlines the complex effects of structural poverty on health care, care relations, and the moral economy of doctors. The materials analyzed are based on an ethnography covering two periods located thirty years apart (1990-1994 and 2020-2021), drawing on long-term observations of care practices conducted in healthcare institutions and on interviews coupled with the life histories of physicians. The findings reveal that the disabilities doctors face in delivering care are interpreted as policy gaps, but they are also considered by physicians as constitutive of the social and cultural characteristics of patients, shaping patients' capacities and incapacities to accompany caregivers in the production of care. These perceptions have effects on know-how, structured around the need to act even when diagnoses are not made, so as not to see patients desert health structures if the costs of care are too high for them. But these highly individualizing interpretations of the difficulties place part of the blame on patients for the difficulties in using learned knowledge and delivering effective care. These situations challenge the ethics of caregivers but also of ethnologists. Firstly, because the interpretations of disabilities prevent caregivers from considering the vulnerabilities of care as a common condition shared with their patients in these health systems, affecting both in an identical way although in different places in the production of care. Correlatively, these results underline that these professional conceptions prevent the emergence of a figure of the victim, which could be shared between patients and caregivers who, together, undergo working and care conditions at the limit of the acceptable. This dimension directly involves politics. Secondly, structural poverty and its effects on care challenge the ethics of the anthropologist, who observes caregivers producing, without intent to harm, experiences of care marked by an ordinary violence, by not giving patients the care they need. It is worth asking how anthropologists could get doctors to think in this light in West African societies.
Keywords: Africa, care, ethics, poverty
Procedia PDF Downloads 69
250 Cytotoxicity and Genotoxicity of Glyphosate and Its Two Impurities in Human Peripheral Blood Mononuclear Cells
Authors: Marta Kwiatkowska, Paweł Jarosiewicz, Bożena Bukowska
Abstract:
Glyphosate (N-phosphonomethylglycine) is a non-selective, broad-spectrum ingredient in the herbicide Roundup, used for over 35 years for the protection of agricultural and horticultural crops. Glyphosate was believed to be environmentally friendly, but recently a large body of evidence has revealed that glyphosate can negatively affect the environment and humans. It has been found that glyphosate is present in soil and groundwater. It can also enter the human body, which results in its occurrence in blood at low concentrations of 73.6 ± 28.2 ng/ml. Research on potential genotoxicity and cytotoxicity can be an important element in determining the toxic effect of glyphosate. Under Regulation (EC) No 1107/2009 of the European Parliament, it is important to assess genotoxicity and cytotoxicity not only for the parent substance but also for its impurities, which are formed at different stages of production of the major substance, glyphosate; moreover, verifying which of these compounds are more toxic is required. Understanding the molecular pathways of action is extremely important in the context of environmental risk assessment. In 2002, the European Union decided that glyphosate is not genotoxic. Unfortunately, studies recently performed around the world achieved results which contest the decision taken by the committee of the European Union. In March 2015, the World Health Organization (WHO) decided to change the classification of glyphosate to category 2A, which means that the compound is considered 'probably carcinogenic to humans'. This category relates to compounds for which there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals. That is why we have investigated the genotoxic and cytotoxic effects of the most commonly used pesticide, glyphosate, and its impurities, N-(phosphonomethyl)iminodiacetic acid (PMIDA) and bis-(phosphonomethyl)amine, on human peripheral blood mononuclear cells (PBMCs), mostly lymphocytes. DNA damage (analysis of DNA strand breaks) using single cell gel electrophoresis (the comet assay) and ATP level were assessed. Cells were incubated with glyphosate and its impurities, PMIDA and bis-(phosphonomethyl)amine, at concentrations from 0.01 to 10 mM for 24 hours. Evaluating genotoxicity using the comet assay showed a concentration-dependent increase in DNA damage for all compounds studied. ATP level decreased to zero at the highest concentration of the two investigated impurities, bis-(phosphonomethyl)amine and PMIDA. Changes were observed at the highest concentration to which a person can be exposed as a result of acute intoxication. Our study leads to the conclusion that the investigated compounds exhibited genotoxic and cytotoxic potential, but only at high concentrations to which people are not exposed environmentally. Acknowledgments: This work was supported by the Polish National Science Centre (Contract-2013/11/N/NZ7/00371), MSc Marta Kwiatkowska, project manager.
Keywords: cell viability, DNA damage, glyphosate, impurities, peripheral blood mononuclear cells
Procedia PDF Downloads 482
249 Preparedness is Overrated: Community Responses to Floods in a Context of (Perceived) Low Probability
Authors: Kim Anema, Matthias Max, Chris Zevenbergen
Abstract:
For any flood risk manager, the 'safety paradox' is a familiar concept: low probability leads to a sense of safety, which leads to more investments in the area, which leads to higher potential consequences, keeping the aggregated risk (probability × consequences) at the same level. Therefore, it is important to mitigate potential consequences apart from probability. However, when the (perceived) probability is so low that there is no recognizable trend for society to adapt to, addressing the potential consequences will always be the lagging point on the agenda. Preparedness programs fail because of a lack of interest and urgency; policy makers are distracted by their day-to-day business, and there is always a more urgent issue to spend the taxpayer's money on. The leading question in this study was how to address the social consequences of flooding in a context of (perceived) low probability. Disruptions of everyday urban life, large or small, can be caused by a variety of (un)expected things, of which flooding is only one possibility. Variability like this is typically addressed with resilience, and we used the concept of Community Resilience as the framework for this study. Drawing on face-to-face interviews, an extensive questionnaire and publicly available statistical data, we explored the 'whole society response' to two recent urban flood events: the Brisbane Floods (AUS) in 2011 and the Dresden Floods (GE) in 2013. In Brisbane, we studied how the societal impacts of the floods were counteracted by both authorities and the public, and in Dresden we were able to validate our findings. A large part of the reactions, both public and institutional, to these two urban flood events was not fuelled by preparedness or proper planning. Instead, more important success factors in counteracting social impacts like demographic changes in neighborhoods and (non-)economic losses were dynamics like community action, flexibility and creativity from authorities, leadership, informal connections and a shared narrative. These proved to be the determining factors for the quality and speed of recovery in both cities. The resilience of the community in Brisbane was good, due to (i) the approachability of (local) authorities, (ii) a big group of 'secondary victims' and (iii) clear leadership. All three of these elements were amplified by the use of social media and/or web 2.0 by both the communities and the authorities involved. The numerous contacts and social connections made through the web were fast, need-driven and, in their own way, orderly. Similarly, in Dresden, large groups of 'unprepared', ad hoc organized citizens managed to work together with authorities in a way that was effective and sped up recovery. The concept of community resilience is better suited than 'social adaptation' to dealing with the potential consequences of an (im)probable flood. Community resilience is built on capacities and dynamics that are part of everyday life and which can be invested in pre-event to minimize the social impact of urban flooding. Investing in these might even have beneficial trade-offs in other policy fields.
Keywords: community resilience, disaster response, social consequences, preparedness
Procedia PDF Downloads 353
248 Transforming Ganges to be a Living River through Waste Water Management
Authors: P. M. Natarajan, Shambhu Kallolikar, S. Ganesh
Abstract:
By size and volume of water, the Ganges River basin is the biggest among the fourteen major river basins in India. In the Hindu faith, it is the main 'holy river' of the nation. But, of late, the pollution load from both domestic and industrial sources has been deteriorating the surface water and groundwater as well as the land resources, and hence the environment of the Ganges River basin is under threat. Given this scenario, the Indian government began to reclaim this river through two Ganges Action Plans, I and II, from 1986, spending Rs. 2,747.52 crores ($457.92 million). But the result was no improvement in the water quality of the river, the groundwater or the environment even after almost three decades of reclamation, and hence the new Indian government is now taking extra care to rejuvenate this river, having allotted Rs. 2,037 crores ($339.50 million) in 2014 and Rs. 20,000 crores ($3,333.33 million) in 2015. The reason for the poor water quality and stinking environment even after three decades of reclamation is that the sewage receives either no treatment or only partial treatment. Hence, the authors now suggest tertiary-level treatment of sewage from all sources and origins in the Ganges River basin and recycling of all the treated water for non-domestic uses. At a capacity of 20 million litres per day (MLD) for each sewage treatment plant (STP), this basin needs about 2,020 plants to treat the entire sewage load. The cost of the STPs is Rs. 3,43,400 million ($5,723.33 million), and the annual maintenance cost is Rs. 15,352 million ($255.87 million). The advantages of the proposed exercise are as follows: a volume of 1,769.52 million m3 of biogas can be produced. Since biogas is energy, it can be used as a fuel for any heating purpose, such as cooking. It can also be used in a gas engine to convert the energy in the gas into electricity and heat. It is possible to generate about 3,539.04 million kilowatt-hours of electricity per annum from the biogas generated in the process of wastewater treatment in the Ganges basin. The income generated from this electricity works out to Rs. 10,617.12 million ($176.95 million). This power can be used to bridge the supply and demand gap of energy in the power-hungry villages, where 300 million people in India are without electricity even today, and to run these STPs as well. The 664.18 million tonnes of sludge generated by the treatment plants per annum can be used in agriculture as manure with suitable amendments. By arresting the pollution load, the 187.42 cubic kilometres (km3) of groundwater potential of the Ganges River basin could be protected from deterioration. Since the sewage can be recycled for non-domestic purposes, about 14.75 km3 of fresh water per annum can be conserved for future use. The total value of the water saving per annum is Rs. 22,11,916 million ($36,865.27 million), and each citizen of the Ganges River basin can save Rs. 4,423.83 ($73.73) per annum, or Rs. 12.12 ($0.202) per day, by recycling the treated water for non-domestic uses. Further, the environment of this basin could be kept clean by arresting the foul smell as well as the 3% of greenhouse gas emissions from the stinking waterways and land. These are the ways to reclaim the waterways of the Ganges River basin from deterioration.
Keywords: Holy Ganges River, lifeline of India, wastewater treatment and management, making Ganges permanently holy
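A quick arithmetic check of the per-citizen figure quoted above can be sketched as below. The annual value of the recycled water is taken from the abstract; the basin population of roughly 500 million beneficiaries is an assumption inferred from the quoted per-person savings, not a number given in the abstract.

```python
# Back-of-the-envelope check of the per-citizen savings figures.
annual_value_rs_million = 2_211_916      # Rs. million per year (from the abstract)
population = 500_000_000                 # assumed number of basin beneficiaries

per_person_per_year = annual_value_rs_million * 1e6 / population
per_person_per_day = per_person_per_year / 365
print(round(per_person_per_year, 2), round(per_person_per_day, 2))  # ~4423.83 and ~12.12
```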
Procedia PDF Downloads 285
247 The Burmese Exodus of 1942: Towards Evolving Policy Protocols for a Refugee Archive
Authors: Vinod Balakrishnan, Chrisalice Ela Joseph
Abstract:
The Burmese Exodus of 1942, which left more than 4 lakh people as refugees and thousands dead, is one of the worst forced migrations in recorded history. Adding to the woes of the refugees is the lack of credible documentation of their lived experiences, trauma, and stories, and their erasure from recorded history. Media reports, national records, and mainstream narratives that have registered the exodus provide sanitized versions which have reduced the refugees to a nameless, faceless mass of travelers and obliterated their lived experiences, trauma, and sufferings. This attitudinal problem compels the need to stem the insensitivity that accompanies institutional memory by making a case for a more humanistically evolved policy that puts in place protocols for the way the humanities would voice concern for the refugee. A definite step in this direction, and a far more relevant project in our times, is the need to build a comprehensive refugee archive that can be a repository of refugee experiences and perspectives. The paper draws on Hannah Arendt's position on the Jewish refugee crisis, Agamben's work on statelessness and citizenship, Foucault's notions of governmentality and biopolitics, Edward Said's concepts of exile, Fanon's work on the dispossessed, and Derrida's work on 'the foreigner and hospitality' in order to conceptualize the refugee condition, which will form the theoretical framework for the paper. It also refers to the existing scholarship in the field of refugee studies, such as Roger Zetter's work on the 'refugee label', Philip Marfleet's work on 'refugees and history', and Lisa Malkki's research on the anthropological discourse of the refugee and refugee studies. The paper is also informed by the work that has been done by international organizations to address the refugee crisis. The emphasis is on building a strong argument for the establishment of the refugee archive, which finds but a passing and none too convincing reference in refugee studies, in order to enable a multi-dimensional understanding of the refugee crisis. Some of the old questions cannot be dismissed as outdated, as the continuing travails of refugees in different parts of the world only remind us that they are still, largely, unanswered. The questions are: What is the nature of a refugee archive? How is it different from existing historical and political archives? What are the implications of the refugee archive? What is its contribution to refugee studies? The paper draws on Diana Taylor's concept of the archive and the repertoire to theorize the refugee archive as a repository that has the documentary function of the 'archive' and the 'agency' function of the repertoire. It then reads Ayya's Accounts, a memoir by Anand Pandian, in the light of Hannah Arendt's concepts of the 'refugee as vanguard' and 'storytelling as political action' to illustrate how the memoir contributes to a refugee archive that provides the refugee a place and agency in history. The paper argues for a refugee archive that has implications for the formulation of inclusive refugee policies.
Keywords: Ayya's Accounts, Burmese Exodus, policy protocol, refugee archive
Procedia PDF Downloads 141
246 Cost Efficient Receiver Tube Technology for Eco-Friendly Concentrated Solar Thermal Applications
Authors: M. Shiva Prasad, S. R. Atchuta, T. Vijayaraghavan, S. Sakthivel
Abstract:
The world is in need of efficient energy conversion technologies which are affordable, accessible, sustainable and eco-friendly. Solar energy is one of the cornerstones of the world's economic growth because of its abundance and zero carbon pollution. Among the various solar energy conversion technologies, solar thermal technology has attracted substantial renewed interest due to its diversity and compatibility with various applications. Solar thermal systems employ concentrators, tracking systems and heat engines for electricity generation, which leads to high cost and complexity in comparison with photovoltaics; however, the technology is compatible with distinct thermal energy storage capability and dispatchable electricity, which creates a tremendous attraction. Apart from that, employing a cost-effective solar selective receiver tube in a concentrating solar thermal (CST) system improves the energy conversion efficiency and directly reduces the cost of the technology. In addition, the development of solar receiver tubes by low-cost methods which can offer high optical properties and corrosion resistance in an open-air atmosphere would be beneficial for low- and medium-temperature applications. In this regard, our work opens up an approach which has the potential to achieve cost-effective energy conversion. We have developed a highly selective tandem absorber coating through a facile wet chemical route by a combination of chemical oxidation, sol-gel and nanoparticle coating methods. The developed tandem absorber coating has a gradient refractive index nature on stainless steel (SS 304) and exhibited high optical properties (α ≥ 0.95 and ε ≤ 0.14). The first absorber layer (Cr-Mn-Fe oxides) was developed by controlled oxidation of SS 304 in a chemical bath reactor. A second composite layer of ZrO2-SiO2 was applied on the chemically oxidized substrate by the sol-gel dip-coating method to serve as an optical-enhancing and corrosion-resistant layer. Finally, an antireflective layer (MgF2) was deposited on the second layer to achieve > 95% absorption. The developed tandem layer exhibited good thermal stability up to 250 °C in open-air atmospheric conditions and superior corrosion resistance (withstanding > 200 h in the salt spray test (ASTM B117)). After the successful development of a coating with the targeted properties at laboratory scale, a prototype 1 m tube was demonstrated with excellent uniformity and reproducibility. Moreover, it has been validated under standard laboratory test conditions as well as in field conditions with a comparison against a commercial receiver tube. The presented strategy can be widely adapted to develop highly selective coatings for a variety of CST applications ranging from hot water and solar desalination to industrial process heat and power generation. The high-performance, cost-effective, medium-temperature receiver tube technology has attracted many industries, and recently the technology has been transferred to Indian industry.
Keywords: concentrated solar thermal system, solar selective coating, tandem absorber, ultralow refractive index
Procedia PDF Downloads 90
245 Cultural Adaptation of an Appropriate Intervention Tool for Mental Health among the Mohawk in Quebec
Authors: Liliana Gomez Cardona, Mary McComber, Kristyn Brown, Arlene Laliberté, Outi Linnaranta
Abstract:
The history of colonialism and more contemporary political issues have resulted in the exposure of the Kanien'kehá:ka of Kahnawake to challenging and even traumatic experiences. Colonization, religious missions, residential schools, as well as economic and political marginalization, are the factors that have challenged the wellbeing and mental health of these populations. In psychiatry, screening for mental illness is often done using questionnaires in which the patient is expected to report how often he/she has certain symptoms. However, the Indigenous view of mental wellbeing may not fit well with this approach. Moreover, biomedical treatments do not always meet the needs of Indigenous people because they do not take into account the culture and traditional healing methods that persist in many communities. The objectives were to assess whether the questionnaires used to measure symptoms, commonly used in psychiatry, are appropriate and culturally safe for the Mohawk in Quebec, and to identify the most appropriate tool to assess and promote wellbeing and to follow the process necessary to improve its cultural sensitivity and safety for the Mohawk population. This is a qualitative, collaborative, and participatory action research project which respects First Nations protocols and the principles of ownership, control, access, and possession (OCAP). Data collection was based on five focus groups with stakeholders working with these populations and members of Indigenous communities. Thematic analysis of the collected data was carried out and refined through an advisory group that led a revision of the content, use, and cultural and conceptual relevance of the instruments. The questionnaires measuring psychiatric symptoms face significant limitations in the local Indigenous context. We present the factors that make these tools not relevant among Mohawks. Although the scale called the Growth and Empowerment Measure (GEM) was originally developed among Indigenous people in Australia, the Mohawk in Quebec found that this tool captures critical aspects of their mental health and wellbeing more respectfully and accurately than questionnaires focused on measuring symptoms. We document the process of cultural adaptation of this tool, which was supported by community members, to create a culturally safe tool that helps in growth and empowerment. The cultural adaptation of the GEM provides valuable information about the factors affecting wellbeing and contributes to mental health promotion. This process improves mental health services by giving health care providers useful information about the Mohawk population and their clients. We believe that integrating this tool into interventions can help create a bridge to improve communication between the Indigenous cultural perspective of the patient and the biomedical view of health care providers. Further work is needed to confirm the clinical utility of this tool in psychological and psychiatric intervention along with social and community services.
Keywords: cultural adaptation, cultural safety, empowerment, Mohawks, mental health, Quebec
Procedia PDF Downloads 155
244 Recovering Trust in Institutions through Networked Governance: An Analytical Approach via the Study of the Provincial Government of Gipuzkoa
Authors: Xabier Barandiaran, Igone Guerra
Abstract:
The economic and financial crisis that hit European countries in 2008 revealed the inability of governments to respond unilaterally to the so-called "wicked" problems that affect our societies. Closely linked to this, the increasing disaffection of citizens towards politics has resulted in growing distrust of the citizenry not only in institutions in general but also in the political system in particular. Precisely these two factors prompted the local government of Gipuzkoa (Basque Country) to move from old ways of "doing politics" to a new way of "thinking politics" based on a collaborative approach, in which innovative modes of public decision making are prominent. In this context, in 2015, the initiative Etorkizuna Eraikiz (Building the Future), a contemporary form of networked governance, was launched by the Provincial Government. The paper focuses on the Etorkizuna Eraikiz initiative, a sound commitment from a local government to build the future of the territory jointly with the citizens. This paper presents preliminary results obtained from three different experiences of co-creation developed within Etorkizuna Eraikiz, in which the formulation of networked governance is a mandatory prerequisite. These experiences show how the network-building approach among the different agents of the territory, as well as the co-creation of public policies, is the cornerstone of this challenging mission. Through the analysis of the information and documentation gathered during the four years of Etorkizuna Eraikiz, and specifically by delving into the strategy promoted by the initiative, some emerging analytical conclusions resulting from the promotion of this collaborative culture are presented. For example, some preliminary results have shown a significant positive relationship between shared leadership and the formulation of the public good. In the period 2016-2018, a total of 73 projects were launched and funded by the Provincial Government of Gipuzkoa within the Etorkizuna Eraikiz initiative, which indicates greater engagement of the citizenry in the process of policy-making and, therefore, some improvement in the quality of public policies. These statements are supported by the latest survey on citizens' perspectives toward politics and policies. Some of the more prominent results show that there is still a high level of distrust in politics (78.9% of respondents) but greater trust in institutions such as the Provincial Government of Gipuzkoa (40.8% of respondents rated the performance of this provincial institution as "good"). Regarding the Etorkizuna Eraikiz initiative, it has become more readily recognized by citizens over this period (25.4% of respondents in June 2018 reported knowing about the initiative, giving it a mark of 5.89), thus building trust and a sense of ownership. Although there is a clear requirement for further research on the linkages between collaborative governance and levels of trust, the paper, based on these findings, provides some managerial and theoretical implications for collaborative governance in the territory.
Keywords: network governance, collaborative governance, public sector innovation, citizen participation, trust
Procedia PDF Downloads 123
243 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic
Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink
Abstract:
Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: On the one hand, the selected vehicle models should allow the calculation of a bridge's vibrations as realistically as possible. On the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure's vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacities and precise knowledge of various vehicle properties. The European standards allow for applying the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations. An additional damping factor depending on the bridge span, which should take into account the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations. However, numerous studies show that when the current standard specifications are applied, the calculation results for the bridge accelerations are in many cases still too high compared to the measured bridge accelerations, while in other cases, they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with a ballasted track was developed to address this issue. In this contribution, several different approaches to determine the additional damping of the supporting structure considering the vehicle-bridge interaction when using the MLM are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two additional, recently published alternative formulations derived from analytical approaches. For a bridge catalogue of 65 existing bridges in Austria in steel, concrete or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with the calculation results obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow an assessment of the benefits of the different calculation concepts for the additional damping regarding their accuracy and possible applications.
The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results can reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than by using the normative specifications.
Keywords: Additional Damping Method, Bridge Dynamics, High-Speed Railway Traffic, Vehicle-Bridge-Interaction
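The moving load model itself can be sketched in a few lines: a simply supported beam reduced to its first bending modes, excited by a sequence of constant axle loads crossing at constant speed. The bridge and train parameters below are generic placeholders, not those of the 65 catalogued bridges or the trains studied; additional damping would simply enter as an increment to the modal damping ratio zeta.

```python
# Minimal moving load model (MLM) sketch: modal superposition for a simply supported beam.
import numpy as np
from scipy.integrate import solve_ivp

L, EI, mu = 20.0, 8.0e9, 15_000.0     # span [m], bending stiffness [N m^2], mass per length [kg/m]
zeta, n_modes = 0.01, 3               # modal damping ratio, number of modes (assumed)
v = 250 / 3.6                         # train speed [m/s]
axles = np.arange(0, 8) * 26.4        # axle spacing behind the first axle [m] (assumed)
P = 170e3                             # axle load [N] (assumed)

omega = (np.arange(1, n_modes + 1) * np.pi / L) ** 2 * np.sqrt(EI / mu)  # natural circular frequencies

def rhs(t, y):
    q, qd = y[:n_modes], y[n_modes:]
    x = v * t - axles                                   # current axle positions
    on = (x >= 0) & (x <= L)                            # only axles on the bridge load it
    phi = np.sin(np.outer(np.arange(1, n_modes + 1), x[on]) * np.pi / L)
    force = (2 * P / (mu * L)) * phi.sum(axis=1)        # modal loads
    qdd = force - 2 * zeta * omega * qd - omega ** 2 * q
    return np.concatenate([qd, qdd])

t_end = (L + axles[-1]) / v + 1.0                       # crossing time plus free decay
sol = solve_ivp(rhs, (0, t_end), np.zeros(2 * n_modes), max_step=1e-3)

# Midspan acceleration from the modal accelerations.
phi_mid = np.sin(np.arange(1, n_modes + 1) * np.pi / 2)
qdd_hist = np.array([rhs(t, y)[n_modes:] for t, y in zip(sol.t, sol.y.T)])
print("peak midspan acceleration [m/s^2]:", np.abs(qdd_hist @ phi_mid).max())
```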
Procedia PDF Downloads 161
242 Enhancing Students' Utilization of Written Corrective Feedback through Teacher-Student Writing Conferences: A Case Study in English Writing Instruction
Authors: Tsao Jui-Jung
Abstract:
Previous research findings have shown that most students do not fully utilize the written corrective feedback provided by teachers (Stone, 2014). This common phenomenon results in the ineffective utilization of teachers' written corrective feedback. As Ellis (2010) points out, the effectiveness of written corrective feedback depends on the level of student engagement with it. Therefore, it is crucial to understand how students utilize the written corrective feedback from their teachers. Previous studies have confirmed the positive impact of teacher-student writing conferences on students' engagement in the writing process and their writing abilities (Hum, 2021; Nosratinia & Nikpanjeh, 2019; Wong, 1996; Yeh, 2016, 2019). However, due to practical constraints such as time limitations, this instructional activity is not fully utilized in writing classrooms (Alfalagg, 2020). To address this research gap, the purpose of this study was to explore several aspects of teacher-student writing conferences: the frequency of meaning negotiation (i.e., comprehension checks, confirmation checks, and clarification checks) and of teacher scaffolding techniques (i.e., feedback, prompts, guidance, explanations, and demonstrations) in the conferences; students' self-assessment of their writing strengths and weaknesses in post-conference journals and their experiences with teacher-student writing conferences (i.e., interaction styles, communication levels, how teachers addressed errors, and overall perspectives on the conferences); and insights gathered from their responses to open-ended questions in the final stage of the study (i.e., their preferences and reasons for different written corrective feedback techniques used by teachers and their perspectives and suggestions on teacher-student writing conferences). Data collection methods included transcripts of audio recordings of teacher-student writing conferences, students' post-conference journals, and open-ended questionnaires. The participants of this study were sophomore students enrolled in an English writing course for a duration of one school year. Key research findings are as follows: Firstly, in terms of meaning negotiation, students attempted to clearly understand the corrective feedback provided by the teacher-researcher twice as often as the teacher-researcher attempted to clearly understand the students' writing content. Secondly, the most commonly used scaffolding technique in the conferences was prompting (indirect feedback). Thirdly, the majority of participants believed that teacher-student writing conferences had a positive impact on their writing abilities. Fourthly, most students preferred direct feedback from the teacher-researcher, as it directly pointed out their errors and saved them time in revision. However, some students still preferred indirect feedback, as they believed it encouraged them to think and self-correct. Based on the research findings, this study proposes effective teaching recommendations for English writing instruction aimed at optimizing teaching strategies and enhancing students' writing abilities.
Keywords: written corrective feedback, student engagement, teacher-student writing conferences, action research
Procedia PDF Downloads 79
241 Increasing System Adequacy Using Integration of Pumped Storage: Renewable Energy to Reduce Thermal Power Generations Towards RE100 Target, Thailand
Authors: Mathuravech Thanaphon, Thephasit Nat
Abstract:
The Electricity Generating Authority of Thailand (EGAT) is focusing on expanding its pumped storage hydropower (PSH) capacity to increase the reliability of the system during peak demand and allow for greater integration of renewables. To achieve this requirement, Thailand will have to double its current renewable electricity production. To address the challenges of balancing supply and demand in the grid with increasing levels of RE penetration, as well as rising peak demand, EGAT has already been studying the potential for additional PSH capacity for several years to enable an increased share of RE and replace existing fossil fuel-fired generation. It has also examined the role that pumped-storage hydropower would play in fulfilling multiple grid functions and integrating renewables. The proposed sites for new PSH would help increase the reliability of power generation in Thailand. However, most of the electricity generation will come from RE, chiefly wind and photovoltaic, and significant additional energy storage capacity will be needed. In this paper, the impact of integrating the PSH system on the adequacy of renewable-rich power generating systems to reduce the thermal power generating units is investigated. The variations of system adequacy indices are analyzed for different PSH-renewables capacities and storage levels. The central case is the Power Development Plan 2018 rev.1 (PDP2018 rev.1), modified by integrating six new PSH systems and the RE planning and development expected after 2030. The system adequacy indices of the power generation system are obtained using multi-objective genetic algorithm (MOGA) optimization. MOGA is a probabilistic, heuristic and stochastic algorithm able to find global minima, with the advantage that the fitness function does not require gradient information. In this sense, the method is more flexible in solving reliability optimization problems for a composite power system. The optimization, with an hourly time step, covers a planning horizon of years, much larger than the weekly horizon usually adopted in scheduling studies. The objective function is optimized in MATLAB to maximize RE generation, minimize energy imbalances, and minimize thermal power generation. The PDP2018 rev.1 was simulated based on its planned capacity evolution towards 2030 and 2050. Therefore, four main scenario analyses are conducted according to the targeted share of renewables: 1) Business-As-Usual (BAU), 2) National Targets (30% RE in 2030), 3) Carbon Neutrality Targets (50% RE in 2050), and 4) 100% RE or full decarbonization. According to the results, the generating system adequacy is significantly affected by both PSH-RE and thermal units. When a PSH is integrated, it can provide hourly capacity to the power system as well as better allocate renewable energy generation to reduce thermal generation and improve system reliability. These results show that a significant level of reliability improvement can be obtained by PSH, especially in renewable-rich power systems.
Keywords: pumped storage hydropower, renewable energy integration, system adequacy, power development planning, RE100, multi-objective genetic algorithm
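The study itself runs a multi-objective genetic algorithm over the PDP2018 rev.1 plan in MATLAB. As a hedged illustration of what a single fitness evaluation inside such an optimization might look like, the sketch below simulates one week of hourly dispatch for a hypothetical system with PSH and computes simple adequacy-related quantities (thermal energy, energy not served, loss-of-load hours, spilled RE). All capacities, demand and RE profiles are invented for illustration and do not represent EGAT data.

```python
import numpy as np

rng = np.random.default_rng(0)

# One illustrative week at hourly resolution; all figures are assumed, not EGAT data
H = 24 * 7
demand = 28_000 + 6_000 * np.sin(2 * np.pi * (np.arange(H) % 24) / 24 - 2)   # MW
re_gen = np.clip(rng.normal(14_000, 6_000, H), 0, None)                      # wind + PV, MW
thermal_cap, psh_power, psh_energy, pump_eff = 20_000.0, 3_000.0, 24_000.0, 0.78

soc = psh_energy / 2          # PSH reservoir state of charge [MWh]
thermal = np.zeros(H)
ens = spilled = 0.0           # energy not served / spilled RE [MWh]
lolh = 0                      # loss-of-load hours

for h, (d, re) in enumerate(zip(demand, re_gen)):
    net = d - re
    if net < 0:                                    # RE surplus: pump, spill the rest
        pump = min(-net, psh_power, (psh_energy - soc) / pump_eff)
        soc += pump * pump_eff
        spilled += -net - pump
    else:                                          # deficit: turbine first, thermal next
        gen = min(net, psh_power, soc)
        soc -= gen
        thermal[h] = min(net - gen, thermal_cap)
        shortfall = net - gen - thermal[h]
        ens += shortfall
        lolh += shortfall > 1e-6

print(f"thermal energy : {thermal.sum():,.0f} MWh")
print(f"RE utilised    : {re_gen.sum() - spilled:,.0f} MWh (spilled {spilled:,.0f} MWh)")
print(f"adequacy       : ENS = {ens:,.0f} MWh, LOLH = {lolh} h")
```

In a MOGA setting, a candidate solution (e.g., PSH sizes and the RE/thermal mix) would be scored by objective values such as these, and the algorithm would evolve the population toward the non-dominated front.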
Procedia PDF Downloads 58
240 Nondestructive Monitoring of Atomic Reactions to Detect Precursors of Structural Failure
Authors: Volodymyr Rombakh
Abstract:
This article was written to substantiate the possibility of detecting the precursors of catastrophic destruction of a structure or device and stopping its operation before failure occurs. Damage to solids results from breaking the bonds between atoms, which requires energy. Modern theories of strength and fracture assume that such energy is due to stress. However, in a letter to W. Thomson (Lord Kelvin) dated December 18, 1856, J.C. Maxwell provided evidence that elastic energy cannot destroy solids. He proposed an equation for estimating a deformable body's energy, equal to the sum of two energies. Due to symmetrical compression, the first term does not change, but the second term is distortion without compression. Both types of energy are represented in the equation as a quadratic function of strain, but Maxwell repeatedly wrote that it is not stress but strain. Furthermore, he notes that the nature of the energy causing the distortion is unknown to him. An article devoted to theories of elasticity was published in 1850. Maxwell tried to express mechanical properties with the help of optics, which became possible only after the creation of quantum mechanics. However, Maxwell's work on elasticity is not cited in the theories of strength and fracture. The authors of these theories and their associates are still trying to describe the phenomena they observe based on classical mechanics. The study of Faraday's experiments and of Maxwell's and Rutherford's ideas made it possible to discover a previously unknown area of electromagnetic radiation. The properties of photons emitted in this reaction are fundamentally different from those of photons emitted in nuclear reactions and are caused by the transition of electrons in an atom. Photons are released during all processes in the universe, including by plants and organs under natural conditions; their penetrating power in metal is millions of times greater than that of gamma rays. However, they are not invasive. This apparent contradiction arises because the chaotic motion of protons is accompanied by the chaotic radiation of photons in time and space. Such photons are not coherent. The energy of a solitary photon is insufficient to break the bond between atoms, one of the stages of which is ionization. The photographs registered the rail deformation caused by 113 cars, while the Geiger counter did not. The author's studies show that the cause of damage to a solid is the breakage of bonds between a finite number of atoms due to the stimulated emission of metastable atoms. The guarantee of the reliability of the structure is the ratio of the energy dissipation rate to the energy accumulation rate, but not the strength, which is not a physical parameter since it cannot be measured or calculated. The possibility of continuous control of this ratio is due to the spontaneous emission of photons by metastable atoms. The article presents calculation examples of the destruction energy and photographs attributed to the action of photons emitted during the atomic-proton reaction.
Keywords: atomic-proton reaction, precursors of man-made disasters, strain, stress
Procedia PDF Downloads 92
239 European Commission Radioactivity Environmental Monitoring Database REMdb: A Law (Art. 36 Euratom Treaty) Transformed in Environmental Science Opportunities
Authors: M. Marín-Ferrer, M. A. Hernández, T. Tollefsen, S. Vanzo, E. Nweke, P. V. Tognoli, M. De Cort
Abstract:
Under the terms of Article 36 of the Euratom Treaty, European Union Member States (MSs) shall periodically communicate to the European Commission (EC) information on environmental radioactivity levels. Compilations of the information received have been published by the EC as a series of reports beginning in the early 1960s. The environmental radioactivity results received from the MSs have been introduced into the Radioactivity Environmental Monitoring database (REMdb) of the Institute for Transuranium Elements of the EC Joint Research Centre (JRC), sited in Ispra (Italy), as part of its Directorate General for Energy (DG ENER) support programme. The REMdb offers the scientific community dealing with environmental radioactivity topics endless research opportunities to exploit the nearly 200 million records received from MSs containing information on radioactivity levels in milk, water, air and mixed diet. The REM action was created shortly after the Chernobyl crisis to support the EC in its responsibility to provide qualified information to the European Parliament and the MSs on the levels of radioactive contamination of the various compartments of the environment (air, water, soil). Hence, the main line of REM's activities concerns the improvement of procedures for the collection of environmental radioactivity concentrations for routine and emergency conditions, as well as making this information available to the general public. In this way, REM ensures the availability of tools for inter-communication and for access by users from the Member States and the other European countries to this information. Specific attention is given to further integrating the new MSs with the existing information exchange systems and to assisting Candidate Countries in fulfilling these obligations in view of their membership of the EU. Article 36 of the Euratom Treaty requires the competent authorities of each MS to regularly provide the environmental radioactivity monitoring data resulting from their Article 35 obligations to the EC, in order to keep the EC informed of the levels of radioactivity in the environment (air, water, milk and mixed diet) which could affect the population. The REMdb has mainly two objectives: to keep a historical record of radiological accidents for further scientific study, and to collect the environmental radioactivity data gathered through the national environmental monitoring programs of the MSs to prepare the comprehensive annual monitoring reports (MR). The JRC continues its activity of collecting, assembling, analyzing and providing this information to the public and the MSs even during emergency situations. In addition, there is growing concern among the general public about the radioactivity levels in the terrestrial and marine environment, as well as about the potential risk of future nuclear accidents. In this context, clear and transparent communication with the public is needed. EURDEP (European Radiological Data Exchange Platform) is both a standard format for radiological data and a network for the exchange of automatic monitoring data. The latest release of the format is version 2.0, which has been in use since the beginning of 2002.
Keywords: environmental radioactivity, Euratom, monitoring report, REMdb
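The abstract describes REMdb as a large collection of environmental radioactivity records (air, water, milk, mixed diet) used to build annual monitoring reports. The sketch below is a purely hypothetical record structure and aggregation step, written only to illustrate that kind of workflow; it is not the actual REMdb or EURDEP schema, and all field names and values are assumptions.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class RadioactivitySample:
    country: str       # reporting Member State (ISO code)
    medium: str        # "air", "water", "milk", "mixed diet"
    nuclide: str       # e.g. "Cs-137"
    value_bq: float    # activity concentration
    unit: str          # e.g. "Bq/m3", "Bq/l"
    year: int

# Hypothetical records standing in for database entries
samples = [
    RadioactivitySample("AT", "milk", "Cs-137", 0.08, "Bq/l", 2021),
    RadioactivitySample("AT", "milk", "Cs-137", 0.11, "Bq/l", 2021),
    RadioactivitySample("FI", "air", "Cs-137", 1.2e-6, "Bq/m3", 2021),
]

def annual_summary(data, medium, nuclide, year):
    """Mean activity concentration per country, of the kind tabulated in monitoring reports."""
    by_country = {}
    for s in data:
        if (s.medium, s.nuclide, s.year) == (medium, nuclide, year):
            by_country.setdefault(s.country, []).append(s.value_bq)
    return {c: mean(v) for c, v in by_country.items()}

print(annual_summary(samples, "milk", "Cs-137", 2021))
```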
Procedia PDF Downloads 444
238 Therapeutic Potential of GSTM2-2 C-Terminal Domain and Its Mutants, F157A and Y160A on the Treatment of Cardiac Arrhythmias: Effect on Ca2+ Transients in Neonatal Ventricular Cardiomyocytes
Authors: R. P. Hewawasam, A. F. Dulhunty
Abstract:
The ryanodine receptor (RyR) is an intracellular ion channel that releases Ca2+ from the sarcoplasmic reticulum and is essential for excitation-contraction coupling and contraction in striated muscle. Human muscle-specific glutathione transferase M2-2 (GSTM2-2) is a highly specific inhibitor of cardiac ryanodine receptor (RyR2) activity. Single-channel lipid bilayer studies and Ca2+ release assays performed using the C-terminal half of GSTM2-2 and its mutants F157A and Y160A confirmed the ability of the C-terminal domain of GSTM2-2 to specifically inhibit cardiac ryanodine receptor activity. The objective of the present study is to determine the effect of the C-terminal domain of GSTM2-2 (GSTM2-2C) and the mutants F157A and Y160A on the Ca2+ transients of neonatal ventricular cardiomyocytes. Primary cardiomyocytes were cultured from neonatal rats. They were treated with GSTM2-2C and the two mutants F157A and Y160A at 15 µM and incubated for 2 hours. The cells were then loaded with Fluo-4 AM, a fluorescent Ca2+ indicator, and the field-stimulated (1 Hz, 3 V and 2 ms) cells were excited using the 488 nm argon laser. Contractility of the cells was measured, and the Ca2+ transients in the stained cells were imaged using a Leica SP5 confocal microscope. Peak amplitude of the Ca2+ transient, rise time and decay time from the peak were measured for each transient. In contrast to GSTM2-2C, which significantly reduced the % shortening (42.8%) in the field-stimulated cells, F157A and Y160A failed to reduce the % shortening. Analysis revealed that the average amplitude of the Ca2+ transient was significantly reduced (P<0.001) in cells treated with the wild-type GSTM2-2C compared to that of untreated cells. Cells treated with the mutants F157A and Y160A showed no significant change in the Ca2+ transient compared to the control. A significant increase in the rise time (P<0.001) and a significant reduction in the decay time (P<0.001) were observed in cardiomyocytes treated with GSTM2-2C compared to the control, but not with F157A and Y160A. These results are consistent with the observation that GSTM2-2C reduced Ca2+ release from the cardiac SR significantly, whereas the mutants F157A and Y160A did not show any effect compared to the control. GSTM2-2C has an isoform-specific effect on cardiac ryanodine receptor activity, and it inhibits RyR2 channel activity only during diastole. Selective inhibition of RyR2 by GSTM2-2C has significant clinical potential in the treatment of cardiac arrhythmias and heart failure. Since the GSTM2-2 C-terminal construct has no GST enzyme activity, its introduction into the cardiomyocyte would not exert any unwanted side effects arising from enzymatic action. The present study further confirms that GSTM2-2C is capable of decreasing Ca2+ release from the cardiac SR during diastole. These results raise the future possibility of using GSTM2-2C as a template for therapeutics that can depress RyR2 function when the channel is hyperactive in cardiac arrhythmias and heart failure.
Keywords: arrhythmia, cardiac muscle, cardiac ryanodine receptor, GSTM2-2
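The transient parameters reported here (peak amplitude, rise time, decay time) are standard quantities extracted from a fluorescence trace. The sketch below shows one common way to compute them from a synthetic ΔF/F0 transient (10-90% rise time, single-exponential decay fit); the trace shape, sampling, and fitting choices are assumptions for illustration, not the study's analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic fluorescence trace standing in for a Fluo-4 Ca2+ transient (shape assumed)
t = np.linspace(0.0, 1.0, 1000)                    # s
F0, dF, t0, tau_r, tau_d = 1.0, 2.0, 0.1, 0.01, 0.15
rt = np.clip(t - t0, 0.0, None)
F = F0 + dF * (1 - np.exp(-rt / tau_r)) * np.exp(-rt / tau_d)

dFF0 = (F - F0) / F0                               # delta F / F0
peak = np.argmax(dFF0)
amplitude = dFF0[peak]

# Rise time: 10 % -> 90 % of peak amplitude on the upstroke
up = dFF0[:peak + 1]
rise_time = t[np.searchsorted(up, 0.9 * amplitude)] - t[np.searchsorted(up, 0.1 * amplitude)]

# Decay time constant: single-exponential fit to the falling phase
decay_t, decay_y = t[peak:], dFF0[peak:]
popt, _ = curve_fit(lambda x, a, k: a * np.exp(-k * (x - decay_t[0])),
                    decay_t, decay_y, p0=(amplitude, 5.0))

print(f"amplitude {amplitude:.2f} dF/F0, rise {rise_time*1e3:.1f} ms, decay tau {1e3/popt[1]:.1f} ms")
```

A reduced amplitude with a longer rise and shorter decay, as reported for GSTM2-2C-treated cells, would show up directly in these three numbers.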
Procedia PDF Downloads 284
237 Nano-Enabling Technical Carbon Fabrics to Achieve Improved Through Thickness Electrical Conductivity in Carbon Fiber Reinforced Composites
Authors: Angelos Evangelou, Katerina Loizou, Loukas Koutsokeras, Orestes Marangos, Giorgos Constantinides, Stylianos Yiatros, Katerina Sofocleous, Vasileios Drakonakis
Abstract:
Owing to their outstanding strength-to-weight properties, carbon fiber reinforced polymer (CFRP) composites have attracted significant attention, finding use in various fields (sports, automotive, transportation, etc.). The current momentum indicates that there is an increasing demand for their employment in high-value bespoke applications such as avionics and electronic casings, damage-sensing structures, and EMI (electromagnetic interference) structures, which dictate the use of materials with increased electrical conductivity both in-plane and through the thickness. Several efforts by research groups have focused on enhancing the through-thickness electrical conductivity of FRPs, in an attempt to combine the intrinsically high relative strengths exhibited with improved z-axis electrical response as well. However, only a limited number of studies deal with printing of nano-enhanced polymer inks to produce a pattern at the dry fabric level that could be used to fabricate CFRPs with improved through-thickness electrical conductivity. The present study investigates the employment of a screen-printing process on technical dry fabrics using nano-reinforced polymer-based inks to achieve the required through-thickness conductivity, opening new pathways for the application of fiber reinforced composites in niche products. Commercially available inks and in-house prepared inks reinforced with electrically conductive nanoparticles are employed, printed in different patterns. The aim of the present study is to investigate both the effect of the nanoparticle concentration and the droplet patterns (diameter, inter-droplet distance and coverage) in order to optimize printing for the desired level of conductivity enhancement at the lamina level. The electrical conductivity is measured initially at the ink level, using a four-probe configuration, to pinpoint the optimum concentrations to be employed. Upon printing of the different patterns, the coverage of the dry fabric area is assessed along with the permeability of the resulting dry fabrics, in alignment with the fabrication of CFRPs, which requires adequate wetting by the epoxy matrix. Results demonstrated increased electrical conductivities of the printed droplets, with the conductivity increasing from the benchmark value of 0.1 S/m to between 8 and 10 S/m. Printability of dense and dispersed patterns has exhibited promising results in terms of increasing the z-axis conductivity without inhibiting the penetration of the epoxy matrix at the processing stage of fiber reinforced composites. The high value and niche prospect of the resulting applications that can stem from CFRPs with increased through-thickness electrical conductivities highlight the potential of the presented endeavor, signifying screen printing as the process to nano-enable z-axis electrical conductivity in composite laminas. This work was co-funded by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation (Project: ENTERPRISES/0618/0013).
Keywords: CFRPs, conductivity, nano-reinforcement, screen-printing
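For the ink-level measurements, a four-probe configuration yields conductivity through the standard thin-film relations: sheet resistance R_s = (π/ln 2)·V/I and σ = 1/(R_s·t). The numbers below are invented, merely chosen to land near the reported 8-10 S/m range; the thin-film correction factor assumes the printed film is much thinner than the probe spacing and laterally large.

```python
import math

# Hypothetical four-point-probe reading on a printed nano-ink film
# (values are illustrative, not measurements from the study)
current_A = 1.0e-3      # current sourced through the outer probes
voltage_V = 0.98        # voltage measured across the inner probes
thickness_m = 25e-6     # assumed printed film thickness

sheet_resistance = (math.pi / math.log(2)) * voltage_V / current_A   # ohm/sq, thin-film approximation
conductivity = 1.0 / (sheet_resistance * thickness_m)                # S/m

print(f"R_s = {sheet_resistance:.0f} ohm/sq, sigma = {conductivity:.1f} S/m")
```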
Procedia PDF Downloads 153
236 Wrestling with Religion: A Theodramatic Exploration of Morality in Popular Culture
Authors: Nicholas Fieseler
Abstract:
The nature of religion implicit in popular culture is relevant both in and out of the university. The traditional rules-based conception of religion and the ethical systems that emerge from it do not necessarily convey the behavior of daily life as it exists apart from spaces deemed sacred. This paper proposes to examine the religion implicit in the popular culture phenomenon of professional wrestling and how that affects the understanding of popular religion. Pro wrestling, while frequently dismissed, offers a unique manner through which to re-examine religion in popular culture. A global phenomenon, pro wrestling occupies a distinct space in numerous countries and presents a legitimate reflection of human behavior cross-culturally on a scale few other phenomena can equal. Given its global viewership of millions, it should be recognized as a significant means of interpreting the human attraction to violence and its association with religion in general. Hans Urs von Balthasar's theory of theodrama will be used to interrogate the inchoate religion within pro wrestling. While Balthasar developed theodrama within the confines of Christian theology, theodrama contains remarkable versatility in its potential utility. Since theodrama re-envisions reality as drama, the actions of every human actor on the stage contribute to the play's development, and all action contains some transcendent value. It is in this sense that even the "low brow" activity of pro wrestling may be understood in religious terms. Moreover, a pro wrestling storyline acts as a play within a play: the struggles in a pro wrestling match reflect human attitudes toward life as it exists in the sacred and profane realms. The indistinct lines separating traditionally good (face) from traditionally bad (heel) wrestlers mirror the moral ambiguity with which many people interpret life. This blurred distinction between good and bad, and large segments of an audience's embrace of heel wrestlers, reveal ethical constraints that guide the everyday values of pro wrestling spectators, a moral ambivalence that is often overlooked by traditional religious systems and which has hitherto been neglected in the academic literature on pro wrestling. The significance of interpreting the religion implicit in pro wrestling through a theodramatic lens extends beyond pro wrestling specifically and can illuminate the religion implicit in popular culture in general. The use of theodrama mitigates the rigid separation often ascribed to areas deemed sacred/profane, or transcendent/immanent, enabling a re-evaluation of religion and ethical systems as practiced in popular culture. The use of theodrama will be expressed by utilizing the pro wrestling match as a literary text that reflects the society from which it emerges. This analysis will also reveal the complex nature of religion in popular culture and provide new directions for the academic study of religion. This project consciously bridges the academic and popular realms. The goal of the research is not only to add to the academic literature on implicit religion in popular culture but to publish it in a form which speaks to those outside the standard academic audiences for such work.
Keywords: ethics, popular religion, professional wrestling, theodrama
Procedia PDF Downloads 142
235 Clinical Staff Perceptions of the Quality of End-of-Life Care in an Acute Private Hospital: A Mixed Methods Design
Authors: Rosemary Saunders, Courtney Glass, Karla Seaman, Karen Gullick, Julie Andrew, Anne Wilkinson, Ashwini Davray
Abstract:
Current literature demonstrates that most Australians receive end-of-life care in a hospital setting, despite most hoping to die within their own home. The necessity for high-quality end-of-life care has been emphasised by the Australian Commission on Safety and Quality in Health Care, and the National Safety and Quality Health Service Standards set out the requirement for comprehensive care at the end of life (Action 5.20), reinforcing the obligation for continual organisational assessment to determine if these standards are suitably achieved. Limited research exploring clinical staff perspectives of end-of-life care delivery has been conducted within an Australian private health context. This study aimed to investigate clinical staff members' perceptions of end-of-life care delivery at a private hospital in Western Australia. The study comprised a multi-faceted mixed-methods methodology and was part of a larger study. Data were obtained from clinical staff utilising surveys and focus groups. A total of 133 questionnaires were completed by clinical staff, including registered nurses (61.4%), enrolled nurses (22.7%), allied health professionals (9.9%), non-palliative care consultants (3.8%) and junior doctors (2.2%). A total of 14.7% of respondents were palliative care ward staff members. Additionally, seven staff focus groups were conducted with physicians (n=3), nurses (n=26) and allied health professionals, including social workers (n=1), dietitians (n=2), physiotherapists (n=5) and speech pathologists (n=3). Key findings from the surveys highlighted that the majority of staff agreed it was part of their role to talk to doctors about the care of patients who they thought may be dying, and recognised the importance of communication, appropriate training and support for clinical staff to provide quality end-of-life care. Thematic analysis of the qualitative data generated three key themes: creating the setting, which highlighted the importance of adequate resourcing and conducive physical environments for end-of-life care and for supporting staff and families; planning and care delivery, which emphasised the necessity for collaboration between staff, families and patients to develop care plans and treatment directives; and collaborating in end-of-life care, with effective communication and teamwork leading to achievable care delivery expectations. These findings contribute to health professionals' better understanding of end-of-life care provision and the importance of collaborating with patients and families in care delivery. It is crucial that health care providers implement strategies to overcome gaps in care so that quality end-of-life care is provided. Findings from this study have been translated into practice, with the development and implementation of resources, training opportunities, support networks and guidelines for the delivery of quality end-of-life care.
Keywords: clinical staff, end-of-life care, mixed-methods, private hospital
Procedia PDF Downloads 155
234 A Semi-supervised Classification Approach for Trend Following Investment Strategy
Authors: Rodrigo Arnaldo Scarpel
Abstract:
Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. Thus, in trend following one must respond to market movements that have recently happened and that are currently happening, rather than to what will happen. The optimum, in a trend following strategy, is to catch a bull market at its early stage, ride the trend, and liquidate the position at the first evidence of the subsequent bear market. To apply the trend following strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to identify short-, mid- and long-term fluctuations and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules following the trend following philosophy. Recently, some works have applied machine learning techniques for trade rule discovery. In those works, the process of rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of focusing on the usage of machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach was used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated. Semi-supervised learning is used for model training when only part of the data is labeled, and semi-supervised classification aims to train a classifier from both the labeled and unlabeled data, such that it is better than the supervised classifier trained only on the labeled data. For illustrating the proposed approach, daily trade information was employed, including the open, high, low and closing values and volume from January 1, 2000 to December 31, 2022, of the São Paulo Exchange Composite index (IBOVESPA). Through this time period, consistent changes in price, upwards or downwards, were visually identified for assigning labels, leaving the rest of the days (when there is not a consistent change in price) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators. In this learning strategy, the core idea is to use unlabeled data to generate pseudo-labels for supervised training. For evaluating the achieved results, the annualized return and excess return, together with the Sortino and Sharpe ratios, were considered. Through the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation
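As a hedged sketch of the pseudo-label strategy described above, the code below hand-labels only days with "consistent" price moves, trains a classifier on simple technical-indicator features, and iteratively adopts high-confidence predictions on the unlabeled days as pseudo-labels. The indicators, thresholds, synthetic price series and choice of classifier are all assumptions for illustration; the study uses IBOVESPA data and its own labeling and indicator choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic daily closes standing in for a price index (illustrative only)
close = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2000)))

def features(c, fast=10, slow=50, mom=20):
    """Toy technical-indicator features: distance to moving averages and momentum."""
    return np.column_stack([
        c / np.convolve(c, np.ones(fast) / fast, "same") - 1,
        c / np.convolve(c, np.ones(slow) / slow, "same") - 1,
        np.concatenate([np.zeros(mom), c[mom:] / c[:-mom] - 1]),
    ])

X = features(close)
y = np.full(len(close), -1)                       # -1 marks unlabeled days
trend = close / np.convolve(close, np.ones(50) / 50, "same") - 1
y[trend > 0.02] = 1                               # "consistent" up moves (assumed rule)
y[trend < -0.02] = 0                              # "consistent" down moves

clf = RandomForestClassifier(n_estimators=200, random_state=0)
labeled = y != -1
for _ in range(5):                                # pseudo-label iterations
    clf.fit(X[labeled], y[labeled])
    if not np.any(~labeled):
        break
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.9           # adopt only confident pseudo-labels
    idx = np.flatnonzero(~labeled)[confident]
    if idx.size == 0:
        break
    y[idx] = clf.predict(X[idx])
    labeled[idx] = True

signal = clf.predict(X)                           # 1 -> buy/long, 0 -> sell/flat
print("fraction of days classified as up-trend:", signal.mean())
```

From the resulting signal series, the annualized and excess returns and the Sharpe and Sortino ratios of the implied strategy can then be computed for evaluation.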
Procedia PDF Downloads 90
233 On-Farm Biopurification Systems: Fungal Bioaugmentation of Biomixtures For Carbofuran Removal
Authors: Carlos E. Rodríguez-Rodríguez, Karla Ruiz-Hidalgo, Kattia Madrigal-Zúñiga, Juan Salvador Chin-Pampillo, Mario Masís-Mora, Elizabeth Carazo-Rojas
Abstract:
One of the main causes of contamination linked to agricultural activities is the spillage and disposal of pesticides, especially during the loading, mixing or cleaning of agricultural spraying equipment. One improvement in the handling of pesticides is the use of biopurification systems (BPS), simple and cheap degradation devices in which the pesticides are biologically degraded at accelerated rates. The biologically active core of BPS is the biomixture, which is constituted by soil pre-exposed to the target pesticide, a lignocellulosic substrate to promote the activity of ligninolytic fungi and a humic component (peat or compost), mixed at a volumetric proportion of 50:25:25. Considering the known ability of ligninolytic fungi to degrade a wide range of organic pollutants, and the high amount of lignocellulosic waste used in biomixture preparation, the bioaugmentation of biomixtures with these fungi represents an interesting approach for improving biomixtures. The present work aimed at evaluating the effect of the bioaugmentation of rice husk based biomixtures with the fungus Trametes versicolor on the removal of the insecticide/nematicide carbofuran (CFN), and at optimizing the composition of the biomixture to obtain the best performance in terms of CFN removal and mineralization, reduction in the formation of transformation products and decrease in the residual toxicity of the matrix. The evaluation of several lignocellulosic residues (rice husk, wood chips, coconut fiber, sugarcane bagasse or newspaper print) revealed the best colonization by T. versicolor in rice husk. Pre-colonized rice husk was then used in the bioaugmentation of biomixtures also containing soil pre-exposed to CFN and either peat (GTS biomixture) or compost (GCS biomixture). After spiking with 10 mg/kg CFN, the efficiency of the biomixture was evaluated through a multi-component approach that included: monitoring of CFN removal and production of CFN transformation products, mineralization of radioisotopically labeled carbofuran (14C-CFN) and changes in the toxicity of the matrix after the treatment (Daphnia magna acute immobilization test). Estimated half-lives of CFN in the biomixtures were 3.4 d and 8.1 d in GTS and GCS, respectively. The transformation products 3-hydroxycarbofuran and 3-ketocarbofuran were detected at the moment of CFN application; however, their concentrations continuously decreased. Mineralization of 14C-CFN was also faster in GTS than in GCS. The toxicological evaluation showed complete toxicity removal in the biomixtures after 48 d of treatment. The composition of the GCS biomixture was optimized using a central composite design and response surface methodology. The design variables were the volumetric content of fungally pre-colonized rice husk and the volumetric compost/soil ratio. According to the response models, maximization of the CFN removal and mineralization rates and minimization of the accumulation of transformation products were obtained with an optimized biomixture of composition 30:43:27 (pre-colonized rice husk:compost:soil), which differs from the 50:25:25 composition commonly employed in BPS. Results suggest that fungal bioaugmentation may enhance the performance of biomixtures in CFN removal. Optimization reveals the importance of assessing new biomixture formulations in order to maximize their performance.
Keywords: bioaugmentation, biopurification systems, degradation, fungi, pesticides, toxicity
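The half-lives quoted (3.4 d and 8.1 d) correspond to first-order dissipation kinetics, C(t) = C0·e^(-kt) with t1/2 = ln 2 / k. The sketch below fits such a model to an invented residue series (roughly consistent with a 3.4 d half-life) to show how the estimate is obtained; the data points are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative carbofuran residues (mg/kg) over time in a biomixture; not the study's raw data
t_days = np.array([0, 2, 4, 8, 16, 32, 48])
conc = np.array([10.0, 6.7, 4.4, 2.0, 0.38, 0.015, 0.001])

first_order = lambda t, c0, k: c0 * np.exp(-k * t)   # C(t) = C0 * exp(-k t)
popt, _ = curve_fit(first_order, t_days, conc, p0=(10.0, 0.2))
c0, k = popt
half_life = np.log(2) / k

print(f"k = {k:.3f} 1/d, estimated half-life = {half_life:.1f} d")
```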
Procedia PDF Downloads 312
232 Assessing Measures and Caregiving Experiences of Thai Caregivers of Persons with Dementia
Authors: Piyaorn Wajanatinapart, Diane R. Lauver
Abstract:
The number of persons with dementia (PWD) has increased. Informal caregivers are the main providers of care. They can perceive both gains and burdens. Caregivers who report high perceived gains may report lower burdens and better health. Gaps in the caregiving literature were: psychometric properties not reported in a few studies and unclear definitions of gains; most studies not theory-guided and conducted in Western countries; and relationships among caregiving variables (motivations, satisfaction with psychological needs, social support, gains, burdens, and physical and psycho-emotional health) not fully described. Those gaps were addressed by assessing the psychometric properties of selected measures, providing clear definitions of gains, using self-determination theory (SDT) to guide the study, and conducting the study in Thailand. The study purposes were to evaluate six measures for internal consistency reliability, content validity, and construct validity. This study also examined relationships among caregiving variables: motivations (controlled and autonomous motivations), satisfaction with psychological needs (autonomy, competency, and relatedness), perceived social support, perceived gains, perceived burdens, and physical and psycho-emotional health. This study was a cross-sectional and correlational descriptive design with two convenience samples. Sample 1 was five Thai experts who assessed the content validity of the measures. Sample 2 was 146 Thai caregivers of PWD, used to assess construct validity, reliability, and relationships among caregiving variables. Experts rated the questionnaires and sent them back via e-mail. Caregivers answered the questionnaires at clinics of four Thai hospitals. Data analysis used descriptive statistics and bivariate and multivariate analyses using the composite indicator structural equation model to control measurement errors. Regarding study results, most caregivers were female (82%), middle-aged (M = 51.1, SD = 11.9), and daughters (57%). They provided care for 15 hours/day over 4.6 years. The content validity indices of items and scales were .80 or higher for clarity and relevance. Experts suggested item revisions. Cronbach's alphas were .63 to .93 for ten subscales of four measures and .26 to .57 for three subscales. The gain scale was acceptable for construct validity. Controlling for covariates, controlled motivations, the satisfaction with the three subscales of psychological needs, and perceived social support had positive relationships with physical and psycho-emotional health. Both the satisfaction with the autonomy subscale and perceived social support had negative relationships with perceived burdens. The satisfaction with the three subscales of psychological needs had positive relationships among themselves. The physical and psycho-emotional health subscales had positive relationships with each other. Furthermore, perceived burdens had negative relationships with physical and psycho-emotional health. This study was the first to use SDT to describe relationships among caregiving variables in Thailand. Caregivers' characteristics were consistent with the literature. Four of the six measures were valid and reliable; two were not. Broad knowledge about these relationships was provided. Interpretation of the study results requires caution because the same sample was used both to evaluate the psychometric properties of the measures and to examine the relationships among caregiving variables. Researchers could use the four measures for further caregiving studies. Using a theory would help describe the concepts, propositions, and measures used.
Researchers may examine the satisfaction with psychological needs as mediators. Future studies collecting data from caregivers in communities are needed.
Keywords: caregivers, caregiving, dementia, measures
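Internal consistency figures like the Cronbach's alphas reported above are computed from the item-score matrix as alpha = k/(k-1) · (1 - sum of item variances / variance of the total score). The sketch below implements that formula on simulated Likert responses; the number of items and the response-generating model are assumptions for illustration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Hypothetical 5-point Likert responses: 146 caregivers x 6 items of one subscale
latent = rng.normal(size=(146, 1))
items = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(146, 6))), 1, 5)

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```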
Procedia PDF Downloads 308
231 Bedouin Dispersion in Israel: Between Sustainable Development and Social Non-Recognition
Authors: Tamir Michal
Abstract:
The subject of Bedouin dispersion has accompanied the State of Israel from the day of its establishment. From a legal point of view, this subject has offered a launchpad for creative judicial decisions. Thus, for example, the first court decision in Israel to recognize affirmative action (Avitan) dealt with a petition submitted by a Jew appealing the refusal of the State to recognize the Petitioner's entitlement to the long-term lease of a plot designated for Bedouins. The Supreme Court dismissed the petition, holding that there existed a public interest in assisting Bedouin to establish permanent urban settlements, an interest which justifies giving them preference by selling them plots at subsidized prices. In another case (The Forum for Coexistence in the Negev), the Supreme Court extended equitable relief for the purpose of constructing a bridge, even though the construction infringed the law, in order to allow the children of dispersed Bedouin to reach school. Against this background, the recent verdict, delivered during the Protective Edge military campaign, which dismissed a petition aimed at forcing the State to deploy protective structures in Bedouin villages in the Negev against the risk of being hit by missiles launched from Gaza (Abu Afash), is disappointing. Even if, arguendo, no selective discrimination was involved in the State's decision not to provide such protection, the decision, and its affirmation by the Court, is problematic when examined through the prism of the theory of recognition. The article analyses the issue using the tools of recognition theory, according to which people develop their identities through mutual relations of recognition in different fields. In the social context, the path to recognition is cognitive respect, which is provided by means of legal rights. By seeing other participants in society as bearers of rights and obligations, the individual develops an understanding of his legal condition as reflected in the attitude of others. Consequently, even if the Court's decision may be justified on strict legal grounds, the fact that Jewish settlements were protected during the military operation, whereas Bedouin villages were not, is a setback in the struggle to make the Bedouin citizens with equal rights in Israeli society. As the Court held, 'Beyond their protective function, the Migunit [protective structures] may make a moral and psychological contribution that should not be undervalued'. This contribution is one that the Bedouin did not receive in the Abu Afash verdict. The basic thesis is that the Court's verdict analyzed above clearly demonstrates that reliance on classical liberal instruments (e.g., equality) cannot secure full appreciation of all aspects of Bedouin life, and hence it can in fact prejudice them. Therefore, elements of recognition theory should be added in order to find the channel to cognitive dignity, thereby advancing the Bedouins' ability to perceive themselves as equal human beings in Israeli society.
Keywords: bedouin dispersion, cognitive respect, recognition theory, sustainable development
Procedia PDF Downloads 353
230 Comparative Effects of Resveratrol and Energy Restriction on Liver Fat Accumulation and Hepatic Fatty Acid Oxidation
Authors: Iñaki Milton-Laskibar, Leixuri Aguirre, Maria P. Portillo
Abstract:
Introduction: Energy restriction is an effective approach to preventing liver steatosis. However, due to social and economic reasons, among others, compliance with this treatment protocol is often very poor, especially in the long term. Resveratrol, a natural polyphenolic compound that belongs to the stilbene group, has been widely reported to mimic the effects of energy restriction. Objective: To analyze the effects of resveratrol, under normoenergetic feeding conditions and under a mild energy restriction, on liver fat accumulation and hepatic fatty acid oxidation. Methods: 36 male six-week-old rats were fed a high-fat high-sucrose diet for 6 weeks in order to induce steatosis. Then, rats were divided into four groups and fed a standard diet for 6 additional weeks: control group (C), resveratrol group (RSV, resveratrol 30 mg/kg/d), restricted group (R, 15% energy restriction) and combined group (RR, 15% energy restriction and resveratrol 30 mg/kg/d). Liver triacylglycerol (TG) and total cholesterol contents were measured using commercial kits. Carnitine palmitoyltransferase 1a (CPT1a) and citrate synthase (CS) activities were measured spectrophotometrically. TFAM (mitochondrial transcription factor A) and peroxisome proliferator-activated receptor alpha (PPARα) protein contents, as well as the ratio of acetylated peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC1α) to total PGC1α, were analyzed by Western blot. Statistical analysis was performed using one-way ANOVA and the Newman-Keuls post-hoc test. Results: No differences were observed among the four groups regarding liver weight and cholesterol content, but the three treated groups showed reduced TG when compared to the control group, with the restricted groups showing the lowest values (with no differences between them). Higher CPT1a and CS activities were observed in the groups supplemented with resveratrol (RSV and RR), with no difference between them. The acetylated PGC1α/total PGC1α ratio was lower in the treated groups (RSV, R and RR) than in the control group, with no differences among them. As far as TFAM protein expression is concerned, only the RR group reached a higher value. Finally, no changes were observed in PPARα protein expression. Conclusions: Resveratrol administration is an effective intervention for reducing liver triacylglycerol content, but a mild energy restriction is even more effective. The mechanisms of action of these two strategies are different. Thus resveratrol, but not energy restriction, seems to act by increasing fatty acid oxidation, although mitochondriogenesis seems not to be induced. When both treatments (resveratrol administration and a mild energy restriction) were combined, no additive or synergistic effects were observed. Acknowledgements: MINECO-FEDER (AGL2015-65719-R), Basque Government (IT-572-13), University of the Basque Country (ELDUNANOTEK UFI11/32), Institute of Health Carlos III (CIBERobn). Iñaki Milton is supported by a fellowship from the Basque Government.
Keywords: energy restriction, fat, liver, oxidation, resveratrol
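The statistical workflow described (one-way ANOVA followed by a post-hoc comparison of group means) can be sketched as below on invented group data. Note that the Newman-Keuls test used in the study is not available in scipy or statsmodels, so Tukey's HSD is shown here as a stand-in post-hoc procedure; group means, spreads and sample sizes are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
# Hypothetical liver TG contents (mg/g) for the four experimental groups (illustrative values)
groups = {
    "C":   rng.normal(45, 5, 9),
    "RSV": rng.normal(36, 5, 9),
    "R":   rng.normal(30, 5, 9),
    "RR":  rng.normal(29, 5, 9),
}

# One-way ANOVA across the four groups
F, p = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

# Post-hoc pairwise comparisons (Tukey's HSD as a stand-in for Newman-Keuls)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```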
Procedia PDF Downloads 212