595 Hydrological-Economic Modeling of Two Hydrographic Basins of the Coast of Peru
Authors: Julio Jesus Salazar, Manuel Andres Jesus De Lama
Abstract:
There are very few models that analyze the use of water in the socio-economic process. On the supply side, the conjunctive use of groundwater has been considered in addition to simple limits on the availability of surface water. In addition, we have worked on waterlogging and its effects on water quality (mainly salinity). In this paper, a 'complex' water economy is examined: one in which demands grow differentially not only within but also between sectors, and in which there are limited opportunities to increase consumptive use. In particular, the growth of high-value irrigated crop production within the basins of the case study, together with rapidly growing urban areas, provides a rich context for examining the general problem of water management at the basin level. At the same time, long-term natural aridity has made the eco-environment of the basins on the coast of Peru very vulnerable, and the exploitation and immediate use of water resources have further deteriorated the situation. The presented methodology is optimization with embedded simulation. Basin-wide simulation of flows, water balances, and crop growth is embedded within the optimization of water allocation, reservoir operation, and irrigation scheduling. The modeling framework is built on a river-basin network that includes multiple source nodes (reservoirs, aquifers, water courses, etc.) and multiple demand sites along the river, including places of consumptive use for agricultural, municipal, and industrial purposes, as well as instream uses, on the coast of Peru. The economic benefits associated with water use are evaluated for different demand management instruments, including water rights, based on the production and benefit functions of water use in the urban, agricultural, and industrial sectors.
This work represents a new effort to analyze the use of water at the regional level and to evaluate the modernization of integrated water resources management and socio-economic territorial development in Peru. It will also allow the establishment of policies to improve the implementation of integrated water resources management and development. Input-output analysis is essential for presenting a theory of the production process based on a particular type of production function. This work also presents a Computable General Equilibrium (CGE) version of the economic model for water resource policy analysis, specifically designed for analyzing large-scale water management. As the platform for CGE simulation, GEMPACK, a flexible system for solving CGE models, is used to formulate and solve the CGE model through the percentage-change approach. GEMPACK automates the process of translating the model specification into a model solution program.
Keywords: water economy, simulation, modeling, integration
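The allocation component of such a model can be illustrated with a deliberately simplified sketch (hypothetical quadratic benefit functions and invented coefficients, not the authors' basin model): a fixed water supply is spread across sectors by the equimarginal principle, with each increment going to the sector with the highest remaining marginal benefit.

```python
# Hypothetical sketch of basin-scale water allocation, NOT the authors' model:
# allocate a fixed supply across sectors with diminishing marginal benefits
# B_i(w) = a_i*w - 0.5*b_i*w^2, using the equimarginal principle.

def allocate_water(sectors, supply, step=0.01):
    """sectors: {name: (a, b)} coefficients of quadratic benefit functions."""
    alloc = {name: 0.0 for name in sectors}
    remaining = supply
    while remaining > 1e-9:
        # marginal benefit of the next increment for each sector
        mb = {name: a - b * alloc[name] for name, (a, b) in sectors.items()}
        best = max(mb, key=mb.get)
        if mb[best] <= 0:          # no sector gains from more water
            break
        amount = min(step, remaining)
        alloc[best] += amount
        remaining -= amount
    return alloc

# Illustrative (invented) coefficients: urban demand is high-value but saturates fast.
sectors = {"urban": (10.0, 4.0), "agriculture": (6.0, 1.0), "industry": (8.0, 2.0)}
allocation = allocate_water(sectors, supply=5.0)
```

At the optimum the marginal benefits across sectors are approximately equalized, which is the textbook condition the full optimization-with-embedded-simulation framework generalizes.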
Procedia PDF Downloads 155
594 Real-Time Monitoring of Complex Multiphase Behavior in a High Pressure and High Temperature Microfluidic Chip
Authors: Renée M. Ripken, Johannes G. E. Gardeniers, Séverine Le Gac
Abstract:
Controlling the multiphase behavior of aqueous biomass mixtures is essential when working in the biomass conversion industry. Here, the vapor/liquid equilibria (VLE) of ethylene glycol, glycerol, and xylitol were studied for temperatures between 25 and 200 °C and pressures of 1 to 10 bar. These experiments were performed in a microfluidic platform, which exhibits excellent heat transfer properties so that equilibrium is reached quickly. Firstly, the saturated vapor pressure as a function of temperature and substrate mole fraction was calculated using AspenPlus with a Redlich-Kwong-Soave Boston-Mathias (RKS-BM) model. Secondly, we developed a high-pressure and high-temperature microfluidic set-up for experimental validation. Furthermore, we studied the multiphase flow pattern that occurs after the saturation temperature is reached. A glass-silicon microfluidic device containing a 0.4 or 0.2 m long meandering channel with a depth of 250 μm and a width of 250 or 500 μm was fabricated using standard microfabrication techniques. This device was placed in a dedicated chip-holder, which includes a ceramic heater on the silicon side. The temperature was controlled and monitored by three K-type thermocouples: two were located between the heater and the silicon substrate, one to set the temperature and one to measure it, and the third was placed in a 300 μm wide and 450 μm deep groove on the glass side to determine the heat loss over the silicon. An adjustable back-pressure regulator and a pressure meter were added to control and evaluate the pressure during the experiment. Aqueous biomass solutions (10 wt%) were pumped at a flow rate of 10 μL/min using a syringe pump, and the temperature was slowly increased until the theoretical saturation temperature for the pre-set pressure was reached. Surprisingly, a significant difference was observed between our theoretical saturation temperatures and the experimental results.
The experimental values were tens of degrees higher than the calculated ones and, in some cases, saturation could not be achieved. This discrepancy can be explained in different ways. Firstly, the pressure in the microchannel is locally higher due to both the thermal expansion of the liquid and the Laplace pressure that has to be overcome before a gas bubble can form. Secondly, superheating effects are likely to be present. Next, once saturation was reached, the flow pattern of the gas/liquid multiphase system was recorded. In our device, the point of nucleation can be controlled by taking advantage of the pressure drop across the channel and the accurate control of the temperature. Specifically, a higher temperature resulted in nucleation further upstream in the channel. As the void fraction increases downstream, the flow regime changes along the channel from bubbly flow to Taylor flow and later to annular flow. All three flow regimes were observed simultaneously. The findings of this study are key for the development and optimization of a microreactor for hydrogen production from biomass.
Keywords: biomass conversion, high pressure and high temperature microfluidics, multiphase, phase diagrams, superheating
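For intuition about how the theoretical saturation temperature depends on pressure, a much simpler correlation than the RKS-BM flash can be sketched. The snippet below uses textbook Antoine constants for pure water, valid roughly from 1 to 100 °C (an assumption for illustration, not the mixture model used in the paper):

```python
import math

# Antoine equation for PURE WATER (textbook constants; pressure in mmHg,
# temperature in degrees C, valid ~1-100 C). This illustrates a vapor-pressure
# correlation only; it is not the RKS-BM mixture model used in the paper.
A, B, C = 8.07131, 1730.63, 233.426

def p_sat_mmHg(t_celsius):
    """Saturation vapor pressure of water at t_celsius."""
    return 10 ** (A - B / (C + t_celsius))

def t_sat_celsius(p_mmHg):
    """Invert the Antoine equation: boiling temperature at pressure p_mmHg."""
    return B / (A - math.log10(p_mmHg)) - C

t_boil = t_sat_celsius(760.0)   # at 1 atm this lands close to 100 C
```

Raising the local pressure (thermal expansion plus Laplace pressure) shifts the predicted saturation temperature upward, which is consistent with the direction of the discrepancy reported above.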
Procedia PDF Downloads 217
593 Identifying the Needs for Renewal of Urban Water Infrastructure Systems: Analysis of Material, Age, Types and Areas: Case Study of Linköping in Sweden
Authors: Eman Hegazy, Stefan Anderberg, Joakim Krook
Abstract:
Urban water infrastructure is crucial for efficient and reliable water supply in growing cities. As cities grow, the need for maintenance and renewal of these systems increases but often goes unfulfilled for a variety of reasons, such as limited funding, political priorities, or lack of public awareness. Neglecting the renewal needs of these systems can lead to frequent malfunctions and reduced quality and reliability of water supply, as well as increased costs and health and environmental hazards. It is important for cities to prioritize investment in water infrastructure and develop long-term plans to address renewal needs. Drawing general conclusions about the rate of renewal of urban water infrastructure systems at an international or national level is challenging due to the influence of local management decisions. In many countries, responsibility for water infrastructure management lies with municipal authorities, who make decisions about the allocation of resources for repair, maintenance, and renewal. These decisions can vary widely based on factors such as local finances, political priorities, and public perception of the importance of water infrastructure. As a result, it is difficult to generalize about the rate of renewal across different countries or regions. In Sweden, the situation is no different: information from Svenskt Vatten indicates that the rate of renewal varies across municipalities and can be insufficient, leading to a buildup of maintenance and renewal needs. This study examines the adequacy of the rate of renewal of urban water infrastructure in the case city of Linköping, Sweden. Using a case study framework, it assesses the current status of the urban water system and the need for renewal.
The study also considers the role of factors such as proper identification processes, limited funding, competing political priorities, and local management decisions in contributing to insufficient renewal. The study investigates the following questions: (1) What is the current status of the water and sewerage networks in terms of length, age distribution, and material composition, the estimated total water leakage in the network per year, and the number of damages, leaks, and outages per year, both overall and by district? (2) What are the main causes of these damages, leaks, and interruptions, and how are they related to a lack of maintenance and renewal? (3) What is the current status of renewal work for the water and sewerage networks, including the renewal rate and its changes over time, the composition of recently used renewal materials, and the budget allocation for renewal and emergency repairs? (4) What factors influence the need for renewal, and what conditions should be considered in the assessment? The findings of the study provide insights into the challenges facing urban water infrastructure and identify strategies for improving the rate of renewal to ensure a reliable and sustainable water supply.
Keywords: case study, infrastructure, management, renewal need, Sweden
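The link between renewal rate and renewal need can be illustrated with a back-of-the-envelope calculation (the figures below are invented for illustration, not Linköping data): a renewal rate of r % of network length per year implies a full renewal cycle of 100/r years, which can then be compared against typical pipe lifetimes.

```python
# Back-of-the-envelope renewal indicator with INVENTED illustrative figures
# (not data from the Linkoping case): at a renewal rate of r % of network
# length per year, the implied renewal cycle is 100/r years.

def renewal_cycle_years(renewed_km_per_year, total_km):
    """Years needed to renew the whole network at the current pace."""
    rate = renewed_km_per_year / total_km   # fraction renewed per year
    return 1.0 / rate

# Example: 4 km renewed annually in an 800 km network -> 0.5 %/yr -> 200 years,
# far longer than the 50-100 year technical lifetime commonly assumed for pipes.
cycle = renewal_cycle_years(renewed_km_per_year=4.0, total_km=800.0)
```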
Procedia PDF Downloads 103
592 Generating Ideas to Improve Road Intersections Using Design with Intent Approach
Authors: Omar Faruqe Hamim, M. Shamsul Hoque, Rich C. McIlroy, Katherine L. Plant, Neville A. Stanton
Abstract:
Road safety has become an alarming issue, especially in low- and middle-income developing countries. Traditional approaches lack out-of-the-box thinking, confining engineers to the usual techniques for making roads safer. A socio-technical approach has recently been introduced to improving road intersections through designing with intent. The Design with Intent (DWI) approach aims to give practitioners a more nuanced approach to design and behavior, working with people, people's understanding, and the complexities of everyday human experience. It is a collection of design patterns, and a design and research approach, for exploring the interactions between design and people's behavior across products, services, and environments, both digital and physical. The approach shows how designing with people for behavior change can be applied to social and environmental problems, as well as commercially. It comprises a total of 101 cards across eight different lenses (architectural, error-proofing, interaction, ludic, perceptual, cognitive, Machiavellian, and security), each eliciting ideas from participants in its own distinct way. For this research, a three-legged accident-blackspot intersection on a national highway was chosen for the DWI workshop. Participants from varying fields, such as civil engineering, naval architecture and marine engineering, urban and regional planning, and sociology, took part in a day-long workshop. The participants were given a preamble on the accident scenario and a brief overview of the DWI approach. Design cards from the various lenses were distributed among the 10 participants, who were given an hour and a half to brainstorm and generate ideas for improving the safety of the selected intersection.
After the brainstorming session, the participants held roundtable discussions of the ideas they had come up with, and ideas were accepted or rejected by consensus of the forum. The accepted ideas were then synthesized and combined into an improvement scheme for the selected intersection. The most significant improvement ideas from the DWI approach were: color-coding traffic lanes for separate vehicle types, channelizing the existing bare intersection, providing advance warning signs, cautionary signs, and educational signs motivating road users to drive safely, and using textured surfaces with rumble strips on the approaches to the intersection. The motive of this approach is to generate new ideas from road users rather than depend solely on traditional schemes, to increase the efficiency and safety of roads, and to improve road-user compliance, since these features originate from the minds of the users themselves.
Keywords: design with intent, road safety, human experience, behavior
Procedia PDF Downloads 139
591 To Assess the Knowledge, Awareness and Factors Associated With Diabetes Mellitus in Buea, Cameroon
Authors: Franck Acho
Abstract:
Diabetes mellitus is a chronic metabolic disorder and a fast-growing global problem with huge social, health, and economic consequences. It is estimated that in 2010 there were globally 285 million people (approximately 6.4% of the adult population) suffering from this disease. This number is estimated to increase to 430 million in the absence of better control or cure. An ageing population and obesity are two main reasons for the increase. Diabetes mellitus is a chronic heterogeneous metabolic disorder with a complex pathogenesis. It is characterized by elevated blood glucose levels, or hyperglycemia, which results from abnormalities in insulin secretion, insulin action, or both. Hyperglycemia manifests in various forms with varied presentations and results in carbohydrate, fat, and protein metabolic dysfunctions. Long-term hyperglycemia often leads to various microvascular and macrovascular diabetic complications, which are mainly responsible for diabetes-associated morbidity and mortality. Hyperglycemia also serves as the primary biomarker for the diagnosis of diabetes. Furthermore, it has been shown that almost 50% of putative diabetics are not diagnosed until 10 years after onset of the disease, so the real global prevalence of diabetes must be far higher. This study was conducted to assess the level of knowledge, awareness, and risk factors associated with people living with diabetes mellitus in the locality. A month before the screening, it was advertised in selected churches, on the local community radio, and in relevant WhatsApp groups. A general health talk was delivered by the head of the screening unit to all attendees, who were educated on the procedure to be carried out, its benefits, and any possible discomforts, after which each attendee's consent was obtained.
Participants were evaluated for any signs suggestive of diabetes by taking an adequate history and performing physical examinations, covering symptoms such as excessive thirst, increased urination, tiredness, hunger, unexplained weight loss, irritability or other mood changes, blurry vision, slow-healing sores, and frequent infections, such as gum, skin, and vaginal infections. Of the 94 participants, 78 were female and 16 were male, and 70.21% of participants with diabetes were between the ages of 60 and 69 years. The study found that only 10.63% of respondents declared a good level of knowledge of diabetes. Of the 3 symptoms of diabetes analyzed in this study, high blood sugar (58.5%) and chronic fatigue (36.17%) were the most recognized. Of the 4 diabetes risk factors analyzed, obesity (21.27%) and unhealthy diet (60.63%) were the most recognized, while only 10.6% of respondents indicated tobacco use. Diabetic foot was the most recognized diabetes complication (50.57%), but some participants indicated vision problems (30.8%) or cardiovascular diseases (20.21%) as diabetes complications.
Keywords: diabetes mellitus, non-communicable disease, general health talk, hyperglycemia
Procedia PDF Downloads 56
590 Redefining Intellectual Humility in Indian Context: An Experimental Investigation
Authors: Jayashree And Gajjam
Abstract:
Intellectual humility (IH) is defined as a virtuous mean between intellectual arrogance and intellectual self-diffidence by the 'Doxastic Account of IH' studied, researched, and developed by Western scholars, beginning no earlier than 2015 at the University of Edinburgh. Ancient Indian philosophical texts, the Upanisads, written in the Sanskrit language during the later Vedic period (circa 600-300 BCE), have long addressed the virtue of being humble in several stories and narratives. The current research paper questions and revisits these character traits in an Indian context following an experimental method. Based on the subjective reports of more than 400 Indian teenagers and adults, it argues that while a few traits of IH (such as trustworthiness, respectfulness, intelligence, politeness, etc.) are panhuman and pancultural, a few are not. Some attributes of IH (such as proper pride, open-mindedness, awareness of one's own strengths, etc.) may be taken for arrogance by the Indian population, while qualities of intellectual diffidence such as agreeableness and surrendering can be regarded as characteristics of IH. The paper then traces the reasons for this discrepancy back to the ancient Indian (Upaniṣadic) teachings that are still prevalent in many Indian families and still anchor their views on IH. The name Upanisad itself means 'sitting down near' (to the Guru, to gain the Supreme knowledge of the Self and the Universe and to set ignorance to rest), which is equivalent to three traits among the BIG SEVEN characterized as IH by Western scholars, viz. 'being a good listener', 'curiosity to learn', and 'respect for others' opinions'. The story of Satyakama Jabala (Chandogya Upanisad 4.4-8), who seeks the truth for several years even from the bull, the fire, the swan, and the waterfowl, suggests nothing but the 'need for cognition' or 'desire for knowledge'.
Nachiketa (Katha Upanisad), a boy with a pure mind and heart, follows his father's words and offers himself to Yama (the God of Death); after waiting for Yama for three days and nights, he seeks the knowledge of the mysteries of life and death. Although the main aim of these Upaniṣadic stories is to impart knowledge of life and death and the Supreme Reality, which aligns with traits such as 'curiosity to learn', one cannot deny that they have much more to offer than mere information about true knowledge, e.g., 'politeness', 'being a good listener', 'awareness of one's own limitations', etc. The possible future scope of this research includes (1) finding other socio-cultural factors that affect ideas on IH, such as age, gender, caste, type of education, highest qualification, place of residence, and source of income, which may be predominant in current Indian society despite the great teachings of the Upaniṣads, and (2) devising measures to impart IH to Indian children, teenagers, and younger adults for a harmonious future. The current experimental research can be considered a first step towards these goals.
Keywords: ethics and virtue epistemology, Indian philosophy, intellectual humility, upaniṣadic texts in ancient India
Procedia PDF Downloads 92
589 Enterprises and Social Impact: A Review of the Changing Landscape
Authors: Suzhou Wei, Isobel Cunningham, Laura Bradley McCauley
Abstract:
Social enterprises play a significant role in resolving social issues in the modern world. In contrast to traditional commercial businesses, their main goal is to address social concerns rather than to maximize profits. This phenomenon in entrepreneurship is presenting new opportunities, different operating models, and modified approaches to measuring success beyond traditional market share and margins. This paper explores social enterprises to clarify their roles and approaches in addressing grand challenges related to social issues. In doing so, it analyses the key differences between traditional businesses and social enterprises, such as their operating models and value propositions, to understand their contributions to society. The research presented in this paper responds to calls for research to better understand social enterprises and entrepreneurship, but also to explore the dynamics between profit-driven and socially oriented entities in delivering mutual benefits. Examining the features of commercial business, the paper suggests their primary focus is profit generation, economic growth, and innovation. Beyond the pursuit of profit, it highlights the critical role of innovation typical of successful businesses. This, in turn, promotes economic growth, creates job opportunities, and makes a major positive impact on people's lives. In contrast, the motivations upon which social enterprises are founded relate to a commitment to address social problems rather than to maximize profits. These entities combine entrepreneurial principles with commitments to deliver social impact and change on grand challenges, creating a distinctive category within the broader enterprise and entrepreneurship landscape. The motivations for establishing a social enterprise are diverse, encompassing personal fulfillment, a genuine desire to contribute to society, and a focus on achieving impactful accomplishments.
The paper also discusses the collaboration between commercial businesses and social enterprises, which is viewed as a strategic approach to addressing grand challenges more comprehensively and effectively. Finally, this paper highlights the evolving and diverse expectations placed on all businesses to actively contribute to society beyond profit-making. We conclude that there is an unrealized and underdeveloped potential for collaboration between commercial businesses and social enterprises to produce greater and long-lasting social impacts. Overall, the aim of this research is to encourage more investigation of the complex relationship between economic and social objectives and contributions through a better understanding of how and why businesses might address social issues. Ultimately, the paper positions itself as a tool for understanding the evolving landscape of business engagement with social issues and advocates for collaborative efforts to achieve sustainable and impactful outcomes.
Keywords: business, social enterprises, collaboration, social issues, motivations
Procedia PDF Downloads 51
588 Problem Solving in Mathematics Education: A Case Study of Nigerian Secondary School Mathematics Teachers’ Conceptions in Relation to Classroom Instruction
Authors: Carol Okigbo
Abstract:
Mathematical problem solving has long been accorded an important place in mathematics curricula at every education level in both advanced and emerging economies. Classroom approaches have varied: teaching for problem-solving, teaching about problem-solving, and teaching mathematics through problem-solving. Problem solving requires engaging in tasks for which the solution methods are not evident, making sense of problems, and persevering in solving them by exhibiting appropriate processes, strategies, attitudes, and adequate exposure. Teachers play important roles in helping students acquire competency in problem-solving; thus, they are expected to be good problem-solvers and to hold proper conceptions of problem-solving. Studies show that teachers' conceptions influence their decisions about what to teach and how to teach it. Therefore, how teachers view their roles in teaching problem-solving will depend on their pedagogical conceptions of problem-solving. If teaching problem-solving is to be a major component of secondary school mathematics instruction, as recommended by researchers and mathematics educators, then it is necessary to establish teachers' conceptions, what they do, and how they approach problem-solving. This study is designed to determine secondary school teachers' conceptions regarding mathematical problem solving, its current situation, how teachers' conceptions relate to their demographics, and the interaction patterns in the mathematics classroom. There have been many studies of mathematics problem solving, some of which addressed teachers' conceptions using single-method approaches, thereby presenting only limited views of this important phenomenon. To address the problem more holistically, this study adopted an integrated mixed-methods approach involving a quantitative survey, qualitative analysis of open-ended responses, and ethnographic observations of teachers in class.
Data for the analysis came from a random sample of 327 secondary school mathematics teachers in two Nigerian states, Anambra and Enugu, who completed a 45-item questionnaire. Ten of the items elicited demographic information, 11 were open-ended questions, and 25 were Likert-type questions. Of the 327 teachers who responded to the questionnaire, 37 were randomly selected and observed in their classes. Data analysis using ANOVA, t-tests, chi-square tests, and open coding showed that the teachers held different conceptions of problem-solving, which fall into three main themes: practice on exercises and word application problems, a process of solving mathematical problems, and a way of teaching mathematics. Teachers reported that no period is set aside for problem-solving; typically, teachers solve problems on the board, teach problem-solving strategies, and allow students time to struggle with problems on their own. The results show a significant difference between male and female teachers' conceptions of problem-solving, a significant relationship between teachers' conceptions and academic qualifications, and that teachers who had taught mathematics for ten years or more differed significantly in their conceptions of problem-solving from the group with seven to nine years of experience.
Keywords: conceptions, education, mathematics, problem solving, teacher
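The chi-square analysis mentioned above can be sketched as follows. The contingency table is hypothetical (invented counts, not the study's data), and the statistic is computed from scratch for transparency:

```python
# Chi-square test of independence on a HYPOTHETICAL contingency table
# (invented counts, not the study's data): rows = teacher qualification,
# columns = dominant conception theme.

def chi_square_statistic(table):
    """Compute the chi-square statistic for a table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

observed = [
    [30, 45, 25],   # e.g. teachers with a B.Ed. (invented counts)
    [20, 20, 10],   # e.g. teachers with an M.Ed. (invented counts)
]
stat = chi_square_statistic(observed)
# degrees of freedom = (rows-1)*(cols-1) = 2; compare stat against the
# chi-square critical value 5.991 at alpha = 0.05.
```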
Procedia PDF Downloads 76
587 Smart Architecture and Sustainability in the Built Environment for the Hatay Refugee Camp
Authors: Ali Mohammed Ali Lmbash
Abstract:
The global refugee crisis points to the vital need for sustainable and resilient solutions to the many problems facing displaced persons all over the world. Sustainability concerns are diverse, including energy consumption, waste management, water access, and the resilience of structures. Our research aims to develop distinct ideas for sustainable architecture in response to the exigent problems of disaster-threatened areas, starting with the Hatay refugee camp in Turkey, where the majority of camp dwellers are Syrian refugees. Building on community-based participatory research focused on the socio-environmental issues of displaced populations, this study applies two approaches with a specific focus on the Hatay region. The first experiment uses Richter-based predictive modeling and simulations to forecast earthquake outcomes in refugee camps. The results could inform architectural design tactics that enhance structural reliability and ensure the security and safety of shelters during earthquakes. In the second experiment, a model is generated to predict the quality of existing water sources, given how vital water is to human well-being. This research aims to enable camp administrators to employ forward-looking practices in managing water resources, thus minimizing health risks and building the resilience of the refugees in the Hatay area. The research also assesses other sustainability problems of the Hatay refugee camp. As energy consumption becomes a major issue, housing developers are required to consider energy-efficient designs and the feasible integration of renewable energy technologies to minimize environmental impact and improve the long-term sustainability of housing projects.
Waste management is given special attention through recycling initiatives and waste reduction measures to slow environmental degradation in the camp's land area. The study also gives insight into the social and economic reality of the camp, investigating the contribution of initiatives such as urban agriculture and vocational training to the enhancement of livelihoods and community empowerment. The study combines the latest research with practical experience to contribute to the continuing discussion on sustainable architecture in disaster relief, providing recommendations and information that can be adapted at any scale worldwide. Through collaborative efforts and a dedicated sustainability approach, we can jointly address the root causes and work towards a far more robust and equitable society.
Keywords: smart architecture, Hatay Camp, sustainability, machine learning
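As a simple illustration of Richter-based forecasting (a generic textbook relation, not the authors' calibrated model for the Hatay region), the Gutenberg-Richter frequency-magnitude law gives the expected annual rate of events at or above a given magnitude; the a and b values below are invented for illustration:

```python
# Gutenberg-Richter frequency-magnitude relation with INVENTED a, b values
# (for illustration only; not calibrated to the Hatay region):
# log10 N(>=M) = a - b*M, where N is the annual rate of events of magnitude >= M.

def annual_rate(magnitude, a=4.0, b=1.0):
    """Expected number of events per year with magnitude >= `magnitude`."""
    return 10 ** (a - b * magnitude)

def return_period_years(magnitude, a=4.0, b=1.0):
    """Average interval in years between events of at least this magnitude."""
    return 1.0 / annual_rate(magnitude, a, b)

rp_m5 = return_period_years(5.0)   # 10-year return period with these values
```

Estimates of this kind feed directly into the structural-reliability targets that shelter designs must meet.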
Procedia PDF Downloads 54
586 Inhibition of Mild Steel Corrosion in Hydrochloric Acid Medium Using an Aromatic Hydrazide Derivative
Authors: Preethi Kumari P., Shetty Prakasha, Rao Suma A.
Abstract:
Mild steel is widely employed as a construction material for pipework in oil and gas production, such as downhole tubulars, flow lines, and transmission pipelines, and in the chemical and allied industries for handling acids, alkalis, and salt solutions, owing to its excellent mechanical properties and low cost. Acid solutions are widely used for the removal of undesirable scale and rust in many industrial processes. Among the commercially available acids, hydrochloric acid is widely used for pickling, cleaning, de-scaling, and the acidization of oil wells. Mild steel exhibits poor corrosion resistance in hydrochloric acid. Its high reactivity in hydrochloric acid is due to the soluble nature of the ferrous chloride formed; the cementite phase (Fe3C) normally present in the steel is also readily soluble in hydrochloric acid. Pitting attack is also reported to be a major form of corrosion of mild steel in the presence of high concentrations of acid, causing the complete destruction of the metal. Hydrogen from the acid reacts with the metal surface, making it brittle and causing cracks, which leads to pitting-type corrosion. The use of chemical inhibitors to minimize the rate of corrosion has been considered the first line of defense against corrosion. In spite of the long history of corrosion inhibition, a highly efficient and durable inhibitor that can completely protect mild steel in aggressive environments is yet to be realized. It is clear from the literature that there is ample scope for the development of new organic inhibitors that can be conveniently synthesized from relatively cheap raw materials and provide good inhibition efficiency with the least risk of environmental pollution.
The aim of the present work is to evaluate the electrochemical parameters for the corrosion inhibition behavior of an aromatic hydrazide derivative, 4-hydroxy-N'-[(E)-1H-indole-2-ylmethylidene]benzohydrazide (HIBH), on mild steel in 2 M hydrochloric acid using Tafel polarization and electrochemical impedance spectroscopy (EIS) techniques at 30-60 °C. The results showed that inhibition efficiency increased with increasing inhibitor concentration and decreased marginally with increasing temperature. HIBH showed a maximum inhibition efficiency of 95% at a concentration of 8×10⁻⁴ M at 30 °C. Polarization curves showed that HIBH acts as a mixed-type inhibitor. The adsorption of HIBH on the mild steel surface obeys the Langmuir adsorption isotherm. The adsorption of HIBH at the mild steel/hydrochloric acid solution interface is mixed, with predominantly physisorption at lower temperatures and chemisorption at higher temperatures. Thermodynamic parameters for the adsorption process and kinetic parameters for the metal dissolution reaction were determined.
Keywords: electrochemical parameters, EIS, mild steel, Tafel polarization
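Two standard relations used in studies of this kind can be sketched as follows (the numbers are invented for illustration, not the HIBH measurements): inhibition efficiency computed from corrosion current densities, and a least-squares check of the Langmuir isotherm in its linear form C/θ = 1/K_ads + C:

```python
# Standard corrosion-inhibition relations with INVENTED illustrative data
# (not the HIBH measurements): inhibition efficiency from corrosion current
# densities, and a least-squares check of the Langmuir isotherm C/theta = 1/K + C.

def inhibition_efficiency(i_corr_blank, i_corr_inhibited):
    """IE% = (1 - i_inhibited / i_blank) * 100."""
    return (1.0 - i_corr_inhibited / i_corr_blank) * 100.0

def langmuir_fit(conc, coverage):
    """Linear least-squares fit of C/theta vs C; returns (slope, K_ads).
    A slope close to 1 indicates Langmuir-type adsorption."""
    x = conc
    y = [c / th for c, th in zip(conc, coverage)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx          # intercept = 1 / K_ads
    return slope, 1.0 / intercept

ie = inhibition_efficiency(i_corr_blank=250.0, i_corr_inhibited=12.5)  # 95 %

# Synthetic coverage data generated from theta = K*C / (1 + K*C) with K = 1e4:
K = 1.0e4
conc = [1e-4, 2e-4, 4e-4, 8e-4]
coverage = [K * c / (1 + K * c) for c in conc]
slope, k_fit = langmuir_fit(conc, coverage)
```

From K_ads, the standard free energy of adsorption ΔG_ads = -RT ln(55.5 K_ads) can then be evaluated to distinguish physisorption from chemisorption, as done in the paper.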
Procedia PDF Downloads 336
585 Developing Sustainable Tourism Practices in Communities Adjacent to Mines: An Exploratory Study in South Africa
Authors: Felicite Ann Fairer-Wessels
Abstract:
There has always been a disparity between mining and tourism, mainly due to the socio-economic and environmental impacts of mines on both adjacent resident communities and the areas taken up by mining operations. Although heritage mining tourism has been actively and successfully pursued and developed in the UK, notably in Wales, and in the Scandinavian countries, the debate over whether active mining and tourism can have a mutually beneficial relationship remains open. This pilot study explores the relationship between the yet-to-be-developed Nokeng Mine and the adjacent rural community of Moloto, investigating whether sustainable tourism and livelihood activities can be developed with the support of the mine. Concepts such as the social entrepreneur, corporate social responsibility, sustainable development and the triple bottom line are discussed. Within the South African context, as a mineral-rich developing country, the government has a statutory obligation to empower disenfranchised communities through social and labour plans and policies. All South African mines must maintain a Social and Labour Plan according to the Mineral and Petroleum Resources Development Act, No. 28 of 2002. The 'social' component refers to the social upliftment of communities within or adjacent to any mine, whereas the 'labour' component refers to the mine workers sourced from the specific community. A qualitative methodology is followed, using the case study as the research instrument for the Nokeng Mine and the Moloto community, with interviews and focus group discussions. The target population comprised the Moloto Tribal Council members (8 in-depth interviews), Moloto community members (17, in focus groups), and Nokeng Mine representatives (4 in-depth interviews).
In this pilot study, two disparate 'worlds' are potentially linked: on the one hand, the mine as a social entrepreneur searching for feasible and sustainable ideas; and on the other, the community adjacent to the mine, with potential tourism entrepreneurs who can tap into the resources of the mine to build their businesses, should their ideas prove feasible. As an exploratory study, the findings are limited, but they indicate that the possible success of tourism and sustainable livelihood activities lies in the fact that both the mine and the community are keen to work together: the mine in terms of obtaining labour and profit, and the community in terms of improved and sustainable social and economic conditions, with both parties realizing the importance of mitigating negative environmental impacts. In conclusion, a relationship of trust between a mine and a community is imperative before a long-term liaison is possible. However, whether tourism is a viable solution for the community to engage in is debatable. The community could initially pursue the sustainable livelihoods approach and focus on life-supporting activities such as building and gardening that, once established, could feed into possible sustainable tourism activities.
Keywords: community development, mining tourism, sustainability, South Africa
584 Analysis of Aspergillus fumigatus IgG Serologic Cut-Off Values to Increase Diagnostic Specificity of Allergic Bronchopulmonary Aspergillosis
Authors: Sushmita Roy Chowdhury, Steve Holding, Sujoy Khan
Abstract:
The immunogenic responses of the lung to the fungus Aspergillus fumigatus range from invasive aspergillosis in the immunocompromised, through a fungal ball (infection within a cavity in those with structural lung lesions), to allergic bronchopulmonary aspergillosis (ABPA). Patients with asthma or cystic fibrosis are particularly predisposed to ABPA. Consensus guidelines have established criteria for the diagnosis of ABPA, but uncertainty remains over the serologic cut-off values that would increase its diagnostic specificity. We retrospectively analyzed 80 patients with severe asthma and evidence of peripheral blood eosinophilia (> 500) over the last 3 years who underwent all serologic tests to exclude ABPA. Total IgE, specific IgE and specific IgG levels against Aspergillus fumigatus were measured using the ImmunoCAP Phadia-100 (Thermo Fisher Scientific, Sweden). The Modified ISHAM working group 2013 criteria (obligate criteria: asthma or cystic fibrosis, total IgE > 1000 IU/ml or > 417 kU/L, and positive specific IgE to Aspergillus fumigatus or skin-test positivity; with ≥ 2 of: peripheral eosinophilia, positive specific IgG to Aspergillus fumigatus, and consistent radiographic opacities) were used in the clinical workup for the final diagnosis of ABPA. Patients were divided into three groups: definite, possible, and no evidence of ABPA. Specific IgG Aspergillus fumigatus levels were not used to assign patients to any of the groups. Of the 80 patients (48 males, 32 females; mean age 53.9 years, SD 15.8) selected for the analysis, 30 had positive specific IgE against Aspergillus fumigatus (37.5%). Thirteen patients fulfilled the Modified ISHAM working group 2013 criteria for ABPA ('definite'), while 15 patients were 'possible' ABPA and 52 did not fulfill the criteria (not ABPA). As IgE levels were not normally distributed, median levels were used in the analysis.
Median total IgE levels of patients with definite and possible ABPA were 2144 kU/L and 2597 kU/L, respectively (not significant), while the median specific IgE to Aspergillus fumigatus, at 4.35 kUA/L and 1.47 kUA/L respectively, differed significantly (comparison of standard deviations: F-statistic 3.2267, significance level p = 0.040). Mean levels of IgG anti-Aspergillus fumigatus in the three groups (definite, possible and no evidence of ABPA) were compared using ANOVA (Statgraphics Centurion Professional XV, Statpoint Inc). The mean level of IgG anti-Aspergillus fumigatus (Gm3) in definite ABPA was 125.17 mgA/L (± SD 54.84, 95% CI 92.03-158.32), while mean Gm3 levels in possible and no ABPA were 18.61 mgA/L and 30.05 mgA/L, respectively. ANOVA showed a significant difference between the definite group and the other groups (p < 0.001). This was confirmed using multiple range tests (Fisher's least significant difference procedure). There was no significant difference between the possible ABPA and not-ABPA groups (p > 0.05). The study showed that a sizeable proportion of patients with asthma are sensitized to Aspergillus fumigatus in this part of India. A higher cut-off value of Gm3 ≥ 80 mgA/L provides higher serologic specificity for definite ABPA. Long-term studies would show whether patients with 'possible' ABPA and positive Gm3 later develop clear ABPA and differ in this respect from the Gm3-negative group. Serologic testing with clearly defined cut-offs is a valuable adjunct in the diagnosis of ABPA.
Keywords: allergic bronchopulmonary aspergillosis, Aspergillus fumigatus, asthma, IgE level
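The diagnostic workup above amounts to a simple rule check. The sketch below encodes the Modified ISHAM 2013 criteria exactly as quoted in the abstract; the function and argument names are our own, and the thresholds are those stated above:

```python
def meets_modified_isham_2013(asthma_or_cf, total_ige_ku_l,
                              sige_af_or_skin_positive,
                              eosinophilia_over_500, sigg_af_positive,
                              radiographic_opacities):
    """Obligate criteria plus at least 2 of 3 supporting criteria."""
    # Obligate: asthma or cystic fibrosis, total IgE > 417 kU/L
    # (> 1000 IU/ml), and positive specific IgE to A. fumigatus
    # (or skin-test positivity).
    obligate = (asthma_or_cf and total_ige_ku_l > 417
                and sige_af_or_skin_positive)
    # Supporting: peripheral eosinophilia (> 500), positive specific IgG
    # to A. fumigatus, consistent radiographic opacities.
    supporting = sum([eosinophilia_over_500, sigg_af_positive,
                      radiographic_opacities])
    return bool(obligate and supporting >= 2)

# A patient resembling the 'definite' group medians quoted above:
print(meets_modified_isham_2013(True, 2144, True, True, True, False))  # True
```

Note that, as in the study design, the specific IgG result is only one of the three supporting criteria, so a patient can be classified 'definite' without it.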
583 Narcissism in the Life of Howard Hughes: A Psychobiographical Exploration
Authors: Alida Sandison, Louise A. Stroud
Abstract:
Narcissism is a personality configuration with both normal and pathological expressions. It is highly complex and linked to a broad field of research. There are both dimensional and categorical conceptualisations of narcissism, and a variety of theoretical formulations have been put forward to understand the narcissistic personality configuration. Currently, Kernberg's object relations theory is well supported for this purpose. The complexity of, and particular defense mechanisms at play in, the narcissistic personality make it a difficult configuration to study, and one that warrants further research. Psychobiography as a methodology allows for the exploration of a lived life and is thus useful for surmounting these inherent challenges. Narcissism has long been a focus of academic interest, and although much research has been done in this area, to the researchers' knowledge narcissistic dynamics have never been explored in a psychobiographical format. Thus, the primary aim of the research was to explore and describe narcissism in the life of Howard Hughes, with the objective of gaining further insight into narcissism through this unconventional research approach. Hughes was chosen as the subject of the study because he is renowned as an eccentric billionaire who had a revolutionary effect on the world but was concurrently disturbed by his personal pathologies. Hughes was dynamic in three different sectors, namely motion pictures, aviation and gambling. He became increasingly reclusive as he entered middle age. From his early fifties he was agoraphobic, and the social network of connectivity that could reasonably be expected of someone at the top of their field was notably distorted. Due to his strong narcissistic personality configuration and the interpersonal difficulties he experienced, Hughes represents an ideal figure through whom to explore narcissism.
The study used a single case study design and purposive sampling to select Hughes. Qualitative data were sampled, using secondary data sources. Given that Hughes was a famous figure, there is a wealth of information on his life, primarily biographical: books written about his life, and archival material in the form of newspaper articles, interviews and movies. Gathered data were triangulated to avoid author bias and increase the credibility of the data used. Data were collected using Yin's guidelines for data collection and analysed using Miles and Huberman's strategy of data analysis, which consists of three steps: data reduction, data display, and conclusion drawing and verification. Patterns that emerged in the data highlighted the defense mechanisms used by Hughes, in particular splitting and projection, in defending his sense of self. These defense mechanisms help us understand the high levels of entitlement and paranoia Hughes experienced. The findings provide further insight into his sense of isolation and difference and the consequent difficulty he experienced in maintaining connections with others. They furthermore confirm the effectiveness of Kernberg's theory in understanding narcissism through observing an individual life.
Keywords: Howard Hughes, narcissism, narcissistic defenses, object relations
582 Tax Administration Constraints: The Case of Small and Medium Size Enterprises in Addis Ababa, Ethiopia
Authors: Zeleke Ayalew Alemu
Abstract:
This study aims to investigate tax administration constraints in Addis Ababa, with a focus on small and medium-sized enterprises, by identifying issues and constraints in tax administration and assessment. The study identifies problems associated with taxpayers and tax-collecting authorities in the city. The research used qualitative and quantitative research designs and employed questionnaires, focus group discussions and key informant interviews for primary data collection, and also used secondary data from different sources. The study identified many constraints that taxpayers face. Among others, the inefficiency of tax administration offices, reluctance to respond to taxpayers' questions, limited tax assessment and administration knowledge and skills, and corruption and unethical practices are the major ones. Besides, the tax laws and regulations are complex and not enforced equally and fully on all taxpayers, leading to a prevalence of business entities that do not pay taxes. This results in an uneven playing field. Consequently, the tax system at present is neither fair nor transparent, and it increases compliance costs. In case of dispute, the appeal process is excessively long, and the tax authority's decision is irreversible. The Value Added Tax (VAT) administration and compliance system is not well designed, and VAT has created economic distortion between VAT-registered and non-registered taxpayers. Cash register machine administration and the reporting system are major burdens for taxpayers. With regard to taxpayers, there is a lack of awareness of tax laws and documentation. Based on these and other findings, the study puts forward recommendations such as ensuring fairness and transparency in tax collection and administration, enhancing the efficiency of tax authorities through the use of modern technologies and upgraded human resources, conducting extensive awareness-creation programs, and enforcing tax laws in a fair and equitable manner.
The objective of this study is to assess the problems, weaknesses and limitations of small and medium-sized enterprise taxpayers, tax authority administrations, and laws as sources of inefficiency and dissatisfaction, in order to forward recommendations that bring about efficient, fair and transparent tax administration. The entire study was conducted in a participatory and process-oriented manner, involving all partners and stakeholders at all levels. Accordingly, the researcher used participatory assessment methods to generate both secondary and primary data, qualitative and quantitative, in the field. The research team held focus group discussions (FGDs) with 21 people from Addis Ababa City Administration tax offices and selected medium and small taxpayers. The study team also conducted 10 key informant interviews (KIIs) with people selected from the various segments of stakeholders. The lead researcher, along with research assistants, handled the KIIs using a pre-designed semi-structured questionnaire.
Keywords: taxation, tax system, tax administration, small and medium enterprises
581 The Need for Sustaining Hope during Communication of Unfavourable News in the Care of Children with Palliative Care Needs: The Experience of Mothers and Health Professionals in Jordan
Authors: Maha Atout, Pippa Hemingway, Jane Seymour
Abstract:
A preliminary systematic review shows that health professionals experience tension when communicating with the parents and family members of children with life-threatening and life-limiting conditions. On the one hand, they want to promote open and honest communication; on the other, they are apprehensive about fostering an unrealistic sense of hope. Defining the boundary between information that might offer reasonable hope and information that results in false reassurance is challenging. Some healthcare providers worry that instilling a false sense of hope could motivate parents to seek continued aggressive treatment for their child, which in turn might cause the patient further unnecessary suffering. To date, there has been a lack of research in the Middle East regarding how healthcare providers do or should communicate bad news; in particular, the issue of hope in the field of paediatric palliative care has not been researched thoroughly. This study aims to explore, from the perspective of patients' mothers, physicians, and nurses, the experience of communicating and receiving bad news in the care of children with palliative care needs. Data were collected using a collective qualitative case study approach across three paediatric units in a Jordanian hospital. Two data collection methods were employed: participant observation and semi-structured interviews. The overall number of cases was 15, with a total of 56 interviews completed with mothers (n=24), physicians (n=12), and nurses (n=20), and 197 observational hours logged. The findings demonstrate that mothers wanted their doctors to provide them with hopeful information about the future progression of their child's illness. Although some mothers asked their doctors for honest information regarding the condition of their child, they still considered a sense of hope to be essential for coping with caring for their child.
According to the mothers, hope was critical to treatment: it helped them stay committed to the treatment and protected them, to some extent, from the extreme emotional suffering that would occur if they lost hope. The health professionals agreed with the mothers on the importance of hope, so long as it was congruent with the stage and severity of each patient's disease. This study concludes that while parents typically insist on knowing all relevant information when their child is diagnosed with a severe illness, they consider hope an essential part of life and find it very difficult to handle suffering without any glimmer of it. The study also finds that using negative terms has extremely adverse effects on parents' emotions. Hence, although the mothers asked the doctors to be as honest as they could, they still wanted the physicians to convey a positive message by communicating information in a sensitive manner that includes hope.
Keywords: health professionals, children, communication, hope, information, mothers, palliative care
580 Beyond Objectification: Moderation Analysis of Trauma and Overexcitability Dynamics in Women
Authors: Ritika Chaturvedi
Abstract:
Introduction: Sexual objectification, characterized by the reduction of an individual to a mere object of sexual desire, remains a pervasive societal issue with profound repercussions for individual well-being. Such experiences, often rooted in systemic and cultural norms, have long-lasting implications for mental and emotional health. This study explores the relationship between experiences of sexual objectification and insidious trauma, and further investigates the potential moderating effects of overexcitabilities as proposed by Dabrowski's theory of positive disintegration. Methodology: The research involved a cohort of 204 women, aged 18 to 65 years. Participants completed self-administered questionnaires designed to capture their experiences of sexual objectification. The questionnaire also assessed symptoms indicative of insidious trauma and explored overexcitabilities across five domains: emotional, intellectual, psychomotor, sensual, and imaginational. Employing multiple regression and moderation analysis, the study sought to decipher the interplay among these variables. Findings: The results revealed a positive correlation between experiences of sexual objectification and the onset of symptoms indicative of insidious trauma, underscoring the profound and detrimental effects of sexual objectification on an individual's psychological well-being. Interestingly, the moderation analyses introduced a more nuanced picture, highlighting the differential roles of the various overexcitabilities. Specifically, emotional, intellectual, and sensual overexcitabilities were found to exacerbate trauma symptomatology. In contrast, psychomotor overexcitability emerged as a protective factor, demonstrating a mitigating influence on the relationship between sexual objectification and trauma.
Implications: The findings hold significant implications for a diverse array of stakeholders, encompassing mental health practitioners, educators, policymakers, and advocacy groups. The identified moderating effects of overexcitabilities emphasize the need for tailored interventions that consider individual differences in coping and resilience mechanisms. By recognizing the pivotal role of overexcitabilities in modulating the traumatic consequences of sexual objectification, this research advocates for the development of more nuanced and targeted support frameworks. Moreover, the study underscores the importance of continued research to unravel the mechanisms and dynamics underpinning these relationships. Such work is crucial for the evolution of informed, evidence-based interventions and strategies aimed at mitigating the adverse effects of sexual objectification and promoting holistic well-being.
Keywords: sexual objectification, insidious trauma, emotional overexcitability, intellectual overexcitability, sensual overexcitability, psychomotor overexcitability, imaginational overexcitability
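A moderation analysis of the kind described above amounts to testing an interaction term in a regression: the outcome is regressed on the predictor, the moderator, and their product. The sketch below fits such a model on synthetic data; the variable names and all numbers are ours, not the study's instruments or results:

```python
import numpy as np

# Moderation sketch: trauma ~ objectification + overexcitability + interaction.
# Data are synthetic with a built-in interaction effect of 0.5.
rng = np.random.default_rng(0)
n = 204  # same sample size as the study, values otherwise invented
objectification = rng.normal(size=n)
overexcit = rng.normal(size=n)
trauma = (1.0 + 0.8 * objectification + 0.2 * overexcit
          + 0.5 * objectification * overexcit
          + rng.normal(scale=0.1, size=n))

# Design matrix: intercept, main effects, and the product (moderation) term.
X = np.column_stack([np.ones(n), objectification, overexcit,
                     objectification * overexcit])
beta, *_ = np.linalg.lstsq(X, trauma, rcond=None)

# beta[3] estimates the interaction: a nonzero value means the moderator
# changes the objectification -> trauma slope (exacerbating or protective).
print(beta.round(2))
```

A significantly positive interaction coefficient corresponds to an exacerbating moderator; a negative one, as reported for psychomotor overexcitability, corresponds to a protective moderator.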
579 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans
Authors: Jian Wu
Abstract:
For some years now, some French companies have integrated CSR (corporate social responsibility) criteria into their variable remuneration plans to 'restore a good working atmosphere' and 'preserve the natural environment'. These CSR criteria are based on concerns about environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was published jointly by ORSE (the French acronym for the Observatory on CSR) and PricewaterhouseCoopers. Faced with this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach in our study. First, we examine the debate between the 'orthodox' point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain 'invisible hand' which helps to resolve all problems: when companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility that firms should have is profit-seeking while respecting the minimum legal requirements. The CSR school, however, considers that, as long as the economic system is not perfect, there is no 'invisible hand' that can arrange everything in good order. This means that we cannot count on any 'divine force' to make corporations responsible towards society; something more needs to be done in addition to firms' economic and legal obligations. We then rely on financial theories and empirical evidence to examine the foundations of CSR. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, the environment, and society. Social contract theory tells us that there are tacit 'social contracts' between a company and society itself.
A firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or a bad reputation. Legitimacy theory tells us that corporations have to 'legitimize' their actions towards society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between a firm's CSR performance and its financial performance. We note that, due to difficulties in defining these performances, this relationship remains ambiguous despite the numerous research works carried out in the field. Finally, we ask whether the integration of CSR criteria into variable remuneration plans – so far practiced mainly in big companies – should be extended to others. After investigation, we note that two groups of firms have the greatest need. The first involves industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies. The second involves companies that are under pressure in terms of returns in the face of international competition.
Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory
578 Sustainable Production of Pharmaceutical Compounds Using Plant Cell Culture
Authors: David A. Ullisch, Yantree D. Sankar-Thomas, Stefan Wilke, Thomas Selge, Matthias Pump, Thomas Leibold, Kai Schütte, Gilbert Gorr
Abstract:
Plants have been considered a source of natural substances for ages. Secondary metabolites from plants are utilized especially in medical applications but are increasingly of interest as cosmetic ingredients and in the field of nutraceuticals. However, supply from natural harvest can be limited by numerous factors, e.g. endangered species, low product content, climate impacts and cost-intensive extraction. In the pharmaceutical industry especially, the ability to provide sufficient amounts of product at high quality is an additional requirement which in some cases is difficult to fulfill through plant harvest. Whereas in many cases the complexity of secondary metabolites precludes chemical synthesis on a reasonable commercial basis, plant cells contain the biosynthetic pathway – a natural chemical factory – for a given compound. A promising approach for the sustainable production of natural products is plant cell fermentation (PCF®). A thorough development process comprises the identification of a high-producing cell line, optimization of growth and production conditions, the development of a robust and reliable production process, and its scale-up. To ensure persistent, long-lasting production, the development of cryopreservation protocols and the generation of working cell banks are further important requirements. So far, the most prominent example of a PCF® process is the production of the anticancer compound paclitaxel. To demonstrate the power of plant suspension cultures, here we present three case studies: 1) For more than 17 years, Phyton has produced paclitaxel at industrial scale, i.e. up to 75,000 L. At 60 g/kg dw, this fully controlled process, which is operated according to GMP, delivers outstandingly high yields. 2) Thapsigargin is another anticancer compound, currently isolated from the seeds of Thapsia garganica.
Thapsigargin is a powerful cytotoxin – a SERCA inhibitor – and the precursor of the derivative ADT, the key ingredient of the investigational prodrug mipsagargin (G-202), which is in several clinical trials. Phyton has successfully generated plant cell lines capable of expressing this compound. Here we present data from the screening for high-producing cell lines. 3) The third case study covers ingenol-3-mebutate. This compound is found at very low concentrations in the milky sap of intact plants of the Euphorbiaceae family. Ingenol-3-mebutate is used in Picato®, which is approved against actinic keratosis. The generation of cell lines expressing significant amounts of ingenol-3-mebutate is another example underlining the strength of plant cell culture. The authors gratefully acknowledge Inspyr Therapeutics for funding.
Keywords: ingenol-3-mebutate, plant cell culture, sustainability, thapsigargin
577 High Efficiency Double-Band Printed Rectenna Model for Energy Harvesting
Authors: Rakelane A. Mendes, Sandro T. M. Goncalves, Raphaella L. R. Silva
Abstract:
The concepts of energy harvesting and wireless energy transfer have been widely discussed in recent times. There are several ways to create autonomous systems for collecting ambient energy, such as solar, vibratory, thermal, electromagnetic, and radiofrequency (RF), among others. In the case of RF, it is possible to collect up to 100 μW/cm². To collect and/or transfer energy in RF systems, a device called a rectenna is used, defined as the junction of an antenna and a rectifier circuit. The rectenna presented in this work is resonant at the frequencies of 1.8 GHz and 2.45 GHz. The 1.8 GHz band is part of the GSM/LTE band. GSM (Global System for Mobile Communications) is a mobile telephony frequency band, also called second-generation (2G) mobile networks; it standardized mobile telephony worldwide and was originally developed for voice traffic. LTE (Long Term Evolution), or fourth generation (4G), emerged to meet the demand for wireless access to services such as Internet access, online games, VoIP and video conferencing. The 2.45 GHz frequency is part of the ISM (Industrial, Scientific and Medical) band, which is internationally reserved for industrial, scientific and medical development with no need for licensing; its only restrictions relate to maximum transmitted power and bandwidth, which must be kept within certain limits (in Brazil the band is 2.4-2.4835 GHz). The rectenna presented in this work was designed to achieve efficiency above 50% for an input power of -15 dBm. In wireless energy-harvesting systems the signal power is very low and varies greatly, which is why this ultra-low input power was chosen. The rectenna was built on the low-cost FR4 (Flame Retardant 4) substrate; the selected antenna is a microstrip antenna consisting of a meandered dipole, optimized using the CST Studio software.
This antenna has high efficiency, high gain and high directivity. Gain measures how efficiently an antenna captures the signals transmitted by another antenna and/or station. Directivity is the quality an antenna has of capturing energy preferentially in a certain direction. The rectifier circuit used has a series topology and was optimized using Keysight's ADS software. The rectifier circuit is the most complex part of the rectenna, since it includes the diode, a non-linear component. The chosen diode is the Schottky diode SMS7630, which presents a low barrier voltage (135-240 mV) and a wider band compared to other types of diodes; these attributes make it well suited to this type of application. The rectifier circuit also uses an inductor and a capacitor, which form the input and output filters of the rectifier. The inductor serves to decrease the dispersion effect on the efficiency of the rectifier circuit. The capacitor serves to remove the AC component of the rectified signal, smoothing the otherwise pulsating output.
Keywords: dipole antenna, double-band, high efficiency, rectenna
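As a rough check on the power budget quoted above, the power available to a harvesting antenna is the ambient power density times the antenna's effective aperture, A_e = Gλ²/4π. A small sketch (the gain and density values are illustrative assumptions, not measurements from this rectenna):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def effective_aperture(freq_hz, gain_linear):
    """Effective aperture A_e = G * lambda^2 / (4*pi), in m^2."""
    lam = C / freq_hz
    return gain_linear * lam ** 2 / (4 * math.pi)

def harvested_power_w(power_density_w_m2, freq_hz, gain_linear):
    """Available RF power = ambient power density * effective aperture."""
    return power_density_w_m2 * effective_aperture(freq_hz, gain_linear)

def dbm_to_watts(p_dbm):
    """Convert dBm to watts; -15 dBm is about 31.6 uW at the input."""
    return 10 ** (p_dbm / 10.0) / 1000.0

# 2.45 GHz, dipole-like gain of 1.64 (2.15 dBi), assumed ambient density
# of 1 uW/cm^2 = 0.01 W/m^2 (well below the 100 uW/cm^2 ceiling above).
p = harvested_power_w(0.01, 2.45e9, 1.64)
print(f"{p * 1e6:.1f} uW")  # 19.5 uW
```

This order of magnitude motivates the design target: tens of microwatts at the antenna, hence a rectifier optimized for a -15 dBm (about 31.6 μW) input.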
576 Basics of Gamma Ray Burst and Its Afterglow
Authors: Swapnil Kumar Singh
Abstract:
Gamma-ray bursts (GRBs), short and intense pulses of soft γ-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The delayed emission in X-ray, optical, and radio wavelengths, or 'afterglow', following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity among the afterglow population, probably reflecting diversity in energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived afterglow is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio): a slowly fading emission created by collisions between the burst ejecta and interstellar gas. In X-ray wavelengths, the GRB afterglow fades quickly at first, then transitions to a less steep drop-off (it does other things after that, but we will ignore those for now). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta.
In many ways, "reverse" shock can be misleading; this shock is still moving outward at relativistic velocity in the rest frame of the star, but it ploughs backward through the ejecta in their frame and slows the expansion. This reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations of the spectral energy distributions of very bright GRB afterglows support this broad picture. The bluer light (optical and X-ray) appears to follow the typical synchrotron forward-shock expectation (note that the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). More research in GRBs and particle physics is needed to unfold the mysteries of the afterglow.
Keywords: GRB, synchrotron, X-ray, isotropic energy
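The early-time power-law spectrum described above, F ∝ E^β, can be illustrated numerically. The following sketch (not from the paper; β = -0.8 and the flux normalization are invented for illustration) recovers the spectral index from a synthetic spectrum with a log-log least-squares fit:

```python
import numpy as np

def fit_spectral_index(energy, flux):
    """Estimate the spectral index beta in F ∝ E**beta by a
    least-squares fit in log-log space."""
    slope, intercept = np.polyfit(np.log(energy), np.log(flux), 1)
    return slope  # the slope of log F vs log E is beta

# Synthetic afterglow spectrum with an illustrative beta = -0.8
E = np.logspace(0, 2, 50)   # photon energies, arbitrary units
F = 3.0 * E ** -0.8         # power-law flux
beta = fit_spectral_index(E, F)
print(round(beta, 3))       # recovers -0.8
```

With real data, the scatter of the fit residuals would also indicate whether a single power law (one synchrotron segment) is adequate or a spectral break is present.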
575 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement
Authors: Brittany Richardson, Ying Wang
Abstract:
For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is propitious for tracking recovery progress, preventing potential injury, and making long-range training plans. Assessments include basic measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients' progress. Unlike traditional assessments, in this paper, we present a deep learning based face recognition algorithm for accurate, comprehensive, and trackable assessment. Based on the results from our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and, furthermore, makes more comprehensive assessments by tracking muscle groups over time using a designed landmark detection method. The system also includes the function of grading and correcting the clients' form during exercise. Experienced coaches and personal trainers can tell clients' limits based on their facial expressions and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained with people's facial expressions to differentiate challenge levels for clients, and it uses landmark detection for subtle changes in muscle group movements.
It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region, and the distal mobility of the glenohumeral joint, as well as distal mobility and its effect on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch, Fitbit, etc., for improved training and testing performance. The system itself doesn't require history data for an individual client, but a client's history data can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise plans, execution, progress tracking, and performance.
Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments
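The abstract does not specify the network architecture, so as a hedged sketch of the core operation such a CNN applies to a face image, the following implements one convolution + ReLU feature-extraction step in plain NumPy; the 6×6 "face patch" and the gradient kernel are invented for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity applied after each convolution."""
    return np.maximum(x, 0.0)

# Toy 6x6 "face patch" and a 3x3 horizontal-gradient kernel
patch = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
features = relu(conv2d(patch, kernel))
print(features.shape)  # (4, 4) feature map fed to deeper layers
```

In a full system, many such learned kernels, stacked with pooling and dense layers, would map the extracted features to the difficulty-level classes.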
574 Women’s Perceptions of DMPA-SC Self-Injection in Malawi
Authors: Mandayachepa C. Nyando, Lauren Suchman, Innocencia Mtalimanja, Address Malata, Tamanda Jumbe, Martha Kamanga, Peter Waiswa
Abstract:
Background: Subcutaneous depot medroxyprogesterone acetate (DMPA-SC) is an innovation in contraceptive methods that allows users to inject themselves with a hormonal contraceptive in their own homes. Self-injection (SI) of DMPA-SC has the potential to improve the accessibility of family planning to women who want it and who are capable of injecting themselves. Malawi started implementing this innovation in 2018, and SI was incorporated into the DMPA-SC delivery strategy from the outset. Methodology: This study involved two districts in Malawi where DMPA-SC SI was rolled out: Mulanje and Ntchisi. We used a qualitative cross-sectional study design in which 60 in-depth interviews were conducted with women of reproductive age (15-45 years). These included women who were SI users, non-users, and women on any other contraceptive method. Interviews were tape-recorded, and data were transcribed and then analysed using Dedoose software, where themes were categorised into parent and child themes. Results: Women perceived DMPA-SC SI as uniquely private, convenient, and less painful when self-injected. In terms of privacy, women in Mulanje and Ntchisi especially appreciated that self-injecting allowed them to use the method covertly from partners. Some men do not allow their spouses to use modern contraceptive methods; hence women prefer to use them covertly. “… but I first reach out to men because the strongest power is answered by men (MJ015).” In addition, women reported that SI offers privacy from family and community and less contact with healthcare providers. These aspects of privacy were especially valued in areas where there is a high degree of mistrust around family planning and among those who feel judged or antagonized purchasing contraception, such as young unmarried women.
Women also valued the convenience SI provided in terms of their ability to save time by injecting themselves at home rather than visiting a healthcare provider and to have more reliable access to contraception, particularly in the face of stockouts. SI allows for stocking up on doses to accommodate shifting work schedules, future stockouts, or hard times such as the COVID-19 period, when movement of people was limited. Conclusion: Our findings suggest that SI may meet the needs of many women in Malawi as long as the barriers are eliminated. The barriers women mentioned include fear of self-injecting and concerns about proper storage of DMPA-SC; these barriers can be eliminated by proper training. The findings also set the scene for policy revision and direction at the national level and for integrating the approach with national family planning strategies in Malawi. They provide insights that may guide future implementation strategies, strengthen non-clinic family planning access programs, and stimulate continued research.
Keywords: family planning, Malawi, Sayana Press, self-injection
573 Synthesis of Belite Cements at Low Temperature from Silica Fume and Natural Commercial Zeolite
Authors: Tatiana L. Avalos-Rendon, Elias A. Pasten Chelala, Carlos J. Mendoza Escobedo, Ignacio A. Figueroa, Victor H. Lara, Luis M. Palacios-Romero
Abstract:
The cement industry is facing cost increments in energy supply, requirements for the reduction of CO₂, and an insufficient supply of good-quality raw materials. In view of these environmental issues, the cement industry must change its consumption patterns and reduce CO₂ emissions to the atmosphere. This can be achieved by generating environmental consciousness, which encourages the use of industrial by-products and/or recycling for the production of cement, as well as alternative, environment-friendly methods of synthesis which reduce CO₂. Calcination is the conventional method for obtaining Portland cement clinker. This method consists of grinding and mixing raw materials (limestone, clay, etc.) in an adequate dosage. The resulting mix has a clinkerization temperature of 1450 °C, at which the formation of the main component, alite (Ca₃SiO₅, C₃S), occurs. Considering that the energy required to produce C₃S is 1810 kJ kg⁻¹, the calcination method presents two major disadvantages: long thermal treatment and elevated synthesis temperatures, both of which cause high emissions of carbon dioxide (CO₂) to the atmosphere. Belite Portland clinker is characterized by a low content of calcium oxide (CaO), which diminishes the presence of alite and favors the formation of belite (β-Ca₂SiO₄, C₂S), so clinker production requires a reduced energy consumption (1350 kJ kg⁻¹), releasing less CO₂ to the atmosphere. Conventionally, β-Ca₂SiO₄ is synthesized by the calcination of calcium carbonate (CaCO₃) and silicon dioxide (SiO₂) through a solid-state reaction at temperatures greater than 1300 °C, and the resulting belite shows low hydraulic reactivity. Therefore, this study concerns a new, simple, modified combustion method for the synthesis of two belite cements at low temperature (1000 °C). Silica fume, a by-product of the metallurgical industry, and commercial natural zeolite were utilized as raw materials.
These are considered low-cost materials and were utilized with no additional purification process. The belite cements were characterized by XRD, SEM, EDS, and BET techniques. The hydration capacity of the belite cements was calculated, while mechanical strength was determined in ordinary Portland cement (PC) specimens with a 10% partial replacement by the belite cements obtained. Results showed that the belite cements presented relatively high surface areas and, at early ages, mechanical strengths similar to those of alite cement and comparable to the strengths of belite cements obtained by other synthesis methods. The cements obtained in this work present good hydraulic reactivity properties.
Keywords: belite, silica fume, zeolite, hydraulic reactivity
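Using the formation energies quoted in the abstract, the relative energy saving of belite over alite clinker can be checked with a line of arithmetic:

```python
# Formation energies per kg of clinker phase (figures from the abstract)
E_ALITE = 1810.0   # kJ/kg for Ca3SiO5 (C3S)
E_BELITE = 1350.0  # kJ/kg for beta-Ca2SiO4 (C2S)

savings = (E_ALITE - E_BELITE) / E_ALITE
print(f"{savings:.1%} lower energy demand for belite clinker")  # 25.4%
```

This roughly one-quarter reduction in formation energy, together with the lower synthesis temperature (1000 °C vs. 1450 °C), is the basis for the reduced CO₂ emissions claimed for belite cements.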
572 Human 3D Metastatic Melanoma Models for in vitro Evaluation of Targeted Therapy Efficiency
Authors: Delphine Morales, Florian Lombart, Agathe Truchot, Pauline Maire, Pascale Vigneron, Antoine Galmiche, Catherine Lok, Muriel Vayssade
Abstract:
Targeted therapy molecules are used as a first-line treatment for metastatic melanoma with B-Raf mutation. Nevertheless, these molecules can cause side effects, and they are effective in only 50 to 60% of patients. Indeed, melanoma cell sensitivity to targeted therapy molecules depends on the tumor microenvironment (cell-cell and cell-extracellular matrix interactions). To better unravel the factors modulating cell sensitivity to B-Raf inhibition, we have developed and compared several melanoma models: from metastatic melanoma cells cultured as a monolayer (2D) to a co-culture in a 3D dermal equivalent. Cell response was studied in different melanoma cell lines, such as SK-MEL-28 (mutant B-Raf (V600E), sensitive to Vemurafenib) and SK-MEL-3 (mutant B-Raf (V600E), resistant to Vemurafenib), and in a primary culture of dermal human fibroblasts (HDFn). Assays were initially performed in monolayer cell culture (2D) and then in a 3D dermal equivalent (dermal human fibroblasts embedded in a collagen gel). All cell lines were treated with Vemurafenib (a B-Raf inhibitor) for 48 hours at various concentrations. Cell sensitivity to treatment was assessed under various aspects: cell proliferation (cell counting, EdU incorporation, MTS assay), MAPK signaling pathway analysis (Western blotting), apoptosis (TUNEL), cytokine release (IL-6, IL-1α, HGF, TGF-β, TNF-α) upon Vemurafenib treatment (ELISA), and histology for the 3D models. In the 2D configuration, the inhibitory effect of Vemurafenib on cell proliferation was confirmed on SK-MEL-28 cells (IC50 = 0.5 µM) but not on the SK-MEL-3 cell line. No apoptotic signal was detected in SK-MEL-28-treated cells, suggesting a cytostatic rather than cytotoxic effect of Vemurafenib. The inhibition of SK-MEL-28 cell proliferation upon treatment was correlated with a strong decrease in the expression of phosphorylated proteins involved in the MAPK pathway (ERK, MEK, and AKT/PKB).
Vemurafenib (from 5 µM to 10 µM) also slowed down HDFn proliferation, regardless of the cell culture configuration (monolayer or 3D dermal equivalent). SK-MEL-28 cells cultured in the dermal equivalent were still sensitive to high Vemurafenib concentrations. To better characterize the impact of each cell population (melanoma cells, dermal fibroblasts) on Vemurafenib efficacy, cytokine release is being studied in the 2D and 3D models. We have successfully developed and validated a relevant 3D model mimicking cutaneous metastatic melanoma and its tumor microenvironment. This 3D melanoma model will be made more complex by adding a third cell population, keratinocytes, allowing us to characterize the influence of the epidermis on melanoma cell sensitivity to Vemurafenib. In the long run, the establishment of more relevant 3D melanoma models with patients’ cells might be useful for personalized therapy development. The authors would like to thank the Picardie region and the European Regional Development Fund (ERDF) 2014/2020 for funding this work, and the Oise committee of "La Ligue contre le cancer".
Keywords: 3D human skin model, melanoma, tissue engineering, vemurafenib efficiency
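The reported IC50 of 0.5 µM is the concentration halving proliferation. As an illustration only (the abstract does not describe the fitting model), a standard Hill-type dose-response curve shows how such a value is read off a dilution series; the concentrations and Hill coefficient below are hypothetical:

```python
import numpy as np

def dose_response(conc, ic50, hill=1.0):
    """Fraction of proliferating cells remaining at drug concentration
    conc (same units as ic50), via a standard Hill equation."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical Vemurafenib dilution series (µM) for a sensitive line
conc = np.array([0.0625, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0])
viab = dose_response(conc, ic50=0.5)
# IC50 read off as the concentration giving a 50% response
print(conc[np.argmin(np.abs(viab - 0.5))])  # 0.5
```

In practice the IC50 would be estimated by fitting this curve to the measured proliferation data (e.g., MTS readings) rather than read from an exact model.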
571 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models
Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg
Abstract:
Storm surge is an abnormal water level caused by a storm, and accurate prediction of storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques that combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature; for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks: MOS, for example, is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that combines several instances of Bayesian Model Averaging (BMA), and an ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast; to do so, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles to forecast storm surge levels and compare them with several existing models in the literature. We then investigate whether developing a complex ensemble model is indeed needed; to this end, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark.
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial; thus, we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its timing. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study is generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction
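As a sketch of the correlation-weighted ensembles described above (the paper's exact weighting scheme is not given, so this is an assumed variant), the following combines three hypothetical member forecasts using each member's correlation with observations as its weight, and compares the result against the simple-average benchmark by RMSE:

```python
import numpy as np

def rmse(forecast, observed):
    """Root mean square error of a forecast series."""
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

def weighted_ensemble(forecasts, observed_train):
    """Combine member forecasts with weights proportional to each
    member's (non-negative) correlation with training observations."""
    corrs = np.array([np.corrcoef(f, observed_train)[0, 1] for f in forecasts])
    weights = np.clip(corrs, 0.0, None)
    weights /= weights.sum()
    return np.average(forecasts, axis=0, weights=weights), weights

# Hypothetical surge series (m): three member models vs. observations
obs = np.array([0.2, 0.5, 1.1, 1.8, 1.2, 0.6])
members = np.array([
    [0.1, 0.4, 1.0, 1.7, 1.1, 0.5],   # good member, small low bias
    [0.3, 0.6, 1.3, 2.1, 1.4, 0.8],   # biased high
    [0.5, 0.2, 0.9, 1.0, 1.5, 0.3],   # noisy member
])
ens, w = weighted_ensemble(members, obs)
simple = members.mean(axis=0)
print(rmse(ens, obs) < rmse(simple, obs))  # weighted beats simple here
```

In a real application the weights would be estimated on a training period and then applied to held-out storms, as in the paper's testing-period evaluation.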
570 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different public-sector responses to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic good, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in transport science, but until recently there were not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular downtown zone for paying, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown that is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google’s Distance Matrix API (Application Programming Interface), which provides estimated travel times under forecast traffic conditions between freely chosen coordinate pairs.
From the difference between free-flow and congested travel times, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas which lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation instead of protecting only a very limited downtown area.
Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
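A minimal sketch of the congestion-detection step: given the `duration` and `duration_in_traffic` fields that a Distance Matrix API element returns for a segment, a relative-delay index flags congested hours. The segment timings below are hypothetical, and the 50% delay threshold is an assumption, not the study's:

```python
def congestion_index(duration_s, duration_in_traffic_s):
    """Relative delay on a road segment: 0 means free flow, 1 means
    travel takes twice the free-flow time. Inputs correspond to the
    'duration' and 'duration_in_traffic' fields of a Distance Matrix
    API response element (both in seconds)."""
    return duration_in_traffic_s / duration_s - 1.0

# Hypothetical travel times for one segment at various hours (seconds)
free_flow = 300
by_hour = {7: 310, 8: 540, 9: 480, 12: 330, 17: 600, 22: 300}
hot_hours = [h for h, t in by_hour.items()
             if congestion_index(free_flow, t) >= 0.5]  # >= 50% delay
print(hot_hours)  # [8, 9, 17] -- morning and evening peaks
```

Repeating this per segment over a day yields the hot-spot map that a place-based charge could be keyed to.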
569 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements
Authors: Mohammad R. Bhuyan, Mohammad J. Khattak
Abstract:
Deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe. The service life of a flexible pavement diminishes significantly because of reflective cracks, and highway agencies have struggled for decades to prevent or mitigate these cracks in order to increase pavement service lives. The root cause of reflective cracking is the shrinkage cracking which occurs in soil-cement bases during the cement hydration process, and the primary factor driving shrinkage is the cement content of the soil-cement mixture. As cement content increases, the soil-cement base gains the strength and durability necessary to withstand traffic loads, but higher cement content also creates more shrinkage, resulting in more reflective cracks in pavements. Historically, various states of the USA have used soil-cement bases for constructing flexible pavements. The state of Louisiana (USA) had been using 8 to 10 percent cement content to manufacture soil-cement bases; such traditional bases yield a 7-day compressive strength of 2.0 MPa (300 psi) and are termed cement stabilized design (CSD). As these CSD bases generate significant reflective cracks, another soil-cement base design with 4 to 6 percent cement content, called cement treated design (CTD), has been utilized; it yields a 7-day compressive strength of 1.0 MPa (150 psi). The reduction of cement content in the CTD base is expected to minimize shrinkage cracking, thus increasing pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases with respect to CSD bases used in flexible pavements. The Pavement Management System of the state of Louisiana was utilized to select flexible pavement projects with CSD and CTD bases that had good historical records and time-series distress performance data.
It should be noted that the state collects roughness and distress data for each 1/10th-mile section every 2-year period. In total, 120 CSD and CTD projects were analyzed in this research, where more than 145 miles (CTD) and 175 miles (CSD) of roadway data were accepted for performance evaluation and benefit-cost analyses. Here, the service life extension and the area under the distress performance curve were considered as benefits. It was found that CTD bases added 1 to 5 years of pavement service life based on transverse cracking as compared to CSD bases, while the service lives based on longitudinal and alligator cracking, rutting, and roughness index remained the same. Hence, CTD bases provide some service life extension (2.6 years, on average) for the controlling distress, transverse cracking, while being inexpensive due to their lower cement content. Consequently, CTD bases are 20% more cost-effective than traditional CSD bases when both are compared by the net benefit-cost ratio obtained from all distress types.
Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement
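As a hedged illustration of the benefit-cost comparison (the study's actual costs and benefit definition are not given in the abstract, so the ratio definition and every number below are hypothetical):

```python
def benefit_cost_ratio(service_life_yr, cost_per_mile):
    """Toy benefit-cost figure: service life delivered per unit cost.
    Hypothetical definition for illustration only; the study's net
    benefit-cost ratio also incorporates distress-performance 'area'
    benefits across all distress types."""
    return service_life_yr / cost_per_mile

# Hypothetical figures: CTD uses less cement, so it is cheaper per mile
# and (per the study) lasts ~2.6 years longer on transverse cracking.
csd = benefit_cost_ratio(service_life_yr=12.0, cost_per_mile=100_000)
ctd = benefit_cost_ratio(service_life_yr=14.6, cost_per_mile=98_000)
print(round((ctd - csd) / csd * 100, 1))  # percent advantage for CTD
```

The mechanism is the same as in the study: a modest life extension combined with a lower unit cost compounds into a clearly higher benefit-cost ratio.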
568 Using Balanced Scorecard Performance Metrics in Gauging the Delivery of Stakeholder Value in Higher Education: The Assimilation of Industry Certifications within a Business Program Curriculum
Authors: Thomas J. Bell III
Abstract:
This paper explores the value of assimilating certification training within a traditional course curriculum. This innovative approach is believed to increase stakeholder value within the Computer Information Systems program at Texas Wesleyan University. Stakeholder value is obtained from increased job marketability and the critical thinking skills that create employment-ready graduates. This paper views value as first developing the capability to earn an industry-recognized certification, which gives the student greater job placement compatibility while exercising critical thinking skills in a liberal arts business program. Graduates with industry-based credentials are often given preference in the hiring process, particularly in the information technology sector, and without a curriculum that better prepares students for an ever-changing employment market, a program's educational value becomes questionable. Since certifications are trending in the hiring process, academic programs should explore the viability of incorporating certification training into teaching pedagogy and course curricula. This study examines the use of the balanced scorecard across four performance dimensions (financial, customer, internal process, and innovation) to measure the stakeholder value of certification training within a traditional course curriculum. The balanced scorecard, as a strategic management tool, may provide insight for prioritizing resources and making the decisions needed to achieve various curriculum objectives and long-term value while meeting the needs of multiple stakeholders, such as students, universities, faculty, and administrators.
The research methodology will consist of quantitative analysis that includes (1) surveying over one hundred students in the CIS program to learn what factor(s) contributed to their certification exam success or failure, (2) interviewing representatives from the Texas Workforce Commission to identify employment needs and trends in the North Texas (Dallas/Fort Worth) area, (3) reviewing notable Workforce Innovation and Opportunity Act publications on training trends across several local business sectors, and (4) analyzing control variables to determine whether specific correlations exist between industry alignment and job placement. These findings may provide helpful insight into impactful pedagogical techniques and curricula that positively contribute to certification credentialing success. Should these industry-certified students land industry-related jobs that correlate with their certification credential value, arguably, stakeholder value has been realized.
Keywords: certification exam teaching pedagogy, exam preparation, testing techniques, exam study tips, passing certification exams, embedding industry certification and curriculum alignment, balanced scorecard performance evaluation
567 Comparison of Gestational Diabetes Influence on the Ultrastructure of Rectus Abdominis Muscle in Women and Rats
Authors: Giovana Vesentini, Fernanda Piculo, Gabriela Marini, Debora Damasceno, Angelica Barbosa, Selma Martheus, Marilza Rudge
Abstract:
Problem statement: Skeletal muscle is highly adaptable; muscle fiber composition and size can respond to a variety of stimuli, both physiological, such as pregnancy, and metabolic, such as diabetes mellitus. This study aimed to analyze the effects of pregnancy-associated diabetes on the rectus abdominis muscle (RA) and to compare these changes between rats and women. Methods: Female Wistar rats were maintained under controlled conditions and distributed into pregnant (P) and long-term mild pregnant diabetic (LTMd) groups (n=3 rats/group). Diabetes in rats was induced by streptozotocin (100 mg/kg, sc) on the first day of life, producing a hyperglycemic state between 120-300 mg/dL in adult life. Female rats were mated overnight and, at day 21 of pregnancy, were anesthetized and killed for harvesting of the maternal RA. Pregnant women who attended the Diabetes Prenatal Care Clinic of Botucatu Medical School were distributed into pregnant non-diabetic (Pnd) and gestational diabetic (GDM) groups (n=3 women/group). The diagnosis of GDM was established according to the ADA criteria (2016). The RA was harvested during cesarean section. Transversal cross-sections of the RA of both women and rats were analyzed by transmission electron microscopy. All procedures were approved by the Ethics Committee on Animal Experiments of the Botucatu Medical School (Protocol Number 1003/2013) and by the Botucatu Medical School Ethical Committee for Human Research in Medical Sciences (CAAE: 41570815.0.0000.5411). Results: The photomicrographs of the RA of rats revealed disorganized Z lines, thinning sarcomeres, and a usual quantity of intermyofibrillar mitochondria in the P group. The LTMd group showed swollen sarcoplasmic reticulum, dilated T tubules, and areas with sarcomere disruption. The ultrastructural analysis of the RA of Pnd non-diabetic women showed well-organized myofibrils forming intact sarcomeres, organized Z lines, and a normal distribution of intermyofibrillar mitochondria.
The GDM group revealed an increase in intermyofibrillar mitochondria, areas with sarcomere disruption, and increased lipid droplets. Conclusion: Pregnancy and diabetes induce adaptations in the ultrastructure of the rectus abdominis muscle in both women and rats, changing the architectural design of these tissues. However, in rats these changes are more severe, perhaps because, besides high blood glucose levels, the quadrupedal animal may suffer excessive mechanical tension during pregnancy due to gravity. These findings may suggest that such alterations are a risk factor contributing to the development of muscle dysfunction in women with GDM and may motivate treatment strategies in these patients.
Keywords: gestational diabetes, muscle dysfunction, pregnancy, rectus abdominis
566 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions
Authors: Pirta Palola, Richard Bailey, Lisa Wedding
Abstract:
Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable, and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as, to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive picture of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes.
For example, ecological resilience value may, in some cases, be best understood as binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; and 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights into the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
Keywords: economics of biodiversity, environmental valuation, natural capital, value function
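A minimal sketch of the near-binary resilience case described above: a steep logistic value function as the discriminator, with Monte Carlo sampling over an uncertain critical threshold. All parameter values (steepness, threshold distribution, current quantity) are illustrative assumptions, not from the study:

```python
import numpy as np

def logistic_value(q, q_crit, steepness=10.0):
    """Value of an ecosystem component as a function of its quantity q.
    Near-binary: value ~ 0 below the critical threshold q_crit and
    ~ 1 above it, as for ecological resilience."""
    return 1.0 / (1.0 + np.exp(-steepness * (q - q_crit)))

# Monte Carlo treatment of parameter uncertainty: sample the threshold
rng = np.random.default_rng(42)
q = 0.6                                   # current (normalized) quantity
thresholds = rng.normal(0.5, 0.05, 10_000)  # uncertain critical point
values = logistic_value(q, thresholds)
print(round(values.mean(), 2), round(values.std(), 2))
```

The sampled distribution of values, rather than a single point estimate, is what would enter the accounting system, making the threshold uncertainty explicit.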