Search results for: environmentally friendly IT systems
431 Evaluation of Kabul BRT Route Network with Application of Integrated Land-use and Transportation Model
Authors: Mustafa Mutahari, Nao Sugiki, Kojiro Matsuo
Abstract:
The four decades of war, lack of job opportunities, poverty, lack of services, and natural disasters in different provinces of Afghanistan have contributed to a rapid increase in the population of Kabul, the capital city of Afghanistan. No population census has been conducted since 1979, the first and last census in Afghanistan. However, according to estimates by Afghan authorities, the population of Kabul exceeds 4 million people, whereas the city was designed for two million. Although public transport is the major transport mode of Kabul residents, the responsible authorities have failed to supply the transportation systems the city requires. In addition, informal resettlement, a lack of intersection control devices, illegal street vendors, illegal and unstandardized on-street parking and bus stops, drivers' unprofessional behavior, weak traffic law enforcement, and blocked roads and sidewalks have contributed to the extreme traffic congestion of Kabul. In 2018, the government of Afghanistan approved the Kabul Urban Design Framework (KUDF), a vision for the future of Kabul that provides strategies and design guidance at different scales to direct urban development. Considering the city's traffic congestion and its budget limitations, the KUDF proposes a BRT route network with seven lines to reduce congestion, from which more than 50% of the Kabul population is expected to benefit. Based on the KUDF, the BRT mode share is planned to increase from 0% to 17% and later to 30% in the medium- and long-term planning scenarios, respectively. Therefore, a detailed research study is needed to evaluate the proposed system before implementation starts.
The integrated land-use and transport model is an effective tool to evaluate the Kabul BRT because its future-assessment capabilities take into account the interaction between land use and transportation. This research aims to analyze and evaluate the proposed BRT route network with the application of an integrated land-use and transportation model. The research estimates the population distribution and travel behavior of Kabul at a small zonal scale. The actual road network and detailed land-use data of the city are used to perform the analysis. The BRT corridors are evaluated considering not only their impacts on the spatial interactions in the city's transportation system but also their impacts on spatial development. Accordingly, the BRT is evaluated under scenarios that improve the Kabul transportation system based on the distribution of land use or spatial development, the planned development typology, and the population distribution of the city. The impacts of the improved transport system on the BRT network are analyzed, and the network is evaluated accordingly. In addition, the research focuses on the spatial accessibility of BRT stops, corridors, and line beneficiaries, and each BRT stop and corridor is evaluated in terms of both access and geographic coverage.
Keywords: accessibility, BRT, integrated land-use and transport model, travel behavior, spatial development
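The stop-level access-coverage measure mentioned above (the share of the population living within walking distance of a BRT stop) can be sketched as follows. The coordinates, zone populations, and the 500 m walking threshold are invented for illustration and are not taken from the Kabul data:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two WGS84 points.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def covered_share(zones, stops, radius_m=500.0):
    # Share of total population living within `radius_m` of any BRT stop.
    covered = total = 0.0
    for lat, lon, pop in zones:
        total += pop
        if any(haversine_m(lat, lon, s_lat, s_lon) <= radius_m for s_lat, s_lon in stops):
            covered += pop
    return covered / total if total else 0.0

# Hypothetical population centroids (lat, lon, population) and two stops.
zones = [(34.528, 69.172, 1200), (34.540, 69.180, 900), (34.560, 69.210, 1500)]
stops = [(34.529, 69.173), (34.541, 69.181)]
print(covered_share(zones, stops))  # fraction of population within 500 m of a stop
```

The same loop, run per stop instead of per network, gives the per-stop geographic coverage the abstract refers to.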
Procedia PDF Downloads 221
430 Building Carbon Footprint Comparison between Building Permit, as Built, and as Built with Circular Material Usage
Authors: Kadri-Ann Kertsmik, Martin Talvik, Kimmo Lylykangas, Simo Ilomets, Targo Kalamees
Abstract:
This study compares building carbon footprint (CF) values for a case study of a private house located in a cold climate, using the Level(s) methodology, which provides a framework for measuring the environmental performance of buildings throughout their life cycle, taking various factors into account. The study presents the results of three scenarios, comparing their carbon emissions and highlighting the benefits of circular material usage. The construction process was thoroughly documented, and all materials and components delivered to the construction site (including minuscule mechanical fasteners, each meter of cable, each kilogram of mortar, and the components of HVAC systems, among other things) were recorded. Transportation distances of each delivery, the fuel consumption of construction machines, and electricity consumption for temporary heating and electrical tools were also monitored. Using this detailed data on material and energy resources, the CF was calculated for two scenarios: one where circular material usage was applied and another where virgin materials were used instead of reused ones. The results were compared with the CF calculated from the building permit design model using the Level(s) methodology. To study the range of possible results in the early stage of CF assessment, the same building permit design was given to several experts. Results showed that embodied carbon values for the as-built scenario were significantly lower than the values predicted at the building permit stage, owing to more precise material quantities, as the calculation methodology is designed to overestimate the CF. Moreover, the designers made an effort to reduce the building's CF by reusing certain materials, such as ceramic tiles, lightweight concrete blocks, and timber, during the construction process.
However, in a cold climate context where operational energy (B6) continues to dominate, the changes in total building CF between the three scenarios were less significant. The calculation for the building permit project was performed by several experts, and the CF results fell within the same range. This suggests that, for a first estimate of the preliminary building CF, using average values is an appropriate method for the Estonian national carbon footprint estimation phase during the building permit application. The study also identified several opportunities for reducing the carbon footprint of the building, such as reusing materials from other construction sites, preferring local material producers, and reducing wastage on site. The findings suggest that using circular materials can significantly reduce the carbon footprint of buildings. Overall, the study highlights the importance of a comprehensive approach to measuring the environmental performance of buildings, taking into account both the design project and the house as actually built. It also emphasises the need for ongoing monitoring of building design and construction-site waste. The study also gives some examples of how to enable the future circularity of building components and materials, e.g., building in layers and using wood untreated.
Keywords: carbon footprint, circular economy, sustainable construction, Level(s) methodology
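As a rough illustration of how the embodied-carbon comparison between the three scenarios works, the sketch below multiplies material quantities by emission factors and zeroes out reused components. All quantities and factors are invented, and a real Level(s) calculation covers full life-cycle modules (A1-A5, B, C) rather than this single-stage simplification:

```python
# Hypothetical emission factors, kg CO2e per kg of material (A1-A3 only).
EMISSION_FACTORS = {
    "concrete_block_kg": 0.12,
    "ceramic_tile_kg": 0.70,
    "timber_kg": 0.45,
}
REUSED_FACTOR = 0.0  # reused components carry no new production burden here

def embodied_co2(bill_of_materials, reused=frozenset()):
    """Sum material masses times emission factors; reused items count as zero."""
    total = 0.0
    for material, mass_kg in bill_of_materials.items():
        factor = REUSED_FACTOR if material in reused else EMISSION_FACTORS[material]
        total += mass_kg * factor
    return total

# Permit-stage quantities are deliberately overestimated; as-built are measured.
permit_bom = {"concrete_block_kg": 9000, "ceramic_tile_kg": 800, "timber_kg": 6000}
as_built_bom = {"concrete_block_kg": 7500, "ceramic_tile_kg": 650, "timber_kg": 5200}

permit = embodied_co2(permit_bom)
as_built = embodied_co2(as_built_bom)
circular = embodied_co2(as_built_bom, reused={"ceramic_tile_kg", "concrete_block_kg"})
print(permit, as_built, circular)  # embodied kg CO2e per scenario
```

The ordering permit > as-built > as-built-with-reuse mirrors the ranking reported in the study, even though the magnitudes here are arbitrary.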
Procedia PDF Downloads 86
429 Organic Light Emitting Devices Based on Low Symmetry Coordination Structured Lanthanide Complexes
Authors: Zubair Ahmed, Andrea Barbieri
Abstract:
The need to reduce energy consumption has prompted a considerable research effort to develop alternative energy-efficient lighting systems to replace conventional light sources (i.e., incandescent and fluorescent lamps). Organic light emitting device (OLED) technology offers the distinctive possibility of fabricating large-area flat devices by vacuum or solution processing. Lanthanide β-diketonate complexes, owing to the unique photophysical properties of Ln(III) ions, have been explored as emitting layers in OLED displays and in solid-state lighting (SSL) in order to achieve high efficiency and color purity. For such applications, excellent photoluminescence quantum yield (PLQY) and stability are the two key requirements, both of which can be achieved simply by selecting the proper organic ligands around the Ln ion in the coordination sphere. Among strategies to enhance the PLQY, the most common is the suppression of radiationless deactivation pathways caused by high-frequency oscillators (e.g., O–H and C–H groups) around the Ln centre. Recently, a different approach to maximize the PLQY of Ln(β-DKs) has been proposed, named 'Escalate Coordination Anisotropy' (ECA). It is based on the assumption that coordinating the Ln ion with different ligands breaks the centrosymmetry of the molecule, leading to less forbidden transitions (loosening the constraints of the Laporte rule). OLEDs based on such complexes exist, but with low efficiency and stability. To obtain efficient devices, new Ln complexes with enhanced PLQYs and stabilities are needed. For this purpose, Ln complexes, both visible- and near-infrared (NIR)-emitting, with varied coordination structures based on various fluorinated/non-fluorinated β-diketones and O/N-donor neutral ligands were synthesized using a one-step in situ method.
In this method, the β-diketones, base, LnCl₃·nH₂O, and neutral ligands were mixed in a 3:3:1:1 molar ratio in ethanol, which gave air- and moisture-stable complexes. They were then characterized by means of elemental analysis, NMR spectroscopy, and single-crystal X-ray diffraction. Thereafter, their photophysical properties were studied to select the best complexes for the fabrication of stable and efficient OLEDs. Finally, OLEDs were fabricated and investigated using these complexes as emitting layers, along with other organic layers: NPB, N,N′-di(1-naphthyl)-N,N′-diphenyl-(1,1′-biphenyl)-4,4′-diamine (hole-transporting layer); BCP, 2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline (hole blocker); and Alq₃ (electron-transporting layer). The layers were sequentially deposited under high vacuum by thermal evaporation onto ITO glass substrates. Moreover, co-deposition techniques were used to improve charge transport in the devices and to avoid quenching phenomena. The devices show strong electroluminescence at 612, 998, 1064, and 1534 nm, corresponding to the ⁵D₀ → ⁷F₂ (Eu), ²F₅/₂ → ²F₇/₂ (Yb), ⁴F₃/₂ → ⁴I₉/₂ (Nd), and ⁴I₁₃/₂ → ⁴I₁₅/₂ (Er) transitions. All the fabricated devices show good efficiency as well as stability.
Keywords: electroluminescence, lanthanides, paramagnetic NMR, photoluminescence
Procedia PDF Downloads 121
428 Climate Change and Health: Scoping Review of Scientific Literature 1990-2015
Authors: Niamh Herlihy, Helen Fischer, Rainer Sauerborn, Anneliese Depoux, Avner Bar-Hen, Antoine Flauhault, Stefanie Schütte
Abstract:
In recent decades, there has been an increase in the number of publications, in both the scientific and grey literature, on the potential health risks associated with climate change. Though interest in climate change and health is growing, many gaps remain in adequately assessing our future health needs in a warmer world. Generating a greater understanding of the health impacts of climate change could be a key step in inciting the changes necessary to decelerate global warming and in targeting new strategies to mitigate the consequences for health systems. A long-term, broad overview of the existing scientific literature in the field of climate change and health has so far been missing, one needed to ensure that all priority areas are being adequately addressed. We conducted a scoping review of published peer-reviewed literature on climate change and health from two large databases, PubMed and Web of Science, between 1990 and 2015. A scoping review allowed for a broad analysis of this complex topic at a meta-level, as opposed to a thematically refined literature review. A detailed search strategy including specific climate and health terminology was used to search the two databases. Inclusion and exclusion criteria were applied in order to capture the most relevant literature on the human health impact of climate change within the chosen timeframe. Two reviewers screened the papers independently, and any differences arising were resolved by a third party. Data were extracted, categorized, and coded both manually and using R software. Analytics and infographics were developed from the results. 7269 articles were identified across the two databases following the removal of duplicates. After screening by both reviewers, 3751 were included. As expected, preliminary results indicate that the number of publications on the topic has increased over time.
Geographically, the majority of publications address the impact of climate change on health in Europe and North America. This is particularly alarming given that countries in the Global South will bear the greatest health burden. Concerning health outcomes, infectious diseases, particularly dengue fever and other mosquito-transmitted infections, are the most frequently cited. We highlight research gaps in certain areas, e.g., climate migration and mental health issues. We are developing a database of the identified climate change and health publications and compiling a report for publication and dissemination of the findings. As health is a major co-benefit of climate change mitigation strategies, our results may serve as a useful source of information for research funders and investors when considering future research needs as well as the cost-effectiveness of climate change strategies. This study is part of an interdisciplinary project called 4CHealth that confronts the results of research done on the scientific, political, and press literature to better understand how knowledge on climate change and health circulates within those different fields and whether and how it is translated into real-world change.
Keywords: climate change, health, review, mapping
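The deduplication and screening steps described above (7269 records identified across two databases, 3751 included) can be sketched as follows. The records, field names, and inclusion terms are hypothetical stand-ins for the study's actual search strategy:

```python
def deduplicate(records):
    # Merge records from two databases on a normalised title key.
    seen, unique = set(), []
    for rec in records:
        key = " ".join(rec["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def screen(records, include_terms, exclude_terms):
    # Keep records whose title/abstract mention an inclusion term and no exclusion term.
    kept = []
    for rec in records:
        text = (rec["title"] + " " + rec.get("abstract", "")).lower()
        if any(t in text for t in include_terms) and not any(t in text for t in exclude_terms):
            kept.append(rec)
    return kept

# Invented records: one duplicate pair and one off-topic paper.
records = [
    {"title": "Heatwaves and mortality", "abstract": "climate change and health outcomes"},
    {"title": "Heatwaves and  Mortality", "abstract": "duplicate from second database"},
    {"title": "Glacier mass balance", "abstract": "climate change in alpine regions"},
]
unique = deduplicate(records)
included = screen(unique, include_terms=["health", "mortality"], exclude_terms=[])
print(len(unique), len(included))  # records after deduplication, then after screening
```

In the real review the disputed cases went to a third reviewer; here the criteria are purely mechanical.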
Procedia PDF Downloads 317
427 The South African Polycentric Water Resource Governance-Management Nexus: Parlaying an Institutional Agent and Structured Social Engagement
Authors: J. H. Boonzaaier, A. C. Brent
Abstract:
South Africa, a water-scarce country, faces the phenomenon that its life-supporting natural water resources are seriously threatened by the very users that are totally dependent on them. South Africa is globally applauded for having some of the best and most progressive water laws and policies. There are, however, growing concerns regarding the deterioration of natural water resource quality and a critical void in the management of natural resources and compliance with policies, due to increasing institutional uncertainties and failures. These concerns are shared by many South African researchers and practitioners, who call for a change in paradigm from talk to practice and a more constructive, practical approach to governance challenges in the management of water resources. A qualitative, theory-building case study through longitudinal action research was conducted from 2014 to 2017. The research assessed whether a strategically positioned institutional agent can be parlayed to facilitate and execute water resource management (WRM) at the catchment level by engaging multiple stakeholders in a polycentric setting. Through a critical realist approach, a distinction was made between ex ante self-deterministic human behaviour in the realist realm and ex post governance-management in the constructivist realm. A congruence analysis, including Toulmin's method of argumentation analysis, was utilised. The study evaluated the unique case of a self-steering local water management institution, the Impala Water Users Association (WUA), in the Pongola River catchment in the northern part of the KwaZulu-Natal Province of South Africa. Exploiting prevailing water resource threats, it expanded its ancillary functions from 20,000 to 300,000 ha.
Embarking on WRM activities, it addressed natural water system quality assessments, social awareness, knowledge support, and threats such as soil erosion, waste and effluent discharged into water systems, coal mining, and water security dimensions, through structured engagement with 21 different catchment stakeholders. By implementing the proposed polycentric governance-management model at the catchment scale, the WUA managed to fill the void. It developed a foundation and the capacity to protect the resilience of the natural environment that is critical for freshwater resources, ensuring the long-term water security of the Pongola River basin. Further work is recommended on appropriate statutory delegations, mechanisms of sustainable funding, sufficient penetration of knowledge to local levels to catalyse behaviour change, incentivised support from professionals, back-to-back expansion of WUAs to alleviate scale and cost burdens, and the creation of catchment data monitoring and compilation centres.
Keywords: institutional agent, water governance, polycentric water resource management, water resource management
Procedia PDF Downloads 138
426 An Assessment of Health Hazards in Urban Communities: A Study of Spatial-Temporal Variations of Dengue Epidemic in Colombo, Sri Lanka
Authors: U. Thisara G. Perera, C. M. Kanchana N. K. Chandrasekara
Abstract:
Dengue is an epidemic disease spread by the Aedes aegypti and Aedes albopictus mosquitoes. Dengue cases show a dramatic growth rate in urban and semi-urban areas, especially in the tropical and sub-tropical regions of the world. Incidence of dengue has become a prominent reason for hospitalization and death in Asian countries, including Sri Lanka. During the last decade, the dengue epidemic began to spread from urban to semi-urban and then to rural settings of the country. The highest number of dengue-infected patients in Sri Lanka was recorded in 2016, and the highest number of patients was identified in Colombo district. Together with commercial, industrial, and other supporting services, the district experiences rapid urbanization and high population density. Thus, the drainage and waste disposal practices of the people in this area exert additional pressure on the environment. The district is situated in the wet zone, and low-lying lands constitute the largest portion of the district. This situation further facilitates mosquito breeding sites. Therefore, the purpose of the present study was to assess the spatial and temporal distribution patterns of the dengue epidemic in the Kolonnawa MOH (Medical Officer of Health) area in the district of Colombo. The study was carried out using 615 recorded dengue cases in the Kolonnawa MOH area during the south-east monsoon season from May to September 2016. Moran's I and kernel density estimation were used as analytical methods. The analysis of data was accomplished through the integrated use of the ArcGIS 10.1 software package along with Microsoft Excel analytical tools. Field observation was also carried out for verification purposes during the study period. Results of the Moran's I index indicate that the spatial distribution of dengue cases followed a clustered pattern across the area.
Kernel density estimation shows that dengue cases are concentrated where the population has gathered, especially in areas comprising housing schemes. The kernel density results further disclose that hot spots of the dengue epidemic are located in the western half of the Kolonnawa MOH area, close to the Colombo municipal boundary, and that there is a significant relationship with high population density and unplanned urban land-use practices. Results of the field observation confirm that the drainage systems in these areas function poorly and that careless waste disposal by residents further encourages mosquito breeding sites. This situation has evolved harmfully from a public health issue into a social problem, which ultimately impacts the economy and social life of the country.
Keywords: dengue epidemic, health hazards, kernel density, Moran's I, Sri Lanka
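The global Moran's I statistic used in this analysis can be sketched as follows. The ward-level case counts and the adjacency (spatial weight) matrix are invented for illustration; the actual study analysed 615 cases in ArcGIS 10.1:

```python
def morans_i(values, weights):
    """Global Moran's I for case counts `values` with spatial weight matrix `weights`."""
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]  # deviations from the mean count
    num = sum(weights[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    s0 = sum(weights[i][j] for i in range(n) for j in range(n))  # sum of all weights
    return (n / s0) * num / sum(zi * zi for zi in z)

# Hypothetical dengue counts for four wards along a corridor; rook-style
# adjacency: each ward borders only its immediate neighbours.
counts = [40, 35, 5, 3]
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i(counts, w), 3))  # > 0 indicates spatial clustering
```

A value near +1 means similar counts cluster together (as found here), near -1 a checkerboard pattern, and near the small negative expectation -1/(n-1) indicates spatial randomness.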
Procedia PDF Downloads 300
425 Strategic Interventions to Address Health Workforce and Current Disease Trends, Nakuru, Kenya
Authors: Paul Moses Ndegwa, Teresia Kabucho, Lucy Wanjiru, Esther Wanjiru, Brian Githaiga, Jecinta Wambui
Abstract:
Health outcomes in Kenya have improved since 2013, following the adoption of the new constitution, which devolved governance and transferred administration and health planning functions to county governments. The 2018-2022 development agenda prioritized universal healthcare coverage, food security, and nutrition; however, the emergence of Covid-19 and the increase in non-communicable diseases pose a challenge and a constraint for an already overwhelmed health system. A study was conducted from July to November 2021 to establish the key challenges in achieving universal healthcare coverage within the county and the best practices for improved non-communicable disease control. 14 health workers, ranging from nurses, doctors, and public health officers to clinical officers and pharmaceutical technologists, were purposively engaged to provide critical information through questionnaires administered by a trained duo observing ethical procedures on confidentiality, and the data were then analyzed. Communicable diseases are major causes of morbidity and mortality. Non-communicable diseases contribute to approximately 39% of deaths. More than 45% of the population does not have access to safe drinking water. The study noted geographic inequality with respect to the distribution and use of health resources, including competing non-health priorities. 56% of health workers are nurses, 13% clinical officers, 7% doctors, 9% public health workers, and 2% pharmaceutical technologists. Poor-quality data limit the validity of disease-burden estimates and research activities. Risk factors include unsafe water, poor sanitation and hand washing, unsafe sex, and malnutrition. The key challenge in achieving universal healthcare coverage is the rise in the relative contribution of non-communicable diseases. The study recommends improving targeted disease control with effective and equitable resource allocation, developing strong infectious disease control mechanisms, improving the quality of data for decision-making, and strengthening electronic data-capture systems.
It further recommends increasing investments in the health workforce to improve health service provision and the achievement of universal health coverage; creating a favorable environment to retain health workers; filling the staffing gaps reflected in the shortage of doctors (7%); developing a multi-sectoral approach to health workforce planning and management; investing in mechanisms that generate contextual evidence on current and future health workforce needs; ensuring the retention of a qualified, skilled, and motivated health workforce; and delivering integrated, people-centered health services.
Keywords: multi-sectoral approach, equity, people-centered, health workforce retention
Procedia PDF Downloads 113
424 Evaluation in Vitro and in Silico of Pleurotus ostreatus Capacity to Decrease the Amount of Low-Density Polyethylene Microplastics Present in Water Sample from the Middle Basin of the Magdalena River, Colombia
Authors: Loren S. Bernal., Catalina Castillo, Carel E. Carvajal, José F. Ibla
Abstract:
Plastic pollution, specifically microplastic pollution, has become a significant issue in aquatic ecosystems worldwide. The large amount of plastic waste carried by water tributaries has resulted in the accumulation of microplastics in water bodies. The polymer aging process caused by environmental influences, such as photodegradation and the chemical degradation of additives, leads to polymer embrittlement and property changes that call for degradation or reduction procedures in rivers. However, such procedures are lacking for freshwater bodies and develop only over extended periods. The aim of this study is to evaluate the potential of Pleurotus ostreatus, a fungus, to reduce low-density polyethylene (LDPE) microplastics present in freshwater samples collected from the middle basin of the Magdalena River in Colombia. The study evaluates this process both in vitro and in silico, by identifying the growth capacity of Pleurotus ostreatus in the presence of microplastics and by identifying the most likely interactions of Pleurotus ostreatus enzymes and their affinity energies. The study follows an engineering development methodology applied on an experimental basis. The in vitro evaluation protocol focused on the growth capacity of Pleurotus ostreatus on microplastics using enzymatic inducers. For the in silico evaluation, molecular docking simulations were conducted using the AutoDock 1.5.7 program to calculate interaction energies, and molecular dynamics were evaluated using the myPresto Portal and the GROMACS program to calculate radii of gyration and energies. The results of the study showed that Pleurotus ostreatus has the potential to degrade LDPE microplastics. The in vitro evaluation revealed the adherence of Pleurotus ostreatus to LDPE using scanning electron microscopy. The best results were obtained with enzymatic inducers such as MnSO₄, which activate the laccase or manganese peroxidase enzymes in the degradation process.
The in silico modelling demonstrated that Pleurotus ostreatus enzymes can interact with the microplastics present in LDPE, showing favourable affinity energies in molecular docking, while the molecular dynamics showed an energy minimum and a representative radius of gyration for each enzyme-substrate complex. The study contributes to the development of bioremediation processes for the removal of microplastics from freshwater sources using the fungus Pleurotus ostreatus. The in silico study provides insights into the affinity energies of the microplastic-degrading enzymes of Pleurotus ostreatus and their interaction with low-density polyethylene. Having demonstrated that Pleurotus ostreatus can interact with LDPE microplastics, the study identifies it as a good agent for the development of bioremediation processes that aid in the recovery of freshwater sources. The results suggest that bioremediation could be a promising approach to reducing microplastics in freshwater systems.
Keywords: bioremediation, in silico modelling, microplastics, Pleurotus ostreatus
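As an illustration of one of the molecular-dynamics quantities reported above, the mass-weighted radius of gyration can be computed as follows. The coordinates and masses are an invented four-atom fragment, not data from the GROMACS runs:

```python
import math

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration of a set of 3D coordinates."""
    total_mass = sum(masses)
    # Centre of mass of the fragment.
    com = [sum(m * c[i] for m, c in zip(masses, coords)) / total_mass for i in range(3)]
    # Rg^2 = sum_i m_i * |r_i - com|^2 / total mass
    rg2 = sum(m * sum((c[i] - com[i]) ** 2 for i in range(3))
              for m, c in zip(masses, coords)) / total_mass
    return math.sqrt(rg2)

# Hypothetical four-atom fragment: coordinates in nm, atomic masses in Da.
coords = [(0.0, 0.0, 0.0), (0.15, 0.0, 0.0), (0.15, 0.15, 0.0), (0.0, 0.15, 0.1)]
masses = [12.011, 12.011, 15.999, 14.007]
print(radius_of_gyration(coords, masses))  # compactness of the fragment, in nm
```

Tracking this value over a trajectory, as GROMACS does, indicates whether the enzyme-substrate complex stays compact (stable binding) or unfolds.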
Procedia PDF Downloads 114
423 The Roles of Mandarin and Local Dialect in the Acquisition of L2 English Consonants Among Chinese Learners of English: Evidence From Suzhou Dialect Areas
Authors: Weijing Zhou, Yuting Lei, Francis Nolan
Abstract:
In the domain of second language acquisition, whenever pronunciation errors or acquisition difficulties are found, researchers habitually attribute them to negative transfer from the native language or local dialect. To what extent do Mandarin and local dialects affect English phonological acquisition for Chinese learners of English as a foreign language (EFL)? Little evidence, however, has been gathered via empirical research in China. To address this core issue, the present study conducted phonetic experiments to explore the roles of local dialects and Mandarin in Chinese EFL learners' acquisition of L2 English consonants. Besides Mandarin, the sole national language of China, Suzhou Dialect was selected as the target local dialect because of its phonology, which is distinct from that of Mandarin. The experimental group consisted of 30 junior English majors at Yangzhou University who were born and raised in Suzhou, had acquired Suzhou Dialect since early childhood, and were able to communicate freely and fluently in Suzhou Dialect, Mandarin, and English. The consonantal target segments were all the consonants of English, Mandarin, and Suzhou Dialect in typical carrier words embedded in the carrier sentence 'Say again'. The control group consisted of two Suzhou Dialect experts, two Mandarin radio broadcasters, and two British RP phoneticians, who served as the standard speakers of the three languages. The reading corpus was recorded and sampled in the phonetic laboratories at Yangzhou University, Soochow University, and Cambridge University, respectively; it was then transcribed, segmented, and analyzed acoustically via the Praat software, and finally analyzed statistically via the Excel and SPSS software.
The main findings are as follows. First, in terms of correct acquisition rates (CARs) of all the consonants, Mandarin ranked top (92.83%), English second (74.81%), and Suzhou Dialect last (70.35%); significant differences were found only between the CARs of Mandarin and English and between the CARs of Mandarin and Suzhou Dialect, demonstrating that Mandarin was overwhelmingly more robust than English or Suzhou Dialect in the subjects' multilingual phonological ecology. Second, in terms of typical acoustic features, the average durations of all the consonants, as well as the voice onset times (VOTs) of plosives, fricatives, and affricates in the three languages, were much longer than those of the standard speakers; the intensities of English fricatives and affricates were higher than those of the RP speakers but lower than those of the Mandarin and Suzhou Dialect standard speakers; and the formants of English nasals and approximants differed significantly from those of Mandarin and Suzhou Dialect, illustrating inconsistent acoustic variation between the three languages. Third, in terms of typical pronunciation variations or errors, there were significant interlingual interactions between the three consonant systems, in which the Mandarin consonants were absolutely dominant, which accounts for the strong transfer from L1 Mandarin to L2 English rather than from the earlier-acquired L1 local dialect to L2 English. This is largely because the subjects had knowingly been exposed to Mandarin since nursery school and were strictly required to speak Mandarin throughout their formal education from primary school to university.
Keywords: acquisition of L2 English consonants, role of Mandarin, role of local dialect, Chinese EFL learners from Suzhou Dialect areas
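The duration and VOT comparisons above can be illustrated with a simple effect-size computation on two measurement samples. The VOT values below are invented, and the study's actual statistics were run in SPSS:

```python
import statistics as st

def cohens_d(sample_a, sample_b):
    """Pooled-variance Cohen's d for the difference in mean VOT between two groups."""
    mean_a, mean_b = st.mean(sample_a), st.mean(sample_b)
    var_a, var_b = st.variance(sample_a), st.variance(sample_b)  # sample variances
    n_a, n_b = len(sample_a), len(sample_b)
    pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    return (mean_a - mean_b) / pooled_var ** 0.5

# Hypothetical VOT measurements (ms) for English /t/: learners vs. RP reference.
learner_vot = [85, 92, 88, 95, 90, 87]
rp_vot = [65, 70, 68, 72, 66, 69]
print(round(cohens_d(learner_vot, rp_vot), 2))  # large positive d -> longer learner VOTs
```

A large positive d corresponds to the finding that learners' VOTs are much longer than the standard speakers'; a significance test (e.g., a t-test in SPSS) would accompany it in practice.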
Procedia PDF Downloads 95
422 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks
Authors: Mazarine Roquet, Pierre Dewallef
Abstract:
The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to the poor insulation of the building stock, but mainly because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling the diversification of energy sources, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures, between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these networks offer energy savings compared with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. Fourth-generation district heating networks improve the transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and rely mainly on wind, solar, or geothermal sources for the remaining heat supply. Fifth-generation networks, operating between 35°C and 15°C, offer the possibility of decreasing transport losses even further, increasing the share of waste heat recovery, and using electricity from renewable resources through heat pumps to generate low-temperature heat. The main objective of this contribution is to exhibit, on a real-life test case, the benefits of replacing an existing third-generation network with a fifth-generation one in order to decarbonise the heat supply of the building stock.
The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated against monitoring data and then adapted for low-temperature networks. A comparison of primary energy consumption as well as CO₂ emissions is made between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the difficult balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared with respect to the global energy savings. The developed model can be used to assess the potential for energy and CO₂ emissions savings when retrofitting an existing network or designing a new one.
Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating
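The balance described above between heat delivery and circulation-pump consumption can be sketched with two textbook relations: Q = ṁ·cp·ΔT for the delivered heat, and the pump affinity law (power roughly proportional to the cube of the flow rate). The numbers below are illustrative and unrelated to the Sart Tilman Modelica model:

```python
CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def heat_delivered_kw(m_dot_kg_s, supply_c, return_c):
    """Heat delivered by the network water loop: Q = m_dot * cp * dT."""
    return m_dot_kg_s * CP_WATER * (supply_c - return_c) / 1000.0

def pump_power_kw(m_dot_kg_s, nominal_m_dot, nominal_power_kw):
    """Affinity-law estimate: pumping power scales with the cube of the flow rate."""
    return nominal_power_kw * (m_dot_kg_s / nominal_m_dot) ** 3

# Same 500 kW demand served at third-generation (80/50 C) vs fifth-generation
# (30/20 C) temperature levels: the smaller dT forces a larger mass flow rate.
demand_kw = 500.0
m3 = demand_kw * 1000.0 / (CP_WATER * 30.0)  # flow needed at dT = 30 K
m5 = demand_kw * 1000.0 / (CP_WATER * 10.0)  # flow needed at dT = 10 K
print(round(m3, 2), round(m5, 2))            # low-temperature case needs 3x the flow
print(round(pump_power_kw(m5, m3, 5.0), 1))  # so ~27x the power on the same pump
```

This cubic penalty is exactly why the mass-flow control strategies compared in the study matter: serving peak demand by raising the flow rate instead of the temperature is paid for in pump electricity.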
Procedia PDF Downloads 83
421 A Socio-Spatial Analysis of Financialization and the Formation of Oligopolies in Brazilian Basic Education
Authors: Gleyce Assis Da Silva Barbosa
Abstract:
In recent years, we have witnessed a vertiginous growth of large education companies. Backed by national and global capital, these companies expand both through consolidated physical networks, in the form of branches spread across the territory, and through institutional networks, such as business networks built through mergers, acquisitions, the creation of new companies, and influence. They do this by incorporating small, medium and large schools and universities, teaching systems and other products and services. They are also able to weave their webs, directly or indirectly, into philanthropic circles, limited partnerships, family businesses and even public education, through various mechanisms of outsourcing, privatization and the commercialization of products for the sector. Although the growth of these groups in basic education may seem a recent phenomenon in peripheral countries such as Brazil, their diffusion is closely linked to higher education conglomerates and other sectors of the economy forming oligopolies, which began to expand in the 1990s with strong state support and through political reforms that redefined the state's role, transforming it into a fundamental agent in shaping the guidelines that boosted the incorporation of neoliberal logic. This expansion occurred through the objectification of education, commodifying it and transforming students into consumer clients. Financial power combined with the neo-liberalization of state public policies has fuelled social exclusion, the growth in the number of individuals without access to basic services, deindustrialization, automation, capital volatility and the indetermination of the economy; in addition, this process causes capital to be valued and devalued at rates never seen before, which together generates various impacts, such as the precarization of work. Understanding the connection between these processes, which engender the economy, allows us to see their consequences in labor relations and in the territory.
In this sense, it is necessary to analyze the geographic-economic context and the role of the agents facilitating this process, which can give us clues about the ongoing transformations and the direction of education in the national and even international scenario, since this process is linked to the multiple scales of financial globalization. Therefore, the present research has the general objective of analyzing the socio-spatial impacts of financialization and the formation of oligopolies in Brazilian basic education. The methodology combined a survey of laws, data and public policies on the subject; analysis of data these companies make available on their investor-relations websites; a survey of information on global and national companies operating in Brazilian basic education; and the mapping of the expansion of educational oligopolies using public data on the location of schools. With this, the research intends to provide information about the ongoing commodification process in the country and to discuss the consequences of the oligopolization of education, considering the impacts that financialization can bring to teaching work.
Keywords: financialization, oligopolies, education, Brazil
Procedia PDF Downloads 64
420 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security
Authors: D. Pugazhenthi, B. Sree Vidya
Abstract:
Cloud computing is one of the emerging technologies that enables end users to use cloud services on a 'pay per usage' basis. The technology is growing at a fast pace, and so is its security threat. Among the various services provided by the cloud is storage, where security is a vital factor both for authenticating legitimate users and for protecting information. This paper presents efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. User-behaviour-based biometrics provide unique identification and are slow to intrude upon, offering greater reliability than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here include not a single trait but multiple ones, viz., iris and fingerprints. The coordinating stage of the authentication system is based on an Ensemble Support Vector Machine (SVM), with the weights of the base SVMs optimized for the SVM ensemble after each individual SVM of the ensemble is trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template by a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the recovery of the original text from the ciphertext. This improves authentication performance, in that the proposed double cryptographic key scheme provides better user authentication and better security, distinguishing between genuine and fake users.
Thus, there are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The extraction of feature and texture properties from the respective fingerprint and iris images is performed first. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage of resisting decryption by a hacker even if the data have already been stolen. The results prove that the authentication process is optimal and that the stored information is secured.
Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification
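The score-level fusion step at the heart of the ensemble can be sketched as follows. This is an illustrative sketch only: the base classifiers (trained SVMs in the paper) are reduced to pre-computed match scores, and the fixed weights stand in for the AFSA-optimized ensemble weights; all names and thresholds are assumptions, not the authors' implementation.

```python
def fuse_scores(scores, weights):
    """Weighted-sum fusion of base-classifier match scores (normalized)."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def authenticate(iris_score, finger_score, weights=(0.6, 0.4), threshold=0.5):
    """Accept the user if the fused multimodal score clears the threshold.
    Weights are placeholders for the AFSA-optimized values."""
    fused = fuse_scores([iris_score, finger_score], list(weights))
    return fused >= threshold

print(authenticate(0.9, 0.7))  # genuine user: fused score 0.82 -> True
print(authenticate(0.2, 0.3))  # impostor: fused score 0.24 -> False
```

In the full scheme, the fused template would then seed the cryptographic key generation rather than a simple accept/reject decision.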
Procedia PDF Downloads 259
419 Empowering Indigenous Epistemologies in Geothermal Development
Authors: Te Kīpa Kēpa B. Morgan, Oliver W. Mcmillan, Dylan N. Taute, Tumanako N. Fa'aui
Abstract:
Epistemologies are ways of knowing. Indigenous Peoples are aware that they do not perceive and experience the world in the same way as others. So it is important, when empowering Indigenous epistemologies such as that of the New Zealand Māori, to also be able to represent a scientific understanding within the same analysis. A geothermal development assessment tool has been developed by adapting the Mauri Model Decision Making Framework. Mauri is a metric capable of representing the change in the life-supporting capacity of things and collections of things. The Mauri Model is a method of grouping mauri indicators as dimension averages in order to allow holistic assessment and to conduct sensitivity analyses for the effect of worldview bias. R-Shiny is the coding platform used for this Vision Mātauranga research, which has created an expert decision support tool (DST) that combines a stakeholder assessment of worldview bias with an impact assessment of mauri-based indicators to determine the sustainability of proposed geothermal development. The initial intention was to develop guidelines for quantifying mātauranga Māori impacts related to geothermal resources. To do this, three typical scenarios were considered: a resource owner wishing to assess the potential for new geothermal development; another party wishing to assess the environmental and cultural impacts of the proposed development; and an assessment that focuses on the holistic sustainability of the resource, including its surface features. Indicator sets and measurement thresholds were developed that capture the necessary considerations for each assessment context, and these have been grouped to represent four mauri dimensions that mirror the four well-being criteria used for resource management in Aotearoa, New Zealand. Two case studies have been conducted to test the DST's suitability for quantifying mātauranga Māori and other biophysical factors related to a geothermal system.
This involved estimating mauri0meter values for physical features such as temperature, flow rate, frequency and colour, and developing indicators to also quantify qualitative observations about the geothermal system made by Māori. A retrospective analysis was then conducted to verify different understandings of the geothermal system. The case studies found that the expert DST is useful for geothermal development assessment, especially where hapū (indigenous sub-tribal groupings) are conflicted regarding the benefits and disadvantages of their own and others' geothermal developments. These results have been supplemented with evaluations of the cumulative impacts of geothermal developments experienced by different parties, using integration techniques applied to the time history of the expert DST worldview-bias weighting plotted against the mauri0meter score. Cumulative impacts represent the change in resilience or potential of geothermal systems, which directly assists with the holistic interpretation of change from an Indigenous Peoples' perspective.
Keywords: decision support tool, holistic geothermal assessment, indigenous knowledge, mauri model decision-making framework
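The aggregation step of the Mauri Model — indicator scores averaged per dimension, then combined under stakeholder worldview-bias weights — can be sketched as below. The dimension names, indicator scores and weights are illustrative placeholders, not data from the DST; the only assumption carried over from the framework is the grouping of indicators into four weighted dimension averages, where varying the weights supports the sensitivity analysis for worldview bias.

```python
def dimension_average(indicator_scores):
    """Average of the mauri indicator scores within one dimension."""
    return sum(indicator_scores) / len(indicator_scores)

def mauri_assessment(dimensions, bias_weights):
    """Worldview-bias-weighted combination of the four dimension averages."""
    total_w = sum(bias_weights.values())
    return sum(dimension_average(scores) * bias_weights[name]
               for name, scores in dimensions.items()) / total_w

# Illustrative indicator scores on a -2 (fully denatured) .. +2 (fully restored) scale
dimensions = {
    "environment": [1, 0, -1, 1],   # e.g. temperature, flow rate, colour, ...
    "community":   [0, 1],
    "economy":     [2, 1, 1],
    "culture":     [-1, 0, 0],
}
# One stakeholder's weighting; the sensitivity analysis varies these weights
weights = {"environment": 0.4, "community": 0.2, "economy": 0.2, "culture": 0.2}

print(round(mauri_assessment(dimensions, weights), 3))  # 0.4
```

Re-running the assessment with different weight sets exposes how much the overall score depends on the assessor's worldview rather than on the indicators themselves.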
Procedia PDF Downloads 186
418 Nursing Education in the Pandemic Time: Case Study
Authors: Jaana Sepp, Ulvi Kõrgemaa, Kristi Puusepp, Õie Tähtla
Abstract:
The COVID-19 outbreak emerged in late 2019 and was declared a pandemic by the WHO in March 2020, leading to changes in the education sector. Educational institutions were closed, and most schools adopted distance learning. Estonia is known as a digitally well-developed country. On that basis, nursing education continued through the pandemic, and new technological solutions were implemented. To provide nursing education, particular attention was paid to quality and flexibility. The aim of this paper is to present the administrative, digital and technological solutions that supported Estonian nursing educators in continuing the study process during the pandemic and in developing a sustainable solution for nursing education for the future. This paper includes the authors' analysis of the documents and decisions implemented in the institutions during the pandemic. It is a case study of Estonian nursing educators. The results of the analysis show that the implementation of distance learning principles challenged educators to develop innovative strategies and techniques for assessing student performance and educational outcomes and to implement new strategies to encourage student engagement in the virtual classroom. Additionally, hospital internships were cancelled, and the simulation approach was deeply embedded as a new opportunity to develop and assess students' practical skills. Many other technical and administrative changes were also carried out, such as student support and assessment systems and the design and delivery of hybrid and blended studies. All services were redesigned and made more available, individual and flexible. Hence, the feedback system was changed, and information was collected in parallel with educational activities. Experiences of nursing education during the pandemic are widely presented in the scientific literature.
However, to conclude our study, the authors have found evidence that the solutions implemented in Estonian nursing education allowed students to graduate within the nominal study period without any decline in education quality. An operative information system and flexibility kept the distance between students, support staff and academic staff to a minimum, and, likewise, the changes were implemented quickly and efficiently. Members of the institutions were kept updated with the appropriate information, and this positively affected their satisfaction, motivation and commitment. We recommend that the feedback process and system be permanently changed in the future to place all members in the same information space, that the hospital internship process be redefined, that hybrid learning be implemented, and that the communication system between stakeholders inside and outside the organization be improved. The main limitation of this study relates to the size of Estonia: nursing education is provided by only two institutions, and, similarly, the number of students is low. The results could be generalized to institutions of a similar size and administrative system. In the future, the relationship between nurses' performance and organizational outcomes should be investigated in depth, and the influence of pandemic-time education analyzed in workplaces.
Keywords: hybrid learning, nursing education, nursing, COVID-19
Procedia PDF Downloads 120
417 Interplay of Material and Cycle Design in a Vacuum-Temperature Swing Adsorption Process for Biogas Upgrading
Authors: Federico Capra, Emanuele Martelli, Matteo Gazzani, Marco Mazzotti, Maurizio Notaro
Abstract:
Natural gas is a major energy source in the current global economy, contributing roughly 21% of total primary energy consumption. Producing natural gas from renewable energy sources is key to limiting the related CO2 emissions, especially for those sectors that rely heavily on natural gas. In this context, biomethane produced via biogas upgrading is a good candidate for the partial substitution of fossil natural gas. The upgrading of biogas to biomethane consists of (i) the removal of pollutants and impurities (e.g. H2S, siloxanes, ammonia, water), and (ii) the separation of carbon dioxide from methane. Focusing on the CO2 removal process, several technologies can be considered: chemical or physical absorption with solvents (e.g. water, amines), membranes, and adsorption-based systems (PSA). However, none has emerged as the leading technology, because of (i) the heterogeneity in plant size, (ii) the heterogeneity in biogas composition, which is strongly related to the feedstock type (animal manure, sewage treatment, landfill products), (iii) the case-sensitive optimal trade-off between purity and recovery of biomethane, and (iv) the destination of the produced biomethane (grid injection, CHP applications, transportation sector). With this contribution, we explore the use of a vacuum-temperature swing adsorption technology for biogas upgrading and compare the resulting performance with benchmark technologies. The proposed technology makes use of a chemical sorbent, engineered by RSE and consisting of di-ethanol-amine deposited on a solid support of γ-alumina, to chemically adsorb the CO2 contained in the gas. The material is packed into fixed beds that cyclically undergo adsorption and regeneration steps. CO2 is adsorbed at low temperature and ambient pressure (or slightly above), while regeneration is carried out by pulling a vacuum and increasing the temperature of the bed (vacuum-temperature swing adsorption - VTSA).
Dynamic adsorption tests were performed by RSE and used to tune the mathematical model of the process, including material and transport parameters (i.e. Langmuir isotherm data and heat and mass transport). Based on this set of data, an optimal VTSA cycle was designed. The results enabled a better understanding of the interplay between material and cycle tuning. As an exemplary application, the upgrading of biogas for grid injection, produced by an anaerobic digester (60-70% CH4, 30-40% CO2), for an equivalent size of 1 MWel, was selected. A plant configuration is proposed to maximize heat recovery and minimize the energy consumption of the process. The resulting performances are very promising compared with benchmark solutions, which makes the VTSA configuration a valuable alternative for biomethane production from biogas.
Keywords: biogas upgrading, biogas upgrading energetic cost, CO2 adsorption, VTSA process modelling
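The role of the Langmuir isotherm in setting the cyclic working capacity of a VTSA process can be sketched as follows. All parameter values are illustrative placeholders, not RSE's measured DEA/γ-alumina data; the only assumptions carried over from the abstract are the Langmuir form of the isotherm and the adsorption/regeneration conditions (ambient pressure and low temperature versus vacuum and elevated temperature).

```python
import math

def langmuir_loading(p, q_max, b0, dH, T, R=8.314):
    """Equilibrium CO2 loading q = q_max * b*p / (1 + b*p); the affinity b
    follows a van't Hoff temperature dependence b = b0 * exp(-dH / (R*T))."""
    b = b0 * math.exp(-dH / (R * T))
    return q_max * b * p / (1.0 + b * p)

# Illustrative parameters for a strongly adsorbing chemisorbent:
# q_max in mol/kg, b0 in 1/Pa, adsorption enthalpy dH in J/mol (exothermic)
q_max, b0, dH = 2.5, 1e-13, -60e3

# Adsorption step: ~60% CO2 partial pressure at ambient conditions, 40 °C
q_ads = langmuir_loading(p=0.6e5, q_max=q_max, b0=b0, dH=dH, T=313.0)
# Regeneration step: vacuum (0.1 bar) and 120 °C
q_reg = langmuir_loading(p=0.1e5, q_max=q_max, b0=b0, dH=dH, T=393.0)

working_capacity = q_ads - q_reg   # mol CO2 per kg sorbent and cycle
print(q_ads > q_reg)  # True: vacuum plus heating releases the adsorbed CO2
```

The gap between the two loadings is exactly what the cycle optimization trades off against the thermal and vacuum energy spent on regeneration.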
Procedia PDF Downloads 276
416 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method
Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili
Abstract:
The demands of communications and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are simplicity of fabrication, low manufacturing cost, and the ability to combine power in free space and scan without a phase shifter. Modeling an active integrated antenna means coupling the electromagnetic model with the transport model, a coupling that becomes significant at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The current study focuses on the modeling of the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MOM-GEC). The Method of Moments is among the most common and powerful numerical techniques used to resolve electromagnetic problems; within this class of techniques, MOM is dominant in solving Maxwell's and the transport integral equations for an active integrated antenna. In this situation, the equivalent circuit is introduced to develop an integral-method formulation based on transposing field problems into a generalised equivalent circuit that is simpler to treat. The Generalised Equivalent Circuit method (MGEC) was suggested in order to represent integral equations by circuits that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electrical image of the studied structure, describing the discontinuity and its environment. The aim of our method is to investigate antenna parameters such as the input impedance, the current density distribution and the electric field distribution.
In this work, we propose a global EM model of the GaAs MESFET transistor using an integral method. We begin by describing the modeled structure, which allows us to define an equivalent EM scheme translating the electromagnetic equations under consideration. Secondly, the projection of these equations onto common-type test functions leads to a linear matrix equation whose unknowns are the amplitudes of the current density. Solving this equation provides the input impedance, the distribution of the current density and the electric field distribution. From the electromagnetic calculations, we were able to present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study mapping the variation of the current evaluated by the MOM-GEC. The essential improvement of our method is the reduction of computing time and memory requirements in order to provide a sufficient global model of the MESFET transistor.
Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method
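The final algebraic step of such a moment-method formulation — projecting onto test functions to obtain [Z][I] = [V], solving for the current amplitudes, and reading off the input impedance at the excitation — can be sketched with a toy system. The 2x2 impedance matrix below is a made-up placeholder, not computed from the actual waveguide Green's function or the MESFET equivalent circuit.

```python
def solve_2x2(Z, V):
    """Solve a 2x2 complex linear system Z*I = V by Cramer's rule."""
    det = Z[0][0] * Z[1][1] - Z[0][1] * Z[1][0]
    i0 = (V[0] * Z[1][1] - Z[0][1] * V[1]) / det
    i1 = (Z[0][0] * V[1] - V[0] * Z[1][0]) / det
    return [i0, i1]

# Placeholder Galerkin (symmetric) impedance matrix in ohms
Z = [[50 + 10j, 5 - 2j],
     [5 - 2j, 40 + 8j]]
V = [1 + 0j, 0 + 0j]   # unit excitation on the first basis function

I = solve_2x2(Z, V)    # amplitudes of the current-density expansion
z_in = V[0] / I[0]     # input impedance seen at the excitation
print(z_in)
```

In the actual method, N (test functions) and the number of guide modes are both increased until z_in converges, which is the convergence study reported in the abstract.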
Procedia PDF Downloads 198
415 Flux-Gate vs. Anisotropic Magneto Resistance Magnetic Sensors Characteristics in Closed-Loop Operation
Authors: Neoclis Hadjigeorgiou, Spyridon Angelopoulos, Evangelos V. Hristoforou, Paul P. Sotiriadis
Abstract:
The increasing demand for accurate and reliable magnetic measurements over the past decades has paved the way for the development of different types of magnetic sensing systems as well as more advanced measurement techniques. Anisotropic Magneto-Resistance (AMR) sensors have emerged as a promising solution for applications requiring high resolution, providing an ideal balance between performance and cost. However, certain issues of AMR sensors, such as non-linear response and measurement noise, are rarely discussed in the relevant literature. In this work, an analog closed-loop compensation system is proposed, developed and tested as a means to eliminate the non-linearity of the AMR response, reduce the 1/f noise and enhance the sensitivity of the magnetic sensor. Additional performance aspects, such as cross-axis and hysteresis effects, are also examined. This system was analyzed using an analytical model and a P-Spice model, considering both the sensor itself and the accompanying electronic circuitry. In addition, a commercial closed-loop-architecture flux-gate sensor (calibrated and certified) has been used for comparison purposes. Three experimental setups were constructed for the purposes of this work, used for DC magnetic field measurements, AC magnetic field measurements and noise density measurements, respectively. The DC magnetic field measurements were conducted in a laboratory environment employing a cubic Helmholtz coil setup in order to calibrate and characterize the system under consideration. A high-accuracy DC power supply provided the operating current of the Helmholtz coils, and the results were recorded by a multichannel voltmeter. The AC magnetic field measurements were likewise conducted with the cubic Helmholtz coil setup in order to examine the effective bandwidth of both the proposed system and the flux-gate sensor.
A voltage-controlled current source driven by a function generator was used for the Helmholtz coil excitation, and the response was observed on the oscilloscope. The third experimental apparatus incorporated an AC magnetic shielding construction composed of several layers of electrical steel that had been demagnetized prior to the experimental process. Each sensor was placed inside alone, and its response was captured by the oscilloscope. The preliminary experimental results indicate that the closed-loop AMR response presented a maximum deviation of 0.36% with respect to the ideal linear response, while the corresponding values for the open-loop AMR system and the flux-gate sensor reached 2% and 0.01%, respectively. Moreover, the noise density of the proposed closed-loop AMR sensor system remained almost as low as the noise density of the AMR sensor itself, yet considerably higher than that of the flux-gate sensor. All relevant numerical data are presented in the paper.
Keywords: AMR sensor, chopper, closed loop, electronic noise, magnetic noise, memory effects, flux-gate sensor, linearity improvement, sensitivity improvement
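Why the closed loop linearizes the response can be sketched with a toy model: the feedback coil is servoed until the sensor sees (almost) zero net field, so the output — the compensating coil field — tracks the applied field regardless of the sensor's own nonlinearity. The sensor model, cubic nonlinearity and loop parameters below are illustrative assumptions, not the measured AMR characteristic.

```python
def amr_open_loop(b):
    """Toy nonlinear open-loop AMR response (arbitrary units, |b| <= 1)."""
    return b - 0.15 * b ** 3

def amr_closed_loop(b_applied, k=0.2, iterations=500):
    """Closed-loop operation: an integrator drives the compensation-coil field
    until the sensor output (the loop error) is nulled; the coil field then
    mirrors the applied field."""
    b_coil = 0.0
    for _ in range(iterations):
        error = amr_open_loop(b_applied - b_coil)  # sensor sees the residual field
        b_coil += k * error                        # integrator step
    return b_coil

# Compare linearity over a sweep of applied fields
fields = [i / 10 for i in range(-10, 11)]
open_dev = max(abs(amr_open_loop(b) - b) for b in fields)
closed_dev = max(abs(amr_closed_loop(b) - b) for b in fields)
print(open_dev > 100 * closed_dev)  # True: the loop suppresses the nonlinearity
```

The same mechanism explains the measured improvement from 2% (open loop) to 0.36% (closed loop) deviation: the sensor always operates near its null point, where its transfer curve is most linear.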
Procedia PDF Downloads 420
414 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique
Authors: Harpal Singh, Sakshi Batra
Abstract:
Widespread and easy access to multimedia content, and the possibility of making numerous copies without significant loss of fidelity, have raised the need for digital rights management. This problem can be effectively addressed by digital watermarking technology: the concept of embedding data or a special pattern (a watermark) in multimedia content, so that the information can later prove ownership in case of a dispute, trace the marked document's dissemination, identify a misappropriating person, or simply inform the user about the rights-holder. The primary aim of digital watermarking is to embed the data imperceptibly and robustly in the host information. A large number of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio and other multimedia objects. With the development of digital-video-based innovations, the copyright dilemma for the multimedia industry grows. Video watermarking has been proposed in recent years to address the issue of illicit copying and distribution of videos. It is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes have to address serious challenges compared with image watermarking schemes, such as real-time requirements in video broadcasting, the large volume of inherently redundant data between frames, and the imbalance between moving and motionless regions, and they are particularly vulnerable to attacks such as frame swapping, statistical analysis, rotation, noise, median and crop attacks. In this paper, an effective, robust and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) using a redundant wavelet. This scheme utilizes the various transforms to embed watermarks in different layers of a hybrid system.
For this purpose, the video frames are partitioned into layers (RGB), and the watermark is embedded in two forms in the video frames, using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright safeguarding as well as reliability. The FrFT orders are used as the encryption key, which makes the watermarking method more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key-embedding watermarking scheme. Thus, the same key is required for watermark embedding and extraction, and it must be shared between the owner and the verifier over a secure channel. This paper demonstrates the performance using qualitative metrics, namely Peak Signal-to-Noise Ratio, Structural Similarity Index and correlation values, and also applies several attacks to prove the robustness. Experimental results demonstrate that the proposed scheme can withstand a variety of video-processing attacks while preserving imperceptibility.
Keywords: discrete wavelet transform, robustness, video watermarking, watermark
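Two of the evaluation metrics named above can be sketched directly: PSNR quantifies the imperceptibility of the watermarked frame, and normalized correlation quantifies how intact the extracted watermark is after an attack. Frames are reduced here to flat lists of 8-bit pixel values; this is an illustration of the metrics, not the authors' evaluation code.

```python
import math

def psnr(original, watermarked, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-length pixel lists."""
    mse = sum((o - w) ** 2 for o, w in zip(original, watermarked)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def normalized_correlation(w_ref, w_ext):
    """Normalized correlation between the reference and extracted watermark."""
    num = sum(a * b for a, b in zip(w_ref, w_ext))
    den = math.sqrt(sum(a * a for a in w_ref) * sum(b * b for b in w_ext))
    return num / den

frame = [100, 120, 130, 90]
marked = [101, 119, 131, 90]   # embedding changed the pixels only slightly
print(psnr(frame, marked) > 40)                             # True: imperceptible
print(normalized_correlation([1, 0, 1, 1], [1, 0, 1, 1]))   # 1.0: watermark intact
```

A PSNR above roughly 40 dB is commonly taken as visually imperceptible, while correlation values near 1.0 after an attack indicate robustness.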
Procedia PDF Downloads 224
413 Rehabilitation of Orthotropic Steel Deck Bridges Using a Modified Ortho-Composite Deck System
Authors: Mozhdeh Shirinzadeh, Richard Stroetmann
Abstract:
An orthotropic steel deck bridge consists of a deck plate, longitudinal stiffeners under the deck plate, cross beams and the main longitudinal girders. Due to several advantages, Orthotropic Steel Deck (OSD) systems have been utilized in many bridges worldwide. The significant feature of this structural system is its high load-bearing capacity combined with a relatively low dead weight. In addition, cost efficiency and the ability of rapid field erection have made the orthotropic steel deck a popular type of bridge worldwide. However, OSD bridges are highly susceptible to fatigue damage, and their large number of welded joints can be regarded as the main weakness of the system. This problem is particularly evident in bridges built before 1994, when fatigue design criteria had not yet been introduced in the bridge design codes. Recently, an orthotropic-composite slab (OCS) for road bridges has been experimentally and numerically evaluated and developed at Technische Universität Dresden as part of AIF-FOSTA research project P1265. The results of the project have provided a solid foundation for the design and analysis of orthotropic-composite decks with dowel strips as a durable alternative to conventional steel or reinforced concrete decks. Building on the achievements of that project, the application of a modified ortho-composite deck to an existing typical OSD bridge is investigated. Composite action is obtained by using rows of dowel strips in a clothoid (CL) shape. With regard to the Eurocode criteria for the different fatigue detail categories of an OSD bridge, the effect of the proposed modification approach is assessed. Moreover, a numerical parametric study is carried out using finite element software to determine the impact of different variables, such as the size and arrangement of the dowel strips, the application of transverse or longitudinal rows of dowel strips, and local wheel loads.
For verification of the simulation technique, experimental results from a segment of an OCS deck tested in project P1265 are used. The fatigue assessment is performed based on the latest draft of Eurocode 1993-2 (2024) for the most probable detail categories (hot spots) reported in previous statistical studies. An analytical comparison is then provided between the typical orthotropic steel deck and the modified ortho-composite deck bridge in terms of fatigue and durability. The load-bearing capacity of the bridge, the critical deflections and the composite behavior are also evaluated and compared. The results give a comprehensive overview of the efficiency of the rehabilitation method, considering the required design service life of the bridge. Moreover, the proposed approach is assessed with regard to the construction method, details and practical aspects, as well as from an economic point of view.
Keywords: composite action, fatigue, finite element method, steel deck, bridge
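The fatigue check underlying a detail-category comparison of this kind can be sketched as follows: a Eurocode-style S-N line with slope m = 3 (referenced to 2 million cycles) gives the endurance at each stress range, and a Palmgren-Miner sum accumulates damage over the design-life spectrum. The stress ranges, cycle counts and category values below are illustrative, and the Eurocode constant-amplitude and cut-off limits are deliberately omitted for brevity.

```python
def cycles_to_failure(delta_sigma, detail_category, m=3, n_ref=2e6):
    """Endurance N from the S-N line: N = n_ref * (category / range)^m."""
    return n_ref * (detail_category / delta_sigma) ** m

def miner_damage(spectrum, detail_category):
    """Palmgren-Miner damage sum over (stress range, cycle count) pairs;
    a value below 1.0 means the detail survives the design life."""
    return sum(n / cycles_to_failure(ds, detail_category) for ds, n in spectrum)

# Illustrative traffic spectrum: (stress range in MPa, cycles over design life)
spectrum = [(80.0, 5e5), (50.0, 5e6), (30.0, 2e7)]

# Same spectrum, two hypothetical detail categories (MPa at 2e6 cycles):
print(miner_damage(spectrum, detail_category=71.0))   # weaker welded detail
print(miner_damage(spectrum, detail_category=100.0))  # improved composite detail
```

Raising the governing hot spot to a higher detail category — the intended effect of the composite retrofit — pushes the damage sum for the same traffic spectrum from above 1.0 to below it.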
Procedia PDF Downloads 83
412 Racial Distress in the Digital Age: A Mixed-Methods Exploration of the Effects of Social Media Exposure to Police Brutality on Black Students
Authors: Amanda M. McLeroy, Tiera Tanksley
Abstract:
The 2020 movement for Black Lives, ignited by anti-Black police brutality and exemplified by the public execution of George Floyd, underscored the dual potential of social media for political activism and perilous exposure to traumatic content for Black students. This study employs Critical Race Technology Theory (CRTT) to scrutinize algorithmic anti-blackness and its impact on Black youth's lives and educational experiences. The research investigates the consequences of vicarious exposure to police brutality on social media among Black adolescents through qualitative interviews and quantitative scale data. The findings reveal an unprecedented surge in exposure to viral police killings since 2020, resulting in profound physical, socioemotional, and educational effects on Black youth. CRTT forms the theoretical basis, challenging the notion of digital technologies as post-racial and neutral, aiming to dismantle systemic biases within digital systems. Black youth, averaging over 13 hours of daily social media use, face constant exposure to graphic images of Black individuals dying. The study connects this exposure to a range of physical, socioemotional, and mental health consequences, emphasizing the urgent need for understanding and support. The research proposes questions to explore the extent of police brutality exposure and its effects on Black youth. Qualitative interviews with high school and college students and quantitative scale data from undergraduates contribute to a nuanced understanding of the impact of police brutality exposure on Black youth. Themes of unprecedented exposure to viral police killings, physical and socioemotional effects, and educational consequences emerge from the analysis. The study uncovers how vicarious experiences of negative police encounters via social media lead to mistrust, fear, and psychosomatic symptoms among Black adolescents. 
Implications for educators and counselors are profound, emphasizing the cultivation of empathy, the provision of mental health support, the integration of media literacy education, and the encouragement of activism. Recognizing family and community influences is crucial for comprehensive support. Professional development opportunities in culturally responsive teaching and trauma-informed approaches are recommended for educators. In conclusion, creating a supportive educational environment that addresses the emotional impact of social media exposure to police brutality is crucial for the well-being and development of Black adolescents. Counselors, through safe spaces and collaboration, play a vital role in supporting Black youth facing the distressing effects of social media exposure to police brutality.
Keywords: black youth, mental health, police brutality, social media
411 Prevention and Treatment of Hay Fever Prevalence by Natural Products: A Phytochemistry Study on India and Iran
Authors: Tina Naser Torabi
Abstract:
The prevalence of allergy is affected by different factors, including seasonal weather changes, and requires various treatments. Although the causes of allergy are not fully understood, allergens generally provoke a reaction between antigen and antibody because of their antigenic traits. In this state, allergens cause the immune system to mistakenly identify harmless material as a threat, so immune function is impaired through histamine secretion. There are different triggers of allergy; plant-derived allergens top the list, although animal sources cannot be ignored. Importantly, each allergenic compound elicits a dedicated antibody, so in general every kind of allergy differs from the others. Most plants in the allergenic category can therefore cause various allergies in human beings, such as respiratory, nutritional, injection, infection, and contact allergies, each of which shows different symptoms depending on the cause and requires different prevention and treatment. Geographical conditions are another important factor in allergy: seasonal changes, weather conditions, and the variety of plant cover play important roles in different allergies. It goes without saying that the humid climate and varied plant cover across the seasons, especially spring, cause most allergies in Iran and India, the two countries discussed in this article. These two countries are well suited for studying allergy prevalence because of their conditions, varied plant cover, and human and animal factors. Hay fever, although the reasons for its prevalence are not yet fully known, is one of the most common allergies in Iran and India because of geographical, human, animal, and plant factors, and it tops the list in both countries.
A significant point about these two countries is that the plant factor is the most important in the prevalence of hay fever. The variety of plant cover, especially during spring pollination, is the main reason for hay fever prevalence in both countries. Based on research in pharmacognosy and phytochemistry, the spring pollination of certain plants is a major cause of hay fever prevalence there. If airborne pollens enter the human body through the air during the pollination season, they cause allergic reactions in the eyes, nasal mucosa, lungs, and respiratory system; if these particles enter the body of a susceptible person through food, they cause allergic reactions in the mouth, stomach, and the rest of the digestive system. Occasionally, chemicals produced by the human body, such as histamine, cause problems like the development of nasal polyps, nasal blockage, sleep disturbance, increased risk of asthma, blood vasodilation, sneezing, watery eyes, itching and swelling of the eyes and nasal mucosa, urticaria, a decrease in blood pressure, and, rarely, trauma, anesthesia, anaphylaxis, and finally death. This article studies the reasons for hay fever prevalence in Iran and India and presents prevention and treatment methods, from a phytochemistry and pharmacognosy point of view, using local natural products of these two countries.
Keywords: hay fever, India, Iran, natural treatment, phytochemistry
410 Characterization of Alloyed Grey Cast Iron Quenched and Tempered for a Smooth Roll Application
Authors: Mohamed Habireche, Nacer E. Bacha, Mohamed Djeghdjough
Abstract:
In the brick industry, a smooth double roll crusher is used for medium and fine crushing of soft to medium-hard material. Due to the opposite inward rotation of the rolls, the feed material is nipped between the rolls and crushed by compression. The rolls are subject to intense wear, known as three-body abrasion, due to the action of abrasive products. Production downtime affecting productivity stems from two sources: the bi-monthly rectification of the roll crushers and their replacement when they are completely worn out. Choosing the right material for the roll crushers should result in longer machine cycles and reduced repair and maintenance costs. All roll crushers are imported from outside Algeria, which sometimes results in very long delivery times that handicap the brickyards, in particular in respecting delivery times and honoring customers' orders. The aim of this work is to investigate the effect of alloying additions on the microstructure and wear behavior of grey lamellar cast iron for smooth roll crushers in the brick industry. The base grey iron was melted in a low-frequency induction furnace at a temperature of 1500 °C, and return cast iron scrap, new cast iron ingot, and steel scrap were added to the melt to generate the desired composition. The chemical analysis of the bar samples was carried out using an Emission Spectrometer System PV 8050 Series (Philips), except for carbon, for which a carbon/sulphur analyser (Elementrac CS-i) was used. Unetched microstructures were used to evaluate the graphite flake morphology using the image comparison measurement method. At least five different fields were selected for quantitative estimation of the phase constituents. The samples were observed at X100 magnification with a Zeiss Axiovert 40 MAT optical microscope equipped with a digital camera. An SEM equipped with EDS was used to characterize the phases present in the microstructure.
The hardness (750 kg load, 5 mm diameter ball) was measured with a Brinell testing machine on test pieces in both the heat-treated and as-solidified conditions. The test bars were used for tensile strength and metallographic evaluations. Mechanical properties were evaluated using tensile specimens made as per the ASTM E8 standard; two specimens were tested for each alloy, with one test piece taken from each rod. The results showed that the quenched and tempered alloys had the best wear resistance at 400 °C for the alloyed grey cast iron (containing 0.62% Mn, 0.68% Cr, and 1.09% Cu) due to fine carbides in the tempered matrix. In the quenched and tempered condition, increasing the Cu content in the cast irons improved wear resistance moderately. Combined addition of Cu and Cr increases hardness and wear resistance for a quenched and tempered hypoeutectic grey cast iron.
Keywords: casting, cast iron, microstructure, heat treating
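As a side note on the hardness test above, the Brinell number follows from the load, ball diameter, and measured indentation diameter via the standard formula; the sketch below uses a hypothetical indentation reading, not a measurement from this study.

```python
import math

def brinell_hardness(load_kgf, ball_dia_mm, indent_dia_mm):
    """Brinell hardness HB = 2F / (pi * D * (D - sqrt(D^2 - d^2))),
    with F the load in kgf, D the ball diameter and d the indentation
    diameter (both in mm), as in the 750 kgf / 5 mm ball test above."""
    D, d = ball_dia_mm, indent_dia_mm
    return 2 * load_kgf / (math.pi * D * (D - math.sqrt(D * D - d * d)))

# Hypothetical reading: a 2.5 mm indentation under 750 kgf, 5 mm ball.
print(round(brinell_hardness(750, 5.0, 2.5), 1))  # ≈ 142.6
```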
409 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor
Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar
Abstract:
Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for it. Custom-made orthoses provide more comfort and can correct issues better than those available over the counter; however, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved accuracy and reduced production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using the Microsoft Kinect2 sensor, which can capture RGB and depth frames simultaneously, at up to 30 fps, with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees, and the RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920 x 1080 px, while that of the depth frame is 512 x 424 px; as the resolutions are not equal, RGB pixels are mapped onto the depth pixels so that data is not lost even though the resolution is lower. The resulting RGB-D frames are collected and, using the depth coordinates, a three-dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consists of 8 coloured cubes, connected by rods to form a skeleton-cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing it. The point clouds are merged by taking one of the cubes as the origin of the reference system.
Depending on the relative distance from each cube, the three-dimensional coordinate points from each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data is used to correct any errors in the depth data. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which produces a rough approximation of the surface of the limb. The mesh is then smoothened to obtain a smooth outer layer and an accurate model of the limb. This model is used as a base for designing the custom orthotic brace or cast: it is transferred to a CAD/CAM design file so that the brace can be designed above the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans to generate 3D models, quicker than traditional plaster of Paris cast modelling, and its overall setup time is also low. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory for modelling.
Keywords: 3d scanning, mesh generation, Microsoft kinect, orthotics, registration
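A minimal sketch of the merging step described above, assuming each Kinect's pose relative to the origin cube is already known; the rotation matrices and cube offsets below are hypothetical illustrations, not calibrated values from the rig:

```python
import numpy as np

def to_reference_frame(points, cube_origin, rotation):
    """Rigidly transform one view's point cloud into the common frame
    defined by the origin cube of the skeleton-cube reference system.
    points: (N, 3) array in this Kinect's local coordinates;
    cube_origin: (3,) position of the origin cube seen by this Kinect;
    rotation: (3, 3) matrix aligning this Kinect's axes with the frame."""
    return (np.asarray(points) - cube_origin) @ np.asarray(rotation).T

# Front view: axes already aligned, origin cube seen 0.5 m ahead.
front = to_reference_frame([[0.6, 0.1, 0.0]], [0.5, 0.0, 0.0], np.eye(3))

# Side view rotated 90 degrees about the vertical (z) axis.
rot_z90 = np.array([[0.0, -1.0, 0.0],
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 1.0]])
side = to_reference_frame([[0.2, 0.3, 0.1]], [0.1, 0.1, 0.0], rot_z90)

merged = np.vstack([front, side])  # complete point cloud in one frame
print(merged.shape)
```

A surface mesh could then be fitted to `merged`, e.g. by Delaunay triangulation as the abstract describes.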
408 Predictors of Response to Interferon Therapy in Chronic Hepatitis C Virus Infection
Authors: Ali Kassem, Ehab Fawzy, Mahmoud Sef el-eslam, Fatma Salah- Eldeen, El zahraa Mohamed
Abstract:
Introduction: The combination of interferon (IFN) and ribavirin is the preferred treatment for chronic hepatitis C virus (HCV) infection. However, non-response to this therapy remains common and is associated with several factors, such as HCV genotype and viral load, in addition to host factors such as sex, HLA type, and cytokine polymorphisms. Aim of the work: The aim of this study was to determine predictors of response to IFN therapy in chronic HCV-infected patients treated with IFN alpha and ribavirin combination therapy. Patients and Methods: The present study included 110 patients (62 males, 48 females) with chronic HCV infection, aged 20-59 years. Inclusion criteria followed the protocol of the Egyptian National Committee for Control of Viral Hepatitis. Patients were recruited to receive IFN-ribavirin combination therapy: 54 patients received pegylated IFN α-2a (180 μg) with weight-based ribavirin (1000 mg if < 75 kg, 1200 mg if > 75 kg) for 48 weeks, and 53 patients received pegylated IFN α-2b (1.5 μg/kg/week) with weight-based ribavirin (800 mg if < 65 kg, 1000 mg if 65-75 kg, and 1200 mg if > 75 kg). One hundred and seven liver biopsies were included in the study and submitted to histopathological examination. Hematoxylin and eosin (H&E) stained sections were used to assess both the grade and the stage of chronic viral hepatitis, in addition to the degree of steatosis; the modified hepatic activity index (HAI) grading, modified Ishak staging, and Metavir grading and staging systems were used. Laboratory follow-up included HCV PCR at the 12th week, to assess the early virologic response (EVR), and at the 24th week. HCV PCR was also done at the end of the course and 6 months later, to document the end-of-treatment virologic response (ETR) and the sustained virologic response (SVR), respectively.
Results: One hundred and seven patients, 62 males (57.9%) and 45 females (42.1%), completed the course and were included in this study. The age of the patients ranged from 20 to 59 years, with a mean of 40.39 ± 10.03 years. Six months after the end of treatment, patients were categorized into two groups: Group 1, patients who achieved a sustained virological response (SVR), and Group 2, patients who did not (non-SVR), comprising non-responders, breakthrough patients, and relapsers. In our study, 58 (54.2%) patients showed SVR, 18 (16.8%) were non-responders, 15 (14%) showed breakthrough, and 16 (15%) were relapsers. Univariate binary regression analysis of the possible risk factors for non-SVR showed that the significant factors were higher age, higher fasting insulin level, higher Metavir stage, and higher grade of hepatic steatosis. Multivariate binary regression analysis showed that the only independent risk factor for non-SVR was a high fasting insulin level. Conclusion: Younger age, lower Metavir stage, lower steatosis grade, and lower fasting insulin level are good predictors of SVR and could be used to predict the response to pegylated interferon/ribavirin therapy.
Keywords: chronic HCV infection, interferon ribavirin combination therapy, predictors to antiviral therapy, treatment response
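The kind of binary regression used above can be sketched as a plain logistic fit. The snippet below is an illustration on synthetic data only: the cohort, effect size, and predictor values are invented, not the study's data.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Binary logistic regression by gradient descent.
    X: (n, p) centred predictors (e.g. age, fasting insulin, Metavir
    stage, steatosis grade); y: (n,) outcome, 1 = non-SVR, 0 = SVR.
    Returns weights [intercept, coefs...]; exp(coef) are odds ratios."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

# Synthetic cohort where higher "fasting insulin" raises non-SVR odds.
rng = np.random.default_rng(0)
insulin = rng.normal(15.0, 5.0, 200)
p_true = 1.0 / (1.0 + np.exp(-0.3 * (insulin - 15.0)))
non_svr = (rng.random(200) < p_true).astype(float)

w = fit_logistic((insulin - insulin.mean())[:, None], non_svr)
print(np.exp(w[1]))  # fitted odds ratio per unit increase in insulin
```

Centring the predictor keeps plain gradient descent stable; a multivariate fit simply stacks more columns into `X`.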
407 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have non-linear and non-stationary characteristics, exhibit strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively recent technique, part of a more general signal processing method called the Hilbert-Huang transform. This analysis method is particularly suitable for non-linear and non-stationary signals and consists in adaptively decomposing a signal into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of EMD are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, its main limitation is a frequency resolution that may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap. To overcome this problem, J. Gilles proposed an alternative, the Empirical Wavelet Transform (EWT), which consists in building a bank of filters from a segmentation of the original signal's Fourier spectrum. The method is based on the ideas used in the construction of both the Littlewood-Paley and Meyer wavelets. Its heart lies in segmenting the Fourier spectrum, based on local maxima detection, into a set of non-overlapping segments. Because it is tied to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore overcomes the mode-mixing problem.
On the other hand, while the EWT technique can detect the frequencies involved in the fluctuations of the original time series, it cannot associate the detected frequencies with a specific mode of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of EMD and EWT, which uses the spectral content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained with EMD, EWT, and EAWD on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
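The spectrum-segmentation idea at the heart of EWT can be illustrated in a few lines: find the dominant local maxima of the Fourier magnitude spectrum and place a boundary at the lowest point between consecutive maxima. This is a simplified sketch of the boundary-detection step only, not Gilles' full algorithm or the EAWD coupling.

```python
import numpy as np

def ewt_boundaries(signal, n_segments):
    """Toy EWT-style segmentation: keep the n_segments largest local
    maxima of the half-spectrum and put a boundary at the minimum
    between each consecutive pair. Returns boundary bin indices."""
    spec = np.abs(np.fft.rfft(signal))
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
    top = sorted(sorted(peaks, key=lambda i: spec[i])[-n_segments:])
    return [int(np.argmin(spec[a:b])) + a
            for a, b in zip(top[:-1], top[1:])]

# Two tones at 5 Hz and 20 Hz, 1 s sampled at 256 Hz: the single
# boundary should fall between frequency bins 5 and 20.
t = np.arange(256) / 256.0
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
print(ewt_boundaries(x, 2))
```

In EAWD, as described above, the IMF spectra would guide where these boundaries are placed instead of raw peak magnitude.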
406 Investigating the Influence of Solidification Rate on the Microstructural, Mechanical and Physical Properties of Directionally Solidified Al-Mg Based Multicomponent Eutectic Alloys Containing High Mg Alloys
Authors: Fatih Kılıç, Burak Birol, Necmettin Maraşlı
Abstract:
The directional solidification process is generally used for homogeneous compound production, single crystal growth, refining (zone refining), and similar processes. The two most important parameters that control eutectic structures are the temperature gradient and the growth rate, which are called the solidification parameters. Solidification behavior and microstructure characteristics are an interesting topic due to their effects on the properties and performance of alloys of eutectic composition, and the solidification behavior of multicomponent and multiphase systems is an important parameter for determining various properties of these materials. Research has mostly addressed the solidification of pure materials or alloys containing two phases; there are very few studies in the literature on multiphase reactions and microstructure formation of multicomponent alloys during solidification. It is therefore important to study the microstructure formation and the thermodynamic, thermophysical, and microstructural properties of these alloys. The production process is difficult due to the easy oxidation of magnesium, and therefore there is no comprehensive study concerning alloys containing high Mg (> 30 wt.% Mg). With increasing amounts of Mg in Al alloys, the specific weight decreases and the strength increases slightly, while ductility is lowered by the formation of the β-Al8Mg5 phase. For this reason, the production, examination, and development of high-Mg-containing alloys will initiate the production of new advanced engineering materials. The original value of this research lies in obtaining high-Mg-containing (> 30% Mg) Al-based multicomponent alloys by melting under vacuum; controlled directional solidification at various growth rates and a constant temperature gradient; and establishing the relationship between solidification rate and microstructural, mechanical, electrical, and thermal properties.
Therefore, within the scope of this research, several > 30% Mg containing ternary or quaternary Al alloy compositions were determined, and the effects of the directional solidification rate on the mechanical, electrical, and thermal properties of these alloys are to be investigated. The influence of the growth rate on the microstructure parameters, microhardness, tensile strength, electrical conductivity, and thermal conductivity of directionally solidified, high-Mg-containing Al-32.2Mg-0.37Si, Al-30Mg-12Zn, Al-32Mg-1.7Ni, Al-32.2Mg-0.37Fe, Al-32Mg-1.7Ni-0.4Si, and Al-33.3Mg-0.35Si-0.11Fe (wt.%) alloys will be investigated over a wide range of growth rates (50-2500 µm/s) at a fixed temperature gradient. The work is planned as: (a) directional solidification of the Al-Mg based Al-Mg-Si, Al-Mg-Zn, Al-Mg-Ni, Al-Mg-Fe, Al-Mg-Ni-Si, and Al-Mg-Si-Fe alloys over a wide range of growth rates (50-2500 µm/s) at a constant temperature gradient in a Bridgman-type solidification system; (b) analysis of the microstructure parameters of the directionally solidified alloys using optical light microscopy and scanning electron microscopy (SEM); (c) measurement of the microhardness and tensile strength of the directionally solidified alloys; (d) measurement of electrical conductivity by the four-point probe technique at room temperature; and (e) measurement of thermal conductivity by the linear heat flow method at room temperature.
Keywords: directional solidification, electrical conductivity, high Mg containing multicomponent Al alloys, microhardness, microstructure, tensile strength, thermal conductivity
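For step (d), the standard reduction from a collinear four-point probe reading to bulk resistivity can be sketched as follows; the voltage, current, and spacing values are hypothetical illustrations, not measurements from this work.

```python
import math

def bulk_resistivity(voltage_v, current_a, spacing_m):
    """Resistivity from a collinear four-point probe on a bulk
    (semi-infinite) sample with equal probe spacing s:
        rho = 2 * pi * s * V / I
    Geometric correction factors for finite sample size are omitted;
    this is only a sketch of the room-temperature measurement."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# Hypothetical reading: 1 mm probe spacing, 100 mA drive, 0.5 mV drop.
rho = bulk_resistivity(0.5e-3, 0.1, 1.0e-3)
print(rho)  # resistivity in ohm-metre; conductivity is 1 / rho
```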
405 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling
Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather
Abstract:
New methods have recently been introduced to improve the thermal property values of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt.%) by doping them with minute concentrations of nanoparticles, in the range of 0.5 to 1.5 wt.%, to form a so-called nano-heat-transfer-fluid, apt for thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, where the motion of the nanoparticles embedded in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurement of various multi-scale forces whose characteristic length and time scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios: water (5 to 95 °C) and molten nitrate salt (220 to 500 °C), at volume fractions ranging between 1% and 5%. The dynamic properties of both phases are coupled to the ambient temperature of the suspension. The three-dimensional computational region consists of a 1 μm cube with particles homogeneously distributed across the domain, and periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δts, set at 10^-11 s. The implemented technique captures the key dynamics of aggregating nanoparticles: Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms underpin the predictive model of aggregation of nano-suspensions. An energy transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension.
The simulation results confirm the effectiveness of the technique: the values are in excellent agreement with theoretical and experimental data obtained from similar studies. The predictions indicate the roles of Brownian motion and the DLVO force (comprising both the repulsive electric double layer and the attractive van der Waals interaction) and their influence on the level of nanoparticle agglomeration. The nano-aggregates formed were found to play a key role in governing the thermal behavior of the nanofluids at the various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about the nanofluids' heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling
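The fixed-step fourth-order Runge-Kutta integration mentioned above can be sketched for a single toy degree of freedom. The drag time and force scale below are invented for illustration, not the paper's parameters, and the Brownian force is crudely held constant over each step rather than treated with a proper stochastic integrator.

```python
import numpy as np

def rk4_step(v, f, dt):
    """One classical fourth-order Runge-Kutta step for dv/dt = f(v)."""
    k1 = f(v)
    k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2)
    k4 = f(v + dt * k3)
    return v + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Toy 1-D particle velocity under Stokes-like drag plus a random kick
# redrawn each step (all values illustrative, in arbitrary units).
rng = np.random.default_rng(1)
tau = 1e-9          # hypothetical momentum relaxation time, s
dt = 1e-11          # fixed time step, as quoted in the abstract
v = 0.0
for _ in range(1000):
    f_brown = rng.normal(0.0, 1.0)                  # frozen over the step
    v = rk4_step(v, lambda u: -u / tau + f_brown, dt)
print(v)
```

A production code, as the abstract describes, would add soft-sphere collisions and DLVO pair forces to the right-hand side.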
404 Intriguing Modulations in the Excited State Intramolecular Proton Transfer Process of Chrysazine Governed by Host-Guest Interactions with Macrocyclic Molecules
Authors: Poojan Gharat, Haridas Pal, Sharmistha Dutta Choudhury
Abstract:
Tuning the photophysical properties of guest dyes through host-guest interactions with macrocyclic hosts has been an attractive research area for the past few decades, as the resulting changes can be directly implemented in chemical sensing, molecular recognition, fluorescence imaging, and dye laser applications. Excited state intramolecular proton transfer (ESIPT) is an intramolecular prototautomerization process displayed by some specific dyes, and it is quite amenable to tuning by the presence of different macrocyclic hosts. The present study explores the interesting effects of p-sulfonatocalix[n]arene (SCXn) and cyclodextrin (CD) hosts on the excited-state prototautomeric equilibrium of chrysazine (CZ), a model antitumour drug. CZ exists exclusively in its normal form (N) in the ground state. In the excited state, however, the excited N* form undergoes ESIPT along its pre-existing intramolecular hydrogen bonds, giving the excited-state prototautomer (T*). Accordingly, CZ shows a single absorption band, due to the N form, but two emission bands, due to the N* and T* forms. Facile prototautomerization of CZ is considerably inhibited when the dye binds to SCXn hosts; in spite of a lower binding affinity, the inhibition is more pronounced with the SCX6 host than with SCX4. For the CD-CZ system, the prototautomerization process is hindered by the presence of βCD but remains unaffected in the presence of γCD. Reduction of the prototautomerization of CZ by the SCXn and βCD hosts is unusual: because the T* form is less dipolar than N*, binding of CZ within relatively hydrophobic host cavities should have enhanced the prototautomerization process. At the same time, given the similar chemical nature of the two CD hosts, their effects on the prototautomerization of CZ would be expected to be similar.
The atypical effects of the studied hosts on the prototautomerization process of CZ are suggested to arise from the partial inclusion or external binding of CZ with the hosts. As a result, there is a strong possibility of intermolecular H-bonding between the CZ dye and the functional groups present at the portals of the SCXn and βCD hosts. Formation of these intermolecular H-bonds effectively weakens the pre-existing intramolecular H-bonding network within the CZ molecule, and this consequently reduces the prototautomerization of the dye. Our results suggest that rather than the binding affinity between dye and host, it is the orientation of CZ, in the case of the SCXn-CZ complexes, and the binding stoichiometry, in the case of the CD-CZ complexes, that play the predominant role in influencing the prototautomeric equilibrium of CZ. For the SCXn-CZ complexes, the experimental findings are well supported by quantum chemical calculations. Similarly, for the CD-CZ systems, the binding stoichiometries obtained through geometry optimization of the complexes between CZ and the CD hosts correlate nicely with the experimental results: geometry optimization reveals βCD-CZ complexes of 1:1 stoichiometry and γCD-CZ complexes of 1:1, 1:2, and 2:2 stoichiometries, in good accordance with the observed effects of the βCD and γCD hosts on the ESIPT process of the CZ dye.
Keywords: intermolecular proton transfer, macrocyclic hosts, quantum chemical studies, photophysical studies
403 Blending Synchronous with Asynchronous Learning Tools: Students’ Experiences and Preferences for Online Learning Environment in Resource-Constrained Higher Education Situations in Uganda
Authors: Stephen Kyakulumbye, Vivian Kobusingye
Abstract:
Generally, the world over, COVID-19 has had adverse effects on all sectors, with the most debilitating effects on the education sector. After reactive lockdowns, education institutions that could continue teaching and learning had to move to distance modes mediated by digital technological tools. In Uganda, the Ministry of Education issued COVID-19 Online Distance E-learning (ODeL) emergency guidelines. Despite such guidelines, academic institutions in Uganda and similar developing contexts with constrained resource environments were caught off guard and ill-prepared to transform from face-to-face learning to online distance learning. Most academic institutions that migrated spontaneously did so with no deliberate tools, systems, strategies, or software to produce active, meaningful, and engaging learning for students; in practice, most shifted to Zoom and WhatsApp and conducted online teaching in real time rather than blending synchronous and asynchronous tools. This paper reports students' experiences of blending synchronous and asynchronous content-creation and learning tools to navigate a technologically resource-constrained environment in the challenging Ugandan context. These conceptual case-based findings, using experience from Uganda Christian University (UCU), point to the design of learning activities that enhance synchronous learning technologies with asynchronous ones, mitigating the risk of system breakdown, shifting passive learning toward active learning, and strengthening the types of presence (social, cognitive, and facilitatory).
The paper, both empirical and experiential in nature, draws on the online experiences of third-year Bachelor of Business Administration students lectured using asynchronous text, audio, and video created with OBS (Open Broadcaster Software) Studio and compressed with HandBrake, all open-source software, to mitigate disk space and bandwidth usage challenges. The synchronous online engagements with students were a blend of Zoom and BigBlueButton, ensuring that students had an alternative in case one failed due to excessive real-time traffic. Generally, students report that, compared to their previous face-to-face lectures, the pre-recorded lectures on YouTube gave them an opportunity to reflect on content in a self-paced manner, which later enabled them to engage actively during the live Zoom and/or BigBlueButton real-time discussions and presentations. The major recommendation is that lecturers and teachers in environments with limited digital resources, such as internet access and digital devices, should harness this approach to offer students access to learning content in a self-paced manner, thereby enabling active learning through reflection and higher-order thinking.
Keywords: synchronous learning, asynchronous learning, active learning, reflective learning, resource-constrained environment
402 Distributed Energy Resources in Low-Income Communities: A Public Policy Proposal
Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi
Abstract:
The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and have not yet reached the low-income population, a concern aligned with theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs and bills. However, merely granting this benefit may not be effective, and it is possible to combine it with DER technologies such as PVDG. Thus, this work aims to evaluate the economic viability of a policy replacing the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that included mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities would low-income consumers have lower bills with PVDG than with the SET, and which consumers in a given city would receive increased subsidies, which are currently provided both for solar energy in Brazil and for the social tariff. An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic factors (the tariff of a particular distribution utility, the radiation of a specific location, etc.). To validate these results, four sensitivity analyses were performed: varying the simultaneity factor between generation and consumption, varying the tariff readjustment rate, zeroing CAPEX, and exempting the state tax.
The behind-the-meter modality of generation proved more promising than the construction of a shared plant. However, although behind-the-meter generation presents better results, adopting it is more complex due to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks and the need to reinforce roofs). For the shared power plant modality, many opportunities are still envisaged, since the risk of investing in such a policy can be mitigated; furthermore, this modality can be an alternative because it mitigates the risk of default, allows greater control of users, and facilitates operation and maintenance. Finally, it was also found that in some regions of Brazil the continuity of the SET presents more economic benefits than its replacement by PVDG. The proposed policy nevertheless offers many opportunities. In future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk. In addition, other renewable sources of distributed generation can be studied for this purpose.
Keywords: low income, subsidy policy, distributed energy resources, energy justice
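The core bill comparison behind the municipality-level screening described above can be sketched very simply: the annual bill under a flat SET discount versus the net bill with behind-the-meter PV. All numbers below (consumption, tariff, discount rate, PV yield) are hypothetical illustrations, not the study's data or Brazil's actual tariff rules.

```python
def annual_bill_social_tariff(consumption_kwh, tariff, discount):
    """Annual bill under the social electricity tariff (SET), modelled
    here as a flat percentage discount on the volumetric tariff."""
    return consumption_kwh * tariff * (1.0 - discount)

def annual_bill_with_pv(consumption_kwh, tariff, pv_kwp, yield_kwh_per_kwp):
    """Annual bill with behind-the-meter PV, assuming generation simply
    offsets consumption down to zero (no fixed charges modelled)."""
    generation = pv_kwp * yield_kwh_per_kwp
    return max(consumption_kwh - generation, 0.0) * tariff

# Hypothetical household: 1800 kWh/yr, R$ 0.80/kWh, 65% SET discount,
# versus a 1 kWp system yielding 1400 kWh/kWp per year.
set_bill = annual_bill_social_tariff(1800, 0.80, 0.65)
pv_bill = annual_bill_with_pv(1800, 0.80, 1.0, 1400)
print(set_bill, pv_bill)  # whichever is lower favours that policy locally
```

Repeating this comparison with each municipality's utility tariff and local solar yield is the spirit of the screening; the full model would add CAPEX, the simultaneity factor, and tariff readjustment.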