Search results for: sediment capture
122 The Lived Experiences and Coping Strategies of Women with Attention Deficit and Hyperactivity Disorder (ADHD)
Authors: Oli Sophie Meredith, Jacquelyn Osborne, Sarah Verdon, Jane Frawley
Abstract:
PROJECT OVERVIEW AND BACKGROUND: Over one million Australians are affected by ADHD at an economic and social cost of over $20 billion per annum. Despite experiencing significantly worse health outcomes than men, women have historically been overlooked in ADHD diagnosis and treatment. While research suggests physical activity and other non-prescription options can help with ADHD symptoms, the frontline response to ADHD remains expensive stimulant medications that can have adverse side effects. By interviewing women with ADHD, this research will examine women's self-directed approaches to managing symptoms, including alternatives to prescription medications. It will investigate barriers and affordances to potentially helpful approaches and identify any concerning strategies pursued in lieu of diagnosis. SIGNIFICANCE AND INNOVATION: Despite the economic and societal impact of ADHD on women, research investigating how women manage their symptoms is scant. This project is significant because although women's ADHD symptoms are markedly different to those of men, mainstream treatment has been based on the experiences of men. Further, it is thought that in developing nuanced coping strategies, women may have masked their symptoms. Thus, this project will highlight strategies which women deem effective in 'thriving' rather than just 'hiding'. By investigating the health service use, self-care and physical activity of women with ADHD, this research aligns with priority research areas identified by the November 2023 senate ADHD inquiry report. APPROACH AND METHODS: Semi-structured interviews will be conducted with up to 20 women with ADHD. Interviews will be conducted in person and online to capture experience across rural and metropolitan Australia. Participants will be recruited in partnership with the peak representative body, ADHD Australia. The research will use an intersectional framework, and data will be analysed thematically. This project is led by an interdisciplinary and cross-institutional team of women with ADHD. Reflexive interviewing skills will be employed to help interviewees feel more comfortable disclosing their experiences, especially where they share common ground. ENGAGEMENT, IMPACT AND BENEFIT: This research will benefit women with ADHD by increasing knowledge of strategies and alternative treatments to prescription medications, reducing the social and economic burden of ADHD on Australia and on individuals. It will also benefit women by identifying risks involved with some self-directed approaches pursued in lieu of medical advice. The project has an accessible impact plan to directly benefit end-users, which includes the development of a podcast and a PDF resource translating findings. The resources will reach a wide audience through ADHD Australia's extensive national networks. We will collaborate with Charles Sturt's Accessibility and Inclusion Division of Safety, Security and Well-being to create a targeted resource for students with ADHD.
Keywords: ADHD, women's health, self-directed strategies, health service use, physical activity, public health
121 Managing Human-Wildlife Conflicts Compensation Claims Data Collection and Payments Using a Scheme Administrator
Authors: Eric Mwenda, Shadrack Ngene
Abstract:
Human-wildlife conflicts (HWCs) are the main threat to conservation in Africa because wildlife needs overlap with those of humans. In Kenya, about 70% of wildlife occurs outside protected areas; as a result, wildlife and human ranges overlap, causing HWCs. The HWCs in Kenya occur in the drylands adjacent to protected areas. The top five counties with the highest incidences of HWC are Taita Taveta, Narok, Lamu, Kajiado, and Laikipia. The common wildlife species responsible for HWCs are elephants, buffaloes, hyenas, hippos, leopards, baboons, monkeys, snakes, and crocodiles. To ensure individuals affected by the conflicts are compensated, Kenya has developed a model of HWC compensation claims data collection and payment. We collected data on HWC from all eight Kenya Wildlife Service (KWS) Conservation Areas from 2009 to 2019. Additional data were collected from stakeholders' consultative workshops held in the Conservation Areas and from a literature review on injury payments and insurance schemes being practiced in these areas. This was followed by a description of the claims administration process and calculation of the pricing of the compensation claims. We further developed a digital platform for data capture and processing of all reported conflict cases and payments. Our product recognizes four categories of HWC (i.e., human death and injury, property damage, crop destruction, and livestock predation). Compensation for personal bodily injury and human death was based on the Continental Scale of Benefits. We proposed a maximum of Kenya Shillings (KES) 3,000,000 for death. Medical, pharmaceutical, and hospital expenses were capped at a maximum of KES 150,000, as well as funeral costs at KES 50,000. Pain and suffering were proposed to be paid for 12 months at the rate of KES 13,500 per month. Crop damage was to be based on farm input costs at a maximum of KES 150,000 per claim. Livestock predation leading to death was based on the Tropical Livestock Unit (TLU), which is equivalent to KES 30,000: cattle (1 TLU = KES 30,000), camel (1.4 TLU = KES 42,000), goat (0.15 TLU = KES 4,500), sheep (0.15 TLU = KES 4,500), and donkey (0.5 TLU = KES 15,000). Property destruction (buildings, outside structures and harvested crops) was capped at KES 150,000 per claim. We conclude that it is possible to use an administrator to collect data on HWC compensation claims and make payments using technology. The success of the new approach will depend on a piloting program. We recommend that a pilot scheme be initiated for eight months in Taita Taveta, Kajiado, Baringo, Laikipia, Narok, and Meru Counties. This will test the claims administration process as well as harmonize data collection methods. The results of this pilot will be crucial in adjusting the scheme before country-wide rollout.
Keywords: human-wildlife conflicts, compensation, human death and injury, crop destruction, predation, property destruction
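The livestock predation pricing above is a simple linear rule over Tropical Livestock Units. The following minimal Python sketch is illustrative only: it uses the TLU rates quoted in the abstract, but the function and the claim-record format are hypothetical, not the scheme administrator's actual software.

```python
# Illustrative calculator for livestock predation claims, based on the
# TLU rates quoted in the abstract (1 TLU = KES 30,000). Names and
# structure are hypothetical, not the actual scheme platform.

TLU_VALUE_KES = 30_000  # value of one Tropical Livestock Unit in Kenya Shillings

# TLU equivalents per animal, as listed in the abstract.
TLU_EQUIVALENT = {
    "cattle": 1.0,
    "camel": 1.4,
    "goat": 0.15,
    "sheep": 0.15,
    "donkey": 0.5,
}

def predation_payment(claim: dict[str, int]) -> int:
    """Compensation payable for animals lost to predation.

    `claim` maps species name to the number of animals killed.
    """
    total_tlu = sum(TLU_EQUIVALENT[species] * count
                    for species, count in claim.items())
    return round(total_tlu * TLU_VALUE_KES)

# Example: a claim for 2 cattle, 5 goats and 1 donkey.
claim = {"cattle": 2, "goat": 5, "donkey": 1}
print(predation_payment(claim))  # (2*1.0 + 5*0.15 + 1*0.5) * 30,000 = KES 97,500
```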
120 Monitoring of Vector Mosquitoes of Diseases in Areas of Influence of Energy Projects in the Amazon (Amapa State), Brazil
Authors: Ribeiro Tiago Magalhães
Abstract:
Objective: The objective of this study was to evaluate the influence of a hydroelectric power plant in the state of Amapá and to present the results obtained by dimensioning the diversity of the main mosquito vectors involved in the transmission of pathogens that cause diseases such as malaria, dengue and leishmaniasis. Methodology: The study was conducted on the banks of the Araguari River, in the municipalities of Porto Grande and Ferreira Gomes in the southern region of Amapá State. Nine monitoring campaigns were conducted, the first in April 2014 and the last in March 2016. Catch sites were selected to prioritize areas with possible occurrence of the species considered of greatest importance for public health and areas of contact between the wild environment and humans. Sampling efforts aimed to identify the local vector fauna and to relate it to the transmission of diseases. Three collection phases were established, covering the periods of greatest hematophagous activity. Sampling was carried out using Shannon shack and CDC light traps and by means of specimen collection with the hold method. This procedure was carried out during the morning (between 08:00 and 11:00), afternoon-twilight (between 15:30 and 18:30) and night (between 18:30 and 22:00). In the specific methodology of capture with the CDC equipment, the delimited times were from 18:00 until 06:00 the following day. Results: A total of 32 species of mosquitoes were identified, and 2,962 specimens were taxonomically assigned to three families (Culicidae, Psychodidae and Simuliidae) and to genera including Psorophora, Sabethes, Simulium, Uranotaenia and Wyeomyia, besides those represented by the family Psychodidae, which, owing to morphological complexities, can be safely identified (without diaphanization and slide mounting for microscopy) only at the subfamily level (Phlebotominae). Conclusion: The nine monitoring campaigns provided the basis for the design of the likely epidemiological structure in the areas of influence of the Cachoeira Caldeirão HPP, in order to point out which of the established sampling points would present the greatest possibilities of disease acquisition, according to the group of identified mosquitoes. However, what should mainly be considered are the future events arising from reservoir filling. This argument is based on the fact that the reproductive success of Culicidae is intrinsically related to the aquatic environment, where its larvae develop until adulthood. From the moment the water surface is expanded into new environments to form the reservoir, the development and hatching of eggs deposited in the substrate can change, causing a sudden explosion in the abundance of some genera, especially Anopheles, which prefers denser forest environments close to the water.
Keywords: Amazon, hydroelectric power plants
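Vector "diversity" in surveys of this kind is often summarized with an index such as Shannon's H' = -Σ pᵢ ln pᵢ. The abstract does not state which metric was used, so the Python sketch below is purely illustrative, with invented capture counts per genus (the abstract reports 2,962 specimens but not their breakdown).

```python
import math

# Hypothetical capture counts per genus (illustrative only; chosen to sum
# to the 2,962 specimens reported, but the real breakdown is not given).
counts = {"Psorophora": 410, "Sabethes": 120, "Simulium": 950,
          "Uranotaenia": 300, "Wyeomyia": 182, "Anopheles": 1000}

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over observed taxa."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in counts.values() if n > 0)

print(f"H' = {shannon_index(counts):.3f}")
```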
119 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities
Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun
Abstract:
With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, aiming to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities; the model integrates RFM analysis, K-means clustering, and LSTM networks to achieve this objective. The research utilizes RFM analysis, traditionally used in customer value assessment, to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the failure data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities, and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. Overall, the proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities. It significantly improves prediction accuracy and reliability compared to traditional methods, contributing to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities.
Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning
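As a rough illustration of the first two stages of such a pipeline (the abstract gives no implementation details, so the column names, failure log, and parameters below are assumptions), RFM features can be derived per component from its failure history and then segmented with K-means:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical failure log: one row per failure event per electrical component.
log = pd.DataFrame({
    "component_id": [1, 1, 2, 3, 3, 3],
    "failure_date": pd.to_datetime(
        ["2023-01-10", "2023-06-02", "2023-03-15",
         "2023-02-01", "2023-04-20", "2023-06-11"]),
    "repair_cost": [1200.0, 800.0, 450.0, 300.0, 350.0, 900.0],  # "monetary" impact
})

now = log["failure_date"].max()
rfm = log.groupby("component_id").agg(
    recency=("failure_date", lambda d: (now - d.max()).days),  # days since last failure
    frequency=("failure_date", "count"),                       # number of failures
    monetary=("repair_cost", "sum"),                           # total repair cost
)

# Standardize and segment components into groups with similar failure patterns.
X = StandardScaler().fit_transform(rfm)
rfm["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(rfm)
```

The per-cluster failure time series would then feed an LSTM to model the temporal dependencies, as the abstract describes.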
118 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement
Authors: Rajkumar Ghosh
Abstract:
Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on conventional techniques that may not capture the full complexity of these events; investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could therefore enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques: improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data; by broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. Data from the various sources are analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement, and the review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. However, challenges exist, such as data quality issues, modelling uncertainties, and computational complexity. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.
Keywords: earthquake, out-of-sequence thrust, disaster, human life
117 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder
Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada
Abstract:
From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of the speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without intention or consciousness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together (e.g., 'tokibu', 'tipolu'), with no pauses between them (e.g., 'tokibutipolugopilatokibu') and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented ('tokibu') from new sequences never presented together during exposure ('kipopi'), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC to test SL, it has come under increasing criticism, as it is an offline post-learning task that only assesses the result of the learning that occurred during the previous exposure phase and that might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP). One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and chronological-age typical language development (TLD) controls, who were exposed to an auditory stream embedding eight three-syllable nonsense words, four with high TPs and four with low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words' predictability. Moreover, to ascertain whether previous knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children from the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP 'words'. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation
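Transitional probability, the statistic at the heart of this paradigm, is simply TP(B|A) = count(AB) / count(A). The short Python sketch below (with a made-up toy stream, not the actual stimuli) shows how within-word TPs end up higher than across-word TPs:

```python
from collections import Counter

# Toy syllable stream built from two "words" (to-ki-bu, ti-po-lu);
# the real stimuli in the study differ.
stream = ["to", "ki", "bu", "ti", "po", "lu", "to", "ki", "bu",
          "to", "ki", "bu", "ti", "po", "lu"]

pair_counts = Counter(zip(stream, stream[1:]))   # bigram counts
syll_counts = Counter(stream[:-1])               # counts of first elements

def transitional_probability(a, b):
    """TP(b | a) = P(b follows a) = count(a, b) / count(a)."""
    return pair_counts[(a, b)] / syll_counts[a]

print(transitional_probability("to", "ki"))  # within-word TP -> 1.0
print(transitional_probability("bu", "ti"))  # across-word TP -> ~0.67, lower
```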
116 The Potential Impact of Big Data Analytics on Pharmaceutical Supply Chain Management
Authors: Maryam Ziaee, Himanshu Shee, Amrik Sohal
Abstract:
Big Data Analytics (BDA) in supply chain management has recently drawn the attention of academics and practitioners. Big data refers to a massive amount of data from different sources, in different formats, generated at high speed through transactions in business environments and supply chain networks. Traditional statistical tools and techniques find it difficult to analyse this massive data. BDA can assist organisations to capture, store, and analyse data, specifically in the field of supply chain. Currently, there is a paucity of research on BDA in the pharmaceutical supply chain context. In this research, the Australian pharmaceutical supply chain was selected as the case study. This industry is highly significant since the right medicine must reach the right patients, at the right time, in the right quantity, in good condition, and at the right price to save lives. However, drug shortages remain a substantial problem for hospitals across Australia, with implications for patient care, staff resourcing, and expenditure. Furthermore, a massive volume and variety of data is generated at fast speed from multiple sources in the pharmaceutical supply chain, which needs to be captured and analysed to benefit operational decisions at every stage of supply chain processes. As the pharmaceutical industry lags behind other industries in using BDA, this raises the question of whether the use of BDA can improve transparency across the pharmaceutical supply chain by enabling partners to make informed decisions across their operational activities. This presentation explores the impacts of BDA on supply chain management. An exploratory qualitative approach was adopted to analyse data collected through interviews. This study also explores the BDA potential in the whole pharmaceutical supply chain rather than focusing on a single entity. Twenty semi-structured interviews were undertaken with top managers in fifteen organisations (five pharmaceutical manufacturers, five wholesalers/distributors, and five public hospital pharmacies) to investigate their views on the use of BDA. The findings revealed that BDA can enable pharmaceutical entities to have improved visibility over the whole supply chain and also the market; it enables entities, especially manufacturers, to monitor consumption and the demand rate in real time and make accurate demand forecasts, which reduces drug shortages. Timely and precise decision-making can allow the entities to source and manage their stocks more effectively. This can likely address drug demand at hospitals and help respond to unanticipated issues such as drug shortages. Earlier studies explored BDA in the context of clinical healthcare; this presentation, by contrast, investigates the benefits of BDA in the Australian pharmaceutical supply chain. Furthermore, this research enhances managers' insight into the potential of BDA at every stage of supply chain processes and helps to improve decision-making in their supply chain operations. The findings will turn the rhetoric of data-driven decision-making into a reality, where managers may opt for analytics for improved decision-making in supply chain processes.
Keywords: big data analytics, data-driven decision, pharmaceutical industry, supply chain management
115 Partially Aminated Polyacrylamide Hydrogel: A Novel Approach for Temporary Oil and Gas Well Abandonment
Authors: Hamed Movahedi, Nicolas Bovet, Henning Friis Poulsen
Abstract:
Following the advent of the Industrial Revolution, there was a significant increase in the extraction and utilization of hydrocarbon and fossil fuel resources. However, a new era has emerged, characterized by a shift towards sustainable practices, namely the reduction of carbon emissions and the promotion of renewable energy generation. Given the substantial number of mature oil and gas wells that have been developed within the petroleum reservoir domain, it is imperative to establish an environmental strategy and adopt appropriate measures to effectively seal and decommission these wells. In general, a cement plug serves as the plugging material. Nevertheless, there exist scenarios in which the durability of such a plug is compromised, leading to the potential escape of hydrocarbons via fissures and fractures within cement plugs. Furthermore, cement is often not considered a practical solution for temporary plugging, particularly for well sites that have the potential for future gas storage or CO2 injection. The Danish oil and gas industry is a promising candidate for future carbon dioxide (CO2) injection, hence contributing to the implementation of carbon capture strategies within Europe. The primary reservoir component is chalk, a rock characterized by limited permeability. This work focuses on the development and characterization of a novel hydrogel variant, designed to be injected through a low-permeability reservoir and to afterward undergo a transformation into a high-viscosity gel. The primary objective of this research is to explore the potential of this hydrogel as a new solution for effectively plugging well flow. Initially, the synthesis of polyacrylamide was carried out using radical polymerization within the reaction flask. Subsequently, by application of the Hofmann rearrangement, the polymer chain undergoes partial amination, facilitating its subsequent reaction with the crosslinker and enabling the formation of a hydrogel in the subsequent stage. The organic crosslinker glutaraldehyde was employed to facilitate gel formation, which occurred when the polymeric solution was subjected to heat within a specified range of reservoir temperatures. Additionally, a rheological survey and gel time measurements were conducted on several polymeric solutions to determine the optimal concentration. The findings indicate that the gel time is contingent upon the starting concentration and ranges from 4 to 20 hours, hence allowing it to be tuned to accommodate diverse injection strategies. Moreover, the findings indicate that the gel may be generated in environments characterized by acidity and high salinity. This property ensures the suitability of this substance for application in challenging reservoir conditions. The rheological investigation indicates that the polymeric solution exhibits the characteristics of a Herschel-Bulkley fluid with somewhat elevated yield stress prior to solidification.
Keywords: polyacrylamide, Hofmann rearrangement, rheology, gel time
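The Herschel-Bulkley model mentioned at the end describes shear stress as tau = tau0 + K * gamma_dot^n (yield stress tau0, consistency index K, flow index n). As an illustration only (the abstract reports no fitted parameters, and the rheometer readings below are synthetic), such data can be fitted in Python:

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(shear_rate, tau0, K, n):
    """Shear stress of a Herschel-Bulkley fluid: tau = tau0 + K * gamma_dot**n."""
    return tau0 + K * shear_rate**n

# Synthetic rheometer readings (shear rate in 1/s, stress in Pa); illustrative only.
gamma_dot = np.array([1, 5, 10, 50, 100, 300], dtype=float)
tau = np.array([12.1, 15.0, 17.2, 26.8, 33.5, 52.0])

(tau0, K, n), _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[10.0, 1.0, 0.5])
print(f"yield stress tau0 = {tau0:.1f} Pa, K = {K:.2f}, n = {n:.2f}")
```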
114 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks
Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios
Abstract:
To achieve the objective of almost zero carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings via the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks, more specifically recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models. The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand
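A minimal sketch of the second-stage demand model could look like the following (assuming, as the abstract implies, a 24-hour input window of past demand plus the predicted temperature; the layer sizes and synthetic data are placeholders, not the authors' architecture):

```python
import numpy as np
from tensorflow import keras

WINDOW, N_FEATURES = 24, 2  # 24 hourly steps; features: past demand, forecast temperature

# Synthetic training data standing in for the DHC measurements.
X = np.random.rand(500, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(500, 1).astype("float32")  # thermal demand one step ahead

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    keras.layers.LSTM(32),   # captures temporal dependencies within the window
    keras.layers.Dense(1),   # next-hour thermal demand
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_demand = model.predict(X[:1])  # forecast for one new 24-hour window
```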
113 Climate Change and Health: Scoping Review of Scientific Literature 1990-2015
Authors: Niamh Herlihy, Helen Fischer, Rainer Sauerborn, Anneliese Depoux, Avner Bar-Hen, Antoine Flauhault, Stefanie Schütte
Abstract:
In recent decades, there has been an increase in the number of publications, in both the scientific and grey literature, on the potential health risks associated with climate change. Though interest in climate change and health is growing, many gaps remain in adequately assessing our future health needs in a warmer world. Generating a greater understanding of the health impacts of climate change could be a key step in inciting the changes necessary to decelerate global warming and in targeting new strategies to mitigate the consequences for health systems. A long-term and broad overview of the existing scientific literature in the field of climate change and health is currently missing, making it difficult to ensure that all priority areas are being adequately addressed. We conducted a scoping review of published peer-reviewed literature on climate change and health from two large databases, PubMed and Web of Science, between 1990 and 2015. A scoping review allowed for a broad analysis of this complex topic on a meta-level, as opposed to a thematically refined literature review. A detailed search strategy including specific climate and health terminology was used to search the two databases. Inclusion and exclusion criteria were applied in order to capture the most relevant literature on the human health impact of climate change within the chosen timeframe. Two reviewers screened the papers independently, and any differences arising were resolved by a third party. Data were extracted, categorized and coded both manually and using R software. Analytics and infographics were developed from the results. There were 7,269 articles identified across the two databases following the removal of duplicates. After screening by both reviewers, 3,751 were included. As expected, preliminary results indicate that the number of publications on the topic has increased over time. Geographically, the majority of publications address the impact of climate change and health in Europe and North America. This is particularly alarming given that countries in the Global South will bear the greatest health burden. Concerning health outcomes, infectious diseases, particularly dengue fever and other mosquito-transmitted infections, are the most frequently cited. We highlight research gaps in certain areas, e.g., climate migration and mental health issues. We are developing a database of the identified climate change and health publications and are compiling a report for publication and dissemination of the findings. As health is a major co-benefit of climate change mitigation strategies, our results may serve as a useful source of information for research funders and investors when considering future research needs as well as the cost-effectiveness of climate change strategies. This study is part of an interdisciplinary project called 4CHealth that confronts results of research done on the scientific, political and press literature to better understand how knowledge on climate change and health circulates within those different fields and whether and how it is translated into real-world change.
Keywords: climate change, health, review, mapping
112 Strategic Interventions to Address Health Workforce and Current Disease Trends, Nakuru, Kenya
Authors: Paul Moses Ndegwa, Teresia Kabucho, Lucy Wanjiru, Esther Wanjiru, Brian Githaiga, Jecinta Wambui
Abstract:
Health outcomes have improved in Kenya since 2013, following the adoption of the new constitution, which devolved governance and transferred administration and health planning functions to county governments. The 2018-2022 development agenda prioritized universal healthcare coverage and food security and nutrition; however, the emergence of Covid-19 and the increase in non-communicable diseases pose a challenge and a constraint to an already overwhelmed health system. A study was conducted from July to November 2021 to establish key challenges in achieving universal healthcare coverage within the county and best practices for improved non-communicable disease control. Fourteen health workers, ranging from nurses, doctors, public health officers and clinical officers to pharmaceutical technologists, were purposively engaged to provide critical information through questionnaires administered by a trained duo observing ethical procedures on confidentiality, and the data were then analysed. Communicable diseases are major causes of morbidity and mortality. Non-communicable diseases contribute to approximately 39% of deaths. More than 45% of the population does not have access to safe drinking water. The study noted geographic inequality with respect to the distribution and use of health resources, including competing non-health priorities. 56% of health workers are nurses, 13% clinical officers, 7% doctors, 9% public health workers, and 2% pharmaceutical technologists. Poor-quality data limit the validity of disease-burden estimates and research activities. Risk factors include unsafe water, sanitation and hand washing, unsafe sex, and malnutrition. A key challenge in achieving universal healthcare coverage is the rise in the relative contribution of non-communicable diseases. The study recommends the following: improve targeted disease control with effective and equitable resource allocation; develop strong infectious disease control mechanisms; improve the quality of data for decision making and strengthen electronic data-capture systems; increase investments in the health workforce to improve health service provision and the achievement of universal health coverage; create a favorable environment to retain health workers; fill staffing gaps, notably the shortage of doctors (7%); develop a multi-sectoral approach to health workforce planning and management; invest in mechanisms that generate contextual evidence on current and future health workforce needs; ensure retention of a qualified, skilled, and motivated health workforce; and deliver integrated people-centered health services.
Keywords: multi-sectoral approach, equity, people-centered, health workforce retention
111 Development of a Social Assistive Robot for Elderly Care
Authors: Edwin Foo, Woei Wen, Lui, Meijun Zhao, Shigeru Kuchii, Chin Sai Wong, Chung Sern Goh, Yi Hao He
Abstract:
This presentation describes the development of an elderly care and assistive social robot. We named this robot JOS; he is restricted to table-top operation. JOS is designed to have a maximum volume of 3600 cm3 with his base restricted to 250 mm, and his mission is to provide companionship, assistance and help to the elderly. In order for JOS to accomplish his mission, he will be equipped with perception, reaction and cognition capabilities. His appearance will not be human-like but more towards a cute and approachable type, and JOS will be designed to be gender-neutral. The robot will still have eyes, eyelids and a mouth. His eyes and eyelids will be built entirely with Robotis Dynamixel AX18 motors. To realize this complex task, JOS will also be equipped with a microphone array, a vision camera and an Intel i5 NUC computer, and will be powered by a self-charging 12 V lithium battery. His face is constructed using 1 motor for each eyelid, 2 motors for the eyeballs, 3 motors for the neck mechanism and 1 motor for lip movement. The vision sensor is housed on JOS' forehead and the microphone array somewhere below the mouth. For the vision system, Omron's latest OKAO vision sensor is used: a compact and versatile sensor that is only 60 mm by 40 mm in size and operates from a 5 V supply. The OKAO vision sensor is capable of identifying the user and recognizing the user's expression. With these functions, JOS is able to track and identify the user. If he cannot recognize the user, JOS will ask whether the user wants to be remembered. If yes, JOS will store the user information together with the captured face image in a database. This will allow JOS to recognize the user the next time they meet. In addition, JOS is able to interpret the mood of the user through facial expression, which allows the robot to understand the user's mood and behavior and react accordingly. Machine learning will later be incorporated to learn the user's behavior so as to better understand their mood and requirements. For the speech system, the Microsoft speech and grammar engine is used for speech recognition. In order to use the speech engine, we need to build up a speech grammar database that captures the words commonly used by the elderly. This database is built from research journals and literature on elderly speech, and from interviews asking the elderly what they want the robot to assist them with. Using the results from the interviews and the journal research, we derived a set of common words the elderly frequently use to request help; it is from this set that we build up our grammar database. In situations where there is more than one person near JOS, he is able to identify the person who is talking to him through an in-house developed microphone array structure. To make the robot more interactive, we have also included the capability for the robot to express emotion to the user through facial expressions, by changing the position and movement of the eyelids and mouth. All robot emotions will be in response to the user's mood and requests. Lastly, we expect to complete this phase of the project and test it with the elderly and also delirium patients by Feb 2015.
Keywords: social robot, vision, elderly care, machine learning
110 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications
Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky
Abstract:
InAs Quantum Dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators, such as LYSO or BaF2. The high refractive index of the GaAs matrix (n = 3.4) ensures light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, luminescence properties of very thick (4-20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from QDs to the barrier of ~60 meV. Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized after traveling over 1 mm through the WG, at about 3 cm⁻¹. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small area of integrated PD, the collected charge averaged 8.4 x 10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. These data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor
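The quoted waveguide attenuation (about 3 cm⁻¹) follows from Beer-Lambert decay of the collected PL intensity with propagation distance, I(x) = I0 * exp(-alpha * x). As a purely illustrative sketch (made-up intensities, not the measured data), alpha can be extracted from such a scan with a simple log-linear fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative PL intensity vs. excitation-to-collection distance (cm);
# invented values, not the measured data from this work.
x = np.array([0.10, 0.15, 0.20, 0.25, 0.30])           # distance along waveguide, cm
intensity = 1000 * np.exp(-3.0 * x) * (1 + 0.02 * rng.standard_normal(5))

# Beer-Lambert decay I(x) = I0 * exp(-alpha * x): a linear fit of ln(I) vs. x
# gives slope -alpha.
slope, intercept = np.polyfit(x, np.log(intensity), 1)
print(f"attenuation alpha = {-slope:.2f} 1/cm")        # ~3 1/cm for this synthetic data
```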
109 Monitoring and Improving Performance of Soil Aquifer Treatment System and Infiltration Basins: North Gaza Emergency Sewage Treatment Plant as Case Study
Authors: Sadi Ali, Yaser Kishawi
Abstract:
As part of Palestine, the Gaza Strip (365 km2, 1.8 million inhabitants) is considered a semi-arid zone that relies solely on the Coastal Aquifer. The Coastal Aquifer is the only source of water, with only 5-10% suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to develop a non-conventional water resource from treated wastewater to irrigate 1,500 hectares and serve over 100,000 inhabitants. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and 9 infiltration basins - IBs), phase B (a new WWTP) and phase C (Recovery and Reuse Scheme - RRS - to capture the spreading plume). Phase A has been functioning since Apr 2009, and since then a monitoring plan has been conducted to monitor the infiltration rate (I.R.) of the 9 basins. Nearly 23 million m3 of partially treated wastewater were infiltrated up to Jun 2014. It is important to maintain an acceptable rate to allow the basins to handle the incoming quantities (currently 10,000 m3 are pumped and infiltrated daily). The methodology applied was to review and analyse the collected data, including the I.R.s, the wastewater quality and the drying-wetting schedule of the basins. One of the main findings is the relation between the Total Suspended Solids (TSS) at BLWWTP and the I.R. at the basins. Since April 2009, the basins scored an average I.R. of about 2.5 m/day. The records then showed a decreasing pattern in the average rate until it reached a low of 0.42 m/day in Jun 2013, accompanied by an increase in TSS concentration at the source to above 200 mg/L. Reducing the TSS concentration (by cleaning the wastewater source ponds at the Biet Lahia WWTP site) directly improved the I.R.; this was reflected in an improvement over the last 6 months from 0.42 m/day to 0.66 m/day and then to nearly 1.0 m/day as the average of the last 3 months of 2013. The wetting-drying scheme of the basins was observed (3 days wetting and 7 days drying), as were the rainfall rates. Despite the difficulty of applying this scheme accurately, control of the flow to each basin was applied to improve the I.R. The drying-wetting system affected the I.R. of individual basins and thus the overall system rate, which was recorded and assessed. Ploughing activities at the infiltration basins were also recommended at certain times to retain a certain infiltration level, as ploughing breaks the confined clogging layer that prevents infiltration. It is recommended to maintain proper quality of the infiltrated wastewater to ensure an acceptable performance of the IBs. Continual maintenance of the settling ponds at BLWWTP, continual ploughing of the basins and applying soil treatment techniques at the IBs will improve the I.R.s. When the new WWTP functions, effluent of a high standard quality (TSS 20 mg/l, BOD 20 mg/l and TN 15 mg/l) will be infiltrated, which will enhance the I.R.s of the IBs due to the lower organic load.
Keywords: SAT, wastewater quality, soil remediation, North Gaza
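The basin infiltration rate is essentially the infiltrated volume per wetted area per day, and the reported inverse TSS-I.R. relationship can be checked with a simple correlation. The sketch below is illustrative only: the wetted area and the monthly monitoring values are invented, not the plant's records.

```python
import numpy as np

# Illustrative monitoring values (not the actual North Gaza records).
daily_volume_m3 = 10_000        # wastewater pumped and infiltrated per day
wetted_area_m2 = 12_500         # combined wetted area of the active basins (assumed)

infiltration_rate = daily_volume_m3 / wetted_area_m2
print(f"I.R. = {infiltration_rate:.2f} m/day")

# Correlation between influent TSS and observed infiltration rate over months.
tss_mg_l = np.array([90, 120, 150, 180, 210, 160, 110])
ir_m_day = np.array([2.1, 1.6, 1.1, 0.7, 0.45, 0.9, 1.8])
r = np.corrcoef(tss_mg_l, ir_m_day)[0, 1]
print(f"TSS vs I.R. correlation: r = {r:.2f}")  # strongly negative, as reported
```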
108 Development of an Instrument Assessing Participants' Motivation on Assigning Monetary Value to Quality of Life
Authors: Afentoula Mavrodi, Andreas Georgiou, Georgios Tsiotras, Vassilis Aletras
Abstract:
Placing a monetary value on a quality-adjusted life-year (QALY) is of utmost importance in economic evaluation. Identifying the population's preferences is critical in order to understand some of the reasons driving variations in the assigned monetary value. Yet, evidence of the motives behind value assignment to a QALY by the general public is limited. Developing an instrument that captures the population's motives could prove valuable to policy-makers, guiding them in allocating different values to a QALY based on users' motivations. The aim of this study was to identify the most relevant motives and develop an appropriate instrument to assess them. To design the instrument, we employed: a) the EQ-5D-3L tool to assess participants' current health status, and b) the Willingness-to-Pay (WTP) approach, within the Contingent Valuation (CV) Method framework, to elicit the monetary value. Advancing the open-ended approach previously adopted to assess solely protest bidders' motives, a variety of follow-up item-specific statements was designed (deductive approach), aiming to evaluate the motives of both protest bidders and participants willing to pay for the hypothetical treatment under consideration. The initial design of the survey instrument was the outcome of an extensive literature review. This instrument was revised based on 15 semi-structured interviews that took place in September 2018 and a pilot study held over two months (October-November 2018). Individuals with different educational, occupational and economic backgrounds and adequate verbal skills were recruited to complete the semi-structured interviews. The follow-up motivation statements of both protest bidders and those willing to pay were revised and rephrased after the semi-structured interviews. In total, 4 statements for protest bidders and 3 statements for those willing to pay for the treatment were chosen for inclusion in the survey tool. Using the CATI (Computer Assisted Telephone Interview) method, a randomly selected sample of 97 persons living in Thessaloniki, Greece, completed the questionnaire on two occasions over a period of 4 weeks. Based on the pilot study results, a test-retest reliability assessment was performed using the intra-class correlation coefficient (ICC). All statements formulated for protest bidders showed acceptable reliability (ICC values of 0.84 (95% CI: 0.67, 0.92) and above). Similarly, all statements for those willing to pay for the treatment showed high reliability (ICC values of 0.86 (95% CI: 0.78, 0.91) and above). Overall, the instrument designed in this study was reliable with regard to the item-specific statements assessing participants' motivation. Validation of the instrument will take place in a future study. For a holistic WTP-per-QALY instrument, participants' motivation must be addressed broadly. The instrument developed in this study captured a variety of motives and provided insight into the method through which the latter are evaluated. Last but not least, it extended motive assessment to all study participants and not only protest bidders.
Keywords: contingent valuation method, instrument, motives, quality-adjusted life-year, willingness-to-pay
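Test-retest reliability of this kind is typically computed as an ICC over a long-format table of (participant, occasion, score). A minimal sketch, using synthetic scores and assuming the pingouin library is available (the abstract does not say what software was used), could look like:

```python
import pandas as pd
import pingouin as pg

# Synthetic long-format test-retest data: each participant answers the same
# motivation statement on two occasions (illustrative values only).
df = pd.DataFrame({
    "participant": list(range(10)) * 2,
    "occasion": ["t1"] * 10 + ["t2"] * 10,
    "score": [4, 2, 5, 3, 4, 1, 5, 2, 3, 4,
              4, 2, 4, 3, 5, 1, 5, 2, 3, 4],
})

icc = pg.intraclass_corr(data=df, targets="participant",
                         raters="occasion", ratings="score")
# The study reports ICCs with 95% CIs; e.g. the two-way mixed, single-measure row:
print(icc[icc["Type"] == "ICC3"][["ICC", "CI95%"]])
```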
107 The Saudi Arabia 2030 Strategy: Translation Reception and Translator Readiness
Authors: Budur Alsulami
Abstract:
One of the aims of the recently implemented Saudi Arabia Vision 2030 strategy is to strengthen education, entertainment, and tourism to attract international visitors to the country. To promote and grow the tourism sector, tourism translation can serve the industry by translating the various materials that promote the country's tourism, such as brochures, catalogues, and websites. In order to achieve the goal of enhancing tourism in Saudi Arabia, promotional texts related to tourism and Saudi culture will need to be translated into English and addressed to non-Arabic-speaking potential tourists. This research aims to measure students' readiness to be professional translators who can introduce and promote Saudi Arabia to non-Arabic-speaking tourists. The study will also evaluate students' abilities to promote and convey Saudi culture to non-Arabic tourists by translating tourism texts. Translating tourism materials demands considerable effort and specific translation skills to capture tourists' interest and encourage visits. Numerous scholars have explored challenges in translating tourism promotional materials, focusing on translation methods, cultural issues, course design, and the knowledge necessary for tourism translation. Based on these insights, experts recommend that translators prioritize audience expectations, cultural appropriateness, and linguistic conventions, while revising course syllabi to include practical skills. This research aims to assess students' readiness to become professional translators aligned with Vision 2030 tourism goals. To accomplish this, in the first stage of the project, twenty students from two Saudi Arabian universities who have completed at least two years of Translation Studies were invited to translate two tourism texts of 300 words each. These tourism texts contain information about famous tourist sights and traditional food in Saudi Arabia, and include cultural terms and heritage information. The students then completed a questionnaire about the challenges of the texts and their translation process, and then participated in a semi-structured interview. In the second stage of the project, the students' translations will be evaluated by a qualified National Accreditation Authority of Translators and Interpreters (NAATI) examiner applying the NAATI rubrics. Finally, these translations will be read and assessed by fifteen to twenty native and near-native readers of English, who will evaluate the quality of the translations based on their understanding and perception of these texts. Results analysed to date suggest that a number of student translators faced challenges such as choosing a suitable translation method, omitting some key terms or words during the translation process, and managing their time, all of which may indicate a lack of practice in translating texts of this nature and a lack of awareness of the translation strategies most suitable for the genre.
Keywords: Saudi Arabia Vision 2030, translation, tourism, reader reception, culture, heritage, translator training/competencies
105 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are used more and more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. From the analysis of a software implementation of such a model, this implementation proposes the parallelization of tasks that facilitates the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction in estimation time by a factor of 180 compared to software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
Keywords: contact force reconstruction, force estimation, tactile sensor array, hardware implementation
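At its core, model-driven contact force reconstruction inverts a calibration model that maps per-taxel force vectors to measured normal stresses. The abstract gives no equations, so the least-squares sketch below is only a schematic stand-in for the optimization it describes; the calibration matrix, dimensions, and noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Schematic setup: m stress readings, n taxels, 3 force components per taxel.
n_taxels, n_readings = 16, 64
A = rng.normal(size=(n_readings, 3 * n_taxels))  # calibration matrix (in practice, from FEM or measurement)
f_true = rng.normal(size=3 * n_taxels)           # stacked (fx, fy, fz) per taxel
stress = A @ f_true + 0.01 * rng.normal(size=n_readings)  # noisy normal-stress data

# Least-squares reconstruction of the force vectors from the stress readings.
f_hat, *_ = np.linalg.lstsq(A, stress, rcond=None)
forces = f_hat.reshape(n_taxels, 3)              # one (fx, fy, fz) vector per taxel
print("max abs reconstruction error:", np.max(np.abs(f_hat - f_true)))
```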
Procedia PDF Downloads 194104 Exploration of Barriers and Challenges to Innovation Process for SMEs: Possibilities to Promote Cooperation Between Scientific and Business Institutions to Address it
Authors: Indre Brazauskaite, Vilte Auruskeviciene
Abstract:
The significance of the study is outlined through current strategic management challenges faced by SMEs. First, innovation is recognized as a competitive advantage in ever-changing market conditions; capturing and capitalizing on business opportunities, or mitigating foreseen risks, is of constant interest to both practitioners and academics. Secondly, it is recognized that an integrated system is needed for proper implementation of the innovation process, especially during the period of business incubation, which is associated with relatively high risks of new product failure. Finally, the ability to successfully commercialize innovations leads to tangible business results that allow organizations to grow further. This is particularly relevant to SMEs due to their limited structures, resources, and capabilities. Cooperation between scientific and business institutions could be a tool of mutual interest to observe, address, and further develop innovations during the incubation period, the most demanding and challenging part of the innovation process. This material addresses the following problems: i) identifying the major barriers and challenges in the innovation process that SMEs are facing, and ii) outlining the possibilities for these barriers and challenges to be addressed through cooperation between scientific and business institutions. The basis for this research is a stage-by-stage integrated innovation management process, which exposes existing challenges and the aid needed in operational decision-making. Exploring this stage-by-stage process highlights relevant research opportunities with high practical relevance in the field and is expected to reveal possibilities for business incubation programs that could combine the interests of both practice and academia. Methodology: a meta-analysis of the scientific literature to date on the innovation process. The research model is built on the combination of the stage-gate model and the lean six sigma approach. It outlines the following steps: i) pre-incubation (discovery and screening), ii) incubation (scoping, planning, development, and testing), and iii) post-incubation (launch and commercialization). Empirical quantitative research is conducted to identify the barriers and challenges in the innovation process that prevent SMEs from successfully launching and commercializing innovations, and to identify potential areas for cooperation between scientific and business institutions. The research sample, high-level decision-makers representing trading SMEs, is approached with a structured survey based on the research model to investigate the challenges associated with each innovation management step. Expected findings: first, the current business challenges in the innovation process are revealed, outlining the strengths and weaknesses of innovation management practices and systems across SMEs. Secondly, the study provides material for relevant business case investigation to serve as future research directions for scholars, contributing to a better understanding of quality innovation management systems. Third, it contributes to understanding the need for business incubation systems built on mutual contributions from practice and academia, which can increase the relevance and adoption of business research.Keywords: cooperation between scientific and business institutions, innovation barriers and challenges, innovation measure, innovation process, SMEs
Procedia PDF Downloads 148103 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach
Authors: Jianli Jiang, Bai-Chen Xie
Abstract:
The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions, and evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating the pressures on energy and the environment. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats the decision-making units (DMUs) as independent, which neglects the interaction between DMUs. Ignoring these inter-regional links may introduce a systematic bias into the efficiency analysis; for instance, the renewable power generated in a certain region may benefit the adjacent regions, while the SO2 and CO2 emissions act oppositely. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables are tested for a significant spatial spillover effect. Compared with the classic network DEA, the SNDEA result shows an obvious difference, tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experiences a visible surge from 2015 and then a sharp downtrend from 2019, mirroring the trend of the power transmission department. The surge benefits from the market-oriented reform of the Chinese power grid enacted in 2015, while the rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the Covid-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows an overall declining trend, which is reasonable once RE power is taken into consideration: the installed capacity of RE power in 2020 was 4.40 times that of 2014, while power generation was only 3.97 times higher; in other words, power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power increases rapidly with the increase in RE power generation. These two aspects explain the declining EE of the power generation department. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework that sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industry- and country-level studies.Keywords: spatial network DEA, environmental efficiency, sustainable development, power system
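For readers unfamiliar with the spatial-dependence test mentioned above, the sketch below computes the global Moran's I statistic on toy efficiency scores with an assumed row-standardized contiguity weight matrix; the regions, weights, and scores are hypothetical, not the paper's data.

```python
import numpy as np

# Global Moran's I: I = (n / S0) * (z' W z) / (z' z), where z are
# deviations from the mean and S0 is the sum of all spatial weights.
def morans_i(x: np.ndarray, W: np.ndarray) -> float:
    z = x - x.mean()
    return (x.size / W.sum()) * (z @ W @ z) / (z @ z)

# Toy example: 4 regions on a line, contiguity weights, row-standardized.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)

scores = np.array([0.62, 0.58, 0.91, 0.87])  # hypothetical EE scores
print(round(morans_i(scores, W), 3))  # positive -> spatial clustering
```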
Procedia PDF Downloads 107102 Clinical Validation of an Automated Natural Language Processing Algorithm for Finding COVID-19 Symptoms and Complications in Patient Notes
Authors: Karolina Wieczorek, Sophie Wiliams
Abstract:
Introduction: Patient data is often collected in Electronic Health Record (EHR) systems for purposes such as providing care as well as reporting data. This information can be re-used to validate data models in clinical trials or in epidemiological studies. Manual validation of automated tools is vital to pick up errors in processing and to provide confidence in the output. Mentioning a disease in a discharge letter does not necessarily mean that a patient suffers from this disease; many letters discuss a diagnostic process or different tests, or discuss whether a patient has a certain disease. The COVID-19 dataset in this study was produced by natural language processing (NLP), an automated algorithm which extracts information related to COVID-19 symptoms, complications, and medications prescribed within the hospital. Free-text clinical patient notes are rich sources of information which contain patient data not captured in a structured form, hence the use of named entity recognition (NER) to capture additional information. Methods: Patient data (discharge summary letters) were exported and screened by an algorithm to pick up relevant terms related to COVID-19. A list of 124 Systematized Nomenclature of Medicine (SNOMED) Clinical Terms was provided in Excel with corresponding IDs. Two independent medical student researchers were provided with this dictionary of SNOMED terms to refer to when screening the notes. They worked on two separate datasets, called "A" and "B", respectively. Notes were screened to check that the correct term had been picked up by the algorithm and that negated terms had not. Results: Implementation in the hospital began on March 31, 2020, and the first EHR-derived extract was generated for use in an audit study on June 04, 2020. The dataset has contributed to large, priority clinical trials (including the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC), by bulk upload to REDCap research databases) and to local research and audit studies. Successful sharing of EHR-extracted datasets requires communicating the provenance and quality of the data, including its completeness and accuracy. The validation of the algorithm gave the following results: precision (0.907), recall (0.416), and F-score (0.570). The percentage enhancement from NLP-extracted terms compared to regular data extraction alone was low (0.3%) for relatively well-documented data such as previous medical history, but higher (16.6%, 29.53%, 30.3%, 45.1%) for complications, presenting illness, chronic procedures, and acute procedures, respectively. Conclusions: This automated NLP algorithm is shown to be useful in facilitating patient data analysis and has the potential to be used in larger-scale clinical trials to assess potential study exclusion criteria for participants in the development of vaccines.Keywords: automated, algorithm, NLP, COVID-19
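The reported F-score is consistent with the reported precision and recall; a one-line check, assuming the standard harmonic-mean definition of F1, reproduces it:

```python
precision, recall = 0.907, 0.416
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(round(f1, 3))  # 0.57, matching the reported F-score of 0.570
```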
Procedia PDF Downloads 101101 From Core to Hydrocarbon: Reservoir Sedimentology, Facies Analysis and Depositional Model of Early Oligocene Mahuva Formation in Tapti Daman Block, Western Offshore Basin, India
Authors: Almas Rajguru
Abstract:
The Oligocene succession of the Tapti-Daman area is one of the established petroleum plays in the Tapti-Daman block of the Mumbai Offshore Basin. Despite good well control and production history, the sand geometry and continuity of reservoir character of these sediments are less well understood, as most reservoirs are thin and fall below seismic resolution. The present work focuses on a detailed analysis of the Early Oligocene Mahuva Formation at the reservoir scale through laboratory studies (sedimentology and biostratigraphy) of cores and sidewall cores, integrated with electro-logs, to firm up facies distribution, micro-depositional environment, sequence stratigraphy, diagenesis, and reservoir characterization from seventeen wells in the North Tapti-C-37 area of the Tapti-Daman block, Western Offshore Basin. The thick shale/claystone with thin interbeds of sandstone and siltstone of deeper marine origin in the lower part of the Mahuva Formation represents deposition in a transgressive regime. The overlying interbedded sandstone, glauconitic siltstone/fine-grained sandstone, and thin beds of packstone/grainstone within highly fissile shale were deposited in a prograding tide-dominated delta during late-rise normal regression. Nine lithofacies (F1-F9), representing deposition in various microenvironments of the tide-dominated delta, are identified based on their characteristic sediment texture, structure, and microfacies. Massive, gritty sandstone (F1) with poorly sorted sand and lithic fragments in a calcareous, Fe-rich matrix represents channel-fill sediments. High-angle cross-stratified sandstone (F2) was deposited in rapidly shifting/migrating bars under strong tidal currents. F3 (low-angle cross-stratified to parallel-bedded sandstone) records laterally accreted tidal-channel point bars; F3 and F4 (clean sandstone) are often associated with F2 in a tidal bar complex. F5 (interbedded thin sand and mud) and F6 (bioturbated sandstone) represent tidal flat deposits. High-energy open marine carbonate shoals (F8) and fossiliferous sandstone in offshore bars (F7) represent deepening-up facies. Shallow marine stillstand conditions facilitated the deposition of thick shale (F9) beds. The reservoir facies (F1-F6) are commonly poorly to moderately sorted, bimodal, immature sandstones represented by quartz-wacke. The framework grains are sub-angular to sub-rounded, medium- to coarse-grained (occasionally gritty), and embedded in an argillaceous (kaolinite/chlorite/chamosite) to highly Fe-rich (sideritic) matrix. Facies F7 and F8, representing sandy packstone and grainstone facies, respectively, exhibit poor reservoir characteristics due to cementation, diagenetic compaction, and matrix-filled intergranular spaces. Various diagenetic features are identified, such as the presence of authigenic clays (kaolinite/dickite/smectite); ferruginous minerals like siderite, pyrite, hematite, and other iron oxides; bioturbation; glauconite; calcite and quartz cementation; precipitation of gypsum; pressure solution; and other compaction effects. These diagenetic features, wherever present, have reduced porosity and permeability, thereby adversely affecting reservoir quality. Tidal bar sandstones possess good reservoir characteristics, such as moderate to good sorting, fair to good porosity, and a geometry that provides efficient lateral extension and vertical thickness of the reservoir.
The sand bodies of the F2, F3, and F4 facies in Wells L, M, and Q, deposited in a tidal bar complex, exhibit good reservoir quality, represented by relatively clean, weakly burrowed, loose, friable sandstone with good porosity. The sandstone facies around these wells could prove to be a potential hydrocarbon reservoir and could be considered for further exploration.Keywords: reservoir sedimentology, facies analysis, HST, tide dominated delta, tidal bars
Procedia PDF Downloads 89100 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning
Authors: Ali Kazemi
Abstract:
The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data we used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (retail, groceries, online services, etc.), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods carry varying levels of fraud risk. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction-error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the majority of the dataset. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis
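To make the isolation-forest side of the approach concrete, here is a minimal scikit-learn sketch on synthetic data standing in for the anonymized transactions; the two features, their distributions, and the contamination rate are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic data: mostly ordinary transactions plus a few large,
# oddly timed ones. Columns: amount, hour-of-day (both hypothetical).
rng = np.random.default_rng(42)
normal = rng.normal(loc=[50.0, 12.0], scale=[30.0, 4.0], size=(5000, 2))
fraud = rng.normal(loc=[900.0, 3.0], scale=[200.0, 1.0], size=(50, 2))
X = np.vstack([normal, fraud])

# Isolation forest: anomalies are points that are easy to isolate.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)        # -1 = anomaly, 1 = normal
print((flags == -1).sum(), "transactions flagged for review")
```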
Procedia PDF Downloads 5799 Strengthening Facility-Based Systems to Improve Access to In-Patient Care for Sick Newborns in Brong Ahafo Region, Ghana
Authors: Paulina Clara Appiah, Kofi Issah, Timothy Letsa, Kennedy Nartey, Amanua Chinbuah, Adoma Dwomo-Fokuo, Jacqeline G. Asibey
Abstract:
Background: The Every Newborn Action Plan provides evidence-based interventions to end preventable deaths in high-burden countries. Brong Ahafo Region is one of ten regions in Ghana; less than half of its district hospitals had sick newborn units. Facility-based neonatal care was not prioritized and was under-funded, and there was inadequate knowledge and competence to manage sick newborns. The aim of this intervention was to make in-patient care for sick newborns available in all 19 district hospitals through the strengthening of facility-based systems. Methods: With the development and dissemination of the National Newborn Strategy and Action Plan 2014-2018, the country was able to attract PATH, which provided the region with basic resuscitation equipment, supported hospital providers' capacity building in Helping Babies Breathe, Essential Care of Every Baby, and Infection Prevention and Management, and held a symposium on managing the sick newborn. Newborn advocacy was promoted through newborn champions at the facility and community levels. Hospital management was then able to mobilize resources from communities, corporate organizations, and internally generated funds; create or expand sick newborn care units; and provide essential medicines and equipment. Kangaroo Mother Care was initiated in 6 hospitals. Pediatric specialist outreach services comprised telephone consultations, teaching ward rounds, and participation in perinatal death audit meetings. Newborn data capture and management were improved through the provision of standard registers from the national level and training in their use. Results: From February 2015 to November 2017, hospitals with sick newborn units increased from 7 to 19 (37%-100%). 180 pieces each of newborn ventilation bags and masks (sizes 0 and 1) and penguin suction bulbs were distributed to the hospitals, in addition to 20 newborn mannequin sets and 90 small clinical reminder posters. 802 providers (96.9%) were trained in resuscitation, of whom 96% were successfully followed up at 6 weeks, 91% at 6 months, and 80% at 12 months post-training. 53 clinicians (65%) were trained and mentored to manage sick newborns. 56 specialist teaching ward rounds were conducted. Data completeness improved from 92.6% to 99.9%. Availability of essential medicines improved from 11% to 100%. The number of hospital cots increased from 116 to 248 (214%). The cot occupancy rate increased from 57.4% to 92.5%. Hospitals with phototherapy equipment increased from 0 to 12 (0%-63%). Hospitals with incubators increased from 1 to 12 (5%-63%). Newborn deaths among admissions reduced from 6.3% to 5.4%. Conclusion: Access to in-patient care increased significantly. Newborn advocacy successfully mobilized the resources required for strengthening facility-based systems.Keywords: facility-based systems, Ghana, in-patient care, newborn advocacy
Procedia PDF Downloads 24798 Border Security: Implementing the “Memory Effect” Theory in Irregular Migration
Authors: Iliuta Cumpanasu, Veronica Oana Cumpanasu
Abstract:
This paper focuses on studying the conjunction between the newly emerged theory of the "Memory Effect" in irregular migration and related criminality and the notion of securitization, and its impact on border management. It advances the field by identifying, for the first time, the patterns corresponding to the linkage of the two concepts and by developing a theoretical explanation of the effects of non-military threats on border security. Over recent years, irregular migration has increased significantly worldwide. The U.N.'s refugee agency reports that the number of displaced people is at its highest ever, surpassing even post-World War II numbers, when the world was struggling to come to terms with the most devastating event in history. This is also the current reality along the core coordinate studied here, the Balkan Route of Irregular Migration, which starts in Asia and Africa and continues through Turkey, Greece, North Macedonia or Bulgaria, and Serbia, ending in Romania, where thousands of migrants find themselves in an irregular situation concerning their entry to the European Union, with important consequences for related criminality. Data from the past six years were collected through semi-structured interviews with experts in the field of migration and through desk research within organisations involved in border security, pursuing genuine insights from the field; these were constantly checked against the existing literature and subsequently subjected to mixed methods of analysis, including a Vector Auto-Regression (VAR) estimation model. Thereafter, the analysis followed the processes and outcomes of Grounded Theory, and a new substantive theory emerged, explaining how the phenomena of irregular migration and cross-border criminality are the decisive impetus for implementing the concept of securitization in border management using the proposed pattern. The findings therefore capture an area that has not yet benefitted from a comprehensive approach in the scientific community, including the seasonality, stationarity, dynamics, predictions, and pull and push factors of irregular migration, and also highlight how the recent pandemic interfered with border security. The research uses an inductive, revelatory theoretical approach which aims at offering a new theory to explain a phenomenon, providing a practically useful contribution for the scientific community, research institutes, and academia, as well as for organizational practitioners in the field, among them the UN, IOM, UNHCR, Frontex, Interpol, Europol, and national agencies specialized in border security. The scientific outcomes of this study were validated on June 30, 2021, when the author defended his dissertation for the European Joint Master's in Strategic Border Management, a prestigious two-year program supported by the European Commission, the Frontex Agency, and a consortium of six European universities; the work is currently one of the research objectives of his pending PhD research at the West University of Timisoara.Keywords: migration, border, security, memory effect
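As an illustration of the Vector Auto-Regression step mentioned in the methodology, the sketch below fits a VAR to synthetic monthly series standing in for irregular-migration detections and related cross-border crime counts; the data, variable names, and lag settings are hypothetical, since the study's data are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic monthly series with seasonality, standing in for the
# study's time series (entirely illustrative values).
rng = np.random.default_rng(7)
months = pd.date_range("2015-01", periods=72, freq="MS")
season = 200 * np.sin(2 * np.pi * np.arange(72) / 12)
detections = 1000 + season + rng.normal(0, 50, 72)
crime = 0.3 * detections + rng.normal(0, 20, 72)
df = pd.DataFrame({"detections": detections, "crime": crime}, index=months)

res = VAR(df).fit(maxlags=6, ic="aic")        # lag order chosen by AIC
forecast = res.forecast(df.values[-res.k_ar:], steps=6)  # 6-month outlook
print(res.k_ar, forecast.round(1))
```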
Procedia PDF Downloads 9197 Flood Risk Assessment, Mapping Finding the Vulnerability to Flood Level of the Study Area and Prioritizing the Study Area of Khinch District Using and Multi-Criteria Decision-Making Model
Authors: Muhammad Karim Ahmadzai
Abstract:
Floods are natural phenomena and an integral part of the water cycle. The majority result from climatic conditions but are also affected by the geology and geomorphology of the area, topography and hydrology, the water permeability of the soil and the vegetation cover, as well as by all kinds of human activities and structures. From the moment that human lives are at risk and significant economic impact is recorded, however, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. The majority of floods cannot be fully predicted, but it is feasible to reduce their risks through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically the Peshghore area, which has suffered numerous damages. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) to assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on streams have caused further flooding: stream beds that have been built over with houses and hotels, or converted into roads, cause flooding after every heavy rainfall. The streams crossing settlements and areas with high touristic development have been intensively modified by humans, as the pressure for real-estate development land is growing. In particular, several areas in Khinch face a high risk of extensive flood occurrence. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytic Hierarchy Process (AHP). AHP is a powerful yet simple multi-criteria decision-making method, commonly used for prioritization and selection: strategic goals are captured as a set of weighted criteria that are then used to score alternatives. Here, the method provides a weight for each criterion that contributes to the flood event. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, the flow direction, and the flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, Normalized Difference Vegetation Index, elevation, river density, distance from river, distance to road, and slope), these led to the final flood risk map. Finally, based on this map, priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas.Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis
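The AHP weighting step can be illustrated with a small numeric example. The sketch below derives criterion weights from the principal eigenvector of a pairwise comparison matrix and checks the consistency ratio; the 3x3 judgment matrix over three of the flood factors is hypothetical, not the study's actual judgments.

```python
import numpy as np

# Assumed pairwise judgments for three criteria, e.g. slope,
# distance from river, land use (Saaty's 1-9 scale, illustrative only).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # principal eigenvector -> weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
cr = ci / 0.58                        # random index RI = 0.58 for n = 3
print(w.round(3), round(cr, 3))       # CR < 0.10 means judgments are usable
```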
Procedia PDF Downloads 11596 Harnessing Renewable Energy as a Strategy to Combating Climate Change in Sub Saharan Africa
Authors: Gideon Nyuimbe Gasu
Abstract:
Sub-Saharan Africa is at a critical point, experiencing rapid population growth, particularly in urban areas, and a young, growing workforce. At the same time, the growing risk of catastrophic global climate change threatens to weaken food production systems, increase the intensity and frequency of droughts, floods, and fires, and undermine gains in development and poverty reduction. Although the region has the lowest per capita greenhouse gas emission level in the world, it will need to join global efforts to address climate change, including action to avoid significant increases in emissions and to encourage a green economy. Thus, there is a need for the concept of 'greening the economy' as prescribed at the Rio Summit of 1992. Renewable energy is one of the means to achieve the laudable goal of maintaining a green economy. There is a need to address climate change while facilitating continued economic growth and social progress, as energy today is critical to economic growth. Fossil fuels remain the major contributor to greenhouse gas emissions, and cleaner technologies such as carbon capture and storage and renewable energy have emerged as commercially competitive. This paper sets out to examine how to achieve a low-carbon economy with minimal emission of carbon dioxide and other greenhouse gases, which is one of the outcomes of implementing a green economy. The paper also examines different renewable energy sources, such as nuclear, wind, hydro, biofuel, and solar photovoltaic, as a panacea to the looming climate change menace. Finally, the paper assesses renewable energy and energy efficiency as drivers of new sources of income and jobs, which in turn reduce carbon emissions. The research engages qualitative, evaluative, and comparative methods and employs both primary and secondary sources of information. The primary sources are drawn from the sub-Saharan African region and from global environmental organizations, energy legislation, policies, related industries, and judicial processes. The secondary sources comprise books, journal articles, commentaries, discussions, observations, explanations, expositions, suggestions, prescriptions, and other material sourced from the internet on renewable energy as a panacea to climate change. All information obtained from these sources is subject to content analysis. The research results will show that the entire planet is warming as a result of the activities of mankind, which is clear evidence that current development is fundamentally unsustainable. Equally, the study will reveal that a low-carbon development pathway should be embraced in the sub-Saharan African region to minimize the emission of greenhouse gases, for example by using renewable energy rather than coal, oil, and gas. The study concludes that until adequate strategies are devised for the use of renewable energy, the region will continue to contribute to and worsen the current climate change menace and other adverse environmental conditions.Keywords: carbon dioxide, climate change, legislation/law, renewable energy
Procedia PDF Downloads 22595 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, the leverage effect, skewness, kurtosis, and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis (PCA) of various world indices; the paper presents an application to option pricing. The factors of the GARCHX model are extracted from a matrix of world indices by applying PCA. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them in order of relative importance: it computes a historical cross-asset covariance matrix and identifies the principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the MN-GARCHX benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, the leverage effect, skewness, kurtosis, and regime dependence.Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
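The PCA factor-extraction step described above reduces to an eigendecomposition of the historical covariance matrix of index returns. The sketch below illustrates it on simulated returns; the number of indices and the choice of three retained factors are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

# Simulated daily returns for 8 hypothetical world indices.
rng = np.random.default_rng(1)
T, n = 1000, 8
returns = rng.normal(0, 0.01, (T, n))

R = returns - returns.mean(axis=0)   # center each index's returns
cov = np.cov(R, rowvar=False)        # historical cross-asset covariance
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]    # sort by explained variance

k = 3                                # keep the three leading components
loadings = eigvecs[:, order[:k]]
factors = R @ loadings               # uncorrelated factor series (T x 3)
explained = eigvals[order[:k]] / eigvals.sum()
print(factors.shape, explained.round(3))
```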
Procedia PDF Downloads 29694 Longitudinal impact on Empowerment for Ugandan Women with Post-Primary Education
Authors: Shelley Jones
Abstract:
Assumptions abound that education for girls will, as a matter of course, lead to their economic empowerment as women; yet little is known about the ways in which schooling for girls who traditionally/historically would not have had opportunities for post-primary, or perhaps even primary, education (such as the participants in this study, based in rural Uganda) in reality impacts their economic situations. There is a need for longitudinal studies in which women share experiences, understandings, and reflections on their lives that can inform our knowledge of this. In response, this paper reports on stage four of a longitudinal case study (2004-2018) focused on education and empowerment for girls and women in rural Uganda, in which 13 of the 15 participants from the original study took part. This paper understands empowerment as not simply increased opportunities (e.g., employment) but also real gains in power, freedoms that enable agentive action, and authentic and viable choices/alternatives that offer 'exit options' from unsatisfactory situations. As with the other stages, this study used a critical, postmodernist, global feminist ethnographic methodology with multimodal and qualitative data collection. Participants took part in interviews, focus group discussions, and a two-day workshop, which explored whether and how they understood post-primary education to have contributed to their economic empowerment. A constructivist grounded theory approach was used for data analysis to capture the major themes. Findings indicate that although all participants believe post-primary education provided them with economic opportunities they would not have had otherwise, the parameters of their economic empowerment were severely constrained by historic and extant sociocultural, economic, political, and institutional structures that continue to disempower girls and women, as well as by the additional financial responsibilities they assumed to support others. Even though the participants had post-primary education and were able to obtain employment or operate businesses that they would not likely have managed without it, the majority of the participants' incomes were not sufficient to elevate them above the extreme poverty level, especially as many were single mothers and the sole income earners in their households. Furthermore, most deemed their working conditions unsatisfactory and their positions precarious; they also experienced sexual harassment and abuse in the labour force. Additionally, employment resulted in a double work burden for the participants: long days at work combined with many hours of domestic work at home (which, even if they had spousal partners, still fell almost exclusively to women). In conclusion, although the participants seem to have experienced some increase in economic empowerment, largely due to skills, knowledge, and qualifications gained at the post-primary level, numerous barriers prevented them from maximizing their capabilities and making significant gains in empowerment.
In addition to providing primary, secondary, and tertiary education to girls, there is a need to address the systemic gender inequalities that militate against women's empowerment, and to secure opportunities and freedom for women to come together and demand fair pay, reasonable working conditions and benefits, and freedom from gender-based harassment and assault in the workplace, as well as to advocate for the equal distribution of domestic work as a cultural change.Keywords: girls' post-primary education, women's empowerment, Uganda, employment
Procedia PDF Downloads 14693 Li-Ion Batteries vs. Synthetic Natural Gas: A Life Cycle Analysis Study on Sustainable Mobility
Authors: Guido Lorenzi, Massimo Santarelli, Carlos Augusto Santos Silva
Abstract:
The growth of non-dispatchable renewable energy sources in the European electricity generation mix is promoting research into technically feasible and cost-effective solutions for using the excess energy produced when demand is low. Increasing intermittent renewable capacity is becoming a challenge, especially in Europe, where some countries produced more than 20% of their 2015 electricity from wind and solar, with Denmark at around 40%. However, other consumption sectors (mainly transportation) still rely considerably on fossil fuels, with a slow transition to other forms of energy. Among the opportunities for different mobility concepts, electric vehicles (EVs) and biofuel-powered vehicles (BPVs) are the options that currently appear most promising. EVs mainly target light-duty users because of their zero (full electric) or reduced (hybrid) local emissions, while BPVs encourage the use of alternative resources with the same technologies (thermal engines) used so far. The batteries applied to EVs are based on lithium ions because of their overall good performance in energy density, safety, cost, and temperature behavior. Biofuels can be various, the major difference being their physical state (liquid or gaseous). In this study, gaseous biofuels are considered, and more specifically Synthetic Natural Gas (SNG) produced through a Power-to-Gas process consisting of an electrochemical upgrade of biogas (with Solid Oxide Electrolyzers) and CO2 recycling. This process combines a first stage of electrolysis, where syngas is produced, and a second stage of methanation, in which the product gas is turned into methane and made available for consumption. A techno-economic comparison between the two alternatives is possible, but it does not capture all the aspects involved in the two routes for the promotion of more sustainable mobility. For this reason, a more comprehensive methodology, Life Cycle Assessment, is adopted to describe the environmental implications of using excess electricity (directly or indirectly) for new vehicle fleets. The functional unit of the study is 1 km, and the two options are compared in terms of overall CO2 emissions, considering both Cradle-to-Gate and Cradle-to-Grave boundaries. Showing how the production and disposal of materials affect the environmental performance of the analyzed routes broadens the perspective on the impacts that different technologies produce beyond what is emitted during the operational life. This applies in particular to batteries, for which the decommissioning phase has a larger impact on the environmental balance than it does for electrolyzers. The energy density of Li-ion batteries is more than one order of magnitude lower than that of SNG, implying that, for the same amount of energy used, more material resources are needed to obtain the same effect. The comparison is performed in an energy system that simulates the Western European one, in order to assess which of the two solutions is more suitable to lead the de-fossilization of the transport sector with the least resource depletion and the mildest consequences for the ecosystem.Keywords: electrical energy storage, electric vehicles, power-to-gas, life cycle assessment
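The functional-unit comparison can be illustrated with back-of-the-envelope arithmetic covering only the use-phase electricity (the study's cradle-to-grave inventories are far richer). All figures below are hypothetical placeholders, not the paper's results.

```python
# Sketch of a per-km CO2 comparison under entirely assumed figures.
grid_co2 = 50.0          # g CO2-eq per kWh of (mostly excess renewable) power

# EV route: charging losses, then battery-to-wheel consumption.
ev_kwh_per_km = 0.18     # assumed EV consumption
ev_charging_eff = 0.90   # assumed charging efficiency
ev_gco2_km = grid_co2 * ev_kwh_per_km / ev_charging_eff

# SNG route: electrolysis + methanation losses, then a gas-engine vehicle.
ptg_eff = 0.60                 # assumed Power-to-Gas conversion efficiency
sng_kwh_fuel_per_km = 0.55     # assumed fuel energy demand of the vehicle
sng_gco2_km = grid_co2 * sng_kwh_fuel_per_km / ptg_eff

print(f"EV:  {ev_gco2_km:.1f} g CO2-eq/km")   # 10.0 with these assumptions
print(f"SNG: {sng_gco2_km:.1f} g CO2-eq/km")  # 45.8 with these assumptions
```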
Procedia PDF Downloads 177