Search results for: adaptive estimation

88 Techno Economic Analysis of CAES Systems Integrated into Gas-Steam Combined Plants

Authors: Coriolano Salvini

Abstract:

The increasing utilization of renewable energy sources for electric power production calls for the introduction of energy storage systems to match the electric demand over time. Although many countries are pursuing a “decarbonized” electrical system as a final goal, in the coming decades traditional fossil-fuel-fed power plants will still play a relevant role in fulfilling the electric demand. Presently, such plants provide grid ancillary services (frequency control, grid balance, reserve, etc.) by adapting the output power to the grid requirements. An interesting option is represented by the possibility of using traditional plants to improve the grid storage capabilities. The present paper addresses small- to medium-size systems suited for distributed energy storage. The proposed Energy Storage System (ESS) is based on a Compressed Air Energy Storage (CAES) system integrated into a Gas-Steam Combined Cycle (GSCC) or a gas-turbine-based CHP plant. The system can be incorporated in a newly built plant or added to an already existing one. To avoid any geological restriction related to the availability of natural compressed air reservoirs, artificial storage is addressed. During the charging phase, electric power is absorbed from the grid by an electrically driven intercooled/aftercooled compressor. During the discharge phase, the stored compressed air is sent to a heat transfer device fed by hot gas taken upstream of the Heat Recovery Steam Generator (HRSG) and subsequently expanded for power production. To maximize the output power, a staged reheated expansion process is adopted. The specific power production per kilogram per second of exhaust gas used to heat the stored air is two to three times larger than that achieved if the gas were used to produce steam in the HRSG. As a result, a relevant power augmentation is attained with respect to normal GSCC plant operation without additional use of fuel. Therefore, the excess output power can be considered “fuel free”, and the storage system can be compared to “pure” ESSs such as electrochemical, pumped hydro or adiabatic CAES. Representative cases featuring different power absorption, production capability, and storage capacity have been taken into consideration. For each case, a technical optimization aimed at maximizing the storage efficiency has been carried out. On the basis of the resulting storage pressure and volume, number of compression and expansion stages, air heater arrangement and process quantities found for each case, a cost estimation of the storage systems has been performed. Storage efficiencies from 0.6 to 0.7 have been assessed. Capital costs in the range of 400-800 €/kW and 500-1000 €/kWh have been estimated. Such figures are similar to or lower than those of alternative storage technologies.
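
As a rough illustration of the two figures of merit used in this kind of techno-economic screening (storage efficiency and specific capital cost per kW and per kWh), the following Python sketch computes them from charge/discharge energies and total capital expenditure. All numerical values are illustrative placeholders, not the paper's data.

```python
# Minimal sketch of the CAES screening figures of merit: storage efficiency and
# specific capital costs. All numbers below are illustrative placeholders.

def storage_efficiency(e_discharged_mwh: float, e_charged_mwh: float) -> float:
    """'Fuel-free' storage efficiency: electric energy recovered during discharge
    divided by electric energy absorbed from the grid during charge."""
    return e_discharged_mwh / e_charged_mwh

def specific_costs(capex_eur: float, power_kw: float, capacity_kwh: float):
    """Capital cost referred to the rated discharge power and to the storage capacity."""
    return capex_eur / power_kw, capex_eur / capacity_kwh

if __name__ == "__main__":
    eta = storage_efficiency(e_discharged_mwh=6.5, e_charged_mwh=10.0)     # ~0.65
    eur_per_kw, eur_per_kwh = specific_costs(capex_eur=2.4e6,
                                             power_kw=4000, capacity_kwh=4000)
    print(f"storage efficiency: {eta:.2f}")
    print(f"specific cost: {eur_per_kw:.0f} EUR/kW, {eur_per_kwh:.0f} EUR/kWh")
```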

Keywords: artificial air storage reservoir, compressed air energy storage (CAES), gas steam combined cycle (GSCC), techno-economic analysis

Procedia PDF Downloads 214
87 Prevalence and Associated Risk Factors of Age-Related Macular Degeneration in the Retina Clinic at a Tertiary Center in Makkah Province, Saudi Arabia: A Retrospective Record Review

Authors: Rahaf Mandura, Fatmah Abusharkh, Layan Kurdi, Rahaf Shigdar, Khadijah Alattas

Abstract:

Introduction: Age-related macular degeneration (AMD) in older individuals is a serious health issue that severely impacts the quality of life of millions globally. In 2020, AMD was the fourth leading cause of blindness worldwide. The global prevalence of AMD is estimated to be around 8.7%. AMD is a progressive disease involving the macular region of the retina, and it has a complex pathophysiology. Dysfunction of retinal pigment epithelium (RPE) cells is a crucial step in the pathway leading to irreversible degeneration of photoreceptors, with yellowish lipid-rich, protein-containing drusen deposits accumulating between Bruch's membrane and the RPE. Furthermore, lipofuscinogenesis, drusogenesis, inflammation, and neovascularization are the four main processes responsible for the formation of the two types of AMD: the wet (exudative, neovascular) and dry (non-exudative, geographic atrophy) types. We retrospectively evaluated the prevalence of AMD among patients visiting the retina clinic at King Abdulaziz University Hospital (Jeddah, Makkah Province, Saudi Arabia) to identify the commonly associated risk factors of AMD. Methods: The records of 3,067 individuals from 2017 to 2021 were reviewed. Of these, 1,935 satisfied the inclusion criteria and were included in this study. We excluded all patients below 18 years of age, as well as those who did not undergo fundus imaging or did not attend their booked appointments, follow-ups, treatments, and referrals. Results: The prevalence of AMD among the patients was 4%. The age of patients with AMD was significantly greater than that of those without AMD (72.4 ± 9.8 years vs. 57.2 ± 15.5 years; p < 0.001). Participants with a family history of AMD tended to have the disease more than those without such a history (85.7% vs. 45%; p = 0.043). Ex- and current smokers were more likely to have AMD than non-smokers (34% and 18.6% vs. 7.2%; p < 0.001). Patients with hypertension were at higher risk of developing AMD than those without hypertension (5.5% vs. 2.8%; p = 0.002), as were patients without type 1 diabetes compared with those with type 1 diabetes (4.2% vs. 0.8%; p = 0.040). In contrast, sex, nationality, type 2 diabetes, and abnormal lipid profile were not significantly associated with AMD. Regarding the clinical characteristics of the AMD cases, most (70.4%) were of the dry type and affected both eyes (77.2%). The disease duration was ≥5 years in 43.1% of the patients. The most frequent chronic diseases associated with AMD were type 2 diabetes (69.1%), hypertension (61.7%), and dyslipidemia (18.5%). Conclusion: In summary, our single tertiary center study showed that AMD is widely prevalent in Jeddah, Saudi Arabia (4%) and linked to a wide range of risk factors. Some of these are modifiable risk factors that can be adjusted to help reduce AMD occurrence. Furthermore, this study has shown the importance of screening and follow-up of family members of patients with AMD to promote early detection and intervention. We recommend conducting further research on AMD in Saudi Arabia. Concerning the study design, a community-based cross-sectional study would be more helpful for assessing the disease's prevalence. Finally, recruiting a larger sample size is required for more accurate estimation.

Keywords: age-related macular degeneration, prevalence, risk factor, dry AMD

Procedia PDF Downloads 42
86 Characterization of Alloyed Grey Cast Iron Quenched and Tempered for a Smooth Roll Application

Authors: Mohamed Habireche, Nacer E. Bacha, Mohamed Djeghdjough

Abstract:

In the brick industry, the smooth double roll crusher is used for medium and fine crushing of soft to medium-hard material. Due to the opposite inward rotation of the rolls, the feed material is nipped between the rolls and crushed by compression. The rolls are subject to intense wear, known as three-body abrasion, due to the action of abrasive products. The production downtime affecting productivity stems from two sources: the bi-monthly rectification of the roll crushers and their replacement when they are completely worn out. Choosing the right material for the roll crushers should result in longer machine cycles and reduced repair and maintenance costs. All roll crushers are imported from outside Algeria. This sometimes results in very long delivery times, which handicap the brickyards, in particular in respecting delivery times and honoring the orders made by customers. The aim of this work is to investigate the effect of alloying additions on the microstructure and wear behavior of grey lamellar cast iron for smooth roll crushers in the brick industry. The base grey iron was melted in a low-frequency induction furnace at a temperature of 1500 °C, in which return cast iron scrap, new cast iron ingot, and steel scrap were added to the melt to generate the desired composition. The chemical analysis of the bar samples was carried out using an Emission Spectrometer System PV 8050 Series (Philips), except for carbon, for which an Elementrac CS-i carbon/sulphur analyser was used. Unetched microstructures were used to evaluate the graphite flake morphology using the image comparison measurement method. At least five different fields were selected for quantitative estimation of phase constituents. The samples were observed at ×100 magnification with a Zeiss Axiover T40 MAT optical microscope equipped with a digital camera. An SEM equipped with EDS was used to characterize the phases present in the microstructure. The hardness (750 kg load, 5 mm diameter ball) was measured with a Brinell testing machine for both heat-treated and as-solidified test pieces. The test bars were used for tensile strength and metallographic evaluations. Mechanical properties were evaluated using tensile specimens made as per ASTM E8 standards. Two specimens were tested for each alloy. From each rod, a test piece was made for the tensile test. The results showed that the quenched and tempered alloyed grey cast iron (containing 0.62% Mn, 0.68% Cr, and 1.09% Cu) had the best wear resistance at 400 °C, due to fine carbides in the tempered matrix. In the quenched and tempered condition, increasing the Cu content in cast irons improved wear resistance moderately. Combined addition of Cu and Cr increases hardness and wear resistance for a quenched and tempered hypoeutectic grey cast iron.

Keywords: casting, cast iron, microstructure, heat treating

Procedia PDF Downloads 105
85 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, which represents an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' Theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and qualitative judgments based on the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and determination of action and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. The results of the updating depend on the engineer's previous experience; 2. The updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation; furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and therefore should be avoided. In order to solve those problems, an interesting research path is represented by investigating Artificial Intelligence (AI) techniques that can be useful for the automation of the modeling of variables and for the updating of material parameters without performing destructive tests. Among these, one that attracts particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for material parameters involved in the reliability assessment of the considered structure. A CBR system represents a good candidate for automating the modelling of variables because: 1. Engineers already draw an estimation of the material properties based on the experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. Material tests carried out on structures can be easily collected from laboratory databases or from the literature; 3. The system will provide the user with a reliable probabilistic description of the variables involved in the assessment that will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help in spreading probabilistic reliability assessment of existing buildings in common engineering practice and in targeting the best intervention and further tests on the structure; CBR represents a technique which may help to achieve this.
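
To make the Bayesian step concrete, the sketch below shows a standard conjugate normal-normal update of a prior PDF for a material strength parameter using test results. This is not the author's implementation; the model choice (normal prior, known measurement standard deviation) and all numbers are illustrative assumptions.

```python
# Minimal sketch of updating a prior PDF for a material strength parameter with
# test data, assuming a normal-normal conjugate model with known measurement
# standard deviation. All numerical values are illustrative placeholders.
import numpy as np

def update_normal_prior(mu0, sigma0, measurements, sigma_meas):
    """Posterior mean/std of a normal mean with prior N(mu0, sigma0^2) and
    i.i.d. normal measurements with known std sigma_meas."""
    x = np.asarray(measurements, dtype=float)
    n = x.size
    prec_post = 1.0 / sigma0**2 + n / sigma_meas**2            # posterior precision
    mu_post = (mu0 / sigma0**2 + x.sum() / sigma_meas**2) / prec_post
    return mu_post, np.sqrt(1.0 / prec_post)

# Prior from the design stage / similar cases; posterior after five core tests (MPa).
mu_post, sigma_post = update_normal_prior(mu0=30.0, sigma0=5.0,
                                           measurements=[27.1, 33.4, 29.8, 31.2, 28.5],
                                           sigma_meas=4.0)
print(f"posterior strength: N({mu_post:.1f}, {sigma_post:.2f}^2) MPa")
```

In the CBR setting described above, a retrieved case would supply the posterior PDF directly, so that a comparable distribution can be assigned to a new, untested structure of similar description.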

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 337
84 Numerical Simulation of Hydraulic Fracture Propagation in Marine-continental Transitional Tight Sandstone Reservoirs by Boundary Element Method: A Case Study of Shanxi Formation in China

Authors: Jiujie Cai, Fengxia LI, Haibo Wang

Abstract:

After years of research, offshore oil and gas development has now shifted to unconventional reservoirs, where multi-stage hydraulic fracturing technology has been widely used. However, the simulation of complex hydraulic fractures in tight reservoirs faces geological and engineering difficulties, such as large burial depths, sand-shale interbeds, and complex stress barriers. The objective of this work is to simulate hydraulic fracture propagation in the tight sandstone matrix of marine-continental transitional reservoirs, with the Shanxi Formation in the Tianhuan syncline of the Dongsheng gas field used as the research target. The characteristic parameters of the vertical rock samples with rich beddings were clarified through rock mechanics experiments. The influence of rock mechanical parameters, the vertical stress difference between the pay zone and the bedding layer, and fracturing parameters (such as injection rates, fracturing fluid viscosity, and the number of perforation clusters within a single stage) on fracture initiation and propagation was investigated. In this paper, a 3-D fracture propagation model was built to investigate the complex fracture propagation morphology by the boundary element method, considering the strength of the bonding surface between layers, the vertical stress difference and the fracturing parameters (such as injection rates, fluid volume and viscosity). The research results indicate that, under a vertical stress difference of 3 MPa and considering the effect of the weak bonding surface between layers, the fracture height can break through and enter the upper interlayer when the thickness of the overlying bedding layer is 6-9 m. The fracture propagates within the pay zone when the overlying interlayer is thicker than 13 m. The difference in fluid volume distribution between clusters can exceed 20% when the stress difference between the clusters in a stage exceeds 2 MPa. Fracture clusters in high-stress zones cannot initiate when the stress difference in the stage exceeds 5 MPa. The simulated fracture heights are much larger if the effect of the weak bonding surface between layers is not included. Increasing the injection rate, increasing the fracturing fluid viscosity, and reducing the number of clusters within a single stage can promote fracture height propagation through layers. Optimizing the perforation position and reducing the number of perforations can promote the uniform expansion of fractures. Typical curves for fracture height estimation were established for the tight sandstone of the Lower Permian Shanxi Formation. The model results show good consistency with the micro-seismic monitoring results of hydraulic fracturing in Well 1HF.

Keywords: fracture propagation, boundary element method, fracture height, offshore oil and gas, marine-continental transitional reservoirs, rock mechanics experiment

Procedia PDF Downloads 127
83 Phenotype and Psychometric Characterization of Phelan-McDermid Syndrome Patients

Authors: C. Bel, J. Nevado, F. Ciceri, M. Ropacki, T. Hoffmann, P. Lapunzina, C. Buesa

Abstract:

Background: Phelan-McDermid syndrome (PMS) is a genetic disorder caused by the deletion of the terminal region of chromosome 22 or mutation of the SHANK3 gene. Shank3 disruption in mice leads to dysfunction of synaptic transmission, which can be restored by epigenetic regulation with Lysine Specific Demethylase 1 (LSD1) inhibitors. PMS subjects present a variable degree of intellectual disability, delay or absence of speech, autism spectrum disorder symptoms, low muscle tone, motor delays and epilepsy. Vafidemstat is an LSD1 inhibitor in Phase II clinical development with a well-established and favorable safety profile, and data supporting the restoration of memory and cognition defects as well as reduction of agitation and aggression in several animal models and clinical studies. Therefore, vafidemstat has the potential to become a first-in-class precision medicine approach to treat PMS patients. Aims: The goal of this research is to perform an observational trial to psychometrically characterize individuals carrying deletions in SHANK3 and build a foundation for subsequent precision psychiatry clinical trials with vafidemstat. Methodology: This study is characterizing the clinical profile of 20 to 40 subjects, >16 years old, with a genotypically confirmed PMS diagnosis. Subjects complete a battery of neuropsychological scales, including the Repetitive Behavior Questionnaire (RBQ), the Vineland Adaptive Behavior Scales, the Autism Diagnostic Observation Schedule (ADOS)-2, the Battelle Developmental Inventory and the Behavior Problems Inventory (BPI). Results: By March 2021, 19 patients had been enrolled. Unsupervised hierarchical clustering of the results obtained so far identifies 3 groups of patients, characterized by different profiles of cognitive and behavioral scores. The first cluster is characterized by low Battelle age, high ADOS and low Vineland, RBQ and BPI scores. Low Vineland, RBQ and BPI scores are also detected in the second cluster, which in contrast has high Battelle age and low ADOS scores. The third cluster is somewhere in the middle for the Battelle, Vineland and ADOS scores while displaying the highest levels of aggression (high BPI) and repetitive behaviors (high RBQ). In line with the observation that female patients are generally affected by milder forms of autistic symptoms, no male patients are present in the second cluster. Dividing the results by gender highlights that male patients in the third cluster are characterized by a higher frequency of aggression, whereas female patients from the same cluster display a tendency toward higher repetitive behavior. Finally, statistically significant differences in deletion sizes are detected when comparing the three clusters (also after correcting for gender), and deletion size appears to be positively correlated with ADOS and negatively correlated with Vineland A and C scores. No correlation is detected between deletion size and the BPI and RBQ scores. Conclusions: Precision medicine may open a new way to understand and treat Central Nervous System disorders. Epigenetic dysregulation has been proposed to be an important mechanism in the pathogenesis of schizophrenia and autism. Vafidemstat holds exciting therapeutic potential in PMS, and this study will provide data regarding the optimal endpoints for a future clinical study exploring vafidemstat's ability to treat SHANK3-associated psychiatric disorders.
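
The sketch below illustrates the kind of unsupervised hierarchical clustering step described in the results, applied to the five psychometric scores. The column names, the simulated data, and the choice of Ward linkage on standardized scores are assumptions for illustration, not the authors' exact settings.

```python
# Illustrative hierarchical clustering of psychometric scores into 3 groups.
# Data and settings are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.normal(size=(19, 5)),
                      columns=["battelle_age", "ados", "vineland", "rbq", "bpi"])

Z = linkage(scores.apply(zscore), method="ward")             # agglomerative clustering
scores["cluster"] = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 groups
print(scores.groupby("cluster").mean())                      # profile each cluster
```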

Keywords: autism, epigenetics, LSD1, personalized medicine

Procedia PDF Downloads 165
82 Assessing Moisture Adequacy over Semi-arid and Arid Indian Agricultural Farms using High-Resolution Thermography

Authors: Devansh Desai, Rahul Nigam

Abstract:

Crop water stress (W) at a given growth stage starts to set in as moisture availability (M) to roots falls below 75% of its maximum. It has been found that the ratio of crop evapotranspiration (ET) to reference evapotranspiration (ET0) is an indicator of moisture adequacy and is strongly correlated with 'M' and 'W'. The spatial variability of ET0 over an agricultural farm of 1-5 ha is generally smaller than that of ET, since ET depends on both surface and atmospheric conditions while ET0 depends only on atmospheric conditions. Solutions from surface energy balance (SEB) modeling and thermal infrared (TIR) remote sensing are now known to estimate the latent heat flux of ET. In the present study, ET and the moisture adequacy index (MAI) (= ET/ET0) have been estimated over two contrasting western India agricultural farms: a rice-wheat system in a semi-arid climate and an arid, moisture-limited grassland system. High-resolution multi-band TIR observations at 65 m from the ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) instrument on board the International Space Station (ISS) were used in an analytical SEB model, STIC (Surface Temperature Initiated Closure), to estimate ET and MAI. The ancillary variables used in the ET modeling and MAI estimation were land surface albedo and NDVI from close-by LANDSAT data at 30 m spatial resolution, the ET0 product at 4 km spatial resolution from INSAT 3D, and meteorological forcing variables (air temperature and relative humidity) from short-range NWP weather forecasts. Farm-scale ET estimates at 65 m spatial resolution showed a low RMSE of 16.6% to 17.5% with R² > 0.8 over 18 datasets when compared to in situ measurements from eddy covariance systems, against reported errors of 25-30% for coarser-scale ET at 1 to 8 km spatial resolution. The MAI showed lower (<0.25) and higher (>0.5) magnitudes in the two contrasting agricultural farms. The study showed the potential need for high-resolution, high-repeat spaceborne multi-band TIR payloads, along with optical payloads, for estimating farm-scale ET and MAI, and hence consumptive water use and water stress. A set of future high-resolution multi-band TIR sensors is planned on board the Indo-French TRISHNA, ESA's LSTM and NASA's SBG space-borne missions to address sustainable irrigation water management at farm scale and improve crop water productivity. These will provide precise and fundamental surface energy balance variables such as LST (Land Surface Temperature), surface emissivity, albedo and NDVI. Synchronization among these missions is needed in terms of observations, algorithms, product definitions, calibration-validation experiments and downstream applications to maximize the potential benefits.
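
Since the moisture adequacy index is simply the ratio MAI = ET/ET0 evaluated on a grid, a minimal sketch of that step is given below. Array contents are placeholders, and the resampling of the coarse ET0 product onto the 65 m ET grid is assumed to have been done upstream.

```python
# Minimal sketch of MAI = ET / ET0 on gridded inputs, with the <0.25 / >0.5
# thresholds mentioned in the abstract. Values are placeholders.
import numpy as np

et  = np.array([[1.2, 2.8], [3.5, 0.6]])    # ECOSTRESS/STIC ET, mm/day (65 m grid)
et0 = np.array([[4.0, 4.1], [4.1, 4.0]])    # INSAT 3D ET0 resampled to the same grid

mai = np.divide(et, et0, out=np.full_like(et, np.nan), where=et0 > 0)
stressed = mai < 0.25      # low moisture adequacy (water-stressed farms)
adequate = mai > 0.5       # high moisture adequacy
print(mai.round(2), stressed.sum(), adequate.sum())
```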

Keywords: thermal remote sensing, land surface temperature, crop water stress, evapotranspiration

Procedia PDF Downloads 70
81 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology used involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient overall, while configuration 10, with 4 input variables, was considered the most effective given the smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with cultivator data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions. The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE values as low as 0.01 mm/day and 0.03 mm/day, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yields values between 0.02 and 0.04 for the producer data. In addition, the results of this study suggest that the developed technique can be applied to other locations by using specific data from these sites to further improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, the validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the model's response to different seasonal and climatic conditions.
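
For readers who want to reproduce the general workflow, the sketch below trains an MLP on synthetic meteorological inputs with the two optimizers mentioned above and reports the same metrics (MAE, RMSE, R², Kolmogorov-Smirnov). It is not the authors' code; the network size, the synthetic data and the four-variable input set are assumptions.

```python
# Illustrative MLP estimation of ET0 from meteorological inputs (synthetic data),
# comparing the 'sgd' and 'adam' optimizers and reporting the reported metrics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
X = rng.uniform(size=(1000, 4))        # e.g. temperature, humidity, radiation, wind speed
y = 2 + 3*X[:, 2] + 1.5*X[:, 0] - X[:, 1] + 0.5*X[:, 3] + rng.normal(0, 0.05, 1000)  # ET0, mm/day

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for solver in ("sgd", "adam"):         # the two optimizers compared in the study
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16), solver=solver,
                                       max_iter=5000, random_state=0))
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(solver,
          f"MAE={mean_absolute_error(y_te, pred):.3f}",
          f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f}",
          f"R2={r2_score(y_te, pred):.3f}",
          f"KS={ks_2samp(y_te, pred).statistic:.3f}")
```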

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 48
80 Climate Change Effects on Western Coastal Groundwater in Yemen (1981-2020)

Authors: Afrah S. M. Al-Mahfadi

Abstract:

Climate change is a global issue that has significant impacts on water resources, resulting in environmental, economic, and political consequences. Groundwater reserves, particularly in coastal areas, are facing depletion, leading to serious problems in regions such as Yemen. This study focuses on the western coastal region of Yemen, which already faces risks such as water crises, food insecurity, and widespread poverty. Climate change exacerbates these risks through high temperatures, sea level rise, and inadequate environmental policies. Research Aim: The aim of this research is to provide a comprehensive overview of the impact of climate change on the western coastal region of Yemen. Specifically, the study aims to analyze the relationship between climate change and the loss of fresh groundwater resources in this area. Methodology: The research utilizes a combination of a literature review and three case studies conducted through site visits. ArcGIS mapping is employed to analyze and visualize the relationship between climate change and the depletion of fresh groundwater resources. Additionally, data on precipitation from 1981 to 2020 and scenarios of projected sea level rise (SLR) are considered. Findings: The study reveals several future issues resulting from climate change. It is projected that the annual temperature will increase while the rainfall rate will decrease. Furthermore, the sea level is expected to rise by approximately 0.30 to 0.72 meters by 2100. These factors contribute to the loss of wetlands, the retreat of shorelines and estuaries, and the intrusion of seawater into the coastal aquifer, rendering drinking water from wells increasingly saline. Data Collection and Analysis Procedures: Data for this research were collected through a literature review, including studies on climate change impacts in coastal areas and the hydrogeology of the study region. Furthermore, three case studies were conducted through site visits. ArcGIS mapping techniques were utilized to analyze the relationship between climate change and the loss of fresh groundwater resources. Historical precipitation data from 1981 to 2020 and scenarios of projected sea level rise were also analyzed. Questions Addressed: (1) What is the impact of climate change on the western coastal region of Yemen? (2) How does climate change affect the availability of fresh groundwater resources in this area? Conclusion: The study concludes that the western coastal region of Yemen is facing significant challenges due to climate change. The projected increase in temperature, decrease in rainfall, and rise in sea levels have severe implications, such as the loss of wetlands, shorelines, and estuaries. Additionally, the intrusion of seawater into the coastal aquifer further exacerbates the issue of saline drinking water. Urgent measures are needed to address climate change, including improving water management, implementing integrated coastal zone planning, raising awareness among stakeholders, and implementing emergency projects to mitigate the impacts. Recommendations: To mitigate the adverse effects of climate change, several recommendations are provided. These include improving water management practices, developing integrated coastal zone planning strategies, raising awareness among all stakeholders, improving health and education, and implementing emergency projects to combat climate change. These measures aim to enhance adaptive capacity and resilience in the face of future climate change impacts.

Keywords: climate change, groundwater, coastal wetlands, Yemen

Procedia PDF Downloads 65
79 Wellbeing Effects from Family Literacy Education: An Ecological Study

Authors: Jane Furness, Neville Robertson, Judy Hunter, Darrin Hodgetts, Linda Nikora

Abstract:

Background and significance: This paper describes the first use of community psychology theories to investigate family-focused literacy education programmes, enabling a wide range of wellbeing effects of such programmes to be identified for the first time. Evaluations of family literacy programmes usually focus on the economic advantage of gains in literacy skills. By identifying other effects on aspects of participants' lives that are important to them, and how they occur, understanding of how such programmes contribute to wellbeing and social justice is augmented. Drawn from community psychology, an ecological systems-based, culturally adaptive framework for personal, relational and collective wellbeing illuminated outcomes of family literacy programmes that enhanced wellbeing and quality of life for adult participants, their families and their communities. All programmes, irrespective of their institutional location, could be similarly scrutinized. Methodology: The study traced the experiences of nineteen adult participants in four family-focused literacy programmes located in geographically and culturally different communities throughout New Zealand. A critical social constructionist paradigm framed this interpretive study. Participants were mainly Māori, Pacific Islands, or European New Zealanders. Seventy-nine repeated conversational interviews were conducted over 18 months with the adult participants, programme staff and people who knew the participants well. Twelve participant observations of programme sessions were conducted, and programme documentation was reviewed. Latent theoretical thematic analysis of the data drew on broad perspectives of literacy and on ecological systems theory, network theory and holistic, integrative theories of wellbeing. Steps taken to co-construct meaning with participants included the repeated conversational interviews and participant checking of interview transcripts and section drafts. The researcher (this paper's first author) followed methodological guidelines developed by indigenous peoples for non-indigenous researchers. Findings: The study found that the four family literacy programmes, differing in structure, content, aims and foci, nevertheless shared common principles and practices that reflected programme staff's overarching concern for people's wellbeing along with their desire to enhance literacy abilities. A human rights- and strengths-based view of people, grounded in respect for diverse culturally based values and practices, was evident in staff's expression of their values and beliefs and in their practices. This enacted stance influenced the outcomes of programme participation for the adult participants, their families and their communities. Alongside the literacy and learning gains identified, participants experienced positive social and relational events and changes, affirmation and strengthening of their culturally based values, and affirmation and building of positive identity. Systemically, the interconnectedness of programme effects with participants' personal histories and circumstances, the flow-on of effects to other aspects of people's lives and to their families and communities, and the personalised character of the pathways people journeyed towards enhanced wellbeing were identified. Concluding statement: This paper demonstrates the critical contribution of community psychology to a fuller understanding of family-focused educational programme outcomes than has previously been attainable, the meaning of these broader outcomes to people in their lives, and their role in wellbeing and social justice.

Keywords: community psychology, ecological theory, family literacy education, flow on effects, holistic wellbeing

Procedia PDF Downloads 254
78 The Efficacy of Government Strategies to Control COVID-19: Evidence from 22 High COVID Fatality Rate Countries

Authors: Imalka Wasana Rathnayaka, Rasheda Khanam, Mohammad Mafizur Rahman

Abstract:

The COVID-19 pandemic has created unprecedented challenges for both the health and economic systems of countries around the world. This study aims to evaluate the effectiveness of governments' decisions to mitigate the risks of COVID-19 and to propose policy directions to reduce its magnitude. The study is motivated by the ongoing coronavirus outbreaks and the comprehensive policy responses taken by countries to mitigate the spread of COVID-19 and reduce death rates. This study contributes to filling the knowledge gap by examining the long-term efficacy of governments' extensive plans. The study employs a panel autoregressive distributed lag (ARDL) framework. The panels incorporate both a significant number of variables and fortnightly observations from 22 countries. The dependent variables adopted in this study are the fortnightly death rates and the rates of spread of COVID-19. Mortality rate and infection rate data were computed based on the number of deaths and the number of new cases per 10,000 people. The explanatory variables are fortnightly values of indexes used to investigate the efficacy of government interventions to control COVID-19: the overall government response index, the stringency index, the containment and health index, and the economic support index. The study relies on the Oxford COVID-19 Government Response Tracker (OxCGRT). Following the ARDL procedure, the study employs (i) unit root tests to check stationarity, (ii) panel cointegration tests, and (iii) PMG and ARDL estimation techniques. The study shows that the COVID-19 pandemic forced immediate responses from policymakers across the world to mitigate the risks of COVID-19. Of the four types of government policy interventions, (i) stringency and (ii) economic support have been the most effective: strengthening stringency and financial measures has resulted in a reduction in infection and fatality rates, while (iii) the overall government response is positively associated with deaths but negatively with infected cases. Even though this positive relationship is somewhat unexpected in the long run, the public has broken governments' social distancing norms in some countries, and population age demographics would be a possible reason for that result. (iv) Containment and healthcare improvements reduce death rates but increase infection rates, although the effect has been smaller (in absolute value). The model implies that implementing containment health practices without associated tracing and individual-level quarantine does not work well. The policy implication is that containment and health measures must be applied together with targeted, aggressive, and rapid containment to extensively reduce the number of people infected with COVID-19. Furthermore, the results demonstrate that economic support for income and debt relief has been the key to suppressing the rate of COVID-19 infections and fatality rates.
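
The sketch below is a simplified illustration of the country-by-country ARDL step that underlies mean-group style panel estimates. It is not the pooled mean group (PMG) estimator used in the study: it only fits an ARDL(1,1) per country and averages the coefficients, and all variable names and data are hypothetical stand-ins for the OxCGRT indices and the fortnightly rates.

```python
# Simplified mean-group style ARDL sketch (NOT the PMG estimator of the study).
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL

rng = np.random.default_rng(2)
frames = []
for country in range(22):
    t = 60                                               # fortnightly observations
    stringency = rng.uniform(0, 100, t)
    econ_support = rng.uniform(0, 100, t)
    deaths = 5 - 0.02 * stringency - 0.01 * econ_support + rng.normal(0, 0.5, t)
    frames.append(pd.DataFrame({"country": country, "deaths": deaths,
                                "stringency": stringency, "econ_support": econ_support}))

params = []
for country, g in pd.concat(frames).groupby("country"):
    g = g.reset_index(drop=True)
    # ARDL(1,1): one lag of the dependent variable and of each policy index.
    res = ARDL(g["deaths"], lags=1,
               exog=g[["stringency", "econ_support"]], order=1).fit()
    params.append(res.params)

print(pd.concat(params, axis=1).mean(axis=1))            # mean-group averaged coefficients
```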

Keywords: COVID-19, infection rate, death rate, government response, panel data

Procedia PDF Downloads 76
77 Ruminal Fermentation of Biologically Active Nitrate- and Nitro-Containing Forages

Authors: Robin Anderson, David Nisbet

Abstract:

Nitrate, 3-nitro-1-propionic acid (NPA) and 3-nitro-1-propanol (NPOH) are biologically active chemicals that can accumulate naturally in rangeland grasses and forages consumed by grazing cattle, sheep and goats. While toxic to livestock if accumulations and amounts consumed are high enough, particularly in animals having no recent exposure to the forages, these chemicals are known to be potent inhibitors of the methane-producing bacteria inhabiting the rumen. Consequently, there is interest in examining their potential use as anti-methanogenic compounds to decrease methane emissions by grazing ruminants. In the present study, rumen microbes, collected freshly from a cannulated Holstein cow maintained on a 50:50 corn-based concentrate:alfalfa diet, were mixed (10 mL fluid) in 18 x 150 mm crimp-top tubes with 0.5 g of high-nitrate barley (Hordeum vulgare; containing 272 µmol nitrate per g forage dry matter) or with NPA- or NPOH-containing milkvetch forages (Astragalus canadensis and Astragalus miser, containing 80 and 174 µmol soluble NPA or NPOH per g forage dry matter, respectively). Incubations containing 0.5 g alfalfa (Medicago sativa) were used as controls. Tubes (three per forage) were capped and incubated anaerobically (under oxygen-free carbon dioxide) for 24 h at 39 °C, after which time the amount of total gas produced was measured via volume displacement and headspace samples were analyzed by gas chromatography to determine concentrations of hydrogen and methane. Fluid samples were analyzed by gas chromatography to measure accumulations of fermentation acids. A completely randomized analysis of variance revealed that the nitrate-containing barley and both the NPA- and the NPOH-containing milkvetches significantly decreased methane production, by > 50%, when compared to the methane produced by populations incubated similarly with alfalfa (70.4 ± 3.6 µmol/mL incubation fluid). Accumulations of hydrogen, which typically increase when methane production is inhibited, in incubations with the nitrate-containing barley and the NPA- and NPOH-containing milkvetches did not differ from the accumulations observed in the alfalfa controls (0.09 ± 0.04 µmol/mL incubation fluid). Accumulations of fermentation acids produced in the incubations containing the high-nitrate barley and the NPA- and NPOH-containing milkvetches likewise did not differ from the accumulations observed in incubations containing alfalfa (123.5 ± 10.8, 36.0 ± 3.0, 17.1 ± 1.5, 3.5 ± 0.3, 2.3 ± 0.2, and 2.2 ± 0.2 µmol/mL incubation fluid for acetate, propionate, butyrate, valerate, isobutyrate, and isovalerate, respectively). This finding indicates that the microbial populations did not compensate for the decreased methane production through changes in the production of fermentation acids. Stoichiometric estimation of the fermentation balance revealed that > 77% of the reducing equivalents generated during fermentation of the forages were recovered in fermentation products, and the recoveries did not differ between the alfalfa incubations and those with the high-nitrate barley or the NPA- or NPOH-containing milkvetches. Stoichiometric estimates of the amounts of hexose fermented similarly did not differ between the nitrate-, NPA- and NPOH-containing incubations and those with alfalfa, averaging 99.6 ± 37.2 µmol hexose consumed/mL of incubation fluid. These results suggest that forages containing nitrate, NPA or NPOH may be useful to reduce methane emissions of grazing ruminants, provided the risks of toxicity can be effectively managed.
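
For orientation, the sketch below shows a commonly used Demeyer-type fermentation-balance calculation of the kind the abstract refers to. The authors' exact equations are not stated, so both the formulas and the input values should be read as assumptions for illustration only.

```python
# Sketch of a Demeyer-type fermentation balance (assumed formulas, placeholder data).
# Inputs are molar accumulations in umol/mL of incubation fluid.
def fermentation_balance(acetate, propionate, butyrate, valerate, methane, hydrogen):
    # Metabolic hydrogen ([2H]) produced and utilized, per the classical balance equations.
    h_produced = 2 * acetate + propionate + 4 * butyrate
    h_utilized = 2 * propionate + 2 * butyrate + 4 * methane + hydrogen
    recovery = h_utilized / h_produced          # fraction of reducing equivalents recovered
    # Hexose fermented, estimated from the volatile fatty acid pattern.
    hexose = 0.5 * (acetate + propionate) + butyrate + valerate
    return recovery, hexose

recovery, hexose = fermentation_balance(acetate=60.0, propionate=20.0, butyrate=10.0,
                                        valerate=2.0, methane=25.0, hydrogen=0.1)
print(f"2H recovery: {recovery:.2f}, hexose fermented: {hexose:.1f} umol/mL")
```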

Keywords: nitrate, nitropropanol, nitropropionic acid, rumen methane emissions

Procedia PDF Downloads 128
76 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá

Authors: Dayron Camilo Bermudez Mendoza

Abstract:

Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress presentation will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods like hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. The COPERT model was utilized to ascertain emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and the Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, using platforms like the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach for a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing the analyses on the per-square-meter property values in each city block. The results are expected to be presented and published at the upcoming conference; the work integrates knowledge across disciplines and culminates in a master's thesis.
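
A minimal sketch of the hedonic-pricing step is shown below: regress the log of the per-square-meter property value on a local emission indicator plus structural and location controls, and read the emission coefficient as a semi-elasticity. The variable names, the synthetic data, and the specific controls are hypothetical, not the study's specification.

```python
# Minimal hedonic-pricing sketch on synthetic block-level data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "pm25": rng.uniform(10, 45, n),              # block-level PM2.5 proxy from the inventory
    "area_m2": rng.uniform(40, 180, n),          # dwelling size control
    "dist_cbd_km": rng.uniform(1, 20, n),        # distance-to-center control
    "stratum": rng.integers(1, 7, n),            # socioeconomic stratum control
})
df["log_price_m2"] = (8.2 - 0.008 * df.pm25 + 0.002 * df.area_m2
                      - 0.015 * df.dist_cbd_km + 0.12 * df.stratum
                      + rng.normal(0, 0.15, n))

model = smf.ols("log_price_m2 ~ pm25 + area_m2 + dist_cbd_km + C(stratum)", data=df).fit()
# Semi-elasticity: approximate % change in price per m2 per unit of PM2.5.
print(model.params["pm25"])
```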

Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility

Procedia PDF Downloads 58
75 Triple Immunotherapy to Overcome Immune Evasion by Tumors in a Melanoma Mouse Model

Authors: Mary-Ann N. Jallad, Dalal F. Jaber, Alexander M. Abdelnoor

Abstract:

Introduction: Current evidence confirms that both the innate and adaptive immune systems are capable of recognizing and abolishing malignant cells. The emergence of cancerous tumors in patients is, therefore, an indication that certain cancer cells can resist elimination by the immune system through a process known as “immune evasion”. In fact, cancer cells often exploit regulatory mechanisms to escape immunity. Such mechanisms normally exist to control immune responses and prohibit exaggerated or autoimmune reactions. Recently, immunotherapies have shown promising yet limited results. Therefore, this study investigates several immunotherapeutic combinations and devises a triple immunotherapy which harnesses the innate and acquired immune responses towards the annihilation of malignant cells by overcoming their ability of immune evasion, consequently hampering malignant progression and eliminating established tumors. The aims of the study are to rule out acute/chronic toxic effects of the proposed treatment combinations, to assess the effect of these combinations on tumor growth and survival rates, and to investigate potential mechanisms underlying the phenotypic results by analyzing serum levels of anti-tumor cytokines, angiogenic factors and a tumor progression indicator, as well as the tumor-infiltrating immune-cell populations. Methodology: For toxicity analysis, cancer-free C57BL/6 mice are randomized into 9 groups: group 1 untreated, group 2 treated with sterile saline (the solvent of the treatments used), group 3 treated with monophosphoryl lipid A (MPLA), group 4 with anti-CTLA4 antibodies, group 5 with 1-methyl-tryptophan (1-MT, an indoleamine dioxygenase-1 inhibitor), group 6 with both MPLA and anti-CTLA4 antibodies, group 7 with both MPLA and 1-MT, group 8 with both anti-CTLA4 antibodies and 1-MT, and group 9 with all three: MPLA, anti-CTLA4 antibodies and 1-MT. Mice are monitored throughout the treatment period and for the following three months. At that point, histological sections from their main organs are assessed. For tumor progression and survival analysis, a murine melanoma model is generated by injecting analogous mice with B16F10 melanoma cells. These mice are segregated into the nine groups listed above. Their tumor size and survival are monitored. To depict underlying mechanisms, melanoma-bearing mice from each group are sacrificed at several time points. Sera are tested to assess the levels of interleukin-12 (IL-12), vascular endothelial growth factor (VEGF), and S100B. Furthermore, tumors are excised for analysis of infiltrating immune cell populations, including T cells, macrophages, natural killer cells and immune-regulatory cells. Results: Toxicity analysis shows that all treated groups present no signs of either acute or chronic toxicity. Their appearance and weights were comparable to those of the control groups throughout the treatment period and for the following 3 months. Moreover, histological sections from their hearts, kidneys, lungs, and livers were normal. Work is ongoing to complete the remaining study aims. Conclusion: Toxicity was the major concern for the success of the proposed comprehensive combinational therapy. The data generated so far rule out any acute or chronic toxic effects. Consequently, the ongoing work is quite promising and may significantly contribute to the development of more effective immunotherapeutic strategies for the treatment of cancer patients.

Keywords: cancer immunotherapy, check-point blockade, combination therapy, melanoma

Procedia PDF Downloads 122
74 The Bidirectional Effect between Parental Burnout and the Child’s Internalized and/or Externalized Behaviors

Authors: Aline Woine, Moïra Mikolajczak, Virginie Dardier, Isabelle Roskam

Abstract:

Background information: Becoming a parent is said to be the happiest event one can ever experience in one's life. This popular (and almost absolute) truth, which no reasonable and decent human being would ever dare question on pain of being singled out as a bad parent, contrasts with the nuances that reality offers. Indeed, while many parents do thrive in their parenting role, some others falter and become progressively overwhelmed by it, ineluctably caught in a spiral of exhaustion. Parental burnout (henceforth PB) sets in when parental demands (stressors) exceed parental resources. While it is now generally acknowledged that PB affects the parent's behavior in terms of neglect and violence toward their offspring, little is known about the impact that the syndrome might have on the children's internalized (anxious and depressive symptoms, somatic complaints, etc.) and/or externalized (irritability, violence, aggressiveness, conduct disorder, oppositional disorder, etc.) behaviors. Furthermore, at the time of writing and to the best of our knowledge, no research has yet tested the reverse effect, namely that of the child's internalized and/or externalized behaviors on the onset and/or maintenance of parental burnout symptoms. Goals and hypotheses: The present pioneering research proposes to fill an important gap in the existing literature on PB by investigating the bidirectional effect between PB and the child's internalized and/or externalized behaviors. Relying on a cross-lagged longitudinal study with three waves of data collection (4 months apart), our study tests a transactional model with bidirectional and recursive relations between the observed variables across the three waves, as well as autoregressive paths and cross-sectional correlations. Methods: As we write this, wave-two data are being collected via Qualtrics, and we expect a final sample of about 600 participants composed of French-speaking (snowball sample) and English-speaking (Prolific sample) parents. Structural equation modeling is employed using Stata version 17. In order to retain as much statistical power as possible, we use all available data and therefore apply maximum likelihood with missing values (mlmv) as the estimation method to compute the parameter estimates. To limit (insofar as possible) shared method variance bias in the evaluation of the child's behavior, the study relies on a multi-informant evaluation approach. Expected results: We expect our three-wave longitudinal study to show that PB symptoms (measured at T1) raise the occurrence/intensity of the child's externalized and/or internalized behaviors (measured at T2 and T3). We further expect the child's occurrence/intensity of externalized and/or internalized behaviors (measured at T1) to augment the risk for PB (measured at T2 and T3). Conclusion: Should our hypotheses be confirmed, our results will make an important contribution to the understanding of both PB and children's behavioral issues, thereby opening interesting theoretical and clinical avenues.
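
The sketch below writes out a cross-lagged panel specification of the kind described above in lavaan-style syntax and fits it with the Python package semopy, as a stand-in for the Stata sem/mlmv workflow (it does not replicate full-information maximum likelihood for missing data). The variable names (pb_t1..pb_t3 for parental burnout, ext_t1..ext_t3 for externalized behaviors) and the simulated data are hypothetical placeholders.

```python
# Illustrative cross-lagged panel model fitted with semopy (placeholder data).
import numpy as np
import pandas as pd
import semopy

# Autoregressive and cross-lagged regressions, plus within-wave covariances.
desc = """
pb_t2 ~ pb_t1 + ext_t1
ext_t2 ~ ext_t1 + pb_t1
pb_t3 ~ pb_t2 + ext_t2
ext_t3 ~ ext_t2 + pb_t2
pb_t1 ~~ ext_t1
pb_t2 ~~ ext_t2
pb_t3 ~~ ext_t3
"""

rng = np.random.default_rng(4)
data = pd.DataFrame(rng.normal(size=(600, 6)),
                    columns=["pb_t1", "pb_t2", "pb_t3", "ext_t1", "ext_t2", "ext_t3"])

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())   # path estimates, standard errors, p-values
```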

Keywords: exhaustion, structural equation modeling, cross-lagged longitudinal study, violence and neglect, child-parent relationship

Procedia PDF Downloads 73
73 Measurement and Modelling of HIV Epidemic among High Risk Groups and Migrants in Two Districts of Maharashtra, India: An Application of Forecasting Software-Spectrum

Authors: Sukhvinder Kaur, Ashok Agarwal

Abstract:

Background: For the first time, in 2009, India was able to generate estimates of HIV incidence (the number of new HIV infections per year). Analysis of epidemic projections helped reveal that the number of new annual HIV infections in India had declined by more than 50% during the last decade (GOI Ministry of Health and Family Welfare, 2010). The National AIDS Control Organisation (NACO) then planned to scale up its efforts in generating projections through epidemiological analysis and modelling by taking recent available sources of evidence, such as HIV Sentinel Surveillance (HSS), India Census data and other critical data sets. Recently, NACO generated the current round of HIV estimates (2012) through the globally recommended tool, the Spectrum software, and came out with estimates for adult HIV prevalence, annual new infections, the number of people living with HIV, AIDS-related deaths and treatment needs. The state-level prevalence and incidence projections produced were used to project the consequences of the epidemic in Spectrum. With HIV estimates generated at the state level in India by NACO, the USAID-funded PIPPSE project, under the leadership of NACO, undertook estimations and projections at the district level using the same Spectrum software. In 2011, adult HIV prevalence in Maharashtra, one of the high-prevalence states, was 0.42%, ahead of the national average of 0.27%. Considering the heterogeneity of the HIV epidemic between districts, two districts of Maharashtra, Thane and Mumbai, were selected to estimate and project the number of people living with HIV/AIDS (PLHIV), HIV prevalence among adults and annual new HIV infections until 2017. Methodology: Inputs to Spectrum included demographic data from the Census of India since 1980 and the sample registration system, programmatic data on 'Alive and on ART (adults and children)', 'Mother-Baby pairs under PPTCT' and 'High Risk Group (HRG) size mapping estimates', and surveillance data from various rounds of HSS, the National Family Health Survey-III, the Integrated Biological and Behavioural Assessment and the Behavioural Sentinel Surveillance. Major Findings: Assuming current programmatic interventions in these districts, an estimated decrease of 12 percentage points in Thane and 31 percentage points in Mumbai in new infections among HRGs and migrants is observed from 2011 to 2017. Conclusions: The project also validated the decrease in new HIV infections among one of the high-risk groups, female sex workers (FSWs), using programme cohort data from 2012 to 2016. Though there is a decrease in HIV prevalence and new infections in Thane and Mumbai, a further decrease is possible if appropriate programme responses, strategies and interventions are envisaged for specific target groups based on this evidence. Moreover, the evidence needs to be validated by other estimation/modelling techniques, and evidence can be generated for other districts of the state, where HIV prevalence is high and reliable data sources are available, to understand the epidemic within the local context.

Keywords: HIV sentinel surveillance, high risk groups, projections, new infections

Procedia PDF Downloads 211
72 Closing down the Loopholes: How North Korea and Other Bad Actors Manipulate Global Trade in Their Favor

Authors: Leo Byrne, Neil Watts

Abstract:

In the complex and evolving landscape of global trade, maritime sanctions emerge as a critical tool wielded by the international community to curb illegal activities and alter the behavior of non-compliant states and entities. These sanctions, designed to restrict or prohibit trade by sea with sanctioned jurisdictions, entities, or individuals, face continuous challenges due to the sophisticated evasion tactics employed by countries like North Korea. As the Democratic People's Republic of Korea (DPRK) diverts significant resources to circumvent these measures, understanding the nuances of its methodologies becomes imperative for maintaining the integrity of global trade systems. The DPRK, one of the most sanctioned nations globally, has developed an intricate network to facilitate its trade in illicit goods, ensuring that the flow of revenue from designated activities continues unabated. Given its geographic and economic conditions, North Korea predominantly relies on maritime routes, utilizing foreign ports to route its illicit trade. This reliance on the sea is exploited through various sophisticated methods, including the use of front companies, falsification of documentation, commingling of bulk cargos, and physical alterations to vessels. These tactics enable the DPRK to navigate through the gaps in regulatory frameworks and lax oversight, effectively undermining international sanctions regimes. Maritime sanctions carry significant implications for global trade, imposing heightened risks in the maritime domain. The deceptive practices employed not only by the DPRK but also by other high-risk jurisdictions necessitate a comprehensive understanding of UN targeted sanctions. For stakeholders in the maritime sector, including maritime authorities, vessel owners, shipping companies, flag registries, and financial institutions serving the shipping industry, awareness and compliance are paramount. Violations can lead to severe consequences, including reputational damage, sanctions, hefty fines, and even imprisonment. To mitigate the risks associated with these deceptive practices, it is crucial for maritime sector stakeholders to employ rigorous due diligence and regulatory compliance screening measures. Effective sanctions compliance serves as a protective shield against legal, financial, and reputational risks, preventing exploitation by international bad actors. This requires not only a deep understanding of the sanctions landscape but also the capability to identify and manage risks through informed decision-making and proactive risk management practices. As the DPRK and other sanctioned entities continue to evolve their sanctions evasion tactics, the international community must enhance its collective efforts to demystify and counter these practices. By leveraging more stringent compliance measures, stakeholders can safeguard against the illicit use of the maritime domain, reinforcing the effectiveness of maritime sanctions as a tool for global security. This paper seeks to dissect North Korea's adaptive strategies in the face of maritime sanctions. By examining up-to-date, geographically and temporally relevant case studies, it aims to shed light on the primary nodes through which Pyongyang evades sanctions and smuggles goods via third-party ports. The goal is to propose multi-level interaction strategies, ranging from governmental interventions to localized enforcement mechanisms, to counteract these evasion tactics.

Keywords: maritime, maritime sanctions, international sanctions, compliance, risk

Procedia PDF Downloads 70
71 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates caused by sampling errors, since the success of a monitoring program depends mainly on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012-2014) from a mixed land-use site located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed throughout the year. The investigation of a large number of water quality parameters is time-consuming and resource-intensive; in order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time on monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also examined. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals such as lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter. The CVs of the monitored water quality parameters were high (ranging from 3.8 to 15.5), which suggests that using a grab sampling design to estimate mass emission rates in the study area can lead to errors due to this large variability. The TSS discharge load calculation error was only 2% between two sample-size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that a grab sample collected after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
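
A minimal sketch of the surrogate-selection step with PCA is given below. The file name and the set of water quality columns are illustrative assumptions, not the study's data; the idea is simply that parameters loading heavily on the same component as TSS or COD are candidates to be represented by those surrogates.

```python
# Hypothetical sketch: screening surrogate water quality parameters with PCA.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

params = ["TSS", "turbidity", "TP", "Pb", "Cr", "Cu", "COD", "BOD"]  # assumed columns
df = pd.read_csv("storm_event_samples.csv")[params]                  # placeholder file

# Standardize so parameters with large ranges do not dominate the components.
X = StandardScaler().fit_transform(df)

pca = PCA(n_components=2)
pca.fit(X)

# Loadings: variables clustering with TSS or COD on a component are surrogate candidates.
loadings = pd.DataFrame(pca.components_.T, index=params, columns=["PC1", "PC2"])
print(loadings.round(2))
print("Explained variance ratio:", pca.explained_variance_ratio_.round(2))
```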

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 240
70 Identity and Mental Adaptation of Deaf and Hard-of-Hearing Students

Authors: N. F. Mikhailova, M. E. Fattakhova, M. A. Mironova, E. V. Vyacheslavova

Abstract:

For the mental and social adaptation of deaf and hard-of-hearing people, cultural and social aspects, namely the formation of identity (acculturation) and educational conditions, are highly significant. We studied 137 deaf and hard-of-hearing students in different educational settings using the following methods: the Big Five (Costa & McCrae, 1997), TRF (Becker, 1989), WCQ (Lazarus & Folkman, 1988), self-esteem and coping strategies (Jambor & Elliott, 2005), and the self-stigma scale (Mikhailov, 2008). The type of self-identification depended on the degree of deafness, the type of education, and the method of communication in the family: greater hearing loss, education in schools for the deaf, and sign communication increased the likelihood of 'deaf' acculturation, while milder hearing loss, inclusive education in a public school or a school for the hearing-impaired, and mixed communication in the family contributed to the formation of 'hearing' acculturation. The choice of specific coping strategies also depended on the degree of deafness: greater hearing loss increased 'withdrawal into the deaf world' coping and decreased 'bicultural skills' coping, and people with mild hearing loss tended to conceal it. In the context of this ongoing discussion, we examined the personality characteristics of deaf and hard-of-hearing students, their coping, and other deafness-associated factors depending on their acculturation type. Students who identified themselves with the 'hearing world' had high self-esteem, higher levels of extraversion, self-awareness, personal resources, and willingness to cooperate, better psychological health, emotional stability, a higher capacity for empathy, a life richer in feelings and meaning, and a strong sense of self-worth. They also actively used the strategies of problem-solving, acceptance of responsibility, and positive reappraisal. Students who confined themselves to the culture of deaf people had more severe hearing loss and, accordingly, more communication barriers. The absence or rare use of coping strategies by these students points to a lower level of stress in their lives: their self-esteem had not been challenged in the specific social environment of students with the same severity of hearing loss, and this environment thus provided a sense of comfort (as suggested by their high scores on psychological health, personal resources, and emotional stability). Students with bicultural acculturation had a higher level of psychological resources: they used positive reappraisal coping more often and had better psychological health. Lack of belonging to a particular culture (marginality) leads to personality disintegration and social and psychological disadaptation: deaf and hard-of-hearing students with marginal identification had lower self-esteem, poorer psychological health and personal resources, and lower levels of extraversion, self-confidence, and life satisfaction. They, in fact, become a 'risk group' (many of them dropped out of university, divorced, and one even ended up in the ranks of ISIS). All these data argue for the importance of a cultural 'anchor' for people with hearing deprivation. Supported by RFBR grant No. 19-013-00406.

Keywords: acculturation, coping, deafness, marginality

Procedia PDF Downloads 204
69 Ethnic Identity as an Asset: Linking Ethnic Identity, Perceived Social Support, and Mental Health among Indigenous Adults in Taiwan

Authors: A.H.Y. Lai, C. Teyra

Abstract:

In Taiwan, there are 16 officially recognised indigenous groups, accounting for 2.3% of the total population. Like other indigenous populations worldwide, indigenous peoples in Taiwan have poorer mental health because of their history of oppression and colonisation. Amid the negative narratives, the ethnic identity of cultural minorities is their unique psychological and cultural asset. Moreover, positive socialisation is found to be related to strong ethnic identity. Based on Phinney's theory of ethnic identity development and social support theory, this study adopted a strength-based approach conceptualising ethnic identity as the central organising principle that links perceived social support and mental health among indigenous adults in Taiwan. Aims. The overall aim is to examine the effect of ethnic identity and social support on mental health. The specific aims were to examine: (1) the association between ethnic identity and mental health; (2) the association between perceived social support and mental health; and (3) the indirect effect of ethnic identity linking perceived social support and mental health. Methods. Participants were indigenous adults in Taiwan (n=200; mean age=29.51; female=31%, male=61%, others=8%). A cross-sectional quantitative design was implemented using data collected in 2020, with respondent-driven sampling. Standardised measurements were the Ethnic Identity Scale (6 items), the Social Support Questionnaire-SF (6 items), the Patient Health Questionnaire (9 items), and the Generalised Anxiety Disorder scale (7 items). Covariates were age, gender, and economic satisfaction. A four-stage structural equation modelling (SEM) procedure with robust maximum likelihood estimation was employed using Mplus 8.0. Step 1: A measurement model was built and tested using confirmatory factor analysis (CFA). Step 2: Factor covariances were re-specified as direct effects in the SEM and covariates were added. The direct effects of (1) ethnic identity and social support on depression and anxiety and (2) social support on ethnic identity were tested. The indirect effect of ethnic identity was examined with the bootstrapping technique. Results. The CFA model showed satisfactory fit statistics: χ²(df)=869.69(608), p<.05; comparative fit index (CFI)/Tucker-Lewis index (TLI)=0.95/0.94; root mean square error of approximation (RMSEA)=0.05; standardized root mean squared residual (SRMR)=0.05. Ethnic identity is represented by two latent factors: ethnic identity-commitment and ethnic identity-exploration. Depression, anxiety, and social support are single-factor latent variables. For the SEM, the model fit statistics were: χ²(df)=779.26(527), p<.05; CFI/TLI=0.94/0.93; RMSEA=0.05; SRMR=0.05. Ethnic identity-commitment (b=-0.30) and social support (b=-0.33) had direct negative effects on depression, but ethnic identity-exploration did not. Ethnic identity-commitment (b=-0.43) and social support (b=-0.31) had direct negative effects on anxiety, while identity-exploration (b=0.24) demonstrated a positive effect. Social support had direct positive effects on ethnic identity-exploration (b=0.26) and ethnic identity-commitment (b=0.31). Mediation analysis demonstrated the indirect effect of ethnic identity-commitment linking social support and depression (b=0.22). Implications: The results underscore the role of social support in preventing depression via ethnic identity commitment among indigenous adults in Taiwan. Adopting a strength-based approach, mental health practitioners can mobilise indigenous peoples' commitment to their group to promote their well-being.
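
For readers without Mplus, a minimal sketch of a comparable mediation SEM can be written with the open-source semopy package; the study itself used Mplus 8.0, and the item names, file name, and the depression-only structural part below are illustrative placeholders, not the actual questionnaire variables or the full reported model.

```python
# Hedged sketch: lavaan-style SEM specification in semopy (placeholder variables).
import pandas as pd
import semopy

model_desc = """
# measurement model (hypothetical item names)
commitment  =~ ei1 + ei2 + ei3
exploration =~ ei4 + ei5 + ei6
support     =~ ss1 + ss2 + ss3 + ss4 + ss5 + ss6
depression  =~ phq1 + phq2 + phq3 + phq4 + phq5 + phq6 + phq7 + phq8 + phq9

# structural model: support -> ethnic identity -> depression, with covariates
commitment  ~ support
exploration ~ support
depression  ~ commitment + exploration + support + age + gender + econ_sat
"""

data = pd.read_csv("indigenous_survey.csv")   # placeholder file name
model = semopy.Model(model_desc)
model.fit(data)                               # ML estimation; robust SEs differ by software
print(model.inspect())                        # loadings, structural paths, and p-values
```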

Keywords: ethnic identity, indigenous population, mental health, perceived social support

Procedia PDF Downloads 103
68 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria

Authors: Tomola Obamuyi

Abstract:

The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. Some of the innovative services include Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS) terminals, internet (web) banking, Mobile Money payments (MMO), Real-Time Gross Settlement (RTGS), and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customers' satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks when different periods of analysis are considered. The study employed secondary data from 2009 to 2018, the period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique was used to forecast the relative variance attributable to each random innovation in the VAR, to examine the effect of a one-standard-deviation shock to one of the innovations on current and future values via impulse response functions, and to determine the causal relationships between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations and the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector, while the Range of Values (ROV) method was used to rank the contributions of the seven innovations to performance. The analysis was carried out for the medium term (five years) and the long run (ten years) of innovation in the sector. The impulse response functions derived from the VAR system indicated that the response of ROA to the values of cheque, NEFT, and POS transactions was positive and significant in the periods of analysis. The paper also confirmed, with the entropy and range-of-values methods, that in the long run both cheques and MMO performed best, with NEFT next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through cheques, National Electronic Fund Transfer, and Point of Sale, since these instruments have long-run effects on their performance. This will increase the confidence of the populace and encourage greater usage of these services; the banking sector will in turn experience better performance, which will improve the economy of the country.
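
A hedged sketch of the kind of VAR workflow described (impulse responses, variance decomposition, Granger causality) is shown below using statsmodels. The data file, series names, and lag settings are assumptions for illustration; the paper's own data are not reproduced here.

```python
# Illustrative VAR analysis of bank performance vs. payment-channel transaction values.
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder time series: ROA and transaction values by innovation channel.
df = pd.read_csv("bank_innovation_series.csv", index_col="period",
                 parse_dates=True)[["ROA", "CHEQUE", "NEFT", "POS", "MMO"]]

model = VAR(df)
res = model.fit(maxlags=4, ic="aic")       # lag order chosen by AIC (assumed cap of 4)

# Impulse response: effect of a one-std-dev shock in NEFT values on ROA.
irf = res.irf(periods=10)
irf.plot(impulse="NEFT", response="ROA")

# Forecast error variance decomposition and a Granger causality test.
res.fevd(10).summary()
print(res.test_causality("ROA", ["NEFT", "POS"], kind="f").summary())
```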

Keywords: bank performance, financial innovation, multi-criteria decision making, vector autoregression

Procedia PDF Downloads 120
67 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has become evident in recent years. Various demographic and technological changes lead consumers to modify their needs, not with regard to the services themselves but to the method of their application (ways of attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating, there is a growing interest in participating in events whose content is not a sports competition but a computer gaming challenge, i.e., e-sport. The literature in this area has so far focused on determining the number of e-sport fans, with elements of simple statistical description (mainly concerning demographic characteristics such as age, gender, and place of residence). Meanwhile, the development of the industry is influenced by a combination of many intertwined demographic, personality, and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper understanding of the determinants of the behavioral patterns underlying customers' selection of digitalized services, which, in the absence of large available data sets, can be achieved using econometric simulation, namely agent-based modeling. The cognitive aim of the study is to reveal the internal and external determinants of customers' behavioral patterns, taking into account various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (describing the characteristics of customers themselves and of their environment) was developed, which allowed the identification of a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The transition probabilities were estimated using the Method of Simulated Moments. The estimation of the agent-based model parameters and the sensitivity analysis reveal the crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarity with the rules of games and sports disciplines, active and passive participation history, and the individual perception of challenging activities. Environmental factors include the general reception of games, the number and recognition level of community builders, and the level of technological development of streaming and community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high; they decrease during the standardization phase and increase again in the full professionalization phase, when new customers perceive the required participation history as inaccessible. In this case, customers are prone to switch to new methods of service application, in the case of e-sport versus sports, to new content and more modern methods of its delivery. In a wider context, the findings of the paper support the idea of a life cycle of services with regard to the methods of their application, from 'traditional' to digitalized.
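
A minimal agent-based sketch of the mechanism described (heterogeneous agents, a three-phase entry barrier, logistic adoption) is given below. It is not the authors' model; all parameter names, weights, and phase cut-offs are assumptions chosen purely to illustrate how such a simulation is structured.

```python
# Illustrative ABM: heterogeneous agents decide each period whether to adopt e-sport spectating.
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 60                          # agents and simulated months (assumed)

# Heterogeneous agent traits and a time-varying environment (hypothetical [0, 1] scales).
familiarity = rng.beta(2, 5, N)          # familiarity with game/sport rules
history     = rng.beta(2, 8, N)          # active/passive participation history
platform    = np.linspace(0.2, 0.9, T)   # streaming-platform development over time

def entry_barrier(t):
    """Stylized three-phase barrier: high at initial interest, low during
    standardization, rising again under full professionalization."""
    if t < 20:
        return 0.6
    elif t < 40:
        return 0.3
    return 0.5

adopted = np.zeros(N, dtype=bool)
share = []
for t in range(T):
    utility = 0.5 * familiarity + 0.3 * history + 0.4 * platform[t] - entry_barrier(t)
    p_adopt = 1.0 / (1.0 + np.exp(-6.0 * utility))   # logistic adoption probability
    adopted |= rng.random(N) < p_adopt
    share.append(adopted.mean())

print("Adoption share at the end of each phase:", share[19], share[39], share[-1])
```

In a full study, the weights and barrier levels would not be fixed by hand but recovered by the Method of Simulated Moments, i.e., chosen so that simulated adoption moments match observed ones.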

Keywords: agent-based modeling, digitalized services, e-sport, spectators' motives

Procedia PDF Downloads 172
66 Water Ingress into Underground Mine Voids in the Central Rand Goldfields Area, South Africa: Fluid-Induced Seismicity

Authors: Artur Cichowicz

Abstract:

The last active mine in the Central Rand Goldfields area (50 km x 15 km) ceased operations in 2008. This resulted in the closure of the pumping stations, which had previously maintained the underground water level in the mining voids. As a direct consequence of the water being allowed to flood the mine voids, seismic activity has increased directly beneath the populated area of Johannesburg. Monitoring of seismicity in the area has been ongoing for over five years using a network of 17 strong ground motion sensors. The objective of the project is to improve strategies for mine closure. The evolution of the seismicity pattern was investigated in detail, with special attention given to seismic source parameters such as magnitude, scalar seismic moment, and static stress drop. Most events are located within the historical mine boundaries. The seismicity pattern shows a strong relationship between the presence of the mining void and high levels of seismicity; no seismicity migration patterns were observed outside the areas of old mining. Seven years after the pumping stopped, the evolution of the seismicity indicates that the area is not yet in equilibrium. The level of seismicity in the area appears not to be decreasing over time, since the number of strong events, with Mw magnitudes above 2, is still as high as it was when monitoring began over five years ago. The average rate of seismic deformation is 1.6x10^13 Nm/year. Constant seismic deformation was not observed over the last five years; the deviation from the average is of the order of 6x10^13 Nm/year, which is significant. The variation of the cumulative seismic moment indicates that a constant deformation rate model is not suitable. Over the most recent five-year period, the total cumulative seismic moment released in the Central Rand Basin was 9.0x10^14 Nm, equivalent to one earthquake of magnitude 3.9. This is significantly less than what was experienced during mining operations. Characterization of the seismicity triggered by the rising water level in the area can be achieved through the estimation of source parameters. Static stress drop heavily influences ground motion amplitude, which plays an important role in risk assessments of potential seismic hazards in inhabited areas. The observed static stress drops in this study varied from 0.05 MPa to 10 MPa, and it was found that large static stress drops could be associated with both small and large events. The temporal evolution of the inter-event time provides an understanding of the physical mechanisms of earthquake interaction: changes in the characteristics of the inter-event time are produced when a stress change is applied to a group of faults in the region. Results from this study indicate that the fluid-induced source has shorter inter-event times than a random distribution. This behaviour corresponds to temporal clustering, in which short recurrence times tend to occur close to each other, forming clusters of events.
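
A hedged sketch of how inter-event times and cumulative seismic moment can be computed from an event catalogue is shown below. The catalogue file and column names are assumptions, not the study's data; the moment-magnitude conversion uses the standard Hanks-Kanamori relation.

```python
# Illustrative inter-event time and cumulative moment calculation from a catalogue.
import numpy as np
import pandas as pd

cat = pd.read_csv("seismic_catalogue.csv", parse_dates=["origin_time"])  # placeholder
times = cat["origin_time"].sort_values()

# Inter-event times in hours.
dt = times.diff().dropna().dt.total_seconds() / 3600.0

# Coefficient of variation: ~1 for a random (Poisson-like) sequence,
# values above 1 indicate temporal clustering of events.
cv = dt.std() / dt.mean()
print(f"mean inter-event time: {dt.mean():.1f} h, CV: {cv:.2f}")

# Cumulative seismic moment from moment magnitudes (Hanks & Kanamori relation).
m0 = 10.0 ** (1.5 * cat["Mw"] + 9.1)      # scalar moment in N·m
print(f"cumulative moment: {m0.sum():.2e} N·m")
```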

Keywords: inter-event time, fluid induced seismicity, mine closure, spectral parameters of seismic source

Procedia PDF Downloads 285
65 Localized Recharge Modeling of a Coastal Aquifer from a Dam Reservoir (Korba, Tunisia)

Authors: Nejmeddine Ouhichi, Fethi Lachaal, Radhouane Hamdi, Olivier Grunberger

Abstract:

Located on the Cap Bon peninsula (Tunisia), the Lebna dam was built in 1987 to counter the salt intrusion taking place in the coastal aquifer of Korba. The original intention was to reduce coastal groundwater over-pumping by supplying surface water to a large irrigation system. An unanticipated beneficial effect was recorded: direct localized recharge of the coastal aquifer by leakage through the geological material of the southern bank of the lake. The hydrological balance of the reservoir gave an estimate of the annual leakage volume, but the dynamic processes and a sound quantification of the recharge inputs are still required to understand the localized effect of the recharge in terms of piezometry and quality. The present work focused on simulating the recharge process to confirm this hypothesis, to establish a sound quantification of the water supply to the coastal aquifer, and to extend it to multi-annual effects. A spatial frame of 30 km² was used for modeling. Intensive outcrop and geophysical surveys based on 68 electrical resistivity soundings were used to characterize the 3D geometry of the aquifer and the limit of the Plio-Quaternary geological material concerned by the underground flow paths. Permeabilities were determined using 17 pumping tests on wells and piezometers. Six seasonal piezometric surveys of 71 wells around the southern reservoir banks were performed during the 2019-2021 period. Eight monitoring boreholes with high-frequency (15 min) piezometric records were used to examine the dynamic aspects. Model boundary conditions were specified using the geophysical interpretations coupled with the piezometric maps. The dam-groundwater flow model was built using the Visual MODFLOW software. Firstly, a steady-state calibration based on the first piezometric map of February 2019 was established to estimate the permanent flow related to the different reservoir levels. Secondly, piezometric data for the 2019-2021 period were used for transient-state calibration and to confirm the robustness of the model. Preliminary results confirmed the temporal link between the reservoir level and the localized recharge flow, with a strong threshold effect for levels below 16 m a.s.l. The good agreement between the computed flow through the recharge cells on the southern banks and the hydrological budget of the reservoir opens the path to future simulation scenarios of the dilution plume driven by the localized recharge. The dam reservoir-groundwater flow model simulation results indicate a storage potential of up to 17 mm/year in existing wells under gravity-feed conditions during reservoir level rises over the three years of operation. The Lebna dam groundwater flow model thus characterizes a spatiotemporal relation between groundwater and surface water.
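
For readers who script their models rather than use Visual MODFLOW, a minimal MODFLOW-2005 setup of the same kind of leakage problem can be sketched with the open-source flopy package. Grid dimensions, hydraulic properties, the reservoir stage, and the conductance below are illustrative assumptions only, and running it requires the mf2005 executable on the PATH.

```python
# Hedged sketch: reservoir leakage into a coastal aquifer via river-type cells (flopy).
import numpy as np
import flopy

mf = flopy.modflow.Modflow("lebna_sketch", exe_name="mf2005")

nrow, ncol = 60, 100                          # assumed discretization of a ~30 km2 frame
dis = flopy.modflow.ModflowDis(mf, nlay=1, nrow=nrow, ncol=ncol,
                               delr=100.0, delc=50.0, top=30.0, botm=-50.0)

ibound = np.ones((1, nrow, ncol), dtype=int)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=5.0)
lpf = flopy.modflow.ModflowLpf(mf, hk=5.0, vka=0.5)   # placeholder values "from pumping tests"

# Reservoir represented as river cells along the southern bank; leakage is driven
# by the head difference between the reservoir stage and the aquifer.
stage, cond, rbot = 18.0, 50.0, 10.0
riv_cells = [[0, 0, j, stage, cond, rbot] for j in range(ncol)]
riv = flopy.modflow.ModflowRiv(mf, stress_period_data={0: riv_cells})

pcg = flopy.modflow.ModflowPcg(mf)
oc = flopy.modflow.ModflowOc(mf)

mf.write_input()
success, _ = mf.run_model(silent=True)
print("converged:", success)
```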

Keywords: leakage, MODFLOW, saltwater intrusion, surface water-groundwater interaction

Procedia PDF Downloads 138
64 Nature of Forest Fragmentation Owing to Human Population along Elevation Gradient in Different Countries in Hindu Kush Himalaya Mountains

Authors: Pulakesh Das, Mukunda Dev Behera, Manchiraju Sri Ramachandra Murthy

Abstract:

Large numbers of people living in and around the Hindu Kush Himalaya (HKH) region depend on this diverse mountainous region for ecosystem services. Following the global trend, the region is also experiencing rapid population growth and rising demand for timber and agricultural land. The eight countries sharing the HKH region have different forest resource utilization and conservation policies that exert varying pressures on the forest ecosystem. This has created variable spatial and altitudinal gradients in the rate of deforestation and the corresponding forest patch fragmentation. The quantitative relationship between fragmentation and demography along the elevation gradient has not previously been established for the HKH. The current study was carried out to relate the overall and country-specific patterns of landscape fragmentation along the altitudinal gradient to the demography of each of the sharing countries. We used tree canopy cover data derived from Landsat imagery to analyze the deforestation and afforestation rates and the corresponding landscape fragmentation observed during 2000-2010. The area-weighted mean radius of gyration (AMN radius of gyration) was computed owing to its advantage as a spatial indicator of fragmentation over non-spatial fragmentation indices. Using the subtraction method, the change in fragmentation was computed for 2000-2010. Using tree canopy cover data as a surrogate for forest cover, the highest forest loss was observed in Myanmar, followed by China, India, Bangladesh, Nepal, Pakistan, Bhutan, and Afghanistan. The fragmentation sequence, however, was different: the maximum fragmentation was observed in Myanmar, followed by India, China, Bangladesh, and Bhutan, whereas an increase in fragmentation was seen in Nepal, Pakistan, and Afghanistan, in that order. Using the SRTM-derived DEM, we observed a higher rate of fragmentation up to 2400 m, which corresponded with high human population in both 2000 and 2010. To derive the nature of fragmentation along the altitudinal gradient, the Statistica software was used, where a user-defined function was fitted by regression applying the Gauss-Newton estimation method with 50 iterations. We observed an overall logarithmic decrease in fragmentation change (area-weighted mean radius of gyration), forest cover loss, and population growth during 2000-2010 along the elevation gradient, with very high R² values (0.889, 0.895, and 0.944, respectively). The observed negative logarithmic function, with the major contribution in the lower elevation bands, suggests gap-filling afforestation at lower altitudes to enhance forest patch connectivity. Our findings on the pattern of forest fragmentation and human population across the elevation gradient in the HKH region have policy-level implications for the different nations and would help in characterizing hotspots of change. The availability of free satellite-derived data products on forest cover and DEMs, gridded demographic data, and geospatial tools enabled a quick evaluation of forest fragmentation vis-a-vis the human impact pattern along the elevation gradient in the HKH.
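
A hedged sketch of fitting a negative logarithmic response against elevation is shown below, using SciPy's nonlinear least squares in place of the Statistica user-defined regression described; the elevation bins and response values are made-up placeholders, not the study's data.

```python
# Illustrative fit of a negative logarithmic model: response = a - b * ln(elevation).
import numpy as np
from scipy.optimize import curve_fit

elev = np.array([400, 800, 1200, 1600, 2000, 2400, 2800, 3200, 3600, 4000], float)
frag_change = np.array([9.1, 7.4, 6.2, 5.1, 4.3, 3.4, 2.1, 1.5, 1.1, 0.8])  # placeholder

def log_model(x, a, b):
    # Large change at low elevations, tapering off towards higher bands.
    return a - b * np.log(x)

params, cov = curve_fit(log_model, elev, frag_change, p0=(20.0, 2.0))

pred = log_model(elev, *params)
ss_res = np.sum((frag_change - pred) ** 2)
ss_tot = np.sum((frag_change - frag_change.mean()) ** 2)
print("a, b =", np.round(params, 3), "R2 =", round(1 - ss_res / ss_tot, 3))
```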

Keywords: area-weighted mean radius of gyration, fragmentation, human impact, tree canopy cover

Procedia PDF Downloads 215
63 Strategies for the Optimization of Ground Resistance in Large Scale Foundations for Optimum Lightning Protection

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

In this paper, we discuss the standard improvements that can be made to reduce the earth resistance in difficult terrains for optimum lightning protection, the practical limitations involved, and how the modeling can be refined for accurate diagnostics and ground resistance minimization. Ground resistance minimization can be pursued via three different approaches: burying vertical electrodes connected in parallel, burying horizontal conductive plates or meshes, or modifying the terrain itself, either by changing the entire terrain material in a large volume or by adding earth-enhancing compounds. The use of vertical electrodes connected in parallel poses several practical limitations. In order to prevent loss of effectiveness, it is necessary to keep a minimum distance between electrodes, typically around five times the electrode length; otherwise, the overlapping of the local equipotential lines around each electrode reduces the efficiency of the configuration. The addition of parallel electrodes reduces the resistance and facilitates the measurement, but the basic parallel resistor formula of circuit theory will always underestimate the final resistance; numerical simulation of the equipotential lines around the electrodes overcomes this limitation. The resistance of a single electrode will always be proportional to the soil resistivity. Electrodes are usually installed with a backfilling material of high conductivity, which increases the effective diameter. However, the improvement is marginal, since the electrode diameter enters the estimation of the ground resistance only through a logarithmic function. Substances used for efficient chemical treatment must be environmentally friendly and must feature stability, high hygroscopicity, low corrosivity, and high electrical conductivity. A number of earth enhancement materials are commercially available. Many are composed of carbon-based materials or clays such as bentonite; these materials can also be used as backfilling to reduce the resistance of an electrode. Chemical treatment of soil has environmental issues: some products contain copper sulfate or other copper-based compounds, which may not be environmentally friendly. Carbon-based compounds are relatively inexpensive and have very low resistivities, but they also present corrosion issues; typically, the carbon can corrode and destroy a copper electrode in around five years, and these compounds also raise potential environmental concerns. Some earthing enhancement materials contain cement, which, after installation, acquires properties very close to those of concrete; this prevents the earthing enhancement material from leaching into the soil. After analyzing different configurations, we conclude that a buried conductive ring with vertical electrodes connected at regular intervals should be the optimum baseline solution for the grounding of a large structure installed on high-resistivity terrain. To show this, a practical example is presented in which we simulate the ground resistance of a conductive ring buried in terrain with a resistivity of the order of 1 kOhm·m.
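
The points about the logarithmic dependence on electrode diameter and the circuit-theory underestimate can be illustrated with the classical driven-rod formula. The sketch below uses assumed rod dimensions and the 1 kOhm·m resistivity mentioned in the example; it ignores mutual coupling, which is exactly why the naive parallel estimate is only a lower bound.

```python
# Illustrative single-rod earth resistance (Dwight/Sunde form) and the naive
# parallel-resistor estimate that underestimates the real combined resistance.
import math

def rod_resistance(rho, length, diameter):
    """Earth resistance of one vertical rod: rho in ohm·m, length/diameter in m."""
    return rho / (2 * math.pi * length) * (math.log(8 * length / diameter) - 1)

rho = 1000.0          # 1 kOhm·m terrain, as in the example discussed
L, d = 3.0, 0.016     # assumed 3 m rod, 16 mm diameter

r_single = rod_resistance(rho, L, d)
n_rods = 8
r_naive = r_single / n_rods   # circuit-theory lower bound only; coupling is ignored

print(f"single rod: {r_single:.1f} ohm")
print(f"naive parallel estimate for {n_rods} rods: {r_naive:.1f} ohm "
      "(the true value is higher because the equipotential regions overlap)")
```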

Keywords: grounding improvements, large scale scientific instrument, lightning risk assessment, lightning standards

Procedia PDF Downloads 139
62 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas such as astronomy, medical imaging, geophysics, or nondestructive evaluation, many problems related to the calibration, fitting, or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data, and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness, and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such inverse problems results, after discretization, in very ill-conditioned linear systems of equations; the condition number of the associated matrix can typically range from 10^9 to 10^18. This condition number acts as an amplifier of the uncertainties on the data during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas such as numerical optimization, where the use of interior point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to arbitrary complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results are derived on both the characterization of the type of generalized inverse obtained and the convergence. 2) Thanks to its properties, this matrix can be efficiently used in different solution schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case, consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we tested (Gauss, Moore-Penrose inverse, minimum residual, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters of the solution of a linear system (such as the extreme values, the mean, or the variance) prior to its resolution. Such an approach, if it proves efficient, would be a source of information on the solution of a system of linear equations.
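
For context, a textbook version of one of the row-action baselines the abstract compares against, the cyclic Kaczmarz iteration, is sketched below on an ill-conditioned Hilbert test matrix. This is not the authors' stochastic-preconditioning algorithm; matrix size and iteration count are arbitrary choices for illustration.

```python
# Illustrative cyclic Kaczmarz iteration for Ax = b on a Hilbert matrix.
import numpy as np

def kaczmarz(A, b, iterations=200, x0=None):
    """Project the current iterate onto each row's hyperplane in turn."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float)
    row_norms = np.einsum("ij,ij->i", A, A)          # squared row norms
    for k in range(iterations):
        i = k % m
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
x_true = np.ones(n)
b = A @ x_true

x = kaczmarz(A, b, iterations=20000)
print("condition number: %.2e" % np.linalg.cond(A))
print("relative error:   %.2e" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```

The slow error decay on such a matrix is precisely the behaviour that motivates better-conditioned preconditioners of the kind proposed in the paper.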

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 135
61 Design, Implementation and Evaluation of Health and Social Justice Trainings in Nigeria

Authors: Juliet Sorensen, Anna Maitland

Abstract:

Introduction: Characterized by lack of water and sanitation, food insecurity, and low access to hospitals and clinics, informal urban settlements in Lagos, Nigeria have very poor health outcomes. With little education and a general inability to demand basic rights, these communities are often disempowered and isolated from understanding, claiming, or owning their health needs. Utilizing community-based participatory research characterized by interdisciplinary, cross-cultural partnerships, evidence-based assessments, and both primary and secondary source research, a holistic health education and advocacy program was developed in Lagos to address health barriers for targeted communities. This includes a first-of-its-kind guide formulated to teach community-based health educators how to transmit health information to low-literacy Nigerian audiences while supporting behavior change models and social support mechanisms. This paper discusses the interdisciplinary contributions to developing a health education program while also examining the need for greater beneficiary ownership and implementation of health justice and access. Methods: In March 2016, an interdisciplinary group of medical, legal, and business graduate students and faculty from Northwestern University conducted a Health Needs Assessment (HNA) in Lagos with a partner and a local non-governmental organization. The HNA revealed that members of informal urban communities in Lagos lacked basic health literacy but desired to remedy this lacuna. Further, the HNA revealed that even where the government mandates specific services, many vulnerable populations are unable to access these services. The HNA concluded that a program focused on education, advocacy, and organizing around anatomy, maternal and sexual health, infectious disease and malaria, HIV/AIDS, emergency care, and water and sanitation would respond to stated needs while also building capacity in communities to address health barriers. Results: Based on the HNA, including both primary and secondary source research on integrated health education approaches, behavior change models, and responsive, adaptive material development, a holistic program was developed for the Lagos partners and first implemented in November 2016. This program trained community-nominated health educators in adult, low-literacy knowledge exchange approaches, utilizing information identified by communities as a priority. After a second training in March 2017, these educators will teach community-based groups and will support and facilitate behavior change models and peer-support methods around basic issues like hand washing and disease transmission. They will be supported by community paralegals who will help ensure that newly trained community groups can act on this education around access, such as receiving free vaccinations, maternal health care, and HIV/AIDS medicines. Materials will continue to be updated as needs and issues arise, with a focus on identifying best practices around health improvements that can be shared across these partner communities. Conclusion: These materials are the first of their kind and address a void in health information and understanding that is pervasive in informal urban Lagos communities. Initial feedback indicates high levels of commitment, interest, and investment by communities in these materials, largely because they are responsive, targeted, and build community capacity. This methodology is an important step toward dignity-based health justice solutions, albeit one still in the process of refinement.

Keywords: community health educators, interdisciplinary and cross cultural partnerships, health justice and access, Nigeria

Procedia PDF Downloads 248
60 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach

Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman

Abstract:

Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using the differenced normalized burnt ratio (dNBR) method and spectral unmixing methods. The study area has rugged terrain with sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major cause of fires in this region is anthropogenic: fires are set to obtain fresh leaves, to scare wild animals away from agricultural crops, to support grazing practices within reserved forests, and for cooking and other purposes. The fires caused by the above reasons affect a large area on the ground, necessitating precise estimation of the burnt area for management and policy making. In the present study, two approaches have been used for the burnt area analysis. The first approach uses the differenced normalized burnt ratio (dNBR) index, computed from burn ratio values generated from the Short-Wave Infrared (SWIR) and Near Infrared (NIR) bands of the Sentinel-2 image. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It was found that the dNBR produces good results in fire-affected areas with a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain, where the landscape is largely shaped by topographical variations, vegetation types, and tree density, the results may be strongly influenced by the effects of topography, complexity in tree composition, fuel load composition, and soil moisture. Hence, burnt area assessment under such variations may not be effectively carried out using the dNBR approach commonly followed over large areas. Another approach attempted in the present study therefore utilizes a spectral unmixing method in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach utilizing Sentinel-2 bands. The training and testing data were generated from the Sentinel-2 data and the national field inventory, and were further used for generating outputs with machine learning tools. The analysis of the results indicates that the fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve noise in the data and can assign each individual pixel to the correct burnt/unburnt class.
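
A hedged sketch of the dNBR step is given below. File paths and the burn threshold are placeholders; it assumes the Sentinel-2 NIR (B08) and SWIR (B12) bands have been resampled to a common grid and exported as single-band GeoTIFFs.

```python
# Illustrative NBR / dNBR computation from pre- and post-fire Sentinel-2 scenes.
import numpy as np
import rasterio

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

def nbr(nir, swir):
    # Normalized Burn Ratio; a small epsilon avoids division by zero.
    return (nir - swir) / (nir + swir + 1e-6)

nbr_pre  = nbr(read_band("prefire_B08.tif"),  read_band("prefire_B12.tif"))
nbr_post = nbr(read_band("postfire_B08.tif"), read_band("postfire_B12.tif"))

dnbr = nbr_pre - nbr_post            # higher dNBR indicates more severe burning
burnt_mask = dnbr > 0.27             # illustrative moderate-severity threshold

print("burnt pixels:", int(burnt_mask.sum()))
```

The spectral-unmixing/neural-network alternative replaces this fixed threshold with a per-pixel classifier trained on field inventory samples, which is what allows it to cope with topographic and compositional variation.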

Keywords: categorical data, log linear modeling, neural network, shifting cultivation

Procedia PDF Downloads 54
59 A Shift in Approach from Cereal Based Diet to Dietary Diversity in India: A Case Study of Aligarh District

Authors: Abha Gupta, Deepak K. Mishra

Abstract:

The food security debate in India has centred on the availability and accessibility of cereals, which are regarded as the only food group needed to check hunger and improve nutrition. The significance of fruits, vegetables, meat, and other food products has largely been neglected, even though they provide essential nutrients to the body. There is a need to shift the emphasis from a cereal-based approach to a more diverse diet, so that the aim of food security changes from merely reducing hunger to promoting overall health. This paper analyses how far dietary diversity has been achieved across different socio-economic groups in India. For this purpose, the paper sets out to determine (a) the percentage share of different food groups in total food expenditure and consumption by background characteristics, (b) the source of and preference for food items, and (c) the diversity of diet across socio-economic groups. A cross-sectional survey covering 304 households selected through proportional stratified random sampling was conducted in six villages of Aligarh district, Uttar Pradesh, India. Information on the amount of food consumed, the source of consumption, and expenditure on food (74 food items grouped into 10 major food groups) was collected with a recall period of seven days. Per capita per day food consumption/expenditure was calculated by dividing weekly household consumption/expenditure by the household size and by seven. The food variety score was estimated by assigning a value of 0 to food groups/items not eaten and 1 to those eaten by the household in the last seven days, and summing across all groups/items. Diversity of diet was computed using the Herfindahl-Hirschman index. The findings show that the cereal, milk, and roots and tubers food groups contribute a major share of total consumption/expenditure. Consumption of these food groups varies across socio-economic groups, whereas consumption of fruits, vegetables, meat, and other foods remains low and roughly uniform. The estimation of dietary diversity shows a high concentration of the diet, driven by the higher consumption of cereals, milk, and root and tuber products, and dietary diversity varies only slightly across background groups. Muslim, Scheduled Caste, small-farmer, lower-income, food-insecure, below-poverty-line, and labour households show a higher concentration of diet compared with their counterpart groups. These groups also show a lower mean number of food items consumed in a week, owing to economic constraints and the resulting lower access to more expensive food items. The results advocate a shift from a cereal-based diet to dietary diversity, which includes not only cereals and milk products but also nutrient-rich items such as fruits, vegetables, and meat. Integrating a dietary diversity approach into the country's food security programmes would help to achieve nutrition security, as hidden hunger is widespread among the Indian population.
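
A minimal sketch of the two diversity measures used, the food variety score and the Herfindahl-Hirschman concentration of the diet, is shown below. The food groups, expenditure shares, and consumption flags are illustrative placeholders, not the survey values.

```python
# Illustrative food variety score and Herfindahl-Hirschman index from 7-day recall data.

# Share of each food group in a household's total weekly food expenditure (assumed).
shares = {
    "cereals": 0.42, "milk": 0.20, "roots_tubers": 0.12, "pulses": 0.08,
    "vegetables": 0.07, "fruits": 0.04, "meat_fish_eggs": 0.04, "others": 0.03,
}

# Herfindahl-Hirschman index: sum of squared shares, ranging from 1/n
# (perfectly even diet) to 1 (all expenditure on a single group).
hhi = sum(s ** 2 for s in shares.values())

# Food variety score: 1 for each group/item consumed at least once in the last 7 days.
consumed = {"cereals": 1, "milk": 1, "roots_tubers": 1, "pulses": 1,
            "vegetables": 1, "fruits": 0, "meat_fish_eggs": 0, "others": 1}
fvs = sum(consumed.values())

print(f"HHI = {hhi:.3f} (minimum possible {1/len(shares):.3f}), food variety score = {fvs}")
```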

Keywords: dietary diversity, food Security, India, socio-economic groups

Procedia PDF Downloads 340