Search results for: traceable decisions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1783

13 Farm-Women in Technology Transfer to Foster the Capacity Building of Agriculture: A Forecast from a Drought-Prone Rural Setting in India

Authors: Pradipta Chandra, Titas Bhattacharjee, Bhaskar Bhowmick

Abstract:

The foundation of India's economy is primarily agriculture, yet agriculture remains the most neglected sector in rural settings. Household women, in particular, participate in agriculture with high involvement. However, because of their lower levels of education, women have limited access to financial decisions, land ownership and technology, even though they play a vital role at the individual family level. There are limited studies on institution-wise training barriers with a focus on gender disparity. The main purpose of this paper is to identify the factors underlying institution-wise training (non-formal education) barriers in technology transfer, with a focus on the participation of rural women in agriculture. For this study, primary and secondary data were collected following both qualitative and quantitative approaches. Qualitative data were collected through several field visits in the areas adjacent to Seva-Bharati, Seva Bharati Krishi Vigyan Kendra, using semi-structured questionnaires. Detailed field surveys were then conducted with close-ended questionnaires scored on a seven-point Likert scale. The sample size was 162. Data collection focused on including women, although some bias from respondents and the interviewer may exist due to dissimilarities in observation, views, etc. In addition, the heterogeneity of the sample is not very high, although female participation is more than fifty percent. Data were analyzed using the Exploratory Factor Analysis (EFA) technique, which yielded three significant factors of training barriers in technology adoption by farmers: (a) failure of technology transfer training (TTT) comprehension, meaning that the technology takers, i.e., farmers, cannot understand the technology because of language barriers or the way experts/trainers demonstrate it; (b) failure of TTT customization, meaning that training is not tailored to the individual farmer, gender, crop or season; (c) failure of TTT generalization, meaning that the absence of common training methods across trainers for specific crops is most prominent at the community level. The central finding is that the technology transfer training method cannot fulfill the needs of farmers in an economically challenged area. The impact of such a study is very high in the dry lateritic, resource-constrained area of Jangalmahal in Paschim Medinipur district, West Bengal, and in areas with a similar socio-economy. At the policy level, this research may help in framing digital agriculture through implementation of appropriate information technology for the farming community, effective and timely government investment with proper selection of beneficiaries, formation of farmers' clubs/farm science clubs, etc. The most important research implication of this study lies in its contribution to the knowledge diffusion mechanism of the agricultural sector in India. Farmers may overcome these barriers and achieve higher productivity through the adoption of modern farm practices. Corporates may take interest in the agro-sector through investment under corporate social responsibility (CSR). The research will help in framing public and industry policy and land-use patterns. Consequently, a large mass of rural farm-women will be empowered and the farmer community will benefit.
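
As a rough illustration of the analysis step described above, the following minimal sketch runs an exploratory factor analysis on seven-point Likert responses with Python's factor_analyzer package; the file name, item columns and factor labels are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of an exploratory factor analysis (EFA) on seven-point
# Likert-scale survey responses. The CSV file, item columns and the
# three-factor labels mirror the abstract's outcome but are illustrative only.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

# survey.csv is a hypothetical file: one row per respondent (n = 162),
# one column per Likert-scale item on training barriers
items = pd.read_csv("survey.csv")

# Check sampling adequacy before factoring (KMO > 0.6 is commonly expected)
_, kmo_overall = calculate_kmo(items)
print(f"Overall KMO: {kmo_overall:.2f}")

# Extract three factors with varimax rotation, as in classic EFA practice
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(
    fa.loadings_, index=items.columns,
    columns=["TTT_comprehension", "TTT_customization", "TTT_generalization"])
print(loadings.round(2))           # items grouped by their dominant loading
print(fa.get_factor_variance())    # variance explained by each factor
```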

Keywords: dry lateritic zone, institutional barriers, technology transfer in India, farm-women participation

Procedia PDF Downloads 372
12 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness ‘sins’ must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be amplified. A hetero-personalized identity can be imposed on the affected individual(s). Also, autonomous CWA sometimes lacks transparency when black box models are used. However, for this intended purpose, human analysts ‘on-the-loop’ might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA) of 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of EU Directive 2023/2225 of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA’s Art. 2. Consequently, engineering the law of consumers’ CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner. From the analysis, one can see that a vital component of this software is the XAI layer. It acts as a transparent curtain covering the AI’s decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals, based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, when put into service, credit analysts no longer exert full control over the data-driven entities programmers have given ‘birth’ to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be inherently vilified. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
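
The XAI layer described above leans on LIME and SHAP; the sketch below shows, under assumed placeholder data and a generic scikit-learn classifier (not the proposed MAS framework or its Neural Network), how those two explanation agents are typically invoked.

```python
# Minimal sketch of post-hoc explanation agents of the kind described for the
# XAI layer: SHAP for per-feature contributions and LIME for a local surrogate
# explanation. The credit model, data and feature names are placeholders.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "age", "late_payments"]   # hypothetical
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# SHAP: contribution of each feature to an individual creditworthiness score
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:1])
print(dict(zip(feature_names, np.ravel(shap_values)[:4])))

# LIME: local linear surrogate fitted around the same applicant
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["reject", "accept"],
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print(lime_exp.as_list())
```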

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 33
11 Evaluating Forecasting Strategies for Day-Ahead Electricity Prices: Insights From the Russia-Ukraine Crisis

Authors: Alexandra Papagianni, George Filis, Panagiotis Papadopoulos

Abstract:

The liberalization of the energy market and the increasing penetration of fluctuating renewables (e.g., wind and solar power) have heightened the importance of the spot market for ensuring efficient electricity supply. This is further emphasized by the EU’s goal of achieving net-zero emissions by 2050. The day-ahead market (DAM) plays a key role in European energy trading, accounting for 80-90% of spot transactions and providing critical insights for next-day pricing. Therefore, short-term electricity price forecasting (EPF) within the DAM is crucial for market participants to make informed decisions and improve their market positioning. Existing literature highlights out-of-sample performance as a key factor in assessing EPF accuracy, with influencing factors such as predictors, forecast horizon, model selection, and strategy. Several studies indicate that electricity demand is a primary price determinant, while renewable energy sources (RES) like wind and solar significantly impact price dynamics, often lowering prices. Additionally, incorporating data from neighboring countries, due to market coupling, further improves forecast accuracy. Most studies predict up to 24 steps ahead using hourly data, while some extend forecasts using higher-frequency data (e.g., half-hourly or quarter-hourly). Short-term EPF methods fall into two main categories: statistical and computational intelligence (CI) methods, with hybrid models combining both. While many studies use advanced statistical methods, particularly through different versions of traditional AR-type models, others apply computational techniques such as artificial neural networks (ANNs) and support vector machines (SVMs). Recent research combines multiple methods to enhance forecasting performance. Despite extensive research on EPF accuracy, a gap remains in understanding how forecasting strategy affects prediction outcomes. While iterated strategies are commonly used, they are often chosen without justification. This paper contributes by examining whether the choice of forecasting strategy impacts the quality of day-ahead price predictions, especially for multi-step forecasts. We evaluate both iterated and direct methods, exploring alternative ways of conducting iterated forecasts on benchmark and state-of-the-art forecasting frameworks. The goal is to assess whether these factors should be considered by end-users to improve forecast quality. We focus on the Greek DAM using data from July 1, 2021, to March 31, 2022. This period is chosen due to significant price volatility in Greece, driven by its dependence on natural gas and limited interconnection capacity with larger European grids. The analysis covers two phases: pre-conflict (January 1, 2022, to February 23, 2022) and post-conflict (February 24, 2022, to March 31, 2022), following the Russia-Ukraine conflict that initiated an energy crisis. We use the mean absolute percentage error (MAPE) and symmetric mean absolute percentage error (sMAPE) for evaluation, as well as the Direction of Change (DoC) measure to assess the accuracy of price movement predictions. Our findings suggest that forecasters need to evaluate all strategies across different horizons and models. Different strategies may be required for different horizons to optimize both accuracy and directional predictions, ensuring more reliable forecasts.
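
To make the two strategies and the evaluation measures concrete, the following minimal sketch contrasts an iterated (recursive) and a direct multi-step forecast on a simple lag-based benchmark and scores them with MAPE, sMAPE and a direction-of-change measure; the synthetic series and linear model are illustrative assumptions, not the paper's frameworks or the Greek DAM data.

```python
# Minimal sketch of iterated vs. direct multi-step forecasting with the
# evaluation measures named in the abstract. Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

def mape(actual, pred):
    return np.mean(np.abs((actual - pred) / actual)) * 100

def smape(actual, pred):
    return np.mean(2 * np.abs(pred - actual) / (np.abs(actual) + np.abs(pred))) * 100

def direction_of_change(actual, pred, last_obs):
    # share of horizons where the predicted price movement has the right sign
    return np.mean(np.sign(np.diff(np.r_[last_obs, pred])) ==
                   np.sign(np.diff(np.r_[last_obs, actual])))

def make_lags(series, n_lags):
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

prices = 100 + np.cumsum(np.random.default_rng(1).normal(size=500))  # synthetic hourly prices
n_lags, horizon = 24, 24
X, y = make_lags(prices[:-horizon], n_lags)
last_window = prices[-horizon - n_lags:-horizon]

# Iterated (recursive) strategy: one 1-step model, fed back on its own output
one_step = LinearRegression().fit(X, y)
window = list(last_window)
iterated = []
for _ in range(horizon):
    nxt = one_step.predict([window[-n_lags:]])[0]
    iterated.append(nxt)
    window.append(nxt)

# Direct strategy: a separate model trained for each forecast horizon h
direct = []
for h in range(1, horizon + 1):
    Xh, yh = X[:len(X) - h + 1], y[h - 1:]
    direct.append(LinearRegression().fit(Xh, yh).predict([last_window])[0])

actual = prices[-horizon:]
for name, pred in [("iterated", np.array(iterated)), ("direct", np.array(direct))]:
    print(name, round(mape(actual, pred), 2), round(smape(actual, pred), 2),
          round(direction_of_change(actual, pred, prices[-horizon - 1]), 2))
```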

Keywords: short-term electricity price forecast, forecast strategies, forecast horizons, recursive strategy, direct strategy

Procedia PDF Downloads 4
10 Research Project of National Interest (PRIN-PNRR) DIVAS: Developing Methods to Assess Tree Vitality after a Wildfire through Analyses of Cambium Sugar Metabolism

Authors: Claudia Cocozza, Niccolò Frassinelli, Enrico Marchi, Cristiano Foderi, Alessandro Bizzarri, Margherita Paladini, Maria Laura Traversi, Eleftherious Touloupakis, Alessio Giovannelli

Abstract:

The development of tools to quickly identify the fate of injured trees after stress is highly relevant when biodiversity restoration of damaged sites is based on nature-based solutions. In this context, an approach to assess irreversible physiological damage within trees could help support management planning decisions for perturbed sites, in order to restore biodiversity, ensure the safety of the environment and understand functional adjustments of the ecosystems. Tree vitality can be estimated through a series of physiological proxies, such as cambium activity and the amounts of starch and soluble sugars in C-sinks, whilst the accumulation of ethanol within the cambial cells and phloem is considered an alert of cell death. However, their determination requires time-consuming laboratory protocols, which makes the approach unfeasible as a practical option in the field. The project aims to develop biosensors to assess the concentration of soluble sugars and ethanol in stem tissues. Soluble sugars and ethanol concentrations will be used to characterize injured trees and to discriminate compromised from recovering trees directly in the forest. To reach this goal, we selected study sites subjected to prescribed fires or recent wildfires as experimental set-ups. Indeed, in Mediterranean countries, forest fire is a recurrent event that must be considered a central component of regional and global strategies in forest management and biodiversity restoration programs. A biosensor will be developed through a multistep process covering target analyte characterization, bioreceptor selection and, finally, calibration/testing of the sensor. To validate biosensor signals, soluble sugars and ethanol will be quantified by HPLC and GC using synthetic media (in the lab) and phloem sap (in the field), whilst cambium vitality will be assessed by anatomical observations. On burnt trees, stem growth will be monitored by dendrometers and/or estimated by tree-ring analyses, whilst the tree response to past fire events will be assessed by isotopic discrimination. Moreover, the fire characterization and the visual assessment procedure will be used to assign burnt trees to a vitality class. At the end of the project, a well-defined procedure combining biosensor signal and visual assessment will be produced and applied to a study case. The project outcomes and the results obtained will be properly packaged to reach, engage and address the needs of the final users, and widely shared with relevant stakeholders involved in the optimal use of biosensors and in the management of post-fire areas. This project was funded by the National Recovery and Resilience Plan (NRRP), Mission 4, Component C2, Investment 1.1 - Call for tender No. 1409 of 14 September 2022 – ‘Progetti di Ricerca di Rilevante interesse Nazionale – PRIN’ of the Italian Ministry of University and Research, funded by the European Union – NextGenerationEU; Grant N° P2022Z5742, CUP B53D23023780001.
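
As a hedged illustration of the calibration/testing step mentioned above, the sketch below fits a linear calibration curve mapping raw biosensor signal to sugar concentration against HPLC-quantified standards; all numbers are invented placeholders, not project data.

```python
# Minimal sketch of a biosensor calibration step: regress raw sensor signal
# against HPLC-quantified reference concentrations, then invert the curve to
# estimate an unknown sample. All values are hypothetical placeholders.
import numpy as np
from scipy import stats

# Reference concentrations (e.g., mM sucrose in synthetic media, from HPLC)
reference_mM = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
# Corresponding raw biosensor readings (arbitrary units)
signal_au = np.array([3.1, 6.0, 12.4, 30.2, 61.5, 118.9])

# Linear calibration: signal = slope * concentration + intercept
fit = stats.linregress(reference_mM, signal_au)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, R^2={fit.rvalue**2:.3f}")

# Invert the curve to estimate an unknown phloem-sap sample from its signal
unknown_signal = 45.0
estimated_mM = (unknown_signal - fit.intercept) / fit.slope
print(f"Estimated concentration: {estimated_mM:.2f} mM")
```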

Keywords: phloem, scorched crown, conifers, prescribed burning, biosensors

Procedia PDF Downloads 15
9 Analysis of Composite Health Risk Indicators Built at a Regional Scale and Fine Resolution to Detect Hotspot Areas

Authors: Julien Caudeville, Muriel Ismert

Abstract:

Analyzing the relationship between environment and health has become a major preoccupation for public health, as evidenced by the emergence of the French national plans for health and environment. These plans have identified the following two priorities: (1) to identify and manage geographic areas where hotspot exposures are suspected to generate a potential hazard to human health; (2) to reduce exposure inequalities. At the regional scale and the fine resolution required for exposure outcomes, environmental monitoring networks are not sufficient to characterize the multidimensionality of the exposure concept. In an attempt to increase the representativeness of spatial exposure assessment approaches, composite risk indicators can be built using additional available databases and theoretical frameworks for combining risk factors. To achieve those objectives, combining data processing and transfer modeling with a spatial approach is a fundamental prerequisite that requires first overcoming several scientific limitations: defining the variables of interest and the indicators that can be built to associate and describe the global source-effect chain; linking and processing data from different sources and different spatial supports; and developing adapted methods to improve spatial data representativeness and resolution. A GIS-based modeling platform for quantifying human exposure to chemical substances (PLAINE: environmental inequalities analysis platform) was used to build health risk indicators within the Lorraine region (France). Those indicators combined chemical substances (in soil, air and water) and noise risk factors. Tools have been developed using modeling, spatial analysis and geostatistical methods to build and discretize the variables of interest from different supports and resolutions onto a 1 km² regular grid covering the Lorraine region. For example, surface soil concentrations were estimated by developing a kriging method able to integrate surface and point spatial supports. Then, an exposure model developed by INERIS was used to assess the transfer from soil to individual exposure through ingestion pathways. Distance from polluted soil sites was used to build a proxy for contaminated sites. The air indicator combined modeled concentrations and estimated emissions to take into account 30 pollutants in the analysis. For water, drinking water concentrations were compared to drinking water standards to build a score spatialized using a map of drinking water distribution units. The Lden (day-evening-night) indicator was used to map noise around road infrastructures. The different risk factors were aggregated using several methodologies in order to discuss the impact of weighting and aggregation procedures on the effectiveness of risk maps for decisions safeguarding citizens' health. The results permit identification of pollutant sources, determinants of exposure and potential hotspot areas. A diagnostic tool was developed for stakeholders to visualize and analyze the composite indicators in an operational and accurate manner. The designed support system will be used in many applications and contexts: (1) mapping environmental disparities throughout the Lorraine region; (2) identifying vulnerable populations and determinants of exposure to set priorities and targets for pollution prevention, regulation and remediation; (3) providing an exposure database to quantify relationships between environmental indicators and cancer mortality data provided by the French Regional Health Observatories.
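
A minimal sketch of the aggregation idea discussed above (normalising per-cell factor scores and combining them with explicit weights on the 1 km² grid) is given below; the file, column names and equal weights are assumptions for illustration and do not reproduce the PLAINE platform.

```python
# Minimal sketch of building a composite indicator per 1 km^2 grid cell from
# separately estimated risk layers (soil, air, water, noise). Column names,
# weights and scaling are illustrative placeholders.
import pandas as pd

# One row per grid cell, one column per risk factor score (hypothetical file)
grid = pd.read_csv("lorraine_grid_1km.csv",
                   usecols=["cell_id", "soil_score", "air_score",
                            "water_score", "noise_score"])
factors = ["soil_score", "air_score", "water_score", "noise_score"]

# Min-max normalisation so every factor is on a comparable 0-1 scale
normed = (grid[factors] - grid[factors].min()) / (grid[factors].max() - grid[factors].min())

# Weighted linear aggregation; equal weights shown, but the choice of weights
# is exactly what the study discusses as affecting the resulting risk maps
weights = {"soil_score": 0.25, "air_score": 0.25, "water_score": 0.25, "noise_score": 0.25}
grid["composite_risk"] = sum(normed[f] * w for f, w in weights.items())

# Flag candidate hotspot cells, e.g. the top 5% of composite scores
threshold = grid["composite_risk"].quantile(0.95)
hotspots = grid[grid["composite_risk"] >= threshold]
print(hotspots[["cell_id", "composite_risk"]].head())
```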

Keywords: health risk, environment, composite indicator, hotspot areas

Procedia PDF Downloads 247
8 Improving Diagnostic Accuracy of Ankle Syndesmosis Injuries: A Comparison of Traditional Radiographic Measurements and Computed Tomography-Based Measurements

Authors: Yasar Samet Gokceoglu, Ayse Nur Incesu, Furkan Okatar, Berk Nimetoglu, Serkan Bayram, Turgut Akgul

Abstract:

Ankle syndesmosis injuries pose a significant challenge in orthopedic practice due to their potential for prolonged recovery and chronic ankle dysfunction. Accurate diagnosis and management of these injuries are essential for achieving optimal patient outcomes. The use of radiological methods, such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), plays a vital role in the accurate diagnosis of syndesmosis injuries in the context of ankle fractures. Treatment options for ankle syndesmosis injuries vary, with surgical interventions such as screw fixation and suture-button implantation being commonly employed. The choice of treatment is influenced by the severity of the injury and the presence of associated fractures. Additionally, the mechanism of injury, such as pure syndesmosis injury or specific fracture types, can impact the stability and management of syndesmosis injuries. Ankle fractures with syndesmosis injury present a complex clinical scenario, requiring accurate diagnosis, appropriate reduction, and tailored management strategies. The interplay between the mechanism of injury, associated fractures, and treatment modalities significantly influences the outcomes of these challenging injuries. The long-term outcomes and patient satisfaction following ankle fractures with syndesmosis injury are crucial considerations in the field of orthopedics. Patient-reported outcome measures, such as the Foot and Ankle Outcome Score (FAOS), provide essential information about functional recovery and quality of life after these injuries. When diagnosing syndesmosis injuries, standard measurements, such as the medial clear space, tibiofibular overlap, tibiofibular clear space, anterior tibiofibular ratio (ATFR), and the anterior-posterior tibiofibular ratio (APTF), are assessed through radiographs and computed tomography (CT) scans. These parameters are critical in evaluating the presence and severity of syndesmosis injuries, enabling clinicians to choose the most appropriate treatment approach. Despite advancements in diagnostic imaging, challenges remain in accurately diagnosing and treating ankle syndesmosis injuries. Traditional diagnostic parameters, while beneficial, may not capture the full extent of the injury or provide sufficient information to guide therapeutic decisions. This gap highlights the need for exploring additional diagnostic parameters that could enhance the accuracy of syndesmosis injury diagnoses and inform treatment strategies more effectively. The primary goal of this research is to evaluate the usefulness of traditional radiographic measurements in comparison to new CT-based measurements for diagnosing ankle syndesmosis injuries. Specifically, this study aims to assess the accuracy of conventional parameters, including medial clear space, tibiofibular overlap, tibiofibular clear space, ATFR, and APTF, in contrast with the recently proposed CT-based measurements such as the delta and gamma angles. Moreover, the study intends to explore the relationship between these diagnostic parameters and functional outcomes, as measured by the Foot and Ankle Outcome Score (FAOS). Establishing a correlation between specific diagnostic measurements and FAOS scores will enable us to identify the most reliable predictors of functional recovery following syndesmosis injuries. 
This comparative analysis will provide valuable insights into the accuracy and dependability of CT-based measurements in diagnosing ankle syndesmosis injuries and their potential impact on predicting patient outcomes. The results of this study could greatly influence clinical practices by refining diagnostic criteria and optimizing treatment planning for patients with ankle syndesmosis injuries.
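
As an illustration of the planned correlation analysis, the sketch below relates each radiographic and CT-based measurement to the FAOS score with a Spearman rank correlation; the dataset and column names are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of relating each diagnostic measurement to the FAOS outcome
# with a rank correlation, as the study proposes. The CSV file and column
# names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("syndesmosis_measurements.csv")   # hypothetical file
measurements = ["medial_clear_space", "tibiofibular_overlap", "tibiofibular_clear_space",
                "atfr", "aptf", "delta_angle", "gamma_angle"]

for m in measurements:
    rho, p = spearmanr(df[m], df["faos_total"], nan_policy="omit")
    print(f"{m:>25}: rho={rho:+.2f}, p={p:.3f}")
```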

Keywords: ankle syndesmosis injury, diagnostic accuracy, computed tomography, radiographic measurements, tibiofibular syndesmosis distance

Procedia PDF Downloads 72
7 The Use of Rule-Based Cellular Automata to Track and Forecast the Dispersal of Classical Biocontrol Agents at Scale, with an Application to the Fopius arisanus Fruit Fly Parasitoid

Authors: Agboka Komi Mensah, John Odindi, Elfatih M. Abdel-Rahman, Onisimo Mutanga, Henri Ez Tonnang

Abstract:

Ecosystems are networks of organisms and populations that form a community of various species interacting within their habitats. Such habitats are defined by abiotic and biotic conditions that establish the initial limits to a population's growth, development, and reproduction. The habitat’s conditions explain the context in which species interact to access resources such as food, water, space, shelter, and mates, allowing for feeding, dispersal, and reproduction. Dispersal is an essential life-history strategy that affects gene flow, resource competition, population dynamics, and species distributions. Despite the importance of dispersal in population dynamics and survival, understanding the mechanisms underpinning the dispersal of organisms remains challenging. For instance, when an organism moves into an ecosystem for survival and resource competition, its progression is highly influenced by factors such as its physiological state, climatic variables and its ability to evade predation. Therefore, greater spatial detail is necessary to understand organism dispersal dynamics. Organism dispersal can be addressed using empirical and mechanistic modelling approaches, with the adopted approach depending on the study's purpose. Cellular automata (CA) are an example of an approach that has been successfully used in biological studies to analyze the dispersal of living organisms. A cellular automaton can be briefly described as a lattice of cells, each of which may be occupied by an individual, that evolves according to a set of rules based on the states of neighbouring cells. However, for modelling the dispersal of individual organisms at the landscape scale, we lack user-friendly tools that do not require expertise in mathematical modelling and computing, such as a visual analytics framework for tracking and forecasting the dispersal behaviour of organisms. The term "visual analytics" (VA) describes a semiautomated approach to electronic data processing that is guided by users who can interact with data via an interface. Essentially, VA converts large amounts of quantitative or qualitative data into graphical formats that can be customized based on the operator's needs. Additionally, this approach can be used to enhance the ability of users from various backgrounds to understand data, communicate results, and disseminate information across a wide range of disciplines. To support effective analysis of the dispersal of organisms at the landscape scale, we therefore designed Pydisp, a free visual data analytics tool for spatiotemporal dispersal modeling built in Python. Its user interface allows users to perform a quick and interactive spatiotemporal analysis of species dispersal using bioecological and climatic data. Pydisp enables reuse and upgrading through the use of simple principles such as fuzzy cellular automata algorithms. The potential of dispersal modeling is demonstrated in a case study by predicting the dispersal of Fopius arisanus (Sonan), an endoparasitoid used to control Bactrocera dorsalis (Hendel) (Diptera: Tephritidae) in Kenya. The results obtained from our example clearly illustrate the parasitoid's dispersal process at the landscape level and confirm that dynamic processes in an agroecosystem are better understood when modelled using mechanistic approaches. Furthermore, as demonstrated in the example, the software is highly effective in portraying the dispersal of organisms despite the unavailability of detailed data on the species' dispersal mechanisms.
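
The rule-based cellular automaton idea can be illustrated with the minimal sketch below, in which an occupied cell colonises neighbouring cells with a probability scaled by habitat suitability; this is a crude stand-in for Pydisp's fuzzy CA rules, with illustrative parameters only.

```python
# Minimal rule-based cellular automaton for organism dispersal on a gridded
# landscape: an occupied cell can colonise each of its eight neighbours with a
# probability scaled by that neighbour's habitat suitability. Parameters and
# the random suitability layer are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
size, steps, base_p = 50, 20, 0.3

suitability = rng.random((size, size))          # 0-1 habitat suitability layer
occupied = np.zeros((size, size), dtype=bool)
occupied[size // 2, size // 2] = True           # single release point

for _ in range(steps):
    new = occupied.copy()
    occ_rows, occ_cols = np.nonzero(occupied)
    for r, c in zip(occ_rows, occ_cols):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < size and 0 <= nc < size and not occupied[nr, nc]:
                    # colonisation probability rises with habitat suitability
                    if rng.random() < base_p * suitability[nr, nc]:
                        new[nr, nc] = True
    occupied = new
    print(f"occupied cells: {occupied.sum()}")
```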

Keywords: cellular automata, fuzzy logic, landscape, spatiotemporal

Procedia PDF Downloads 77
6 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the associated mortality rate has been about 2.22% over the past three years. Therefore, this study aims to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This retrospective study used an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment <4 points, an ICU stay of less than 24 hours, or no CAM-ICU evaluation. Each CAM-ICU delirium assessment, performed every 8 hours within 30 days of hospitalization, was regarded as an event, and the cumulative data from ICU admission to the prediction time point were extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, hours of ICU stay, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, restraint use, and sedative and hypnotic drugs. After feature data cleaning, processing and KNN imputation, a total of 54,595 case events were retained for machine learning analysis. Events from May 1 to November 30, 2022, were used as model training data, with 80% forming the training set and 20% the internal validation set, while events from December 1 to December 31, 2022, formed the external validation set. Model inference and performance evaluation were then performed, and the model was retrained by adjusting the model parameters. Results: Four machine learning models were analyzed and compared: XGBoost, Random Forest, Logistic Regression, and Decision Tree. Random Forest had the highest internal validation performance (AUC = 0.86); Random Forest and XGBoost had the highest external validation performance (AUC = 0.86); and Random Forest had the highest average cross-validation accuracy (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to support real-time assessment of ICU patients, so clinical staff cannot draw on more objective and continuous monitoring data to identify and predict the occurrence of delirium more accurately. It is hoped that predictive models developed through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and, combined with PADIS delirium care measures, provide individualized non-pharmacological interventions to maintain patient safety and improve the quality of care.
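
A minimal sketch of the modelling pipeline described above (KNN imputation, an 80/20 split for training and internal validation, a Random Forest classifier and AUC evaluation) follows; the CSV file and column names are hypothetical placeholders.

```python
# Minimal sketch of the described pipeline: KNN imputation of missing feature
# values, an 80/20 train/internal-validation split, a Random Forest classifier,
# and AUC / cross-validation evaluation. Data and column names are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

events = pd.read_csv("icu_delirium_events.csv")      # one row per 8-hour event
features = ["age", "sex", "icu_hours", "visual_abnormality", "auditory_abnormality",
            "rass", "apache2", "n_catheters", "restraint", "sedative_hypnotic"]
X = KNNImputer(n_neighbors=5).fit_transform(events[features])
y = events["delirium_next_8h"]

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2,
                                                  stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("internal validation AUC:",
      round(roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]), 2))
print("cross-validation accuracy:",
      round(cross_val_score(clf, X_train, y_train, cv=5).mean(), 2))
```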

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 73
5 Rapid Situation Assessment of Family Planning in Pakistan: Exploring Barriers and Realizing Opportunities

Authors: Waqas Abrar

Abstract:

Background: Pakistan is confronted with a formidable challenge in increasing the uptake of modern contraceptive methods. USAID, through its flagship Maternal and Child Survival Program (MCSP) in Pakistan, is determined to support the provincial Departments of Health and Population Welfare in increasing the country's contraceptive prevalence rates (CPR) in Sindh, Punjab and Balochistan to achieve FP2020 goals. To inform program design and planning, a Rapid Situation Assessment (RSA) of family planning was carried out in the Rawalpindi and Lahore districts in Punjab and the Karachi district in Sindh. Methodology: The methodology consisted of a comprehensive desk review of the available literature and a qualitative approach comprising in-depth interviews (IDIs) and focus group discussions (FGDs). FGDs were conducted with community women, men, and mothers-in-law, whereas IDIs were conducted with health facility in-charges/chiefs, healthcare providers, and community health workers. Results: Some of the oft-quoted reasons captured during the desk review included poor quality of care at public sector facilities, poor affordability and accessibility in rural communities, and providers' technical incompetence. Moreover, providers had inadequate knowledge of contraceptive methods and lacked counseling techniques, leading to dissatisfied clients and, hence, discontinuation of contraceptive methods. These dissatisfied clients spread myths and misconceptions about contraceptives in their communities, which seriously damages community-level family planning efforts. Private providers were found reluctant to insert Intrauterine Contraceptive Devices (IUCDs) due to inadequate knowledge of post-insertion issues/side effects. The FGDs and IDIs unveiled multi-faceted reasons for poor contraceptive uptake. It was found that low education and socio-economic levels lead to low contraceptive uptake, and mostly uneducated women rely on condoms provided by Lady Health Workers (LHWs). Providers had little or no knowledge about postpartum family planning or lactational amenorrhea. At the community level, family planning counseling sessions organized by LHWs and male mobilizers do not sensitize community men to the permissibility of contraception in Islam. Many women attributed their physical ailments to the use of contraceptives. The lack of in-service training, job aids and Information, Education and Communications (IEC) materials at facilities seriously compromises the quality of care in effective family planning service delivery. This is further compounded by frequent stock-outs of contraceptives at public healthcare facilities, poor data quality, false reporting, and the lack of data verification systems and follow-up. Conclusions: Several key conclusions emerged from this assessment. First, healthcare providers need capacity building on long-acting reversible contraceptives (LARCs), which give women contraception for a longer period. Second, capacity building of healthcare providers on postpartum family planning is an enormous challenge that can best be addressed through institutionalization. Third, providers should be equipped with counseling skills and techniques, including conveying the pros and cons of all contraceptive methods. Fourth, printed materials such as job aids and Information, Education and Communications (IEC) materials should be disseminated among healthcare providers and clients. These conclusions helped MCSP make informed decisions with regard to setting the broad objectives of the project and were duly approved by USAID.

Keywords: capacity building, contraceptive prevalence rate, family planning, institutionalization, Pakistan, postpartum care, postpartum family planning services

Procedia PDF Downloads 153
4 Teacher Collaboration Impact on Bilingual Students’ Oral Communication Skills in Inclusive Contexts

Authors: Diana González, Marta Gràcia, Ana Luisa Adam-Alcocer

Abstract:

Incorporating digital tools into educational practice represents a valuable approach for enriching the quality of teachers' work on oral competence and fostering improvements in student learning outcomes. This study aims to promote a collaborative and culturally sensitive approach to professional development between teachers and a speech therapist to enhance their self-awareness and reflection on high-quality educational practices that integrate school components to strengthen children’s oral communication and pragmatic skills. The study involved five bilingual teachers fluent in both English and Spanish, three specializing in special education and two in general education. It focused on Spanish-English bilingual students, aged 3-6, who were experiencing speech delays or disorders in a New York City public school, with the collaboration of a speech therapist. Using EVALOE-DSS (Assessment Scale of Oral Language Teaching in the School Context - Decision Support System), teachers conducted self-assessments of their teaching practices, reflected, and made decisions across six classes from March to June, focusing on students' communicative competence in various activities. Concurrently, the speech therapist observed and evaluated six classes per teacher using EVALOE-DSS during the same period. Additionally, professional development meetings were held monthly between the speech therapist and the teachers, centering on classroom interactions, instructional strategies, and the progress of both teachers and students. Findings highlight the value of the digital tool EVALOE-DSS in analyzing communication patterns and trends among bilingual children in inclusive settings. It helps in identifying areas for improvement through teacher and speech therapist collaboration. After self-reflection meetings, teachers demonstrated increased awareness of student needs in oral language and pragmatic skills. They also exhibited enhanced use of the strategies outlined in EVALOE-DSS, such as actively guiding and orienting students during oral language activities, promoting student-initiated communicative interactions, teaching students how to seek and provide information, and managing turn-taking to ensure inclusive participation. Teachers participating in the professional development program showed positive progress in assessing their classes across all dimensions of the training tool, including instructional design, teacher conversation management, pupil conversation management, communicative functions, teacher strategies, and pupil communication functions. This includes aspects related to both teacher actions and child actions, particularly in child language development. This progress underscores the effectiveness of individual reflection (conducted weekly or biweekly using EVALOE-DSS) as well as collaborative reflection among the teachers and the speech therapist during meetings. EVALOE-DSS has proven effective in supporting teachers' self-reflection, decision-making, and classroom changes, leading to improved development of students' oral language and pragmatic skills. It has facilitated culturally sensitive evaluations of communication among bilingual children, cultivating collaboration between teachers and the speech therapist to identify areas of growth. Participants in the professional development program demonstrated substantial progress across all dimensions assessed by EVALOE-DSS. This included improved management of pupil communication functions, implementation of effective teaching strategies, and better classroom dynamics. Regular reflection sessions using EVALOE-DSS supported continuous improvement in instructional practices, highlighting its role in fostering reflective teaching and enriching student learning experiences. Overall, EVALOE-DSS has proven invaluable for enhancing teaching effectiveness and promoting meaningful student interactions in diverse educational settings.

Keywords: bilingual students, collaboration, culturally sensitive, oral communication skills, self-reflection

Procedia PDF Downloads 34
3 Critical Factors for Successful Adoption of Land Value Capture Mechanisms – An Exploratory Study Applied to Indian Metro Rail Context

Authors: Anjula Negi, Sanjay Gupta

Abstract:

Paradigms studied point to inadequacies of financial resources, whether to finance metro rail construction, to meet operational revenues or to derive profits in the long term. Funding sustainability remains elusive for much-needed public transport modes, such as urban rail or metro rail, to be successfully operated. India has embarked upon a sustainable transport journey and has proposed metro rail systems countrywide. As an emerging economic leader, its fiscal constraints are paramount, and the land value capture (LVC) mechanism provides necessary support and innovation toward development. India’s metro rail policy promotes multiple methods of financing, including private-sector investment and public-private partnerships. The critical question that remains to be addressed is what factors can make such mechanisms work. Globally, urban rail is noted by many researchers as a revolution in future mobility. In this study, the researchers take a deep dive, by way of literature review and empirical assessment, into the factors that can lead to the adoption of LVC mechanisms. It is understood that the adoption of LVC methods is at a nascent stage in India. Research points to numerous challenges faced by metro rail agencies in raising funding and capturing incremental value. Issues pertaining to land-based financing include, inter alia, long-term financing, inter-institutional coordination, economic/market suitability, dedicated metro funds, land ownership issues, a piecemeal approach to real estate development, and property development legal frameworks. The question under investigation is which parameters can lead to success in the adoption of land value capture (LVC) as a financing mechanism. This research provides insights into key parameters crucial to the adoption of LVC in the context of Indian metro rails. The researchers have studied the current forms of LVC mechanisms at various metro rails of the country. This study is significant as little research is available on the adoption of LVC applicable to the Indian context. Transit agencies, state governments, urban local bodies, policymakers and think tanks, academia, developers, funders, researchers and multilateral agencies may benefit from this research in taking LVC mechanisms forward in practice. The study deems it imperative to explore and understand the key parameters that impact the adoption of LVC. An extensive literature review and ratification by experts working in the metro rail arena were undertaken to arrive at the parameters for the study. Stakeholder consultations were undertaken for principal component extraction in the exploratory factor analysis (EFA) process. Forty-three seasoned and specialized experts, representing various types of stakeholders, scaled the maximum likelihood of each parameter through a semi-structured questionnaire. Empirical data were collected on the eighteen chosen parameters, and significant correlations were extracted for descriptive and inferential statistics. The study findings reveal the principal components to be institutional governance framework, spatial planning features, legal frameworks, funding sustainability features and fiscal policy measures. In particular, funding sustainability features highlight the sub-variables of beneficiaries paying and the use of multiple revenue options as drivers of success in LVC adoption. The researchers recommend incorporating these variables at an early stage of design and project structuring for successful adoption of LVC, in turn improving the revenue sustainability of a public transport asset and helping in making informed transport policy decisions.
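
As a rough illustration of the component-extraction step, the sketch below reduces expert ratings of eighteen parameters to five components with PCA; the ratings file, parameter set and component labels are hypothetical stand-ins for the study's EFA.

```python
# Minimal sketch of principal component extraction behind a five-component
# solution: 43 expert ratings of 18 parameters reduced with PCA. The ratings
# file, parameter names and component labels are hypothetical placeholders.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

ratings = pd.read_csv("lvc_expert_ratings.csv")     # 43 rows x 18 parameter columns

scaled = StandardScaler().fit_transform(ratings)
pca = PCA(n_components=5).fit(scaled)

print("variance explained:", pca.explained_variance_ratio_.round(2))
loadings = pd.DataFrame(pca.components_.T, index=ratings.columns,
                        columns=["governance", "spatial_planning", "legal",
                                 "funding_sustainability", "fiscal_policy"])
# Parameters are assigned to the component on which they load most strongly
print(loadings.abs().idxmax(axis=1))
```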

Keywords: exploratory factor analysis, land value capture mechanism, financing metro rails, revenue sustainability, transport policy

Procedia PDF Downloads 81
2 A Comprehensive Study of Spread Models of Wildland Fires

Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of the various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation that is used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors such as weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. Using a comparative approach, a critical analysis is provided that identifies patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights provided by synthesizing established information. Fire spread models provide insights into potential fire behavior, enabling authorities to make informed decisions about evacuation activities, allocation of resources for firefighting efforts, and planning of preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies, as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and advances our understanding of the way forest fires spread. Some of the known models in this field are Rothermel’s wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, the cellular automata model, and others. The key characteristics that these models consider include weather (e.g., wind speed and direction), topography (e.g., landscape elevation), and fuel availability (e.g., vegetation type), among other factors. The models discussed are physics-based, data-driven, or hybrid models, some also utilizing machine learning techniques such as attention-based neural networks to enhance model performance. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action.
Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
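
Among the models named above, Rothermel's surface model is the classic example; its top-level rate-of-spread relationship can be written as the small function below, where each input would in practice be derived from fuel-bed, moisture, wind and slope parameters (the values shown are made up for illustration).

```python
# The top-level form of Rothermel's surface fire rate-of-spread equation, one
# of the models the survey discusses. In the full model each term is itself
# derived from fuel-bed, moisture, wind and slope inputs; here they are taken
# as given, so this is only a sketch of the governing relationship.
def rothermel_rate_of_spread(reaction_intensity, propagating_flux_ratio,
                             wind_factor, slope_factor,
                             bulk_density, effective_heating_number,
                             heat_of_preignition):
    """Rate of spread R = I_R * xi * (1 + phi_w + phi_s) / (rho_b * eps * Q_ig)."""
    heat_source = reaction_intensity * propagating_flux_ratio * (1 + wind_factor + slope_factor)
    heat_sink = bulk_density * effective_heating_number * heat_of_preignition
    return heat_source / heat_sink

# Illustrative (made-up) values only; real inputs come from fuel models such
# as those behind FARSITE and FlamMap.
R = rothermel_rate_of_spread(reaction_intensity=5000.0, propagating_flux_ratio=0.04,
                             wind_factor=2.5, slope_factor=0.3,
                             bulk_density=0.5, effective_heating_number=0.2,
                             heat_of_preignition=2500.0)
print(f"Rate of spread: {R:.2f} (units depend on the unit system of the inputs)")
```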

Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling

Procedia PDF Downloads 81
1 An Intelligent Search and Retrieval System for Mining Clinical Data Repositories Based on Computational Imaging Markers and Genomic Expression Signatures for Investigative Research and Decision Support

Authors: David J. Foran, Nhan Do, Samuel Ajjarapu, Wenjin Chen, Tahsin Kurc, Joel H. Saltz

Abstract:

The large-scale data and computational requirements of investigators throughout the clinical and research communities demand an informatics infrastructure that supports both existing and new investigative and translational projects in a robust, secure environment. In some subspecialties of medicine and research, the capacity to generate data has outpaced the methods and technology used to aggregate, organize, access, and reliably retrieve this information. Leading health care centers now recognize the utility of establishing an enterprise-wide clinical data warehouse. The primary benefits that can be realized through such efforts include cost savings, efficient tracking of outcomes, advanced clinical decision support, improved prognostic accuracy, and more reliable clinical trials matching. The overarching objective of the work presented here is the development and implementation of a flexible Intelligent Retrieval and Interrogation System (IRIS) that exploits the combined use of computational imaging, genomics, and data-mining capabilities to facilitate clinical assessments and translational research in oncology. The proposed System includes a multi-modal Clinical & Research Data Warehouse (CRDW) that is tightly integrated with a suite of computational and machine-learning tools to provide insight into underlying tumor characteristics that are not apparent through human inspection alone. A key distinguishing feature of the System is a configurable Extract, Transform and Load (ETL) interface that enables it to adapt to different clinical and research data environments. This project is motivated by the growing emphasis on establishing Learning Health Systems in which cyclical hypothesis generation and evidence evaluation become integral to improving the quality of patient care. To facilitate iterative prototyping and optimization of the algorithms and workflows for the System, the team has already implemented a fully functional Warehouse that can reliably aggregate information originating from multiple data sources including EHRs, Clinical Trial Management Systems, Tumor Registries, Biospecimen Repositories, Radiology PACS, Digital Pathology archives, unstructured clinical documents, and Next Generation Sequencing services. The System enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information about patient tumors, individually or as part of large cohorts, to identify patterns that may influence treatment decisions and outcomes. The CRDW core system has facilitated peer-reviewed publications and funded projects, including an NIH-sponsored collaboration to enhance the cancer registries in Georgia, Kentucky, New Jersey, and New York with machine-learning-based classifications and quantitative pathomics feature sets. The CRDW has also resulted in a collaboration with the Massachusetts Veterans Epidemiology Research and Information Center (MAVERIC) at the U.S. Department of Veterans Affairs to develop algorithms and workflows to automate the analysis of lung adenocarcinoma. Those studies showed that combining computational nuclear signatures with traditional WHO criteria through the use of deep convolutional neural networks (CNNs) led to improved discrimination among tumor growth patterns. The team has also leveraged the Warehouse to support studies investigating the potential of utilizing a combination of genomic and computational imaging signatures to characterize prostate cancer.
The results of those studies show that integrating image biomarkers with genomic pathway scores is more strongly correlated with disease recurrence than using standard clinical markers.
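
The closing comparison can be illustrated with the hedged sketch below, which contrasts a recurrence model built on standard clinical markers with one that adds imaging and genomic pathway features via cross-validated AUC; the cohort file and column names are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the kind of comparison behind the closing claim: a model
# using only standard clinical markers versus one that adds computational
# imaging signatures and genomic pathway scores, compared by cross-validated
# AUC for disease recurrence. All file and column names are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

cohort = pd.read_csv("prostate_cohort.csv")
clinical = ["age", "psa", "gleason_score"]
imaging_genomic = clinical + ["nuclear_signature_1", "nuclear_signature_2",
                              "pathway_score_ar", "pathway_score_pten"]
y = cohort["recurrence"]

for label, cols in [("clinical only", clinical),
                    ("clinical + imaging + genomic", imaging_genomic)]:
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    auc = cross_val_score(model, cohort[cols], y, cv=5, scoring="roc_auc").mean()
    print(f"{label:>30}: AUC = {auc:.2f}")
```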

Keywords: clinical data warehouse, decision support, data-mining, intelligent databases, machine-learning

Procedia PDF Downloads 126