Search results for: Analytic Network Process (ANP)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18944

11054 Experimental and Numerical Study on the Effects of Oxygen Methane Flames with Water Dilution for Different Pressures

Authors: J. P. Chica Cano, G. Cabot, S. de Persis, F. Foucher

Abstract:

Among all possibilities to combat global warming, CO2 capture and sequestration (CCS) is presented as a promising alternative to reduce greenhouse gas (GHG) emissions. Several strategies for CCS from industrial and power plants are being considered. The concept of combined oxy-fuel combustion has been one of the most studied alternative solutions. Nevertheless, due to the high cost of pure O2 production, additional approaches have recently emerged. In this paper, an innovative combustion process for a gas turbine cycle was studied: it was composed of methane combustion with oxygen enhanced air (OEA), exhaust gas recirculation (EGR) and H2O issuing from STIG (Steam Injection Gas Turbine), with the CO2 capture realized by a membrane separator. The focus was placed on this combustion process, and it was shown that a study of the influence of H2O dilution on the combustion parameters by experimental and numerical approaches had to be carried out. As a consequence, laminar burning velocity measurements were performed in a stainless steel spherical combustion chamber from atmospheric pressure up to high pressure (0.5 MPa), at 473 K and at an equivalence ratio of 1. These experimental results were satisfactorily compared with the Chemical Workbench v.4.1 package in conjunction with the GRI-Mech 3.0 reaction mechanism. The good agreement obtained between experimental and calculated flame speeds showed the validity of the GRI-Mech 3.0 mechanism in this domain of combustion: high H2O dilution, low N2, medium pressure. Finally, good estimations of flame speed and pollutant emissions were determined in other conditions compatible with real gas turbines. In particular, mixtures (composed of CH4/O2/N2/H2O or CO2) leading to the same adiabatic temperature were investigated. The influences of oxygen enrichment and H2O dilution (compared to CO2) were discussed.
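
The kinetic comparison described above can be reproduced, under stated assumptions, with open-source tools; the authors used Chemical Workbench, but a minimal Cantera sketch with the GRI-Mech 3.0 mechanism (shipped with Cantera as gri30.yaml) illustrates the freely propagating flame-speed calculation for a diluted CH4/O2/N2/H2O mixture. The composition and conditions below are illustrative, not the paper's exact cases.

```python
# Minimal sketch (not the authors' Chemical Workbench setup): laminar flame
# speed of a water-diluted CH4/O2/N2 mixture with GRI-Mech 3.0 via Cantera.
import cantera as ct

gas = ct.Solution("gri30.yaml")            # GRI-Mech 3.0 shipped with Cantera
# Illustrative composition: stoichiometric CH4/O2 with N2 and H2O dilution
gas.TPX = 473.0, ct.one_atm, "CH4:1.0, O2:2.0, N2:2.0, H2O:2.0"

flame = ct.FreeFlame(gas, width=0.03)      # 1-D freely propagating flame, 3 cm domain
flame.set_refine_criteria(ratio=3, slope=0.06, curve=0.12)
flame.solve(loglevel=0, auto=True)

print(f"Laminar burning velocity: {flame.velocity[0]*100:.2f} cm/s")
```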

Keywords: CO₂ capture, oxygen enrichment, water dilution, laminar burning velocity, pollutant emissions

Procedia PDF Downloads 151
11053 Identify the Traffic Safety Needs among Risky Groups in Iraq

Authors: Aodai Abdul-Illah Ismail

Abstract:

Despite the dramatic progress that has been made in traffic safety, millions of people are still killed or injured as a result of traffic crashes, in addition to the huge economic losses caused by these crashes. Traffic safety therefore continues to be one of the most serious issues worldwide, and it affects everyone who uses the road network system, whether you drive, walk, cycle, or push a pram. One of the areas that offers promise for further progress in traffic safety relates to risky groups (special population groups) who may have a higher potential to be involved in accidents. The traffic safety needs of risky groups differ from each other and also from those of the average population. Because of the various differences between these special groups and between them and the average population, it is not possible to address all the issues at the same time, which raises the importance of ranking them among the other safety issues. This paper explains a procedure used to identify the most critical traffic safety issues of five risky groups, which include younger, older and female drivers, people with disabilities, and school-aged children. Multiple criteria were used in selecting the critical issues because a single criterion is not sufficient. Highway safety professionals were surveyed to obtain the ranking of importance among the risky groups and then to develop the final ranking among issues by applying a weight to each of the criteria.
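
The final ranking step can be pictured as a simple weighted sum of criterion scores. The criterion names, weights, and scores below are hypothetical and serve only to illustrate the kind of aggregation the abstract describes.

```python
# Hypothetical weighted-criteria ranking sketch: aggregate expert scores per
# issue into a single priority value (higher = more critical).
weights = {"crash_frequency": 0.40, "injury_severity": 0.35, "exposure": 0.25}

issues = {  # expert scores on a 1-10 scale (illustrative values)
    "night-time driving (young drivers)": {"crash_frequency": 8, "injury_severity": 7, "exposure": 6},
    "intersection negotiation (older drivers)": {"crash_frequency": 6, "injury_severity": 8, "exposure": 5},
    "school-zone crossings (children)": {"crash_frequency": 5, "injury_severity": 9, "exposure": 7},
}

ranked = sorted(
    issues.items(),
    key=lambda kv: sum(weights[c] * s for c, s in kv[1].items()),
    reverse=True,
)
for issue, scores in ranked:
    total = sum(weights[c] * s for c, s in scores.items())
    print(f"{total:5.2f}  {issue}")
```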

Keywords: traffic safety, risky groups, old drivers, young drivers

Procedia PDF Downloads 337
11052 An Exploration of Renewal Utilization of Under-bridge Space Based on Spatial Potential Evaluation - Taking Chongqing Municipality as an Example

Authors: Xuelian Qin

Abstract:

Urban "organic renewal" based on the development of existing resources in high-density urban areas has become the mainstream of urban development in the new era. As an important stock resource of public space in high-density urban areas, promoting its value remodeling is an effective way to alleviate the shortage of public space resources. However, due to the lack of evaluation links in the process of underpass space renewal, a large number of underpass space resources have been left idle, facing the problems of low space conversion efficiency, lack of accuracy in development decision-making, and low adaptability of functional positioning to citizens' needs. Therefore, it is of great practical significance to construct the evaluation system of under-bridge space renewal potential and explore the renewal mode. In this paper, some of the under-bridge spaces in the main urban area of Chongqing are selected as the research object. Through the questionnaire interviews with the users of the built excellent space under the bridge, three types of six levels and twenty-two potential evaluation indexes of "objective demand factor, construction feasibility factor and construction suitability factor" are selected, including six levels of land resources, infrastructure, accessibility, safety, space quality and ecological environment. The analytical hierarchy process and expert scoring method are used to determine the index weight, construct the potential evaluation system of the space under the bridge in high-density urban areas of Chongqing, and explore the direction of renewal and utilization of its suitability. To provide feasible theoretical basis and scientific decision support for the use of under bridge space in the future.

Keywords: high-density urban area, potential evaluation, under-bridge space, renewal utilization

Procedia PDF Downloads 68
11051 Variable Shunt Reactors for Reactive Power Compensation of HV Subsea Cables

Authors: Saeed A. AlGhamdi, Nabil Habli, Vinoj Somasanran

Abstract:

This paper presents an application of 230 kV Variable Shunt Reactors (VSR) used to compensate the reactive power of dual 90 km subsea cables. A VSR integrates an on-load tap changer (OLTC) that adjusts the reactive power compensation to maintain acceptable bus voltages under a variable load profile and network configuration. The OLTC is typically controlled by an automatic voltage regulator (AVR) or a power management system (PMS), which allows the VSR rating to be changed in discrete steps. Typical regulation ranges start from as low as 20% and go up to 100% of the rated power, and VSRs are available for systems up to 550 kV. The regulation speed is normally on the order of seconds per step and approximately a minute from maximum to minimum rating. A VSR can be bus- or line-connected depending on line/cable length and compensation requirements. The flexible reactive compensation ranges achieved by recent VSR technologies have enabled newer facility designs to deploy line-connected VSRs through either disconnect switches, which saves space and cost, or through circuit breakers. Lines with VSRs are typically energized at lower taps (reduced reactive compensation) to minimize or remove the presence of delayed zero crossings.
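
The compensation requirement comes from the cable's capacitive charging power, roughly Qc = 2*pi*f*C*U^2 per unit length times the cable length. The figures below (capacitance per km, frequency, degree of compensation) are illustrative assumptions rather than data from the paper, and simply show the kind of sizing estimate behind a shunt reactor rating.

```python
# Illustrative sizing sketch (assumed cable data, not from the paper):
# capacitive charging power of a 230 kV, 90 km subsea cable pair and the
# shunt-reactor rating needed for a chosen degree of compensation.
import math

U = 230e3            # line-to-line voltage [V]
f = 60.0             # system frequency [Hz] (assumed)
c_per_km = 0.20e-6   # cable capacitance [F/km] (typical order of magnitude)
length_km = 90.0
n_cables = 2         # dual cables

C = c_per_km * length_km * n_cables
Qc = 2 * math.pi * f * C * U**2          # total charging reactive power [var]

compensation = 0.9                       # 90% shunt compensation (assumed)
Q_reactor = compensation * Qc
print(f"Charging power ~ {Qc/1e6:.1f} Mvar, reactor rating ~ {Q_reactor/1e6:.1f} Mvar")
```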

Keywords: power management, reactive power, subsea cables, variable shunt reactors

Procedia PDF Downloads 226
11050 Adsorption of Pb(II) with MOF [Co2(Btec)(Bipy)(DMF)2]N in Aqueous Solution

Authors: E. Gil, A. Zepeda, J. Rivera, C. Ben-Youssef, S. Rincón

Abstract:

Water pollution has become one of the most serious environmental problems. Multiple methods have been proposed for the removal of Pb(II) from contaminated water. Among these, adsorption processes have been shown to be more efficient, cheaper and easier to handle than other treatment methods. However, research on adsorbents with high adsorption capacities is still necessary. For this purpose, we proposed in this work the study of the metal-organic framework [Co2(btec)(bipy)(DMF)2]n (MOF-Co) as an adsorbent material for Pb(II) in aqueous media. MOF-Co was synthesized by a simple method. Firstly, 4,4'-dipyridyl, 1,2,4,5-benzenetetracarboxylic acid and cobalt(II) nitrate hexahydrate were each dissolved in N,N-dimethylformamide (DMF) and then mixed together in a reactor. The obtained solution was heated at 363 K in a muffle furnace for 68 h to complete the synthesis. It was washed and dried, obtaining MOF-Co as the final product. MOF-Co was characterized before and after the adsorption process by Fourier transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). Pb(II) in aqueous media was measured by atomic absorption spectroscopy (AAS). In order to evaluate the adsorption process in the presence of Pb(II) in aqueous media, experiments were carried out in flasks with a working volume of 100 mL at 200 rpm, with different MOF-Co quantities (0.0125 and 0.025 g), pH values (2-6), contact times (0.5-6 h) and temperatures (298, 308 and 318 K). The adsorption kinetics were represented by a pseudo-second-order model, which suggests that the adsorption took place through chemisorption, or chemical adsorption. The best adsorption results were obtained at pH 5. The Langmuir, Freundlich and BET equilibrium isotherm models were used to study the adsorption of Pb(II) with 0.0125 g of MOF-Co in the presence of different concentrations of Pb(II) (20-200 mg/L, 100 mL, pH 5) with 4 h of reaction. The correlation coefficients (R²) of the different models show that the Langmuir model fits better than the Freundlich and BET models, with R²=0.97 and a maximum adsorption capacity of 833 mg/g. Therefore, the Langmuir model can best describe the monolayer adsorption behavior of Pb(II) on MOF-Co. This value is the highest when compared to other materials such as a graphene/activated carbon composite (217 mg/g), biomass fly ashes (96.8 mg/g), PVA/PAA gel (194.99 mg/g) and a MOF with Ag12 nanoparticles (120 mg/g).
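
Fitting the Langmuir isotherm to equilibrium data is a standard nonlinear regression; the sketch below uses SciPy with made-up equilibrium points (Ce, qe) purely to show the procedure, not the paper's measurements.

```python
# Langmuir isotherm fit sketch: qe = qmax * KL * Ce / (1 + KL * Ce).
# Data points are hypothetical; replace with measured (Ce, qe) pairs.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([5.0, 15.0, 40.0, 80.0, 150.0])        # equilibrium conc. [mg/L]
qe = np.array([180.0, 420.0, 640.0, 750.0, 810.0])   # uptake [mg/g]

popt, _ = curve_fit(langmuir, Ce, qe, p0=[800.0, 0.05])
qmax, KL = popt

residuals = qe - langmuir(Ce, *popt)
r2 = 1 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
print(f"qmax = {qmax:.0f} mg/g, KL = {KL:.3f} L/mg, R^2 = {r2:.3f}")
```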

Keywords: adsorption, heavy metals, metal-organic frameworks, Pb(II)

Procedia PDF Downloads 202
11049 Human Performance Evaluation of Advanced Cardiac Life Support Procedure Using Fault Tree and Bayesian Network

Authors: Shokoufeh Abrisham, Seyed Mahmoud Hossieni, Elham Pishbin

Abstract:

In this paper, a hybrid method based on fault tree analysis (FTA) and Bayesian networks (BNs) is employed to evaluate the team performance quality of advanced cardiac life support (ACLS) procedures in the emergency department. Based on American Heart Association (AHA) guidelines, a categorization of staff actions leading to clinical incidents, and discussions with emergency medicine experts, a fault tree model of the ACLS procedure is obtained with respect to human performance. The obtained FTA model is converted into a BN, and several different scenarios are defined to demonstrate the efficiency and flexibility of the presented BN model. Also, a sensitivity analysis is conducted to indicate the effects of team leader presence and of the uncertainty of expert knowledge on the quality of ACLS. The proposed BN-based model shows how the results of risk analysis in medical procedures can be brought closer to reality compared with results obtained from FTA alone.
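
Converting a fault tree gate into a Bayesian network amounts to expressing the gate as a deterministic conditional probability table, after which evidence can be propagated. The sketch below encodes a single two-event OR gate with pgmpy; the event names and probabilities are hypothetical and do not come from the paper's ACLS model.

```python
# Minimal sketch (assumed events, not the paper's actual tree): a two-event OR
# gate from a fault tree expressed as a Bayesian network in pgmpy, so that
# evidence can update the top-event probability.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("DelayedDefibrillation", "ResuscitationFailure"),
                         ("WrongDrugDose", "ResuscitationFailure")])

# Basic-event probabilities (hypothetical expert estimates), states: 0 = no, 1 = yes
cpd_e1 = TabularCPD("DelayedDefibrillation", 2, [[0.90], [0.10]])
cpd_e2 = TabularCPD("WrongDrugDose", 2, [[0.95], [0.05]])

# OR gate: the top event occurs if either basic event occurs
cpd_top = TabularCPD(
    "ResuscitationFailure", 2,
    [[1, 0, 0, 0],   # P(failure = no  | E1, E2)
     [0, 1, 1, 1]],  # P(failure = yes | E1, E2)
    evidence=["DelayedDefibrillation", "WrongDrugDose"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_e1, cpd_e2, cpd_top)

infer = VariableElimination(model)
print(infer.query(["ResuscitationFailure"]))                    # prior top-event risk
print(infer.query(["ResuscitationFailure"],
                  evidence={"DelayedDefibrillation": 1}))       # scenario update
```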

Keywords: advanced cardiac life support, fault tree analysis, Bayesian belief networks, human performance, healthcare systems

Procedia PDF Downloads 134
11048 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of participating media and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique to solve radiative transfer problems in complicated geometries with arbitrary participating media. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles is recorded to train an artificial neural network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed with the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce the computational cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
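
The variance advantage of low-discrepancy sequences over pseudo-random numbers can be seen on a toy integral; the sketch below compares plain Monte Carlo and a scrambled Sobol sequence (via scipy.stats.qmc) on an assumed one-dimensional integrand, not on the radiative-transfer problem itself.

```python
# Toy comparison sketch: plain Monte Carlo vs. Sobol quasi-Monte Carlo on a
# simple 1-D integral (integral of exp(-x) on [0, 1]); illustrates the lower
# error of low-discrepancy sampling, not the paper's radiative solver.
import numpy as np
from scipy.stats import qmc

exact = 1.0 - np.exp(-1.0)
n = 2**12
rng = np.random.default_rng(0)

x_mc = rng.random(n)                                              # pseudo-random samples
x_qmc = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()   # Sobol samples

est_mc = np.exp(-x_mc).mean()
est_qmc = np.exp(-x_qmc).mean()
print(f"MC  error: {abs(est_mc - exact):.2e}")
print(f"QMC error: {abs(est_qmc - exact):.2e}")
```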

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 209
11047 Fenton Sludge's Catalytic Ability with Synergistic Effects During Reuse for Landfill Leachate Treatment

Authors: Mohd Salim Mahtab, Izharul Haq Farooqi, Anwar Khursheed

Abstract:

Advanced oxidation processes (AOPs) based on the Fenton reaction are versatile options for treating complex wastewaters containing refractory compounds. However, the classical Fenton process (CFP) has limitations, such as high sludge production and reagent dosage, which limit its broad use and result in secondary contamination. As a result, long-term solutions are required for process intensification and the removal of these impediments. This study shows that Fenton sludge can serve as a catalyst in the Fe³⁺/Fe²⁺ reductive pathway, allowing non-regenerated sludge to be reused for complex wastewater treatment, such as landfill leachate treatment, even in the absence of Fenton's reagents. Experiments with and without pH adjustment in stages I and II demonstrated that an acidic pH is desirable. Humic compounds in the leachate could improve the Fe³⁺/Fe²⁺ cycle under optimal conditions, and the chemical oxygen demand (COD) removal efficiency was 22±2% and 62±2% in stages I and II, respectively. Furthermore, excellent total suspended solids (TSS) removal (> 95%) and color removal (> 80%) were obtained in stage II. The processes underlying the synergistic (oxidation/coagulation/adsorption) effects were addressed. Design of experiments (DOE) is growing increasingly popular and has thus been adopted in the chemical, water, and environmental domains. The relevance of the statistical model for the desired response was validated using the explicitly stated optimal conditions. The operational factors, characteristics of the reused sludge, toxicity analysis, cost calculation, and future research objectives were also discussed. According to the study's findings, reusing non-regenerated Fenton sludge can minimize hazardous toxic solid emissions and total treatment costs.

Keywords: advanced oxidation processes, catalysis, Fe³⁺/Fe²⁺ cycle, fenton sludge

Procedia PDF Downloads 78
11046 Proposing an Architecture for Drug Response Prediction by Integrating Multiomics Data and Utilizing Graph Transformers

Authors: Nishank Raisinghani

Abstract:

Efficiently predicting drug response remains a challenge in the realm of drug discovery. To address this issue, we propose four model architectures that combine graphical representation with varying positions of multiheaded self-attention mechanisms. By leveraging two types of multi-omics data, transcriptomics and genomics, we create a comprehensive representation of target cells and enable drug response prediction in precision medicine. A majority of our architectures utilize multiple transformer models, one with a graph attention mechanism and the other with a multiheaded self-attention mechanism, to generate latent representations of the drug and omics data, respectively. Our model architectures apply an attention mechanism to both the drug and the multi-omics data, with the goal of procuring more comprehensive latent representations. The latent representations are then concatenated and input into a fully connected network to predict the IC50 score, a measure of cellular drug response. We experiment with all four of these architectures and report results from all of them. Our study contributes to the future of drug discovery and precision medicine by seeking to optimize the time and accuracy of drug response prediction.
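
A bare-bones PyTorch sketch of the late-fusion idea described above: one encoder (here a standard self-attention block standing in for the omics transformer), a simple drug encoder over a fingerprint vector, concatenation, and a fully connected head regressing IC50. The layer sizes, the fingerprint representation, and the omission of the graph attention branch are all assumptions made for illustration.

```python
# Late-fusion sketch (illustrative dimensions, not the paper's architecture):
# encode omics features with self-attention, encode a drug fingerprint with an
# MLP, concatenate, and regress an IC50 value.
import torch
import torch.nn as nn

class DrugResponseModel(nn.Module):
    def __init__(self, n_omics_tokens=64, d_model=128, fp_dim=1024):
        super().__init__()
        self.omics_proj = nn.Linear(1, d_model)          # per-gene scalar -> embedding
        self.attn = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.drug_enc = nn.Sequential(nn.Linear(fp_dim, 256), nn.ReLU(),
                                      nn.Linear(256, d_model))
        self.head = nn.Sequential(nn.Linear(2 * d_model, 128), nn.ReLU(),
                                  nn.Linear(128, 1))     # predicted IC50

    def forward(self, omics, drug_fp):
        # omics: (batch, n_omics_tokens), drug_fp: (batch, fp_dim)
        tokens = self.omics_proj(omics.unsqueeze(-1))    # (batch, tokens, d_model)
        omics_latent = self.attn(tokens).mean(dim=1)     # pooled omics representation
        drug_latent = self.drug_enc(drug_fp)
        return self.head(torch.cat([omics_latent, drug_latent], dim=-1))

model = DrugResponseModel()
ic50 = model(torch.randn(4, 64), torch.randn(4, 1024))   # dummy batch
print(ic50.shape)  # torch.Size([4, 1])
```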

Keywords: drug discovery, transformers, graph neural networks, multiomics

Procedia PDF Downloads 129
11045 Relationships between Actors within Business Ecosystems That Adopt Circular Strategies: A Systematic Literature Review

Authors: Sophia Barquete, Adriana H. Trevisan, Janaina Mascarenhas

Abstract:

The circular economy (CE) aims at the cycling of resources through restorative and regenerative strategies. To achieve circularity, coordination of several actors who have different responsibilities is necessary. The interaction among multiple actors allows the connection between the CE and business ecosystem research fields. Although fundamental, the relationships between actors within an ecosystem to foster circularity are not deeply explored in the literature. The objective of this study was to identify the possibilities of cooperation, competition, or even coopetition among the members of business ecosystems that adopt circular strategies. In particular, the motivations that make these actors interact to achieve a circular economy were investigated. A systematic literature review was adopted to select business ecosystem cases that adopt circular strategies. As a result, several motivations were identified for actors to engage in relationships within ecosystems, such as sharing knowledge and infrastructure, developing products with a circular design, promoting reverse logistics, among others. The results suggest that partnerships between actors are, in fact, important for the implementation of circular strategies. In order to achieve a complete and circular solution, actors must be able to clearly understand their roles and relationships within the network so that they can establish new partnerships or reframe those already established.

Keywords: business ecosystem, circular economy, cooperation, coopetition, competition

Procedia PDF Downloads 209
11044 Bicycle Tourism and Sharing Economy (C2C-Tourism): Analysis of the Reciprocity Behavior in the Case of Warmshowers

Authors: Jana Heimel, Franziska Drescher, Lauren Ugur, Graciela Kuchle

Abstract:

Sharing platforms are a widely investigated field. However, there is a research gap with a lack of focus on 'real' (non-profit-oriented) sharing platforms. The research project addresses this gap by conducting an empirical study on a private peer-to-peer (P2P) network to investigate cooperative behavior from a socio-psychological perspective. In recent years, the shift from possession to access has increasingly influenced different sectors, particularly the travel industry. The number of people participating in hospitality exchange platforms like Airbnb, Couchsurfing, and Warmshowers (WS) is rapidly growing. WS is an increasingly popular online community that links cycling tourists and locals. It builds on the idea of the "sharing economy" as a not-for-profit hospitality network for bicycle tourists. Hosts not only provide a sleeping berth and a warm shower free of charge but also offer additional services to their guests, such as cooking and washing clothes for them. According to previous studies, they are motivated by the idea of promoting cultural experience and forming new friendships. Trust and reciprocity are supposed to play major roles in the success of such platforms. The objective of this research project is to analyze reciprocity behavior within the WS community. Reciprocity is the act of giving and taking among each other; individuals feel obligated to return a favor and often expect to increase their own chances of receiving future benefits. Consequently, the drivers that incite giving and taking, as well as the motivations of hosts and guests, are examined. Thus, the project investigates a particular tourism offer that contributes to sustainable tourism by analyzing P2P (or cyclist-to-cyclist, C2C) tourism. C2C tourism is characterized by special hospitality and generosity. To find out which motivations drive the hosts and which determinants drive the sharing cycling economy, an empirical study has been conducted globally through an online survey. The data was gathered through the WS community and comprised responses from more than 10,000 cyclists around the globe. In addition to general information, mostly comprising quantitative data on bicycle tourism (year/tour distance, duration and budget), qualitative information on traveling with WS as well as hosting was collected. The most important motivations for a traveler are to explore the local culture, to save money, and to make friends. The main reasons to host a guest are to promote the use of bicycles and to make friends, but also to give back and pay forward. WS members prefer to stay with, and host, other cyclists. The results indicate that C2C tourists share homogeneous characteristics and a similar philosophy, which is crucial for building mutual trust; members of WS are generally extremely trustful. The study promotes an ecological form of tourism by combining sustainability, regionality, health, experience and the local communities' cultures. The empirical evidence found and analyzed, despite evident limitations, enabled us to shed light especially on the issues of motivation and social capital and on the functioning of 'sharing' platforms. The final research results are intended to promote C2C tourism around the globe to further replace conventional tourism with sustainable tourism.

Keywords: bicycle tourism, homogeneity, reciprocity, sharing economy, trust

Procedia PDF Downloads 109
11043 Propagation of the Effects of Certain Types of Military Psychological Operations in a Networked Population

Authors: Colette Faucher

Abstract:

In modern asymmetric conflicts, the Armed Forces generally have to intervene in countries where internal peace is in danger. They must make the local population an ally in order to be able to deploy the necessary military actions with its support. For this purpose, psychological operations (PSYOPs) are used to shape people's behaviors and emotions by modifying their attitudes through acting on their perceptions. PSYOPs aim at elaborating and spreading a message that must be read, listened to and/or looked at, and then understood by the info-targets in order to obtain the desired behavior from them. A message can generate reasoned thoughts, spontaneous emotions or reflex behaviors in the info-targets, this effect partly depending on the means of conveyance used to spread the message. In this paper, we focus on psychological operations that generate emotions. We present a method, based on Intergroup Emotion Theory, that determines, from the characteristics of the conveyed message and of the people from the population directly reached by the means of conveyance (direct info-targets), the emotion likely to be triggered in them, and we simulate the propagation of the effects of such a message to indirect info-targets who are connected to the direct info-targets through the social networks that structure the population.

Keywords: military psychological operations, social identity, social network, emotion propagation

Procedia PDF Downloads 397
11042 A New Center of Motion in Cabling Robots

Authors: Alireza Abbasi Moshaii, Farshid Najafi

Abstract:

In this paper, a new model for creating a centre of motion is proposed. The new method uses cables, so it is very useful in robots because it is light and has an easy assembly process. The method is particularly suitable for robots that need to be in contact with objects, as described in the following. The accuracy of the idea is proved by an experiment. This system could be used in robots which need a fixed point of contact with an object while making a circular motion, such as dancing, medical or repair robots.

Keywords: centre of motion, robotic cables, permanent touching, mechatronics engineering

Procedia PDF Downloads 417
11041 Prediction of the Crustal Deformation of Volcán Nevado del Ruíz in the Year 2020 Using TROPOMI Tropospheric Information, DInSAR Technique, and Neural Networks

Authors: Juan Sebastián Hernández

Abstract:

The Nevado del Ruíz volcano, located on the boundary between the departments of Caldas and Tolima in Colombia, presented unstable behaviour in the course of the year 2020. This volcanic activity led to secondary effects on the crust, which is why the prediction of deformations becomes a task for geoscientists. In this article, tropospheric variables such as evapotranspiration, UV aerosol index, carbon monoxide, nitrogen dioxide, methane, and surface temperature, among others, are used to train a set of neural networks that can predict the behaviour of the resulting phase of an interferogram unwrapped with the DInSAR technique, with the main objective of identifying and characterising the behaviour of the crust based on environmental conditions. For this purpose, the variables were collected, a generalised linear model was created, and a set of neural networks was trained. After the training of the network, validation was carried out with the test data, giving an MSE of 0.17598 and an associated R-squared of approximately 0.88454. The resulting model provided a dataset with good thematic accuracy, reflecting the behaviour of the volcano in 2020, given a set of environmental characteristics.
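
The regression setup described above (environmental predictors mapped to an unwrapped-phase response, evaluated with MSE and R²) can be sketched with scikit-learn; the feature count and the synthetic data below are placeholders, not the study's TROPOMI or DInSAR datasets.

```python
# Regression sketch (synthetic placeholder data, not TROPOMI/DInSAR inputs):
# train a small neural network to map environmental variables to an
# interferogram-phase response and report MSE / R^2 on held-out data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))        # e.g. evapotranspiration, CO, NO2, CH4, ...
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)   # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("MSE:", round(mean_squared_error(y_test, pred), 5))
print("R^2:", round(r2_score(y_test, pred), 5))
```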

Keywords: crustal deformation, Tropomi, neural networks (ANN), volcanic activity, DInSAR

Procedia PDF Downloads 84
11040 Queer Anti-Urbanism: An Exploration of Queer Space Through Design

Authors: William Creighton, Jan Smitheram

Abstract:

Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In doing so, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring's work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection and the desire to destroy the city towards a mode of queer critique that counters normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame in which those who do not fit are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a "design as research" methodology, the design outputs become a vehicle for asking how we might live, otherwise, in architectural space. Design as research is a process of questioning, designing and reflecting in a non-linear, iterative approach; here it establishes itself through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship: the work began by exploring the relations between body and body, body and known others, and body and unknown others. Moving through increasing scales was not meant to privilege the objective, the public and the large scale; instead, "intra-scaling" acts as a tool to re-think how scale reproduces normative ideas of the identity of space. There was a queering of scale. Through this approach, the first result was an installation that brings two people together to co-author space, where the installation distorts the sensory experience and forces a more intimate and interconnected experience challenging our socialized proxemics: knees might touch. To queer the home, the installation was then used as a drawing device, a tool to study and challenge spatial perception and drawing convention, and a way to process practical information about the site and the existing house; the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through "private" and "public" to support kinship through communal labour, queer relationality and mooring. The resulting design works to set bodies adrift in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way to consider place, space and belonging. The projects lend themselves to a queer relationality and interdependence by making spaces that support the unsettled and out-of-place; but is it queer enough?

Keywords: queer, queer anti-urbanism, design as research, design

Procedia PDF Downloads 153
11039 Fueling Efficient Reporting And Decision-Making In Public Health With Large Data Automation In Remote Areas, Neno Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Julia Huggins, Fabien Munyaneza

Abstract:

Background: Partners In Health – Malawi introduced an operational research project called the Primary Health Care (PHC) Surveys in 2020, which seeks to assess the progress of care delivery in the district. The study consists of 5 long surveys, namely Facility Assessment, General Patient, Provider, Sick Child, and Antenatal Care (ANC), primarily conducted in 4 health facilities in Neno district. These facilities include Neno district hospital, Dambe health centre, Chifunga and Matope. Usually, these annual surveys are conducted from January, and the target is to present the final report by June. Once the data is collected and analyzed, a series of reviews takes place before reaching the final report. Initially, the manual process took over 9 months to produce the final report, and initial findings reported that only about 76.9% of the data matched when cross-checked with paper-based sources. Purpose: The aim of this approach is to move away from manually pulling the data, performing fresh analysis, and reporting, a process associated not only with delays and reporting inconsistencies but also with poor data quality if not done carefully. This automation approach was meant to utilize features of new technologies to create visualizations, reports, and dashboards in Power BI that are fed directly from the data source, CommCare, and hence only require a single click of a 'refresh' button to populate the updated information in visualizations, reports, and dashboards at once. Methodology: We transformed paper-based questionnaires into electronic forms using the CommCare mobile application. We further connected the CommCare mobile app directly to Power BI using an Application Programming Interface (API) connection as the data pipeline. This provided the chance to create visualizations, reports, and dashboards in Power BI. In contrast to the process of manually collecting data in paper-based questionnaires, entering it in ordinary spreadsheets, and conducting analysis every time a report is prepared, the team utilized CommCare and Microsoft Power BI technologies. We utilized validations and logic in CommCare to capture data with fewer errors. We utilized Power BI features to host the reports online by publishing them through a cloud-computing process. We switched from sharing ordinary report files to sharing a link with potential recipients, giving them the freedom to dig deeper into additional findings within the Power BI dashboards and to export to any format of their choice. Results: This data automation approach reduced research timelines from the initial 9 months to 5 months. It also improved the quality of the data findings from the original 76.9% to 98.9%. This brought confidence to draw conclusions from the findings that help in decision-making and gave opportunities for further research. Conclusion: These results suggest that automating the research data process has the potential to reduce the overall amount of time spent and improve data quality. On this basis, the concept of data automation should be taken into serious consideration when conducting operational research for efficiency and decision-making.

Keywords: reporting, decision-making, Power BI, CommCare, data automation, visualizations, dashboards

Procedia PDF Downloads 106
11038 Revised Risk Priority Number in Failure Mode and Effects Analysis Model from the Perspective of Healthcare System

Authors: Fatemeh Rezaei, Mohammad H. Yarmohammadian, Masoud Ferdosi, Abbas Haghshnas

Abstract:

Background: Failure Mode and Effects Analysis (FMEA) is now known as one of the main methods of risk assessment and is an accreditation requirement for many organizations. The Risk Priority Number (RPN) approach is generally preferred, especially for its ease of use. Indeed, it does not require statistical data but is based on subjective evaluations given by experts about the Occurrence (Oi), the Severity (Si) and the Detectability (Di) of each cause of failure. Methods: This study is a quantitative-qualitative research project. For the qualitative dimension, the focus group method with an inductive approach was used. To evaluate the results of the qualitative study, a quantitative assessment was conducted to calculate the RPN score. Results: We studied the patient journey process in the surgery ward, and the most important phase of the process was determined to be the transport of the patient from the holding area to the operating room. The failures of this phase with the highest priority were determined by defining inclusion criteria covering severity (clinical effect, claim consequence, waste of time and financial loss), occurrence (time-unit occurrence and degree of exposure to risk) and preventability (degree of preventability and defensive barriers), and by quantifying the risk priority criteria in the form of the RPN index. Reassessment of the improved RPN by root cause analysis (RCA) showed some variations. Conclusions: Finally, it could be concluded that understandable criteria should be developed according to the personnel's specialized language and field of communication. Therefore, the participation of both technical and clinical groups is necessary to modify and apply these models.
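
The classical RPN is simply the product of the three expert ratings, RPN = O x S x D, each usually scored 1-10; the revised scheme in this paper redefines the criteria behind the ratings rather than the product itself. The failure modes and ratings below are hypothetical, shown only to illustrate the prioritization arithmetic.

```python
# Classical RPN sketch: RPN = Occurrence x Severity x Detectability.
# Failure modes and ratings are hypothetical examples, not the study's data.
failure_modes = [
    # (description,                          O,  S,  D)
    ("patient identity not re-verified",      4,  9,  3),
    ("transport delayed by bed shortage",     6,  5,  4),
    ("monitoring disconnected in transit",    3,  8,  5),
]

for desc, o, s, d in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN = {o*s*d:4d}  ({o} x {s} x {d})  {desc}")
```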

Keywords: failure mode and effects analysis, risk priority number (RPN), health system, risk assessment

Procedia PDF Downloads 303
11037 Effects of Seed Culture and Attached Growth System on the Performance of Anammox Hybrid Reactor (AHR) Treating Nitrogenous Wastewater

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

The start-up of the anammox (anaerobic ammonium oxidation) process in a hybrid reactor delineated four distinct phases, i.e., cell lysis, lag phase, activity elevation and stationary phase. The cell lysis phase was marked by the death and decay of heterotrophic denitrifiers, resulting in the breakdown of organic nitrogen into ammonium. The lag phase showed the initiation of anammox activity with the turnover of heterotrophic denitrifiers, which is evident from the appearance of NO3-N in the effluent. In the activity elevation phase, anammox became the dominant reaction, which can be attributed to the consequent reduction of NH4-N into N2 with increased NO3-N in the effluent. Proper selection of the mixed seed culture at an influent NO2-/NH4+ ratio of 1:1 and a hydraulic retention time (HRT) of 1 day led to an early start-up of anammox within 70 days. Pseudo-steady-state removal efficiencies of NH4+ and NO2- were found to be 94.3% and 96.4%, respectively, at a nitrogen loading rate (NLR) of 0.35 kg N/m3d at an HRT of 1 day. Analysis of the data indicated that the attached growth system contributes an additional 11% increase in ammonium removal and results in an average 29% reduction in the sludge washout rate. A mass balance study of nitrogen indicated that 74.1% of the total input nitrogen is converted into N2 gas, followed by 11.2% being utilized in biomass development. Scanning electron microscope (SEM) observation of the granular sludge clearly showed the presence of cocci and rod-shaped microorganisms intermingled on the external surface of the granules. The average size of the anammox granules (1.2-1.5 mm), with an average settling velocity of 45.6 m/h, indicated a high degree of granulation, resulting in the formation of well-compacted granules in the anammox process.

Keywords: anammox, hybrid reactor, startup, granulation, nitrogen removal, mixed seed culture

Procedia PDF Downloads 168
11036 Design Thinking and Requirements Engineering in Application Development: Case Studies in Brazil

Authors: V. Prodocimo, A. Malucelli, S. Reinehr

Abstract:

Organizations, driven by business digitization, have in software their main core of value generation and their main channel of communication with clients. Software, as well as responding to momentary market needs, spans an extensive product family, ranging from mobile applications to multilateral platforms. Thus, the software specification needs to represent solutions focused on consumer problems and market needs. However, requirements engineering, whose approach is strongly linked to technology, becomes deficient and ineffective when the problem is not well defined or when an innovative solution is sought, and it thus needs a complementary approach. Research has cited the combination of design thinking and requirements engineering, often positioning design thinking as a support technique for the elicitation step; however, little is known about the real benefits and challenges that this combination can bring. From the point of view of the development process, there is little empirical evidence of how design thinking interactions with requirements engineering occur. Given this scenario, this paper aims to understand how design thinking practices are applied in each of the requirements engineering stages in software projects. To elucidate these interactions, a qualitative and exploratory study was carried out through the application of the case study method in IT organizations in Brazil that develop software projects. The results indicate that design thinking has aided requirements engineering, both in projects that adopt agile methods and in those that adopt the waterfall process, bringing a complementary way of thinking that seeks to build the best software solution design for business problems. It was also possible to conclude that organizations choose to use design thinking not based on a specific software family (e.g., mobile or desktop applications), but given the characteristics of the software projects, such as the vague nature of the problem, complex problems and/or the need for innovative solutions.

Keywords: software engineering, requirements engineering, design thinking, innovative solutions

Procedia PDF Downloads 110
11035 Resilience in the Face of Environmental Extremes through Networking and Resource Mobilization

Authors: Abdullah Al Mohiuddin

Abstract:

Bangladesh is one of the poorest countries in the world and ranks low on almost all measures of economic development, leaving the population extremely vulnerable to natural disasters and climate events. Agriculture accounts for 20% of GDP, but more than 60% of the population relies on agriculture as their main source of income, making the entire economy vulnerable to climate change and natural disasters. High population density exacerbates the exposure to and effects of climate events and increases the level of vulnerability, as does the poor institutional development of the country. The sectors most vulnerable to climate change impacts in Bangladesh are agriculture, coastal zones, water resources, forestry, fishery, health, biomass, and energy. High temperatures, heavy rainfall, high humidity and fairly marked seasonal variations characterize the climate in Bangladesh: a mild winter, a hot humid summer and a warm, humid rainy monsoon. Much of the country is flooded during the summer monsoon. The Department of Environment (DOE) under the Ministry of Environment and Forestry (MoEF) is the focal point for the United Nations Framework Convention on Climate Change (UNFCCC) and coordinates climate-related activities in the country. Recently, a Climate Change Cell (CCC) has been established to address several issues, including adaptation to climate change. The climate change focus started with the National Environmental Management Action Plan (NEMAP), which was prepared in 1995 in order to initiate the process of addressing environmental and climate change issues as long-term environmental problems for Bangladesh. Bangladesh was one of the first countries to finalise a NAPA (National Adaptation Programme of Action), which addresses climate change issues. The NAPA was completed in 2005 and is the first official initiative for mainstreaming adaptation into national policies and actions to cope with climate change and vulnerability. The NAPA suggests a number of adaptation strategies, for example: providing drinking water to coastal communities to fight the enhanced salinity caused by sea level rise; integrating climate change into the planning and design of infrastructure; including climate change issues in education; supporting the adaptation of agricultural systems to new weather extremes; mainstreaming CCA into policies and programmes in different sectors, e.g., disaster management, water and health; and disseminating CCA information and raising awareness of enhanced climate disasters, especially in vulnerable communities. Bangladesh has stepped up its environmental conservation efforts to protect one of the world's poorest countries from the adverse effects of global warming, and it is now turning towards green economy policies to save the degrading ecosystem. Bangladesh is a developing country and constantly fights against natural disasters. At the same time, it also works to establish an ecological environment by promoting the green economy and green energy through youth networking. ANTAR is coordinating a large youth network in the southern part of Bangladesh in which 30 youth groups are involved. The green economy can be explained as economic development based on sustainable development, which generates growth and improvement in people's lives while significantly reducing environmental risks and ecological scarcities. The green economy in Bangladesh promotes a triple bottom line: sustaining economic, environmental and social well-being.

Keywords: resilience, networking, resource mobilization

Procedia PDF Downloads 297
11034 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution

Authors: Haiyan Wu, Ying Liu, Shaoyun Shi

Abstract:

Authorship attribution aims to extract features that identify the authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features by regression or other transparent machine learning methods gives a portrait of the authors' writing style. But these methods do not capture syntactic (e.g., dependency relationships) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks. However, few works take them together. Besides, predictions by neural networks are difficult to explain, which is vital in authorship attribution tasks. In this paper, we not only utilize statistical style and content features but also take advantage of both syntactic and semantic features. Different from an end-to-end neural model, feature selection and prediction are two separate steps in our method. An attentive n-gram network is utilized to select useful features, and logistic regression is applied to give predictions and an understandable representation of writing style. Experiments show that our extracted features can improve state-of-the-art methods on three benchmark datasets.
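
A minimal baseline in the same spirit as the prediction step (interpretable logistic regression over n-gram features) can be sketched with scikit-learn; the two-author toy corpus below is invented, and the attentive n-gram selection network from the paper is not reproduced.

```python
# Baseline sketch: character n-gram features + logistic regression for
# authorship attribution. The toy corpus is invented; the paper's attentive
# n-gram feature-selection network is not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The committee shall convene at its earliest convenience.",
    "Pursuant to the foregoing, the motion is hereby adopted.",
    "honestly i just think the plan needs way more work lol",
    "gonna grab coffee first, then we can sort this out",
]
authors = ["author_A", "author_A", "author_B", "author_B"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # style-bearing n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, authors)
print(clf.predict(["the undersigned hereby requests a further extension"]))
```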

Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction

Procedia PDF Downloads 121
11033 Digital Fashion: An Integrated Approach to Additive Manufacturing in Wearable Fashion

Authors: Lingju Wu, Hao Hua

Abstract:

This paper presents a digital fashion production methodology and workflow based on fused deposition modeling additive manufacturing technology, as demonstrated through a 3D printed fashion show held at Southeast University in Nanjing, China. Unlike traditional fashion, 3D printed fashion allows for the creation of complex geometric shapes and unique structural designs, facilitating diverse reconfiguration and sustainable production of textile fabrics. The proposed methodology includes two components: morphogenesis and the 3D printing process. The morphogenesis part comprises digital design methods such as mesh deformation, structural reorganization, particle flow stretching, sheet partitioning, and spreading methods. The 3D printing process section includes three types of methods: sculptural objects, multi-material composite fabric, and self-forming composite fabrics. This paper focuses on multi-material composite fabrics and self-forming composite fabrics, both of which involve weaving fabrics with 3D-printed material sandwiches. Multi-material composite fabrics create specially tailored fabric from the original properties of the printing path and multiple materials, while self-forming fabrics apply pre-stress to the flat fabric and then print the sandwich, allowing the fabric's own elasticity to interact with the printed components and shape into a 3D state. The digital design method and workflow enable the integration of abstract sensual aesthetics and rational thinking, showcasing a digital aesthetic that challenges conventional handicraft workshops. Overall, this paper provides a comprehensive framework for the production of 3D-printed fashion, from concept to final product.

Keywords: digital fashion, composite fabric, self-forming structure, additive manufacturing, generative design

Procedia PDF Downloads 92
11032 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies

Authors: Pragati Sirohi, Vivek Singh Rana

Abstract:

A brand is a name, term, sign, symbol or design, or a combination of all of these, which is intended to identify the goods or services of one seller or group of sellers and to differentiate them from those of competitors. Perception influences purchase decisions here, and so building that perception is critical. The FMCG industry is a low-margin business, and volumes hold the key to success in this industry. Therefore, the industry has a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing brands. Brand loyalty is fickle; companies know this, and that is why they relentlessly work towards brand building. The purpose of the study is a comparison between Indian and multinational companies with regard to the FMCG sector in India. It has been hypothesized that after liberalization, Indian companies have taken up the challenge of globalization and some of them are giving stiff competition to MNCs, and that MNCs have a stronger brand image than Indian companies, with proportionately higher advertisement expenditures than their Indian counterparts. The operational area of the study is the country as a whole. Continuous time series data is available from 1996-2014 for the 8 selected companies. The selection of these companies is based on their large market share, brand equity and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). The brand value of each selected FMCG company is estimated as the excess of its market capitalization over its net worth, and brand value indices are calculated. The correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. The major results indicate that although MNCs enjoy a stronger brand image, a few Indian companies such as ITC are outstanding leaders in terms of market capitalization and brand value, and Dabur and Tata Global Beverages Ltd are competing equally well on these values. Advertisement expenditures are the highest for HUL, followed by ITC, Colgate and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditure plays a role in the brand building process, there are many other factors that affect it. Also, brand values are decreasing over the years for FMCG companies in India, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they have to consistently put proactive and relentless effort into their brand building process. Brands need focus and consistency. Brand longevity without innovation leads to brand respect but does not create brand value.
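
The brand-valuation step described above reduces to a simple calculation, brand value = market capitalization minus net worth, followed by a correlation of brand value with advertising spend. The company figures below are invented placeholders purely to show the computation, not values from the Prowess database.

```python
# Sketch of the valuation step: brand value = market capitalization - net worth,
# plus the correlation of brand value with advertising expenditure.
# All figures are invented placeholders, not Prowess/CMIE data.
import pandas as pd

df = pd.DataFrame({
    "company":    ["ITC", "HUL", "Dabur", "Colgate"],
    "market_cap": [3200.0, 2900.0, 850.0, 700.0],   # illustrative, arbitrary units
    "net_worth":  [ 600.0,  200.0, 120.0,  90.0],
    "ad_spend":   [ 110.0,  160.0,  45.0,  50.0],
})

df["brand_value"] = df["market_cap"] - df["net_worth"]
df["brand_value_index"] = df["brand_value"] / df["brand_value"].max()

print(df[["company", "brand_value", "brand_value_index"]])
print("Correlation(brand value, ad spend):",
      round(df["brand_value"].corr(df["ad_spend"]), 3))
```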

Keywords: brand value, FMCG, market capitalization, net worth

Procedia PDF Downloads 342
11031 Analysis of an Alternative Data Base for the Estimation of Solar Radiation

Authors: Graciela Soares Marcelli, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Claudineia Brazil, Rafael Haag

Abstract:

The sun is a source of renewable energy, and its use as both a source of heat and light is one of the most promising energy alternatives for the future. To design thermal or photovoltaic systems, a solar irradiation database is necessary. Brazil still has a reduced number of meteorological stations that provide such measurements, which makes reanalysis systems a quite significant alternative source of radiation data. ERA-Interim is a global atmospheric reanalysis produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). The data assimilation system used for the production of ERA-Interim is based on a 2006 version of the IFS (Cy31r2). The system includes a 4-dimensional variational analysis (4D-Var) with a 12-hour analysis window. The spatial resolution of the dataset is approximately 80 km, with 60 vertical levels from the surface to 0.1 hPa. This work aims to make a comparative analysis between the ERA-Interim data and the data observed in the Solarimetric Atlas of the State of Rio Grande do Sul, to verify its applicability in the absence of an observed data network. The results obtained are analyzed for a study region as an alternative means of estimating the energy potential of a given region.
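
Comparisons of a reanalysis series against station observations typically boil down to a few agreement metrics (mean bias, RMSE, correlation); the sketch below computes them on made-up daily irradiation series, not on the ERA-Interim or Atlas data.

```python
# Agreement-metrics sketch: compare a reanalysis irradiation series against
# station observations using mean bias, RMSE and Pearson correlation.
# The two series below are synthetic stand-ins, not ERA-Interim or Atlas data.
import numpy as np

rng = np.random.default_rng(1)
observed = 15 + 5 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 1.0, 365)
reanalysis = observed + 0.8 + rng.normal(0, 1.5, 365)   # biased, noisier proxy [MJ/m^2 day]

bias = np.mean(reanalysis - observed)
rmse = np.sqrt(np.mean((reanalysis - observed) ** 2))
corr = np.corrcoef(reanalysis, observed)[0, 1]
print(f"bias = {bias:.2f}  RMSE = {rmse:.2f}  r = {corr:.3f}")
```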

Keywords: energy potential, reanalyses, renewable energy, solar radiation

Procedia PDF Downloads 149
11030 Solutions to Reduce CO2 Emissions in Autonomous Robotics

Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu

Abstract:

Mobile robots can be used in many different applications, including mapping, search and rescue, reconnaissance, hazard detection, carpet cleaning, exploration, etc. However, they are limited due to their reliance on traditional energy sources such as electricity and oil, which cannot always provide a convenient energy source in all situations. In an ever more eco-conscious world, solar energy offers the most environmentally clean option of all energy sources. Electricity presents threats of pollution resulting from its production process, and oil poses a huge threat to the environment. Not only does it cause harm through the toxic emissions (for instance CO2 emissions) produced by the combustion process necessary to generate energy, but there is also the ever-present risk of oil spillages and damage to ecosystems. Solar energy can help to mitigate carbon emissions by replacing more carbon-intensive sources of heat and power. The challenge of this work is to propose the design and implementation of electric battery recharge stations. These recharge docks are based on the use of renewable energy such as solar energy (with photovoltaic panels), with the objective of reducing CO2 emissions. In this paper, a comparative study of the CO2 emissions produced when charging Segway PT batteries from different energy sources (natural gas, gas oil, fuel oil and solar panels) is carried out. To carry out the study with solar energy, a photovoltaic panel and a buck-boost DC/DC converter block have been used. Specifically, the STP005S-12/Db solar panel has been used in our experiments; this is a 5 Wp photovoltaic (PV) module configured with 36 monocrystalline cells connected in series. With these elements, a battery recharge station is built to recharge the robot batteries. For the energy storage DC/DC block, a series of ultracapacitors has been used. Due to the variation of the PV panel output with temperature and irradiation, the non-integer behavior of the ultracapacitors, and the non-linearities of the whole system, the authors have used a fractional control method so that the solar panels supply the maximum allowed power and recharge the robots in the least time. Greenhouse gas emissions from the production of electricity vary due to regional differences in source fuel. The impact of an energy technology on the climate can be characterised by its carbon emission intensity, a measure of the amount of CO2, or CO2 equivalent, emitted per unit of energy generated. In our work, coal is the most harmful fossil fuel, producing 53% more gas emissions than natural gas and 30% more than fuel oil. Moreover, it is remarkable that existing fossil fuel technologies produce high carbon emission intensity through the combustion of carbon-rich fuels, whilst renewable technologies such as solar produce little or no emissions during operation, although they may incur emissions during manufacture. Solar energy can thus help to mitigate carbon emissions.

Keywords: autonomous robots, CO2 emissions, DC/DC buck-boost, solar energy

Procedia PDF Downloads 408
11029 The Phenomenon of Rockfall in the TRACECA Corridor and the Choice of Engineering Measures to Combat It

Authors: I. Iremashvili, I. Pirtskhalaishvili, K. Kiknadze, F. Lortkipanidze

Abstract:

The paper deals with the causes of rockfall and its possible consequences on slopes adjacent to motorways and railways. A list of measures that hinder rockfall is given; these measures are directed at protecting roads from rockfalls rather than preventing them. From the standpoint of the local stability of slopes, the main effective measure is perhaps strengthening their surface by the filling method, which will check or end (or both) the processes of deformation, local slipping, sliding and the development of erosion.

Keywords: rockfall, concrete spraying, heliodevices, railways

Procedia PDF Downloads 363
11028 Improving the Fault Ride Through Capability and Stability of Wind Farms with DFIG Wind Turbines by Using STATCOM

Authors: Abdulfetah Shobole, Arif Karakas, Ugur Savas Selamogullari, Mustafa Baysal

Abstract:

Concern about reducing CO2 emissions from fossil fuel generating units and using renewable energy sources has increased worldwide. Due to this, the grid integration ratio of wind farms has reached 20-30% in some parts of the world. With the increased integration of large MW-scale wind farms into the electric grid, the stability of the electrical system is a great concern. Thus, power system operators usually demand that wind turbine generators obey the same rules as other traditional kinds of generation, such as thermal and hydro, i.e., that they do not affect grid stability. FACTS devices such as an SVC or STATCOM are mostly installed close to the connection point of the wind farm to the grid in order to increase stability, especially during fault conditions. In this paper, a wind farm with DFIG-type turbines and a STATCOM are dynamically modeled and simulated under a three-phase short-circuit fault condition. The dynamic modeling of the wind farm, the STATCOM, and the network is done with DigSILENT PowerFactory. The simulation results show an improvement of system stability near the connection point of the STATCOM.
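
To make the role of the STATCOM during a fault more concrete, the sketch below uses a static Thevenin approximation of the point of common coupling (PCC): capacitive current injected through a mainly inductive grid impedance raises the retained voltage. This is not the DigSILENT PowerFactory model of the paper; the grid impedance, fault depths, and STATCOM current limit are assumed values.

```python
# Illustrative sketch only: a static Thevenin view of the PCC voltage during a
# fault, with and without a STATCOM injecting reactive (capacitive) current.
# All numerical values below are assumptions, not data from the study.

V_GRID = 1.0            # pre-fault grid voltage, per unit
Z_GRID = 0.05 + 0.20j   # assumed grid Thevenin impedance, per unit
I_STATCOM_MAX = 1.0     # assumed STATCOM reactive current limit, per unit

def pcc_voltage(v_fault, i_statcom=0.0):
    """PCC voltage magnitude with capacitive injection through Z_GRID.

    Capacitive current injected into the bus flows through the mainly inductive
    grid impedance, which boosts the real part of the bus voltage phasor.
    """
    return abs(v_fault - 1j * i_statcom * Z_GRID)

for depth in (0.8, 0.5, 0.2):   # retained voltage at the PCC during the fault
    v_no = pcc_voltage(depth)
    v_with = pcc_voltage(depth, I_STATCOM_MAX)
    print(f"fault depth {depth:.1f} pu: without STATCOM {v_no:.2f} pu, "
          f"with STATCOM {v_with:.2f} pu")
```

With these assumed values, a 0.5 pu retained voltage rises to roughly 0.70 pu under full reactive injection, which is the mechanism by which a STATCOM near the connection point supports fault ride through.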

Keywords: DFIG wind turbine, STATCOM, dynamic modeling, DigSILENT

Procedia PDF Downloads 702
11027 The Polarization on Twitter and COVID-19 Vaccination in Brazil

Authors: Giselda Cristina Ferreira, Carlos Alberto Kamienski, Ana Lígia Scott

Abstract:

The COVID-19 pandemic has strengthened the anti-vaccination movement in Brazil, supported by unscientific theories and false news and by the possibility of wide communication through social networks such as Twitter, Facebook, and YouTube. The World Health Organization (WHO) classified the large volume of information circulating on the subject of COVID-19 as an infodemic. In this paper, we present a protocol to identify polarizing users (called polarizers) and study the profiles of Brazilian polarizers on Twitter (recently renamed X). We analyzed polarizing interactions on Twitter (in Portuguese) to identify the main polarizers and how the conflicts they caused influenced the COVID-19 vaccination rate throughout the pandemic. The protocol uses data from this social network, graph theory, and Java and RStudio scripts to model and analyze the data. The information about the vaccination rate was obtained from OpenDataSus, a public government database. The results present the profiles of the Twitter polarizers (political position, gender, professional activity, opinions on immunization). We observed that social and political events influenced the participation of these different profiles in conflicts and, in turn, the vaccination rate.
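
As a simplified illustration of how polarizing users might be scored from interaction data, the sketch below counts, for each user, reply/retweet edges that cross between opposing stance communities. It does not reproduce the paper's Java and RStudio protocol; the edge list and stance labels are hypothetical placeholders.

```python
# Illustrative sketch only, assuming a simplified notion of "polarizer":
# a user whose interactions frequently connect opposing stance communities.
# The edge list and stance labels below are hypothetical placeholders.

import networkx as nx

# hypothetical interaction edges: (source_user, target_user)
edges = [
    ("userA", "userB"), ("userA", "userC"), ("userD", "userA"),
    ("userB", "userC"), ("userE", "userA"), ("userE", "userD"),
]
# hypothetical stance labels obtained elsewhere (e.g., hashtags or manual coding)
stance = {"userA": "anti", "userB": "pro", "userC": "pro",
          "userD": "pro", "userE": "anti"}

G = nx.DiGraph()
G.add_edges_from(edges)

def cross_community_score(graph, labels):
    """Count, per user, interactions that connect opposite stances."""
    scores = {}
    for u, v in graph.edges():
        if labels.get(u) and labels.get(v) and labels[u] != labels[v]:
            scores[u] = scores.get(u, 0) + 1
            scores[v] = scores.get(v, 0) + 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(cross_community_score(G, stance))  # highest scores = candidate polarizers
```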

Keywords: Twitter, polarization, vaccine, Brazil

Procedia PDF Downloads 58
11026 The Current Application of BIM - An Empirical Study Focusing on the BIM-Maturity Level

Authors: Matthias Stange

Abstract:

Building Information Modelling (BIM) is one of the most promising methods in the building design process and plays an important role in the digitalization of the Architectural, Engineering, and Construction (AEC) industry. The application of BIM is seen as the key enabler for increasing productivity in the construction industry. Model-based collaboration using the BIM method is intended to significantly reduce cost increases, schedule delays, and quality problems in the planning and construction of buildings. Numerous qualitative studies based on expert interviews support this theory and report perceived benefits from the use of BIM in terms of achieving project objectives related to cost, schedule, and quality. However, there is a large research gap in analysing quantitative data collected from real construction projects regarding the actual benefits of applying BIM, based on a representative sample size, different application regions, and different project typologies. In particular, the influence of the project-related BIM maturity level is completely unexplored. This research project examines primary data from 105 construction projects worldwide using quantitative research methods. Projects from the areas of residential, commercial, and industrial construction as well as infrastructure and hydraulic engineering were examined in the application regions of North America, Australia, Europe, Asia, the MENA region, and South America. First, a descriptive analysis of 6 independent project variables (BIM maturity level, application region, project category, project type, project size, and BIM level) was carried out using statistical methods. With the help of statistical data analyses, the influence of the project-related BIM maturity level on 6 dependent project variables (deviation in planning time, deviation in construction time, number of planning collisions, frequency of rework, number of RFIs, and number of changes) was investigated. The study revealed that most of the benefits of using BIM perceived in numerous qualitative studies could not be confirmed. The results of the examined sample show that the application of BIM did not have an improving influence on the dependent project variables, especially regarding the quality of the planning itself and the adherence to schedule targets. The quantitative research suggests the conclusion that the BIM planning method in its current application has not (yet) led to a recognizable increase in productivity within the planning and construction process. The empirical findings indicate that this is due to the overall low level of BIM maturity in the projects of the examined sample. As a quintessence, the author suggests that the further implementation of BIM should primarily focus on an application-oriented and consistent development of the project-related BIM maturity level instead of implementing BIM for its own sake. Apparently, there are still significant difficulties in the interweaving of people, processes, and technology.
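
As one possible way to test the kind of relationship examined in the study, the sketch below applies a non-parametric Kruskal-Wallis test to schedule deviation grouped by BIM maturity level. The statistical procedure and the data values are illustrative assumptions, not the study's actual method or sample.

```python
# Illustrative sketch only: one way to test whether a dependent project variable
# (here, schedule deviation) differs across BIM maturity levels. The records
# below are hypothetical placeholders, not the 105-project sample of the study.

import pandas as pd
from scipy import stats

# hypothetical project records: BIM maturity level and schedule deviation (%)
df = pd.DataFrame({
    "bim_maturity": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "schedule_dev": [12.0, 9.5, 15.0, 8.0, 11.0, 7.5, 6.0, 9.0, 5.5],
})

# group the dependent variable by maturity level and run a non-parametric test
groups = [g["schedule_dev"].values for _, g in df.groupby("bim_maturity")]
h_stat, p_value = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")
```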

Keywords: AEC-process, building information modeling, BIM maturity level, project results, productivity of the construction industry

Procedia PDF Downloads 61
11025 Hydro-Gravimetric ANN Model for Prediction of Groundwater Level

Authors: Jayanta Kumar Ghosh, Swastik Sunil Goriwale, Himangshu Sarkar

Abstract:

Groundwater is one of the most valuable natural resources that society consumes for its domestic, industrial, and agricultural water supply. Its bulk and indiscriminate consumption adversely affects the groundwater resource. Often, it has been found that the groundwater recharge rate is much lower than the demand. Thus, to maintain water and food security, it is necessary to monitor and manage groundwater storage. However, it is challenging to estimate groundwater storage (GWS) using existing hydrological models. To overcome these difficulties, machine learning (ML) models are being introduced for the evaluation of groundwater level (GWL). Thus, the objective of this research work is to develop an ML-based model for the prediction of GWL. This objective has been realized through the development of an artificial neural network (ANN) model based on hydro-gravimetry. The model has been developed using training samples from field observations spread over 8 months and has been tested for the prediction of GWL in an observation well. The root mean square error (RMSE) for the test samples has been found to be 0.390 meters. Thus, it can be concluded that the hydro-gravimetric ANN model can be used for the prediction of GWL. However, to improve the accuracy, more hydro-gravimetric parameters may be considered and tested in future work.
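
As a minimal sketch of how such an ANN predictor might be set up and evaluated, the example below trains a small multilayer perceptron on synthetic hydro-gravimetric features and reports the test RMSE. The feature names, data, and network architecture are assumptions; only the evaluation metric (RMSE) matches the one reported in the abstract.

```python
# Illustrative sketch only, assuming a hydro-gravimetric ANN that maps
# gravity-derived and hydrological features to groundwater level (GWL).
# The features, synthetic data, and network size are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# hypothetical samples: [gravity_anomaly_uGal, rainfall_mm] -> GWL in metres
X = rng.uniform([0.0, 0.0], [50.0, 300.0], size=(200, 2))
y = 5.0 + 0.05 * X[:, 0] - 0.01 * X[:, 1] + rng.normal(0.0, 0.3, 200)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:160], y[:160])                   # first 160 samples for training

pred = model.predict(X[160:])                 # remaining samples as a test set
rmse = np.sqrt(mean_squared_error(y[160:], pred))
print(f"test RMSE = {rmse:.3f} m")            # the abstract reports 0.390 m
```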

Keywords: machine learning, hydro-gravimetry, ground water level, predictive model

Procedia PDF Downloads 111