Search results for: ambient computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1643

353 A Prediction Method of Pollutants Distribution Pattern: Flare Motion Using Computational Fluid Dynamics (CFD) Fluent Model with Weather Research Forecast Input Model during Transition Season

Authors: Benedictus Asriparusa, Lathifah Al Hakimi, Aulia Husada

Abstract:

A large amount of energy is wasted through the release of natural gas associated with the oil industry. These releases alter atmospheric conditions and contribute to global warming. This research presents an overview of the methods employed by researchers at PT. Chevron Pacific Indonesia in the Minas area to develop a new prediction method for measuring and reducing gas flaring and its emissions. The approach combines analytical studies, numerical studies, modeling, and computer simulation, among other techniques. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations; the burning occurs at the end of a flare stack or boom, and the combustion releases greenhouse gases such as NO2, CO2, SO2, etc. These emissions affect the chemical composition of the air and the environment around the boundary layer, mainly during the transition season. The transition season in Indonesia is very difficult to predict because two different air masses interact; this research focuses on the 2013 transition season. A simulation is therefore needed to establish a new pattern of pollutant distribution. This paper outlines trends in gas flaring modeling and current developments in predicting the dominant variables of pollutant distribution. A Fluent model is used to simulate the distribution of pollutant gases leaving the stack, whereas WRF model output is used to overcome the limited meteorological and atmospheric data available for the study area. Model runs show that the most influential factor is wind speed, so the goal of the simulation is to predict the distribution pattern at the times of fastest and slowest wind.
According to the simulation results, the fastest wind (end of March) moves pollutants in a horizontal direction, while the slowest wind (middle of May) moves pollutants vertically. In addition, a flare stack designed in compliance with the EPA Oil and Gas Facility Stack Parameters keeps pollutant concentrations below the thresholds of the NAAQS (National Ambient Air Quality Standards).
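The coupled Fluent/WRF workflow is beyond the scope of a snippet, but the dominant influence of wind speed on ground-level concentrations can be illustrated with a textbook Gaussian plume model. This is a rough sketch only: the dispersion coefficients, emission rate, and stack height below are illustrative assumptions, not values from the study.

```python
import math

def plume_concentration(q, u, x, y, z, h):
    """Steady-state Gaussian plume concentration (g/m^3) with ground reflection.

    q: emission rate (g/s); u: wind speed (m/s); (x, y, z): receptor
    position downwind, crosswind, and above ground (m); h: stack height (m).
    """
    # Illustrative rural power-law dispersion coefficients (Briggs-style form).
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # image source
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Faster wind dilutes the plume: ground-level centerline concentrations
# 1 km downwind of a 30 m stack emitting 100 g/s.
slow_wind = plume_concentration(100.0, 2.0, 1000.0, 0.0, 0.0, 30.0)
fast_wind = plume_concentration(100.0, 10.0, 1000.0, 0.0, 0.0, 30.0)
```

Because the concentration scales with 1/u at a fixed receptor, the five-fold wind-speed increase dilutes the plume five-fold, matching the qualitative wind-speed sensitivity the study reports.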

Keywords: flare motion, new prediction, pollutants distribution, transition season, WRF model

Procedia PDF Downloads 555
352 Investigations of the Service Life of Different Material Configurations at Solid-lubricated Rolling Bearings

Authors: Bernd Sauer, Michel Werner, Stefan Emrich, Michael Kopnarski, Oliver Koch

Abstract:

Friction reduction is an important aspect of sustainability and the energy transition. Rolling bearings are therefore used in many applications in which components move relative to each other. Conventionally lubricated rolling bearings serve a wide range of applications but are not suitable under certain conditions: conventional lubricants such as grease or oil cannot be used at very high or very low temperatures, and they evaporate at very low ambient pressure, e.g. in a high-vacuum environment, making the use of solid-lubricated bearings unavoidable. With solid-lubricated bearings, predicting the service life becomes more complex. While the end of the service life of conventionally lubricated bearings is mainly caused by material fatigue of the bearing components, solid-lubricated bearings fail at the moment the lubrication layer is worn away and the rolling elements come into direct contact with the raceway during operation. To extend the service life of these bearings beyond that of the initial coating, transfer lubrication is recommended, in which pockets or sacrificial cages in which the balls run absorb the lubricant, making it available in the tribological contact. This contribution presents the results of wear and service life tests on solid-lubricated rolling bearings with sacrificial cage pockets. The cage of the bearing consists of a polyimide (PI) matrix with 15% molybdenum disulfide (MoS2) and serves as a lubrication depot alongside the silver-coated balls. The bearings are tested under high vacuum (p < 10⁻² Pa) at a temperature of 300 °C on a four-bearing test rig. First, investigations of the bearing system within its service life are presented, and the torque curve, wear mass, and surface analyses are discussed.
With regard to wear, the bearing rings tend to gain mass over the service life of the bearing, while the balls and the cage tend to lose mass. With regard to surface properties, the surfaces of the bearing rings and balls are examined in terms of their elemental composition. Furthermore, service life investigations with different material pairings are presented, with the focus on the service life achieved in addition to the torque curve, wear development, and surface analysis. It was shown that MoS2 in the cage leads to a longer service life, while a silver (Ag) coating on the balls has no positive influence on the service life and even appears to reduce it in combination with MoS2.

Keywords: ball bearings, molybdenum disulfide, solid lubricated bearings, solid lubrication mechanisms

Procedia PDF Downloads 49
351 Regeneration of a Liquid Desiccant Using Membrane Distillation to Unlock Coastal Desert Agriculture Potential

Authors: Kimberly J. Cribbs, Ryan M. Lefers, TorOve Leiknes, Noreddine Ghaffour

Abstract:

In Gulf Cooperation Council (GCC) countries, domestic agriculture is hindered by a lack of freshwater, poor soil quality, and ambient temperatures unsuitable for cultivation, resulting in a heavy reliance on imported food. Attempts to minimize the risk of food insecurity by growing crops domestically create a significant demand on the limited freshwater resources of the region. Cultivating food in a greenhouse allows some of these challenges, such as poor soil quality and unsuitable temperatures, to be overcome. One of the most common methods for greenhouse cooling is evaporative cooling. This method cools the air by the evaporation of water; it requires a large amount of water relative to that needed for plant growth, as well as air with low relative humidity. Considering that much of the population in GCC countries lives within 100 km of a coast and that seawater can be utilized for evaporative cooling, coastal agriculture could reduce both the risk of food insecurity and the water demand. Unfortunately, coastal regions tend to experience both high temperatures and high relative humidity, making evaporative cooling by itself inadequate. Therefore, dehumidification is needed prior to evaporative cooling. Utilizing a liquid desiccant for air dehumidification is promising, but regenerating the desiccant to retain its dehumidification potential remains a significant obstacle to the adoption of this technology. This project studied the regeneration of a magnesium chloride (MgCl₂) desiccant solution from 20 wt% to 30 wt% by direct contact membrane distillation (DCMD) and explored the possibility of using the recovered water for irrigation. Two 0.2 µm hydrophobic PTFE membranes were tested at feed temperatures of 80, 70, and 60°C with a permeate temperature of 20°C. It was observed that the permeate flux increases as the difference between the feed and coolant temperatures increases and as the feed concentration decreases.
At 21 wt%, the permeate flux was 34, 17, and 14 L m⁻² h⁻¹ for feed temperatures of 80, 70, and 60°C, respectively. Salt rejection decreased over time; however, it remained greater than 99.9% over an experimental time span of 10 hours. The results show that DCMD can successfully regenerate the magnesium chloride desiccant solution.

Keywords: agriculture, direct contact membrane distillation, GCC countries, liquid desiccant, water recovery

Procedia PDF Downloads 149
350 Noise Source Identification on Urban Construction Sites Using Signal Time Delay Analysis

Authors: Balgaisha G. Mukanova, Yelbek B. Utepov, Aida G. Nazarova, Alisher Z. Imanov

Abstract:

The problem of identifying local noise sources on a construction site using a sensor system is considered. Mathematical modeling of the signals detected at the sensors was carried out, accounting for signal decay and the signal delay time between source and detector. Recordings of noises produced by construction tools were used as the time dependence of the noise. Synthetic sensor data were constructed from these recordings by applying a model of acoustic wave propagation from a point source in three-dimensional space. All sensors and sources are assumed to lie in the same plane. A source localization method is tested that uses the signal time delay between two adjacent detectors to plot a direction line toward the source; the position of the noise source is then determined from the intersection of two such lines. The cases of one dominant source and of two sources in the presence of several other sources of lower intensity are considered, with the number of detectors varying from three to eight. The intensity of the noise field in the assessed area is plotted. A signal of two seconds' duration is considered: the source is located for successive signal segments longer than 0.04 s, and the final result is obtained by averaging.
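The delay-then-intersect idea can be sketched in a few lines: estimate the inter-sensor delay from the cross-correlation peak, then convert it to a bearing under a far-field assumption. The sample rate, sensor spacing, and speed of sound below are illustrative choices, not the study's configuration.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Delay (s) of sig_b relative to sig_a from the cross-correlation peak."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

# Synthetic check: the same noise burst arriving 5 ms later at sensor B.
fs = 8000                                   # sample rate (Hz)
rng = np.random.default_rng(0)
src = rng.standard_normal(2000)             # broadband "tool noise"
delay_samples = 40                          # 40 / 8000 Hz = 5 ms
sig_a = src
sig_b = np.concatenate([np.zeros(delay_samples), src[:-delay_samples]])
tau = estimate_delay(sig_a, sig_b, fs)      # recovers 0.005 s

# Far-field bearing from the delay: sin(theta) = c * tau / d, with
# sound speed c = 343 m/s and an assumed sensor spacing d = 2 m.
bearing_deg = np.degrees(np.arcsin(343.0 * tau / 2.0))
```

Two such bearings from different detector pairs define two direction lines whose intersection gives the source position, as described in the abstract.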

Keywords: acoustic model, direction of arrival, inverse source problem, sound localization, urban noises

Procedia PDF Downloads 62
349 Investigation of Mechanical and Tribological Property of Graphene Reinforced SS-316L Matrix Composite Prepared by Selective Laser Melting

Authors: Ajay Mandal, Jitendar Kumar Tiwari, N. Sathish, A. K. Srivastava

Abstract:

A fundamental investigation is performed on the development of a graphene (Gr) reinforced stainless steel 316L (SS 316L) metal matrix composite via selective laser melting (SLM) in order to improve the specific strength and wear resistance of SS 316L. First, SS 316L powder and graphene were mixed in a fixed ratio using low-energy planetary ball milling. The milled powder was then subjected to the SLM process to fabricate composite samples at a laser power of 320 W and an exposure time of 100 µs. The prepared composite was mechanically tested (hardness and tensile tests) at ambient temperature, and the results indicate that the properties of the composite increased significantly with the addition of 0.2 wt. % Gr: increases of about 25% (from 194 to 242 HV) in hardness and 70% (from 502 to 850 MPa) in yield strength were obtained. Raman mapping and XRD were performed to examine the distribution of Gr in the matrix and its effect on carbide formation, respectively. The Raman maps show a uniform distribution of graphene inside the matrix. Electron backscatter diffraction (EBSD) maps of the prepared composite were analyzed under FESEM to understand the microstructure and grain orientation. Due to the thermal gradient, elongated grains were observed along the building direction, and the grains become finer with the addition of Gr. Since most mechanical components are subjected to several types of wear, the tribological properties of the composite were also measured, under dry sliding conditions, in addition to strength and hardness. The solid-lubrication behavior of Gr plays an important role during sliding, reducing the wear rate of the composite by up to 58%; the surface roughness of the worn surface is likewise reduced by up to 70%, as measured by 3D surface profilometry.
Finally, it can be concluded that SLM is an efficient method for fabricating cutting-edge metal matrix nanocomposites with reinforcements such as Gr, which are very difficult to fabricate by conventional manufacturing techniques. The prepared composite has superior mechanical and tribological properties and can be used in a wide variety of engineering applications. However, given the scarcity of literature in this domain, more experimental work, such as thermal property analysis, needs to be performed and is part of an ongoing study.

Keywords: selective laser melting, graphene, composite, mechanical property, tribological property

Procedia PDF Downloads 136
348 Insulin Resistance in Children and Adolescents in Relation to Body Mass Index, Waist Circumference and Body Fat Weight

Authors: E. Vlachopapadopoulou, E. Dikaiakou, E. Anagnostou, I. Panagiotopoulos, E. Kaloumenou, M. Kafetzi, A. Fotinou, S. Michalacos

Abstract:

Aim: To investigate the relation and impact of Body Mass Index (BMI), Waist Circumference (WC) and Body Fat Weight (BFW) on insulin resistance (MATSUDA INDEX < 2.5) in children and adolescents. Methods: Data from 95 overweight and obese children (47 boys and 48 girls) with mean age 10.7 ± 2.2 years were analyzed. ROC analysis was used to investigate the predictive ability of BMI, WC and BFW for insulin resistance and to find the optimal cut-offs. The overall performance of the ROC analysis was quantified by computing the area under the curve (AUC). Results: ROC curve analysis indicated that the optimal cut-off of WC for the prediction of insulin resistance was 97 cm, with sensitivity equal to 75% and specificity equal to 73.1%. AUC was 0.78 (95% CI: 0.63-0.92, p=0.001). The sensitivity and specificity of obesity for discriminating participants with insulin resistance from those without were equal to 58.3% and 75%, respectively (AUC=0.67). BFW had a borderline predictive ability for insulin resistance (AUC=0.58, 95% CI: 0.43-0.74, p=0.101). The predictive ability of WC was equivalent to the corresponding predictive ability of BMI (p=0.891). Obese subjects had 4.2 times greater odds of having insulin resistance (95% CI: 1.71-10.30, p < 0.001), while subjects with WC greater than 97 cm had 8.1 times greater odds of having insulin resistance (95% CI: 2.14-30.86, p=0.002). Conclusion: BMI and WC are important clinical factors that have a significant clinical relation with insulin resistance in children and adolescents. The cut-off of 97 cm for WC can identify children with a greater likelihood of insulin resistance.
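As a minimal sketch of this kind of ROC analysis (on toy data, not the study cohort), the AUC in its Mann-Whitney form and a Youden-optimal cut-off can be computed as:

```python
def roc_analysis(values, labels):
    """AUC (Mann-Whitney form) and Youden-optimal cut-off for one marker.

    values: marker measurements (e.g. WC in cm); labels: 1 if the subject
    is insulin resistant, 0 otherwise. A subject is classified positive
    when the marker is >= the cut-off.
    """
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    # AUC = P(marker of a random positive > marker of a random negative)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        sensitivity = sum(p >= cut for p in pos) / len(pos)
        specificity = sum(n < cut for n in neg) / len(neg)
        j = sensitivity + specificity - 1        # Youden's J statistic
        if j > best_j:
            best_cut, best_j = cut, j
    return auc, best_cut

# Toy data (NOT the study cohort): larger WC values are more common
# among the insulin-resistant subjects.
wc = [80, 85, 90, 92, 95, 97, 99, 102, 105, 110]
ir = [ 0,  0,  0,  0,  1,  0,  1,   1,   1,   1]
auc, cut = roc_analysis(wc, ir)
```

The study's reported cut-off maximizes a similar sensitivity/specificity trade-off over the real cohort; Youden's J is one common criterion for choosing the optimal point on the ROC curve.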

Keywords: body fat weight, body mass index, insulin resistance, obese children, waist circumference

Procedia PDF Downloads 320
347 Influence of Variable Calcium Content on Mechanical Properties of Geopolymer Synthesized at Different Temperature and Moisture Conditions

Authors: Suraj D. Khadka, Priyantha W. Jayawickrama

Abstract:

In the search for a sustainable construction material, geopolymers have been investigated over the past decades to evaluate their advantages over conventional products. Synthesis of geopolymer requires a source of aluminosilicate mixed with sodium hydroxide and sodium silicate in proportions that maintain a Si/Al molar ratio of 1-3 and a Na/Al molar ratio of unity. A comprehensive geopolymer study was performed with metakaolin and Class C fly ash as primary aluminosilicate sources. The synthesized geopolymer was analyzed for time-dependent viscosity, setting period, and strength at varying initial moisture content, curing temperature, and humidity. Different concentrations of Ca(OH)₂ and CaSO₄·2H₂O were added to vary the calcium content of the synthesized geopolymer, and the influence of calcium content on the unconfined compressive strength was analyzed. Finally, Scanning Electron Microscopy-Energy Dispersive Spectroscopy (SEM-EDS) was performed to investigate the hardened product. It was observed that the fly ash based geopolymer had a shorter setting time and a faster increase in viscosity than the geopolymer synthesized from metakaolin. This was primarily attributed to the higher calcium content resulting in the formation of calcium silicate hydrates (CSH), whose presence was verified by SEM-EDS. Spectral analysis of geopolymer prepared with added Ca(OH)₂ and CaSO₄·2H₂O indicated more CSH phases at higher concentrations. Lower concentrations of added calcium favored strength gain in the geopolymer; at higher calcium concentrations, however, a decrease in strength was observed. Strength also varied with the humidity of the initial curing conditions: at 100% humidity, geopolymer with added calcium presented higher strength than samples cured at ambient humidity (40%).
The strength reduction at lower humidity was primarily attributed to the loss of moisture from the specimens through the formation of CSH phases and through evaporation. For low-calcium-content geopolymers, strength increased with temperature, with the maximum strength observed at 200 °C. However, samples with higher calcium content demonstrated severe cracking, resulting in low strength at elevated temperatures.

Keywords: calcium silicate hydrates, geopolymer, humidity, Scanning Electron Microscopy-Energy Dispersive Spectroscopy, unconfined compressive strength

Procedia PDF Downloads 127
346 A Low-Latency Quadratic Extended Domain Modular Multiplier for Bilinear Pairing Based on Non-Least Positive Multiplication

Authors: Yulong Jia, Xiang Zhang, Ziyuan Wu, Shiji Hu

Abstract:

The calculation of bilinear pairings is the core of the SM9 algorithm, which relies on the underlying prime field and quadratic extension field arithmetic. Among the field operations, modular multiplication is the most time-consuming; the underlying modular multiplication algorithm is therefore optimized to maximize the operation speed of bilinear pairings. This paper improves the Montgomery algorithm using a modular multiplication method based on non-least-positive (NLP) representation combined with Karatsuba and schoolbook multiplication. In addition, exploiting the structure of multiplication in the quadratic extension field, an Fp₂-NLP modular multiplication algorithm for bilinear pairings is proposed, which effectively reduces the time of modular multiplication in the quadratic extension field. The multiplication unit in the quadratic extension field is implemented in an SMIC 55 nm process, and two implementation architectures are designed for different application scenarios. Compared with the existing literature, the output latency of this design reaches a minimum of 15 cycles. The shortest time for computing the (AB+CD)r⁻¹ mod form is 37.5 ns, and the overall area-time product (AT) is 11400. The final R-ate pairing hardware accelerator consumes 2670k equivalent logic gates and 1.8 ms of computing time in the 55 nm process.
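The NLP and Karatsuba refinements are specific to this design, but the classical Montgomery reduction they build on can be sketched in a few lines. A toy modulus is used below for readability; real SM9 arithmetic operates on a 256-bit prime.

```python
def montgomery_setup(n, r_bits):
    """Constants for Montgomery arithmetic modulo an odd n, with r = 2**r_bits."""
    r = 1 << r_bits
    n_prime = pow(-n, -1, r)       # n * n_prime ≡ -1 (mod r)
    return r, n_prime

def mont_mul(a, b, n, r_bits, r, n_prime):
    """Montgomery product a * b * r^(-1) mod n, for a, b in [0, n)."""
    t = a * b
    m = (t * n_prime) % r          # choose m so that t + m*n ≡ 0 (mod r)
    u = (t + m * n) >> r_bits      # exact division by r
    return u - n if u >= n else u  # single conditional subtraction

# Toy odd modulus with r = 256 > n and gcd(r, n) = 1.
n, r_bits = 97, 8
r, n_prime = montgomery_setup(n, r_bits)
a, b = 55, 72
# mont_mul yields a*b*r^(-1) mod n; one extra multiplication by r
# (mod n) recovers the plain product a*b mod n.
product = (mont_mul(a, b, n, r_bits, r, n_prime) * r) % n
```

The (AB+CD)r⁻¹ form mentioned in the abstract is the same reduction applied once to a sum of two products, which is what makes it attractive for the Fp₂ multiplications in pairing arithmetic.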

Keywords: SM9, hardware, NLP, Montgomery

Procedia PDF Downloads 3
345 Reinforcement Learning Optimization: Unraveling Trends and Advancements in Metaheuristic Algorithms

Authors: Rahul Paul, Kedar Nath Das

Abstract:

The field of machine learning (ML) is developing rapidly, producing a multitude of theoretical advances and extensive practical implementations across disciplines. The objective of ML is to enable machines to perform cognitive tasks by leveraging knowledge gained from prior experience and to address complex problems effectively, even in situations that deviate from previously encountered instances. Reinforcement Learning (RL) has emerged as a prominent subfield of ML and has recently gained considerable attention from researchers. This surge in interest can be attributed to the practical applications of RL, the increasing availability of data, and rapid advances in computing power. At the same time, optimization algorithms play a pivotal role in ML, and a multitude of proposals have been put forth to address optimization problems or improve optimization techniques within the domain. A thorough examination and implementation of optimization algorithms in the context of ML is therefore essential to guide the advancement of research in both optimization and ML. This article provides a comprehensive overview of the application of metaheuristic evolutionary optimization algorithms in conjunction with RL to address a diverse range of scientific challenges, and it delves into the open challenges and unresolved issues in the optimization of RL models.

Keywords: machine learning, reinforcement learning, loss function, evolutionary optimization techniques

Procedia PDF Downloads 74
344 A Multi-Scale Study of Potential-Dependent Ammonia Synthesis on IrO₂ (110): DFT, 3D-RISM, and Microkinetic Modeling

Authors: Shih-Huang Pan, Tsuyoshi Miyazaki, Minoru Otani, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang

Abstract:

Ammonia (NH₃) is crucial in renewable energy and agriculture, yet its traditional production via the Haber-Bosch process faces challenges due to the inherent inertness of nitrogen (N₂) and the need for high temperatures and pressures. The electrocatalytic nitrogen reduction reaction (ENRR) presents a more sustainable option, operating at ambient conditions. However, its advancement is limited by selectivity and efficiency challenges due to the competing hydrogen evolution reaction (HER). The critical roles of the protonation of N-species and of the HER highlight the necessity of selecting optimal catalysts and solvents to enhance ENRR performance. Notably, transition metal oxides, with their adjustable electronic states and excellent chemical and thermal stability, have shown promising ENRR characteristics. In this study, we use density functional theory (DFT) methods to investigate the ENRR mechanisms on IrO₂ (110), a material known for its tunable electronic properties and exceptional chemical and thermal stability. Employing the constant electrode potential (CEP) model, where the electrode-electrolyte interface is treated as a polarizable continuum with implicit solvation, and adjusting electron counts to equalize work functions in the grand canonical ensemble, we further incorporate the advanced 3D Reference Interaction Site Model (3D-RISM) to accurately determine the ENRR limiting potential across various solvents and pH conditions. Our findings reveal that the limiting potential for ENRR on IrO₂ (110) is significantly more favorable than that for HER, highlighting the efficiency of the IrO₂ catalyst for converting N₂ to NH₃. This is supported by the optimal *NH₃ desorption energy on IrO₂, which enhances the overall reaction efficiency. Microkinetic simulations further predict a promising NH₃ production rate, even at the solution's boiling point, reinforcing the catalytic viability of IrO₂ (110).
This comprehensive approach provides an atomic-level understanding of the electrode-electrolyte interface in ENRR, demonstrating the practical application of IrO₂ in electrochemical catalysis. The findings provide a foundation for developing more efficient and selective catalytic strategies, potentially revolutionizing industrial NH₃ production.
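In the computational-electrode picture commonly paired with CEP-style calculations, the limiting potential follows from the most uphill coupled proton-electron transfer step. A minimal sketch with hypothetical ΔG values (illustrative numbers only, not the paper's DFT results) is:

```python
def limiting_potential(step_dG_eV):
    """Limiting potential (V) in the computational-electrode picture.

    step_dG_eV: free-energy changes (eV) of the coupled proton-electron
    transfer steps at U = 0 V. An applied potential U shifts each
    reduction step by +eU, so the most uphill step sets U_L = -max(dG)/e.
    """
    return -max(step_dG_eV)

# Hypothetical dG values (eV) for an associative N2 -> NH3 pathway and
# for the single *H step of HER; chosen to illustrate the trend the
# abstract reports, not taken from the study.
nrr_steps = [0.95, 0.40, -0.20, 0.55, 0.30, -0.10]
her_steps = [1.20]
u_nrr = limiting_potential(nrr_steps)
u_her = limiting_potential(her_steps)
# A less negative (more favorable) NRR limiting potential mirrors the
# selectivity trend reported for IrO2 (110).
```

At U = U_L every electrochemical step becomes exergonic or thermoneutral, which is why the step with the largest positive ΔG dictates the potential.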

Keywords: density functional theory, electrocatalyst, nitrogen reduction reaction, electrochemistry

Procedia PDF Downloads 20
343 Green IT-Outsourcing Assurance Model for IT-Outsourcing Vendors

Authors: Siffat Ullah Khan, Rahmat Ullah Khan, Rafiq Ahmad Khan, Habibullah Khan

Abstract:

Green IT, or green computing, has emerged as a fast-growing business paradigm in recent years, aimed at developing energy-efficient software and peripheral devices. With the constant evolution of technology and the world's critical environmental status, private and public information technology (IT) businesses are moving towards sustainability. Through a systematic literature review and a questionnaire survey, we identified a total of 9 motivators faced by vendors in IT-outsourcing relationships, 7 of which were ranked as critical. We also identified a total of 21 practices for addressing these critical motivators. Based on these inputs, we have developed the Green IT-Outsourcing Assurance Model (GITAM) for IT-outsourcing vendors. The model comprises four levels, i.e. Initial, White, Green, and Grey, each comprising different critical motivators and their relevant practices. We conclude that our model, GITAM, will assist IT-outsourcing vendors in gauging their level in order to manage IT-outsourcing activities in a green and sustainable fashion, protecting the environment and reducing carbon emissions. The model will assist vendors in improving their current level by suggesting various practices, and it will contribute to the body of knowledge in the field of Green IT.

Keywords: Green IT-Outsourcing Assurance Model (GITAM), systematic literature review, empirical study, case study

Procedia PDF Downloads 252
342 Investigating the Usability of a University Website from the Users’ Perspective: An Empirical Study of Benue State University Website

Authors: Abraham Undu, Stephen Akuma

Abstract:

Websites are becoming a major component of an organization's success in our ever-globalizing, competitive world. A website represents an organization, projecting its principles, culture, values, vision, and perspectives; it is the interface connecting an organization and its clients. The university, as an academic institution, uses its website to communicate with and offer computing services to its stakeholders (students, staff, the host community, university management, etc.). Unfortunately, website designers often give more consideration to the technology, organizational structure, and business objectives of the university than to the usability of the site, and they end up designing university websites that do not meet the needs of the primary users. This empirical study investigated the Benue State University website from the point of view of students. The research used a standardized website usability questionnaire based on the five usability factors defined by WAMMI (Website Analysis and Measurement Inventory): attractiveness, controllability, efficiency, learnability, and helpfulness. The results show that the university website (https://portal.bsum.edu.ng/) has a neutral usability level because of the usability issues associated with it. The study recommends feasible solutions to improve the usability of the website from the users' perspective and provides a modified usability model for better evaluation of the Benue State University website.

Keywords: Benue State University, modified usability model, usability, usability factors

Procedia PDF Downloads 151
341 [Keynote Talk]: The Challenges and Solutions for Developing Mobile Apps in a Small University

Authors: Greg Turner, Bin Lu, Cheer-Sun Yang

Abstract:

As computing technology advances, smartphone applications can assist student learning in a pervasive way. For example, a mobile app for PA Common Trees, Pests, and Pathogens serves as an in-field reference tool that allows middle school students to learn about trees and associated pests/pathogens without carrying a textbook. Past research has studied the Mobile Application Software Development Life Cycle (MADLC), including traditional models such as the waterfall model and more recent agile methods; other work has examined issues in the software development process. Very little research addresses the simultaneous development of three heterogeneous mobile systems in a small university where the availability of developers is an issue. In this paper, we propose a hybrid of the waterfall model and the agile model, known in practice as the Relay Race Methodology (RRM), to reflect the concept of racing and relaying in scheduling. Based on the development project, we observe that the transition between any two phases is modeled naturally. Thus, we claim that the RRM model can provide a de facto rather than a de jure basis for the core concept in the MADLC. The paper first introduces the background of the project, then points out the challenges, followed by our solutions. Finally, lessons learned and future work are presented.

Keywords: agile methods, mobile apps, software process model, waterfall model

Procedia PDF Downloads 409
340 System Devices to Reduce Particulate Matter Concentrations in Railway Metro Systems

Authors: Armando Cartenì

Abstract:

Within sustainable transportation engineering, the problem of reducing particulate matter (PM) concentrations in railway metro systems has received little attention. It is well known that PM levels in railway metro systems are mainly produced by mechanical friction at the rail-wheel-brake interactions and by PM re-suspension caused by the turbulence generated by passing trains, posing serious risks to passenger health. Starting from these considerations, the aim of this research was twofold: i) to investigate particulate matter concentrations in a ‘traditional’ railway metro system; ii) to investigate particulate matter concentrations in a ‘high-quality’ metro system equipped with design devices for reducing PM concentrations: platform screen doors, rubber tyres, and an advanced ventilation system. Two measurement surveys were performed: one in the ‘traditional’ metro system of Naples (Italy) and another in the ‘high-quality’ rubber-tyred metro system of Turin (Italy). Experimental results for the ‘traditional’ metro system of Naples show that the average PM10 concentrations measured on the underground station platforms are very high, ranging between 172 and 262 µg/m³, while the average PM2.5 concentrations range between 45 and 60 µg/m³, posing serious risks to passenger health.
By contrast, the measurement results for the ‘high-quality’ metro system of Turin show that: i) the average PM10 (PM2.5) concentration measured on the underground station platform is 22.7 µg/m³ (16.0 µg/m³) with a standard deviation of 9.6 µg/m³ (7.6 µg/m³); ii) the indoor concentrations (both PM10 and PM2.5) are statistically lower than those measured outdoors (with a ratio of 0.9-0.8), meaning the indoor air quality is better than that of the urban ambient air; iii) PM concentrations in underground stations are correlated with train passage; iv) the concentrations inside trains (both PM10 and PM2.5) are statistically lower than those measured at the station platform (with a ratio of 0.7-0.8), suggesting that the air conditioning system inside trains promotes circulation that cleans the air. The comparison between the two case studies shows that a metro system designed with PM reduction devices can reduce PM concentrations by up to 11 times compared with a ‘traditional’ one. These results indicate that PM concentrations measured in a ‘high-quality’ metro system are significantly lower than those measured in a ‘traditional’ railway metro system, providing a basis for the design of devices for retrofitting metro systems around the world.

Keywords: air quality, pollutant emission, quality in public transport, underground railway, external cost reduction, transportation planning

Procedia PDF Downloads 210
339 Integrative Analysis of Urban Transportation Network and Land Use Using GIS: A Case Study of Siddipet City

Authors: P. Priya Madhuri, J. Kamini, S. C. Jayanthi

Abstract:

Assessment of land use and transportation networks is essential for sustainable urban growth, urban planning, efficient public transportation systems, and reducing traffic congestion. The study focuses on land use, population density, and their correlation with the road network for future development. The scope of the study covers an inventory and assessment of the road network dataset (line) at the city, zonal, or ward level, extracted from very high-resolution satellite data (spatial resolution < 0.5 m) at 1:4000 map scale and verified on the ground. The road network assessment is carried out by computing various indices that measure road coverage and connectivity. In this study, the road network is assessed at the municipal and ward levels. In order to identify gaps, road coverage and connectivity were associated with urban land use, built-up area, and population density in the study area, and ward-wise road connectivity and coverage maps have been prepared. To assess the relationship between road network metrics, correlation analysis is applied. The study's conclusions are beneficial for effective road network planning and for detecting gaps in the road network at the ward level in association with urban land use, existing built-up area, and population.
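The ward-level association described above can be illustrated as a correlation between a road coverage index and population density; a minimal stdlib-only sketch (the ward figures and the choice of metric are illustrative assumptions, not data from the study):

```python
import math

# Hypothetical ward-level figures: road length (km), ward area (km2),
# and population density (persons/km2). Values are illustrative only.
wards = [
    {"road_km": 12.0, "area_km2": 4.0, "pop_density": 5200},
    {"road_km": 8.5,  "area_km2": 5.0, "pop_density": 2100},
    {"road_km": 15.0, "area_km2": 3.0, "pop_density": 8900},
    {"road_km": 6.0,  "area_km2": 6.0, "pop_density": 1500},
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Road coverage index: km of road per km2 of ward area.
density = [w["road_km"] / w["area_km2"] for w in wards]
pop = [w["pop_density"] for w in wards]
r = pearson(density, pop)
```

A strong positive r on such data would indicate that denser wards are better served by roads; low-r wards flag the kind of gaps the study maps.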

Keywords: road connectivity, road coverage, road network, urban land use, transportation analysis

Procedia PDF Downloads 33
338 Overview of Multi-Chip Alternatives for 2.5 and 3D Integrated Circuit Packagings

Authors: Ching-Feng Chen, Ching-Chih Tsai

Abstract:

With transistor sizes gradually approaching their physical limits, the persistence of Moore’s Law is challenged by issues such as the demands on high numerical aperture (high-NA) lithography equipment and short-channel effects. In the context of the ever-increasing technical requirements of portable devices and high-performance computing, relying on the continuation of the law to enhance chip density will no longer support the prospects of the electronics industry. Weighing a chip’s power consumption, performance, area, cost, and cycle time to market (PPACC) is an updated benchmark for driving the evolution of advanced nanometer (nm) wafer nodes. The advent of two-and-a-half- and three-dimensional (2.5D and 3D) Very-Large-Scale Integration (VLSI) packaging based on Through Silicon Via (TSV) technology has updated traditional die assembly methods and provided a solution. This overview investigates up-to-date and cutting-edge packaging technologies for 2.5D and 3D integrated circuits (ICs) based on current transistor structures and technology nodes. The authors conclude that multi-chip solutions for 2.5D and 3D IC packaging are feasible for prolonging Moore’s Law.

Keywords: moore’s law, high numerical aperture, power consumption-performance-area-cost-cycle time to market, 2.5 and 3D- very-large-scale integration, packaging, through silicon via

Procedia PDF Downloads 114
337 Outdoor Visible Light Communication Channel Modeling under Fog and Smoke Conditions

Authors: Véronique Georlette, Sebastien Bette, Sylvain Brohez, Nicolas Point, Veronique Moeyaert

Abstract:

Visible light communication (VLC) is a communication technology in the optical wireless communication (OWC) family; it uses the visible and infrared spectrums to send data. Until now, this technology has been widely studied for indoor use-cases, but it is mature enough nowadays to consider its potential in outdoor environments. The main outdoor challenges are meteorological conditions and the presence of smoke due to fire or pollutants in urban areas. This paper proposes a methodology to assess the robustness of an outdoor VLC system given the outdoor conditions, and puts it into practice in two realistic scenarios: a VLC bus stop and a VLC streetlight. The methodology consists of computing the power margin available in the system, given all the characteristics of the VLC system and its surroundings. This is done with an outdoor VLC communication channel simulator developed in Python. The simulator quantifies the effects of fog and smoke, using models taken from the environmental and fire engineering literature, and computes the optical power reaching the receiver. These two phenomena impact the communication by increasing the total attenuation of the medium. The main conclusion drawn in this paper is that the levels of attenuation due to fog and smoke are of the same order of magnitude, with fog attenuation being the highest when visibility drops below 1 km. This gives a promising prospect for the deployment of outdoor VLC use-cases in the near future.
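The visibility-to-attenuation step in such a channel budget is often handled with the Kim model from the optical-link literature; the abstract does not state which model the simulator uses, so the following is a sketch of that common choice, not the paper's implementation:

```python
def kim_fog_attenuation_db_per_km(visibility_km, wavelength_nm=550.0):
    """Specific attenuation (dB/km) from the Kim visibility model:
    3.91/V * (lambda/550 nm)^(-q), with q a piecewise function of V."""
    V = visibility_km
    if V > 50:
        q = 1.6
    elif V > 6:
        q = 1.3
    elif V > 1:
        q = 0.16 * V + 0.34
    elif V > 0.5:
        q = V - 0.5
    else:
        q = 0.0  # dense fog: attenuation becomes wavelength-independent
    return (3.91 / V) * (wavelength_nm / 550.0) ** (-q)
```

At 550 nm and 1 km visibility this gives 3.91 dB/km; multiplying by the link length and subtracting from the link budget yields the power margin the methodology computes.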

Keywords: channel modeling, fog modeling, meteorological conditions, optical wireless communication, smoke modeling, visible light communication

Procedia PDF Downloads 150
336 Comparison Study of Machine Learning Classifiers for Speech Emotion Recognition

Authors: Aishwarya Ravindra Fursule, Shruti Kshirsagar

Abstract:

In the intersection of artificial intelligence and human-centered computing, this paper delves into speech emotion recognition (SER). It presents a comparative analysis of machine learning models such as K-Nearest Neighbors (KNN), logistic regression, support vector machines (SVM), decision trees, ensemble classifiers, and random forests, applied to SER. The research employs four datasets: CREMA-D, SAVEE, TESS, and RAVDESS. It focuses on extracting salient audio signal features such as the Zero Crossing Rate (ZCR), Chroma_stft, Mel Frequency Cepstral Coefficients (MFCC), the root mean square (RMS) value, and the Mel spectrogram. These features are used to train and evaluate the models’ ability to recognize eight types of emotions from speech: happy, sad, neutral, angry, calm, disgust, fear, and surprise. Among the models, the Random Forest algorithm demonstrated superior performance, achieving approximately 79% accuracy, which suggests its suitability for SER within the parameters of this study. The research contributes to SER by showcasing the effectiveness of various machine learning algorithms and feature extraction techniques, and the findings hold promise for the development of more precise emotion recognition systems in the future.
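One of the listed features, the zero crossing rate, can be computed directly from the raw signal; a minimal stdlib-only sketch using one common definition (the sample frames are illustrative, not taken from the paper's datasets):

```python
def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ.
    High for noisy/unvoiced audio, low for smooth/voiced audio."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:])
        if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

# A rapidly oscillating frame (high ZCR) vs. a slowly varying one (low ZCR).
noisy = [1, -1, 1, -1, 1, -1, 1, -1]
smooth = [0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2]
```

Per-frame values like this one, alongside MFCC and spectrogram features, form the feature vectors the classifiers are trained on.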

Keywords: comparison, ML classifiers, KNN, decision tree, SVM, random forest, logistic regression, ensemble classifiers

Procedia PDF Downloads 45
335 Weighted Data Replication Strategy for Data Grid Considering Economic Approach

Authors: N. Mansouri, A. Asadi

Abstract:

Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids; however, replication must be used wisely because the storage capacity of each Grid site is limited, so it is important to design an effective strategy for the replica replacement task. In this paper, a dynamic data replication strategy called Enhanced Latest Access Largest Weight (ELALW), an enhanced version of the Latest Access Largest Weight strategy, is proposed. ELALW replaces replicas based on the expected number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when various sites hold replicas. The proposed replica selection chooses the best replica location from among the many replicas based on response time, which is determined by considering the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results obtained with OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage, and storage resource usage.
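The response-time-based selection step can be sketched by summing the cost terms the abstract names and taking the minimum; the field names and weights below are illustrative (distance is folded into the transfer time), not ELALW's exact formula:

```python
def estimated_response_time(replica):
    """Estimated response time as the sum of the cost terms named above."""
    return (
        replica["transfer_time"]      # data transfer time (distance-dependent)
        + replica["storage_latency"]  # storage access latency at the site
        + replica["queue_wait"]       # requests already waiting in the queue
    )

def select_best_replica(replicas):
    """Pick the replica location with the lowest estimated response time."""
    return min(replicas, key=estimated_response_time)

replicas = [
    {"site": "A", "transfer_time": 4.0, "storage_latency": 0.5, "queue_wait": 2.0},
    {"site": "B", "transfer_time": 6.0, "storage_latency": 0.2, "queue_wait": 0.1},
    {"site": "C", "transfer_time": 3.0, "storage_latency": 0.5, "queue_wait": 5.0},
]
```

Note that site B wins despite the longest transfer time, because its storage queue is nearly empty, which is exactly why the queue term matters in the selection.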

Keywords: data grid, data replication, simulation, replica selection, replica placement

Procedia PDF Downloads 260
334 Challenges and Opportunities in Computing Logistics Cost in E-Commerce Supply Chain

Authors: Pramod Ghadge, Swadesh Srivastava

Abstract:

The revenue of a logistics company depends on how the logistics cost of a shipment is calculated. The logistics cost of a shipment is a function of the distance and speed of its travel through a particular network, its volumetric size, and its dead weight. Logistics billing is based mainly on the consumption of the scarce resource (the space or weight-carrying capacity of a carrier). A shipment’s size or dead weight is a function of product and packaging weight, dimensions, and flexibility. Hence, to arrive at a standard methodology for computing an accurate cost to bill the customer, the interplay among the above-mentioned physical attributes, along with their measurement, plays a key role. This becomes even more complex for an e-commerce company like Flipkart, which handles shipments from both warehouses and the marketplace in an unorganized, non-standard market like India. In this paper, we explore various methodologies for defining a standard way of billing non-standard shipments across a wide range of sizes, shapes, and dead weights. These include using historical volumetric/dead weight data to arrive at a factor for computing the logistics cost of a shipment, and calculating the real/contour volume of a shipment to address the problem of irregular shipment shapes, which cannot be solved by conventional bounding-box volume measurements. We also discuss certain key business practices and operational quality considerations needed to bring standardization and drive appropriate ownership in the ecosystem.
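The interplay between volumetric size and dead weight described above is commonly resolved by billing on a chargeable weight; a minimal sketch (the divisor of 5000 cm³/kg is a widely used carrier convention, not a figure from the paper):

```python
def volumetric_weight_kg(length_cm, width_cm, height_cm, divisor=5000):
    """Bounding-box volumetric weight: volume divided by a carrier divisor."""
    return (length_cm * width_cm * height_cm) / divisor

def chargeable_weight_kg(length_cm, width_cm, height_cm, dead_weight_kg):
    """Billing weight: whichever of volumetric and dead weight is larger,
    since the scarce resource is either space or carrying capacity."""
    return max(volumetric_weight_kg(length_cm, width_cm, height_cm),
               dead_weight_kg)
```

A light but bulky 50×40×30 cm parcel weighing 2 kg bills at its 12 kg volumetric weight; a dense 20×20×20 cm parcel weighing 5 kg bills at its dead weight. The contour-volume approach in the paper replaces the bounding-box volume here with the shipment's real volume.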

Keywords: contour volume, logistics, real volume, volumetric weight

Procedia PDF Downloads 269
333 Perception of Eco-Music From the Contents of the Earth’s Sound Ecosystem

Authors: Joni Asitashvili, Eka Chabashvili, Maya Virsaladze, Alexander Chokhonelidze

Abstract:

Studying the soundscape is a major challenge in many countries of the civilized world today. The sound environment and music itself are part of the Earth's ecosystem; therefore, researching their positive or negative impact is important for a clean and healthy environment. The acoustics of nature gave people many musical ideas, and people enriched musical features and performance skills by imitating the surrounding sounds. For example, populations surrounded by mountains invented the technique of antiphonal singing, which mimics the effect of an echo. The Canadian composer Raymond Murray Schafer viewed the world as a kind of musical instrument with an ever-renewing tuning. He coined the term "soundscape" as a name for the natural environmental sound, including the sound field of the Earth, from which the "music of nature" can be said to be constructed. In the 21st century, a new field, ecomusicology, has emerged within musical art to study the sound ecosystem and the various issues related to it. According to Aaron Allen, ecomusicology considers the interconnections between music, culture, and nature. Eco-music is a field of ecomusicology concerned with the depiction and realization of practical processes using modern composition techniques: finding an artificial sound source (instrumental or electronic) that will blend into the soundscape of "sound oases", and creating compositions that sound in harmony with the vibrations of humans, nature, the environment, and the micro- and macrocosm as a whole. Currently, we are exploring the ambient sound of the Georgian urban and suburban environment to discover "sound oases" and compose eco-music works. We call a "sound oasis" an environment with a specific ecosystem sound that can be used in a musical piece as an instrument. The most interesting early examples of eco-music are the round dances, which were already created in the BC era; in round dances, people felt a united energy. This urge to unite reveals itself in our age too, manifesting in a variety of social media. The virtual world, however, is not enough for healthy interaction; we therefore created a plan for a "contemporary round dance" in a sound oasis found during an expedition in Georgian caves, where people interact with the cave's soundscape and eco-music, feel each other, share energy, and listen to the sound of the earth. This project can be considered a contemporary round dance: a long improvisation and a particular type of art therapy in which everyone can participate in an artistic process. We present the research results of our experimental eco-music performance.

Keywords: eco-music, environment, sound, oasis

Procedia PDF Downloads 61
332 Data-Driven Monitoring and Control of Water Sanitation and Hygiene for Improved Maternal Health in Rural Communities

Authors: Paul Barasa Wanyama, Tom Wanyama

Abstract:

Governments and development partners in low-income countries often prioritize building Water Sanitation and Hygiene (WaSH) infrastructure of healthcare facilities to improve maternal healthcare outcomes. However, the operation, maintenance, and utilization of this infrastructure are almost never considered. Many healthcare facilities in these countries use untreated water that is not monitored for quality or quantity. Consequently, it is common to run out of water while a patient is on their way to or in the operating theater. Further, the handwashing stations in healthcare facilities regularly run out of water or soap for months, and the latrines are typically not clean, in part due to the lack of water. In this paper, we present a system that uses Internet of Things (IoT), big data, cloud computing, and AI to initiate WaSH security in healthcare facilities, with a specific focus on maternal health. We have implemented smart sensors and actuators to monitor and control WaSH systems from afar to ensure their objectives are achieved. We have also developed a cloud-based system to analyze WaSH data in real time and communicate relevant information back to the healthcare facilities and their stakeholders (e.g., medical personnel, NGOs, ministry of health officials, facilities managers, community leaders, pregnant women, and new mothers and their families) to avert or mitigate problems before they occur.

Keywords: WaSH, internet of things, artificial intelligence, maternal health, rural communities, healthcare facilities

Procedia PDF Downloads 16
331 Pushing the Boundary of Parallel Tractability for Ontology Materialization via Boolean Circuits

Authors: Zhangquan Zhou, Guilin Qi

Abstract:

Materialization is an important reasoning service for applications built on the Web Ontology Language (OWL). To make materialization efficient in practice, current research focuses on deciding the tractability of an ontology language and designing parallel reasoning algorithms. However, some well-known large-scale ontologies, such as YAGO, have been shown to perform well under parallel reasoning even though they are expressed in ontology languages that are not parallelly tractable, i.e., for which reasoning is inherently sequential in the worst case. This motivates us to study the parallel tractability of ontology materialization from a theoretical perspective. That is, we aim to identify the ontologies for which materialization is parallelly tractable, i.e., in the complexity class NC. Since NC is defined in terms of Boolean circuits, which are widely used to investigate parallel computing problems, we first transform the problem of materialization into the evaluation of Boolean circuits, and then study the problem of parallel tractability based on circuits. In this work, we focus on datalog-rewritable ontology languages. We use Boolean circuits to identify two classes of datalog-rewritable ontologies (called parallelly tractable classes) such that materialization over them is parallelly tractable. We further investigate the parallel tractability of materialization for a datalog-rewritable OWL fragment, DHL (Description Horn Logic). Based on the above results, we analyze real-world datasets and show that many ontologies expressed in DHL belong to the parallelly tractable classes.
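Materialization over a datalog rewriting is, operationally, a bottom-up fixpoint computation; a minimal sketch with a toy subclass-transitivity rule (the rule and class names are illustrative, not drawn from YAGO or DHL):

```python
def materialize(facts, rules):
    """Naive bottom-up fixpoint: apply all rules until no new facts appear.
    Each rule maps the current fact set to a set of derived facts."""
    derived = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(derived)
        if new <= derived:        # fixpoint reached
            return derived
        derived |= new

# Toy rule: subClassOf is transitive.
def transitivity(facts):
    return {
        ("subClassOf", a, c)
        for (p1, a, b) in facts if p1 == "subClassOf"
        for (p2, b2, c) in facts if p2 == "subClassOf" and b2 == b
    }

facts = {("subClassOf", "Cat", "Mammal"), ("subClassOf", "Mammal", "Animal")}
closure = materialize(facts, [transitivity])
```

The paper's question is whether this iteration, which is sequential here, can be flattened into a Boolean circuit of polylogarithmic depth for the ontology at hand, i.e., whether the fixpoint is parallelly tractable.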

Keywords: ontology materialization, parallel reasoning, datalog, Boolean circuit

Procedia PDF Downloads 271
330 Segmentation of the Liver and Spleen From Abdominal CT Images Using Watershed Approach

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Segmentation is an important step in the processing and interpretation of medical images. In this paper, we focus on the segmentation of the liver and spleen from abdominal computed tomography (CT) images. The importance of our study comes from the fact that segmenting regions of interest (ROIs) from CT images is usually a difficult task: their gray levels are similar to those of other organs, and the ROIs are connected to the ribs, heart, kidneys, etc. Our proposed method is based on anatomical information and the mathematical morphology tools used in the image processing field. First, we remove the surrounding and connected organs and tissues by applying morphological filters; this step makes the extraction of the regions of interest easier. The second step consists of improving the quality of the image gradient: we propose reducing its deficiencies by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. To validate the proposed segmentation technique, we tested it on several images. Our segmentation approach is evaluated by comparing our results with the manual segmentation performed by an expert; the experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented contours (liver and spleen) and the contours traced manually by radiological experts.
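The sensitivity and specificity used for the evaluation follow from a pixel-wise comparison of the automatic and expert masks; a minimal sketch (the toy masks are illustrative, not the paper's data):

```python
def sensitivity_specificity(predicted, reference):
    """Pixel-wise sensitivity and specificity of a binary segmentation
    mask against a reference mask (both flat sequences of 0/1)."""
    tp = sum(1 for p, r in zip(predicted, reference) if p == 1 and r == 1)
    tn = sum(1 for p, r in zip(predicted, reference) if p == 0 and r == 0)
    fp = sum(1 for p, r in zip(predicted, reference) if p == 1 and r == 0)
    fn = sum(1 for p, r in zip(predicted, reference) if p == 0 and r == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy 8-pixel example: one missed organ pixel, one false alarm.
ref  = [1, 1, 1, 0, 0, 0, 0, 0]   # expert-traced mask
pred = [1, 1, 0, 1, 0, 0, 0, 0]   # semi-automatic mask
sens, spec = sensitivity_specificity(pred, ref)
```

Sensitivity measures how much of the expert-traced organ is recovered; specificity measures how well surrounding tissue is excluded.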

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 495
329 Role of mHealth in Effective Response to Disaster

Authors: Mohammad H. Yarmohamadian, Reza Safdari, Nahid Tavakoli

Abstract:

In recent years, many countries have suffered various natural disasters, and disaster response continues to face challenges in the health care sector in all countries. Information and communication management is a significant challenge at a disaster scene. During the last decades, rapid advances in information technology have made it possible to manage information effectively and improve communication in health care settings. Information technology is a vital solution for effective response to disasters and emergencies: if an efficient ICT-based health information system is available, it will be highly valuable in such situations. In particular, mobile technology represents a computing infrastructure that is accessible, convenient, inexpensive, and easy to use. Most projects have not yet reached the deployment stage, but evaluation exercises show that mHealth should allow faster processing and transport of patients, improved accuracy of triage, and better monitoring of unattended patients at a disaster scene. Since cell phones are highly prevalent among the world population, health care providers and managers are expected to take measures to apply this technology to improve patient safety and public health in disasters. At present, there are challenges in the utilization of mHealth in disasters, such as structural and financial issues in our country. In this paper, we discuss the benefits and challenges of mHealth technology in disaster settings, considering connectivity, usability, intelligibility, communication, and teaching, with a view to implementing this technology for disaster response.

Keywords: information technology, mhealth, disaster, effective response

Procedia PDF Downloads 440
328 Visual Template Detection and Compositional Automatic Regular Expression Generation for Business Invoice Extraction

Authors: Anthony Proschka, Deepak Mishra, Merlyn Ramanan, Zurab Baratashvili

Abstract:

Small and medium-sized businesses receive over 160 billion invoices every year. Since these documents exhibit many subtle differences in layout and text, automatically extracting structured fields such as sender name, amount, and VAT rate from them is an open research question. In this paper, existing work in template-based document extraction is extended, and a system is devised that is able to reliably extract all required fields for up to 70% of all documents in the data set, more than any previously reported method. Approaches are described for 1) detecting, through visual features, which template a given document belongs to, 2) automatically generating extraction rules for a new template by composing regular expressions from multiple components, and 3) computing confidence scores that indicate the accuracy of the automatic extractions. The system can generate templates with as little as one training sample and only requires the ground-truth field values instead of detailed annotations, such as bounding boxes, that are hard to obtain. The system is deployed and used inside commercial accounting software.
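Rule generation by composing regular expressions from components (step 2 above) can be illustrated with a minimal sketch; the component patterns and the invoice line are hypothetical, not the paper's actual rules:

```python
import re

# Reusable component patterns, composed into one field-extraction rule.
COMPONENTS = {
    "label":  r"Total\s+amount",                       # field label text
    "sep":    r"[:\s]*",                               # label/value separator
    "amount": r"(?P<amount>\d{1,3}(?:,\d{3})*\.\d{2})",  # monetary value
}

def compose(*names):
    """Concatenate named components into a single compiled regex."""
    return re.compile("".join(COMPONENTS[n] for n in names))

rule = compose("label", "sep", "amount")
match = rule.search("Invoice 4711\nTotal amount: 1,234.50 EUR")
```

Because the rule is assembled from small named parts, a new template only needs the parts recombined around its ground-truth field values rather than a hand-written regex per template.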

Keywords: data mining, information retrieval, business, feature extraction, layout, business data processing, document handling, end-user trained information extraction, document archiving, scanned business documents, automated document processing, F1-measure, commercial accounting software

Procedia PDF Downloads 130
327 Automatic Identification and Monitoring of Wildlife via Computer Vision and IoT

Authors: Bilal Arshad, Johan Barthelemy, Elliott Pilton, Pascal Perez

Abstract:

Getting reliable, informative, and up-to-date information about the location, mobility, and behavioural patterns of animals will enhance our ability to research and preserve biodiversity. The fusion of infra-red sensors and camera traps offers an inexpensive way to collect wildlife data in the form of images. However, extracting useful data from these images, such as identifying and counting animals, remains a manual, time-consuming, and costly process. In this paper, we demonstrate that such information can be retrieved automatically using state-of-the-art deep learning methods. Another major challenge ecologists face is counting a single animal multiple times because it reappears in other images taken by the same or other camera traps. Nonetheless, such information can be extremely useful for tracking wildlife and understanding its behaviour. To tackle the multiple-count problem, we have designed a meshed network of camera traps that share the captured images along with timestamps, cumulative counts, and the dimensions of the animal. The proposed method leverages edge computing to support real-time tracking and monitoring of wildlife. This method has been validated in the field and can easily be extended to other applications focusing on wildlife monitoring and management, where traditional monitoring is expensive and time-consuming.
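The shared timestamps and counts in the meshed network can support a simple cross-trap de-duplication rule; a minimal sketch (the record fields and the 300 s merge window are illustrative assumptions, not the validated field method):

```python
# Sightings shared across the trap mesh: (trap_id, species, unix_time).
sightings = [
    ("trap-1", "fox", 1000),
    ("trap-2", "fox", 1090),   # same fox re-detected 90 s later
    ("trap-1", "fox", 5000),   # a later, separate sighting
    ("trap-3", "deer", 1020),
]

def unique_count(sightings, window_s=300):
    """Count sightings per species, merging detections of the same
    species that fall within `window_s` of the previous detection."""
    counts = {}
    last_seen = {}
    for _, species, t in sorted(sightings, key=lambda s: s[2]):
        if species not in last_seen or t - last_seen[species] > window_s:
            counts[species] = counts.get(species, 0) + 1
        last_seen[species] = t
    return counts
```

In the paper's system, the shared animal dimensions and images would additionally let traps check whether two detections are the same individual, not just the same species.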

Keywords: computer vision, ecology, internet of things, invasive species management, wildlife management

Procedia PDF Downloads 138
326 Development of a Test Plant for Parabolic Trough Solar Collectors Characterization

Authors: Nelson Ponce Jr., Jonas R. Gazoli, Alessandro Sete, Roberto M. G. Velásquez, Valério L. Borges, Moacir A. S. de Andrade

Abstract:

The search for increased efficiency in generation systems has been of great importance in recent years to reduce the impact of greenhouse gas emissions and global warming. For clean energy sources, such as generation systems that use concentrated solar power technology, this efficiency improvement translates into a lower investment per kW, improving a project’s viability. For the specific case of parabolic trough solar concentrators, performance is strongly linked to the geometric precision of assembly and to the individual efficiencies of the main components, such as the parabolic mirrors and receiver tubes. Thus, an accurate efficiency analysis should be conducted empirically, under mounting and operating conditions like those observed in the field. The Brazilian power generation and distribution company Eletrobras Furnas, through the R&D program of the National Agency of Electrical Energy, has developed a plant for testing parabolic trough concentrators, located in Aparecida de Goiânia, in the state of Goiás, Brazil. The main objective of this test plant is the characterization of the prototype concentrator being developed by the company itself in partnership with Eudora Energia, seeking to optimize it to obtain the same or better efficiency than commercially available concentrators of this type. The test plant is a closed piping system in which a pump circulates a heat transfer fluid, also called HTF, through the concentrator being characterized. A flow meter and two temperature transmitters, installed at the inlet and outlet of the concentrator, record the parameters necessary to determine the power absorbed by the system and then calculate its efficiency based on the direct solar irradiation available during the test period. After the HTF gains heat in the concentrator, it flows through heat exchangers that dissipate the acquired energy to the ambient air. 
The goal is to keep the concentrator inlet temperature constant throughout the desired test period. The plant performs the tests autonomously: the operator enters the HTF flow rate, the desired concentrator inlet temperature, and the test time into the control system. This paper presents the methodology employed for design and operation, as well as the instrumentation needed for the development of a parabolic trough test plant, serving as a guideline for standardizing such facilities.
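From the logged flow rate and inlet/outlet temperatures, the absorbed power and collector efficiency follow directly; a minimal sketch (the fluid specific heat, aperture area, and readings are illustrative values, not the plant's data):

```python
def collector_efficiency(flow_kg_s, cp_j_kg_k, t_in_c, t_out_c,
                         dni_w_m2, aperture_m2):
    """Thermal efficiency: power absorbed by the HTF divided by the
    direct normal irradiance intercepted by the collector aperture."""
    power_absorbed_w = flow_kg_s * cp_j_kg_k * (t_out_c - t_in_c)
    power_available_w = dni_w_m2 * aperture_m2
    return power_absorbed_w / power_available_w

# Illustrative test-run readings.
eta = collector_efficiency(
    flow_kg_s=1.2, cp_j_kg_k=2300.0,   # assumed thermal-oil specific heat
    t_in_c=150.0, t_out_c=165.0,
    dni_w_m2=900.0, aperture_m2=80.0,
)
```

Holding the inlet temperature constant, as the control system does, means the measured temperature rise reflects the concentrator alone rather than drift in the loop.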

Keywords: parabolic trough, concentrated solar power, CSP, solar power, test plant, energy efficiency, performance characterization, renewable energy

Procedia PDF Downloads 118
325 Deregulation of Thorium for Room Temperature Superconductivity

Authors: Dong Zhao

Abstract:

Extensive research toward applicable room-temperature superconductors has met a major barrier: the record Tc of 135 K achieved with cuprates has been idling for decades. Although Tc values higher than the cuprates’ have been achieved by pressurizing certain compounds composed of light elements, such as LaH10 and metallic hydrogen, room-temperature superconductivity under ambient pressure remains the preferred approach and is believed to be the ultimate solution for many applications. Amid the race to find a breakthrough toward this room-temperature Tc milestone, a report stated the discovery of a possible high-temperature superconductor, the thorium sulfide ThS. Apparently, ThS’s Tc can be at room temperature or even higher, because ThS revealed an unusual property: the coexistence of high electrical conductivity and diamagnetism. Note that this coexistence is characteristic of superconductors, meaning ThS would also be in a superconducting state; surprisingly, ThS would then be superconducting at least at room temperature and under atmospheric pressure. Further study of ThS’s electrical and magnetic properties, in comparison with thorium di-iodide ThI2, identified its molecular configuration as [Th4+(e-)2]S. This means the ThS cation is composed of a [Th4+(e-)2]2+ cation core, built from a thorium atom in the +4 oxidation state plus an electron pair on that atom, giving the [Th4+(e-)2]2+ cation core an oxidation state of +2. This special construction may underlie ThS’s room-temperature superconductivity because of the characteristic electron lone pair residing on the thorium atom. Since the study of thorium chemistry was largely carried out before the 1970s, exploring ThS’s possible room-temperature superconductivity would require resynthesizing ThS. This re-preparation would provide samples and enable professionals to verify ThS’s room-temperature superconductivity. Regrettably, current regulation prevents almost everyone from gaining access to thorium metal or thorium compounds due to the radioactive nature of thorium-232 (Th-232), even though its radioactivity is extremely low, with a half-life of 14.05 billion years. Consequently, further experimental confirmation of ThS’s high-temperature superconductivity will be impossible unless the use of thorium metal and related thorium compounds is deregulated. This deregulation would allow researchers to obtain the necessary starting materials for the study of ThS. Hopefully, confirmation of ThS’s room-temperature superconductivity can not only establish a route to applicable superconductors but also pave the way to a full understanding of the mechanism of superconductivity.

Keywords: co-existence of high electrical conductivity and diamagnetism, electron pairing and electron lone pair, room temperature superconductivity, the special molecular configuration of thorium sulfide ThS

Procedia PDF Downloads 49
324 Neuron Efficiency in Fluid Dynamics and Prediction of Groundwater Reservoirs' Properties Using Pattern Recognition

Authors: J. K. Adedeji, S. T. Ijatuyi

Abstract:

This research applies a neural network with pattern recognition to study fluid dynamics and predict the properties of groundwater reservoirs. Conventional manual geophysical survey methods have proven inadequate in basement environments; hence, an intelligent computing approach such as neural network prediction is needed. A non-linear neural network with an 8-bit XOR (exclusive OR) output configuration has been used in this research to predict the nature of groundwater reservoirs and the fluid dynamics of a typical crystalline basement rock. The control variables are the apparent resistivity of the weathered layer (p1), the apparent resistivity of the fractured layer (p2), and the depth (h), while the dependent variable is the flow parameter (F=λ). The training algorithm was back-propagation, coded in C++, with 300 epoch runs. The neural network was able to map out the flow channels and detect how they behave to form viable storage within the strata. The neural network model showed that an important variable, gr (gravitational resistance), can be deduced from the elevation and the apparent resistivity pa. The model results from SPSS showed that the coefficients a, b, and c are statistically significant, with a reduced standard error at the 5% level.
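The back-propagation training loop described above can be illustrated on the basic XOR problem; a minimal stdlib-only sketch of a 2-2-1 sigmoid network (the architecture, learning rate, and epoch count are assumptions for illustration, not the paper's 8-bit C++ implementation):

```python
import math, random

random.seed(1)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))

# XOR training pairs: inputs and target output.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# 2-2-1 network: each hidden/output unit has two weights plus a bias.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    return h, sig(w2[0] * h[0] + w2[1] * h[1] + w2[2])

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

initial = loss()
lr = 0.5
for _ in range(5000):                 # epochs of online back-propagation
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)              # output delta
        d_h = [d_out * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):                               # output layer update
            w2[j] -= lr * d_out * h[j]
        w2[2] -= lr * d_out
        for j in range(2):                               # hidden layer update
            w1[j][0] -= lr * d_h[j] * x[0]
            w1[j][1] -= lr * d_h[j] * x[1]
            w1[j][2] -= lr * d_h[j]
final = loss()
```

With enough epochs, the squared-error loss drops and the four XOR outputs approach their targets; how quickly depends on the random initialization, which is why the paper's 300-epoch figure is specific to its own configuration.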

Keywords: gravitational resistance, neural network, non-linear, pattern recognition

Procedia PDF Downloads 212