Search results for: isomorphism of graph
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 457

67 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be challenging for reservoir engineers due to the various diagenetic processes that produce a wide range of properties throughout the reservoir. A good estimation of reservoir heterogeneity, defined as the variation in rock properties with location in a reservoir or formation, can help in modeling the reservoir and thus offer a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe this heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods describe the variations in rock properties through the similarities of the environments in which different beds were deposited. To characterize the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum engineering from that point of view. In this paper, we correlate the statistical-based Lorenz method with a petroleum concept, the Kozeny-Carman equation, and derive the straight-line Lorenz plot for a homogeneous system. Finally, we apply the two methods to a heterogeneous field in southern Iran and discuss each separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
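As a rough illustration of the Lorenz coefficient described above, the following sketch computes L from the cumulative flow-capacity versus storage-capacity curve; the layer data in the test are hypothetical and not from the paper:

```python
def lorenz_coefficient(perm, poro, thickness):
    """Lorenz coefficient L in [0, 1]; L = 0 for a perfectly homogeneous system."""
    # Order layers by flow capacity per unit storage (k/phi), descending
    layers = sorted(zip(perm, poro, thickness),
                    key=lambda t: t[0] / t[1], reverse=True)
    total_kh = sum(k * h for k, p, h in layers)   # total flow capacity
    total_ph = sum(p * h for k, p, h in layers)   # total storage capacity
    xs, ys = [0.0], [0.0]
    ck = cp = 0.0
    for k, p, h in layers:
        ck += k * h
        cp += p * h
        xs.append(cp / total_ph)                   # cumulative storage fraction
        ys.append(ck / total_kh)                   # cumulative flow fraction
    # Area under the Lorenz curve by the trapezoid rule;
    # L = 2 * (area above the 45-degree homogeneity line)
    area = sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))
    return 2 * (area - 0.5)
```

For equal permeability and porosity in every layer the curve coincides with the diagonal and L is zero, reproducing the straight-line homogeneous case the paper derives.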

Keywords: carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L)

Procedia PDF Downloads 204
66 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis

Authors: Meng Su

Abstract:

High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
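The Diffusion Map step described above can be sketched as follows; this uses the row-stochastic diffusion operator (a common variant of the graph Laplacian construction), and the Gaussian kernel bandwidth `eps` is an illustrative assumption:

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2):
    """Embed rows of X into n_components diffusion coordinates."""
    # Gaussian affinity on pairwise squared distances
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    # Row-normalise into a Markov (diffusion) operator
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1)
    idx = order[1:n_components + 1]
    return (vecs[:, idx] * vals[idx]).real
```

The retained eigenvectors, scaled by their eigenvalues, give the low-dimensional coordinates in which the DPM would then be trained.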

Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis

Procedia PDF Downloads 86
65 Social Media Idea Ontology: A Concept for Semantic Search of Product Ideas in Customer Knowledge through User-Centered Metrics and Natural Language Processing

Authors: Martin Häusl, Maximilian Auch, Johannes Forster, Peter Mandl, Alexander Schill

Abstract:

In order to survive on the market, companies must constantly develop improved and new products. These products are designed to serve the needs of their customers in the best possible way. The creation of new products is also called innovation and is primarily driven by a company's internal research and development department. However, a new approach has been emerging for some years now that involves external knowledge in the innovation process. This approach, called open innovation, identifies customer knowledge as the most important source in the innovation process. This paper presents a concept for using social media posts as an external source to support the open innovation approach in its initial phase, the ideation phase. For this purpose, the social media posts are semantically structured with the help of an ontology, and the authors are evaluated using graph-theoretical metrics such as density. For the structuring and evaluation of relevant social media posts, we also use techniques from Natural Language Processing, e.g. Named Entity Recognition, specific dictionaries, triple taggers, and part-of-speech taggers. The selection and evaluation of the tools used are discussed in this paper. Using our ontology and metrics to structure social media posts enables users to semantically search these posts for new product ideas and thus gain improved insight into external sources such as customer needs.

Keywords: idea ontology, innovation management, semantic search, open information extraction

Procedia PDF Downloads 173
64 Thermoluminescence Characteristic of Nanocrystalline BaSO4 Doped with Europium

Authors: Kanika S. Raheja, A. Pandey, Shaila Bahl, Pratik Kumar, S. P. Lochab

Abstract:

This paper studies BaSO4 nanophosphor doped with europium, in which mainly the concentration of the rare-earth impurity Eu (0.05, 0.1, 0.2, 0.5, and 1 mol%) has been varied. A comparative study of the thermoluminescence (TL) properties of the nanophosphor has also been carried out against a well-known standard dosimetry material, TLD-100. First, a number of samples were successfully prepared by the chemical co-precipitation method. The whole lot was then compared to the well-established standard material (TLD-100) for TL sensitivity. BaSO4:Eu (0.2 mol%) showed the highest sensitivity of the lot. It was also found that, compared to the standard TLD-100, BaSO4:Eu (0.2 mol%) showed surprisingly high sensitivity over a large range of doses. The TL response curve of all prepared samples has also been studied over a wide range of gamma doses, 10 Gy to 2 kGy. Almost all the BaSO4:Eu samples showed remarkable linearity over a broad range of doses, which is a characteristic feature of a good TL dosimeter; the response remained linear even beyond 1 kGy. Thus, the nanophosphor has been successfully optimised for dopant concentration to achieve its highest TL sensitivity. Further, the comparative study revealed that the optimised sample shows better TL sensitivity and a linear response curve over an exceptionally wide range of gamma (Co-60) doses compared to the standard TLD-100, which makes the optimised BaSO4:Eu promising as an efficient gamma radiation dosimeter. Lastly, the phosphor has been optimised for its annealing temperature to acquire the best results, and its fading and reusability properties were also studied.
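The dose-response linearity claimed above is typically judged by a straight-line fit and its coefficient of determination; a minimal sketch (the dose/response pairs in the test are hypothetical, not the paper's data) is:

```python
def linear_fit_r2(x, y):
    """Least-squares line y = slope*x + intercept, plus R^2 goodness of fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot
```

An R^2 close to 1 over the full 10 Gy to 2 kGy range is what marks the response curve as dosimetrically linear.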

Keywords: gamma radiation, nanoparticles, radiation dosimetry, thermoluminescence

Procedia PDF Downloads 417
63 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear for practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and the time, and therefore cost, of applying these methods in the drug development decision-making cycle. To the best of the authors' knowledge, the optimal approach varies depending on the dataset, and engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on, and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 58
62 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0

Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini

Abstract:

Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which creates challenges around data management and governance. Furthermore, companies are also challenged to integrate data from multiple systems and technologies. Despite these pains, companies still pursue digitalization, because by embracing advanced technologies they can improve efficiency, quality, decision-making, and customer experience while also creating new business models and revenue streams. This paper focuses on the issue that data is stored in data silos with different schemas and structures. Conventional approaches to addressing this issue rely on data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily address the grammar and structure of the data and neglect semantic modeling and semantic standardization, which are essential for achieving data interoperability. Here, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models, an efficient standard for communication in Industry 4.0. The paper highlights how our approach facilitates the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates Asset Administration Shell technology to model and map the company's data and utilizes a knowledge graph for data storage and exploration.
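The semantic lifting step described above can be sketched very simply: map flat record fields onto shared concept identifiers and store the result as knowledge-graph triples. The field names and ECLASS-style concept IDs below are hypothetical placeholders, not taken from the paper or the ECLASS dictionary:

```python
def lift(record, mapping):
    """Turn one flat record into (subject, predicate, object) triples."""
    subject = record["id"]
    return [(subject, concept, record[field])
            for field, concept in mapping.items() if field in record]

# Hypothetical flat record from one data silo
record = {"id": "motor-42", "nominal_voltage": "230 V", "weight": "1.2 kg"}
# Hypothetical field-to-concept mapping (ECLASS-style IDs are illustrative)
mapping = {"nominal_voltage": "eclass:0173-1#02-AAD332",
           "weight": "eclass:0173-1#02-AAB713"}
triples = lift(record, mapping)
```

Once every silo's records are lifted against the same dictionary, queries over the triple store span the silos uniformly, which is the interoperability the abstract targets.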

Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling

Procedia PDF Downloads 79
61 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem

Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães

Abstract:

This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm randomly expands a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor of the new node is connected to it if and only if the path between the root node and that neighbor, with this new connection, is shorter than the current path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan shorter routes in fewer iterations. This strategy is based on creating samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean theorem. Its advantage is that the obtained structure allows exact computation of the curve length, without the need for quadrature techniques for the resolution of integrals.
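A minimal obstacle-free 2D sketch of the tree expansion and cheapest-parent rewiring rule described above; the workspace bounds, step size, neighbour radius, and 10% goal bias are illustrative assumptions, and the paper's intelligent-sampling and PH-smoothing steps are omitted:

```python
import math, random

def rrt_star(start, goal, bounds=10.0, step=0.5, radius=1.5,
             iters=2000, seed=0):
    """Grow a tree from start; return a path once a node lands near goal."""
    rng = random.Random(seed)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    nodes, parent, cost = [start], {0: None}, {0: 0.0}
    for _ in range(iters):
        # 10% goal bias speeds convergence toward the target
        q = goal if rng.random() < 0.1 else (rng.uniform(0, bounds),
                                             rng.uniform(0, bounds))
        near = min(range(len(nodes)), key=lambda i: dist(nodes[i], q))
        d = dist(nodes[near], q)
        if d == 0:
            continue
        # Steer at most one step toward the sample
        new = (nodes[near][0] + min(step, d) * (q[0] - nodes[near][0]) / d,
               nodes[near][1] + min(step, d) * (q[1] - nodes[near][1]) / d)
        i = len(nodes)
        nodes.append(new)
        parent[i], cost[i] = near, cost[near] + dist(nodes[near], new)
        # RRT* choose-parent step: connect through the cheapest neighbour,
        # i.e. only rewire if the root-to-node path gets shorter
        for j in range(i):
            if dist(nodes[j], new) <= radius:
                c = cost[j] + dist(nodes[j], new)
                if c < cost[i]:
                    parent[i], cost[i] = j, c
        if dist(new, goal) <= step:
            path, k = [new], i
            while parent[k] is not None:
                k = parent[k]
                path.append(nodes[k])
            return path[::-1]
    return None
```

In the full algorithm this raw polyline would then be handed to the quintic PH smoothing stage to make it dynamically viable.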

Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart

Procedia PDF Downloads 155
60 Protective Effect of Diosgenin against Silica-Induced Tuberculosis in Rat Model

Authors: Williams A. Adu, Cynthia A. Danquah, Paul P. S. Ossei, Selase Ativui, Michael Ofori, James Asenso, George Owusu

Abstract:

Background: Silicosis is an occupational lung disease caused by chronic exposure to silica dust. Silicosis frequently co-exists with tuberculosis (TB), ultimately resulting in lung fibrosis and respiratory failure. Chronic intake of synthetic drugs has resulted in undesirable side effects. Diosgenin is a steroidal saponin that has been shown to exert a therapeutic effect on lung injury. We therefore investigated the ability of diosgenin to reduce susceptibility to silica-induced TB in rats. Method: Silicosis was induced by intratracheal instillation of 50 mg/kg crystalline silica in Sprague Dawley rats. Different doses of diosgenin (1, 10, and 100 mg/kg), Mycobacterium smegmatis, and saline were administered for 30 days. Afterwards, 5 rats from each group were sacrificed, and the 5 remaining rats in each group, except the control, received Mycobacterium smegmatis. Diosgenin treatment continued until the 50th day, when the rats were sacrificed at the end of the experiment. The results were analysed using one-way analysis of variance (ANOVA) in GraphPad Prism. Results: At a half-maximal inhibitory concentration of 48.27 µM, diosgenin inhibited the growth of Mycobacterium smegmatis. There was a marked decline in immune cell infiltration and cytokine production. Lactate dehydrogenase and total protein levels were significantly reduced compared to the control. There was an increase in the survival rate of the treatment group compared to the control. Conclusion: Diosgenin ameliorated silica-induced pulmonary tuberculosis by reducing the levels of inflammatory and pro-inflammatory cytokines and, in effect, significantly reduced the susceptibility of rats to pulmonary TB.
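The half-maximal inhibitory concentration reported above can be read through a simple Hill-type dose-response model; the Hill coefficient n = 1 here is an assumption for illustration, and only the 48.27 µM IC50 value comes from the abstract:

```python
def fraction_inhibited(conc, ic50=48.27, n=1.0):
    """Fraction of growth inhibited at concentration conc (same units as ic50)."""
    return conc ** n / (conc ** n + ic50 ** n)
```

By construction, the fraction inhibited is exactly 0.5 when the concentration equals the IC50, which is what "half-maximal inhibitory concentration" means.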

Keywords: silicosis, tuberculosis, diosgenin, fibrosis, crystalline silica

Procedia PDF Downloads 48
59 A Multi-Objective Decision Making Model for Biodiversity Conservation and Planning: Exploring the Concept of Interdependency

Authors: M. Mohan, J. P. Roise, G. P. Catts

Abstract:

Despite living in an era where conservation zones are de facto the central element in any sustainable wildlife management strategy, we still find ourselves grappling with several Pareto-optimal situations regarding resource allocation and area distribution. In this paper, a multi-objective decision making (MODM) model is presented to answer the question of whether we can establish mutual relationships between these contradicting objectives. For our study, we considered a red-cockaded woodpecker (Picoides borealis) habitat conservation scenario in the coastal plain of North Carolina, USA. The red-cockaded woodpecker (RCW) is a non-migratory territorial bird that excavates cavities in living pine trees for roosting and nesting. RCW groups nest in an aggregation of cavity trees called a 'cluster', and in our model we use the number of clusters to be established as a measure of the size of the conservation zone required. The case study is formulated as a linear programming problem, and the objective function optimises the red-cockaded woodpecker clusters, carbon retention rate, biofuel, public safety, and Net Present Value (NPV) of the forest. We studied the variation of individual objectives with respect to the amount of area available and plotted a two-dimensional dynamic graph after establishing interrelations between the objectives. We further explore the concept of interdependency by integrating the MODM model with GIS and derive a raster file representing carbon distribution from the existing forest dataset. Model results demonstrate the applicability of interdependency from both linear and spatial perspectives and suggest that this approach holds immense potential for enhancing environmental investment decision making in the future.
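The Pareto-optimality trade-off the abstract grapples with can be made concrete by filtering candidate plans down to the non-dominated set; the (clusters, NPV) pairs below are hypothetical illustrations, not model outputs:

```python
def pareto_front(options):
    """Keep only alternatives not dominated in both objectives (maximising)."""
    front = []
    for p in options:
        dominated = any(q != p and q[0] >= p[0] and q[1] >= p[1]
                        for q in options)
        if not dominated:
            front.append(p)
    return front

# Hypothetical alternatives: (RCW clusters established, NPV in $M)
options = [(10, 5.0), (8, 7.0), (6, 6.0), (12, 2.0), (9, 4.0)]
```

Every point on the front improves one objective only at the expense of the other, which is exactly the situation the MODM model is built to reason about.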

Keywords: conservation, interdependency, multi-objective decision making, red-cockaded woodpecker

Procedia PDF Downloads 322
58 Sensitivity Analysis and Solitary Wave Solutions to the (2+1)-Dimensional Boussinesq Equation in Dispersive Media

Authors: Naila Nasreen, Dianchen Lu

Abstract:

This paper explores the dynamical behavior of the (2+1)-dimensional Boussinesq equation, a nonlinear water wave equation used to model wave packets in dispersive media with weak nonlinearity. The equation describes how long waves generated in shallow water propagate under the influence of gravity. The (2+1)-dimensional Boussinesq equation combines the two-way propagation of the classical Boussinesq equation with the dependence on a second spatial variable, as occurs in the two-dimensional Kadomtsev-Petviashvili equation. It provides a description of the head-on collision of oblique waves and possesses some interesting properties. The governing model is treated with the Riccati equation mapping method, a relatively straightforward integration tool. Solutions have been extracted in different forms: solitary wave solutions as well as hyperbolic and periodic solutions. Moreover, a sensitivity analysis is demonstrated for the wave profiles of the designed dynamical system, where the soliton wave velocity and wave number parameters regulate the water wave singularity. In addition to being helpful for elucidating nonlinear partial differential equations, the method in use recovers previously extracted solutions and yields fresh exact solutions. Assuming suitable values for the parameters, various graphs of different shapes are sketched to visualise the obtained results. The findings support the efficacy of the approach in analysing nonlinear dynamical behavior. We believe this research will be of interest to a wide variety of engineers who work with engineering models. The findings show the effectiveness, simplicity, and generalizability of the chosen computational approach, even when applied to complicated systems in a variety of fields, especially in ocean engineering.
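For reference, the (2+1)-dimensional Boussinesq equation discussed above is commonly written in a form such as the following, though sign conventions and the constants $\alpha$, $\beta$ vary across the literature and the paper's exact normalisation is not given here:

```latex
u_{tt} - u_{xx} - u_{yy} - \alpha\,(u^{2})_{xx} - \beta\,u_{xxxx} = 0
```

The $(u^{2})_{xx}$ term carries the weak nonlinearity and $u_{xxxx}$ the dispersion; balancing the two is what permits the solitary wave solutions the paper extracts.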

Keywords: (2+1)-dimensional Boussinesq equation, solitary wave solutions, Riccati equation mapping approach, nonlinear phenomena

Procedia PDF Downloads 74
57 Analysis of Travel Behavior Patterns of Frequent Passengers after the Section Shutdown of Urban Rail Transit - Taking the Huaqiao Section of Shanghai Metro Line 11 Shutdown During the COVID-19 Epidemic as an Example

Authors: Hongyun Li, Zhibin Jiang

Abstract:

The travel of passengers in an urban rail transit network is influenced by changes in network structure and operational status, and individual travel preferences respond differently to these changes. Firstly, the influence of the suspension of an urban rail transit line section on passenger travel along the line is analyzed. Secondly, passenger travel trajectories containing multi-dimensional semantics are described based on network UD data. Next, passenger panel data based on spatio-temporal sequences are constructed to achieve frequent-passenger clustering. Then, a Graph Convolutional Network (GCN) is used to model and identify the changes in travel modes of different types of frequent passengers. Finally, taking Shanghai Metro Line 11 as an example, the travel behavior patterns of frequent passengers after the Huaqiao section shutdown during the COVID-19 epidemic are analyzed. The results showed that after the section shutdown, most passengers transferred to the nearest station, Anting, for boarding, while some passengers transferred to other stations or cancelled their travel altogether. Among the passengers who transferred to Anting station, most maintained their original normalized travel mode, a small number waited a few days before transferring to Anting station, and only a few stopped traveling from Anting station or transferred to other stations after boarding there for a few days. The results can provide a basis for understanding urban rail transit passenger travel patterns and improving the accuracy of passenger flow prediction in abnormal operation scenarios.
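A single GCN layer of the kind used above is just a neighbourhood-averaged linear map with a nonlinearity, H' = ReLU(D^(-1/2) (A + I) D^(-1/2) H W); the tiny adjacency matrix, features, and weights in the test are hypothetical, not the paper's passenger graph:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer on adjacency A, node features H, weights W."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric degree normalisation
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

Stacking such layers lets each passenger node's representation absorb information from its neighbours, which is how the model picks up shared shifts in travel mode.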

Keywords: urban rail transit, section shutdown, frequent passenger, travel behavior pattern

Procedia PDF Downloads 58
56 Adverse Childhood Experiences (ACEs) and Later-Life Depression: Perceived Social Support as a Potential Protective Factor

Authors: E. Von Cheong, Carol Sinnott, Darren Dahly, Patricia M. Kearney

Abstract:

Introduction and Aim: Adverse childhood experiences (ACEs) are all too common and have been linked to poorer health and wellbeing across the life course. While the prevention of ACEs is a worthy goal, it is important that we also try to lessen the impact of ACEs for those who do experience them. This study aims to investigate associations between ACEs and later-life depressive symptoms, and to explore whether perceived social support (PSS) moderates these. Method: We analysed baseline data from the Mitchelstown (Ireland) 2010-11 cohort of 2047 men and women aged 50-69 years. Self-reported assessments included ACEs (Centers for Disease Control ACE questionnaire), PSS (Oslo Social Support Scale), and depressive symptoms (CES-D). The primary exposure was self-report of at least one ACE. We also investigated the effects of ACE exposure by the subtypes abuse, neglect, and household dysfunction. Associations between each of these exposures and depressive symptoms were estimated using logistic regression, adjusted for socio-demographic factors selected using the Directed Acyclic Graph (DAG) approach. We also tested whether the estimated associations varied across levels of PSS (poor, moderate, and good). Results: 23.7% of participants reported at least one ACE (95% CI: 21.9% to 25.6%). ACE exposures (overall or by subtype) were associated with higher odds of depressive symptoms, but only among individuals with poor PSS. For example, exposure to any ACE (vs. none) was associated with roughly three times the odds of depressive symptoms (adjusted OR 2.97; 95% CI 1.63 to 5.40) among individuals reporting poor PSS, while among those reporting moderate PSS, the adjusted OR was 1.18 (95% CI 0.72 to 1.94). Discussion: ACEs are common among older adults in Ireland and are associated with higher odds of later-life depressive symptoms among those also reporting poor PSS. Interventions that enhance the perception of social support following ACE exposure may help reduce the burden of depression in older populations.
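The adjusted odds ratios above come from logistic regression; as a minimal illustration of how an unadjusted odds ratio and its 95% Wald confidence interval are computed from a 2x2 exposure/outcome table (the counts in the test are hypothetical, not the study's data):

```python
import math

def odds_ratio(a, b, c, d):
    """a,b: exposed with/without outcome; c,d: unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    # Wald standard error on the log-odds-ratio scale
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

A confidence interval excluding 1 (as in the poor-PSS stratum's 1.63 to 5.40) indicates an association unlikely to be due to chance at the 5% level; an interval spanning 1 (as in the moderate-PSS stratum) does not.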

Keywords: adverse childhood experiences, depression, later-life, perceived social support

Procedia PDF Downloads 218
55 Design, Development and Analysis of Combined Darrieus and Savonius Wind Turbine

Authors: Ashish Bhattarai, Bishnu Bhatta, Hem Raj Joshi, Nabin Neupane, Pankaj Yadav

Abstract:

This report concerns the design, development, and analysis of a combined Darrieus and Savonius wind turbine. Vertical-axis wind turbines (VAWTs) are of two types: Darrieus (lift type) and Savonius (drag type). The problem associated with the Darrieus is its lack of self-starting, while the Savonius has low efficiency. The design uses 3 straight Darrieus blades with a NACA (National Advisory Committee for Aeronautics) 0018 cross-section placed circumferentially, and a helically twisted Savonius blade to obtain an even torque distribution. This unique design allows the Savonius to self-start the wind turbine, which the Darrieus cannot achieve on its own. All parts of the wind turbine were designed in CAD software, and simulation data were obtained via a CFD (Computational Fluid Dynamics) approach. The design was then imported into a FlashForge Finder to 3D print the wind turbine profile, and finally, testing was carried out. The plastic material used for the Savonius was ABS (acrylonitrile butadiene styrene) and that for the Darrieus was PLA (polylactic acid). From the data obtained experimentally, the hybrid VAWT so fabricated operates at a low cut-in speed of 3 m/s, and the maximum power output was found to be 7.5537 W at a wind speed of 6 m/s. The maximum rotor speed recorded was 431 rpm (rotations per minute) at a wind velocity of 6 m/s, signifying its potential for wind power production. The experimental and theoretical data, when plotted, show similar slopes, and the difference between them reflects mechanical losses. The objective is to eliminate the need for external motors for self-starting and to study the performance of the model. The model was tested at different wind velocities.

Keywords: VAWT, Darrieus, Savonius, helical blades, CFD, FlashForge Finder, ABS, PLA

Procedia PDF Downloads 189
54 No Histological and Biochemical Changes Following Administration of Tenofovir Nanoparticles: Animal Model Study

Authors: Aniekan Peter, ECS Naidu, Edidiong Akang, U. Offor, R. Kalhapure, A. A. Chuturgoon, T. Govender, O. O. Azu

Abstract:

Introduction: Nano-drugs are novel innovations in the management of the human immunodeficiency virus (HIV) pandemic, especially for resistant strains of the virus in their sanctuary sites, the testis and the brain. There are safety concerns to be addressed before the full potential of this new drug delivery system can be achieved. Aim of study: Our study was designed to investigate the toxicity profile of tenofovir nanoparticles (TDF-N) synthesized by the University of KwaZulu-Natal (UKZN) nano-team for the prevention and treatment of HIV infection. Methodology: Ten adult male Sprague-Dawley rats maintained at the Animal House of the Biomedical Resources Unit, UKZN, were used for the study. The animals were weighed and divided into two groups of 5 animals each. Control animals (group A) were administered normal saline. A therapeutic dose (4.3 mg/kg) of TDF-N was administered to group B. At the end of four weeks, the animals were weighed and sacrificed. The liver and kidney were removed, fixed in formal saline, processed, and stained using H&E, PAS, and MT stains for light microscopy. Serum was obtained for renal function tests (RFT), liver function tests (LFT), and full blood count (FBC) using appropriate analysers. Cellular measurements were done using ImageJ and Leica software 2.0. Data were analysed using GraphPad 6; p-values < 0.05 were considered significant. Results: We report no histological alterations in the liver or kidney, and no differences in FBC, LFT, or RFT, between the TDF-N animals and the saline controls. There were no significant differences in weight, organo-somatic index, or histological measurements in the treatment group compared with the saline controls. Conclusion/recommendations: TDF-N was not toxic to the liver, kidney, or blood cells in our study. Further studies using human subjects are recommended.

Keywords: tenofovir nanoparticles, liver, kidney, blood cells

Procedia PDF Downloads 163
53 Rejuvenating a Space into World Class Environment through Conservation of Heritage Architecture

Authors: Abhimanyu Sharma

Abstract:

India is known for its cultural heritage. The country is rich in diversity along its length and breadth, and the state of Jammu & Kashmir is world famous for the beautiful tourist destinations of its Kashmir region. However, equally attractive destinations are also located in the Jammu region of the state. For most of the last 50-60 years, the prime focus of development centred on the Kashmir region. Now, with ever-increasing globalization, that focus is decentralizing throughout the country, and the potential of the Jammu region in particular needs to be placed on the world tourist map. One such spot in the Jammu region is 'Mubarak Mandi', the palace and royal residence of the Maharajas of Jammu & Kashmir from the Dogra dynasty, located in the heart of Jammu city (the winter capital of the state). Although the place is endowed with heritage importance, it still lacks the supporting infrastructure to attract national and international tourists. For such places, conservation and restoration of the existing structures are the most promising tools for overcoming the present limitations, and this paper targets the rejuvenation of the place through effective conservation techniques. It deals with developing and restoring the areas within the whole campus with appropriate building materials, conservation techniques, etc., to attract a greater number of visitors by developing it into a priority tourist attraction. The major thrust is on studying the criteria for developing the place, considering the psychological effect needed to create a socially interactive environment, and on the spatial elements that will aid in creating a common platform for all kinds of tourists. Accordingly, this paper proposes conservation guidelines (or a model) so that the Jammu region can contribute to the country's tourism as much as the Kashmir region does.

Keywords: conservation, heritage architecture, rejuvenating, restoration

Procedia PDF Downloads 282
52 Performance Comparison and Visualization of COMSOL Multiphysics, Matlab, and Fortran for Predicting the Reservoir Pressure on Oil Production in a Multiple Leases Reservoir with Boundary Element Method

Authors: N. Alias, W. Z. W. Muhammad, M. N. M. Ibrahim, M. Mohamed, H. F. S. Saipol, U. N. Z. Ariffin, N. A. Zakaria, M. S. Z. Suardi

Abstract:

This paper presents a performance comparison of computational software for solving problems with the boundary element method (BEM). The BEM formulation is a numerical technique with high potential for solving advanced mathematical models that predict the production of oil wells in arbitrarily shaped, multiple-lease reservoirs. The limited data available for validating that a program meets the accuracy of the mathematical model is the research motivation of this paper. Based on this limitation, three steps are involved in validating the accuracy of the oil production simulation. In the first step, the mathematical model, a partial differential equation (PDE) of Poisson (elliptic) type, is identified and the BEM discretization is performed. In the second step, the 2D BEM discretization is implemented in COMSOL Multiphysics and in the MATLAB programming language. In the last step, the numerical performance indicators of both implementations are analysed against a validating Fortran implementation. The performance comparisons are investigated in terms of percentage error, comparison graphs, and 2D visualization of the pressure on oil production of the multiple-lease reservoir. According to the performance comparison, structured programming in Fortran is the alternative software for implementing an accurate numerical simulation of the BEM. In conclusion, the numerical computation and performance evaluation show that Fortran is well suited for capturing the visualization of the production of oil wells in arbitrarily shaped reservoirs.

Keywords: performance comparison, 2D visualization, COMSOL Multiphysics, MATLAB, Fortran, modelling and simulation, boundary element method, reservoir pressure

Procedia PDF Downloads 476
51 Design and Development of a Mechanical Force Gauge for the Square Watermelon Mold

Authors: Morteza Malek Yarand, Hadi Saebi Monfared

Abstract:

This study aimed at designing and developing a mechanical force gauge for the square watermelon mold for the first time. It also introduces the characteristics of the square watermelon and its production limitations, and describes the performance of the mechanical force gauge and the product itself. There are three main designable gauge models: (a) a hydraulic gauge, (b) a strain gauge, and (c) a mechanical gauge. The advantage of the hydraulic model is that it instantly displays the pressure, and thus the force, exerted by the melon. However, considering its inability to measure forces in all directions, complicated development, high cost, possible hydraulic fluid leakage into the fruit chamber, and the possible influence of increased ambient temperature on the fluid pressure, the development of this gauge was ruled out. The second choice was to measure the force directly with a strain gauge. The main advantage of strain gauges over spring types is their high measurement precision; but because the working range of the strain gauge does not conform to watermelon growth, the calculations ran into problems. Finally, the mechanical pressure gauge has several advantages, including the ability to measure forces and pressures on the mold surface during melon growth; the ability to display the peak forces; the ability to produce a melon growth graph thanks to its continuous force measurements; the conformity of its manufacturing materials with the physical conditions required for melon growth; high air-conditioning capability; the ability to let sunlight reach the melon rind (no yellowish skin or quality loss); fast and straightforward calibration; no damage to the product during assembly and disassembly; visual inspection of the product within the mold; applicability to all growth environments (field, greenhouse, etc.); a simple process; and low cost.

Keywords: mechanical force gauge, mold, reshaped fruit, square watermelon

Procedia PDF Downloads 261
50 Completion of the Modified World Health Organization (WHO) Partograph during Labour in Public Health Institutions of Addis Ababa, Ethiopia

Authors: Engida Yisma, Berhanu Dessalegn, Ayalew Astatkie, Nebreed Fesseha

Abstract:

Background: The World Health Organization (WHO) recommends using the partograph to follow labour and delivery, with the objective of improving health care and reducing maternal and foetal morbidity and death. Methods: A retrospective document review was undertaken to assess the completion of the modified WHO partograph during labour in public health institutions of Addis Ababa, Ethiopia. A total of 420 modified WHO partographs used to monitor mothers in labour in five public health institutions that provide maternity care were reviewed. A structured checklist was used to gather the required data. The collected data were analyzed using SPSS version 16.0; frequency distributions, cross-tabulations and a graph were used to describe the results of the study. Results: All facilities were using the modified WHO partograph, but correct completion of the partograph was very low. Of the 420 partographs reviewed across the five health facilities, foetal heart rate was recorded to the recommended standard in 129 (30.7%), cervical dilatation in 138 (32.9%), and uterine contractions in 87 (20.7%). Descent of the presenting part was not documented in 353 (84.0%) of the partographs, and moulding was not recorded in 364 (86.7%). The state of the liquor was documented in 113 (26.9%), while maternal blood pressure was recorded to standard in only 78 (18.6%) of the partographs reviewed. Conclusions: This study showed poor completion of the modified WHO partograph during labour in public health institutions of Addis Ababa, Ethiopia. The findings may reflect poor management of labour and indicate the need for pre-service and periodic on-the-job training of health workers in the proper completion of the partograph. Regular supportive supervision, provision of guidelines and a mandatory health facility policy are also needed as part of a collaborative effort to reduce maternal and perinatal deaths.
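The completion rates in the abstract can be reproduced directly from the reported counts (n = 420 partographs reviewed):

```python
# Recomputing the reported completion percentages from the abstract's counts,
# as a check on the cross-tabulation (n = 420 partographs).
TOTAL = 420
recorded_to_standard = {
    "foetal heart rate": 129,
    "cervical dilatation": 138,
    "uterine contractions": 87,
    "liquor": 113,
    "maternal blood pressure": 78,
}
completion_pct = {name: round(100.0 * count / TOTAL, 1)
                  for name, count in recorded_to_standard.items()}
```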

Keywords: modified WHO partograph, completion, public health institutions, Addis Ababa, Ethiopia

Procedia PDF Downloads 325
49 Normalized P-Laplacian: From Stochastic Game to Image Processing

Authors: Abderrahim Elmoataz

Abstract:

More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs where each vertex represents measured data and each edge represents a relationship (connectivity, affinity, or interaction) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools that were initially developed on the usual Euclidean spaces and proven efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years there has been increasing interest in transferring to graphs one of the major mathematical tools of signal and image analysis: Partial Differential Equations (PDEs) and variational methods. The normalized p-Laplacian operator was recently introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest of this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operator, which has been used extensively to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as discrete approximations of both the infinity-Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
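A minimal sketch of the discrete idea behind these operators, using the standard p-harmonious averaging formula (the graph, boundary values, and choice of alpha are illustrative, not the paper's construction):

```python
# Fixed-point iteration for a p-harmonious function on a graph:
#   u(v) = (alpha/2)*(max nbr + min nbr) + beta * mean nbr,  alpha + beta = 1.
# alpha = 0 recovers the usual graph Laplacian (mean-value property);
# alpha = 1 gives the infinity Laplacian.
def p_harmonious(adj, boundary, alpha, iters=500):
    beta = 1.0 - alpha
    u = {v: boundary.get(v, 0.0) for v in adj}
    for _ in range(iters):
        for v in adj:
            if v in boundary:
                continue  # Dirichlet condition: boundary values stay fixed
            nb = [u[w] for w in adj[v]]
            u[v] = (alpha / 2.0) * (max(nb) + min(nb)) + beta * sum(nb) / len(nb)
    return u

# Path graph 0-1-2-3 with boundary values u(0)=0 and u(3)=1.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
u = p_harmonious(adj, {0: 0.0, 3: 1.0}, alpha=0.0)
```

On the path, the harmonic case (alpha = 0) converges to the linear interpolant, which makes the sketch easy to verify by hand.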

Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems

Procedia PDF Downloads 496
48 Vulnerability Assessment of Healthcare Interdependent Critical Infrastructure Coloured Petri Net Model

Authors: N. Nivedita, S. Durbha

Abstract:

Critical Infrastructure (CI) consists of services and technological networks such as healthcare, transport, water supply, electricity supply, and information technology. These systems are necessary for the well-being and effective functioning of society. Critical Infrastructures can be represented as nodes in a network, connected through a set of links depicting the logical relationships among them; these nodes are interdependent and interact with each other at various levels, such that the state of each infrastructure influences, or is correlated to, the state of the others. Disruption of the service of one infrastructure node of the network during a disaster leads to cascading and escalating disruptions across other infrastructure nodes. Healthcare is one such Critical Infrastructure: it depends upon a complex interdependent network of other Critical Infrastructures, and during disasters it is vital for the Healthcare Infrastructure to be protected, accessible and prepared for mass casualties. To reduce the consequences of a disaster on Critical Infrastructure and to ensure a resilient Critical Health Infrastructure network, knowledge, understanding, modeling, and analysis of the interdependencies between the infrastructures are required. The paper presents the interdependencies of the Healthcare Critical Infrastructure using a Hierarchical Coloured Petri Net modeling approach, given a flood scenario as the disaster disrupting the infrastructure nodes. The model properties are analyzed for the state changes which occur when there is a disruption of, or damage to, any of the Critical Infrastructures. The failure probabilities of the interconnected systems are calculated by deriving a reachability graph, which is then mapped to a Markov chain. By analytically solving and analyzing the Markov chain, the overall vulnerability of the Healthcare CI HCPN model is demonstrated. The entire model is integrated with a geographic-information-based decision support system to visualize the dynamic behavior of the interdependent Healthcare and related CI network in a geographically based environment.
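The final analytical step, solving the Markov chain obtained from the reachability graph for long-run failure probabilities, can be sketched on a toy three-state chain; the states and transition probabilities below are illustrative, not derived from the paper's Petri net:

```python
# Toy version of the last step: a Markov chain (here 3 states: healthy,
# degraded, failed-with-repair) solved for its stationary distribution,
# giving the long-run failure probability of the interconnected system.
def stationary(P, iters=10000):
    """Stationary distribution of a row-stochastic matrix via power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [
    [0.90, 0.08, 0.02],  # healthy  -> {healthy, degraded, failed}
    [0.30, 0.60, 0.10],  # degraded -> {healthy, degraded, failed}
    [0.50, 0.00, 0.50],  # failed   -> repair returns it to healthy
]
pi = stationary(P)
failure_probability = pi[2]   # long-run fraction of time in the failed state
```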

Keywords: critical infrastructure interdependency, hierarchical coloured Petri net, healthcare critical infrastructure, Petri nets, Markov chain

Procedia PDF Downloads 506
47 Elastodynamic Response of Shear Wave Dispersion in a Multi-Layered Concentric Cylinders Composed of Reinforced and Piezo-Materials

Authors: Sunita Kumawat, Sumit Kumar Vishwakarma

Abstract:

The present study focuses on the propagation and limitations of horizontally polarized shear waves (SH waves) in a four-layered compound cylinder. The geometrical structure comprises concentric cylinders of infinite length composed of self-reinforced (SR), fibre-reinforced (FR), piezo-magnetic (PM), and piezo-electric (PE) materials. The entire structure is assumed to be pre-stressed along the azimuthal direction. In order to make the structure suitable for sensor and actuator applications, the PM and PE cylinders have been placed in the outer part of the geometry, whereas, to provide stiffness and stability, the inner part consists of the self-reinforced and fibre-reinforced media. The common boundary between each pair of cylinders is considered imperfectly bonded. At the interface of the PE and PM media, mechanical, electrical, magnetic, and inter-coupled types of imperfections are exhibited. The closed form of the dispersion relation has been deduced for two contrasting cases, i.e., electrically open magnetically short (EOMS) and electrically short magnetically open (ESMO) circuit conditions. Dispersion curves have been plotted to illustrate the salient features of parameters such as the normalized imperfect interface parameters, the initial stresses, and the radii of the concentric cylinders. The comparative effect of each of these parameters on the phase velocity of the wave has been listed and marked individually. Every graph is presented with two consecutive modes in succession for a comprehensive understanding. This theoretical study may be used to improve the performance of surface acoustic wave (SAW) sensors and actuators consisting of piezo-electric quartz and piezo-composite concentric cylinders.

Keywords: self-reinforced, fibre-reinforced, piezo-electric, piezo-magnetic, interfacial imperfection

Procedia PDF Downloads 93
46 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs

Authors: Anika Chebrolu

Abstract:

Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has been approached as a promising tool to fight these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multi-targeted drug candidates against protein targets. DeepLig’s model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated on its ability to generate multi-target ligands against two distinct proteins, against three distinct proteins, and against two distinct binding pockets on the same protein. In each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to designing multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.
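The generate-score-update feedback loop can be illustrated with a drastically simplified stand-in: a cross-entropy search over a one-dimensional "molecule feature" replaces the Stack-RNN generator, the GAT predictor, and the RL update. Everything here (targets, scorer, distribution) is an illustrative toy, not DeepLig's architecture:

```python
# Toy generate-and-score loop: a generator samples candidates, a fixed
# "predictor" scores the average affinity across two targets, and the
# generator's distribution is shifted toward high-scoring samples
# (cross-entropy method standing in for Stack-RNN + GAT + RL).
import random

random.seed(0)
TARGET_A, TARGET_B = 7.0, 3.0   # hypothetical optimal feature value per target

def predicted_affinity(x):
    # Higher is better; the best compromise lies between the two targets.
    return -((x - TARGET_A) ** 2 + (x - TARGET_B) ** 2)

mu, sigma = 0.0, 5.0            # generator = Gaussian over the toy feature
for _ in range(40):
    samples = [random.gauss(mu, sigma) for _ in range(200)]
    elite = sorted(samples, key=predicted_affinity, reverse=True)[:20]
    mu = sum(elite) / len(elite)      # feedback: move toward the elites
    sigma = max(0.1, sigma * 0.9)     # anneal exploration
```

After the loop, `mu` sits near the multi-target compromise (x = 5 for these two toy targets), mirroring how the generative network drifts toward ligands with high average affinity.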

Keywords: drug design, multitargeticity, de-novo, reinforcement learning

Procedia PDF Downloads 70
45 Environmental Impact Assessment in Mining Regions with Remote Sensing

Authors: Carla Palencia-Aguilar

Abstract:

Calculations of the net carbon balance can be obtained by means of Net Biome Productivity (NBP), Net Ecosystem Productivity (NEP), and Net Primary Production (NPP). The latter is an important component of the biosphere carbon cycle and is easily obtained from the MODIS MOD17A3HGF product; however, those results are only available yearly. To overcome this limit on data availability, bands 33 to 36 of MODIS MYD021KM (obtained on a daily basis) were analyzed and compared with NPP data from the years 2000 to 2021 at 7 sites in the Colombian territory where surface mining takes place. Coal, gold, iron, and limestone were the minerals of interest. Scales and units, as well as thermal anomalies, were considered for the net carbon balance at each location. The NPP time series from the satellite images were filtered using two Matlab filters, a first-order filter and a discrete transfer function. After filtering the NPP time series, comparing the resulting graphs with the satellite image values, and running a linear regression, the results showed R2 values from 0.72 to 0.85. To establish comparable units between NPP and bands 33 to 36, the Greenhouse Gas Equivalencies Calculator by the EPA was used. The comparison was established in two ways: one by summing all the data per point per year, and the other by averaging over 46 weeks and finding the percentage that the value represented with respect to NPP. The former underestimated the total CO2 emissions. The results also showed that coal and gold mining in the last 22 years produced fewer CO2 emissions than limestone, with yearly averages of 143 kton CO2 eq for gold, 152 kton CO2 eq for coal, and 287 kton CO2 eq for iron; limestone emissions varied from 206 to 441 kton CO2 eq. The maximum emission values from unfiltered data correspond to 165 kton CO2 eq for gold, 188 kton CO2 eq for coal, and 310 kton CO2 eq for iron, with limestone varying from 231 to 490 kton CO2 eq. If the most polluting limestone site improved its production technology, limestone could reach a maximum of 318 kton CO2 eq of emissions per year, a value very similar to that of iron. The importance of gathering these data is to establish benchmarks in order to attain the 2050 zero-emissions goal.
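The regression check behind the reported R2 values can be sketched as follows; the two short series below are illustrative stand-ins for the filtered NPP and band-derived data:

```python
# Least-squares fit y = a*x + b between two series and the resulting R^2,
# the goodness-of-fit statistic reported above (0.72 to 0.85 on real data).
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                     # slope
    b = my - a * mx                   # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

x = [1, 2, 3, 4, 5, 6]                # e.g. band-derived values (illustrative)
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.3]   # e.g. filtered NPP values (illustrative)
r2 = r_squared(x, y)
```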

Keywords: carbon dioxide, NPP, MODIS, mining

Procedia PDF Downloads 82
44 Analysis of Lift Arm Failure and Its Improvement for the Use in Farm Tractor

Authors: Japinder Wadhawan, Pradeep Rajan, Alok K. Saran, Navdeep S. Sidhu, Daanvir K. Dhir

Abstract:

Currently, the research focus in the development of agricultural equipment and tractor parts in India is innovation and the use of alternative materials such as austempered ductile iron (ADI). The three-point linkage mechanism of the tractor is subjected to unpredictable load conditions in the field, and one of the critical components vulnerable to failure is the lift arm. Conventionally, the lift arm is manufactured either by forging or by casting (SG iron), and the main objective of the present work is to reduce failure occurrences in the lift arm, which is achieved by changing the manufacturing material to ADI without changing the existing design. The effect of four pertinent variables of manufacturing ADI, viz. austenitizing temperature, austenitizing time, austempering temperature, and austempering time, was investigated using the Taguchi method for design of experiments. To analyze the effect of the parameters on the mechanical properties, the mean and the signal-to-noise (S/N) ratio were calculated based on a design of experiments with an L9 orthogonal array and the linear graph. The best combination for achieving the desired mechanical properties of the lift arm is austenitization at 860°C for 90 minutes and austempering at 350°C for 60 minutes. Results showed that the developed component has 925 MPa tensile strength, 7.8 per cent elongation and 120 joules of toughness, making ADI a more suitable material for lift arm manufacturing. A confirmatory experiment was performed and showed good agreement between the predicted and experimental values. In addition, a CAD model of the existing design was developed in computer-aided design software, and structural loading calculations were performed in a commercial finite element analysis package. An optimized shape of the lift arm has also been proposed, resulting in a lighter and cheaper product than the existing design that can withstand the same loading conditions effectively.
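The S/N step of the Taguchi analysis can be sketched with the standard larger-the-better formula; the replicate tensile strengths below are illustrative, not the paper's measurements:

```python
# Larger-the-better signal-to-noise ratio used in Taguchi analysis:
#   S/N = -10 * log10( mean(1 / y^2) )
# computed per experimental run; the run with the higher S/N is preferred.
import math

def sn_larger_better(ys):
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Three hypothetical replicate tensile strengths (MPa) for two L9 runs.
run1 = [910.0, 925.0, 918.0]
run2 = [850.0, 862.0, 855.0]
sn1, sn2 = sn_larger_better(run1), sn_larger_better(run2)
```

Averaging these S/N values over the runs sharing each factor level gives the level means from which the best combination (here, 860°C/90 min and 350°C/60 min) is selected.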

Keywords: austempered ductile iron, design of experiment, finite element analysis, lift arm

Procedia PDF Downloads 221
43 FEM for Stress Reduction by Optimal Auxiliary Holes in a Loaded Plate with Elliptical Hole

Authors: Basavaraj R. Endigeri, S. G. Sarganachari

Abstract:

Steel is widely used in machine parts, structural equipment and many other applications. In many steel structural elements, holes of different shapes and orientations are made to satisfy design requirements. The presence of holes in steel elements creates stress concentration, which eventually reduces the mechanical strength of the structure. Therefore, it is of great importance to investigate the state of stress around the holes for the safe and proper design of such elements. From the literature survey, it is known that to date there is no analytical solution for reducing the stress concentration by providing auxiliary holes of definite location and radii in a steel plate; numerical methods can be used to determine the optimum locations and radii of the auxiliary holes. In the present work, a steel plate with an elliptical hole subjected to uniaxial load is analyzed, and the effect of stress concentration is represented graphically. The effect on stress concentration of introducing auxiliary holes at an optimum location and with optimum radii is also represented graphically. The finite element analysis package ANSYS 11.0 is used to analyse the steel plate, with the analysis carried out using the PLANE42 element. Further, the ANSYS optimization model is used to determine the locations and radii of the auxiliary holes that minimize the stress concentration. All results are presented graphically for different hole-diameter to plate-width ratios, in the form of graphs for determining the locations and diameters of the optimal auxiliary holes, including the graph of stress concentration versus the ratio of central hole diameter to plate width. The finite element results of the study indicate that the stress concentration effect of a central elliptical hole in a uniaxially loaded plate can be reduced by introducing auxiliary holes on either side of it.
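As a cross-check on FEM values of this kind, the classical Inglis result for an elliptical hole in an infinite plate under uniaxial tension gives the theoretical stress concentration factor; a finite-width plate, as analyzed here, shows a higher effective value:

```python
# Inglis stress concentration factor for an elliptical hole in an infinite
# plate under uniaxial tension:  Kt = 1 + 2a/b,  where a is the semi-axis
# perpendicular to the load and b the semi-axis parallel to it.
def kt_elliptical(a, b):
    return 1.0 + 2.0 * a / b

kt_circle = kt_elliptical(1.0, 1.0)   # circular hole: the familiar Kt = 3
kt_sharp  = kt_elliptical(2.0, 0.5)   # slender ellipse across the load: Kt = 9
```

The sharper the ellipse across the load, the larger Kt grows, which is why auxiliary holes that smooth the stress flow around the main hole can lower the peak stress.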

Keywords: finite element method, optimization, stress concentration factor, auxiliary holes

Procedia PDF Downloads 438
42 Legal Judgment Prediction through Indictments via Data Visualization in Chinese

Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun

Abstract:

Legal Judgment Prediction (LJP) is a subtask of legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of a case, they decide whether to prosecute the suspect and which article of criminal law should be applied, based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, covering 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input and jointly learn a prediction model for law source, article, and charge simultaneously, based on the pre-trained BERT model. For single-article cases where the frequencies of the charge and article are greater than 50, the prediction performance for law sources, articles, and charges reaches 97.66, 92.22, and 60.52 macro-F1, respectively. To understand the big performance gap between articles and charges, we used a bipartite graph to visualize the relationship between articles and charges, and found that the poor prediction performance was actually due to wording precision. Some charges use the simplest words, while others may include the perpetrator or the result to make the charge more specific. For example, Article 284 of the Criminal Law may be indicted as "negligent injury", "negligent death", "business injury", "driving business injury", or "non-driving business injury". As another example, Article 10 of the Drug Hazard Control Regulations can be charged under the "Drug Control Regulations" or the "Drug Hazard Control Regulations". In order to solve the above problems and predict the article and charge more accurately, we plan to include the article content or charge names in the input and use the sentence-pair classification method for question-answer problems in the BERT model to improve performance. We will also consider a sequence-to-sequence approach to charge prediction.
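The macro-F1 metric quoted above averages per-class F1 with equal weight, so rare, precisely-worded charges drag the score down as much as common ones; a minimal sketch with illustrative labels:

```python
# Macro-F1: per-class F1 scores averaged with equal weight per class.
def macro_f1(y_true, y_pred):
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Illustrative charge labels (not drawn from the dataset).
y_true = ["injury", "injury", "death", "death", "theft"]
y_pred = ["injury", "death", "death", "death", "theft"]
score = macro_f1(y_true, y_pred)
```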

Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization

Procedia PDF Downloads 108
41 Integrated Genetic-A* Graph Search Algorithm Decision Model for Evaluating Cost and Quality of School Renovation Strategies

Authors: Yu-Ching Cheng, Yi-Kai Juan, Daniel Castro

Abstract:

Energy consumption of buildings has been an increasing concern for researchers and practitioners in the last decade. Sustainable building renovation can reduce energy consumption and carbon dioxide emissions; meanwhile, it can also extend the useful life of existing buildings and facilitate environmental sustainability while providing social and economic benefits to society. School buildings differ from other designed spaces in that they are more crowded and host the largest portion of daily activities and occupants. Strategies that reduce energy use while also improving the students' learning environment thus become a significant subject in the development of sustainable school buildings. A decision model is developed in this study to solve complicated, large-scale, combinatorial, discrete and deterministic problems such as school renovation projects. The task of this model is to automatically search for the most cost-effective (lower-cost, higher-quality) renovation strategies. The search for optimal school building renovation solutions is by nature a large-scale zero-one programming problem. A* is suitable for solving deterministic problems owing to its stable and effective search process, while genetic algorithms (GA) provide opportunities to acquire globally optimal solutions in a short time via an indeterminate, probability-based search process. These two algorithms are combined in this study to consider the trade-offs between renovation cost and improved quality, so that the decision model is able to evaluate current school environmental conditions and suggest an optimal scheme of sustainable school building renovation strategies. Through the adoption of this decision model, school managers can overcome existing limitations and transform school buildings into spaces more beneficial to students and friendly to the environment.
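The zero-one search can be illustrated with a plain GA on a toy renovation problem (the paper additionally hybridizes GA with A*; the measures, costs, quality scores and budget below are illustrative):

```python
# Toy zero-one renovation problem: choose a subset of measures maximizing
# quality within a budget, searched with a simple elitist genetic algorithm.
import random

random.seed(1)
cost    = [4, 3, 5, 2, 6]   # per-measure renovation cost (illustrative)
quality = [7, 4, 8, 3, 9]   # per-measure quality gain (illustrative)
BUDGET = 10

def fitness(bits):
    c = sum(b * x for b, x in zip(bits, cost))
    q = sum(b * x for b, x in zip(bits, quality))
    return q if c <= BUDGET else q - 100 * (c - BUDGET)   # penalize overspend

pop = [[random.randint(0, 1) for _ in cost] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                     # elitism: keep the best strategies
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(cost))
        child = a[:cut] + b[cut:]          # one-point crossover
        if random.random() < 0.2:          # occasional bit-flip mutation
            i = random.randrange(len(cost))
            child[i] ^= 1
        children.append(child)
    pop = parents + children
best = max(pop, key=fitness)
```

On this 5-bit instance the optimum (quality 16 within budget) is tiny enough to check by enumeration, which is exactly the kind of determinate structure that makes an A* refinement of the GA's candidates attractive.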

Keywords: decision model, school buildings, sustainable renovation, genetic algorithm, A* search algorithm

Procedia PDF Downloads 108
40 A Low-Cost Foot Plantar Shoe for Gait Analysis

Authors: Zulkifli Ahmad, Mohd Razlan Azizan, Nasrul Hadi Johari

Abstract:

This paper presents the development and testing of a wearable sensor system for gait analysis measurement. For validation, plantar surface measurement with a force plate was prepared. In conventional gait analysis, the force plate is generally used for barefoot studies of whole steps and does not allow analysis of repeated steps during normal walking and running. The measurements usually performed do not represent the full daily plantar pressures within the shoe insole and obtain only the ground reaction force. Force plate measurement is usually limited to a few steps, it is done indoors, and coupling information from both feet during walking is not easily obtained. Nowadays, pressure can be measured over a large number of steps, and in each part of the insole, by placing sensors within the insole. This method provides a way to determine the plantar pressures of a shoe-wearing subject while standing, walking or running. Inserting pressure sensors in the insole provides localized information, and the choice of sensor placement determines which critical regions under the insole are captured. In the wearable shoe sensor project, the device consists of left and right shoe insoles, each with ten force-sensing resistors (FSRs). An Arduino Mega was used as the microcontroller to read the analog inputs from the FSRs. The analog inputs were transmitted via Bluetooth, yielding the force data in real time on a smartphone. Blueterm, an Android application, was used as the interface to read the FSR readings of the shoe-wearing subject. The subjects were two healthy men of different ages and weights, tested while standing, walking (1.5 m/s), jogging (5 m/s) and running (9 m/s) on a treadmill. The data obtained were saved on the Android device for analysis and comparison graphs.
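Converting one FSR channel's ADC reading into a force estimate can be sketched as follows; the divider resistor value and the calibration constant are hypothetical, and a real insole needs a per-sensor calibration curve:

```python
# Sketch of turning the Arduino Mega's 10-bit ADC reading from one FSR in a
# voltage divider into an estimated force (toy calibration, assumed wiring).
VCC = 5.0           # Arduino supply voltage
R_FIXED = 10_000.0  # fixed divider resistor (ohms), assumed
K = 500.0           # hypothetical calibration constant: force ~ conductance

def adc_to_force(adc):
    """adc in 0..1023 -> force estimate in newtons (toy calibration)."""
    v = adc * VCC / 1023.0
    if v <= 0 or v >= VCC:
        return 0.0                        # open circuit or saturated reading
    r_fsr = R_FIXED * (VCC - v) / v       # FSR resistance from the divider
    return K / r_fsr * 1000.0             # conductance roughly tracks force

readings = [0, 200, 512, 800]             # sample raw ADC values
forces = [adc_to_force(a) for a in readings]
```

Applying this per channel to the ten FSRs of each insole yields the real-time force streams that the smartphone logs for the comparison graphs.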

Keywords: gait analysis, plantar pressure, force plate, wearable sensor

Procedia PDF Downloads 431
39 Effects of Aerobic Dance Circuit Training Programme on Blood Pressure Variables of Obese Female College Students in Oyo State, Nigeria

Authors: Isiaka Oladele Oladipo, Olusegun Adewale Ajayi

Abstract:

The blood pressure fitness of female college students has been implicated in sedentary lifestyles. This study was designed to determine the effects of an Aerobic Dance Circuit Training programme (ADCT) on blood pressure variables (diastolic blood pressure (DBP) and systolic blood pressure (SBP)). A pretest-posttest control group quasi-experimental design using a 2x2x4 factorial matrix was adopted, and one research question and two research hypotheses were formulated. Seventy (70) untrained obese student volunteers aged 21.10±2.46 years were purposively selected from Oyo town, Nigeria: Emmanuel Alayande College of Education (experimental group) and Federal College of Education (Special) (control group). The participants' BMI, weight (kg), height (m), systolic BP (mmHg), and diastolic BP (mmHg) were measured before and upon completion of ADCT. The data collected were analysed using a pie chart, graph, percentages, means, frequencies, and standard deviations, while a t-test was used to test the stated hypotheses at the critical level of 0.05. There were significant mean differences between baseline and post-treatment values of SBP in the experimental group, 136.49 mmHg and 131.66 mmHg, versus the control group, 130.82 mmHg and 130.56 mmHg (crit-t=2.00, cal-t=3.02, df=69, p<.05), so the first hypothesis was rejected; for DBP, the experimental group recorded 88.65 mmHg and 82.21 mmHg and the control group 69.91 mmHg and 72.66 mmHg (crit-t=2.00, cal-t=1.437, df=69, p>.05), so the second hypothesis was accepted. The findings revealed that participants' SBP decreased from week 4 to week 12 of ADCT, indicating an effective reduction in the blood pressure variables of obese female students. The study therefore confirmed that the use of ADCT is safe and effective in the management of blood pressure, for the health benefit of the obese.
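The SBP hypothesis test can be sketched from summary statistics; the standard deviations below are illustrative, since the abstract reports only the means, the critical t (2.00, df=69, alpha=.05) and the calculated t (3.02):

```python
# Two-sample t statistic from summary statistics (Welch form), compared
# against the critical value as in the abstract's SBP test.
import math

def two_sample_t(m1, s1, n1, m2, s2, n2):
    """t = (m1 - m2) / sqrt(s1^2/n1 + s2^2/n2)."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Hypothetical post-treatment SBP summaries (mmHg): experimental vs control;
# the standard deviations (8.0, 9.0) and group sizes are illustrative.
t = two_sample_t(131.66, 8.0, 35, 136.49, 9.0, 35)
reject_null = abs(t) > 2.00   # critical t at the 0.05 level, two-tailed
```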

Keywords: aerobic dance circuit training, fitness lifestyles, obese college female students, systolic blood pressure, diastolic blood pressure

Procedia PDF Downloads 58
38 A Strategy for Oil Production Placement Zones Based on Maximum Closeness

Authors: Waldir Roque, Gustavo Oliveira, Moises Santos, Tatiana Simoes

Abstract:

Increasing the oil recovery factor of an oil reservoir has been a concern of the oil industry. Usually, the production placement zones are defined after analysis of geological and petrophysical parameters, with rock porosity, permeability and oil saturation being of fundamental importance. In this context, the determination of hydraulic flow units (HFUs) is an important step in the process of reservoir characterization, since it may provide specific regions of the reservoir with similar petrophysical and fluid flow properties and, in particular, techniques supporting the placement of production zones that favour the tracing of directional wells. An HFU is defined as a representative volume of the total reservoir rock in which the petrophysical and fluid flow properties are internally consistent and predictably distinct from those of other reservoir rocks. Technically, an HFU is characterized as a rock region whose flow zone indicator (FZI) points lie on a straight line of unit slope. The goal of this paper is to provide a trustworthy indication of oil production placement zones for the best-fit HFUs. The FZI cloud of points can be obtained from the reservoir quality index (RQI), a function of effective porosity and permeability. Considering log and core data, the HFUs are identified, and using the discrete rock type (DRT) classification, a set of connected cell clusters can be found; by means of a graph centrality metric, the maximum closeness (MaxC) cell is obtained for each cluster. Taking the MaxC cells as production zones, an extensive analysis based on several oil recovery factor and cumulative oil production simulations was done for the SPE Model 2 and UNISIM-I-D synthetic fields, the latter built from public data available for the actual Namorado Field, Campos Basin, Brazil. The results have shown that the MaxC strategy is technically feasible and very reliable for identifying high-performance production placement zones.
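The two ingredients of the strategy, FZI from core data and closeness centrality over a cluster graph, can be sketched with the standard HFU formulas; the 5-cell cluster below is illustrative:

```python
# Standard HFU quantities from core data (k in mD, phi as a fraction):
#   RQI  = 0.0314 * sqrt(k / phi)
#   phi_z = phi / (1 - phi)
#   FZI  = RQI / phi_z
# plus closeness centrality to pick the MaxC cell of a connected cluster.
import math
from collections import deque

def fzi(k_md, phi):
    rqi = 0.0314 * math.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

def closeness(adj, v):
    # BFS distances from v; closeness = (n - 1) / sum of distances.
    dist = {v: 0}
    q = deque([v])
    while q:
        x = q.popleft()
        for w in adj[x]:
            if w not in dist:
                dist[w] = dist[x] + 1
                q.append(w)
    return (len(adj) - 1) / sum(dist.values())

# Illustrative 5-cell connected cluster (a chain); the MaxC cell is the one
# closest, on average, to all other cells of the cluster.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
max_c_cell = max(adj, key=lambda v: closeness(adj, v))
```

In the paper's workflow, each cell's FZI (via DRT binning) assigns it to a cluster, and the MaxC cell of each cluster is the suggested production placement zone.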

Keywords: hydraulic flow unit, maximum closeness centrality, oil production simulation, production placement zone

Procedia PDF Downloads 307