Search results for: single-phase models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6743

4643 Centrifuge Modelling Approach on Seismic Loading Analysis of Clay: A Geotechnical Study

Authors: Anthony Quansah, Tresor Ntaryamira, Shula Mushota

Abstract:

Models for geotechnical centrifuge testing are usually made from re-formed soil, allowing for comparisons with naturally occurring soil deposits. However, there is a fundamental omission in this process, because natural soil is deposited in layers, creating a unique structure. The nonlinear dynamics of clay deposits are an essential factor in how ground motions change under strong seismic loading, particularly when the diverse amplification behaviour of acceleration and displacement is considered. The paper presents a review of centrifuge shaking table tests and numerical simulations carried out to explore offshore clay deposits subjected to seismic loading. These observations are accurately reproduced by DEEPSOIL with appropriate soil models and parameters drawn from notable centrifuge modelling studies. Precise 1-D site response analyses are then performed in both the time and frequency domains. The results reveal that when deep soft clay is subjected to large earthquakes, significant acceleration attenuation may occur near the top of the deposit because of soil nonlinearity and even local shear failure; however, large amplification of displacement at low frequencies is expected regardless of the intensity of the base motions, which suggests that for displacement-sensitive offshore foundations and structures, such amplified low-frequency displacement response will play an essential part in seismic design. This research shows the centrifuge to be a tool for creating the layered samples important for modelling true soil behaviour (such as permeability), which is not identical in all directions. Currently, there are limited methods for creating layered soil samples.
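As a hedged illustration of the frequency-domain side of such 1-D site response analyses: DEEPSOIL performs full nonlinear analyses, while the closed-form function below is only the textbook linear approximation for a uniform damped soil layer over rigid bedrock, with illustrative thickness, shear-wave velocity and damping (not values from the paper).

```python
import numpy as np

def amplification(f, H=30.0, Vs=150.0, xi=0.05):
    """Approximate amplification of a uniform damped soil layer over rigid
    bedrock: |H(f)| ~ 1 / sqrt(cos^2(kH) + (xi*kH)^2), with kH = 2*pi*f*H/Vs.
    H: layer thickness [m], Vs: shear-wave velocity [m/s], xi: damping ratio."""
    kH = 2 * np.pi * f * H / Vs
    return 1.0 / np.sqrt(np.cos(kH) ** 2 + (xi * kH) ** 2)

# Fundamental frequency of the layer: f0 = Vs / (4H)
f0 = 150.0 / (4 * 30.0)                 # 1.25 Hz for these assumed values
freqs = np.linspace(0.01, 5, 500)
amp = amplification(freqs)              # peaks near f0; -> 1 at low frequency
```

A soft, deep deposit (large H, low Vs) pushes f0 down, which is consistent with the abstract's point that low-frequency response dominates for deep soft clay.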

Keywords: seismic analysis, layered modeling, terotechnology, finite element modeling

Procedia PDF Downloads 155
4642 Innovations for Freight Transport Systems

Authors: M. Lu

Abstract:

The paper presents part of the results of EU-funded projects: SoCool@EU (Sustainable Organisation between Clusters Of Optimized Logistics @ Europe), DG-RTD (Research and Innovation), Regions of Knowledge Programme (FP7-REGIONS-2011-1). It will provide an in-depth review of emerging technologies for further improving urban mobility and freight transport systems, such as (information and physical) infrastructure, ICT-based Intelligent Transport Systems (ITS), vehicles, advanced logistics, and services. Furthermore, the paper will provide an analysis of the barriers and will review business models for the market uptake of innovations. From a perspective of science and technology, the challenges of urbanization could be mainly handled through adequate (human-oriented) solutions for urban planning, sustainable energy, the water system, building design and construction, the urban transport system (both physical and information aspects), and advanced logistics and services. Implementation of solutions for these domains should follow a highly integrated and balanced approach; a silo approach should be avoided. To develop a sustainable urban transport system (for people and goods), including inter-hubs and intra-hubs, a holistic view is needed. To achieve a sustainable transport system for people and goods (in terms of cost-effectiveness, efficiency, environment-friendliness and fulfillment of the mobility, transport and logistics needs of the society), a proper network and information infrastructure, advanced transport systems and operations, as well as ad hoc and seamless services are required. In addition, a road map for an enhanced urban transport system until 2050 will be presented. This road map aims to address the challenges of urban transport, and to provide best practices in inter-city and intra-city environments from various perspectives, including policy, traveler behaviour, economy, liability, business models, and technology.

Keywords: synchromodality, multimodal transport, logistics, Intelligent Transport Systems (ITS)

Procedia PDF Downloads 316
4641 Inverse Matrix in the Theory of Dynamical Systems

Authors: Renata Masarova, Bohuslava Juhasova, Martin Juhas, Zuzana Sutova

Abstract:

In dynamic system theory, a mathematical model is often used to describe a system's properties. In order to find the transfer matrix of a dynamic system, we need to calculate an inverse matrix. The paper contains a fusion of the classical theory and the procedures used in the theory of automated control for calculating the inverse matrix. The final part of the paper models the given problem in Matlab.
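The transfer-matrix calculation the abstract refers to can be sketched symbolically: G(s) = C (sI − A)⁻¹ B, where inverting (sI − A) is exactly the inverse-matrix step. The state-space matrices below are an assumed toy second-order example, not from the paper.

```python
import sympy as sp

s = sp.symbols('s')
# Assumed illustrative state-space system (not the paper's example)
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])

# Transfer matrix G(s) = C (sI - A)^{-1} B; the inverse is the core step
G = sp.simplify(C * (s * sp.eye(2) - A).inv() * B)
# For this system, G(s) reduces to 1 / (s**2 + 3*s + 2)
```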

Keywords: dynamic system, transfer matrix, inverse matrix, modeling

Procedia PDF Downloads 516
4640 Scenario Analysis to Assess the Competitiveness of Hydrogen in Securing the Italian Energy System

Authors: Gianvito Colucci, Valeria Di Cosmo, Matteo Nicoli, Orsola Maria Robasto, Laura Savoldi

Abstract:

The hydrogen value chain deployment is likely to be boosted in the near term by the energy security measures planned by European countries to face the recent energy crisis. In this context, some countries are recognized to have a crucial role in the geopolitics of hydrogen as importers, consumers and exporters. According to the European Hydrogen Backbone Initiative, Italy would be part of one of the 5 corridors that will shape the European hydrogen market. However, the set targets are very ambitious and require large investments to rapidly develop effective hydrogen policies: in this regard, scenario analysis is becoming increasingly important to support energy planning, and energy system optimization models appear to be suitable tools to quantitatively carry out that kind of analysis. The work aims to assess the competitiveness of hydrogen in contributing to Italian energy security in the coming years, under different price and import conditions, using the energy system model TEMOA-Italy. A wide spectrum of hydrogen technologies is included in the analysis, covering the production, storage, delivery, and end-use stages. National production from fossil fuels with and without CCS, as well as electrolysis and import of low-carbon hydrogen from North Africa, are the supply solutions that would compete with others, such as the natural gas, biomethane and electricity value chains, to satisfy sectoral energy needs (transport, industry, buildings, agriculture). Scenario analysis is then used to study the competition under different price and import conditions. The use of TEMOA-Italy allows the work to capture the interaction between the economy and technological detail, which is much needed in energy policy assessment, while the transparency of the analysis and of the results is ensured by the full accessibility of the TEMOA open-source modeling framework.
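The cost-driven supply competition that such optimization models resolve can be illustrated, at a vastly reduced scale, by a single-period linear program. All costs, capacity caps and the demand figure below are invented for the sketch; they are not TEMOA-Italy inputs.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical levelized costs (EUR/kg) and capacity caps (Mt/yr) for three
# competing hydrogen supply routes; all numbers are illustrative.
costs = np.array([4.5, 2.5, 3.0])    # electrolysis, fossil+CCS, import
caps = np.array([0.3, 0.5, 0.4])
demand = 0.8

res = linprog(c=costs,
              A_eq=[[1, 1, 1]], b_eq=[demand],    # meet demand exactly
              bounds=list(zip([0, 0, 0], caps)),  # per-route capacity limits
              method="highs")
supply = res.x  # cheapest routes fill to capacity first
```

Under these assumed prices the fossil+CCS route saturates its cap first, imports cover the remainder, and electrolysis is priced out; changing the cost vector is the single-period analogue of the price scenarios the abstract describes.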

Keywords: energy security, energy system optimization models, hydrogen, natural gas, open-source modeling, scenario analysis, TEMOA

Procedia PDF Downloads 116
4639 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences all begins with the loss of containment, starting with oil and gas release from leakage or spillage from primary containment, resulting in pool fires, jet fires and even explosions when the release meets various ignition sources in the operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard for this case is an early detection system that alerts Operations to take action prior to a potential case of loss of containment. The detection system's value increases when applied to a long surface pipeline that is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting the traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points in the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy.
While the supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling for the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
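A minimal sketch of the proposed pipeline, with a toy pressure-profile generator standing in for CFD runs and a supervised classifier trained on the simulated data. Sensor counts, noise levels and leak magnitudes are invented placeholders, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def pressure_profile(leak, n_sensors=8):
    """Toy stand-in for a CFD run: linear pressure drop along the pipeline,
    plus an extra localized drop downstream of a leak at a random position."""
    x = np.linspace(0, 1, n_sensors)
    p = 10.0 - 2.0 * x + rng.normal(0, 0.05, n_sensors)
    if leak:
        pos = rng.uniform(0.2, 0.8)
        p -= 1.0 * (x > pos)       # pressure falls past the leak point
    return p

# Simulated training set: alternating no-leak / leak scenarios
X = np.array([pressure_profile(leak) for leak in [0, 1] * 300])
y = np.array([0, 1] * 300)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)          # near-perfect on this easy synthetic task
```

In practice the real difficulty is exactly what the abstract notes: generating simulation data rich enough (transients, sensor noise, small leaks) that the learned model transfers to field conditions.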

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 191
4638 Enhancement of Aircraft Longitudinal Stability Using Tubercles

Authors: Muhammad Umer, Aishwariya Giri, Umaiyma Rakha

Abstract:

Mimicking the humpback whale flipper, the application of tubercle technology is seen to be particularly advantageous at high angles of attack. This advantage is of paramount importance for structures producing lift at high angles of attack, which makes the technology ideal for horizontal stabilizers. Selecting the horizontal stabilizer as the subject of study in order to identify and exploit the advantage highlighted by researchers on airfoils, this project aims to establish a foundation for the application of the bio-mimicked technology on an existing aircraft. Using a baseline model and two tubercle-integrated configurations, the project targets the twin aims of highlighting the possibility and merits over the base model, and of choosing the configuration that provides the best characteristics at high angles of attack. To facilitate this study, the required models are generated in SolidWorks, followed by trials in a virtual aerodynamic environment using Fluent in ANSYS to resolve the project objectives. Following a structured plan, the aim is to first identify the advantages mathematically, then select the optimal configuration and simulate it at angles mimicking the actual operating envelope of the structure. Upon simulating the baseline configuration at various angles of attack, the stall angle was determined to be 22 degrees. Thus, the tubercle configurations will be simulated and compared at four different angles of attack: 0, 10, 20, and 24 degrees. Further, after establishing the optimum configuration of the horizontal stabilizer, this study aims at integration with the aircraft structure so that the results better reflect real-life application. At this point, the project scope extends to longitudinal static stability considerations and improvements in manoeuvrability characteristics.
The objective of the study is to achieve a complete overview ready for real-life application, with marked benefits obtainable from bio-morphing of the tubercle technology.
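A hedged sketch of how a stall angle can be read off simulated lift curves. Only the baseline 22-degree stall is taken from the abstract; the CL values below are synthetic, chosen to mimic the typical tubercle behaviour (gentler post-stall lift loss) reported in the airfoil literature.

```python
import numpy as np

# Synthetic lift-curve data (not the paper's CFD results)
alpha = np.array([0, 5, 10, 15, 20, 22, 24, 26])          # angle of attack, deg
cl_baseline = np.array([0.0, 0.45, 0.85, 1.15, 1.32, 1.36, 1.20, 1.05])
cl_tubercle = np.array([0.0, 0.43, 0.82, 1.12, 1.30, 1.35, 1.33, 1.28])

def stall_angle(alpha, cl):
    """Stall angle taken as the angle of maximum lift coefficient."""
    return alpha[np.argmax(cl)]

# Baseline stalls at 22 deg (matching the abstract); the tubercled curve
# retains more lift beyond stall in this synthetic data.
```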

Keywords: flow simulation, horizontal stabilizer, stability enhancement, tubercle

Procedia PDF Downloads 320
4637 The Use of the TRIGRS Model and Geophysics Methodologies to Identify Landslides Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil

Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes

Abstract:

Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they end up becoming disasters due to the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. The measurement of these parameters is made by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as Vertical Electrical Survey (VES). Geophysical surveys analyze the soil properties with minimal impact on its initial structure. Statistical analyses and physically based mathematical models are used to model and calculate the Factor of Safety for steep slope areas. In general, such mathematical models work from the combination of slope stability models and hydrological models. One example is the mathematical model TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety over a given study area. The model relies on changes in pore pressure and soil moisture during a rainfall event. TRIGRS was written in the Fortran programming language and associates the hydrological model, which is based on the Richards equation, with a stability model based on the limit equilibrium principle. Therefore, the aim of this work is to model the slope stability of Campos do Jordao with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the southeast of Sao Paulo State in the Mantiqueira Mountains and has a historical record of landslides. During the fieldwork, soil samples were collected and the VES method was applied. These procedures provided the soil properties, which were used as input data in the TRIGRS model.
The hydrological data (infiltration rate and initial water table height) and the rainfall duration and intensity were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify the declivity of the slopes. The analyzed period is from March 6th to March 8th of 2017. As a result, the TRIGRS model calculated the variation of the Factor of Safety within a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model with low values of the Factor of Safety, demonstrating its efficiency in the identification of landslide-susceptible areas. This study presents a critical threshold for landslides, in which an accumulated rainfall higher than 80 mm/m² in 72 hours might trigger landslides on urban and natural slopes. The geotechnical and geophysical methods are shown to be very useful to identify the soil properties and provide the geological characteristics of the area. Therefore, combining geotechnical and geophysical methods for soil characterization with the modeling of landslide-susceptible areas in TRIGRS is useful for urban planning. Furthermore, early warning systems can be developed by combining the TRIGRS model with weather forecasts, to prevent disasters on urban slopes.
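The Factor of Safety that TRIGRS evaluates at each grid cell follows the standard infinite-slope limit-equilibrium expression, FS = tanφ/tanβ + (c′ − ψγw tanφ)/(γs z sinβ cosβ). The sketch below uses that formula with illustrative soil parameters (not Campos do Jordao values) to show how a rainfall-driven rise in pressure head ψ pushes FS below 1.

```python
import numpy as np

def factor_of_safety(psi, z=2.0, beta=np.radians(35), phi=np.radians(30),
                     c=5e3, gamma_s=18e3, gamma_w=9.81e3):
    """Infinite-slope factor of safety at depth z [m] for pressure head
    psi [m]. beta: slope angle, phi: friction angle, c: cohesion [Pa],
    gamma_s/gamma_w: soil/water unit weights [N/m^3]. Values illustrative."""
    return (np.tan(phi) / np.tan(beta)
            + (c - psi * gamma_w * np.tan(phi))
            / (gamma_s * z * np.sin(beta) * np.cos(beta)))

fs_dry = factor_of_safety(psi=0.0)   # stable before the rain (FS > 1)
fs_wet = factor_of_safety(psi=1.5)   # rainfall raises pore pressure (FS < 1)
```

This monotone decline of FS with pore pressure is exactly the behaviour the abstract reports after each of the two rainfall events.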

Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey

Procedia PDF Downloads 173
4636 Removal of Cr (VI) from Water through Adsorption Process Using GO/PVA as Nanosorbent

Authors: Syed Hadi Hasan, Devendra Kumar Singh, Viyaj Kumar

Abstract:

Cr (VI) is a known toxic heavy metal and has been considered a priority pollutant in water. The effluents of various industries, including electroplating, anodizing baths, leather tanning, steel industries and chromium-based catalysts, are the major sources of Cr (VI) contamination in the aquatic environment. Cr (VI) shows high mobility in the environment and can easily penetrate the cell membranes of living tissues to exert noxious effects. Cr (VI) contamination in drinking water causes various hazardous effects on human health, such as cancer, skin and stomach irritation or ulceration, dermatitis, damage to the liver, kidney circulation, and nerve tissue damage. Herein, an attempt has been made to develop an efficient adsorbent for the removal of Cr (VI) from water. For this purpose, a nanosorbent composed of polyvinyl alcohol functionalized graphene oxide (GO/PVA) was prepared. The obtained GO/PVA was characterized through FTIR, XRD, SEM, and Raman spectroscopy. The as-prepared GO/PVA nanosorbent was utilized for the removal of Cr (VI) in batch-mode experiments. The process variables, such as contact time, initial Cr (VI) concentration, pH, and temperature, were optimized. The maximum removal of Cr (VI), 99.8%, was achieved at an initial Cr (VI) concentration of 60 mg/L, pH 2 and temperature 35 °C, and equilibrium was achieved within 50 min. The two widely used isotherm models, Langmuir and Freundlich, were analyzed using the linear correlation coefficient (R²), and it was found that the Langmuir model gives the best fit, with a high value of R², for the data of the present adsorption system, which indicates monolayer adsorption of Cr (VI) on the GO/PVA. Kinetic studies were also conducted using pseudo-first-order and pseudo-second-order models, and it was observed that the chemisorptive pseudo-second-order model described the kinetics of the current adsorption system better, with a high value of the correlation coefficient.
Thermodynamic studies were also conducted and results showed that the adsorption was spontaneous and endothermic in nature.
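A small sketch of the linearized Langmuir fit used in such isotherm analyses, run on synthetic noise-free data generated from assumed qm and KL values (not the paper's measurements): Ce/qe plotted against Ce is a straight line with slope 1/qm and intercept 1/(qm·KL).

```python
import numpy as np

# Assumed "true" Langmuir parameters used to generate synthetic data
qm_true, KL_true = 90.0, 0.25                   # mg/g, L/mg (illustrative)
Ce = np.array([2, 5, 10, 20, 40, 60], float)    # equilibrium conc., mg/L
qe = qm_true * KL_true * Ce / (1 + KL_true * Ce)

# Linearized Langmuir: Ce/qe = Ce/qm + 1/(qm*KL)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm_fit = 1 / slope                # monolayer capacity recovered from slope
KL_fit = slope / intercept        # since intercept = 1/(qm*KL)
```

With experimental data, the R² of this straight line is what gets compared against the linearized Freundlich fit (log qe vs log Ce) to decide which isotherm describes the system better.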

Keywords: adsorption, GO/PVA, isotherm, kinetics, nanosorbent, thermodynamics

Procedia PDF Downloads 389
4635 The Structural Behavior of Fiber Reinforced Lightweight Concrete Beams: An Analytical Approach

Authors: Jubee Varghese, Pouria Hafiz

Abstract:

Increased use of lightweight concrete in the construction industry is mainly due to the reduction in the weight of the structural elements, which in turn reduces the cost of production, transportation, and the overall project cost. However, the structural application of lightweight concrete structures is limited due to their reduced density. Hence, further investigations are in progress to study the effect of fiber inclusion in improving the mechanical properties of lightweight concrete. Incorporating structural steel fibers, in general, enhances the performance of concrete and increases its durability by minimizing its potential for cracking and providing a crack-arresting mechanism. In this research, Geometrically and Materially Non-linear Analysis (GMNA) was conducted for finite element modelling, using the software ABAQUS, to investigate the structural behavior of lightweight concrete with and without the addition of steel fibers and shear reinforcement. 21 finite element models of beams were created to study the effect of steel fibers based on three main parameters: fiber volume fraction (Vf = 0, 0.5 and 0.75%), shear span to depth ratio (a/d of 2, 3 and 4), and ratio of area of shear stirrups to spacing (As/s of 0.7, 1 and 1.6). The models created were validated against the experiments conducted by H.K. Kang et al. in 2011. It was seen that fiber-reinforced lightweight concrete can replace fiber-reinforced normal-weight concrete in structural elements. The effect of an increase in steel-fiber volume fraction is more dominant for beams with a higher shear span to depth ratio than for lower ratios. The effect of stirrups in the presence of fibers was almost negligible; however, stirrups provided extra confinement of the cracks, reducing crack propagation, and extra shear resistance compared to beams with no stirrups.
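The three-parameter study design can be enumerated directly. Note that the full factorial grid has 27 combinations while the abstract reports 21 models, so the exact subset analyzed is not recoverable from the abstract; the enumeration below is only the full grid.

```python
from itertools import product

# The three parameters varied in the FE study (values from the abstract)
Vf = [0.0, 0.5, 0.75]    # steel-fiber volume fraction, %
ad = [2, 3, 4]           # shear span to depth ratio
As_s = [0.7, 1.0, 1.6]   # area of shear stirrups / spacing

# Full factorial grid: 3 x 3 x 3 = 27 combinations; the paper used 21 of them
grid = list(product(Vf, ad, As_s))
```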

Keywords: ABAQUS, beams, fiber-reinforced concrete, finite element, light weight, shear span-depth ratio, steel fibers, steel-fiber volume fraction

Procedia PDF Downloads 107
4634 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming

Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero

Abstract:

Tumor growth, from a transformed cancer cell up to a clinically apparent mass, spans a range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of the development of tumors. Tumor development prognosis can now be made, without making patients undergo annoying medical examinations or painful invasive procedures, if we develop appropriate CA-based software tools. In silico testing mainly refers to Computational Biology research studies applicable to clinical actions in Medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These models aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, the specific literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of efficient supporting data structures when modeling with deterministic and stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the concerned research community. The use of stochastic cellular automata (SCA), whose parallel programming implementations are open to yielding high computational performance, is of much interest to explore up to its computational limits. There have been some approaches based on optimizations to advance multiparadigm models of tumor growth, which mainly pursue improving the performance of these models through guaranteed efficient memory accesses, or by considering the dynamic evolution of the memory space (grids, trees,…) that holds crucial data in simulations.
In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework to develop new programming techniques to speed up the computation time of simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up achieved by specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new tumor growth parallel model is tested using implementations in the Java and C++ languages on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Poleszczuk and Enderling model (commonly used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and the overall parallelization technique presented here to solid tumors of specific affiliation, such as prostate, breast, or colon. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, the growth inhibition induced by chemotaxis, and the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
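A minimal single-threaded sketch of one synchronous step of a stochastic CA growth rule of the kind being parallelized; the actual Poleszczuk-Enderling rules and the Java/C++ thread-pool implementations are not reproduced here, and the division probability is an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

def grow(grid, p_div=0.3):
    """One synchronous step of a toy stochastic CA: each tumor cell (1)
    seeds an empty von Neumann neighbour with probability p_div.
    In a Java/C++ port, disjoint row bands of the grid can be dispatched
    to a thread pool (executors), synchronizing once per step."""
    occ = grid == 1
    # which empty cells have at least one occupied von Neumann neighbour
    nb = (np.roll(occ, 1, 0) | np.roll(occ, -1, 0)
          | np.roll(occ, 1, 1) | np.roll(occ, -1, 1))
    births = (~occ) & nb & (rng.random(grid.shape) < p_div)
    return grid | births.astype(grid.dtype)

grid = np.zeros((64, 64), dtype=np.int8)
grid[32, 32] = 1                  # single transformed cell at the centre
for _ in range(20):
    grid = grow(grid)             # tumor mass expands outward
```

Because each step only reads the previous grid and writes a fresh one, rows can be updated independently within a step, which is exactly the synchronization structure that makes the thread-pool parallelization described in the abstract safe.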

Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up

Procedia PDF Downloads 244
4633 Optimizing The Residential Design Process Using Automated Technologies

Authors: Martin Georgiev, Milena Nanova, Damyan Damov

Abstract:

Architects, engineers, and developers need to analyse and implement a wide spectrum of data in different formats if they want to produce viable residential developments. Usually, this data comes from a number of different sources and is not well structured. The main objective of this research project is to provide parametric tools working with real geodesic data that can generate residential solutions. Various codes, regulations and design constraints are described by variables and prioritized. In this way, we establish a common workflow for architects, geodesists, and other professionals involved in the building and investment process. This collaborative medium ensures that the generated design variants conform to various requirements, contributing to a more streamlined and informed decision-making process. The quantification of distinctive characteristics inherent to typical residential structures allows a systematic evaluation of the generated variants, focusing on factors crucial to designers, such as daylight simulation, circulation analysis, space utilization, view orientation, etc. Integrating real geodesic data offers a holistic view of the built environment, enhancing the accuracy and relevance of the design solutions. The use of generative algorithms and parametric models offers high productivity and flexibility in the design variants, and it can be implemented in more conventional CAD and BIM workflows. Experts from different specialties can join their efforts, sharing a common digital workspace. In conclusion, our research demonstrates that a generative parametric approach based on real geodesic data and collaborative decision-making could be introduced in the early phases of the design process. This gives the designers powerful tools to explore diverse design possibilities, significantly improving the qualities of the building investment during its entire lifecycle.

Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization

Procedia PDF Downloads 52
4632 Interfacial Reactions between Aromatic Polyamide Fibers and Epoxy Matrix

Authors: Khodzhaberdi Allaberdiev

Abstract:

In order to understand the interactions at the interface between polyamide fibers and an epoxy matrix in fiber-reinforced composites, the industrial aramid fibers Armos, SVM, and Terlon were investigated using individual epoxy matrix components. The epoxies were diglycidyl ether of bisphenol A (DGEBA) and tri- and diglycidyl derivatives of m-, p-amino-, m-, p-oxy-, and o-, m-, p-carboxybenzoic acids; the models were the curing agent aniline and N-di(oxyethylphenoxy)aniline, a compound that depicts the structure of the primary addition reaction of the amine to the epoxy resin. The chemical structure of the surface of untreated and treated polyamide fibers was analyzed using Fourier transform infrared spectroscopy (FTIR). The impregnation of fibers with epoxy matrix components and N-di(oxyethylphenoxy)aniline was carried out by heating at 150˚C (6 h). The optimum fiber loading is 65%. The result of the thermal treatment is the formation of covalent bonds, derived from a combination of homopolymerization and crosslinking mechanisms, in the interfacial region between the epoxy resin and the surface of the fibers. The reactivity of epoxy resins at the interface in microcomposites (MC) also depends on the processing aids present on the fiber surface and on absorbed moisture. The influence of these factors is evidenced by the conversion of epoxy groups in DGEBA-impregnated Terlon: the industrial, dried (in vacuum), and purified samples gave 5.20%, 4.65%, and 14.10%, respectively. The same tendency is observed for SVM and Armos fibers. The changes in the surface composition of these MC were monitored by X-ray photoelectron spectroscopy (XPS). In the case of the purified fibers, the functional groups of the fibers act as both a catalyst and a curing agent for the epoxy resin. It is found that the value of the epoxy group conversion for reinforced formulations depends on the nature of the aromatic polyamide and decreases in the order armos > svm > terlon. This difference is due to the structural characteristics of the fibers.
The interfacial interactions between polyglycidyl esters of substituted benzoic acids and polyamide fibers in the MC were also examined. It is found that the structure and the isomerism of the epoxides also influence the interfacial interactions in these systems. The IR spectra of fibers impregnated with aniline showed that the polyamide fibers do not react appreciably with aniline. FTIR results for fibers treated with N-di(oxyethylphenoxy)aniline revealed dramatic changes in the IR characteristics of the OH groups of the amino alcohol. These observations indicate hydrogen bonding and covalent interactions between the amino alcohol and the functional groups of the fibers. This result is also confirmed by the appearance of an exothermic peak on the Differential Scanning Calorimetry (DSC) curve of the MC. Finally, a theoretical evaluation of the non-covalent interactions between individual epoxy matrix components and the fibers was performed using benzanilide as a model of Terlon, and its derivative containing the benzimidazole moiety as a model of SVM and Armos. Quantum-topological analysis also demonstrated the existence of hydrogen bonds between the amide group of the models and the epoxy matrix components. All the results indicate that both covalent and non-covalent interactions exist at the interface between the polyamide fibers and the epoxy matrix during the preparation of MC.

Keywords: epoxies, interface, modeling, polyamide fibers

Procedia PDF Downloads 266
4631 Stock Market Prediction by Regression Model with Social Moods

Authors: Masahiro Ohmura, Koh Kakusho, Takeshi Okadome

Abstract:

This paper presents a regression model with autocorrelated errors in which the inputs are social moods obtained by analyzing the adjectives in Twitter posts using a document topic model. The regression model predicts Dow Jones Industrial Average (DJIA) more precisely than autoregressive moving-average models.
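A hedged sketch of regression with AR(1)-autocorrelated errors via a single Cochrane-Orcutt step, run on simulated data; the paper's topic-model mood extraction and the actual DJIA series are not reproduced, and all coefficients below are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Simulated "social mood" regressor and an index series whose errors follow
# an AR(1) process with rho = 0.7 (all values illustrative)
mood = rng.normal(size=n)
err = np.zeros(n)
for t in range(1, n):
    err[t] = 0.7 * err[t - 1] + rng.normal(scale=0.5)
y = 2.0 + 1.5 * mood + err

def ols(x, y):
    """OLS with intercept via least squares; returns (intercept, slope)."""
    beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x]), y,
                               rcond=None)
    return beta

# Cochrane-Orcutt: estimate rho from OLS residuals, quasi-difference, re-fit
b0 = ols(mood, y)
resid = y - b0[0] - b0[1] * mood
rho = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
b1 = ols(mood[1:] - rho * mood[:-1], y[1:] - rho * y[:-1])
slope = b1[1]     # estimate of the mood coefficient (true value here: 1.5)
```

Handling the autocorrelation matters for inference rather than for the point estimate: ignoring it leaves the slope roughly unbiased but makes its standard errors, and hence any claim of predicting the DJIA "more precisely", unreliable.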

Keywords: stock market prediction, social moods, regression model, DJIA

Procedia PDF Downloads 548
4630 A Mathematical Framework for Expanding a Railway’s Theoretical Capacity

Authors: Robert L. Burdett, Bayan Bevrani

Abstract:

Analytical techniques for measuring and planning railway capacity expansion activities are considered in this article. A preliminary mathematical framework involving track duplication and section subdivisions is proposed for this task. In railways, these features have a great effect on network performance, and for this reason they have been considered. Additional motivation has also arisen from the limitations of prior models that have not included them.
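One way to read the framework's two levers in miniature: corridor capacity as a bottleneck minimum over sections, with track duplication modelled, purely illustratively, as doubling a section's capacity. The section figures are invented, not from the paper.

```python
def corridor_capacity(sections):
    """Theoretical capacity of a single-line corridor: the bottleneck
    (minimum) section capacity, e.g. in trains per day. Illustrative only."""
    return min(sections)

def duplicate(sections, i):
    """Track duplication of section i, modelled here as doubling its
    capacity (a simplifying assumption, not the paper's exact formula)."""
    out = list(sections)
    out[i] *= 2
    return out

base = [60, 40, 55]                  # hypothetical section capacities
bottleneck = base.index(min(base))   # section with the lowest capacity
expanded = duplicate(base, bottleneck)
# Duplicating the bottleneck raises corridor capacity only up to the
# next-worst section, which is why expansion must be planned network-wide.
```

Section subdivision plays the complementary role: splitting a long section shortens headways within it, raising that section's capacity without laying a second track.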

Keywords: capacity analysis, capacity expansion, railways, track sub division, track duplication

Procedia PDF Downloads 359
4629 Navigating the Nexus of HIV/AIDS Care: Leveraging Statistical Insight to Transform Clinical Practice and Patient Outcomes

Authors: Nahashon Mwirigi

Abstract:

The management of HIV/AIDS is a global challenge, demanding precise tools to predict disease progression and guide tailored treatment. CD4 cell count dynamics, a crucial immune function indicator, play an essential role in understanding HIV/AIDS progression and enhancing patient care through effective modeling. While several models assess disease progression, existing methods often fall short in capturing the complex, non-linear nature of HIV/AIDS, especially across diverse demographics. A need exists for models that balance predictive accuracy with clinical applicability, enabling individualized care strategies based on patient-specific progression rates. This study utilizes patient data from Kenyatta National Hospital (2003–2014) to model HIV/AIDS progression across six CD4-defined states. The Exponential, 2-Parameter Weibull, and 3-Parameter Weibull models are employed to analyze failure rates and explore progression patterns by age and gender. Model selection is based on Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) to identify models best representing disease progression variability across demographic groups. The 3-Parameter Weibull model emerges as the most effective, accurately capturing HIV/AIDS progression dynamics, particularly by incorporating delayed progression effects. This model reflects age and gender-specific variations, offering refined insights into patient trajectories and facilitating targeted interventions. One key finding is that older patients progress more slowly through CD4-defined stages, with a delayed onset of advanced stages. This suggests that older patients may benefit from extended monitoring intervals, allowing providers to optimize resources while maintaining consistent care. Recognizing slower progression in this demographic helps clinicians reduce unnecessary interventions, prioritizing care for faster-progressing groups. 
Gender-based analysis reveals that female patients exhibit more consistent progression, while male patients show greater variability. This highlights the need for gender-specific treatment approaches, as men may require more frequent assessments and adaptive treatment plans to address their variable progression. Tailoring treatment by gender can improve outcomes by addressing distinct risk patterns in each group. The model’s ability to account for both accelerated and delayed progression equips clinicians with a robust tool for estimating the duration of each disease stage. This supports individualized treatment planning, allowing clinicians to optimize antiretroviral therapy (ART) regimens based on demographic factors and expected disease trajectories. Aligning ART timing with specific progression patterns can enhance treatment efficacy and adherence. The model also has significant implications for healthcare systems, as its predictive accuracy enables proactive patient management, reducing the frequency of advanced-stage complications. For resource-limited providers, this capability facilitates strategic intervention timing, ensuring that high-risk patients receive timely care while resources are allocated efficiently. Anticipating progression stages enhances both patient care and resource management, reinforcing the model’s value in supporting sustainable HIV/AIDS healthcare strategies. This study underscores the importance of models that capture the complexities of HIV/AIDS progression, offering insights to guide personalized, data-informed care. The 3-Parameter Weibull model’s ability to accurately reflect delayed progression and demographic risk variations presents a valuable tool for clinicians, supporting the development of targeted interventions and resource optimization in HIV/AIDS management.
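The model-selection step described above can be sketched in a few lines: fit a 3-parameter Weibull (whose location term captures delayed onset) and an exponential baseline to the same durations, then compare AIC. This is a hedged illustration on synthetic data, not the Kenyatta National Hospital dataset; all parameter values are invented.

```python
# Sketch: AIC comparison of a 3-parameter Weibull vs. an exponential model
# on synthetic stage-duration data with a delayed onset (nonzero location).
import numpy as np
from scipy.stats import weibull_min, expon

rng = np.random.default_rng(0)
# Simulated times (months) in one CD4-defined state; the loc=6 shift mimics
# the delayed-progression effect the abstract reports for older patients.
times = weibull_min.rvs(c=1.8, loc=6.0, scale=24.0, size=500, random_state=rng)

def aic(loglik, k):
    """Akaike Information Criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

# 3-parameter Weibull: shape c, location loc, scale
c, loc, scale = weibull_min.fit(times)
aic_weibull = aic(weibull_min.logpdf(times, c, loc, scale).sum(), k=3)

# Exponential baseline (one free parameter, loc fixed at 0)
loc_e, scale_e = expon.fit(times, floc=0)
aic_expon = aic(expon.logpdf(times, loc_e, scale_e).sum(), k=1)

# Delayed-onset data should favour the 3-parameter Weibull (lower AIC)
print(aic_weibull < aic_expon)
```

The same comparison extends directly to the 2-parameter Weibull by fixing `floc=0` in the fit.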

Keywords: HIV/AIDS progression, 3-parameter Weibull model, CD4 cell count stages, antiretroviral therapy, demographic-specific modeling

Procedia PDF Downloads 7
4628 Explaining Motivation in Language Learning: A Framework for Evaluation and Research

Authors: Kim Bower

Abstract:

Evaluating and researching motivation in language learning is a complex and multi-faceted activity. Various models for investigating learner motivation have been proposed in the literature, but none supplies a comprehensive and coherent framework for investigating a range of motivational characteristics. Here, such a methodological framework, which includes exemplification of sources of evidence and potential methods of investigation, is proposed. The proposed process model for the investigation of motivation within language-learning settings is based on a complex dynamic systems perspective that takes account of cognition and affect. It focuses on three overarching aspects of motivation: the learning environment, learner engagement and learner identities. Within these categories subsets are defined: the learning environment incorporates teacher-, course- and group-specific aspects of motivation; learner engagement addresses the principal characteristics of learners' perceived value of activities, their attitudes towards language learning, their perceptions of their learning and their engagement in learning tasks; and within learner identities, the principal characteristics of self-concept and mastery of the language are explored. Exemplifications of potential sources of evidence in the model reflect the multiple influences within and between learner and environmental factors and the possible changes in both that may emerge over time. The model was initially developed as a framework for investigating different models of Content and Language Integrated Learning (CLIL) in contrasting contexts in secondary schools in England. The study, from which examples are drawn to exemplify the model, aimed to address the following three research questions: (1) in what ways does CLIL impact on learner motivation? (2) what are the main elements of CLIL that enhance motivation? and (3) to what extent might these be transferable to other contexts?
This new model has been tried and tested in three locations in England and reported as case studies. Following an initial visit to each institution to discuss the qualitative research, instruments were developed according to the proposed model. A questionnaire was drawn up and completed by one group prior to a 3-day data collection visit to each institution, during which interviews were held with academic leaders, the head of the department, the CLIL teacher(s), and two learner focus groups of six to eight learners. Interviews were recorded and transcribed verbatim. Two to four naturalistic observations of lessons were undertaken in each setting, as appropriate to the context, to provide colour and thereby a richer picture. Findings were subjected to an interpretive analysis using the themes derived from the process model and are reported elsewhere. The model proved to be an effective and coherent framework for planning the research, instrument design, data collection and interpretive analysis of data in these three contrasting settings, in which different models of language learning were in place. It is hoped that the proposed model, reported here together with exemplification and commentary, will enable teachers and researchers in a wide range of language learning contexts to investigate learner motivation in a systematic and in-depth manner.

Keywords: investigate, language-learning, learner motivation model, dynamic systems perspective

Procedia PDF Downloads 268
4627 Using Variation Theory in a Design-based Approach to Improve Learning Outcomes of Teachers' Use of Video and Live Experiments in Swedish Upper Secondary School

Authors: Andreas Johansson

Abstract:

Conceptual understanding needs to be grounded in observation of physical phenomena, experiences or metaphors. Observation of physical phenomena using demonstration experiments has a long tradition within physics education, and students need to develop mental models to relate the observations to concepts from scientific theories. This study investigates how live and video experiments involving an acoustic trap to visualize particle-field interaction, field properties and particle properties can help develop students' mental models, and how the two formats can be used differently to realize their potential as teaching tools. Initially, they were treated as analogs and the lesson designs were kept identical. With a design-based approach, the experimental and video designs, as well as best practices for each teaching tool, were then developed in iterations. Variation theory was used as a theoretical framework to analyze the planned and realized patterns of variation and invariance, in order to explain learning outcomes as measured by a pre-posttest consisting of conceptual multiple-choice questions inspired by the Force Concept Inventory and the Force and Motion Conceptual Evaluation. Interviews with students and teachers were used to inform the design of experiments and videos in each iteration. The lesson designs and the live and video experiments have been developed to help teachers improve student learning and make school physics more interesting, by involving experimental setups that are usually out of reach and by bridging the gap between what happens in classrooms and in science research. As growth in students' conceptual knowledge also raises their interest in physics, the aim is to increase their chances of pursuing careers within science, technology, engineering or mathematics.
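Pre/post conceptual tests of the kind mentioned above (FCI-style) are typically summarised by Hake's normalised gain, g = (post - pre) / (100 - pre). A minimal sketch with invented class scores, purely for illustration:

```python
# Hake's normalised gain for a (hypothetical) class of four students;
# scores are percent correct on the pre- and post-test.
pre  = [35.0, 40.0, 28.0, 50.0]
post = [62.0, 55.0, 47.0, 78.0]

# g = (post - pre) / (maximum possible gain)
gains = [(b - a) / (100.0 - a) for a, b in zip(pre, post)]
mean_gain = sum(gains) / len(gains)
print(round(mean_gain, 2))
```

Gains around 0.3 or above are conventionally read as "medium" in the physics education literature.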

Keywords: acoustic trap, design-based research, experiments, variation theory

Procedia PDF Downloads 83
4626 Detection of Flood-Prone Areas Using Multi-Criteria Evaluation, Geographical Information Systems and Fuzzy Logic: The Ardas Basin Case

Authors: Vasileiou Apostolos, Theodosiou Chrysa, Tsitroulis Ioannis, Maris Fotios

Abstract:

The severity of extreme phenomena is due to their ability to cause severe damage in a small amount of time. Compared with the total of annual natural disasters, floods affect the greatest number of people and cause the greatest damage. The detection of potential flood-prone areas constitutes one of the fundamental components of the European Natural Disaster Management Policy, directly connected to European Directive 2007/60. The aim of the present paper is to develop a new methodology that combines geographical information, fuzzy logic and multi-criteria evaluation methods so that the most vulnerable areas are identified. To this end, ten factors related to the geophysical, morphological, climatological/meteorological and hydrological characteristics of the basin were selected. Two models were then created to detect the areas most prone to flooding. The first model determined the weight of each factor using the Analytical Hierarchy Process (AHP), and the final map of possible flood spots was created using GIS and Boolean algebra. The second model used a combination of fuzzy logic and GIS, and a corresponding map was created. The application area of the aforementioned methodologies was the Ardas basin, due to the frequent and significant floods that have taken place there in recent years. The results were then compared with the already observed floods. The result analysis shows that both models can detect possible flood spots with great precision. As the fuzzy logic model is less time-consuming, it is considered the preferable model to apply to other areas. These results can contribute to the delineation of high-risk areas and to the creation of successful flood management plans.
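The AHP weighting step used in the first model can be sketched as follows: build a pairwise comparison matrix of Saaty-scale judgements, take the principal eigenvector as the weight vector, and check the consistency ratio. The three factors and all judgements below are hypothetical, not the paper's actual ten-factor matrix.

```python
# Sketch of AHP factor weighting via the principal eigenvector.
import numpy as np

# Saaty-scale pairwise judgements for three invented factors:
# rainfall vs. slope vs. land cover.
A = np.array([
    [1.0, 3.0, 5.0],   # rainfall judged 3x slope, 5x land cover
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                    # normalised factor weights

# Consistency check: CR < 0.1 means acceptably consistent judgements;
# RI = 0.58 is Saaty's random index for a 3x3 matrix.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.58
print(weights.round(3), round(CR, 3))
```

In a GIS workflow, the resulting weights would multiply the reclassified factor layers before the Boolean overlay.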

Keywords: analytical hierarchy process, flood prone areas, fuzzy logic, geographic information system

Procedia PDF Downloads 379
4625 Assessing the Social Impacts of Regional Services: The Case of a Portuguese Municipality

Authors: A. Camões, M. Ferreira Dias, M. Amorim

Abstract:

In recent years, the social economy has increasingly been seen as a viable means to address social problems. Social enterprises, as well as public projects and initiatives targeted to meet social purposes, offer organizational models that assume heterogeneity, flexibility and adaptability to 'real-world' problems. Despite the growing popularity of social initiatives, decision makers still face a paucity of models and tools to adequately assess their sustainability and their impacts, notably the nature of their contribution to economic growth. This study was carried out at the local level, by analyzing the social impact initiatives and projects promoted by the Municipality of Albergaria-a-Velha (Câmara Municipal de Albergaria-a-Velha, CMA), a municipality of 25,000 inhabitants in the central region of Portugal. This work focuses on the challenges related to the qualifications and employability of citizens, which stand out as key concerns in the Portuguese economy, particularly expressive in the context of small-scale cities and inland territories. The study offers a characterization of the Municipality, its socio-economic structure and challenges, followed by an exploratory analysis of data collected from multiple sources: the CMA's documental sources as well as privileged informants. The purpose is to conduct a detailed analysis of the CMA's social projects, aimed at characterizing their potential impact on the qualifications and employability of the citizens of the Municipality. The study encompasses a discussion of the socio-economic profile of the municipality, notably its asymmetries, the analysis of the social projects and initiatives, as well as of data derived from inquiries with actors involved in the implementation of the social projects and with their beneficiaries. Finally, the results obtained with the Better Life Index are included.
This study makes it possible to ascertain whether what is implicit in the literature corresponds to what is experienced in reality.

Keywords: measurement, municipalities, social economy, social impact

Procedia PDF Downloads 134
4624 Numerical Modeling of Turbulent Natural Convection in a Square Cavity

Authors: Mohammadreza Sedighi, Mohammad Said Saidi, Hesamoddin Salarian

Abstract:

A numerical study was performed to investigate the effect of using different turbulence models on the natural convection flow field and temperature distribution in a partially heated square cavity, compared against a benchmark case. The temperature of the right vertical wall is lower than that of the heater, while the other walls are insulated. Commercial CFD codes were used for the modeling. The standard k-ω model provided good agreement with the experimental data.

Keywords: buoyancy, cavity, CFD, heat transfer, natural convection, turbulence

Procedia PDF Downloads 341
4623 Statistical and Artificial Neural Network Modeling of Suspended Sediment in the Mina River Watershed at Wadi El-Abtal Gauging Station (Northern Algeria)

Authors: Redhouane Ghernaout, Amira Fredj, Boualem Remini

Abstract:

Suspended sediment transport is a serious problem worldwide, but it is much more worrying in certain regions of the world, as is the case in the Maghreb and more particularly in Algeria. It continues to take on disturbing proportions in Northern Algeria due to the variability of rains in time and space and the constant deterioration of vegetation. Its prediction is essential in order to identify its intensity and define the actions necessary for its reduction. The purpose of this study is to analyze the suspended sediment concentration data measured at the Wadi El-Abtal hydrometric station. It also aims to find and highlight regressive power relationships that can explain the suspended solid flow by the measured liquid flow, and to build artificial neural network models linking the flow, month and precipitation parameters with the solid flow. The results obtained show that the power function of the solid transport rating curve and artificial neural network models are appropriate methods for analysing and estimating suspended sediment transport in Wadi Mina at the Wadi El-Abtal hydrometric station. They made it possible to identify, in a fairly conclusive manner, a neural network model with four input parameters: the liquid flow Q, the month, and the daily precipitation measured at the representative stations (Frenda 013002 and Ain El-Hadid 013004) of the watershed. The model thus obtained makes it possible to estimate (interpolate and extrapolate) the daily solid flows even beyond the period of observation of solid flows (1985/86 to 1999/00), given the availability of average daily liquid flows and daily precipitation since 1953/1954.
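A four-input network of the kind described can be sketched with scikit-learn. This is not the authors' model: the data below are synthetic, generated from an invented power-law rating curve with a precipitation term, purely to show the structure (inputs Q, month, and rainfall at two stations; output solid flow).

```python
# Sketch: small neural network relating liquid flow, month and precipitation
# to solid flow, on synthetic data mimicking a power-law rating curve.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 400
Q = rng.uniform(0.5, 80.0, n)        # liquid flow (m3/s), invented range
month = rng.integers(1, 13, n)       # month of year
p1 = rng.uniform(0, 40, n)           # daily rain, station 1 (mm)
p2 = rng.uniform(0, 40, n)           # daily rain, station 2 (mm)
X = np.column_stack([Q, month, p1, p2])

# Synthetic target: Qs = a * Q**b modulated by precipitation, plus noise
Qs = 0.05 * Q**1.4 * (1 + 0.01 * (p1 + p2)) + rng.normal(0, 0.5, n)
y = (Qs - Qs.mean()) / Qs.std()      # standardise target for stable training

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                 max_iter=3000, random_state=0),
)
model.fit(X, y)
r2 = model.score(X, y)
print(round(r2, 2))
```

In practice the network would be validated on held-out years before extrapolating beyond the 1985/86 to 1999/00 observation period.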

Keywords: suspended sediment, concentration, regression, liquid flow, solid flow, artificial neural network, modeling, Mina, Algeria

Procedia PDF Downloads 103
4622 Data Mining Model for Predicting the Status of HIV Patients during Drug Regimen Change

Authors: Ermias A. Tegegn, Million Meshesha

Abstract:

Human Immunodeficiency Virus and Acquired Immunodeficiency Syndrome (HIV/AIDS) is a major cause of death in most African countries. Ethiopia is one of the most seriously affected countries in sub-Saharan Africa. Previously in Ethiopia, having HIV/AIDS was almost equivalent to a death sentence. With the introduction of Antiretroviral Therapy (ART), HIV/AIDS has become a chronic but manageable disease. The study focused on a data mining technique to predict the future living status of HIV/AIDS patients at the time of drug regimen change, when patients become toxic to the ART drug combination they are currently taking. The data are taken from the University of Gondar Hospital ART program database. A hybrid methodology is followed to explore the application of data mining to the ART program dataset. Data cleaning, handling of missing values and data transformation were used for preprocessing the data. The WEKA 3.7.9 data mining tool, classification algorithms, and expertise are utilized as means to address the research problem. By using four different classification algorithms (i.e., the J48 classifier, PART rule induction, Naïve Bayes and a neural network) and by adjusting their parameters, thirty-two models were built on the pre-processed University of Gondar ART program dataset. The performances of the models were evaluated using the standard metrics of accuracy, precision, recall, and F-measure. The most effective model to predict the status of HIV patients with drug regimen substitution is a pruned J48 decision tree with a classification accuracy of 98.01%. This study extracts interesting attributes such as Ever taking Cotrim, Ever taking TbRx, CD4 count, Age, Weight, and Gender so as to predict the status of drug regimen substitution. The outcome of this study can be used as an assistive tool for clinicians, helping them make more appropriate drug regimen substitutions. Future research directions are forwarded to come up with an applicable system in the area of the study.
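WEKA's pruned J48 is a C4.5-style decision tree; the corresponding step can be sketched with scikit-learn's CART implementation as a rough analog. The attribute names follow the abstract, but the data and the rule generating the labels are entirely invented for demonstration.

```python
# Sketch: decision-tree classifier as a stand-in for the pruned J48 model,
# trained on a synthetic stand-in for the ART dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 1000
cd4 = rng.integers(20, 800, n)        # CD4 count
age = rng.integers(15, 70, n)
weight = rng.uniform(35, 90, n)
ever_cotrim = rng.integers(0, 2, n)   # "Ever taking Cotrim" (0/1)
X = np.column_stack([cd4, age, weight, ever_cotrim])

# Invented labelling rule: low CD4 and low weight -> poor outcome (1)
y = ((cd4 < 200) & (weight < 55)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# max_depth acts here like J48's pruning: it keeps the tree small
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_tr, y_tr)
accuracy = tree.score(X_te, y_te)
print(round(accuracy, 3))
```

The fitted tree's top splits can then be inspected (e.g. with `sklearn.tree.export_text`) to recover interpretable rules like those the study reports.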

Keywords: HIV drug regimen, data mining, hybrid methodology, predictive model

Procedia PDF Downloads 142
4621 The Impact of COVID-19 on Antibiotic Prescribing in Primary Care in England: Evaluation and Risk Prediction of the Appropriateness of Type and Repeat Prescribing

Authors: Xiaomin Zhong, Alexander Pate, Ya-Ting Yang, Ali Fahmi, Darren M. Ashcroft, Ben Goldacre, Brian Mackenna, Amir Mehrkar, Sebastian C. J. Bacon, Jon Massey, Louis Fisher, Peter Inglesby, Kieran Hand, Tjeerd van Staa, Victoria Palin

Abstract:

Background: This study aimed to predict risks of potentially inappropriate antibiotic type and repeat prescribing and assess changes during COVID-19. Methods: With the approval of NHS England, we used the OpenSAFELY platform to access the TPP SystmOne electronic health record (EHR) system and selected patients prescribed antibiotics from 2019 to 2021. Multinomial logistic regression models predicted the patient’s probability of receiving an inappropriate antibiotic type or repeating the antibiotic course for each common infection. Findings: The population included 9.1 million patients with 29.2 million antibiotic prescriptions. 29.1% of prescriptions were identified as repeat prescribing. Those with same-day incident infection coded in the EHR had considerably lower rates of repeat prescribing (18.0%), and 8.6% had a potentially inappropriate type. No major changes in the rates of repeat antibiotic prescribing during COVID-19 were found. In the ten risk prediction models, good levels of calibration and moderate levels of discrimination were found. Important predictors included age, prior antibiotic prescribing, and region. Patients varied in their predicted risks. For sore throat, the range from 2.5 to 97.5th percentile was 2.7 to 23.5% (inappropriate type) and 6.0 to 27.2% (repeat prescription). For otitis externa, these numbers were 25.9 to 63.9% and 8.5 to 37.1%, respectively. Interpretation: Our study found no evidence of changes in the level of inappropriate or repeat antibiotic prescribing after the start of COVID-19. Repeat antibiotic prescribing was frequent and varied according to regional and patient characteristics. There is a need for treatment guidelines to be developed around antibiotic failure and clinicians provided with individualised patient information.
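The risk-prediction step above (a multinomial model over prescription outcomes) can be sketched as follows. The three outcome categories and predictors mirror the abstract (age, prior antibiotic prescribing, region), but the data and effect sizes are synthetic inventions, not OpenSAFELY results.

```python
# Sketch: multinomial logistic regression over three prescription outcomes
# (0 = appropriate course, 1 = inappropriate type, 2 = repeat prescription).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
age = rng.integers(0, 95, n)
prior_rx = rng.poisson(1.5, n)        # prior antibiotic prescriptions
region = rng.integers(0, 5, n)
X = np.column_stack([age, prior_rx, region])

# Invented outcome mechanism: age and prior prescribing raise repeat risk
logits = np.column_stack([
    np.zeros(n),
    -2.0 + 0.01 * age,
    -2.5 + 0.02 * age + 0.5 * prior_rx,
])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=pi) for pi in p])

clf = LogisticRegression(max_iter=2000).fit(X, y)
probs = clf.predict_proba(X)          # per-patient predicted risks
print(probs.shape)
```

Percentile ranges of `probs` across patients would correspond to the 2.5th to 97.5th percentile risk spreads quoted in the findings.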

Keywords: antibiotics, infection, COVID-19 pandemic, antibiotic stewardship, primary care

Procedia PDF Downloads 120
4620 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices

Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese

Abstract:

Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted into grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing and pattern of residuals.
Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure yielded good fat segmentation, making this simple visual approach for the quantification of the different fat fractions in dry-cured ham slices accurate and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will thus be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
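The core quantity computed above (fat fraction as a percentage of slice area) can be illustrated on a toy grey-scale "slice" with bright fat on darker muscle. This sketch uses only a gradient step and an intensity threshold; a real pipeline would include the full Canny stages (smoothing, non-maximum suppression, hysteresis). The image and thresholds are invented.

```python
# Toy sketch: segment bright "fat" regions in a synthetic grey-scale slice
# and report the fat fraction as a percentage of total slice area.
import numpy as np

img = np.full((100, 100), 80.0)              # darker muscle background
img[10:30, 10:90] = 200.0                    # intermuscular fat streak
img[60:70, 40:60] = 200.0                    # intramuscular fat fleck
img += np.random.default_rng(4).normal(0, 5, img.shape)  # acquisition noise

# Intensity gradient magnitude highlights fat/muscle boundaries ("edges")
gy, gx = np.gradient(img)
edges = np.hypot(gx, gy) > 25.0

# Intensity threshold segments the fat regions themselves
fat_mask = img > 140.0
fat_pct = 100.0 * fat_mask.mean()            # fat area as % of slice area
print(round(fat_pct, 1))
```

In the real method, the intermuscular and intramuscular fractions would be separated by region size/position before computing the per-fraction percentages regressed against the Soxhlet data.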

Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis

Procedia PDF Downloads 176
4619 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Prediction of Future Data

Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill

Abstract:

Health-care management systems are of great interest because they provide simple and fast management of all aspects relating to a patient, not only medical ones. Moreover, there are more and more pathologies whose diagnosis and treatment can only be carried out using medical imaging techniques. With an ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form for storage as well as for subsequent retrieval and processing. Data mining is the process of extracting information from large data sets through algorithms and techniques drawn from the fields of statistics, machine learning and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted from the independent variables. A forecast control should be used to establish whether the accuracy of the forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate engineering characteristics with consumer preferences regarding a new product; such models provide a platform whereby product developers can decide on the engineering characteristics in order to satisfy consumer preferences before developing the product. Recent analysis shows that these fuzzy regression methods are commonly used to model customer preferences. We propose testing the strength of an exponential regression model against a linear regression model.
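The closing proposal (comparing an exponential regression model against a linear one) can be sketched by fitting both to the same series and comparing residual sums of squares. The data below are synthetic, generated for illustration only.

```python
# Sketch: exponential vs. linear regression on synthetic growth data,
# compared by residual sum of squares (RSS).
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 4, 60)
y = 2.0 * np.exp(0.6 * x) + rng.normal(0, 0.5, x.size)  # exponential trend

# Linear model y = a + b x, by ordinary least squares
b_lin, a_lin = np.polyfit(x, y, 1)
rss_lin = np.sum((y - (a_lin + b_lin * x)) ** 2)

# Exponential model y = A exp(B x), fitted as a straight line in log space
B, lnA = np.polyfit(x, np.log(y), 1)
rss_exp = np.sum((y - np.exp(lnA) * np.exp(B * x)) ** 2)

print(rss_exp < rss_lin)
```

On genuinely exponential data the exponential model should win clearly; on memoryless data the comparison would reverse, which is the point of the proposed test.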

Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function

Procedia PDF Downloads 279
4618 Application of NBR 14861:2011 for the Design of Prestressed Hollow Core Slabs Subjected to Shear

Authors: Alessandra Aparecida Vieira França, Adriana de Paula Lacerda Santos, Mauro Lacerda Santos Filho

Abstract:

The purpose of this research is to study the behavior of precast prestressed hollow core slabs subjected to shear. To achieve this goal, shear tests were performed on hollow core slabs 26.5 cm thick, with and without a 5 cm concrete topping, with no cores filled, with two cores filled, and with three cores filled with concrete. The tests were performed according to the procedures recommended by FIP (1992) and EN 1168:2005, following the method presented in Costa (2009). The ultimate shear strength obtained in the tests was compared with the theoretical shear resistance calculated in accordance with the codes used in Brazil, namely NBR 6118:2003 and NBR 14861:2011. When calculating the shear resistance through the equations presented in NBR 14861:2011, it was found that this provision is much more accurate for calculating the shear strength of hollow core slabs than NBR 6118. Due to the large difference between the calculated results, even for slabs without filled cores, the authors consulted the committee that drafted NBR 14861:2011 and found that there is an error in the text of the standard: the suggested coefficient is actually double the required value. ABNT subsequently issued an amendment to NBR 14861:2011 with the necessary corrections. During the tests for the present study, it was confirmed that the concrete filling the cores contributes to increasing the shear strength of hollow core slabs. However, for slabs 26.5 cm thick, the number should be limited to a maximum of two filled cores, because most of the results for slabs with three filled cores were smaller. This confirmed the recommendation of NBR 14861:2011, which is consistent with standard practice.
After analyzing the cracking configuration and failure mechanisms of the hollow core slabs during the shear tests, strut-and-tie models were developed representing the forces acting on the slab at the moment of rupture. Through these models, the authors were able to calculate the tensile stress acting on the concrete ties (ribs) and to scale the geometry of these ties. The conclusions of the research are that the failure mechanism of hollow core slabs can be predicted using the strut-and-tie procedure with a good degree of accuracy, and that the Brazilian standard needed correction, both to revise the duplicated factor σcp (in NBR 14861:2011) and to limit the number of cores (holes) filled with concrete when increasing the shear resistance of the slab. It is also suggested to increase the number of test results for slabs 26.5 cm thick, and over a larger range of slab thicknesses, in order to obtain results of shear tests with cores concreted after the release of the prestressing force. Another set of shear tests must be performed on slabs with filled cores and a concrete topping reinforced with welded steel mesh, for comparison with the theoretical values calculated by the new revision of NBR 14861:2011.

Keywords: prestressed hollow core slabs, shear, strut-and-tie models

Procedia PDF Downloads 333
4617 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions and it allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way which facilitates their survival. In the case of cancerous cells, these take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications, being also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a spectrum that ranges from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical ones to computational ones. The study of cellular and molecular processes in cancer has also found valuable support in different simulation tools that, covering a spectrum as mentioned above, have allowed in silico experimentation on this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using the Cellulat bioinformatics tool, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie’s algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way.
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model (and as a result, simulate) the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way propose key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we were able to identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication, and therefore in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells surrounding a cancerous cell from being transformed.
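Gillespie's algorithm, one of the two elements Cellulat builds on, can be shown in its simplest form: an exact stochastic trajectory of a single first-order reaction X → 0. This is a sketch of the algorithm itself, not of the authors' Wnt/β-catenin model; the rate constant and initial count are invented.

```python
# Minimal Gillespie stochastic simulation of first-order decay X -> 0.
import random

def gillespie_decay(x0, k, t_end, rng):
    """Exact stochastic trajectory: with one reaction, each step fires it
    after an exponentially distributed waiting time with rate k * x."""
    t, x = 0.0, x0
    times, counts = [t], [x]
    while x > 0 and t < t_end:
        a = k * x                    # total propensity
        t += rng.expovariate(a)      # time to next reaction event
        x -= 1                       # fire the (only) reaction
        times.append(t)
        counts.append(x)
    return times, counts

rng = random.Random(42)
times, counts = gillespie_decay(x0=100, k=0.1, t_end=50.0, rng=rng)
print(counts[0], counts[-1])
```

With several reactions, the same loop additionally draws which reaction fires, with probability proportional to each reaction's propensity.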

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 249
4616 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur

Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh

Abstract:

The blast furnace is a counter-current process in which burden descends from the top and hot gases ascend from the bottom, chemically reducing iron oxides into liquid hot metal. One of the major problems of blast furnace operation is erratic burden descent inside the furnace. Sometimes this problem is so acute that burden descent stops, resulting in hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. The situation becomes more adverse when blast furnaces are operated at a low coke rate and a high coal injection rate with adverse raw materials like high-alumina ore and high-ash coke. Over the last three years, H Blast Furnace at Tata Steel was able to reduce the coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm, figures close to world benchmarks, and expand profitability. To sustain this regime, elimination of blast furnace irregularities like hanging, channeling, and scaffolding is essential. In this paper, the sustaining of a zero-hanging spell for three consecutive years under low coke rate operation, through improvement in burden characteristics, burden distribution, changes in the slag regime, casting practices, and adequate automation of furnace operation, is illustrated. Models have been created to improve understanding of the blast furnace process. A model has been developed to predict and maintain slag viscosity in the desired range so as to attain proper burden permeability. A channeling prediction model has also been developed to recognise channeling symptoms so that early actions can be initiated. These models have helped to a great extent in standardizing the control decisions of operators at H Blast Furnace of Tata Steel, Jamshedpur, thus achieving process stability for the last three years.

Keywords: hanging, channelling, blast furnace, coke

Procedia PDF Downloads 195
4615 Fires in Historic Buildings: Assessment of Evacuation of People by Computational Simulation

Authors: Ivana R. Moser, Joao C. Souza

Abstract:

Building fires are random phenomena that can be extremely violent, and the safe evacuation of people is the surest tactic for saving lives. Correct evacuation of buildings and other occupied spaces means leaving the place in a short time and by the appropriate route; it depends on how individuals perceive the space, on the architectural layout, and on the presence of appropriate wayfinding systems. As historic buildings were constructed in other times, when current safety requirements were generally not yet in force, it is necessary to adapt these spaces to make them safe. Computational evacuation-simulation models are widely used tools for assessing the safety of people in buildings and places of assembly, and coupling them with the analysis of human behaviour makes emergency-evacuation results more accurate and conclusive. The objective of this research is to evaluate the performance of buildings of historical interest with regard to the safe evacuation of people, through computer simulation using the PTV Viswalk software. The building studied is the Colégio Catarinense, a centennial building located in the city of Florianópolis, Santa Catarina, Brazil. The software models human-behaviour variables such as avoiding collisions with other pedestrians and avoiding obstacles. Scenarios were run on three-dimensional models, and the contribution to safety in risk situations was verified as an alternative measure, especially where the measures prescribed by current Brazilian fire safety codes cannot be applied. The simulations verified evacuation times under normal and emergency conditions, and indicated the bottlenecks and critical points of the studied building, in order to seek solutions that prevent and correct these undesirable events. It is understood that adopting an advanced, performance-based computational approach promotes greater knowledge of the building and of how people behave in these specific environments in emergency situations.
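Microscopic simulators such as PTV Viswalk resolve individual pedestrian trajectories, but the evacuation times they verify can be sanity-checked against the classic hydraulic (flow-capacity) hand calculation: total time is roughly the walking time of the last occupant plus the queueing time at the narrowest exit. The sketch below is that back-of-envelope estimate, not anything from the paper; the default specific flow of 1.3 persons per metre of exit width per second is a commonly cited order-of-magnitude value, here used purely for illustration.

```python
def evacuation_time_estimate(occupants, exit_width_m, travel_time_s,
                             specific_flow=1.3):
    """Hydraulic-model estimate of total evacuation time in seconds.

    occupants: number of people using the exit.
    exit_width_m: effective width of the controlling exit (m).
    travel_time_s: walking time to reach the exit (s).
    specific_flow: persons per metre of exit width per second
    (illustrative default).
    """
    queue_time = occupants / (specific_flow * exit_width_m)
    return travel_time_s + queue_time
```

For example, 260 occupants leaving through a 2 m exit after a 40 s walk yields roughly 140 s; a microscopic simulation result far from such an estimate usually signals a bottleneck the hand formula cannot see.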

Keywords: computer simulation, escape routes, fire safety, historic buildings, human behavior

Procedia PDF Downloads 187
4614 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations

Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang

Abstract:

Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows for the control of the information’s dissemination impact in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to uncover global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNNs-coupled Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.
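The first stage described above, using label propagation to turn a binary snapshot of infected nodes into a density over the graph, can be sketched generically. The iteration below (mixing each node's state with the random-walk average of its neighbours' states) is a standard label-propagation scheme, not the paper's exact formulation; the mixing weight and step count are illustrative assumptions.

```python
import numpy as np

def propagate_labels(adj, infected, steps=3, alpha=0.5):
    """Spread observed infection labels into a density over the graph.

    adj: 0/1 adjacency matrix (n x n); infected: 0/1 vector of observed
    infected states. Each step mixes a node's current state with the
    row-normalised (random-walk) average of its neighbours' states.
    """
    deg = adj.sum(axis=1, keepdims=True).astype(float)
    deg[deg == 0] = 1.0           # avoid division by zero for isolated nodes
    transition = adj / deg        # row-stochastic random-walk matrix
    density = infected.astype(float)
    for _ in range(steps):
        density = alpha * density + (1.0 - alpha) * (transition @ density)
    return density                # higher values cluster near likely sources
```

In the ODESI pipeline such a density (extended to state vectors) would then seed the GNN-coupled ODE stage; here it merely illustrates the label-propagation idea.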

Keywords: source identification, ordinary differential equations, label propagation, complex networks

Procedia PDF Downloads 20