Search results for: time series models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24585

22635 Utilization of Low-Cost Adsorbent Fly Ash for the Removal of Phenol from Water

Authors: Ihsanullah, Muataz Ali Atieh

Abstract:

In this study, a low-cost adsorbent, carbon fly ash (CFA), was used for the removal of phenol from water. The adsorbent was characterized by Thermogravimetric Analysis (TGA), a BET specific surface area analyzer, zeta potential measurements, and Field Emission Scanning Electron Microscopy (FE-SEM). The effects of pH, agitation speed, contact time, adsorbent dosage, and initial phenol concentration on the removal of phenol from water were studied. The optimum values of these variables for maximum removal of phenol were also determined. Both the Freundlich and Langmuir isotherm models were successfully applied to describe the experimental data. The results showed that the low-cost adsorbent carbon fly ash can be successfully applied for the removal of phenol from water.
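
A minimal sketch of how the Langmuir and Freundlich isotherms mentioned above might be fitted to equilibrium adsorption data; the concentration values and starting parameters below are illustrative placeholders, not measurements from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g) - not from the study
Ce = np.array([1.0, 2.5, 5.0, 10.0, 20.0, 40.0])
qe = np.array([0.8, 1.7, 2.9, 4.4, 5.9, 7.1])

def langmuir(Ce, qmax, KL):
    # Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    # Freundlich isotherm: qe = KF * Ce^(1/n)
    return KF * Ce ** (1.0 / n)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[10.0, 0.1])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0])

print(f"Langmuir:   qmax = {qmax:.2f} mg/g, KL = {KL:.3f} L/mg")
print(f"Freundlich: KF = {KF:.2f}, n = {n:.2f}")
```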

Keywords: phenol, fly ash, adsorption, carbon adsorbents

Procedia PDF Downloads 325
22634 The Model Establishment and Analysis of TRACE/FRAPTRAN for Chinshan Nuclear Power Plant Spent Fuel Pool

Authors: J. R. Wang, H. T. Lin, Y. S. Tseng, W. Y. Li, H. C. Chen, S. W. Chen, C. Shih

Abstract:

TRACE is developed by the U.S. NRC for nuclear power plant (NPP) safety analysis. In this research, we focus on the establishment and application of TRACE/FRAPTRAN/SNAP models for the Chinshan NPP (BWR/4) spent fuel pool. The geometry of the spent fuel pool is 12.17 m × 7.87 m × 11.61 m. Three TRACE/SNAP models were built: a one-channel, a two-channel, and a multi-channel model. Additionally, a cooling system failure of the spent fuel pool was simulated and analyzed using these models. According to the analysis results, the peak cladding temperature response was more accurate in the multi-channel TRACE/SNAP model. The results showed that the fuel became uncovered about 2.7 days after the cooling system failed. In order to estimate the detailed fuel rod performance, the FRAPTRAN code was used. According to the FRAPTRAN results, the highest cladding temperature occurred at node 21 of the fuel rod (the topmost node being node 23), and the cladding burst roughly after 3.7 days.

Keywords: TRACE, FRAPTRAN, BWR, spent fuel pool

Procedia PDF Downloads 357
22633 Analytical Description of Disordered Structures in Continuum Models of Pattern Formation

Authors: Gyula I. Tóth, Shaho Abdalla

Abstract:

Even though numerical simulations indeed have a significant precursory/supportive role in exploring the disordered phase displaying no long-range order in pattern formation models, studying the stability properties of this phase and determining the order of the ordered-disordered phase transition in these models necessitate an analytical description of the disordered phase. First, we will present the results of a comprehensive statistical analysis of a large number (1,000-10,000) of numerical simulations in the Swift-Hohenberg model, where the bulk disordered (or amorphous) phase is stable. We will show that the average free energy density (over configurations) converges, while the variance of the energy density vanishes with increasing system size in numerical simulations, which suggests that the disordered phase is a thermodynamic phase (i.e., its properties are independent of the configuration in the macroscopic limit). Furthermore, the structural analysis of this phase in Fourier space suggests that the phase can be modeled by a colored isotropic Gaussian noise, where any instant of the noise describes a possible configuration. Based on these results, we developed the general mathematical framework of finding a pool of solutions to partial differential equations in the sense of a continuous probability measure, which we will present briefly. Applying the general idea to the Swift-Hohenberg model, we show that the amorphous phase can be found and its properties can be determined analytically. As the general mathematical framework is not restricted to continuum theories, we hope that the proposed methodology will open a new chapter in studying disordered phases.
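
For readers unfamiliar with the model, a minimal pseudospectral sketch of one common form of the Swift-Hohenberg equation, u_t = r u - (1 + ∇²)² u - u³, is given below; the grid size, parameter r, time step, and random initial condition are illustrative only and are not the settings or ensembles analysed in the paper.

```python
import numpy as np

# Minimal 2D Swift-Hohenberg solver: u_t = r*u - (1 + lap)^2 u - u^3
N, L, r, dt, steps = 128, 32 * np.pi, 0.2, 0.05, 2000
rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal((N, N))           # random initial condition

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # angular wavenumbers
kx, ky = np.meshgrid(k, k)
lin = r - (1.0 - (kx**2 + ky**2)) ** 2          # linear operator in Fourier space

for _ in range(steps):
    u_hat = np.fft.fft2(u)
    nl_hat = np.fft.fft2(u**3)
    # semi-implicit (IMEX) Euler step: linear part implicit, cubic part explicit
    u_hat = (u_hat - dt * nl_hat) / (1.0 - dt * lin)
    u = np.real(np.fft.ifft2(u_hat))

# crude configuration statistics of the final field
print("mean(u):", u.mean(), "   <u^2>:", np.mean(u**2))
```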

Keywords: fundamental theory, mathematical physics, continuum models, analytical description

Procedia PDF Downloads 134
22632 The Application of the Analytic Basis Function Expansion Triangular-z Nodal Method for Neutron Diffusion Calculation

Authors: Kunpeng Wang, Hongchun Wu, Liangzhi Cao, Chuanqi Zhao

Abstract:

The distributions of homogeneous neutron flux within a node were expanded into a set of analytic basis functions that satisfy the diffusion equation at any point in a triangular-z node for each energy group, and nodes were coupled with each other through both the zero- and first-order partial neutron current moments across all the interfaces of the triangular prism simultaneously. Based on this method, a code, TABFEN, has been developed and applied to solve the neutron diffusion equation in complicated geometries. In addition, after a series of numerical derivations, the neutron adjoint diffusion equations can be written in a matrix form that is the same as that of the neutron diffusion equation; therefore, they can also be solved by TABFEN, and a low-high scan strategy is adopted to improve the efficiency. Four benchmark problems were tested by this method to verify its feasibility; the results show good agreement with the references, which demonstrates the efficiency and feasibility of this method.

Keywords: analytic basis function expansion method, arbitrary triangular-z node, adjoint neutron flux, complicated geometry

Procedia PDF Downloads 445
22631 A Polynomial Relationship for Prediction of COD Removal Efficiency of Cyanide-Inhibited Wastewater in Aerobic Systems

Authors: Eze R. Onukwugha

Abstract:

The presence of cyanide in wastewater is known to inhibit the normal functioning of bio-reactors, since it tends to poison reactor micro-organisms. Bench-scale models of activated sludge reactors with varying aspect ratios were operated for the treatment of cassava wastewater at several values of hydraulic retention time (HRT). The different values of HRT were achieved by using a peristaltic pump to vary the rate at which the wastewater was introduced into the reactor. The main parameters monitored were the cyanide concentration and the respective COD values of the influent and effluent. These observed values were then used to develop a polynomial model for the prediction of treatment efficiency.
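
A minimal sketch of how a polynomial relationship between the operating variables and COD removal efficiency might be fitted by least squares; the HRT, cyanide, and removal values below are placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical observations (placeholders): HRT (h), influent cyanide (mg/L), COD removal (%)
hrt     = np.array([4, 6, 8, 10, 12, 16, 20], dtype=float)
cyanide = np.array([5, 5, 10, 10, 15, 15, 20], dtype=float)
removal = np.array([52, 61, 58, 70, 66, 74, 71], dtype=float)

# Second-order polynomial model: E = b0 + b1*HRT + b2*CN + b3*HRT^2 + b4*CN^2 + b5*HRT*CN
X = np.column_stack([np.ones_like(hrt), hrt, cyanide, hrt**2, cyanide**2, hrt * cyanide])
coeffs, *_ = np.linalg.lstsq(X, removal, rcond=None)
print("fitted coefficients:", np.round(coeffs, 4))

# Predict removal efficiency for a new operating point (HRT = 14 h, cyanide = 12 mg/L)
new = np.array([1.0, 14.0, 12.0, 14.0**2, 12.0**2, 14.0 * 12.0])
print("predicted COD removal (%):", round(float(new @ coeffs), 1))
```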

Keywords: wastewater, aspect ratio, cyanide-inhibited wastewater, modeling

Procedia PDF Downloads 78
22630 Numerical Investigation of the Jacketing Method of Reinforced Concrete Column

Authors: S. Boukais, A. Nekmouche, N. Khelil, A. Kezmane

Abstract:

The first aim of this study is to develop a finite element model that can correctly predict the behavior of the reinforced concrete column. The second aim is to use the finite element model to investigate and evaluate the effect of strengthening the reinforced concrete column by jacketing, considering different interface contacts between the old and the new concrete. Four models were evaluated: one assuming perfect contact and the other three using friction coefficients of 0.1, 0.3, and 0.5. The simulation was carried out using the Abaqus software. The obtained results show that the jacketing reinforcement led to a significant increase in the global performance of the simulated reinforced concrete column.

Keywords: strengthening, jacketing, reinforced concrete column, Abaqus, simulation

Procedia PDF Downloads 146
22629 Neural Networks Underlying the Generation of Neural Sequences in the HVC

Authors: Zeina Bou Diab, Arij Daou

Abstract:

The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC, which lies at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each with its own cellular, electrophysiological, and functional properties. During singing, a large subset of motor cortex analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song, a large subset of basal ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time-locked to vocalizations, while HVCINT neurons fire tonically at a high average frequency throughout song, with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and specific glutamatergic and GABAergic pharmacology) via different architecture patterning scenarios, with the aim of replicating the in vivo firing behaviors. Through these networks, we are able to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might contribute to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing. Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) mono-synaptic HVCX to HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, among others. Replication is only a preliminary step that must be followed by model prediction and testing.
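
As an illustration of the conductance-based modelling approach described above, a minimal single-compartment Hodgkin-Huxley neuron is sketched below. It uses the classic squid-axon channel parameters purely as placeholders; the HVC-specific ion channels identified from the in vitro recordings and the network connectivity are not reproduced here.

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (squid axon), placeholders for HVC-specific channels
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387                # mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 100.0                                   # ms
t = np.arange(0.0, T, dt)
V, m, h, n = -65.0, 0.05, 0.6, 0.32                   # initial state
I_ext = np.where((t > 20) & (t < 80), 10.0, 0.0)      # step current (uA/cm^2)
trace = []

for I in I_ext:                                       # forward-Euler integration
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4     * (V - E_K)
    I_L  = g_L             * (V - E_L)
    V += dt * (I - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    trace.append(V)

trace = np.array(trace)
print("spikes (upward crossings of 0 mV):",
      int(np.sum((trace[1:] > 0) & (trace[:-1] <= 0))))
```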

Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird

Procedia PDF Downloads 70
22628 Automation of Student Attendance Management System Using BPM

Authors: Kh. Alaa, Sh. Sarah, J. Khowlah, S. Liyakathunsia

Abstract:

Education has become very important nowadays, and with the rapidly increasing number of students, taking attendance manually is becoming very difficult and time-consuming. In order to solve this problem, an automated solution is required. An effective automated system can be implemented to manage student attendance in different ways. This research discusses a unique class attendance system that integrates both face recognition and RFID techniques. The system focuses on reducing the time spent recording attendance during lectures and the time wasted on submitting and obtaining approval for absence excuses and sick leaves. As a result, the suggested solution will not only save time but also help eliminate fake attendance.

Keywords: attendance system, face recognition, RFID, process model, cost, time

Procedia PDF Downloads 375
22627 Seismic Hazard Assessment of Offshore Platforms

Authors: F. D. Konstandakopoulou, G. A. Papagiannopoulos, N. G. Pnevmatikos, G. D. Hatzigeorgiou

Abstract:

This paper examines the effects of pile-soil-structure interaction on the dynamic response of offshore platforms under the action of near-fault earthquakes. Two offshore platform models are investigated: one with completely fixed supports and one with piles clamped into deformable layered soil. The soil deformability for the second model is simulated using non-linear springs. These platform models are subjected to near-fault seismic ground motions. The role of the fault mechanism on the platforms' response is additionally investigated, while the study also examines the effects of different angles of incidence of the seismic records on the maximum response of each platform.

Keywords: hazard analysis, offshore platforms, earthquakes, safety

Procedia PDF Downloads 148
22626 Load Balancing Technique for Energy Efficiency in Cloud Computing

Authors: Rani Danavath, V. B. Narsimha

Abstract:

Cloud computing is emerging as a new paradigm of large-scale distributed computing. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: it is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded. It helps in the optimal utilization of resources, enhancing the performance of the system. The goal of load balancing is to minimize resource consumption and the carbon emission rate, which is a direct need of cloud computing. This motivates new metrics, energy consumption and carbon emission, for energy-efficient load balancing techniques in cloud computing. Existing load balancing techniques mainly focus on reducing overhead, service response time, and improving performance, but none of them have considered energy consumption and carbon emission. Therefore, in this paper we introduce a technique aimed at energy efficiency. This energy-efficient load balancing technique can be used to improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent, which will help to achieve green computing.
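
A minimal sketch of an energy-aware greedy policy that assigns each incoming task to the node whose relative load would remain lowest, then reports a power and carbon estimate. The node names, capacities, power figures, and carbon factor are illustrative assumptions and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float            # maximum load units (assumed)
    idle_power: float          # watts when idle (assumed)
    busy_power: float          # watts at full load (assumed)
    load: float = 0.0

    def power(self) -> float:  # simple linear power model between idle and full load
        util = self.load / self.capacity
        return self.idle_power + util * (self.busy_power - self.idle_power)

def assign(task_load: float, nodes: list[Node]) -> Node:
    # greedy balancing: pick the node whose utilization stays lowest after the task
    best = min(nodes, key=lambda n: (n.load + task_load) / n.capacity)
    best.load += task_load
    return best

CARBON_PER_KWH = 0.5  # kg CO2 per kWh, illustrative grid emission factor

nodes = [Node("n1", 100, 80, 200), Node("n2", 150, 100, 260), Node("n3", 120, 90, 220)]
for task in [20, 35, 10, 50, 25, 15]:
    chosen = assign(task, nodes)
    print(f"task {task} -> {chosen.name} (load {chosen.load}/{chosen.capacity})")

total_kw = sum(n.power() for n in nodes) / 1000.0
print(f"total power: {total_kw:.2f} kW, est. carbon: {total_kw * CARBON_PER_KWH:.2f} kg CO2/h")
```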

Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission

Procedia PDF Downloads 449
22625 A Biometric Template Security Approach to Fingerprints Based on Polynomial Transformations

Authors: Ramon Santana

Abstract:

The use of biometric identifiers in the field of information security, access control to resources, and authentication in ATMs and banking, among others, is of great concern because of the safety of biometric data. Eight vulnerabilities have been detected in the general architecture of a biometric system, six of which allow obtaining the minutiae template in plain text. The main consequence of obtaining minutiae templates is the loss of the biometric identifier for life. To mitigate these vulnerabilities, several models to protect minutiae templates have been proposed. However, vulnerabilities in the cryptographic security of these models still allow biometric data to be obtained in plain text. In order to increase cryptographic security and ease of reversibility, a minutiae template protection model is proposed. The model aims to provide cryptographic protection and facilitate the reversibility of data using two levels of security. The first level is the data transformation level, in which data invariant to rotation and translation are generated and a further irreversible transformation is applied. The second level is the evaluation level, in which the encryption key is generated and the data are evaluated using a defined evaluation function. The model is aimed at mitigating the known vulnerabilities of previously proposed models, basing its security on the infeasibility of polynomial reconstruction.

Keywords: fingerprint, template protection, bio-cryptography, minutiae protection

Procedia PDF Downloads 170
22624 Method Comprising One to One Web Based Real Time Communications

Authors: Lata Kiran Dey, Rajendra Kumar, Biren Karmakar

Abstract:

Web Real-Time Communications (WebRTC) is a collection of standards and protocols that provides real-time communication capabilities between web browsers and devices. This paper outlines the design and implementation of web real-time communications in secure web applications with audio and video call capabilities. The proposed application sets up a system that works on both desktop and mobile browsers. WebRTC also provides a set of standard JavaScript RTC APIs that work over the real-time communication framework, which helps to build a suitable communication application enabling audio, video, and message transfer between today's modern browsers with WebRTC support.

Keywords: WebRTC, SIP, RTC, JavaScript, SRTP, secure web sockets, browser

Procedia PDF Downloads 148
22623 What the Future Holds for Social Media Data Analysis

Authors: P. Wlodarczak, J. Soar, M. Ally

Abstract:

The dramatic rise in the use of Social Media (SM) platforms such as Facebook and Twitter provides access to an unprecedented amount of user data. Users may post reviews on products and services they bought, write about their interests, share ideas, or give their opinions and views on political issues. There is growing interest among organisations in analysing SM data for detecting new trends, obtaining user opinions on their products and services, or finding out about their online reputations. A recent research trend in SM analysis is making predictions based on sentiment analysis of SM. Often, indicators derived from historic SM data are represented as time series and correlated with a variety of real-world phenomena such as the outcome of elections, the development of financial indicators, box office revenue, and disease outbreaks. This paper examines the current state of research in the area of SM mining and predictive analysis and gives an overview of the analysis methods using opinion mining and machine learning techniques.
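
A minimal sketch of the prediction workflow described above: aggregate post-level sentiment scores into a daily time series and correlate it, at several lead times, with a real-world indicator. Both series here are randomly generated placeholders; in practice the sentiment scores would come from an opinion-mining model and the indicator from real data.

```python
import numpy as np
import pandas as pd

# Hypothetical post-level data: timestamp and a sentiment score in [-1, 1] (placeholders)
rng = np.random.default_rng(1)
posts = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=3000, freq="h"),
    "sentiment": rng.uniform(-1, 1, 3000),
})

# Aggregate to a daily sentiment time series
daily = posts.set_index("ts")["sentiment"].resample("D").mean()

# Hypothetical real-world indicator (e.g., daily sales or a market index), placeholder data
indicator = pd.Series(rng.normal(100, 5, len(daily)), index=daily.index)

# Correlate sentiment with the indicator when sentiment leads by k days
for k in range(0, 4):
    corr = daily.shift(k).corr(indicator)
    print(f"sentiment leading by {k} day(s): correlation = {corr:+.3f}")
```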

Keywords: social media, text mining, knowledge discovery, predictive analysis, machine learning

Procedia PDF Downloads 423
22622 A Study of the Relationship between Time Management Behaviour and Job Satisfaction of Higher Education Institutes in India

Authors: Sania K. Rao, Feza T. Azmi

Abstract:

The purpose of the present study is to explore the relationship between time management behaviour and job satisfaction of academicians at higher education institutes in India. The analyses were carried out with AMOS (version 20.0), and Confirmatory Factor Analysis (CFA) and Structural Equation Modelling (SEM) were conducted. The factor analysis and findings show that perceived control of time serves as a partial mediator, having a significant and positive influence on job satisfaction. Finally, a number of suggestions to improve one's time management behaviour are provided.

Keywords: time management behaviour, job satisfaction, higher education, India, mediation analysis

Procedia PDF Downloads 389
22621 Segregation Patterns of Trees and Grass Based on a Modified Age-Structured Continuous-Space Forest Model

Authors: Jian Yang, Atsushi Yagi

Abstract:

The tree-grass coexistence system is of great importance for forest ecology. Mathematical models have been proposed to study the dynamics of tree-grass coexistence and the stability of such systems. However, few of these models concentrate on the spatial dynamics of tree-grass coexistence. In this study, we modified an age-structured continuous-space population model for forests to obtain an age-structured continuous-space model of tree-grass competition. In the model, for thermal competition, adult trees can out-compete grass, and grass can out-compete seedlings. We studied the model mathematically to ensure that tree-grass coexistence solutions exist. Numerical experiments demonstrated that the fraction of area that trees or grass occupy can affect whether the coexistence is stable or not. We also tried regulating the mortality of adult trees while the other parameters and the fractions of area occupied by trees and grass were fixed; the results show that the mortality of adult trees is also a factor affecting the stability of tree-grass coexistence in this model.

Keywords: population-structured models, stabilities of ecosystems, thermal competitions, tree-grass coexistence systems

Procedia PDF Downloads 160
22620 Underwater Image Enhancement and Reconstruction Using CNN and the MultiUNet Model

Authors: Snehal G. Teli, R. J. Shelke

Abstract:

CNN and MultiUNet models form the framework of the proposed method for enhancing and reconstructing underwater images. The MultiUNet performs both multiscale feature merging and regeneration, while the CNN collects the relevant features. Extensive tests on benchmark datasets show that the proposed strategy performs better than the latest methods. As a result of this work, underwater images can be represented and interpreted with greater clarity in a number of underwater applications. This strategy will advance underwater exploration and marine research by enhancing real-time underwater image processing systems, underwater robotic vision, and underwater surveillance.

Keywords: convolutional neural network, image enhancement, machine learning, multiunet, underwater images

Procedia PDF Downloads 75
22619 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
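
A minimal sketch of the modelling pipeline described above (train/test split, cross-validation, MSE and R² evaluation across several regressors) using scikit-learn; the synthetic feature matrix and target stand in for the project dataset, which is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in features: productivity rate, estimated cost, duration,
# number of dependencies, scope-change magnitude, scope-change timing
rng = np.random.default_rng(42)
X = rng.uniform(size=(500, 6))
# Synthetic target: cost impact grows with change magnitude, late timing, and dependencies
y = 10 * X[:, 4] + 5 * X[:, 5] + 3 * X[:, 3] + rng.normal(0, 0.5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Linear": LinearRegression(),
    "Ridge": Ridge(alpha=1.0),
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
    "GradientBoosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    cv_r2 = cross_val_score(model, X_tr, y_tr, cv=5, scoring="r2").mean()
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name:16s} CV R2={cv_r2:.3f}  test MSE={mean_squared_error(y_te, pred):.3f}  "
          f"test R2={r2_score(y_te, pred):.3f}")
```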

Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes

Procedia PDF Downloads 39
22618 Churn Prediction for Savings Bank Customers: A Machine Learning Approach

Authors: Prashant Verma

Abstract:

Commercial banks are facing immense pressure, including financial disintermediation, interest rate volatility, and digital ways of finance. Retaining an existing customer is 5 to 25 times less expensive than acquiring a new one. This paper explores customer churn prediction based on various statistical and machine learning models and uses under-sampling to improve the predictive power of these models. The results show that, among the various machine learning models, Random Forest, which predicts churn with 78% accuracy, was found to be the most powerful model for this scenario. Customer vintage, customer age, average balance, occupation code, population code, average withdrawal amount, and average number of transactions were found to be the variables with high predictive power for the churn prediction model. The model can be deployed by commercial banks to prevent customer churn so that they can retain the funds kept by savings bank (SB) customers. The article suggests a customized campaign to be initiated by commercial banks to avoid SB customer churn. By providing better customer satisfaction and experience, commercial banks can limit customer churn and maintain their deposits.
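
A minimal sketch of the approach described above: under-sample the majority (non-churn) class to balance the training data, then fit a Random Forest classifier. The synthetic records and column names below are placeholders for the bank's customer attributes, not the study's data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

# Synthetic, imbalanced data standing in for customer records (placeholder columns)
rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({
    "vintage_months": rng.integers(1, 240, n),
    "age": rng.integers(18, 85, n),
    "avg_balance": rng.exponential(20000, n),
    "avg_withdrawal": rng.exponential(3000, n),
    "n_transactions": rng.poisson(12, n),
})
churn = (rng.uniform(size=n) < 0.08).astype(int)          # ~8% churners

# Under-sample the majority class to a 1:1 ratio before training
churners = df[churn == 1]
keepers = df[churn == 0].sample(len(churners), random_state=0)
X = pd.concat([churners, keepers])
y = np.array([1] * len(churners) + [0] * len(keepers))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", round(accuracy_score(y_te, pred), 3))
print(classification_report(y_te, pred, target_names=["stay", "churn"]))
```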

Keywords: savings bank, customer churn, customer retention, random forests, machine learning, under-sampling

Procedia PDF Downloads 143
22617 Optimal Location of the I/O Point in the Parking System

Authors: Jing Zhang, Jie Chen

Abstract:

In this paper, we deal with the optimal I/O point location in an automated parking system. In this system, the S/R machine (storage and retrieval machine) travels independently in the vertical and horizontal directions. Based on the characteristics of the parking system and the basic principles of AS/RS (Automated Storage and Retrieval System) design, we obtain a continuous model in units of time. For the single-command cycle under a randomized storage policy, we calculate the probability density function of the system travel time and thus develop the travel time model. We confirm that the travel time model performs well by comparing it with the discrete case. Finally, we establish the optimization model by minimizing the expected travel time, and it is shown that the optimal location of the I/O point lies at the middle of the upper left-hand corner.
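
A minimal numerical sketch of the underlying idea: under a randomized storage policy the demand is uniform over all slots, and since the S/R machine moves in both directions simultaneously the travel time to a slot is the larger of the two axis times (Chebyshev travel). The sketch evaluates candidate I/O points along the rack boundary and picks the one with the smallest expected single-command cycle time. The rack dimensions, speeds, and candidate set are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Illustrative rack: width 60 m, height 20 m; horizontal and vertical speeds of the S/R machine
W, H, v_x, v_y = 60.0, 20.0, 2.0, 0.5          # m, m, m/s, m/s

# Uniform grid of storage slots (randomized storage policy => uniform demand)
xs, ys = np.meshgrid(np.linspace(0, W, 61), np.linspace(0, H, 21))

def expected_cycle_time(io_x: float, io_y: float) -> float:
    # Chebyshev travel: both axes move simultaneously, so time = max of the two axis times
    t = np.maximum(np.abs(xs - io_x) / v_x, np.abs(ys - io_y) / v_y)
    return 2.0 * t.mean()                      # single-command cycle: travel out and back

# Candidate I/O locations along the left edge and the bottom edge of the rack
candidates = [(0.0, y) for y in np.linspace(0, H, 41)] + \
             [(x, 0.0) for x in np.linspace(0, W, 61)]
best = min(candidates, key=lambda p: expected_cycle_time(*p))
print("best I/O point:", best, "expected cycle time (s):", round(expected_cycle_time(*best), 1))
```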

Keywords: parking system, optimal location, response time, S/R machine

Procedia PDF Downloads 409
22616 Assessing the Citizens' Adoption of E-Government Platforms in the North West Province Local Governments, South Africa

Authors: Matsobane Mosetja, Nehemiah Mavetera, Ernest Mnkandla

Abstract:

Local governments in South Africa are responsible for the provision of basic services. There are countless benefits that come with e-Government platforms if they are properly implemented to help local governments deliver these basic services to citizens. This study investigates factors influencing the adoption and use of e-Government platforms by citizens in the North West Province, South Africa. The study is set against a background of significant change in South Africa, where government services are delivered electronically. The outcomes of the study revealed that: 1) decisions on the development of e-Government platforms are made based on a series of consultative forums; 2) the municipalities are open to constructive criticism on their online platforms; 3) the municipalities have room for dialogue on how best to improve service delivery; 4) the municipalities are accessible to the citizens at all times; 5) the municipalities are finding ways to empower citizens to be part of the collective; and lastly, 6) e-Government provides room for online discussion.

Keywords: e-government, e-government platforms, user acceptance, local government

Procedia PDF Downloads 393
22615 Approach for the Mathematical Calculation of the Damping Factor of Railway Bridges with Ballasted Track

Authors: Andreas Stollwitzer, Lara Bettinelli, Josef Fink

Abstract:

The expansion of the high-speed rail network over the past decades has resulted in new challenges for engineers, including traffic-induced resonance vibrations of railway bridges. Excessive resonance-induced speed-dependent accelerations of railway bridges during high-speed traffic can lead to negative consequences such as fatigue symptoms, distortion of the track, destabilisation of the ballast bed, and potentially even derailment. A realistic prognosis of bridge vibrations during high-speed traffic must not only rely on the right choice of an adequate calculation model for both bridge and train but first and foremost on the use of dynamic model parameters which reflect reality appropriately. However, comparisons between measured and calculated bridge vibrations are often characterised by considerable discrepancies, whereas dynamic calculations overestimate the actual responses and therefore lead to uneconomical results. This gap between measurement and calculation constitutes a complex research issue and can be traced to several causes. One major cause is found in the dynamic properties of the ballasted track, more specifically in the persisting, substantial uncertainties regarding the consideration of the ballasted track (mechanical model and input parameters) in dynamic calculations. Furthermore, the discrepancy is particularly pronounced concerning the damping values of the bridge, as conservative values have to be used in the calculations due to normative specifications and lack of knowledge. By using a large-scale test facility, the analysis of the dynamic behaviour of ballasted track has been a major research topic at the Institute of Structural Engineering/Steel Construction at TU Wien in recent years. This highly specialised test facility is designed for isolated research of the ballasted track's dynamic stiffness and damping properties – independent of the bearing structure. Several mechanical models for the ballasted track consisting of one or more continuous spring-damper elements were developed based on the knowledge gained. These mechanical models can subsequently be integrated into bridge models for dynamic calculations. Furthermore, based on measurements at the test facility, model-dependent stiffness and damping parameters were determined for these mechanical models. As a result, realistic mechanical models of the railway bridge with different levels of detail and sufficiently precise characteristic values are available for bridge engineers. Besides that, this contribution also presents another practical application of such a bridge model: Based on the bridge model, determination equations for the damping factor (as Lehr's damping factor) can be derived. This approach constitutes a first-time method that makes the damping factor of a railway bridge calculable. A comparison of this mathematical approach with measured dynamic parameters of existing railway bridges illustrates, on the one hand, the apparent deviation between normatively prescribed and in-situ measured damping factors. On the other hand, it is also shown that a new approach, which makes it possible to calculate the damping factor, provides results that are close to reality and thus raises potentials for minimising the discrepancy between measurement and calculation.

Keywords: ballasted track, bridge dynamics, damping, model design, railway bridges

Procedia PDF Downloads 164
22614 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies have been carried out as an attempt to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the tumor process of breast cancer. However, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases which may help to improve the prediction accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and the corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV that reflects the relations between the primary tumor (PT) and metastases (MTS); 3) analyzing the scope of application of the CoM-IV; and 4) implementing the model as a software tool. The CoM-IV is based on an exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows different growth periods of the primary tumor and primary metastases to be calculated: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for primary metastases; and 3) the 'visible period' for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; and 3) is the first predictor that makes forecasts using only current patient data, whereas the others are based on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of primary metastases appearance; c) have a higher average prediction accuracy than the other tools; and d) can improve forecasts of breast cancer survival and facilitate the optimization of diagnostic tests. The following are calculated by CoM-IV: the number of doublings for the 'non-visible' and 'visible' growth periods of primary metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of primary metastases. The CoM-IV enables, for the first time, the whole natural history of primary tumor and primary metastases growth to be predicted at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. In summary: a) the CoM-IV correctly describes primary tumor and primary distant metastases growth at stage IV (T1-4N0-3M1), with (N1-3) or without (N0) regional metastases in lymph nodes; b) it facilitates the understanding of the appearance period and manifestation of primary metastases.
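
A minimal sketch of the exponential-growth quantities reported by models of this kind: the number of doublings from a single cell to a tumor of a given diameter, and the volume doubling time estimated from two measurements. The diameters, interval, and assumed cell volume below are illustrative, not parameters of the CoM-IV.

```python
import math

def sphere_volume(diameter_cm: float) -> float:
    return math.pi / 6.0 * diameter_cm ** 3

CELL_VOLUME_CM3 = 1.0e-9   # common assumption: about 10^9 cells per cm^3 of tumor

def doublings_from_single_cell(diameter_cm: float) -> float:
    # number of volume doublings from one cell to a tumor of the given diameter
    return math.log2(sphere_volume(diameter_cm) / CELL_VOLUME_CM3)

def volume_doubling_time(d1_cm: float, d2_cm: float, days_between: float) -> float:
    # exponential growth: V(t) = V0 * 2^(t/DT)  =>  DT = t*ln2 / ln(V2/V1)
    return days_between * math.log(2) / math.log(sphere_volume(d2_cm) / sphere_volume(d1_cm))

# Illustrative example: a tumor found at 1.5 cm that grows to 2.0 cm in 90 days
print("doublings to reach 1.5 cm:", round(doublings_from_single_cell(1.5), 1))
print("volume doubling time (days):", round(volume_doubling_time(1.5, 2.0, 90), 1))
```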

Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival

Procedia PDF Downloads 335
22613 Effect of the Deposition Time of Hydrogenated Nanocrystalline Si Grown on Porous Alumina Film on Glass Substrate by Plasma Processing Chemical Vapor Deposition

Authors: F. Laatar, S. Ktifa, H. Ezzaouia

Abstract:

The Plasma Enhanced Chemical Vapor Deposition (PECVD) method is used to deposit hydrogenated nanocrystalline silicon films (nc-Si:H) on Porous Anodic Alumina Films (PAF) on glass substrates at different deposition durations. The influence of the deposition time on the physical properties of nc-Si:H grown on PAF was investigated through an extensive correlation between the micro-structural and optical properties of these films. In this paper, we present an extensive study of the morphological, structural, and optical properties of these films by Atomic Force Microscopy (AFM), X-Ray Diffraction (XRD), and UV-Vis-NIR spectrometry. It was found that changes in the deposition time can modify the film thickness and surface roughness and eventually improve the optical properties of the composite. The optical properties (optical thicknesses, refractive indices (n), absorption coefficients (α), extinction coefficients (k), and the values of the optical transitions EG) of these samples were obtained using the transmittance T and reflectance R spectra recorded by the UV-Vis-NIR spectrometer. We used the Cauchy and Wemple-DiDomenico models for the analysis of the refractive index dispersion and the determination of the optical properties of these films.
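
A minimal sketch of fitting the Cauchy dispersion relation n(λ) = A + B/λ² + C/λ⁴, one of the two models mentioned above, to refractive-index values extracted from the transmittance spectra; the wavelengths and index values below are placeholders, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: wavelength (nm) and refractive index extracted from T/R spectra
lam = np.array([500, 600, 700, 800, 900, 1000, 1200, 1500], dtype=float)
n_meas = np.array([2.45, 2.32, 2.25, 2.20, 2.17, 2.15, 2.12, 2.10])

def cauchy(lam, A, B, C):
    # Cauchy dispersion relation: n(lambda) = A + B/lambda^2 + C/lambda^4 (lambda in nm)
    return A + B / lam**2 + C / lam**4

(A, B, C), _ = curve_fit(cauchy, lam, n_meas, p0=[2.0, 1e4, 1e9])
print(f"A = {A:.3f}, B = {B:.3e} nm^2, C = {C:.3e} nm^4")
print("n at 632.8 nm:", round(cauchy(632.8, A, B, C), 3))
```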

Keywords: hydrogenated nanocrystalline silicon, plasma processing chemical vapor deposition, X-ray diffraction, optical properties

Procedia PDF Downloads 377
22612 Time to CT in Major Trauma in Coffs Harbour Health Campus - The Australian Rural Centre Experience

Authors: Thampi Rawther, Jack Cecire, Andrew Sutherland

Abstract:

Introduction: CT facilitates the diagnosis of potentially life-threatening injuries and enables early management. There is evidence that reduced CT acquisition time reduces mortality and length of hospital stay. Currently, there are variable recommendations for ideal timing. Indeed, the NHS standard contract for a major trauma service and STAG both recommend immediate access to CT within a maximum time of 60 min and appropriate reporting within 60 min of the scan. At Coffs Harbour Health Campus (CHHC), a CT radiographer is on site between 8am and 11pm. Aim: To investigate the average time to CT at CHHC and assess for any significant relationship between time to CT and injury severity score (ISS) or time of triage. Method: All major trauma calls between Jan 2021 and Oct 2021 were audited (N=87). Patients were excluded if they went from ED to theatre. Time to CT is defined as the time between triage and the timestamp on the first CT image. The median and interquartile range were used as measures of central tendency as the data were not normally distributed, and the chi-square test was used to determine association. Results: The median time to CT was 51.5 min (IQR 40-74). We found no relationship between time to CT and ISS (P=0.18) or between time of triage and time to CT (P=0.35). We compared this to other centres such as John Hunter Hospital and Gold Coast Hospital, where the median CT acquisition times were 76 min (IQR 52-115) and 43 min, respectively. Conclusion: This shows an avenue for improvement, given that 35% of CTs took longer than 30 min. Furthermore, being proactive and aware of time to CT as an important factor in trauma management can be another avenue for improvement. Based on this, we will re-audit in 12-24 months to assess whether any improvement has been made.

Keywords: imaging, rural surgery, trauma surgery, improvement

Procedia PDF Downloads 102
22611 Heuristic to Generate Random X-Monotone Polygons

Authors: Kamaljit Pati, Manas Kumar Mohanty, Sanjib Sadhu

Abstract:

A heuristic has been designed to generate a random simple monotone polygon from a given set of n points lying in a 2-dimensional plane. Our heuristic generates a random monotone polygon in O(n) time after O(n log n) preprocessing time, which improves on previous work in which a random monotone polygon is produced in the same O(n) time but with O(k) preprocessing time for n < k < n². However, our heuristic does not generate all possible random polygons with uniform probability. The space complexity of the proposed heuristic is O(n).
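
While the paper's generation heuristic relies on visibility and chain constraints that are not reproduced here, a small sketch of the underlying definition may be useful: a polygon is x-monotone exactly when its cyclic vertex sequence has a single locally leftmost and a single locally rightmost vertex (assuming distinct x-coordinates). The check below and its example polygons are illustrative only.

```python
def is_x_monotone(polygon):
    """Check whether a simple polygon (list of (x, y) vertices in boundary order)
    is x-monotone. Assumes distinct x-coordinates (general position)."""
    n = len(polygon)
    local_min = local_max = 0
    for i in range(n):
        prev_x = polygon[i - 1][0]
        cur_x = polygon[i][0]
        next_x = polygon[(i + 1) % n][0]
        if cur_x < prev_x and cur_x < next_x:
            local_min += 1
        if cur_x > prev_x and cur_x > next_x:
            local_max += 1
    # x-monotone <=> exactly one locally leftmost and one locally rightmost vertex
    return local_min == 1 and local_max == 1

# An x-monotone hexagon vs. a polygon whose boundary reverses x-direction twice extra
print(is_x_monotone([(0, 0), (2, -1), (4, 0), (5, 2), (3, 3), (1, 2)]))   # True
print(is_x_monotone([(0, 0), (4, 1), (2, 2), (5, 3), (1, 4)]))            # False
```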

Keywords: sorting, monotone polygon, visibility, chain

Procedia PDF Downloads 427
22610 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction

Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan

Abstract:

Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk is very helpful for reducing the effects of the disease. The present study aimed to collect data related to risk factors of myocardial infarction from patients' medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. The data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-sectioned data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential in facilitating the management of a patient with a specific disease. Therefore, health interventions or changes in lifestyle can be implemented based on these models for improving the health conditions of the individuals at risk.

Keywords: decision trees, neural network, myocardial infarction, data mining

Procedia PDF Downloads 429
22609 Signs-Only Compressed Row Storage Format for Exact Diagonalization Study of Quantum Fermionic Models

Authors: Michael Danilov, Sergei Iskakov, Vladimir Mazurenko

Abstract:

The present paper describes a high-performance parallel realization of an exact diagonalization solver for quantum-electron models on a shared-memory computing system. The proposed algorithm contains a storage format for efficiently computing the eigenvalues and eigenvectors of a quantum electron Hamiltonian matrix. The results of test calculations carried out for a 15-site Hubbard model demonstrate a reduction in the required memory and good multiprocessor scalability, while maintaining performance of the same order as compressed row storage.
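
As background for the storage format discussed above, a small sketch of standard compressed row storage (CRS/CSR) and a matrix-vector product is given below. For Hubbard-type Hamiltonians whose off-diagonal elements all share the same magnitude, the value array can be reduced to one sign per entry, which is the idea the title refers to; the sign-only variant shown here is an illustrative simplification, not the paper's implementation.

```python
import numpy as np

def to_csr(dense):
    """Convert a dense matrix to CSR arrays: values, column indices, row pointers."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x directly from the CSR arrays."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Toy hopping-like matrix whose off-diagonal entries are all +t or -t
t = 1.0
A = np.array([[0, -t, 0, t],
              [-t, 0, t, 0],
              [0, t, 0, -t],
              [t, 0, -t, 0]])
values, col_idx, row_ptr = to_csr(A)
# "Signs-only" idea: since |value| == t everywhere, store just the signs (1 bit each)
signs = np.sign(values).astype(np.int8)
x = np.array([1.0, 0.0, 2.0, -1.0])
print(csr_matvec(t * signs, col_idx, row_ptr, x))   # same result as the dense product
print(A @ x)
```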

Keywords: sparse matrix, compressed format, Hubbard model, Anderson model

Procedia PDF Downloads 402
22608 The Road Ahead: Merging Human Cyber Security Expertise with Generative AI

Authors: Brennan Lodge

Abstract:

Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance, Risk and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models have their downsides: LLMs cannot easily expand or revise their memory, cannot straightforwardly provide insight into their predictions, and may produce "hallucinations." Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure that pairs parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights, drawing from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG's dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy. It underscores AI's emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
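
A minimal sketch of the retrieval step in a RAG pipeline for compliance questions: index a small document corpus, retrieve the most relevant passages for a query, and assemble them into a prompt for a generative model. TF-IDF stands in for the dense vector index described in the paper, and `call_llm` is a hypothetical placeholder for whatever LLM endpoint would actually be used.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative corpus standing in for a GRC knowledge base
corpus = [
    "Access to production systems must be reviewed quarterly by the control owner.",
    "Encryption keys shall be rotated at least every 12 months.",
    "Incident response plans must be tested annually and after major changes.",
    "Third-party vendors require a completed risk assessment before onboarding.",
]

vectorizer = TfidfVectorizer()                     # stand-in for a dense embedding index
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q_vec = vectorizer.transform([query])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def call_llm(prompt: str) -> str:                  # hypothetical LLM call, not a real API
    raise NotImplementedError("plug in the chosen LLM endpoint here")

query = "How often do we need to rotate encryption keys?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)     # in a real pipeline, pass `prompt` to call_llm(prompt)
```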

Keywords: cybersecurity, gen AI, retrieval augmented generation, cybersecurity defense strategies

Procedia PDF Downloads 95
22607 A Comparative Study on Electrical Characteristics of Au/n-SiC Structure with and without Zn-Doped PVA Interfacial Layer at Room Temperature

Authors: M. H. Aldahrob, A. Kokce, S. Altindal, H. E. Lapa

Abstract:

In order to obtain detailed information about the effects of the (Zn-doped PVA) interfacial layer, surface states (Nss), and series resistance (Rs) on the electrical characteristics, Au/n-type 4H-SiC (MS) structures both with and without a (Zn-doped PVA) interfacial layer were fabricated for comparison. Their main electrical parameters were investigated using forward and reverse bias current-voltage (I-V), capacitance-voltage (C-V), and conductance-voltage (G/ω-V) measurements performed at room temperature. The experimental results show that the values of the ideality factor (n), zero-bias barrier height (ΦB0), Rs, rectification ratio (RR = IF/IR), and the density of Nss are strong functions of the interfacial layer and the applied bias voltage. The energy distribution profile of Nss was obtained from the forward bias I-V data by taking into account the voltage-dependent effective barrier height (ΦB0) and ideality factor (n(V)). The voltage-dependent profile of Rs was also obtained both by using Ohm's law and by the Nicollian and Brews method. The other main diode parameters, such as the donor doping concentration (ND), Fermi energy level (EF), barrier height (ΦB0), and depletion layer width (WD), were obtained using the intercept and slope of the reverse bias C⁻² vs. V plots. It was found that the (Zn-doped PVA) interfacial layer leads to a considerable decrease in the values of Nss, Rs, and leakage current, and to an increase in the shunt resistance (Rsh) and RR. Therefore, we can say that the use of a thin (Zn-doped PVA) interfacial layer can considerably improve the performance of the MS structure.
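
A minimal sketch of how the ideality factor n and zero-bias barrier height ΦB0 are typically extracted from the linear region of a forward-bias ln(I)-V plot using the thermionic emission relation I = I0·exp(qV/nkT); the current-voltage samples, diode area, and Richardson constant below are illustrative placeholders rather than the paper's measured values.

```python
import numpy as np

q, kB, T = 1.602e-19, 1.381e-23, 300.0             # C, J/K, K
kT_q = kB * T / q                                   # thermal voltage (~0.0259 V)
A_diode = 7.85e-3                                   # diode area in cm^2 (illustrative)
A_star = 146.0                                      # Richardson constant assumed for 4H-SiC, A/(cm^2 K^2)

# Placeholder forward-bias I-V data in the linear (V > 3kT/q) region
V = np.array([0.20, 0.25, 0.30, 0.35, 0.40, 0.45])              # volts
I = 1e-9 * np.exp(V / (1.8 * kT_q))                              # amps (synthetic, n ~ 1.8)

# Linear fit of ln(I) vs V:  ln I = ln I0 + qV/(n k T)
slope, intercept = np.polyfit(V, np.log(I), 1)
n = 1.0 / (kT_q * slope)                            # ideality factor
I0 = np.exp(intercept)                              # saturation current
phi_B = kT_q * np.log(A_diode * A_star * T**2 / I0) # zero-bias barrier height (eV)

print(f"ideality factor n = {n:.2f}")
print(f"saturation current I0 = {I0:.2e} A")
print(f"barrier height phi_B0 = {phi_B:.2f} eV")
```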

Keywords: interfacial polymer layer, thickness dependence, electric and dielectric properties, series resistance, interface state

Procedia PDF Downloads 248
22606 Application of Signature Verification Models for Document Recognition

Authors: Boris M. Fedorov, Liudmila P. Goncharenko, Sergey A. Sybachin, Natalia A. Mamedova, Ekaterina V. Makarenkova, Saule Rakhimova

Abstract:

In modern economic conditions, the question of whether a signature on digital documents can be correctly recognized in order to verify an expression of will or confirm a certain operation is highly relevant. Additional processing complexity lies in the dynamic variability of the signature for each individual, as well as in the way the information is processed, because the signature constitutes biometric data. The article discusses the use of artificial intelligence models to improve the quality of signature confirmation in document recognition. An analysis of several possible options for using such models is carried out. The results of the study show that it is possible to correctly determine the authenticity of a signature even on small samples.

Keywords: signature recognition, biometric data, artificial intelligence, neural networks

Procedia PDF Downloads 148