Search results for: discrete hazard model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17081

16871 Discrete Estimation of Spectral Density for Alpha Stable Signals Observed with an Additive Error

Authors: R. Sabre, W. Horrigue, J. C. Simon

Abstract:

This paper addresses two difficulties encountered in practice when observing a continuous-time process. The first is that we cannot observe a process over an entire time interval; we can only take discrete observations. The second is that the process is frequently observed with a constant additive error. It is important to give an estimator of the spectral density of such a process that takes into account the additive observation error and the choice of the discrete observation times. In this work, we propose an estimator based on spectral smoothing of the periodogram by the polynomial Jackson kernel, which reduces the additive error. In order to avoid the aliasing phenomenon, this estimator is constructed from observations taken at well-chosen times so as to restrict the estimator to the domain where the spectral density is non-zero. We show that the proposed estimator is asymptotically unbiased and consistent. We thus obtain an estimate that resolves the two difficulties concerning the choice of the observation instants of a continuous-time process and observations affected by a constant error.
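As a rough illustration of the spectral-smoothing idea only, the sketch below uses synthetic data and a plain moving-average spectral window as a stand-in for the polynomial Jackson kernel, and ignores the alias-avoiding choice of observation times; it shows how a periodogram of discretely sampled, error-contaminated observations can be smoothed.

```python
import numpy as np

def smoothed_periodogram(x, dt, m=5):
    """Raw periodogram of a discretely sampled signal, smoothed with a
    simple (2m+1)-point moving-average spectral window."""
    n = len(x)
    x = x - x.mean()                       # crude removal of the constant additive error
    freqs = np.fft.rfftfreq(n, d=dt)
    periodogram = (dt / n) * np.abs(np.fft.rfft(x)) ** 2
    kernel = np.ones(2 * m + 1) / (2 * m + 1)
    return freqs, np.convolve(periodogram, kernel, mode="same")

# toy example: harmonic signal observed at discrete times with a constant error
rng = np.random.default_rng(0)
dt, n = 0.01, 2048
t = dt * np.arange(n)
x = np.sin(2 * np.pi * 5 * t) + 0.3 + 0.2 * rng.standard_normal(n)
f, s = smoothed_periodogram(x, dt)
print(f[np.argmax(s)])                     # peak close to the 5 Hz component
```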

Keywords: spectral density, stable processes, aliasing, periodogram

Procedia PDF Downloads 114
16870 The Normal-Generalized Hyperbolic Secant Distribution: Properties and Applications

Authors: Hazem M. Al-Mofleh

Abstract:

In this paper, a new four-parameter univariate continuous distribution, called the Normal-Generalized Hyperbolic Secant (NGHS) distribution, is defined and studied. Some general and structural distributional properties are investigated and discussed, including central and non-central n-th moments, incomplete moments, quantile and generating functions, the hazard function, Rényi and Shannon entropies, the possible shapes (skewed right, skewed left, and symmetric), the modality regions (unimodal and bimodal), and maximum likelihood (ML) estimators of the parameters. Finally, two real data sets are used to demonstrate empirically the flexibility and strength of the new distribution.
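The NGHS distribution is not available in standard libraries; as a hedged sketch, the snippet below uses scipy's plain hyperbolic secant distribution as a stand-in simply to show how a hazard function h(x) = f(x)/(1 - F(x)), moments and ML fitting are evaluated numerically.

```python
import numpy as np
from scipy import stats

dist = stats.hypsecant(loc=0.0, scale=1.0)   # stand-in for the NGHS distribution

def hazard(x, d):
    """Hazard function h(x) = f(x) / (1 - F(x))."""
    return d.pdf(x) / d.sf(x)                # sf = 1 - cdf, numerically safer

x = np.linspace(-2, 4, 7)
print(hazard(x, dist))
print(dist.mean(), dist.var())               # first two moments

# ML fitting of the stand-in model to data follows the usual scipy pattern
data = dist.rvs(size=500, random_state=1)
loc_hat, scale_hat = stats.hypsecant.fit(data)
print(loc_hat, scale_hat)
```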

Keywords: bimodality, estimation, hazard function, moments, Shannon’s entropy

Procedia PDF Downloads 310
16869 Deriving an Index of Adoption Rate and Assessing Factors Affecting Adoption of an Agroforestry-Based Farming System in Dhanusha District, Nepal

Authors: Arun Dhakal, Geoff Cockfield, Tek Narayan Maraseni

Abstract:

This paper attempts to fill the gap in measuring adoption in agroforestry studies. It explains the derivation of an index of adoption rate in a Nepalese context and examines the factors affecting adoption of agroforestry-based land management practice (AFLMP) in the Dhanusha District of Nepal. Data about the different farm practices and the factors (bio-physical, socio-economic) influencing adoption were collected during focus group discussions and from randomly selected households using a household survey questionnaire, respectively. A multivariate regression model was used to determine the factors. The factors (variables) found to significantly affect adoption of AFLMP were: farm size, availability of irrigation water, education of household heads, agricultural labour force, frequency of visits by extension workers, expenditure on farm input purchases, the household's experience in agroforestry, and distance from home to government forest. The regression model explained about 75% of the variation in adoption decisions. The model rejected 'erosion hazard', 'flood hazard' and 'gender' as determinants of adoption, which in the case of a single agroforestry practice were major variables and played a positive role. Of the eight variables, farm size played the most powerful role in explaining the variation in adoption, followed by availability of irrigation water and education of household heads. The results of this study suggest that policies promoting the provision of irrigation water and extension services, and motivation to obtain higher education, would probably provide the incentive to adopt agroforestry elsewhere in the Terai of Nepal.
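A minimal sketch of such a determinants-of-adoption regression, using statsmodels with entirely hypothetical variable names and synthetic data (not the study's survey), might look as follows.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
# hypothetical household-level predictors (not the study's actual data)
df = pd.DataFrame({
    "farm_size": rng.gamma(2.0, 0.5, n),
    "irrigation": rng.integers(0, 2, n),
    "education_years": rng.integers(0, 16, n),
    "extension_visits": rng.poisson(3, n),
})
# synthetic adoption index, for illustration only
df["adoption_index"] = (0.5 * df["farm_size"] + 0.3 * df["irrigation"]
                        + 0.02 * df["education_years"]
                        + 0.05 * df["extension_visits"]
                        + rng.normal(0, 0.2, n))

X = sm.add_constant(df[["farm_size", "irrigation", "education_years", "extension_visits"]])
model = sm.OLS(df["adoption_index"], X).fit()
print(model.summary())     # coefficients, t-statistics and R-squared
```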

Keywords: agroforestry, adoption index, determinants of adoption, step-wise linear regression, Nepal

Procedia PDF Downloads 466
16868 Modelling of Heating and Evaporation of Biodiesel Fuel Droplets

Authors: Mansour Al Qubeissi, Sergei S. Sazhin, Cyril Crua, Morgan R. Heikal

Abstract:

This paper presents the application of the Discrete Component Model for heating and evaporation to multi-component biodiesel fuel droplets in direct injection internal combustion engines. This model takes into account the effects of temperature gradient, recirculation and species diffusion inside droplets. A distinctive feature of the model used in the analysis is that it is based on the analytical solutions to the temperature and species diffusion equations inside the droplets. Nineteen types of biodiesel fuels are considered. It is shown that a simplistic model, based on the approximation of biodiesel fuel by a single component or ignoring the diffusion of components of biodiesel fuel, leads to noticeable errors in predicted droplet evaporation time and time evolution of droplet surface temperature and radius.

Keywords: heat/mass transfer, biodiesel, multi-component fuel, droplet

Procedia PDF Downloads 528
16867 Calibration of Discrete Element Method Parameters for Modelling DRI Pellets Flow

Authors: A. Hossein Madadi-Najafabadi, Masoud Nasiri

Abstract:

The discrete element method (DEM) is a powerful technique for numerically modelling the flow of granular materials such as direct reduced iron (DRI). It enables us to study processes and equipment related to the production and handling of the material. However, the characteristics and properties of the granules have to be specified precisely to achieve reliable results in a DEM simulation. The main properties for a DEM simulation are size distribution, density, Young's modulus, Poisson's ratio and the contact coefficients of restitution, rolling friction and sliding friction. In the present paper, these properties are determined for DEM simulation of DRI pellets. A reliable DEM simulation would contribute to optimizing the handling system of DRI in an iron-making plant. Among the mentioned properties, Young's modulus is the most important parameter, which is usually hard to obtain for particulate solids. Here, a special method is utilized to determine this parameter precisely for DRI.

Keywords: discrete element method, direct reduced iron, simulation parameters, granular material

Procedia PDF Downloads 151
16866 Investigating the Shear Behaviour of Fouled Ballast Using Discrete Element Modelling

Authors: Ngoc Trung Ngo, Buddhima Indraratna, Cholachat Rujikiathmakjornr

Abstract:

For several hundred years, the design of railway tracks has remained practically unchanged. Traditionally, rail tracks are placed on a ballast layer for several reasons, including economy, rapid drainage, and high load-bearing capacity. The primary function of ballast is to distribute dynamic track loads to the sub-ballast and subgrade layers, while also providing lateral resistance and allowing rapid drainage. Under repeated train loads, the ballast becomes fouled due to ballast degradation and the intrusion of fines, which adversely affects the strength and deformation behaviour of the ballast. This paper presents the use of the three-dimensional discrete element method (DEM) in studying the shear behaviour of fouled ballast subjected to direct shear loading. Irregularly shaped ballast particles were modelled by grouping many spherical balls together in appropriate sizes to simulate representative ballast aggregates. Fouled ballast was modelled by injecting a specified number of miniature spherical particles into the void spaces. The DEM simulation highlights that the peak shear stress of the ballast assembly decreases and the dilation of fouled ballast increases with an increasing level of fouling. Additionally, the distributions of contact force chains and particle displacement vectors were captured during shearing, explaining the formation of shear bands and the evolution of the volumetric change of fouled ballast.

Keywords: railway ballast, coal fouling, discrete element modelling, discrete element method

Procedia PDF Downloads 425
16865 Sand Production Modelled with Darcy Fluid Flow Using Discrete Element Method

Authors: M. N. Nwodo, Y. P. Cheng, N. H. Minh

Abstract:

In the process of recovering oil in weak sandstone formations, the strength of the sandstone around the wellbore is weakened due to the increase of effective stress/load from the completion activities around the cavity. The weakened and de-bonded sandstone may be eroded away by the produced fluid, which is termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management, therefore, there has been a need for a reliable study tool to understand the mechanism of sanding. One method of studying sand production is the use of the widely recognized discrete element method (DEM) code Particle Flow Code (PFC3D), which represents sand as individual granular elements bonded together at contact points. However, there is limited knowledge of the particle-scale behaviour of weak sandstone and of the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D and a simple Darcy flow in understanding the sand production behaviour of a weak sandstone. An isotropic triaxial test on a weak oil sandstone sample was first simulated at a confining stress of 1 MPa to calibrate and validate the parallel bond model of PFC3D, using a solid cylindrical model of 10 m height and 10 m diameter. The effect of the confining stress on the number of bond failures was studied using this cylindrical model. With the calibrated data and sample material properties obtained from the triaxial test, simulations without and with fluid flow were carried out to check the effect of Darcy flow on bond failures using the same model geometry. The fluid flow network consisted of groups of four particles connected with tetrahedral flow pipes around a central pore or flow domain. Parametric studies included the effects of confining stress and fluid pressure, as well as validation of the flow rate-permeability relationship to verify Darcy's law. The effect of model size scaling on sanding was also investigated using a model of 4 m height and 2 m diameter. The parallel bond model successfully calibrated the sample strength of 4.4 MPa, showing a sharp peak strength before strain-softening, similar to the behaviour of real cemented sandstones. There seems to be an exponentially increasing relationship for the bigger model, but a curvilinear shape for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. For the parametric studies, the flow rate has a linear relationship with permeability at constant pressure head. The higher the fluid flow pressure, the higher the number of broken bonds/sanding. The DEM code PFC3D is a promising tool for studying the micromechanical behaviour of cemented sandstones.
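The flow rate-permeability check mentioned above follows directly from Darcy's law; a minimal sketch with arbitrary sample geometry and fluid properties is shown below.

```python
def darcy_flow_rate(k, area, dp, mu, length):
    """Volumetric flow rate from Darcy's law: Q = k * A * dP / (mu * L)."""
    return k * area * dp / (mu * length)

mu = 1.0e-3                 # fluid viscosity, Pa.s (water, assumed)
area, length = 0.01, 0.1    # m^2 and m (arbitrary sample geometry)
dp = 1.0e5                  # constant pressure head, Pa

for k in (1e-13, 2e-13, 4e-13):            # permeability, m^2
    print(k, darcy_flow_rate(k, area, dp, mu, length))
# doubling k doubles Q at constant pressure head, i.e. a linear Q-k relationship
```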

Keywords: discrete element method, fluid flow, parametric study, sand production/bonds failure

Procedia PDF Downloads 294
16864 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network

Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem

Abstract:

The aim of this paper is to forecast electricity spot prices. First, we focus on modelling the conditional mean of the series, so we adopt a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance; these residuals were predicted using two different approaches. In the first approach, a local linear wavelet neural network (LLWNN) model is developed to predict the conditional variance using the backpropagation learning algorithm. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity (G-GARCH) process is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using a wavelet methodology based on the discrete wavelet packet transform (DWPT). The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
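Estimating a k-factor GARMA-G-GARCH model by wavelet methods is not available off the shelf; as a simplified stand-in for the conditional-variance stage only, the sketch below fits a plain GARCH(1,1) to hypothetical mean-model residuals with the arch package.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(7)
# stand-in for the residuals of a fitted k-factor GARMA mean model
residuals = rng.standard_normal(1000)

am = arch_model(residuals, mean="Zero", vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.params)                       # omega, alpha[1], beta[1]
forecast = res.forecast(horizon=5)
print(forecast.variance.iloc[-1])       # 5-step-ahead conditional variance
```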

Keywords: electricity price, k-factor GARMA, LLWNN, G-GARCH, forecasting

Procedia PDF Downloads 203
16863 Impact of an Onboard Fire for the Evacuation of a Rolling Stock

Authors: Guillaume Craveur

Abstract:

This study highlights the impact of an onboard fire on the evacuation of a rolling stock. Two fire models are developed. The first is a zone model built with the CFAST software. This fire is then imported into a buildingEXODUS model in order to determine the evacuation time, accounting for the effects of fire effluents (temperature, smoke opacity, smoke toxicity) on passengers. The second fire is modelled with the Fire Dynamics Simulator (FDS) software. The fire thus defined is imported directly into the FDS+Evac model, which determines the evacuation time and the effects of fire effluents on passengers. These effects are compared with the tenability criteria defined in several standards in order to assess whether the situation is acceptable. Different fire powers are examined to determine at what power the hazard becomes unacceptable.

Keywords: fire safety engineering, numerical tools, rolling stock, evacuation

Procedia PDF Downloads 174
16862 Reallocation of Bed Capacity in a Hospital Combining Discrete Event Simulation and Integer Linear Programming

Authors: Muhammed Ordu, Eren Demir, Chris Tofallis

Abstract:

The number of inpatient admissions in the UK has been increasing significantly over the past decade. These increases cause bed occupancy rates to exceed the target level (85%) set by the Department of Health in England. Therefore, hospital service managers are struggling to better manage key resources such as beds. In addition, this severe demand pressure can lead to confusion in wards; for example, patients can be admitted to the ward of another inpatient specialty due to a lack of resources (i.e., beds). This study aims to develop a simulation-optimization model to reallocate the available number of beds in a mid-sized hospital in the UK. A hospital simulation model was developed to capture the stochastic behaviour of the hospital by taking into account the accident and emergency department, all outpatient and inpatient services, and the interactions between them. Several outputs of the simulation model (e.g., average length of stay and revenue) were generated as inputs for the optimization model. An integer linear programming model was developed under a number of constraints (financial, demand, target bed occupancy rate and staffing level) with the aim of maximizing the number of admitted patients. In addition, a sensitivity analysis was carried out by taking into account unexpected increases in inpatient demand over the next 12 months. As a result, the proposed approach optimally reallocates the available beds for each inpatient specialty and reveals that 74 beds are idle. The findings also indicate that the hospital wards will be able to cope with at most a 14% demand increase in the projected year. In conclusion, this paper sheds new light on how best to reallocate beds in order to cope with current and future demand for healthcare services.
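A miniature, hypothetical version of such a bed-reallocation integer program, maximizing admissions under a total-bed stock and demand caps (made-up throughput figures, and none of the financial or staffing constraints of the paper), could be written with PuLP as follows.

```python
import pulp

# hypothetical inputs per inpatient specialty: patients admitted per bed per year,
# and an upper bound on beds implied by projected demand
specialties = ["medicine", "surgery", "orthopaedics", "paediatrics"]
throughput = {"medicine": 40, "surgery": 55, "orthopaedics": 35, "paediatrics": 60}
max_beds = {"medicine": 120, "surgery": 90, "orthopaedics": 60, "paediatrics": 40}
total_beds = 250

prob = pulp.LpProblem("bed_reallocation", pulp.LpMaximize)
beds = pulp.LpVariable.dicts("beds", specialties, lowBound=0, cat="Integer")

# objective: maximize the total number of admitted patients
prob += pulp.lpSum(throughput[s] * beds[s] for s in specialties)
# constraints: available bed stock and demand-driven upper bounds
prob += pulp.lpSum(beds[s] for s in specialties) <= total_beds
for s in specialties:
    prob += beds[s] <= max_beds[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in specialties:
    print(s, int(beds[s].value()))
```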

Keywords: bed occupancy rate, bed reallocation, discrete event simulation, inpatient admissions, integer linear programming, projected usage

Procedia PDF Downloads 116
16861 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects

Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm

Abstract:

Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot based on Euler-Bernoulli beam theory. It nominally (unpowered) lies flat on the ground, and when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to determine exactly which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. From this, the spatial distribution of friction forces between the ground and the robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot as a function of time while the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each of which consisted of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown both for the full deformation shape and for the motion of the robot.
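A hedged sketch of the underlying Euler-Bernoulli idea, ignoring gravity and ground contact (which are precisely what the paper adds): integrate a piecewise-constant, actuator-induced curvature along the sheet to obtain its 2D shape. The curvature-per-volt gain below is a made-up number.

```python
import numpy as np

L, n_seg = 0.10, 5                  # sheet length (m) and number of actuator segments
n = 500
s = np.linspace(0.0, L, n)          # arc-length coordinate along the sheet
ds = s[1] - s[0]

volts = np.array([0.0, 60.0, -60.0, 60.0, 0.0])   # applied actuator voltages (assumed)
gain = 0.5                          # curvature per volt, 1/(m*V), hypothetical value

# piecewise-constant curvature from the actuator covering each arc-length position
seg = np.minimum((s / L * n_seg).astype(int), n_seg - 1)
kappa = gain * volts[seg]

theta = np.cumsum(kappa) * ds       # tangent angle from integrated curvature
x = np.cumsum(np.cos(theta)) * ds   # planar shape (gravity and contact ignored here)
y = np.cumsum(np.sin(theta)) * ds
print(x[-1], y[-1])                 # tip position of the deformed sheet
```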

Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology

Procedia PDF Downloads 141
16860 Formulation of a Rapid Earthquake Risk Ranking Criteria for National Bridges in the National Capital Region Affected by the West Valley Fault Using GIS Data Integration

Authors: George Mariano Soriano

Abstract:

In this study, a Rapid Earthquake Risk Ranking Criteria was formulated by integrating various existing maps and databases from the Department of Public Works and Highways (DPWH) and the Philippine Institute of Volcanology and Seismology (PHIVOLCS). Utilizing Geographic Information System (GIS) software, the above-mentioned maps and databases were used to extract seismic hazard parameters and bridge vulnerability characteristics in order to rank the seismic damage risk rating of bridges in the National Capital Region.

Keywords: bridge, earthquake, GIS, hazard, risk, vulnerability

Procedia PDF Downloads 376
16859 Chaotic Semiflows with General Acting Topological Monoids

Authors: Alica Miller

Abstract:

A semiflow is a triple consisting of a Hausdorff topological space $X$, a commutative topological monoid $T$ and a continuous monoid action of $T$ on $X$. The acting monoid $T$ is usually either the discrete monoid $\mathbb{N}_0$ of nonnegative integers (in which case the semiflow can be defined as a pair $(X,f)$ consisting of a phase space $X$ and a continuous function $f:X\to X$), or the monoid $\mathbb{R}_+$ of nonnegative real numbers (the so-called one-parameter monoid). However, it turns out that there are real-life situations where it is useful to consider acting monoids that are a combination of discrete and continuous monoids. That happens, for example, when we observe a dynamical system at discrete moments, but after some time realize that it would be beneficial to continue our observations in real time. The acting monoid in that case would be $T=\{0, t_0, 2t_0, \dots, (n-1)t_0\} \cup [nt_0,\infty)$ with the operation and topology induced from the real numbers. This partly explains the motivation for the level of generality pursued in our research. We introduce the PSP monoids, which include all but ``pathological'' monoids, and most of our statements hold for them. The topics of our presentation are some recent results about chaos-related properties, indecomposability and sensitivity of semiflows in the described general context.

Keywords: chaos, indecomposability, PSP monoids, semiflow, sensitivity

Procedia PDF Downloads 260
16858 Sparse Modelling of Cancer Patients’ Survival Based on Genomic Copy Number Alterations

Authors: Khaled M. Alqahtani

Abstract:

Copy number alterations (CNA) are variations in the structure of the genome, where certain regions deviate from the typical two chromosomal copies. These alterations are pivotal in understanding tumor progression and are indicative of patients' survival outcomes. However, effectively modeling patients' survival based on their genomic CNA profiles while identifying relevant genomic regions remains a statistical challenge. Various methods, such as the Cox proportional hazard (PH) model with ridge, lasso, or elastic net penalties, have been proposed but often overlook the inherent dependencies between genomic regions, leading to results that are hard to interpret. In this study, we enhance the elastic net penalty by incorporating an additional penalty that accounts for these dependencies. This approach yields smooth parameter estimates and facilitates variable selection, resulting in a sparse solution. Our findings demonstrate that this method outperforms other models in predicting survival outcomes, as evidenced by our simulation study. Moreover, it allows for a more meaningful interpretation of genomic regions associated with patients' survival. We demonstrate the efficacy of our approach using both real data from a lung cancer cohort and simulated datasets.
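The dependency (fusion-type) penalty proposed here is not available in standard packages; as a baseline sketch only, an elastic-net-penalized Cox model can be fitted with lifelines on hypothetical CNA features, as below.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n, p = 200, 10
# hypothetical CNA features (one column per genomic region), survival time and event flag
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"region_{j}" for j in range(p)])
df = X.copy()
df["time"] = rng.exponential(scale=np.exp(-0.5 * X["region_0"]))
df["event"] = rng.integers(0, 2, n)

# elastic-net style penalty controlled by penalizer and l1_ratio
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])
```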

Keywords: copy number alterations, cox proportional hazard, lung cancer, regression, sparse solution

Procedia PDF Downloads 14
16857 Reliability and Probability Weighted Moment Estimation for Three Parameter Mukherjee-Islam Failure Model

Authors: Ariful Islam, Showkat Ahmad Lone

Abstract:

The Mukherjee-Islam model is commonly used as a simple lifetime distribution to assess system reliability. The model exhibits a better fit to failure data and provides more appropriate information about the hazard rate and other reliability measures, as shown by various authors. It is possible to introduce a location parameter (i.e., a time before which failure cannot occur), which makes it a more useful failure distribution than the existing ones. Even after shifting the location of the distribution, it can represent decreasing, constant and increasing failure rates. It has been shown to represent the appropriate lower tail of the distribution of random variables having a fixed lower bound. This study presents the reliability computations and probability weighted moment estimation for the three-parameter model. A comparative analysis is carried out between the three-parameter finite range model and some existing bathtub-shaped curve-fitting models. Since the probability weighted moment method is used, the results obtained can also be applied to small-sample cases. The maximum likelihood estimation method is also applied in this study.
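A hedged sketch of the generic probability-weighted-moments building block (sample estimates of beta_r = E[X F(X)^r]); mapping these estimates to the three Mukherjee-Islam parameters is model-specific and not shown.

```python
import numpy as np

def sample_pwm(x, r):
    """Unbiased sample estimate of beta_r = E[X * F(X)^r] from order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    w = np.ones(n)
    for k in range(1, r + 1):      # weights prod_{k=1..r} (i - k) / (n - k)
        w *= (i - k) / (n - k)
    return np.mean(w * x)

data = np.random.default_rng(5).weibull(1.5, size=100)   # placeholder failure data
print([sample_pwm(data, r) for r in range(3)])           # beta_0, beta_1, beta_2
```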

Keywords: comparative analysis, maximum likelihood estimation, Mukherjee-Islam failure model, probability weighted moment estimation, reliability

Procedia PDF Downloads 244
16856 The Potential of On-Demand Shuttle Services to Reduce Private Car Use

Authors: B. Mack, K. Tampe-Mai, E. Diesch

Abstract:

Findings of an ongoing discrete choice study of future transport mode choice will be presented. Many urban centers face the triple challenge of having to cope with ever-increasing traffic congestion, environmental pollution, and greenhouse gas emissions brought about by private car use. In principle, private car use may be diminished by extending public transport systems like bus lines, trams, tubes, and trains. However, there are limits to increasing the (perceived) spatial and temporal flexibility and reducing peak-time crowding of classical public transport systems. An emerging new type of system, publicly or privately operated on-demand shuttle bus services, seems suitable to ameliorate the situation. A fleet of on-demand shuttle buses operates without fixed stops and schedules. It may be deployed efficiently in that each bus picks up passengers whose itineraries may be combined into an optimized route. Crowding may be minimized by limiting the number of seats and the inter-seat distance for each bus. The study is conducted as a discrete choice experiment. The choice between private car, public transport, and shuttle service is registered as a function of several push and pull factors (financial costs, travel time, walking distances, mobility tax/congestion charge, and waiting time/parking space search time). After the completion of the discrete choice items, the study participant is asked to rate the three modes of transport with regard to the pull factors of comfort, safety, privacy, and the opportunity to engage in activities like reading or surfing the internet. These ratings are entered as additional predictors into the discrete choice experiment regression model. The study is conducted in the region of Stuttgart in southern Germany. N=1000 participants are being recruited. Participants are between 18 and 69 years of age, hold a driver's license, and live in the city or the surrounding region of Stuttgart. In the discrete choice experiment, participants are asked to assume they lived within the Stuttgart region, but outside of the city, and were planning the journey from their apartment to their place of work, training, or education during the peak traffic time in the morning. Then, for each item of the discrete choice experiment, they are asked to choose between the transport modes of private car, public transport, and on-demand shuttle in the light of particular values of the push and pull factors studied. The study will provide valuable information on the potential of switching from private car use to the use of on-demand shuttles, but also on the less desirable potential of switching from public transport to on-demand shuttle services. Furthermore, information will be provided on the modulation of these switching potentials by pull and push factors.
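As a simplified stand-in for the discrete choice analysis (a plain multinomial logit on respondent characteristics rather than a conditional logit over alternative attributes), a sketch with synthetic data might look like this.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 600
# hypothetical respondent attributes and a synthetic three-way choice:
# 0 = private car, 1 = public transport, 2 = on-demand shuttle
X = pd.DataFrame({
    "cost_sensitivity": rng.normal(0, 1, n),
    "comfort_rating_shuttle": rng.normal(0, 1, n),
})
utility = np.column_stack([
    np.zeros(n),                                           # car (reference alternative)
    0.8 * X["cost_sensitivity"] + rng.gumbel(size=n),
    0.6 * X["cost_sensitivity"] + 0.9 * X["comfort_rating_shuttle"] + rng.gumbel(size=n),
])
y = utility.argmax(axis=1)

model = sm.MNLogit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())      # log-odds of public transport / shuttle vs. private car
```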

Keywords: determinants of travel mode choice, on-demand shuttle services, private car use, public transport

Procedia PDF Downloads 151
16855 Legal Judgment Prediction through Indictments via Data Visualization in Chinese

Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun

Abstract:

Legal Judgment Prediction (LJP) is a subtask of legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of a case, they decide whether to prosecute the suspect and which article of the criminal law should be applied, based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, which included 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input to jointly learn the prediction model for law source, article, and charge simultaneously, based on the pre-trained BERT model. For single-article cases where the frequencies of the charge and article are greater than 50, the prediction performance for law sources, articles, and charges reaches 97.66, 92.22, and 60.52 macro-F1, respectively. To understand the large performance gap between articles and charges, we used a bipartite graph to visualize the relationship between articles and charges, and found that the reason for the poor prediction performance was actually the wording precision of charge names. Some charges use the simplest wording, while others may include the perpetrator or the result to make the charge more specific. For example, Article 284 of the Criminal Law may be indicted as "negligent injury", "negligent death", "business injury", "driving business injury", or "non-driving business injury". As another example, Article 10 of the Drug Hazard Control Regulations can be charged as "Drug Control Regulations" or "Drug Hazard Control Regulations". In order to solve the above problems and more accurately predict the article and charge, we plan to include the article content or charge names in the input and use the sentence-pair classification method for question-answering problems in the BERT model to improve the performance. We will also consider a sequence-to-sequence approach to charge prediction.
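A hedged sketch of the planned sentence-pair classification step with Hugging Face transformers; the model name, labels and text are placeholders and the classification head is untrained, so the output is meaningless until fine-tuned on indictment data.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# pre-trained Chinese BERT as the encoder; the classification head here is untrained
name = "bert-base-chinese"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForSequenceClassification.from_pretrained(name, num_labels=2)

crime_facts = "被告於深夜駕車因過失撞傷行人"     # placeholder crime facts
candidate_charge = "過失傷害"                     # placeholder charge name

# sentence-pair input: [CLS] facts [SEP] charge [SEP]
inputs = tokenizer(crime_facts, candidate_charge,
                   truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))   # P(charge does not apply), P(charge applies)
```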

Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization

Procedia PDF Downloads 96
16854 Numerical Simulation of Ultraviolet Disinfection in a Water Reactor

Authors: H. Shokouhmand, H. Sobhani, B. Sajadi, M. Degheh

Abstract:

In recent years, experimental and numerical investigation of water UV reactors has increased significantly. The main drawback of experimental methods is that surveying UV reactor features is confined and expensive. In this study, a CFD model utilizing the Eulerian-Lagrangian framework is applied to analyse the disinfection performance of a closed-conduit reactor which contains four UV lamps perpendicular to the flow. A discrete ordinates (DO) model was employed to evaluate the UV irradiance field. To investigate the importance of each of the lamps for the inactivation performance, several models with one or two bright lamps in various arrangements were considered in addition to the reference model (with four bright lamps). All results were reported for three inactivation kinetics. The results showed that the log inactivation of the model with the two central lamps bright was between 88 and 99 percent, close to the reference model results. Also, the closer the lamps are to the main flow region, the more effect they have on microbial inactivation. The effects of some operational parameters, such as water flow rate, inlet water temperature, and lamp power, were also studied.
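As a minimal illustration of one inactivation kinetic only, the sketch below assumes first-order (Chick-Watson) kinetics with a hypothetical rate constant.

```python
import numpy as np

def log_inactivation(fluence, k=0.5):
    """First-order (Chick-Watson) model: log10(N0/N) = k * fluence."""
    return k * np.asarray(fluence, dtype=float)

def percent_inactivation(fluence, k=0.5):
    return 100.0 * (1.0 - 10.0 ** (-log_inactivation(fluence, k)))

fluence = [2.0, 4.0, 6.0]              # UV dose in mJ/cm^2 (arbitrary values)
print(percent_inactivation(fluence))   # 90%, 99%, 99.9% for k = 0.5 cm^2/mJ
```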

Keywords: Eulerian-Lagrangian framework, inactivation kinetics, log inactivation, water UV reactor

Procedia PDF Downloads 223
16853 A Hazard Rate Function for the Time of Ruin

Authors: Sule Sahin, Basak Bulut Karageyik

Abstract:

This paper introduces a hazard rate function for the time of ruin in order to calculate the conditional probability of ruin over very small intervals. We call this function the force of ruin (FoR). We obtain the expected time of ruin and the conditional expected time of ruin from the exact finite-time ruin probability with exponential claim amounts. Then we introduce the FoR, which gives the conditional probability of ruin given that ruin has not occurred by time t. We analyse the behaviour of the FoR function for different initial surpluses over a specific time interval. We also obtain the FoR under an excess of loss reinsurance arrangement and examine the effect of reinsurance on the FoR.
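Assuming psi(u,t) denotes the finite-time ruin probability for initial surplus u, one natural way to write such a hazard rate for the ruin time T is the following (notation assumed, not taken from the paper):

```latex
% Force of ruin as a hazard rate of the ruin time T, where
% \psi(u,t) = P(T \le t \mid U(0) = u) is the finite-time ruin probability.
\[
  \mathrm{FoR}(t)
  \;=\; \lim_{h \to 0^{+}} \frac{P\bigl(t < T \le t+h \mid T > t\bigr)}{h}
  \;=\; \frac{\dfrac{\partial}{\partial t}\,\psi(u,t)}{1-\psi(u,t)} .
\]
```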

Keywords: conditional time of ruin, finite time ruin probability, force of ruin, reinsurance

Procedia PDF Downloads 360
16852 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field

Authors: Yana Snegireva

Abstract:

Carbonate reservoirs are known for their heterogeneity, resulting from various geological processes such as diagenesis and fracturing. These complexities may cause great challenges in understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. The investigation of carbonate reservoirs is crucial, as many petroleum reservoirs are naturally fractured, and characterizing them can be difficult due to the complexity of their fracture networks; this leads to geological uncertainties that matter for global petroleum reserves. The key challenges in carbonate reservoir modeling include the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. Therefore, there is a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The modern approach to carbonate reservoir modeling involves the utilization of a hybrid fracture modeling approach, combining the discrete fracture network (DFN) method and an implicit fracture network, which offers enhanced accuracy and reliability in characterizing complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method in the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to demonstrate the appropriateness of this method in these geological conditions. The DFN method is adopted to model the fracture network within the carbonate reservoir. This method considers fractures as discrete entities, capturing their geometry, orientation, and connectivity. The method has significant disadvantages, however, since the number of fractures in the field can be very high; due to limitations in the amount of main memory, it is very difficult to represent these fractures explicitly. By integrating data from image logs (formation micro-imager), core data, and fracture density logs, a discrete fracture network (DFN) model can be constructed to represent the fracture characteristics of the hydraulically relevant fractures. The results obtained from the DFN modeling approach provide valuable insights into the behavior of the East Siberia field's carbonate reservoir. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of simulation results enables the identification of zones of increased fracturing and optimization opportunities for reservoir development, with the potential application of enhanced oil recovery techniques, which were considered in further simulations on dual-porosity and dual-permeability models. This approach considers fractures as separate, interconnected flow paths within the reservoir matrix, allowing for the characterization of dual-porosity media. The case study of the East Siberia field demonstrates the effectiveness of the hybrid model method in accurately representing fracture systems and predicting reservoir behavior. The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs with the use of enhanced and improved oil recovery methods.

Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model

Procedia PDF Downloads 45
16851 Two-Phase Flow Study of Airborne Transmission Control in Dental Practices

Authors: Mojtaba Zabihi, Stephen Munro, Jonathan Little, Ri Li, Joshua Brinkerhoff, Sina Kheirkhah

Abstract:

The Occupational Safety and Health Administration (OSHA) has identified dental workers as being at the highest risk of contracting COVID-19. This is because aerosol-generating procedures (AGP) during dental practice generate aerosols (< 5 µm) and droplets. These particles travel at varying speeds, in varying directions, and for varying durations. If these particles bear infectious viruses, their spreading causes airborne transmission of the virus in the dental room, exposing dentists, hygienists, dental assistants, and even other dental clinic clients to the risk of infection. Computational fluid dynamics (CFD) simulation of two-phase flows based on a discrete phase model (DPM) is carried out to study the spreading of aerosols and droplets in a dental room. The simulation includes momentum, heat, and mass transfer between the particles and the airflow. Two simulations are conducted and compared. One focuses on the effects of room ventilation in winter and summer on the particles' travel. The other focuses on the control of aerosol and droplet spreading: a suction collector is added near the source of aerosols and droplets, creating a flow sink in order to remove the particles. The effects of the suction flow on aerosol and droplet travel are studied. The suction flow can remove aerosols and also reduce the spreading of droplets.

Keywords: aerosols, computational fluid dynamics, COVID-19, dental, discrete phase model, droplets, two-phase flow

Procedia PDF Downloads 233
16850 Performance Evaluation of Discrete Fourier Transform Algorithm Based PMU for Wide Area Measurement System

Authors: Alpesh Adeshara, Rajendrasinh Jadeja, Praghnesh Bhatt

Abstract:

Implementation of advanced technologies requires sophisticated instruments that deal with the operation, control, restoration and protection of a rapidly growing power system network under normal and abnormal conditions. Presently, Phasor Measurement Units (PMUs) are widely applied in the real-time operation, monitoring, control and analysis of power system networks, as they eliminate various limitations of the Supervisory Control and Data Acquisition (SCADA) systems conventionally used in power systems. The use of PMU data is rapidly gaining importance for online and offline analysis. The Wide Area Measurement System (WAMS) has been developed as a new technology through the use of multiple PMUs in the power system. The present paper proposes a MATLAB-based PMU model using the Discrete Fourier Transform (DFT) algorithm and evaluates its operation under different contingencies. A PMU-based two-bus system with a WAMS network is presented as a case study.
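A hedged sketch of the full-cycle DFT phasor estimate at the core of such a PMU model (windowing, frequency tracking and GPS synchronization are omitted).

```python
import numpy as np

def dft_phasor(samples):
    """Full-cycle DFT phasor of one fundamental cycle of N samples:
    X = (sqrt(2)/N) * sum_n x[n] * exp(-j*2*pi*n/N), giving RMS magnitude and phase."""
    x = np.asarray(samples, dtype=float)
    n = np.arange(len(x))
    X = np.sqrt(2.0) / len(x) * np.sum(x * np.exp(-2j * np.pi * n / len(x)))
    return np.abs(X), np.degrees(np.angle(X))

# synthetic waveform sampled 64 times per cycle, amplitude 100, phase 30 degrees
N, amp, phase = 64, 100.0, np.radians(30.0)
t = np.arange(N) / N
x = amp * np.cos(2 * np.pi * t + phase)
mag, ang = dft_phasor(x)
print(mag, ang)     # about 70.7 (RMS of 100) and about 30 degrees
```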

Keywords: global positioning system (GPS), phasor measurement unit (PMU), wide area monitoring system (WAMS), DFT, PDC

Procedia PDF Downloads 463
16849 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor

Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro

Abstract:

Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, the control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations on the Simulink platform, in which each of the system's hardware components is modelled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, accurate tracking of the reference signal is considered particularly important for a position control system because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modelled as a disturbance that must be rejected to ensure reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, due to the nature of state space and the advantage it provides for modelling MIMO systems, such controllers are expected to be easy to tune for disturbance rejection, assuming that their designer is experienced. An in-depth, multi-dimensional analysis of the preliminary results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
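A minimal sketch of the position-form discrete PID only; the gains, sample time and toy plant are placeholders, and the state-feedback and LQR variants compared in the paper are not shown.

```python
class DiscretePID:
    """Position-form discrete PID: u[k] = Kp*e[k] + Ki*Ts*sum(e) + Kd*(e[k]-e[k-1])/Ts."""

    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.ts
        derivative = (error - self.prev_error) / self.ts
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# toy first-order plant standing in for the DC motor position loop (not the paper's model)
pid = DiscretePID(kp=2.0, ki=1.0, kd=0.05, ts=0.01)
position, velocity = 0.0, 0.0
for k in range(500):
    u = pid.update(setpoint=1.0, measurement=position)
    velocity += (u - velocity) * 0.01 / 0.1   # crude motor dynamics, tau = 0.1 s
    position += velocity * 0.01
print(round(position, 3))                      # should settle near the 1.0 reference
```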

Keywords: control, DC motor, discrete PID, discrete state feedback

Procedia PDF Downloads 227
16848 An Integrated Pipeline Risk Management Methodology for Natural Gas Transmission Networks

Authors: Maria M. Giannakou, Athanasios K. Ziliaskopoulos

Abstract:

Transmission pipelines carrying natural gas are often routed through populated cities, industrial areas and environmentally sensitive areas. While the need for these networks is unquestionable, there are serious concerns about the risk these lifeline networks pose to people, to their habitat and to critical infrastructures, especially in view of natural disasters such as earthquakes. This work presents an Integrated Pipeline Risk Management (IPRM) methodology for assessing the hazard associated with a natural gas pipeline failure due to natural or man-made disasters. IPRM aims to optimize the allocation of the available resources to countermeasures in order to minimize the impacts of pipeline failure on humans, the environment, the infrastructure and economic activity. A knapsack mathematical programming formulation is proposed that optimally selects the proper mitigation policies based on the estimated cost-benefit ratios. The proposed model is demonstrated with a small numerical example. The vulnerability analysis of these pipelines and the quantification of the consequences of such failures can be useful to the natural gas industry in deciding which mitigation measures to implement on existing pipeline networks at minimum cost for an acceptable level of hazard.
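A hedged sketch of the classical 0/1 knapsack selection underlying such a formulation, with invented mitigation costs and benefits.

```python
def knapsack(costs, benefits, budget):
    """0/1 knapsack by dynamic programming: returns best total benefit and chosen items."""
    n = len(costs)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if costs[i - 1] <= b:
                best[i][b] = max(best[i][b],
                                 best[i - 1][b - costs[i - 1]] + benefits[i - 1])
    chosen, b = [], budget
    for i in range(n, 0, -1):       # trace back the selected measures
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return best[n][budget], sorted(chosen)

# hypothetical mitigation measures: (cost in monetary units, risk-reduction benefit)
costs = [4, 3, 5, 2]        # e.g. valve automation, re-routing, coating, monitoring
benefits = [10, 7, 12, 4]
print(knapsack(costs, benefits, budget=9))   # -> (22, [0, 2]) with these numbers
```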

Keywords: cost benefit analysis, knapsack problem, natural gas distribution network, risk management, risk mitigation

Procedia PDF Downloads 266
16847 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder

Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa

Abstract:

Earthquakes are natural hazards that can trigger one of the most dangerous hazards, the tsunami. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami that struck Sumatra, Bangladesh, India, Sri Lanka, Malaysia and Singapore. More than twenty thousand people died. The occurrence of earthquakes and tsunamis cannot be avoided, but the hazard can be mitigated by earthquake forecasting. Early preparation is the key factor in reducing damages and consequences. We aim to investigate the pattern of earthquakes quantitatively so that the trend can be known. We study the earthquakes that occurred on Andalas Island, Indonesia, over the last decade. Andalas is an island with high seismicity; more than a thousand events occur in a year. This is because Andalas Island lies in the tectonic subduction zone between the Indian Ocean plate and the Eurasian plate. Tsunami forecasting is needed for mitigation action, and thus a tsunami forecasting method is presented in this work. Neural networks have been widely used in many studies to estimate earthquakes, and it is believed that by using the backpropagation method earthquakes can be predicted. First, the ANN is trained to predict the tsunami of 26 December 2004 using earthquake data recorded before it. Then, with the trained ANN, we predict the next earthquake. Not all earthquakes trigger a tsunami; there are certain characteristics of an earthquake that can cause a tsunami, and a wrong decision can cause other problems in society, so we need a method to reduce the possibility of a wrong decision. Fuzzy TOPSIS is a statistical method that is widely used as a decision seconder based on given parameters; it can make the best decision on whether the earthquake causes a tsunami or not. This work combines earthquake prediction using the neural network method with fuzzy TOPSIS to decide whether a predicted earthquake triggers a tsunami wave or not. The neural network model is capable of capturing non-linear relationships, and fuzzy TOPSIS determines the best decision better than other statistical methods in tsunami prediction.
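As a simplified stand-in for the decision stage, the sketch below implements crisp TOPSIS (the fuzzy variant replaces crisp scores with fuzzy numbers); the earthquake attributes and weights are invented for illustration.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (crisp TOPSIS)."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))          # vector normalization
    v = norm * np.asarray(weights, dtype=float)       # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)               # closeness coefficient in [0, 1]

# hypothetical earthquake events: columns = magnitude, depth (km), distance to coast (km)
events = [[8.9, 10, 50],
          [6.5, 90, 200],
          [7.8, 25, 80]]
weights = [0.5, 0.3, 0.2]
benefit = [True, False, False]   # larger magnitude, shallower, closer = more tsunamigenic
print(topsis(events, weights, benefit))   # higher score = higher tsunami potential
```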

Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami

Procedia PDF Downloads 462
16846 Artificial Neural Network in Predicting the Soil Response in the Discrete Element Method Simulation

Authors: Zhaofeng Li, Jun Kang Chow, Yu-Hsing Wang

Abstract:

This paper attempts to bridge the soil properties and the mechanical response of soil in the discrete element method (DEM) simulation. The artificial neural network (ANN) was therefore adopted, aiming to reproduce the stress-strain-volumetric response when soil properties are given. 31 biaxial shearing tests with varying soil parameters (e.g., initial void ratio and interparticle friction coefficient) were generated using the DEM simulations. Based on these 45 sets of training data, a three-layer neural network was established which can output the entire stress-strain-volumetric curve during the shearing process from the input soil parameters. Beyond the training data, 2 additional sets of data were generated to examine the validity of the network, and the stress-strain-volumetric curves for both cases were well reproduced using this network. Overall, the ANN was found promising in predicting the soil behavior and reducing repetitive simulation work.
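A hedged sketch of such a network with scikit-learn, mapping two soil parameters to a discretized stress curve; the data here are synthetic placeholders, not DEM outputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
n_tests, n_points = 31, 50
# inputs: (initial void ratio, interparticle friction coefficient) per biaxial test
X = np.column_stack([rng.uniform(0.6, 0.9, n_tests), rng.uniform(0.3, 0.7, n_tests)])
# outputs: deviatoric stress sampled at n_points strain levels (placeholder curves)
strain = np.linspace(0, 0.1, n_points)
y = np.array([(1 - np.exp(-50 * strain)) * (300 * f + 100 * (0.9 - e)) for e, f in X])

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(X, y)                              # multi-output regression
curve = net.predict([[0.75, 0.5]])[0]      # predicted stress-strain curve
print(curve[:5])
```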

Keywords: artificial neural network, discrete element method, soil properties, stress-strain-volumetric response

Procedia PDF Downloads 369
16845 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea

Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal

Abstract:

Tectonism-induced tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse and conflagration are the common earthquake hazards experienced worldwide. Apart from human casualties, damage to built infrastructure such as roads, bridges, buildings and other property is the collateral consequence. Appropriate planning must therefore be preceded by proper evaluation and assessment of the potential level of earthquake hazard at a site, with a view to safeguarding people's welfare, infrastructure and other property. The information or output results can be used as a tool that assists in minimizing earthquake risk and can foster appropriate construction design and the formulation of building codes at a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, the potential of GIS and remote sensing was utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the common factors that were assessed and integrated within a GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude and earthquake depth, to evaluate and prepare liquefaction potential zones (LPZ), culminating in earthquake hazard zonation of our study sites. Liquefaction can eventuate in the aftermath of severe ground shaking given amenable site soil conditions, geology and geomorphology. The latter site conditions, i.e. the wave propagation media, were assessed to identify the potential zones. The precept has been that during any earthquake event a seismic wave is generated and propagates from the earthquake focus to the surface. As it propagates, it passes through certain geological or geomorphological and specific soil features, and these features, according to their strength/stiffness/moisture content, aggravate or attenuate the strength of the wave propagation to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built-up infrastructure. For earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria evaluation (MCE) with Saaty's Analytical Hierarchy Process (AHP) was adopted for this study. It is a GIS technique that involves the integration of several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors are weighted and ranked in the order of their contribution to earthquake-induced liquefaction, and the weights and ranks assigned to each factor are normalized with the AHP technique. The spatial analysis tools of ArcGIS 10 (raster calculator, reclassify, and overlay analysis) were mainly employed in the study. The final outputs, the LPZ and the earthquake hazard zones, were reclassified into 'Very high', 'High', 'Moderate', 'Low' and 'Very low' to indicate levels of hazard within the study region.
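A hedged sketch of the standard AHP step of deriving factor weights from a Saaty pairwise comparison matrix via the principal eigenvector, with an invented 3x3 comparison matrix and the usual consistency-ratio check.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random consistency index

def ahp_weights(pairwise):
    """Normalized principal eigenvector of a Saaty pairwise comparison matrix."""
    a = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    lam_max = eigvals[k].real
    n = a.shape[0]
    ci = (lam_max - n) / (n - 1)
    return w, ci / RI[n]            # weights and consistency ratio (should be < 0.1)

# hypothetical comparisons for three liquefaction factors: geology, geomorphology, PGA
pairwise = [[1,     3,   5],
            [1 / 3, 1,   2],
            [1 / 5, 1 / 2, 1]]
weights, cr = ahp_weights(pairwise)
print(weights, cr)
```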

Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism

Procedia PDF Downloads 236
16844 Blind Watermarking Using Discrete Wavelet Transform Algorithm with Patchwork

Authors: Toni Maristela C. Estabillo, Michaela V. Matienzo, Mikaela L. Sabangan, Rosette M. Tienzo, Justine L. Bahinting

Abstract:

This study concerns blind watermarking of images of different categories and properties using two algorithms, namely the Discrete Wavelet Transform (DWT) and the Patchwork algorithm. A program was created to perform watermark embedding, extraction and evaluation. The evaluation is based on three watermarking criteria: image quality degradation, perceptual transparency and security. Image quality is measured by comparing the properties of the original image with those of the processed one. Perceptual transparency is measured through a visual-inspection survey. Security is measured by implementing geometrical and non-geometrical attacks with pass-or-fail testing. The values used to measure these criteria are mostly based on the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). The results are based on statistical methods used to collect and interpret data, such as averaging, the z-test and surveys. The study concluded that the combined DWT and Patchwork algorithms were less efficient and less capable of watermarking than the DWT algorithm alone.
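A hedged sketch of the DWT-domain embedding step only (the Patchwork stage and the MSE/PSNR evaluation are not shown), using PyWavelets with a random placeholder image and an assumed embedding strength.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(64, 64)).astype(float)    # placeholder grayscale image
watermark = rng.integers(0, 2, size=(32, 32)).astype(float)  # binary watermark bits

# one-level 2D Haar DWT: approximation cA and detail sub-bands (cH, cV, cD)
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

alpha = 5.0                                    # embedding strength (assumed)
cD_marked = cD + alpha * (2 * watermark - 1)   # additive embedding in the diagonal band
watermarked = pywt.idwt2((cA, (cH, cV, cD_marked)), "haar")

# extraction sketch: compare the marked detail band against the original coefficients
_, (_, _, cD_back) = pywt.dwt2(watermarked, "haar")
recovered = (cD_back - cD > 0).astype(int)     # uses the original cD, so not fully blind
print((recovered == watermark).mean())         # bit-recovery rate
```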

Keywords: blind watermarking, discrete wavelet transform algorithm, patchwork algorithm, digital watermark

Procedia PDF Downloads 242
16843 Process Safety Evaluation of a Nuclear Power Plant through Virtual Process Hazard Analysis (PHA) using the What-If Technique

Authors: Lormaine Anne Branzuela, Elysa Largo, Julie Marisol Pagalilauan, Neil Concibido, Monet Concepcion Detras

Abstract:

Energy is a necessity both for the people and for the country. The demand for energy is continually increasing, but the supply is not keeping pace. The possible reopening of the Bataan Nuclear Power Plant (BNPP) in the Philippines has been circulating in the media recently. The general public has been hesitant to accept the inclusion of nuclear energy in the Philippine energy mix due to the perceived unsafe condition of the plant. This study evaluated the possible operation of a nuclear power plant of the same type as the BNPP, considering the safety of the workers, the public, and the environment, using a Process Hazard Analysis (PHA) method. The What-If technique was utilized to identify the hazards and consequences of the plant's operations, together with the level of risk each entails. Through the brainstorming sessions of the PHA team, it was found that the most critical system in the plant is the primary system. Possible leakages in pipes and equipment due to weakened seals and welds, and blockages of the coolant path due to fouling, were the most common scenarios identified; these could further lead to the most critical scenarios: a radioactive leak through sump contamination, nuclear meltdown, and equipment damage and explosion, which could result in multiple injuries and fatalities as well as environmental impacts.

Keywords: process safety management, process hazard analysis, What-If technique, nuclear power plant

Procedia PDF Downloads 179
16842 Optimal Hedging of a Portfolio of European Options in an Extended Binomial Model under Proportional Transaction Costs

Authors: Norm Josephy, Lucy Kimball, Victoria Steblovskaya

Abstract:

Hedging of a portfolio of European options under proportional transaction costs is considered. Our discrete-time financial market model extends the binomial market model with transaction costs to the case where the underlying stock price ratios are distributed over a bounded interval rather than over a two-point set. An optimal hedging strategy is chosen from a set of admissible non-self-financing hedging strategies. Our approach to optimal hedging of a portfolio of options is based on a theoretical foundation that includes the determination of a no-arbitrage option price interval as well as properties of the non-self-financing strategies and their residuals. A computational algorithm for optimizing an investor-relevant criterion over the set of admissible non-self-financing hedging strategies is developed. The applicability of our approach is demonstrated using both simulated data and real market data.
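For orientation, a frictionless two-point CRR binomial pricer and its one-step delta are sketched below as the baseline that the extended model (interval-valued price ratios, transaction costs, non-self-financing strategies) generalizes; all parameters are arbitrary.

```python
import numpy as np

def crr_price(s0, k, r, sigma, t, n):
    """European call price in the standard two-point CRR binomial model
    (no transaction costs, unlike the extended model considered in the paper)."""
    dt = t / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)               # risk-neutral up probability
    st = s0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    v = np.maximum(st - k, 0.0)                      # terminal payoffs
    for _ in range(n):                               # backward induction
        v = np.exp(-r * dt) * (q * v[:-1] + (1 - q) * v[1:])
    return float(v[0])

s0, k, r, sigma, t, n = 100.0, 100.0, 0.05, 0.2, 1.0, 200
price = crr_price(s0, k, r, sigma, t, n)

# initial hedge ratio (delta) from the two one-step subtrees
dt = t / n
u = np.exp(sigma * np.sqrt(dt))
d = 1.0 / u
delta = (crr_price(s0 * u, k, r, sigma, t - dt, n - 1)
         - crr_price(s0 * d, k, r, sigma, t - dt, n - 1)) / (s0 * u - s0 * d)
print(round(price, 3), round(delta, 3))   # close to the Black-Scholes values (~10.45, ~0.64)
```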

Keywords: extended binomial model, non-self-financing hedging, optimization, proportional transaction costs

Procedia PDF Downloads 228