Search results for: uncertainty analysis
28203 Solving One of the Variants of Necktie Paradox for Business Proposals
Authors: Natarajan Vijayarangan, Viswanath Kumar Ganesan, G. Kumudhavalli
Abstract:
This abstract addresses an uncertainty problem pertaining to the evaluation of business proposals or concept notes in an organisation. Consider the business proposal evaluation process (BPEP) for the execution of corporate research-cum-business projects in an organisation. Assume that two concept notes X and Y of the BPEP are approved: one of them, X, is a full-fledged type (100% financial approval given by the organisation) and the other, Y, is a conditional type (partial financial approval given by the organisation). A penalty criterion is then introduced during the process. At the end of the annual appraisal, if both complete the goals and objectives committed to at the time of concept note submission, then both will get an incentive of $N from the organisation. If one of them does not fulfill the goals and objectives at the year-end appraisal, then a d% reduction or cut will be levied on that project's budget for the next year. If X fulfills the goals and objectives and Y does not, then X gains d% of Y's previous-year budget and Y loses d% of its previous-year budget for the next year, and vice versa. Further, an incentive of $N will be given to the one who gains. This process is a variant of the necktie paradox and inherits an uncertainty about whether X or Y can get more than $N even when X or Y performs well. Solving the above problem and generalising it to finitely many concept notes is a challenging task.
Keywords: concept notes, necktie paradox, annual appraisal, project budget and gain or loss
Procedia PDF Downloads 469
28202 Flexural Analysis of Palm Fiber Reinforced Hybrid Polymer Matrix Composite
Authors: G. Venkatachalam, Gautham Shankar, Dasarath Raghav, Krishna Kuar, Santhosh Kiran, Bhargav Mahesh
Abstract:
Uncertainty in the future availability of fossil fuels and global warming have increased the need for more environment-friendly materials. In this work, an attempt is made to fabricate a hybrid polymer matrix composite. The blend is a mixture of General Purpose Resin and Cashew Nut Shell Liquid (CNSL), a natural resin extracted from the cashew plant. Palm fiber, which has high strength, is used as the reinforcement material. The fiber is treated with an alkali (NaOH) solution to increase its strength and adhesiveness. A parametric study of flexural strength is carried out by varying the alkali concentration, the duration of the alkali treatment and the fiber volume. A Taguchi L9 orthogonal array is followed in the design-of-experiments procedure for simplification. With the help of the ANOVA technique, regression equations are obtained which give the level of influence of each parameter on the flexural strength of the composite.
Keywords: Adhesion, CNSL, Flexural Analysis, Hybrid Matrix Composite, Palm Fiber
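To illustrate the kind of analysis described in this abstract, the sketch below builds the standard Taguchi L9 orthogonal array for three factors at three levels (alkali concentration, treatment duration, fiber volume) and fits a first-order regression to flexural-strength responses. The factor levels and response values are illustrative placeholders, not the authors' data.

```python
import numpy as np

# Standard Taguchi L9 orthogonal array (first three columns), levels coded 1-3;
# factors = alkali concentration, treatment duration, fiber volume (illustrative)
L9 = np.array([
    [1, 1, 1], [1, 2, 2], [1, 3, 3],
    [2, 1, 2], [2, 2, 3], [2, 3, 1],
    [3, 1, 3], [3, 2, 1], [3, 3, 2],
])

# Hypothetical flexural strength responses (MPa) for the nine runs -- placeholders
y = np.array([41.0, 45.5, 48.2, 44.1, 49.3, 42.7, 47.9, 43.5, 46.8])

# Fit a first-order regression model y = b0 + b1*A + b2*B + b3*C by least squares
X = np.column_stack([np.ones(len(y)), L9])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("regression coefficients (intercept, A, B, C):", coeffs.round(3))

# Main-effect (ANOVA-style) view: mean response at each level of each factor
for j, name in enumerate(["alkali conc.", "duration", "fiber volume"]):
    means = [y[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    print(f"{name:>13s} level means:", np.round(means, 2))
```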
Procedia PDF Downloads 404
28201 Fuzzy Logic in Detecting Children with Behavioral Disorders
Authors: David G. Maxinez, Andrés Ferreyra Ramírez, Liliana Castillo Sánchez, Nancy Adán Mendoza, Carlos Aviles Cruz
Abstract:
This research describes the use of fuzzy logic in the detection, assessment, analysis and evaluation of children with behavioral disorders. It shows how to acquire and analyze ambiguous, vague and uncertainty-laden data coming from the input variables in order to obtain an accurate assessment result for each of the typologies presented by children with behavior problems. The behavior disorders analyzed in this paper are: hyperactivity (H), attention deficit with hyperactivity (DAH), conduct disorder (TD) and attention deficit (AD).
Keywords: alteration, behavior, centroid, detection, disorders, economic, fuzzy logic, hyperactivity, impulsivity, social
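A minimal sketch of the kind of fuzzy inference step such a system might use is given below: rule firing strengths are aggregated Mamdani-style and defuzzified with the centroid method (one of the listed keywords). The membership functions, universe of discourse and firing strengths are assumptions for illustration only.

```python
import numpy as np

x = np.linspace(0, 10, 1001)          # universe of the output "severity" score

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

low, med, high = tri(x, 0, 1, 4), tri(x, 2, 5, 8), tri(x, 6, 9, 10)

# Suppose the rule base fired with these strengths for a given child (assumed)
fire_low, fire_med, fire_high = 0.2, 0.7, 0.4

# Mamdani-style aggregation: clip each output set, take the union
aggregated = np.maximum.reduce([np.minimum(low, fire_low),
                                np.minimum(med, fire_med),
                                np.minimum(high, fire_high)])

# Centroid defuzzification -> crisp severity score
score = (aggregated * x).sum() / aggregated.sum()
print(f"crisp severity score: {score:.2f} / 10")
```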
Procedia PDF Downloads 562
28200 Efficiency, Effectiveness, and Technological Change in Armed Forces: Indonesian Case
Authors: Citra Pertiwi, Muhammad Fikruzzaman Rahawarin
Abstract:
The Government of Indonesia has committed to increasing its national defense budget to 1.5 percent of GDP. However, a budget increase is not necessarily allocated efficiently and effectively. Using Data Envelopment Analysis (DEA), the operational units of the Indonesian Armed Forces are considered as a proxy to measure those two aspects. The bootstrap technique is used as well to reduce uncertainty in the estimation. Additionally, technological change is measured as a nonstationary component. Nearly half of the units are estimated to be fully efficient, while less than a third are considered effective. Longer and larger sets of data might increase the robustness of the estimation in the future.
Keywords: bootstrap, effectiveness, efficiency, DEA, military, Malmquist, technological change
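For readers unfamiliar with DEA, the sketch below solves the input-oriented CCR envelopment linear program for each decision-making unit with scipy; the inputs and outputs are illustrative placeholders rather than the armed-forces data, and a Simar-Wilson-type bootstrap, as mentioned in the abstract, would then be applied to the resulting efficiency scores.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR (constant returns to scale) DEA efficiency per unit.
# The input/output data below are illustrative placeholders only.
X = np.array([[20., 300.], [30., 200.], [40., 100.], [20., 200.], [10., 400.]])  # inputs
Y = np.array([[100.], [80.], [60.], [90.], [70.]])                               # outputs
n = X.shape[0]

def ccr_efficiency(k):
    # decision variables: [theta, lambda_1 ... lambda_n]; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    # inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[k].reshape(-1, 1), X.T]
    # outputs: sum_j lambda_j * y_rj >= y_rk   ->   -Y^T lambda <= -y_k
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

for k in range(n):
    print(f"unit {k}: efficiency = {ccr_efficiency(k):.3f}")
```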
Procedia PDF Downloads 303
28199 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry
Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood
Abstract:
The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated, at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is rarely reported in the literature. Besides, most of the situations where the Reynolds number effect is evaluated in separated flows involve numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow, in a recirculating laboratory flume, at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. Besides, the errors obtained in the uncertainty analysis were, in general, relatively low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and a good agreement was found. The ADV technique proved to be able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
Keywords: ADV, experimental data, multiple Reynolds number, post-processing
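The moving block bootstrap used for the uncertainty analysis can be sketched as follows on a synthetic velocity record; the series, block length and number of resamples are assumptions for illustration, not the Vectrino+ data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a de-spiked ADV stream-wise velocity record [m/s]
u = 0.25 + 0.03 * np.sin(np.linspace(0, 40, 4000)) + rng.normal(0, 0.02, 4000)

def moving_block_bootstrap(x, block_len=50, n_boot=2000, stat=np.mean):
    """Resample overlapping blocks so short-range correlation is preserved."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        sample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        stats[b] = stat(sample)
    return stats

boot = moving_block_bootstrap(u)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean streamwise velocity: {u.mean():.4f} m/s, 95% CI [{lo:.4f}, {hi:.4f}]")
```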
Procedia PDF Downloads 147
28198 Defining a Framework for Holistic Life Cycle Assessment of Building Components by Considering Parameters Such as Circularity, Material Health, Biodiversity, Pollution Control, Cost, Social Impacts, and Uncertainty
Authors: Naomi Grigoryan, Alexandros Loutsioli Daskalakis, Anna Elisse Uy, Yihe Huang, Aude Laurent (Webanck)
Abstract:
In response to the building and construction sectors accounting for a third of all energy demand and emissions, the European Union has placed new laws and regulations on the construction sector that emphasize material circularity, energy efficiency, biodiversity, and social impact. Existing design tools assess sustainability in early-stage design for products or buildings; however, there is no standardized methodology for measuring the circularity performance of building components. Existing assessment methods for building components focus primarily on carbon footprint but lack the comprehensive analysis required to design for circularity. The research conducted in this paper covers the parameters needed to assess sustainability in the design process of architectural products such as doors, windows, and facades. It maps a framework for a tool that assists designers with real-time sustainability metrics. Considering the life cycle of building components such as façades, windows, and doors involves the life cycle stages applied to product design and many of the methods used in the life cycle analysis of buildings. The current industry standards of sustainability assessment for metal building components follow cradle-to-grave life cycle assessment (LCA), track Global Warming Potential (GWP), and document the parameters used for an Environmental Product Declaration (EPD). Developed by the Ellen MacArthur Foundation, the Material Circularity Indicator (MCI) is a methodology utilizing the data from LCA and EPDs to rate circularity, with a value between 0 and 1 where higher values indicate higher circularity. Expanding on the MCI with additional indicators such as the Water Circularity Index (WCI), the Energy Circularity Index (ECI), the Social Circularity Index (SCI), and Life Cycle Economic Value (EV), and calculating biodiversity risk and uncertainty, the assessment of an architectural product's impact can be targeted more specifically based on product requirements, performance, and lifespan. Broadening the scope of LCA calculation for products to incorporate aspects of building design allows product designers to account for the disassembly of architectural components. For example, the Material Circularity Indicator for architectural products such as windows and facades is typically low due to the impact of glass, as 70% of glass ends up in landfills because of damage in the disassembly process. The low MCI can be countered by expanding beyond cradle-to-grave assessment and focusing the design process on disassembly, recycling, and repurposing with the help of real-time assessment tools. Design for Disassembly and Urban Mining have been integrated within the construction field on small scales as project-based exercises that do not address the entire supply chain of architectural products. By adopting more comprehensive sustainability metrics and incorporating uncertainty calculations, the sustainability of building components can be assessed more accurately with decarbonization and disassembly in mind, addressing the large-scale commercial markets within construction, some of the most significant contributors to climate change.
Keywords: architectural products, early-stage design, life cycle assessment, material circularity indicator
Procedia PDF Downloads 87
28197 Calibration of Syringe Pumps Using Interferometry and Optical Methods
Authors: E. Batista, R. Mendes, A. Furtado, M. C. Ferreira, I. Godinho, J. A. Sousa, M. Alvares, R. Martins
Abstract:
Syringe pumps are commonly used for drug delivery in hospitals and clinical environments. These instruments are critical in neonatology and oncology, where any variation in the flow rate and drug dosing quantity can lead to severe incidents and even the death of the patient. Therefore, it is very important to determine the accuracy and precision of these devices using suitable calibration methods. The Volume Laboratory of the Portuguese Institute for Quality (LVC/IPQ) uses two different methods to calibrate syringe pumps from 16 nL/min up to 20 mL/min. The interferometric method uses an interferometer to monitor the distance travelled by the pusher block of the syringe pump in order to determine the flow rate. Knowing the internal diameter of the syringe with very high precision, the travelled distance, and the time needed for that travelled distance, it is possible to calculate the flow rate of the fluid inside the syringe and its uncertainty. As an alternative to the gravimetric and interferometric methods, a methodology based on the application of optical technology was also developed to measure flow rates. This method relies mainly on measuring the increase in the volume of a drop over time. The objective of this work is to compare the results of the calibration of two syringe pumps using the different methodologies described above. The obtained results were consistent for the three methods used. The uncertainty values were very similar for all three methods, being higher for the optical drop method due to setup limitations.
Keywords: calibration, flow, interferometry, syringe pump, uncertainty
Procedia PDF Downloads 108
28196 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements
Authors: Denis A. Sokolov, Andrey V. Mazurkevich
Abstract:
In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft, shipbuilding, and rocket engineering. This has resulted in the development of appropriate measuring instruments that are capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing the measurement results from these devices to a reference measurement based on a linear or spatial basis. The reference used in such measurements could be a reference base or a reference range finder with the capability to measure angle increments (EDM). The base would serve as a set of reference points for this purpose. The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of its own design is employed. The laser radiation travels to the corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of the time of light passage according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses. The achieved type A measurement uncertainty of the distance to reflectors 64 m (N·D/2, where N is an integer) away and spaced apart relative to each other at a distance of 1 m does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of research in this study. The type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where it is possible to achieve an advantage in terms of accuracy. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be utilized to develop a highly accurate mobile absolute range finder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64 meter laboratory comparator as a reference.
Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement
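The spacing between pulse-coincidence points quoted above, D/2 = c/(2nF), can be reproduced with the short sketch below; the repetition frequency and the refractive index of air are assumed values chosen only to match the approximate 2.5 m figure.

```python
# Spatial period between interference coincidences of the femtosecond pulse
# train: D/2 = c / (2 * n * F), as stated in the abstract. The repetition
# frequency and refractive index below are illustrative assumptions.
c = 299_792_458.0        # speed of light in vacuum [m/s]
n = 1.00027              # approximate refractive index of air
F = 60e6                 # femtosecond pulse repetition frequency [Hz]

half_D = c / (2 * n * F)
print(f"D/2 = {half_D:.6f} m")   # roughly 2.5 m, matching the value quoted above

# Distances to reflectors are integer multiples N * D/2 plus an interferometric
# fraction; e.g. N = 26 gives roughly the 64 m laboratory range mentioned.
N = 26
print(f"N * D/2 = {N * half_D:.3f} m")
```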
Procedia PDF Downloads 58
28195 The Changes in Consumer Behavior and the Decision-making Process After Covid-19 in Greece
Authors: Markou Vasiliki, Serdaris Panagiotis
Abstract:
Consumer behavior and the decision-making process of consumers are affected by the factor of uncertainty. The onslaught of the COVID-19 pandemic has changed the consumer decision-making process in many ways. This change can be seen both in the buying process (how and where consumers shop) and in the types of goods and services they are looking for. In addition, due to the mainly economic uncertainty that came from this event, as well as its effects on both society and the economy in general, new consumer behaviors were created. Traditional forms of shopping are no longer a primary choice; consumers have turned to digital channels such as e-commerce and social media to fulfill their needs. The purpose of this article is to examine how much the consumer decision-making process has been affected after the pandemic and whether consumer behavior has changed. An online survey was conducted to examine the change in decision making. Essentially, the demographic factors that influence the decision-making process were examined, as well as the social and economic factors. The research is divided into two parts. The first part includes a literature review of the research that has been carried out to identify the factors, and in the second part the empirical investigation was carried out using a questionnaire administered electronically with the help of Google Forms. The questionnaire was divided into several sections. They included questions about consumer behavior, but mainly about how consumers make decisions today, whether those decisions have changed due to the pandemic, and whether those changes are permanent. Also, for decision making, goods were divided into essential products, high-tech products, transactions with the state and others. About 500 consumers aged between 18 and 75 participated in the research. The data were processed with both descriptive statistics and econometric models. The results showed that consumer behavior and the decision-making process have changed. Consumers now widely use the internet for shopping, and consumer behaviors and consumption patterns have changed. Social and economic factors play an important role. Income, gender and other factors were found to be statistically significant. In addition, it is worth noting that the percentage of respondents who made purchases through the internet for the first time during the pandemic was remarkable and related to age. Essentially, the arrival of the pandemic caused uncertainty for individuals, mainly financial, and this affected the decision-making process. In addition, shopping through the internet is now the first choice, especially among young people, and it seems that it is about to become established.
Keywords: consumer behavior, decision making, COVID-19, Greece, behavior change
Procedia PDF Downloads 45
28194 Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports
Authors: A. Falenski, A. Kaesbohrer, M. Filter
Abstract:
Introduction: Risk assessments can be performed in various ways and with different degrees of complexity. In order to assess the risks associated with imported foods, additional information needs to be taken into account compared to a risk assessment of regional products. The present review is an overview of currently available best practice approaches and data sources used for food import risk assessments (IRAs). Methods: A literature review was performed. PubMed was searched for articles about food IRAs published in the years 2004 to 2014 (English and German texts only, search string “(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk”). Titles and abstracts were screened for import risks in the context of IRAs. The finally selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, and existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search resulted in 49 publications, 17 of which contained information about import risks and risk assessments. Within these, 19 cross references were identified to be of interest for the present study. These included original articles, reviews and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) or the Codex Alimentarius Commission was referenced in each of the IRAs, for the import of animals or for imports concerning foods, respectively. Interestingly, a combination of both was also used to assess the risk associated with the import of live animals serving as the source of food. Methods ranged from fully quantitative IRAs using probabilistic models and dose-response models to qualitative IRAs in which decision trees or severity tables were set up using parameter estimations based on expert opinions. Calculations were done using @Risk, R or Excel. Most heterogeneous was the type of data used, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handystatus II, FAOSTAT, Eurostat, TRACES), accessible on a national level (e.g., herd information) or only open to a small group of people (flight passenger import data at national airport customs offices). In the IRAs, an uncertainty analysis was mentioned in some cases, but calculations were performed only in a few cases. Conclusion: The current state of the art in the assessment of risks of imported foods is characterized by a great heterogeneity in relation to the general methodology and data used. Often information is gathered on a case-by-case basis and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework supporting the connection of existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad hoc, e.g. in case of a crisis situation.
Keywords: import risk assessment, review, tools, food import
Procedia PDF Downloads 302
28193 Improved Image Retrieval for Efficient Localization in Urban Areas Using Location Uncertainty Data
Authors: Mahdi Salarian, Xi Xu, Rashid Ansari
Abstract:
Accurate localization of mobile devices based on camera-acquired visual media information usually requires a search over a very large GPS-referenced image database. This paper proposes an efficient method for limiting the search space of the image retrieval engine by extracting and leveraging additional media information about the Estimated Positional Error (EPE) to address complexity and accuracy issues in the search, especially for compensating GPS location inaccuracy in dense urban areas. The improved performance is achieved by up to a hundred-fold reduction in the search area used by available reference methods while providing improved accuracy. To test our procedure, we created a database by acquiring Google Street View (GSV) images for downtown Chicago. Other available databases are not suitable for our approach due to the lack of EPE for the query images. We tested the procedure using more than 200 query images along with EPE acquired mostly in the densest areas of Chicago with different phones and in different conditions, such as low illumination and from under rail tracks. The effectiveness of our approach and the effect of the size and sector angle of the search area are discussed, and experimental results demonstrate how our proposed method can improve performance just by utilizing data that are available on mobile systems such as smartphones.
Keywords: localization, retrieval, GPS uncertainty, bag of words
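A minimal sketch of the search-space limiting step is shown below: database images are kept as retrieval candidates only if they lie within the circle defined by the reported GPS fix and its EPE. The coordinates, EPE value and image identifiers are hypothetical, and the actual method additionally uses a sector angle rather than a full circle.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    R = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * R * math.asin(math.sqrt(a))

# Reported GPS fix of the query photo and its Estimated Positional Error (EPE);
# coordinates below are illustrative placeholders, not the Chicago dataset.
query_lat, query_lon, epe_m = 41.8781, -87.6298, 35.0

# GPS-referenced image database: (image_id, lat, lon)
database = [("gsv_001", 41.8783, -87.6301),
            ("gsv_002", 41.8810, -87.6240),
            ("gsv_003", 41.8779, -87.6295)]

# Keep only candidates inside the uncertainty circle (a margin could be added)
candidates = [img for img, lat, lon in database
              if haversine_m(query_lat, query_lon, lat, lon) <= epe_m]
print("images passed to the retrieval engine:", candidates)
```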
Procedia PDF Downloads 283
28192 Evaluation of Spatial Distribution Prediction for Site-Scale Soil Contaminants Based on Partition Interpolation
Authors: Pengwei Qiao, Sucai Yang, Wenxia Wei
Abstract:
Soil pollution has become an important issue in China. Accurate prediction of the spatial distribution of pollutants with interpolation methods is the basis for soil remediation at a site. However, relatively strong variability of pollutants decreases the prediction accuracy. Theoretically, partition interpolation can produce accurate prediction results. In order to verify the applicability of partition interpolation for a site, benzo(b)fluoranthene (BbF) in four soil layers was adopted as the research object in this paper. The accuracies of IDW (inverse distance weighting)-, RBF (radial basis function)- and OK (ordinary kriging)-based partition interpolation were evaluated, and their influencing factors were analyzed; then, the uncertainty and applicability of partition interpolation were determined. Three conclusions were drawn. (1) The prediction error of partitioned interpolation decreased by 70% compared to unpartitioned interpolation. (2) Partition interpolation reduced the impact of a high CV (coefficient of variation) and high concentration values on the prediction accuracy. (3) The prediction accuracy of IDW-based partition interpolation was higher than that of RBF- and OK-based partition interpolation, and it was suitable for the identification of highly polluted areas at a contaminated site. These results provide a useful method to obtain relatively accurate spatial distribution information of pollutants and to identify highly polluted areas, which is important for soil pollution remediation at a site.
Keywords: accuracy, applicability, partition interpolation, site, soil pollution, uncertainty
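A minimal sketch of IDW-based partition interpolation is given below: the sampled points are first split into statistically distinct partitions (here simply a high-concentration and a background group) and IDW is applied within each partition separately. Coordinates, concentrations and the partition rule are illustrative assumptions, not the site data.

```python
import numpy as np

def idw_predict(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted prediction at query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

# Synthetic BbF concentrations (mg/kg) at sampled locations -- placeholders
xy = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 5.]])
z = np.array([0.12, 0.35, 0.20, 1.80, 0.90])

# Partition interpolation: interpolate inside each zone separately, e.g. a
# "hot-spot" partition containing the high samples vs. the background zone.
hot = z > 0.5
grid = np.array([[7.5, 7.5], [2.5, 2.5]])
print("hot-spot partition IDW:   ", idw_predict(xy[hot], z[hot], grid).round(3))
print("background partition IDW: ", idw_predict(xy[~hot], z[~hot], grid).round(3))
```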
Procedia PDF Downloads 144
28191 Hybrid Structure Learning Approach for Assessing the Phosphate Laundries Impact
Authors: Emna Benmohamed, Hela Ltifi, Mounir Ben Ayed
Abstract:
The Bayesian network (BN) is one of the most efficient classification methods. It is widely used in several fields (e.g., medical diagnostics, risk analysis, bioinformatics research). A BN is defined as a probabilistic graphical model that represents a formalism for reasoning under uncertainty. This classification method has a high performance rate in the extraction of new knowledge from data. The construction of this model consists of two phases: structure learning and parameter learning. For solving this problem, the K2 algorithm is one of the representative data-driven algorithms, which is based on a score-and-search approach. In addition, the integration of the expert's knowledge in the structure learning process allows the highest accuracy to be obtained. In this paper, we propose a hybrid approach combining an improvement of the K2 algorithm, called the K2 algorithm for Parents and Children search (K2PC), and the expert-driven method for learning the structure of the BN. The evaluation of the experimental results, using well-known benchmarks, proves that our K2PC algorithm has better performance in terms of correct structure detection. The real application of our model shows its efficiency in the analysis of the phosphate laundry effluents' impact on the watershed in the Gafsa area (southwestern Tunisia).
Keywords: Bayesian network, classification, expert knowledge, structure learning, surface water analysis
Procedia PDF Downloads 128
28190 Stochastic Fleet Sizing and Routing in Drone Delivery
Authors: Amin Karimi, Lele Zhang, Mark Fackrell
Abstract:
Rural-to-urban population migration is a global phenomenon, with projections indicating that by 2050, 68% of the world's population will inhabit densely populated urban centers. Concurrently, the popularity of e-commerce shopping has surged, evidenced by a 51% increase in total e-commerce sales from 2017 to 2021. Consequently, distribution and logistics systems, integral to effective supply chain management, confront escalating hurdles in efficiently delivering and distributing products within bustling urban environments. Additionally, events like environmental challenges and the COVID-19 pandemic have shown that decision-makers face numerous sources of uncertainty. Therefore, to design an efficient and reliable logistics system, uncertainty must be considered. In this study, we examine fleet sizing and routing while considering uncertainty in the demand rate. Fleet sizing is typically a strategic-level decision, while routing is an operational-level one. In this study, a carrier must make two types of decisions: strategic-level decisions regarding the number and types of drones to be purchased, and operational-level decisions regarding planning routes based on the available fleet and realized demand. If the available fleet is insufficient to serve some customers, the carrier must outsource those deliveries at a relatively high cost, calculated per order. With this hierarchy of decisions, we can model the problem using two-stage stochastic programming. The first-stage decisions involve planning the number and type of drones to be purchased, while the second-stage decisions involve planning routes. To solve this model, we employ logic-based Benders decomposition, which decomposes the problem into a master problem and a set of sub-problems. The master problem becomes a mixed integer programming model to find the best fleet sizing decisions, and the sub-problems become capacitated vehicle routing problems considering battery status. Additionally, we assume a heterogeneous fleet based on load and battery capacity, and we consider that battery health deteriorates over time as we plan for multiple periods.
Keywords: drone-delivery, stochastic demand, VRP, fleet sizing
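A greatly simplified, scenario-based sketch of the first-stage decision is shown below: for each candidate fleet size, the expected second-stage cost is approximated by outsourcing whatever demand exceeds fleet capacity, with routing and battery constraints omitted. All costs and the demand distribution are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# First stage: buy drones now. Second stage (approximated): outsource any
# orders the fleet cannot serve once demand is realized. Routing and battery
# constraints of the full model are omitted; all parameters are illustrative.
drone_cost = 350.0                     # amortised cost per drone per horizon
orders_per_drone = 40                  # deliveries one drone handles per horizon
outsource_cost = 12.0                  # cost per outsourced order
demand_scenarios = rng.poisson(lam=500, size=5000)   # uncertain demand

def expected_total_cost(n_drones):
    capacity = n_drones * orders_per_drone
    shortfall = np.maximum(demand_scenarios - capacity, 0)
    return n_drones * drone_cost + outsource_cost * shortfall.mean()

best = min(range(0, 30), key=expected_total_cost)
print(f"best fleet size: {best} drones, "
      f"expected total cost: {expected_total_cost(best):.1f}")
```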
Procedia PDF Downloads 56
28189 Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block nv32, Shenvsi Oilfield, China
Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan
Abstract:
The significant effect of CO₂ on the global climate and the environment has gained growing concern worldwide. Enhanced oil recovery (EOR) associated with the sequestration of CO₂, particularly into depleted oil reservoirs, is considered a viable approach under financial limitations, since it improves oil recovery from the existing reservoir and strengthens the link between global-scale CO₂ capture and geological sequestration. Consequently, practical measures are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model to estimate the storage capacity of CO₂ under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in block Nv32, Shenvsi oilfield, China. In this regard, geophysical data, including well logs of twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct a 3D reservoir geological model. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in block Nv32, Shenvsi oilfield. Well logs were utilized to predict petrophysical properties such as porosity, permeability, and lithofacies, and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone with proportions of 38.09%, 32.42%, and 29.49%, respectively. Well log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2–36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net-to-gross ratios. These estimated values of porosity, permeability, lithofacies, and net-to-gross were upscaled and distributed laterally using the Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) methods to generate 3D reservoir geological models. The reservoir geological models show that there are lateral heterogeneities of the reservoir properties and lithofacies, and the best reservoir rocks exist in the Es1-x4, Es1-x3, and Es1-x2 units, respectively. In addition, the reservoir volumetrics of the Es1 units in block Nv32 were also estimated based on the petrophysical property models and found to be between 0.554368
Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32
Procedia PDF Downloads 177
28188 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis
Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen
Abstract:
Hepatitis is one of the most common and dangerous diseases that affects humankind, and it exposes millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for the diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an “indirect approach” is used for fuzzy system modeling by implementing the exponential compactness and separation index for determining the number of rules in the fuzzy clustering approach. Therefore, we first proposed a Type-I fuzzy system that had an accuracy of approximately 90.9%. In the proposed system, the process of diagnosis faces vagueness and uncertainty in the final decision. Thus, the imprecise knowledge was managed by using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy logic has the ability to diagnose hepatitis with an average accuracy of 93.94%. The classification accuracy obtained is the highest one reached thus far. The aforementioned rate of accuracy demonstrates that the Type-II fuzzy system has a better performance in comparison to Type-I and indicates a higher capability of the Type-II fuzzy system for modeling uncertainty.
Keywords: hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection
Procedia PDF Downloads 305
28187 A Comparative Analysis of a Custom Optimization Experiment with Confidence Intervals in Anylogic and Optquest
Authors: Felipe Haro, Soheila Antar
Abstract:
This paper introduces a custom optimization experiment developed in AnyLogic, based on genetic algorithms, designed to ensure reliable optimization results by incorporating Monte Carlo simulations and achieving a specified confidence level. To validate the custom experiment, we compared its performance with AnyLogic's built-in OptQuest optimization method across three distinct problems. Statistical analyses, including Welch's t-test, were conducted to assess the differences in performance. The results demonstrate that while the custom experiment shows advantages in certain scenarios, both methods perform comparably in others, confirming the custom approach as a reliable and effective tool for optimization under uncertainty.
Keywords: optimization, confidence intervals, Monte Carlo simulation, OptQuest, AnyLogic
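A minimal sketch of the statistical comparison step is given below: Welch's t-test on the best objective values found by the two methods over repeated runs, plus the per-method confidence intervals used to decide when enough Monte Carlo replications have been made. The result values are synthetic placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Best objective values found over repeated optimization runs of the two
# methods; the numbers are synthetic placeholders for the experiment results.
custom_ga = rng.normal(loc=102.5, scale=1.8, size=30)
optquest = rng.normal(loc=103.4, scale=2.3, size=30)

# Welch's t-test (unequal variances) on the mean best objective value
t_stat, p_value = stats.ttest_ind(custom_ga, optquest, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.4f}")

# A 95% confidence interval on each method's mean objective value
for name, x in [("custom GA", custom_ga), ("OptQuest", optquest)]:
    half = stats.t.ppf(0.975, len(x) - 1) * x.std(ddof=1) / np.sqrt(len(x))
    print(f"{name}: mean = {x.mean():.2f} +/- {half:.2f}")
```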
Procedia PDF Downloads 17
28186 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies
Authors: Roberta Martino, Viviana Ventre
Abstract:
Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, which is calculated by multiplying the cardinal utility of the outcome, as if its reception were instantaneous, by the discount function, which determines a decrease in the utility value according to how far the actual reception of the outcome lies from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of a rational investor described by classical economics. Instead, empirical evidence called for the formulation of alternative, hyperbolic models that better represented the actual actions of the investor. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain makes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of the alternative, and the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process, namely the behavioral biases involving the individual's emotional and cognitive system. In this paper we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting. It is possible to decompose the curve into two components: the first component is responsible for the smaller decrease in the outcome as time increases and is related to the individual's impatience; the second component relates to the change in the direction of the tangent vector to the curve and indicates how much the individual perceives the indeterminacy of the future, indicating his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it would allow the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which abnormal attitudes respect the principles of rationality.
Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty
Procedia PDF Downloads 129
28185 Production Planning for Animal Food Industry under Demand Uncertainty
Authors: Pirom Thangchitpianpol, Suttipong Jumroonrut
Abstract:
This research investigates the distribution of demand for animal food and the optimum amount of food production at minimum cost. The data consist of customer purchase orders for laying hen food, the price of laying hen food, the cost per unit of food inventory, and the costs incurred when the food is out of stock, such as fines, overtime, and urgent purchases of material. They were collected from January 1990 to December 2013 from a factory in Nakhon Ratchasima province. The collected data are analyzed in order to explore the distribution of the monthly food demand for laying hens and the inventory rate per unit. The results are used in a stochastic linear programming model for aggregate planning from which the optimum production, or minimum cost, can be obtained. Programming algorithms in MATLAB and tools in Linprog software are used to obtain the solution. The distribution of the food demand for laying hens and random numbers are used in the model. The study shows that the monthly food demand for laying hens follows a normal distribution, and the monthly average amount of production (unit: 30 kg) was obtained for January to December. The minimum total cost average for 12 months is Baht 62,329,181.77. Therefore, the production planning can reduce the cost by 14.64% compared with the real cost.
Keywords: animal food, stochastic linear programming, aggregate planning, production planning, demand uncertainty
Procedia PDF Downloads 378
28184 Enhancing Project Management Performance in Prefabricated Building Construction under Uncertainty: A Comprehensive Approach
Authors: Niyongabo Elyse
Abstract:
Prefabricated building construction is a pioneering approach that combines design, production, and assembly to attain energy efficiency, environmental sustainability, and economic feasibility. Despite continuous development of the industry in China, the low technical maturity of standardized design, factory production, and construction assembly introduces uncertainties affecting prefabricated component production and on-site assembly processes. This research focuses on enhancing project management performance under uncertainty to help enterprises navigate these challenges and optimize project resources. The study introduces a perspective on how uncertain factors influence the implementation of prefabricated building construction projects. It proposes a theoretical model considering project process management ability, adaptability to uncertain environments, and the collaboration ability of project participants. The impact of uncertain factors is demonstrated through case studies and quantitative analysis, revealing constraints on implementation time, cost, quality, and safety. To address uncertainties in prefabricated component production scheduling, a fuzzy model is presented, expressing processing times as interval values. The model utilizes a cooperative co-evolutionary algorithm (CCEA) to optimize scheduling, demonstrated through a real case study showcasing reduced project duration and minimized effects of processing time disturbances. Additionally, the research addresses on-site assembly construction scheduling, considering the relationship between task processing times and assigned resources. A multi-objective model with fuzzy activity durations is proposed, employing a hybrid cooperative co-evolutionary algorithm (HCCEA) to optimize project scheduling. Results from real case studies indicate improved project performance in terms of duration, cost, and resilience to processing time delays and resource changes. The study also introduces a multistage dynamic process control model, utilizing IoT technology for real-time monitoring during component production and construction assembly. This approach dynamically adjusts schedules when constraints arise, leading to enhanced project management performance, as demonstrated in a real prefabricated housing project. Key contributions include a fuzzy prefabricated component production scheduling model, a multi-objective multi-mode resource-constrained construction project scheduling model with fuzzy activity durations, a multistage dynamic process control model, and a cooperative co-evolutionary algorithm. The integrated mathematical model addresses the complexity of prefabricated building construction project management, providing a theoretical foundation for practical decision-making in the field.
Keywords: prefabricated construction, project management performance, uncertainty, fuzzy scheduling
Procedia PDF Downloads 49
28183 Two-stage Robust Optimization for Collaborative Distribution Network Design Under Uncertainty
Authors: Reza Alikhani
Abstract:
This research focuses on the establishment of horizontal cooperation among companies to enhance their operational efficiency and competitiveness. The study proposes an approach to horizontal collaboration, called coalition configuration, which involves partnering companies sharing distribution centers in a network design problem. The paper investigates which coalition should be formed at each distribution center to minimize the total cost of the network. Moreover, potential uncertainties, such as operational and disruption risks, are considered during the collaborative design phase. To address this problem, a two-stage robust optimization model for collaborative distribution network design under surging demand and facility disruptions is presented, along with a column-and-constraint generation algorithm to obtain exact solutions tailored to the proposed formulation. Extensive numerical experiments are conducted to analyze the solutions obtained by the model in various scenarios, including decisions ranging from fully centralized to fully decentralized settings, collaborative versus non-collaborative approaches, and different amounts of uncertainty budget. The results show that the coalition formation mechanism proposes solutions that are competitive with the savings of the grand coalition. The research also highlights that collaboration increases network flexibility and resilience while reducing the costs associated with demand and capacity uncertainties.
Keywords: logistics, warehouse sharing, robust facility location, collaboration for resilience
Procedia PDF Downloads 68
28182 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions
Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal
Abstract:
We present in this work our model for road traffic emissions (line sources) and the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows emission inventories to be generated from a reduced set of input parameters, adapted to the conditions existing in Morocco and in other developing countries. While several simplifications are made, the full performance of the model results is kept. A further important advantage of the model is that it allows the calculation of the uncertainty of the emission rate with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It provides an improvement in accuracy over previous formulas of the line source Gaussian plume model, without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will be canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the use of a combination of discretized source and analytical line source formulas remarkably minimizes the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport
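The discretized-source idea can be sketched as a sum of Gaussian point-source plumes along the road segment, as below; the emission rate, wind conditions and power-law dispersion coefficients are illustrative assumptions and do not reproduce DISPOLSPEM's improved line source formula.

```python
import numpy as np

def point_source(y, z, Q, u, H, sy, sz):
    """Crosswind/vertical part of a ground-reflected Gaussian plume."""
    return (Q / (2.0 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2.0 * sy**2))
            * (np.exp(-(z - H)**2 / (2.0 * sz**2))
               + np.exp(-(z + H)**2 / (2.0 * sz**2))))

# Road segment treated as a line source along y, discretized into point
# sources; wind blows along +x, perpendicular to the road. Emission rate,
# wind speed and the power-law dispersion coefficients are illustrative.
Q_per_m = 0.002                         # emission rate [g/(m*s)]
u_wind, H = 3.0, 0.5                    # wind speed [m/s], release height [m]
seg = np.linspace(-200.0, 200.0, 401)   # source positions along the road [m]
dL = seg[1] - seg[0]                    # road length represented by each point

def concentration(x_down, y_rec, z_rec):
    """Concentration [g/m^3] at a receptor x_down metres downwind of the road."""
    sy = 0.08 * x_down**0.9             # assumed sigma_y(x) power law
    sz = 0.06 * x_down**0.8             # assumed sigma_z(x) power law
    return sum(point_source(y_rec - ys, z_rec, Q_per_m * dL, u_wind, H, sy, sz)
               for ys in seg)

print(f"C at 50 m downwind, 1.5 m height: {concentration(50.0, 0.0, 1.5):.3e} g/m^3")
```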
Procedia PDF Downloads 440
28181 Using Interval Type-2 Fuzzy Controller for Diabetes Mellitus
Authors: Nafiseh Mollaei, Reihaneh Kardehi Moghaddam
Abstract:
In the case of diabetes mellitus, the control of insulin is very difficult. This illness is an incurable disease affecting millions of people worldwide. Glucose is a sugar which provides energy to the cells. Insulin is a hormone which supports the absorption of glucose. A fuzzy control strategy is attractive for glucose control because it mimics the first- and second-phase responses that the pancreatic beta cells use to control glucose. We propose two control algorithms, a type-1 fuzzy controller and an interval type-2 fuzzy method, for insulin infusion. The closed-loop system has been simulated for different patients with different parameters, in the presence of a food intake disturbance, and it has been shown that the blood glucose concentration reaches a normoglycemic level of 110 mg/dl in a reasonable amount of time. This paper deals with type 1 diabetes as a nonlinear model, which has been simulated in the MATLAB-SIMULINK environment. The novel model, termed the Augmented Minimal Model, is used in the simulations. There are some uncertainties in this model due to factors such as blood glucose, daily meals or sudden stress. In addition, to eliminate the effects of uncertainty, different control methods may be utilized. In this article, the fuzzy controller performance was assessed in terms of its ability to track a normoglycemic set point (110 mg/dl) in response to a [0-10] g meal disturbance. Finally, the development reported in this paper is supposed to simplify insulin delivery, thus increasing the quality of life of the patient.
Keywords: interval type-2, fuzzy controller, minimal augmented model, uncertainty
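As a rough illustration of the kind of glucose-insulin dynamics being controlled, the sketch below integrates the standard Bergman minimal model (a simpler stand-in for the Augmented Minimal Model used in the paper) with forward Euler and a crude proportional insulin law in place of the fuzzy controller; all parameter values and the meal disturbance are assumed for illustration.

```python
import numpy as np

# Standard Bergman minimal model (a simpler stand-in for the Augmented Minimal
# Model used above), integrated with forward Euler. Parameter values, the meal
# disturbance and the crude proportional insulin law replacing the fuzzy
# controller are illustrative assumptions only.
p1, p2, p3 = 0.028, 0.025, 1.3e-5      # glucose effectiveness / insulin action
n_clear, V_I = 0.09, 12.0              # insulin clearance [1/min], volume [l]
Gb, Ib = 110.0, 15.0                   # basal glucose [mg/dl] and insulin [mU/l]

dt, T = 1.0, 600                       # time step [min] and horizon [min]
G, X, I = 180.0, 0.0, Ib               # start from a hyperglycaemic state
log = []
for t in range(T):
    meal = 3.0 if 100 <= t < 130 else 0.0          # meal disturbance [mg/dl/min]
    u = float(np.clip(0.5 * (G - Gb), 0.0, 60.0))  # insulin infusion [mU/min]
    dG = -p1 * (G - Gb) - X * G + meal
    dX = -p2 * X + p3 * (I - Ib)
    dI = -n_clear * (I - Ib) + u / V_I
    G, X, I = G + dt * dG, X + dt * dX, I + dt * dI
    if t % 100 == 0:
        log.append((t, G, I))

for t, g, i in log:
    print(f"t = {t:3d} min   G = {g:6.1f} mg/dl   I = {i:5.1f} mU/l")
print(f"glucose after {T} min: {G:.1f} mg/dl")
```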
Procedia PDF Downloads 428
28180 Network Meta-Analysis to Identify the Most Effective Dressings to Treat Pressure Injury
Authors: Lukman Thalib, Luis Furuya-Kanamori, Rachel Walker, Brigid Gillespie, Suhail Doi
Abstract:
Background and objectives: There are many topical treatments available for pressure injury (PI) treatment, yet there is a lack of evidence regarding the most effective treatment. The objective of this study was to compare the effect of various topical treatments and identify the best treatment choice(s) for PI healing. Methods: Network meta-analysis of published randomized controlled trials that compared two or more of the following dressing groups: basic, foam, active, hydroactive, and other wound dressings. The outcome was complete healing following treatment, and the generalised pairwise modelling framework was used to generate mixed treatment effects against hydroactive wound dressing, currently the standard treatment for PIs. All treatments were then ranked by their point estimates. Main results: 40 studies (1,757 participants) comparing 5 dressing groups were included in the analysis. All dressing groups ranked better than basic (i.e. saline gauze or a similar inert dressing). Foam (RR 1.18; 95% CI 0.95-1.48) and active wound dressings (RR 1.16; 95% CI 0.92-1.47) ranked better than hydroactive wound dressing in terms of healing of PIs when the latter was used as the reference group. Conclusion and recommendations: There was considerable uncertainty around the estimates, yet the use of hydroactive wound dressings appears to perform better than basic dressings. The foam and active wound dressing groups show promise and need further investigation. High-quality research on the clinical effectiveness of the topical treatments is warranted to identify whether foam and active wound dressings do provide advantages over hydroactive dressings.
Keywords: Network Meta-Analysis, Pressure Injury, Dressing, Pressure Ulcer
Procedia PDF Downloads 114
28179 Infilling Strategies for Surrogate Model Based Multi-disciplinary Analysis and Applications to Velocity Prediction Programs
Authors: Malo Pocheau-Lesteven, Olivier Le Maître
Abstract:
Engineering and optimisation of complex systems is often achieved through multi-disciplinary analysis of the system, where each subsystem is modeled and interacts with other subsystems to model the complete system. The coherence of the outputs of the different sub-systems is achieved through the use of compatibility constraints, which enforce the coupling between the different subsystems. Due to the complexity of some sub-systems and the computational cost of evaluating their respective models, it is often necessary to build surrogate models of these subsystems to allow repeated evaluation of these subsystems at a relatively low computational cost. In this paper, Gaussian processes are used, as their probabilistic nature is leveraged to evaluate the likelihood of satisfying the compatibility constraints. This paper presents infilling strategies to build accurate surrogate models of the subsystems in areas where they are likely to meet the compatibility constraint. It is shown that these infilling strategies can reduce the computational cost of building surrogate models for a given level of accuracy. An application of these methods to velocity prediction programs used in offshore racing naval architecture further demonstrates the methods' applicability in a real engineering context. Also, some examples of the application of uncertainty quantification to the field of naval architecture are presented.
Keywords: infilling strategy, Gaussian process, multi disciplinary analysis, velocity prediction program
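A minimal sketch of the probabilistic ingredient is shown below: a Gaussian process is fitted to sparse evaluations of a coupling residual, and its predictive mean and standard deviation give the probability that the compatibility constraint is met at each candidate point, which can then drive an infill criterion. The residual function, tolerance and infill score are illustrative assumptions, not the paper's strategy.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)

# Toy coupling residual between two subsystems (zero means the compatibility
# constraint is met); in a real VPP this would come from the subsystem models.
def residual(x):
    return np.sin(3.0 * x) + 0.3 * x - 0.2

X_train = rng.uniform(0.0, 3.0, size=(8, 1))
y_train = residual(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(0.5),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Probability that |residual| <= tol at each candidate point
X_cand = np.linspace(0.0, 3.0, 7).reshape(-1, 1)
mu, sd = gp.predict(X_cand, return_std=True)
tol = 0.05
p_feasible = norm.cdf((tol - mu) / sd) - norm.cdf((-tol - mu) / sd)

# Infill where the constraint is likely to be met but the model is still unsure
score = p_feasible * sd
for x, p, s in zip(X_cand.ravel(), p_feasible, sd):
    print(f"x={x:.2f}  P(|res|<=tol)={p:.3f}  sd={s:.3f}")
print("next infill point:", X_cand[np.argmax(score)].ravel())
```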
Procedia PDF Downloads 156
28178 Adopting the Two-Stage Nested Mixed Analysis of Variance Test to the Eco Indicator 99 to Evaluate Building Technologies under LCA Uncertainties
Authors: Svetlana Pushkar
Abstract:
Eco-indicator 99 (EI99) considers fundamental life cycle assessment (LCA) uncertainties via the egalitarian/egalitarian (e/e), hierarchist/hierarchist (h/h), individualist/individualist (i/i), individualist/average (i/a), egalitarian/average (e/a), and hierarchist/average (h/a) methodological options. The objective of this study is to provide a reliable two-stage nested mixed balanced analysis of variance (ANOVA) test as a supplemental test to EI99 to address the problematic combination of similarly and not similarly produced materials usually found in building technologies. The robustness of the test was determined from both the "EI99 (all options)" stage (including the e/e, i/i, h/h, e/a, i/a, and h/a options, i.e., all methodological options) and the "EI99 (perspectives)" stage (including the e/e, i/i, and h/h options of EI99, i.e., the methodological options with their particular weighting set, or the e/a, i/a, and h/a options of EI99, i.e., the methodological options with the average weighting set) of evaluating building technologies.
Keywords: building technologies, LCA uncertainty, Eco-indicator 99, two-stage nested mixed ANOVA test
Procedia PDF Downloads 307
28177 Artificial Neural Network to Predict the Optimum Performance of Air Conditioners under Environmental Conditions in Saudi Arabia
Authors: Amr Sadek, Abdelrahaman Al-Qahtany, Turkey Salem Al-Qahtany
Abstract:
In this study, a backpropagation artificial neural network (ANN) model has been used to predict the cooling and heating capacities of air conditioners (AC) under different conditions. Sufficiently large sets of measurement results were obtained from the national energy-efficiency laboratories in Saudi Arabia and were used for the learning process of the ANN model. The parameters affecting the performance of the AC, including temperature, humidity level, specific heat enthalpy indoors and outdoors, and the air volume flow rate of the indoor units, have been considered. These parameters were used as inputs for the ANN model, while the cooling and heating capacity values were set as the targets. A backpropagation ANN model with two hidden layers and one output layer could successfully correlate the input parameters with the targets. The characteristics of the ANN model, including the input processing, transfer, neuron-distance, topology, and training functions, have been discussed. The performance of the ANN model was monitored over the training epochs and assessed using the mean squared error function. The model was then used to predict the performance of the AC under conditions that were not included in the measurement results. The optimum performance of the AC was also predicted under the different environmental conditions in Saudi Arabia. The uncertainty of the ANN model predictions has been evaluated, taking into account the randomness of the data and the lack of learning.
Keywords: artificial neural network, uncertainty of model predictions, efficiency of air conditioners, cooling and heating capacities
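A small sketch of such a backpropagation network with two hidden layers is given below, using synthetic stand-ins for the laboratory measurements; the input ranges, the capacity relation used to generate targets, and the network sizes are assumptions for illustration only.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
n = 800

# Synthetic stand-in for the measurements: outdoor temperature [C], indoor
# temperature [C], relative humidity [%], indoor/outdoor enthalpy difference
# [kJ/kg] and indoor-unit air volume flow rate [m^3/h].
X = np.column_stack([rng.uniform(30, 52, n), rng.uniform(18, 30, n),
                     rng.uniform(20, 70, n), rng.uniform(5, 40, n),
                     rng.uniform(300, 900, n)])
# Illustrative "true" cooling capacity [W] with measurement noise
y = 0.11 * X[:, 4] * X[:, 3] - 35.0 * (X[:, 0] - 35.0) + rng.normal(0, 150, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Two hidden layers, as in the model described above; inputs are standardized
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   activation="relu", solver="adam",
                                   max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)
print(f"test MSE: {mean_squared_error(y_te, model.predict(X_te)):.0f}")
print("predicted capacity at 46 C outdoor [W]:",
      model.predict([[46.0, 24.0, 45.0, 25.0, 600.0]]).round(0))
```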
Procedia PDF Downloads 72
28176 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines
Authors: Alexander Guzman Urbina, Atsushi Aoyama
Abstract:
The sustainability of the traditional technologies employed in energy and chemical infrastructure poses a major challenge for society. When decisions related to the safety of industrial infrastructure are made, accidental risk values become relevant points of discussion. The challenge, however, is the reliability of the models used to obtain the risk data; such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques for overcoming these problems are built using artificial intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper introduces a hybrid algorithm for risk assessment trained on near-miss accident data. As mentioned above, the sustainability of traditional energy and chemical infrastructure technologies constitutes one of the major challenges that today’s societies and firms face. In addition, adapting those technologies to the effects of climate change in sensitive environments is a critical concern for safety and risk management. In this regard, we argue that the social consequences of catastrophic risks are increasing rapidly, mainly because of the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. Beyond these social consequences, and considering that the industrial sector is critical infrastructure whose failure would have a large economic impact, industrial safety has become a critical issue for society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. This paper therefore introduces a well-trained deep-learning algorithm for risk assessment that can deal efficiently with this complexity and uncertainty. The advantage of deep learning trained on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and by imitating the way human experts score risks and set tolerance levels. In summary, the deep learning method for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists in determining the optimal configuration of the risk assessment model and its parameters using polynomial theory.
Keywords: deep learning, risk assessment, neuro fuzzy, pipelines
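A very small sketch of the GMDH idea mentioned at the end of the abstract is given below (the synthetic data, the quadratic candidate form, and the selection rule are assumptions; the authors' actual network and near-miss data set are not reproduced): each GMDH layer fits quadratic polynomials to every pair of inputs by least squares, keeps the candidates with the lowest validation error, and feeds them to the next layer until the validation error stops improving.

```python
# Minimal GMDH-style sketch (synthetic data, assumed quadratic candidate form):
# each layer fits y = a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2 for all
# input pairs, keeps the best candidates by validation error, and repeats.
import itertools
import numpy as np

def design(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 4))            # hypothetical risk factors
y = 2 * X[:, 0] * X[:, 1] + X[:, 2] ** 2 + rng.normal(0, 0.05, 200)  # assumed risk score
tr, va = slice(0, 150), slice(150, 200)         # train / validation split

Z, best_err = X.copy(), np.inf
for layer in range(3):                          # grow layers while validation error drops
    candidates = []
    for i, j in itertools.combinations(range(Z.shape[1]), 2):
        A = design(Z[tr, i], Z[tr, j])
        coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        pred = design(Z[:, i], Z[:, j]) @ coef
        err = np.mean((pred[va] - y[va]) ** 2)
        candidates.append((err, pred))
    candidates.sort(key=lambda c: c[0])
    layer_err = candidates[0][0]
    if layer_err >= best_err:                   # stop when the best candidate no longer improves
        break
    best_err = layer_err
    Z = np.column_stack([p for _, p in candidates[:4]])   # keep the 4 best outputs
    print(f"layer {layer + 1}: validation MSE = {layer_err:.4f}")
```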
Procedia PDF Downloads 292
28175 Second Time’s a Charm: The Intervention of the European Patent Office on the Strategic Use of Divisional Applications
Authors: Alissa Lefebre
Abstract:
It might seem intuitive to hope for a fast decision on a patent grant. After all, a granted patent provides a monopoly position that allows you to prevent others from using your technology. This view, however, ignores the strategic advantages of keeping patent applications pending. First, there is the financial advantage of postponing certain fees, although many applicants would probably agree that this is not the main benefit. Because the scope of patent protection is only decided at grant, the pendency period creates uncertainty among rivals: they do not know whether the patent will actually be granted or what the scope of protection will be. Consequently, rivals can rely only on limited and uncertain information when deciding which technologies are worth pursuing. One way to keep patent applications pending is the use of divisional applications. These applications can be filed out of a parent application as long as that parent application is still pending. This allows the applicant to pursue (part of) the content of the parent application in another application, since the divisional application cannot exceed the scope of the parent application. In a fast-moving and complex market such as tele- and digital communications, this may give applicants an actual monopoly position, as competitors are discouraged from pursuing a certain technology. Nevertheless, this practice also has downsides. First, it affects the workload of the examiners at the patent office; with patent filings having increased over the last decades, strategies that inflate this number even further are undesirable from the examiners’ point of view. Second, a pending patent does not provide the protection of a granted patent, creating uncertainty not only for rivals but also for the applicant. Consequently, the European Patent Office (EPO) launched a “raising the bar” initiative to tackle the strategic use of divisional applications. Over the past years, two rules have been implemented. The first rule, introduced in 2010, imposed a time limit: divisional applications could only be filed within 24 months of the first communication with the patent office. After carrying out a user feedback survey, however, the EPO abolished the rule again in 2014 and replaced it with a fee mechanism. The fee mechanism is still in place today, which might indicate a better result than the first rule change. This study tests the impact of these rules on the strategic use of divisional applications in the tele- and digital communication industry and provides empirical evidence on their success. Using three different survival models, we find overall evidence that divisional applications prolong the pendency time and that only the second rule is able to curb the strategic patenting and thus decrease the pendency time.
Keywords: divisional applications, regulatory changes, strategic patenting, EPO
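A hedged sketch of the kind of survival analysis described here is shown below (the data frame, column names, and covariates are hypothetical; the paper's actual models, covariates, and EPO data are not reproduced): a Cox proportional hazards model treats time to grant as the survival time, grant as the event, and indicators for divisional status and the 2010/2014 regimes as covariates, so a hazard ratio below one for the divisional indicator would correspond to longer pendency.

```python
# Minimal sketch (synthetic data, hypothetical column names): a Cox
# proportional hazards model of pendency time, with grant as the event and
# divisional status plus the post-2010 / post-2014 regimes as covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 1000
divisional = rng.integers(0, 2, n)              # 1 = divisional application
post_2010 = rng.integers(0, 2, n)               # filed under the 24-month rule
post_2014 = rng.integers(0, 2, n)               # filed under the fee mechanism
# synthetic pendency in months: divisionals pend longer, the fee rule shortens it
baseline = rng.exponential(40, n)
pendency = baseline * (1 + 0.5 * divisional) * (1 - 0.2 * post_2014)
granted = rng.uniform(size=n) < 0.8             # 0 = still pending / withdrawn (censored)

df = pd.DataFrame({
    "pendency_months": pendency,
    "granted": granted.astype(int),
    "divisional": divisional,
    "post_2010": post_2010,
    "post_2014": post_2014,
})
cph = CoxPHFitter()
cph.fit(df, duration_col="pendency_months", event_col="granted")
cph.print_summary()                              # hazard ratio < 1 implies longer pendency
```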
Procedia PDF Downloads 126
28174 The Establishment of RELAP5/SNAP Model for Kuosheng Nuclear Power Plant
Authors: C. Shih, J. R. Wang, H. C. Chang, S. W. Chen, S. C. Chiang, T. Y. Yu
Abstract:
Through the measurement uncertainty recapture (MUR) power uprate, the power of the Kuosheng nuclear power plant (NPP) was uprated from 2894 MWt to 2943 MWt. For the power uprate, several codes (e.g., TRACE and RELAP5) were applied to assess the safety of Kuosheng NPP. Hence, the main work of this research is to establish a RELAP5/MOD3.3 model of Kuosheng NPP with the SNAP interface. The RELAP5/SNAP model was established with reference to the FSAR, training documents, and a TRACE model that had been developed and verified previously. After the model establishment is completed, startup test scenarios will be applied to the RELAP5/SNAP model. By comparing the results with the startup test data and the TRACE analysis results, the applicability of the RELAP5/SNAP model will be assessed.
Keywords: RELAP5, TRACE, SNAP, BWR
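As a rough illustration of the assessment step described above (the traces, the compared variable, and the deviation metrics below are assumptions, not part of the plant analysis), code-to-data comparisons of this kind often reduce to interpolating the simulated trace onto the measurement time points and reporting deviation metrics:

```python
# Minimal sketch (synthetic traces, assumed variable): compare simulated
# startup-test traces against measured data by interpolating onto the
# measurement times and reporting relative deviation metrics.
import numpy as np

t_meas = np.linspace(0, 100, 51)                         # measurement times [s]
dome_pressure_meas = 7.2 + 0.05 * np.sin(0.1 * t_meas)   # hypothetical measured data [MPa]

t_sim = np.linspace(0, 100, 201)                         # simulation output times [s]
dome_pressure_relap5 = 7.2 + 0.048 * np.sin(0.1 * t_sim + 0.02)
dome_pressure_trace  = 7.2 + 0.052 * np.sin(0.1 * t_sim - 0.01)

def deviation(t_sim, y_sim, t_ref, y_ref):
    """Max and RMS relative deviation of a simulated trace from reference data."""
    y_interp = np.interp(t_ref, t_sim, y_sim)
    rel = (y_interp - y_ref) / y_ref
    return np.max(np.abs(rel)), np.sqrt(np.mean(rel ** 2))

for name, y in [("RELAP5/SNAP", dome_pressure_relap5), ("TRACE", dome_pressure_trace)]:
    max_dev, rms_dev = deviation(t_sim, y, t_meas, dome_pressure_meas)
    print(f"{name}: max |dev| = {max_dev:.3%}, RMS dev = {rms_dev:.3%}")
```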
Procedia PDF Downloads 428