Search results for: Esterina De Carlo
288 Analysis of Speed Disparity in Mixed Vehicle Technologies on Horizontal Curves
Authors: Tahmina Sultana, Yasser Hassan
Abstract:
Vehicle technologies are rapidly evolving due to their multifaceted advantages. Deploying vehicle technologies such as connectivity and automation on the same roads as conventional vehicles controlled by human drivers may increase speed disparity in mixed vehicle technologies. Identifying relationships between the speed distribution measures of different vehicles and road geometry can be an indicator of speed disparity in mixed technologies. Previous studies have shown that speed disparity measures and traffic accidents are inextricably related. Horizontal curves from three geographic areas were selected based on relevant criteria, and speed data were collected at the midpoint of the preceding tangent and at the starting, middle, and ending points of the curve. Multiple linear mixed effect (LME) models were developed using the instantaneous speed measures representing the speed of vehicles at different points of the horizontal curves to identify relationships between speed variance (standard deviation) and road geometry. A simulation-based (Monte Carlo) framework was introduced to check the speed disparity on horizontal curves in mixed vehicle technologies when consideration is given to the interactions among connected vehicles (CVs), autonomous vehicles (AVs), and non-connected vehicles (NCVs). The Monte Carlo method was used in the simulation to randomly sample values for the various parameters from their respective distributions. The results show that NCVs had higher speed variation than CVs and AVs. In addition, AVs and CVs contributed to reducing speed disparity in mixed vehicle technologies at any penetration rate.
Keywords: autonomous vehicles, connected vehicles, non-connected vehicles, speed variance
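A minimal Python sketch of the Monte Carlo sampling idea described above; the speed distributions for each vehicle class and the penetration rates are illustrative assumptions, not values from the study.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not the paper's) curve-midpoint speed distributions, km/h: (mean, SD)
fleet = {"NCV": (82.0, 8.0), "CV": (80.0, 5.0), "AV": (79.0, 3.0)}

def mixed_speed_sd(p_av, p_cv, n=100_000):
    """Monte Carlo estimate of the speed standard deviation for a mixed fleet."""
    p_ncv = 1.0 - p_av - p_cv
    labels = rng.choice(["AV", "CV", "NCV"], size=n, p=[p_av, p_cv, p_ncv])
    means = np.array([fleet[l][0] for l in labels])
    sds = np.array([fleet[l][1] for l in labels])
    return rng.normal(means, sds).std()

for p in (0.0, 0.2, 0.5, 0.8):
    print(f"combined AV+CV penetration {p:.0%} -> speed SD ~ {mixed_speed_sd(p/2, p/2):.2f} km/h")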
Procedia PDF Downloads 145
287 Advanced Numerical and Analytical Methods for Assessing Concrete Sewers and Their Remaining Service Life
Authors: Amir Alani, Mojtaba Mahmoodian, Anna Romanova, Asaad Faramarzi
Abstract:
Pipelines are extensively used engineering structures which convey fluid from one place to another. Most of the time, pipelines are placed underground and are encumbered by soil weight and traffic loads. Corrosion of the pipe material is the most common form of pipeline deterioration and should be considered in both the strength and serviceability analysis of pipes. This research focuses on concrete pipes in sewage systems (concrete sewers). It firstly investigates how to incorporate the effect of corrosion as a time-dependent deterioration process in the structural and failure analysis of this type of pipe. Then three probabilistic time-dependent reliability analysis methods, including the first passage probability theory, the gamma distributed degradation model and the Monte Carlo simulation technique, are discussed and developed. Sensitivity analysis indices which can be used to identify the most important parameters that affect pipe failure are also discussed. The reliability analysis methods developed in this paper contribute as rational tools for decision makers with regard to the strengthening and rehabilitation of existing pipelines. The results can be used to obtain a cost-effective strategy for the management of the sewer system.
Keywords: reliability analysis, service life prediction, Monte Carlo simulation method, first passage probability theory, gamma distributed degradation model
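A hedged Python sketch of the gamma-process degradation and first-passage idea mentioned above; all parameter values (degradation rate, resistance threshold) are illustrative placeholders, not the paper's data.

import numpy as np

rng = np.random.default_rng(1)

# Corrosion loss of the sewer wall modelled as a stationary gamma process;
# failure occurs when cumulative loss first exceeds a random resistance margin.
years = np.arange(1, 101)
shape_per_year = 0.8                                  # gamma shape increment per year (assumed)
scale_mm = 0.5                                        # gamma scale in mm (assumed)
threshold_mm = rng.normal(30.0, 3.0, size=20_000)     # random resistance margin (assumed)

increments = rng.gamma(shape_per_year, scale_mm, size=(20_000, years.size))
paths = increments.cumsum(axis=1)
pf = (paths >= threshold_mm[:, None]).mean(axis=0)    # time-dependent failure probability

for t in (25, 50, 75, 100):
    print(f"P(failure by year {t}) ~ {pf[t - 1]:.3f}")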
Procedia PDF Downloads 457
286 Evaluation of the Performance of Solar Stills as an Alternative for Brine Treatment Applying the Monte Carlo Ray Tracing Method
Authors: B. E. Tarazona-Romero, J. G. Ascanio-Villabona, O. Lengerke-Perez, A. D. Rincon-Quintero, C. L. Sandoval-Rodriguez
Abstract:
Desalination offers solutions for the shortage of water in the world; however, the process of eliminating salts generates a by-product known as brine, which is generally discharged into the environment through techniques that mitigate its impact. Brine treatment techniques are vital to developing an environmentally sustainable desalination process. Consequently, this document evaluates three different geometric configurations of solar stills as an alternative for brine treatment to be integrated into a low-scale desalination process. The geometric scenarios studied were selected because they have characteristics that fit the concept of appropriate technology: low cost, local labor and material resources for manufacturing, modularity, and simplicity in construction. Additionally, the conceptual design of the collectors was carried out, and the ray tracing methodology was applied through the open-access software SolTrace and Tonatiuh. The simulation process used 600.00 rays and modified two input parameters: direct normal irradiance (DNI) and reflectance. In summary, for the scenarios evaluated, the ladder-type distiller presented higher efficiency values compared to the pyramid-type and single-slope collectors. Finally, the efficiency of the collectors studied was directly related to their geometry; that is, large geometries allow them to receive a greater number of solar rays along various paths, affecting the efficiency of the device.
Keywords: appropriate technology, brine treatment techniques, desalination, monte carlo ray tracing
Procedia PDF Downloads 72
285 A Critical Review of Assessments of Geological CO2 Storage Resources in Pennsylvania and the Surrounding Region
Authors: Levent Taylan Ozgur Yildirim, Qihao Qian, John Yilin Wang
Abstract:
A critical review of assessments of geological carbon dioxide (CO2) storage resources in Pennsylvania and the surrounding region was completed with a focus on the studies of the Midwest Regional Carbon Sequestration Partnership (MRCSP), the United States Department of Energy (US-DOE), and the United States Geological Survey (USGS). The Pennsylvania Geological Survey participated in the MRCSP Phase I research to characterize potential storage formations in Pennsylvania. The MRCSP’s volumetric method estimated ~89 gigatonnes (Gt) of total CO2 storage resources in deep saline formations, depleted oil and gas reservoirs, coals, and shales in Pennsylvania. Meanwhile, the US-DOE calculated storage efficiency factors using a log-odds normal distribution and Monte Carlo sampling, revealing contingent storage resources of ~18 Gt to ~20 Gt in deep saline formations, depleted oil and gas reservoirs, and coals in Pennsylvania. Additionally, the USGS employed a Beta-PERT distribution and Monte Carlo sampling to determine buoyant and residual storage efficiency factors, resulting in 20 Gt of contingent storage resources across four storage assessment units in the Appalachian Basin. However, few studies have explored CO2 storage resources in shales in the region, yielding inconclusive findings. This article provides a critical and up-to-date review and analysis of geological CO2 storage resources in Pennsylvania and the region.
Keywords: carbon capture and storage, geological CO2 storage, Pennsylvania, Appalachian Basin
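A minimal Python sketch of a volumetric storage-resource estimate with a Monte Carlo sampled efficiency factor, as described above; all input distributions are illustrative assumptions, not the MRCSP, US-DOE, or USGS figures.

import numpy as np

rng = np.random.default_rng(2)

n = 100_000
area_m2     = rng.normal(5.0e10, 5.0e9, n)       # net formation area (assumed)
thickness_m = rng.normal(30.0, 5.0, n)           # net thickness (assumed)
porosity    = rng.uniform(0.08, 0.15, n)         # effective porosity (assumed)
rho_co2     = rng.normal(700.0, 50.0, n)         # CO2 density at depth, kg/m3 (assumed)

# US-DOE-style storage efficiency factor sampled from a log-odds normal distribution
logit_e = rng.normal(np.log(0.02 / 0.98), 0.5, n)
efficiency = 1.0 / (1.0 + np.exp(-logit_e))

mass_gt = area_m2 * thickness_m * porosity * rho_co2 * efficiency / 1e12   # gigatonnes
print("P10 / P50 / P90 storage resource (Gt):",
      np.round(np.percentile(mass_gt, [10, 50, 90]), 1))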
Procedia PDF Downloads 56
284 Electricity Load Modeling: An Application to Italian Market
Authors: Giovanni Masala, Stefania Marica
Abstract:
Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is therefore to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters will be performed using data from the Italian market over a six-year period (2007-2012). Then, we will perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model will be assessed through standard tests, which highlight a good fit of the simulated values.
Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression
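A hedged Python sketch of the modelling idea above on synthetic data: a deterministic component built from a trend plus daily and weekly Fourier terms, and a simple AR(1) residual that is then Monte Carlo simulated. All coefficients are illustrative; the paper's stochastic component is an ARMA-GARCH process, whereas AR(1) is used here only for brevity.

import numpy as np

rng = np.random.default_rng(3)

hours = np.arange(24 * 365)
load = (50 + 0.001 * hours
        + 10 * np.sin(2 * np.pi * hours / 24) + 4 * np.sin(2 * np.pi * hours / (24 * 7))
        + rng.normal(0, 2, hours.size))                      # synthetic hourly load

# Least-squares fit of the deterministic part on trend + Fourier regressors
X = np.column_stack([np.ones(hours.size), hours,
                     np.sin(2 * np.pi * hours / 24), np.cos(2 * np.pi * hours / 24),
                     np.sin(2 * np.pi * hours / (24 * 7)), np.cos(2 * np.pi * hours / (24 * 7))])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
resid = load - X @ beta

# AR(1) calibration of the residual and Monte Carlo simulation of one future week
phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]
sigma = resid.std() * np.sqrt(1 - phi**2)
sims = np.zeros((500, 24 * 7))
for t in range(1, sims.shape[1]):
    sims[:, t] = phi * sims[:, t - 1] + rng.normal(0, sigma, sims.shape[0])
print("simulated residual SD:", round(sims.std(), 2), "| fitted residual SD:", round(resid.std(), 2))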
Procedia PDF Downloads 398
283 Uncertainty Assessment in Building Energy Performance
Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud
Abstract:
The building sector is one of the largest energy consumers, with about 40% of the final energy consumption in the European Union. Ensuring building energy performance is a matter of scientific, technological and sociological concern. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the measured consumption when the building is operational. When evaluating this performance, many buildings show significant differences between the calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data in the model being used. The evaluation of measurement uncertainty is based on both the knowledge about the measurement process and the input quantities which influence the result of measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics presented in the Guide to the Expression of Uncertainty in Measurement (GUM) as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for the estimation of the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. To this end, an office building has been monitored and multiple sensors have been mounted at candidate locations to obtain the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window opening and occupancy rate are the features for our research work.
Keywords: building energy performance, uncertainty evaluation, GUM, bayesian approach, monte carlo method
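A minimal Python sketch contrasting GUM-style first-order propagation with Monte Carlo propagation on a deliberately simplified consumption model E = U·A·ΔT·t; the input values and uncertainties are illustrative assumptions, not the monitored building's data.

import numpy as np

rng = np.random.default_rng(4)

U, uU   = 0.8, 0.08      # heat transfer coefficient, W/m2K, and its standard uncertainty (assumed)
A, uA   = 102.0, 2.0     # surface area, m2 (assumed uncertainty)
dT, udT = 12.0, 1.5      # mean indoor-outdoor temperature difference, K (assumed)
t       = 2000 * 3600    # heating period, s

E = U * A * dT * t / 3.6e6                                   # kWh, nominal value
u_rel = np.sqrt((uU / U)**2 + (uA / A)**2 + (udT / dT)**2)   # GUM combined relative uncertainty
print(f"GUM:         E = {E:.0f} kWh, u(E) = {E * u_rel:.0f} kWh")

n = 200_000
E_mc = rng.normal(U, uU, n) * rng.normal(A, uA, n) * rng.normal(dT, udT, n) * t / 3.6e6
print(f"Monte Carlo: E = {E_mc.mean():.0f} kWh, u(E) = {E_mc.std():.0f} kWh")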
Procedia PDF Downloads 460
282 Application Reliability Method for Concrete Dams
Authors: Mustapha Kamel Mihoubi, Mohamed Essadik Kerkar
Abstract:
Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of works, including when calculating the stability of large structures exposed to a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods used in engineering, in our case level 2 methods based on the study of the limit state. Hence, the probability of failure is estimated by analytical methods of the first-order reliability method (FORM) and second-order reliability method (SORM) type. By way of comparison, a level 3 method was used, which generates a full analysis of the problem and involves an integration of the probability density function of the random variables extended over the safety domain using the Monte Carlo simulation method. Taking into account the change in stress under the load combinations acting on the dam (normal, exceptional and extreme), the results obtained provide acceptable failure probability values which largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength; the shear forces then induce sliding that threatens the reliability of the structure through intolerable values of the probability of failure, especially in the case of increased uplift under a hypothetical failure of the drainage system.
Keywords: dam, failure, limit-state, monte-carlo, reliability, probability, simulation, sliding, taylor
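A hedged Python sketch of the level 3 (Monte Carlo) check of a sliding limit state, g = shear resistance - driving force, described above; all distributions and numerical values are illustrative placeholders, not those of the studied dam.

import numpy as np

rng = np.random.default_rng(5)

n = 1_000_000
cohesion   = rng.lognormal(np.log(0.5), 0.25, n)   # concrete-rock cohesion, MPa (assumed)
contact_a  = 80.0                                  # contact area per unit length, m2/m (assumed)
tan_phi    = rng.normal(0.84, 0.07, n)             # friction coefficient, tan(40 deg) (assumed)
normal_f   = rng.normal(130.0, 8.0, n)             # normal force, MN/m (assumed)
uplift     = rng.normal(30.0, 8.0, n)              # uplift force, MN/m (assumed)
shear_load = rng.normal(65.0, 12.0, n)             # hydrostatic driving force, MN/m (assumed)

g = cohesion * contact_a + (normal_f - uplift) * tan_phi - shear_load   # limit-state function
pf = (g <= 0).mean()
print(f"Estimated probability of sliding failure: {pf:.2e}")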
Procedia PDF Downloads 326
281 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process
Authors: Johannes Gantner, Michael Held, Matthias Fischer
Abstract:
The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany’s total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, despite the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form the standardized basis for addressing these doubts and are becoming increasingly important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). Due to the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially for supporting decision and policy makers. LCA and LCC results are based on respective models which depend on technical parameters like efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered or only studied superficially, even though the effect of parameter uncertainties cannot be neglected. Based on the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. These analyses allow for a first rough view of the results but do not take effects such as error propagation into account. Consequently, LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of the LCA and LCC results and provide better decision support. Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effect of different uncertainty analyses on the interpretation in terms of resilience and robustness is shown. Here, the approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow for a deeper understanding and interpretation. Overall, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation
Procedia PDF Downloads 286
280 Investigating the Minimum RVE Size to Simulate Poly (Propylene carbonate) Composites Reinforced with Cellulose Nanocrystals as a Bio-Nanocomposite
Authors: Hamed Nazeri, Pierre Mertiny, Yongsheng Ma, Kajsa Duke
Abstract:
The background of the present study is the use of environment-friendly biopolymer and biocomposite materials. Among the recently introduced biopolymers, poly (propylene carbonate) (PPC) has been gaining attention. This study focuses on the size of representative volume elements (RVE) in order to simulate PPC composites reinforced by cellulose nanocrystals (CNCs) as a bio-nanocomposite. Before manufacturing nanocomposites, numerical modeling should be implemented to explore and predict mechanical properties, which may be accomplished by creating and studying a suitable RVE. In other studies, modeling of composites with rod-shaped fillers has been reported assuming that the fillers are unidirectionally aligned. However, modeling of non-aligned filler dispersions is considerably more difficult. This study investigates the minimum RVE size to enable subsequent FEA modeling. The matrix and nano-fillers were modeled using the finite element software ABAQUS, assuming randomly dispersed fillers with a filler mass fraction of 1.5%. To simulate the filler dispersion, a Monte Carlo technique was employed. The numerical simulation was implemented to find the composite elastic moduli. After commencing the simulation with a single filler particle, the number of particles was increased to assess the minimum number of filler particles that satisfies the requirements for an RVE, providing the composite elastic modulus in a reliable fashion.
Keywords: biocomposite, Monte Carlo method, nanocomposite, representative volume element
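A minimal Python sketch of the Monte Carlo dispersion step only: non-overlapping spherical fillers are placed randomly in a cubic RVE until a target volume fraction is reached. Spheres stand in for the rod-shaped CNCs purely for brevity, and the RVE edge length and filler radius are assumed values.

import numpy as np

rng = np.random.default_rng(6)

def fill_rve(edge=60.0, radius=2.0, target_vf=0.015, max_tries=50_000):
    """Randomly place non-overlapping spheres in a cube (random sequential addition)."""
    centers, v_sphere = [], 4.0 / 3.0 * np.pi * radius**3
    needed = int(np.ceil(target_vf * edge**3 / v_sphere))
    tries = 0
    while len(centers) < needed and tries < max_tries:
        tries += 1
        c = rng.uniform(radius, edge - radius, 3)       # keep the sphere fully inside the RVE
        if all(np.linalg.norm(c - p) >= 2 * radius for p in centers):
            centers.append(c)
    return np.array(centers), len(centers) * v_sphere / edge**3

centers, vf = fill_rve()
print(f"Placed {len(centers)} fillers, volume fraction {vf:.4f}")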
Procedia PDF Downloads 444
279 Monte Carlo Simulation Study on Improving the Flattening Filter-Free Radiotherapy Beam Quality Using Filters from Low-Z Material
Authors: H. M. Alfrihidi, H.A. Albarakaty
Abstract:
Flattening filter-free (FFF) photon beam radiotherapy has increased in the last decade, enabled by advancements in treatment planning systems and radiation delivery techniques like multi-leaf collimators. FFF beams have higher dose rates, which reduces treatment time. On the other hand, FFF beams have a higher surface dose, which is due to the loss of the beam hardening effect normally caused by the presence of the flattening filter (FF). The possibility of improving FFF beam quality using filters from low-Z materials such as steel and aluminium (Al) was investigated using Monte Carlo (MC) simulations. The attenuation coefficient of low-Z materials for low-energy photons is higher than for high-energy photons, which leads to hardening of the FFF beam and, consequently, a reduction in the surface dose. The BEAMnrc user code, based on the Electron Gamma Shower (EGSnrc) MC code, is used to simulate the beam of a 6 MV TrueBeam linac. A phase-space file provided by Varian Medical Systems was used as the radiation source in the simulation. This phase-space file was scored just above the jaws at 27.88 cm from the target. The linac from the jaws downward was constructed, and the transmitted radiation was simulated and scored at 100 cm from the target. To study the effect of low-Z filters, steel and Al filters with a thickness of 1 cm were added below the jaws, and the phase-space file was scored at 100 cm from the target. For comparison, the FF beam was simulated using a similar setup. The BEAM Data Processor (BEAMdp) is used to analyse the energy spectrum in the phase-space files. Then, the dose distribution resulting from these beams was simulated in a homogeneous water phantom using DOSXYZnrc. The dose profile was evaluated according to the surface dose, the lateral dose distribution, and the percentage depth dose (PDD). The energy spectra of the beams show that the FFF beam is softer than the FF beam. The energy peaks for the FFF and FF beams are 0.525 MeV and 1.52 MeV, respectively. The FFF beam's energy peak becomes 1.1 MeV using a steel filter, while the Al filter does not affect the peak position. The steel and Al filters reduced the surface dose by 5% and 1.7%, respectively. The dose at a depth of 10 cm (D10) rises by around 2% and 0.5% when using a steel or an Al filter, respectively. On the other hand, the steel and Al filters reduce the dose rate of the FFF beam by 34% and 14%, respectively. However, their effect on the dose rate is less than that of the tungsten FF, which reduces the dose rate by about 60%. In conclusion, filters from low-Z material decrease the surface dose and increase the D10 dose, allowing for high-dose delivery to deep tumors with a low skin dose. Although using these filters affects the dose rate, this effect is much lower than the effect of the FF.
Keywords: flattening filter free, monte carlo, radiotherapy, surface dose
Procedia PDF Downloads 73
278 The BNCT Project Using the Cf-252 Source: Monte Carlo Simulations
Authors: Marta Błażkiewicz-Mazurek, Adam Konefał
Abstract:
The project can be divided into three main parts: i. modeling the Cf-252 neutron source and conducting an experiment to verify the correctness of the obtained results, ii. design of the BNCT system infrastructure, and iii. analysis of the results from the logical detector. Modeling of the Cf-252 source included designing the shape and size of the source as well as the energy and spatial distribution of the emitted neutrons. Two options were considered: a point source and a cylindrical spatial source. The energy distribution corresponded to various spectra taken from the specialized literature. Directionally isotropic neutron emission was simulated. The simulation results were compared with experimental values determined by the activation detector method with indium foils and cadmium shields. The relative fluence rates of thermal and resonance neutrons were compared at chosen locations in the vicinity of the source. The second part of the project, related to the modeling of the BNCT infrastructure, consisted of developing a simulation program taking into account all the essential components of this system. Materials with neutron-moderating, absorbing, and backscattering properties were adopted in the design. Additionally, a gamma radiation filter was introduced into the beam output system. The analysis of the simulation results obtained using a logical detector located at the beam exit from the BNCT infrastructure included the neutron energies and their spatial distribution. Optimization of the system involved changing the size and materials of the system to obtain a suitably collimated beam of thermal neutrons.
Keywords: BNCT, Monte Carlo, neutrons, simulation, modeling
Procedia PDF Downloads 34
277 Estimation of the Mean of the Selected Population
Authors: Kalu Ram Meena, Aditi Kar Gangopadhyay, Satrajit Mandal
Abstract:
Two normal populations with different means and a common known variance are considered. The population with the smaller sample mean is selected. Various estimators are constructed for the mean of the selected normal population. Finally, they are compared with respect to bias and MSE risk by means of Monte Carlo simulation, and their performances are analysed with the help of graphs.
Keywords: estimation after selection, Brewster-Zidek technique, estimators, selected populations
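A minimal Python sketch of the Monte Carlo comparison described above: two normal populations with a known common variance, selection of the smaller sample mean, and bias/MSE of the naive selected-mean estimator versus a simple shrinkage alternative. Both estimators and all parameter values are illustrative choices, not the paper's constructions.

import numpy as np

rng = np.random.default_rng(7)

mu = np.array([0.0, 0.5])          # true population means (assumed)
sigma, n, reps = 1.0, 10, 200_000

x = rng.normal(mu, sigma / np.sqrt(n), size=(reps, 2))   # pairs of sample means
sel = x.argmin(axis=1)                                   # select the smaller sample mean
naive = x[np.arange(reps), sel]                          # naive estimator of the selected mean
shrunk = 0.5 * naive + 0.5 * x.mean(axis=1)              # ad-hoc shrinkage estimator

true_sel = mu[sel]
for name, est in [("naive", naive), ("shrunk", shrunk)]:
    bias = (est - true_sel).mean()
    mse = ((est - true_sel) ** 2).mean()
    print(f"{name:>6}: bias = {bias:+.4f}, MSE = {mse:.4f}")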
Procedia PDF Downloads 512
276 Molecular Simulation of NO, NH3 Adsorption in MFI and H-ZSM5
Authors: Z. Jamalzadeh, A. Niaei, H. Erfannia, S. G. Hosseini, A. S. Razmgir
Abstract:
With the development of industry, emissions of pollutants such as NOx, SOx, and CO2 have rapidly increased. Generally, NOx refers to the mono-nitrogen oxides NO and NO2, which are among the most important atmospheric contaminants. Hence, controlling the emission of nitrogen oxides is environmentally urgent. Selective Catalytic Reduction (SCR) of NOx is one of the most common techniques for NOx removal, in which zeolites are widely applied due to their high performance. In zeolitic processes, the catalytic reaction occurs mostly in the pores. Therefore, investigating the adsorption of the molecules in order to gain insight into the catalytic cycle is important. Hence, in the current study, molecular simulations are applied to study the adsorption phenomena in nanocatalysts used for the SCR of NOx process. The effect of cation addition to the support on the catalysts' behavior during the adsorption step was explored by Monte Carlo (MC) simulation. A simulation time of 1 ns with a 1 fs time step, the COMPASS27 force field and a cut-off radius of 12.5 Å were applied for the runs. It was observed that the adsorption capacity increases in the presence of cations. The sorption isotherms demonstrated type I isotherm behavior, and the sorption capacity diminished with increasing temperature whereas an increase was observed at high pressures. Besides, NO showed a higher sorption capacity than NH3 in H-ZSM5. In this respect, the energy distributions signified that the molecules could adsorb at just one sorption site of the catalyst, and the sorption energy of NO was stronger than that of NH3 in H-ZSM5. Furthermore, the isosteric heat of sorption data showed nearly the same values for the two molecules; however, it indicated stronger interactions of NO molecules with the H-ZSM5 zeolite compared to the isosteric heat of NH3, which was lower in value.
Keywords: Monte Carlo simulation, adsorption, NOx, ZSM5
Procedia PDF Downloads 378
275 The Bayesian Premium Under Entropy Loss
Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita
Abstract:
Credibility theory is an experience rating technique in actuarial science which can be seen as one of the quantitative tools that allow insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is usually used in automobile insurance, workers' compensation premiums, and IBNR (incurred but not reported claims to the insurer), where credibility theory can be used to estimate the claim size amount. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under the entropy loss, which is asymmetric, and the squared error loss, which is a symmetric loss function, with informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer’s prior belief about the insured’s risk level after collection of the insured’s data at the end of the period. However, the explicit form of the Bayesian premium in the case when the prior is not a member of the exponential family can be quite difficult to obtain, as it involves a number of integrations which are not analytically solvable. The paper finds a solution to this problem by deriving this estimator using a numerical approximation (the Lindley approximation), which is one of the suitable approximation methods for solving such problems; it approximates the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and the mean squared error criterion is used to compare the Bayesian premium estimator under the above loss functions.
Keywords: bayesian estimator, credibility theory, entropy loss, monte carlo simulation
Procedia PDF Downloads 335
274 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution, leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal, bathtub-shaped, or reversed bathtub-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other three-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3-parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competitive distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set are also carried out.
Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
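A hedged Python sketch of the Monte Carlo assessment of maximum likelihood estimators, shown for the ordinary log-logistic (Fisk) sub-model because the three-parameter generalized form is specific to the paper; the true parameter values, sample size, and replication count are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

true_shape, true_scale = 2.5, 1.8      # assumed "true" log-logistic parameters
n, reps = 200, 500
est = np.empty((reps, 2))
for r in range(reps):
    sample = stats.fisk.rvs(true_shape, scale=true_scale, size=n, random_state=rng)
    c, loc, s = stats.fisk.fit(sample, floc=0)   # ML fit with the location fixed at 0
    est[r] = (c, s)

bias = est.mean(axis=0) - (true_shape, true_scale)
rmse = np.sqrt(((est - (true_shape, true_scale)) ** 2).mean(axis=0))
print("bias (shape, scale):", np.round(bias, 3))
print("RMSE (shape, scale):", np.round(rmse, 3))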
Procedia PDF Downloads 202
273 Process Development of pVAX1/lacZ Plasmid DNA Purification Using Design of Experiment
Authors: Asavasereerat K., Teacharsripaitoon T., Tungyingyong P., Charupongrat S., Noppiboon S., Hochareon L., Kitsuban P.
Abstract:
The third generation of vaccines is based on gene therapy, where DNA is introduced into patients. The antigenic or therapeutic proteins encoded by the transgene DNA trigger an immune response to counteract various diseases. Moreover, DNA vaccines offer customizable protection and treatment with high stability. The production of DNA vaccines has therefore become of interest. According to USFDA guidance for industry, the recommended limit for host-cell impurities is lower than 1%, and the homogeneity of the active conformation, supercoiled DNA, should be more than 80%. Thus, a purification strategy using two-step chromatography has been established and verified for its robustness. Herein, pVAX1/lacZ, a pre-approved USFDA DNA vaccine backbone, was used and transformed into E. coli strain DH5α. Three purification process parameters, including the sample-loading flow rate and the salt concentrations in the washing and eluting buffers, were studied, and the experiment was designed using the response surface method with a central composite face-centered (CCF) model. The designed range of the selected parameters was a 10% variation from the optimized set point as a safety factor. The purity, expressed as the percentage of supercoiled conformation obtained from each chromatography step (AIEX and HIC), was analyzed by HPLC. The response data were used to establish a regression model and were statistically analyzed, followed by Monte Carlo simulation using SAS JMP. The purities of the product obtained from AIEX and HIC are between 89.4-92.5% and 88.3-100.0%, respectively. Monte Carlo simulation showed that the pVAX1/lacZ purification process is robust, with 0.90 confidence intervals in the ranges of 90.18-91.00% and 95.88-100.00% for AIEX and HIC, respectively.
Keywords: AIEX, DNA vaccine, HIC, purification, response surface method, robustness
Procedia PDF Downloads 209
272 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis
Authors: Kimberly Samaha
Abstract:
In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. The Bio-Hub model first targets a ‘whole-tree’ approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of the biomass power plant facilities. This study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital/operational expenditures and investment risk reductions using a proprietary techno-economic model that incorporates investment risk scenarios utilizing the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host on an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and is built up in stages to include a greenhouse and a land-based shrimp farm as co-hosts. Phase I incorporates CO2 and heat waste streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. The Phase II analysis incorporated a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location derisks investment in an integrated Bio-Hub versus individual investments in stand-alone projects of energy, agriculture or aquaculture. The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub to be built in West Enfield, Maine. In 2018, the site was designated as an economic opportunity zone as part of a Federal Program, which allows for capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is a critical tool for expediting new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current ‘take-make-dispose’ paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.
Keywords: bio-economy, investment risk, circular design, economic modelling
Procedia PDF Downloads 101
271 Assessment Using Copulas of Simultaneous Damage to Multiple Buildings Due to Tsunamis
Authors: Yo Fukutani, Shuji Moriguchi, Takuma Kotani, Terada Kenjiro
Abstract:
If risk management of company-owned assets, risk assessment of real estate portfolios, and risk identification for an entire region are to be implemented, it is necessary to consider simultaneous damage to multiple buildings. This research focuses on the Sagami Trough earthquake tsunami, which could have a significant effect on the Japanese capital region, and proposes a method for simultaneous damage assessment using copulas that can take into consideration the correlation of tsunami depths and building damage between two sites. First, the tsunami inundation depths at two sites were simulated by using a nonlinear long-wave equation. The tsunamis were simulated by varying the slip amount (five cases) and the depths (five cases) for each of the 10 sources of the Sagami Trough. For each source, the frequency distributions of the tsunami inundation depth were evaluated by using the response surface method. Then, Monte Carlo simulation was conducted, and frequency distributions of tsunami inundation depth were evaluated at the target sites for all sources of the Sagami Trough. These constitute the marginal distributions. Kendall’s tau for the tsunami inundation simulation at the two sites was 0.83. Based on this value, the Gaussian copula, t-copula, Clayton copula, and Gumbel copula (n = 10,000) were generated. Then, the joint distributions of the damage rate were evaluated using the marginal distributions and the copulas. When the correlation of the tsunami inundation depth between the two sites was taken into account, the expected value hardly changed compared with the case of no correlation, but the ninety-ninth percentile of the damage rate was approximately 2%, and the maximum value was approximately 6% when using the Gumbel copula.
Keywords: copulas, Monte-Carlo simulation, probabilistic risk assessment, tsunamis
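A minimal Python sketch of the copula step, shown for the Gaussian copula (one of the four copulas listed above): dependent uniforms calibrated to Kendall's tau = 0.83 are mapped through lognormal marginals for the inundation depth at the two sites. The marginal parameters and the 2 m threshold are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

tau = 0.83
rho = np.sin(np.pi * tau / 2)                    # Gaussian-copula correlation from Kendall's tau
n = 10_000

z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)                            # dependent U(0,1) pairs

depth1 = stats.lognorm.ppf(u[:, 0], s=0.5, scale=2.0)   # site 1 depth, m (assumed marginal)
depth2 = stats.lognorm.ppf(u[:, 1], s=0.6, scale=1.5)   # site 2 depth, m (assumed marginal)
print("sample Kendall tau:", round(stats.kendalltau(depth1, depth2)[0], 3))
print("P(both sites exceed 2 m):", round(np.mean((depth1 > 2) & (depth2 > 2)), 3))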
Procedia PDF Downloads 143
270 Probabilistic Analysis of Bearing Capacity of Isolated Footing using Monte Carlo Simulation
Authors: Sameer Jung Karki, Gokhan Saygili
Abstract:
The allowable bearing capacity of foundation systems is determined by applying a factor of safety to the ultimate bearing capacity. Conventional ultimate bearing capacity calculation routines are based on deterministic input parameters, where the nonuniformity and inhomogeneity of soil and site properties are not accounted for. Hence, the laws of mathematics like probability calculus and statistical analysis cannot be directly applied to foundation engineering. It is assumed that the factor of safety, typically as high as 3.0, incorporates the uncertainty of the input parameters. This factor of safety is estimated based on subjective judgement rather than objective facts; it is an ambiguous term. Hence, a probabilistic analysis of the bearing capacity of an isolated footing on a clayey soil is carried out using the Monte Carlo simulation method. The simulated model was compared with the traditional discrete model. It was found that the bearing capacity of the soil was higher for the simulated model than for the discrete model, which was verified by a sensitivity analysis. As the number of simulations was increased, there was a significant percentage increase in the bearing capacity compared with the discrete bearing capacity. The bearing capacity values obtained by simulation were found to follow a normal distribution. Using the traditional factor of safety of 3, the allowable bearing capacity had a lower probability (0.03717) of occurring in the field, compared to a higher probability (0.15866) when using the simulation-derived factor of safety of 1.5. This means the traditional factor of safety yields a bearing capacity that is less likely to occur or be available in the field. This shows the subjective nature of the factor of safety, and hence a probabilistic method is suggested to address the variability of the input parameters in bearing capacity equations.
Keywords: bearing capacity, factor of safety, isolated footing, Monte Carlo simulation
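A hedged Python sketch of the probabilistic bearing-capacity idea above for a square footing on clay (undrained, phi = 0, Terzaghi-type q_ult = 1.3*c*Nc + gamma*D); the statistics of the inputs and the footing geometry are illustrative assumptions, not the paper's data.

import numpy as np

rng = np.random.default_rng(10)

n = 200_000
c     = rng.normal(50.0, 10.0, n)       # undrained cohesion, kPa (assumed)
gamma = rng.normal(18.0, 1.0, n)        # unit weight, kN/m3 (assumed)
D     = 1.5                             # footing depth, m (assumed)
Nc    = 5.7                             # Terzaghi bearing capacity factor for phi = 0

q_ult = 1.3 * c * Nc + gamma * D        # ultimate bearing capacity, kPa
det_q = 1.3 * 50.0 * Nc + 18.0 * D      # deterministic value using mean inputs

print("deterministic q_ult:", round(det_q, 1), "kPa")
print("MC mean / 5th percentile:", round(q_ult.mean(), 1), "/", round(np.percentile(q_ult, 5), 1), "kPa")
print("P(q_ult < q_allow, FS = 3):  ", round(np.mean(q_ult < det_q / 3.0), 5))
print("P(q_ult < q_allow, FS = 1.5):", round(np.mean(q_ult < det_q / 1.5), 5))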
Procedia PDF Downloads 187
269 The Impact of Dispatching with Rolling Horizon Control in Sizing Thermal Storage for Solar Tower Plant Participating in Wholesale Spot Electricity Market
Authors: Navid Mohammadzadeh, Huy Truong-Ba, Michael Cholette
Abstract:
The solar tower (ST) plant is a promising technology for exploiting large-scale solar irradiation. With thermal energy storage, an ST plant has the potential to shift generation to high electricity price periods. However, the size of the storage limits the dispatchability of the plant, particularly when it must contend with uncertainty in forecasts of solar irradiation and electricity prices. The purpose of this study is to explore the size of storage when Rolling Horizon Control (RHC) is employed for dispatch scheduling. To this end, RHC is benchmarked against a perfect knowledge (PK) forecast and two day-ahead dispatching policies. By optimising the dispatch planning under the PK policy, the optimal achievable profit for a specific storage size is determined. A sensitivity analysis using Monte Carlo simulation is conducted, and the storage size for the RHC and day-ahead policies is determined with the objective of reaching the profit obtained from the PK policy. A case study is conducted for a hypothetical ST plant with thermal storage located in South Australia that intends to dispatch under two market scenarios: 1) fixed price and 2) wholesale spot price. The impact of each individual source of uncertainty on storage size is examined for January and August. The exploration of the results shows that dispatching with the RH controller reaches the optimal achievable profit with ~15% smaller storage compared to the day-ahead policies. The results of this study may be applied to the CSP plant design procedure.
Keywords: solar tower plant, spot market, thermal storage system, optimized dispatch planning, sensitivity analysis, Monte Carlo simulation
Procedia PDF Downloads 125
268 Optimal Design of Tuned Inerter Damper-Based System for the Control of Wind-Induced Vibration in Tall Buildings through Cultural Algorithm
Authors: Luis Lara-Valencia, Mateo Ramirez-Acevedo, Daniel Caicedo, Jose Brito, Yosef Farbiarz
Abstract:
Controlling wind-induced vibrations as well as aerodynamic forces is an essential part of the structural design of tall buildings in order to guarantee the serviceability limit state of the structure. This paper presents a numerical investigation of the optimal design parameters of a Tuned Inerter Damper (TID) based system for the control of wind-induced vibration in tall buildings. The control system is based on the conventional TID, with the main difference that its location is changed from the ground level to the last two story levels of the structural system. The TID tuning procedure is based on an evolutionary cultural algorithm in which the optimum design variables, defined as the frequency and damping ratios, were sought according to the optimization criterion of minimizing the root mean square (RMS) displacement response at the nth story of the structure. A Monte Carlo simulation was used to represent the dynamic action of the wind in the time domain, in which a time series derived from the Davenport spectrum using eleven harmonic functions with randomly chosen phase angles was reproduced. The above-mentioned methodology was applied to a case study derived from a 37-story prestressed concrete building 144 m in height, in which the wind action governs over the seismic action. The results showed that the optimally tuned TID is effective in reducing the RMS displacement response by up to 25%, which demonstrates the feasibility of the system for the control of wind-induced vibrations in tall buildings.
Keywords: evolutionary cultural algorithm, Monte Carlo simulation, tuned inerter damper, wind-induced vibrations
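A hedged Python sketch of the wind-load generation step described above: a fluctuating wind speed record built by superposing eleven harmonics whose amplitudes follow the Davenport spectrum and whose phase angles are random. The mean wind speed, drag coefficient, frequency band, and record length are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(11)

U10, kappa, L = 30.0, 0.03, 1200.0          # mean wind speed (m/s), surface drag coeff., length scale (m) - assumed

def davenport_S(f):
    """Davenport spectrum of the along-wind velocity fluctuations."""
    x = L * f / U10
    return 4 * kappa * U10**2 * x**2 / (f * (1 + x**2) ** (4 / 3))

f = np.linspace(0.01, 1.0, 11)              # eleven harmonic frequencies, Hz
df = f[1] - f[0]
amps = np.sqrt(2 * davenport_S(f) * df)     # harmonic amplitudes from the spectrum
phases = rng.uniform(0, 2 * np.pi, f.size)  # randomly chosen phase angles

t = np.arange(0, 600, 0.1)                  # 10-minute record at 10 Hz
u = U10 + (amps[:, None] * np.cos(2 * np.pi * f[:, None] * t + phases[:, None])).sum(axis=0)
print("mean wind speed:", round(u.mean(), 2), "m/s | fluctuation SD:", round(u.std(), 2), "m/s")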
Procedia PDF Downloads 135
267 Passive Vibration Isolation Analysis and Optimization for Mechanical Systems
Authors: Ozan Yavuz Baytemir, Ender Cigeroglu, Gokhan Osman Ozgen
Abstract:
Vibration is an important issue in the design of various components of aerospace, marine and vehicular applications. In order not to compromise the components' function and operational performance, vibration isolation design, involving the selection of optimum isolator properties and isolator positioning, is a critical task. Given the growing need for vibration isolation system design, this paper presents two types of software capable of implementing modal analysis, response analysis for both random and harmonic excitations, static deflection analysis, and Monte Carlo simulations, in addition to parameter and location optimization studies for different types of isolation problem scenarios. A review of the literature shows no study that develops a software-based tool capable of implementing all of these analysis, simulation and optimization studies in one platform simultaneously. In this paper, the theoretical system model is generated for a 6-DOF rigid body. The vibration isolation system of any mechanical structure can be optimized using a hybrid method involving both global search and gradient-based methods. After defining the optimization design variables, different types of optimization scenarios are listed in detail. Being aware of the need for a user-friendly vibration isolation problem solver, two types of graphical user interfaces (GUIs) are prepared and verified using a commercial finite element analysis program, Ansys Workbench 14.0. Using the analysis and optimization capabilities of those GUIs, a real application used in an air platform is also presented as a case study at the end of the paper.
Keywords: hybrid optimization, Monte Carlo simulation, multi-degree-of-freedom system, parameter optimization, location optimization, passive vibration isolation analysis
Procedia PDF Downloads 565
266 Numerical Response of Coaxial HPGe Detector for Skull and Knee Measurement
Authors: Pabitra Sahu, M. Manohari, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman
Abstract:
Radiation workers in reprocessing plants have a potential for internal exposure due to actinides and fission products. Radionuclides like americium, lead, polonium and europium are bone seekers and accumulate in the skeleton. As the major skeletal content is in the skull (13%) and knee (22%), measurements of old intakes have to be carried out in the skull and knee. At the Indira Gandhi Centre for Atomic Research, a twin HPGe-based actinide monitor is used for the measurement of actinides present in bone. Efficiency estimation, which is one of the prerequisites for the quantification of radionuclides, requires anthropomorphic phantoms. Such phantoms are very limited. Hence, in this study, efficiency curves for the twin HPGe-based actinide monitoring system are established theoretically using the FLUKA Monte Carlo method and the ICRP adult male voxel phantom. In the case of skull measurement, the detector is placed over the forehead, and for knee measurement, one detector is placed over each knee. The efficiency values of radionuclides present in the knee and skull vary from 3.72E-04 to 4.19E-04 CPS/photon and from 5.22E-04 to 7.07E-04 CPS/photon, respectively, for the energy range 17 to 3000 keV. The efficiency curves for the measurement are established, and it is found that the efficiency value initially increases up to 100 keV and then starts decreasing. It is found that the skull efficiency values are 4% to 63% higher than those of the knee, depending on the energy, for all energies except 17.74 keV. The reason is the closeness of the detector to the skull compared to the knee. However, at 17.74 keV, the efficiency for the knee is higher than for the skull due to the greater attenuation in the skull bones because of their greater thickness. The Minimum Detectable Activity (MDA) for 241Am present in the skull and knee is 9 Bq. 239Pu has an MDA of 950 Bq and 1270 Bq for the knee and skull, respectively, for a counting time of 1800 s. This paper discusses the simulation method and the results obtained in the study.
Keywords: FLUKA Monte Carlo method, ICRP adult male voxel phantom, knee, skull
Procedia PDF Downloads 51
265 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integrodifferential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffuse surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation, and on the other hand, increases the computational cost. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than the uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computation costs of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles is recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was considered the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce the computation cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks
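A minimal Python sketch of the pseudo-random versus low-discrepancy comparison on a toy problem, not the radiative transfer solver itself: a simple exponential (attenuation-like) integrand is integrated over the unit cube with plain Monte Carlo and with scrambled Sobol points, and the spread of the estimates is compared.

import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(12)

def integrand(pts):
    return np.exp(-pts.sum(axis=1))          # toy attenuation-like kernel on [0,1]^3

exact = (1 - np.exp(-1)) ** 3
n = 2 ** 12                                  # power of two, as Sobol sampling prefers

mc_est  = [integrand(rng.random((n, 3))).mean() for _ in range(50)]
sob_est = [integrand(qmc.Sobol(d=3, scramble=True, seed=s).random(n)).mean() for s in range(50)]

print("exact value:", round(exact, 6))
print("pseudo-random MC: mean |error| = %.2e, spread = %.2e"
      % (np.mean(np.abs(np.array(mc_est) - exact)), np.std(mc_est)))
print("scrambled Sobol:  mean |error| = %.2e, spread = %.2e"
      % (np.mean(np.abs(np.array(sob_est) - exact)), np.std(sob_est)))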
Procedia PDF Downloads 225
264 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion
Authors: Omran M. Kenshel, Alan J. O'Connor
Abstract:
Estimating the service life of Reinforced Concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers relied more on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions of the service life of RC structures. This is in order to direct funds to bridges found to be the most critical. Criticality of the structure can be considered either from the structural capacity (i.e. ultimate limit state) or from the serviceability viewpoint, whichever is adopted. This paper considers the service life of the structure only from the structural capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, the probabilistic approach is most suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the reliability (i.e. probability of failure) of the structure under consideration. In this paper, the authors used their own experimental data for the Correlation Length (CL) of the most important deterioration parameters. The CL is a parameter of the Correlation Function (CF) by which the spatial fluctuation of a certain deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load-carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by flexure failure rather than by shear failure.
Keywords: chloride-induced corrosion, Monte Carlo simulation, reinforced concrete, spatial variability
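A hedged Python sketch of the spatial-variability ingredient described above: a 1-D Gaussian random field of surface chloride content along a girder, generated from an exponential correlation function with a given correlation length. The mean, coefficient of variation, and correlation length are illustrative assumptions, not the measured values.

import numpy as np

rng = np.random.default_rng(13)

x = np.linspace(0.0, 10.0, 101)            # positions along the girder, m
corr_len = 2.0                             # correlation length, m (assumed)
mean_cs, cov_cs = 3.5, 0.4                 # mean surface chloride, kg/m3, and its CoV (assumed)

dist = np.abs(x[:, None] - x[None, :])
C = (mean_cs * cov_cs) ** 2 * np.exp(-dist / corr_len)    # exponential correlation function
L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))        # jitter for numerical stability

fields = mean_cs + rng.standard_normal((1000, x.size)) @ L.T   # 1000 correlated realizations
print("point-wise SD:", round(fields.std(), 2),
      "| girder-average SD:", round(fields.mean(axis=1).std(), 2))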
Procedia PDF Downloads 473
263 Reliability Analysis of Variable Stiffness Composite Laminate Structures
Authors: A. Sohouli, A. Suleman
Abstract:
This study focuses on the reliability analysis of variable stiffness composite laminate structures to investigate the potential structural improvement compared to conventional (straight-fiber) composite laminate structures. A computational framework was developed which consists of a deterministic design step and a reliability analysis. The optimization part is Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after using the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization method observes certain manufacturing constraints to attain industrial relevance. These manufacturing constraints are that the change of orientation between adjacent patches cannot be too large and that the maximum number of successive plies of a particular fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality with good production rates. However, laps and gaps are the most important challenges in steering fibers and affect the performance of the structures. In this study, the optimal curved fiber paths at each layer of the composites are designed in the first step by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure for different standard deviations compared to the straight fiber angle composites. The random variables are the material properties and the loads on the structures. The results show that the variable stiffness composite laminate structures are much more reliable, even for high standard deviations of the material properties, than the conventional composite laminate structures. The reason is that the variable stiffness composite laminates allow tailoring the stiffness and provide the possibility of adjusting the stress and strain distributions favorably in the structures.
Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures
Procedia PDF Downloads 520
262 Development of a Robust Protein Classifier to Predict EMT Status of Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC) Tumors
Authors: Zhenlin Ju, Christopher P. Vellano, Rehan Akbani, Yiling Lu, Gordon B. Mills
Abstract:
The epithelial–mesenchymal transition (EMT) is a process by which epithelial cells acquire mesenchymal characteristics, such as profound disruption of cell-cell junctions, loss of apical-basolateral polarity, and extensive reorganization of the actin cytoskeleton to induce cell motility and invasion. A hallmark of EMT is its capacity to promote metastasis, which is due in part to activation of several transcription factors and subsequent downregulation of E-cadherin. Unfortunately, current approaches have yet to uncover robust protein marker sets that can classify tumors as possessing strong EMT signatures. In this study, we utilize reverse phase protein array (RPPA) data and consensus clustering methods to successfully classify a subset of cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC) tumors into an EMT protein signaling group (EMT group). The overall survival (OS) of patients in the EMT group is significantly worse than those in the other Hormone and PI3K/AKT signaling groups. In addition to a shrinkage and selection method for linear regression (LASSO), we applied training/test set and Monte Carlo resampling approaches to identify a set of protein markers that predicts the EMT status of CESC tumors. We fit a logistic model to these protein markers and developed a classifier, which was fixed in the training set and validated in the testing set. The classifier robustly predicted the EMT status of the testing set with an area under the curve (AUC) of 0.975 by Receiver Operating Characteristic (ROC) analysis. This method not only identifies a core set of proteins underlying an EMT signature in cervical cancer patients, but also provides a tool to examine protein predictors that drive molecular subtypes in other diseases.
Keywords: consensus clustering, TCGA CESC, Silhouette, Monte Carlo LASSO
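A hedged Python sketch of the marker-selection and validation idea on synthetic data; the simulated protein values, labels, number of markers, and regularization strength are illustrative assumptions, not the TCGA CESC data or the paper's classifier.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(14)

# Simulated RPPA-like matrix: 180 tumors x 200 proteins, 8 truly informative markers
n, p, informative = 180, 200, 8
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:informative] = rng.uniform(1.0, 2.0, informative)
y = (X @ beta + rng.normal(0, 1, n) > 0).astype(int)      # surrogate "EMT group" labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)  # LASSO-type fit

selected = np.flatnonzero(clf.coef_[0])
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"{selected.size} proteins retained by the L1 penalty; test AUC = {auc:.3f}")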
Procedia PDF Downloads 470
261 Consistent Testing for an Implication of Supermodular Dominance with an Application to Verifying the Effect of Geographic Knowledge Spillover
Authors: Chung Danbi, Linton Oliver, Whang Yoon-Jae
Abstract:
Supermodularity, or complementarity, is a popular concept in economics which can characterize many objective functions such as utility, social welfare, and production functions. Further, supermodular dominance captures a preference for greater interdependence among the inputs of those functions, and it can be applied to examine which input set would produce higher expected utility, social welfare, or production. Therefore, we propose and justify a consistent test for a useful implication of supermodular dominance. We also conduct Monte Carlo simulations to explore the finite-sample performance of our test, with critical values obtained from the recentered bootstrap method, with and without selective recentering, and from the subsampling method. Under various parameter settings, we confirmed that our test has reasonably good size and power performance. Finally, we apply our test to compare geographic and distant knowledge spillovers in terms of their effects on social welfare using the National Bureau of Economic Research (NBER) patent data. We expect localized citing to supermodularly dominate distant citing if the geographic knowledge spillover engenders greater social welfare than the distant knowledge spillover. Taking subgroups based on firm and patent characteristics, we found that there are industry-wise and patent subclass-wise differences in the pattern of supermodular dominance between localized and distant citing. We also compare the results from analyzing different time periods to see if the development of Internet and communication technology has changed the pattern of dominance. In addition, to appropriately deal with the sparse nature of the data, we apply high-dimensional methods to efficiently select relevant data.
Keywords: supermodularity, supermodular dominance, stochastic dominance, Monte Carlo simulation, bootstrap, subsampling
Procedia PDF Downloads 130
260 Environmental Radioactivity Analysis by a Sequential Approach
Authors: G. Medkour Ishak-Boushaki, A. Taibi, M. Allab
Abstract:
Quantitative environmental radioactivity measurements are needed to determine the level of exposure of a population to ionizing radiation and for the assessment of the associated risks. Gamma spectrometry remains a very powerful tool for the analysis of radionuclides present in an environmental sample, but the basic problem in such measurements is the low rate of detected events. Using large environmental samples could help to get around this difficulty but, unfortunately, new issues are raised by gamma-ray attenuation and self-absorption. Recently, a new method has been suggested to detect and identify, without quantification and in a short time, a gamma ray from a low-count source. This method does not require, as usually adopted in gamma spectrometry measurements, a pulse height spectrum acquisition. It is based on a chronological record of each detected photon by simultaneous measurements of its energy ε and its arrival time τ at the detector, the parameter pair [ε,τ] defining an event mode sequence (EMS). The EMS series are analyzed sequentially by a Bayesian approach to detect the presence of a given radioactive source. The main objective of the present work is to test the applicability of this sequential approach to the detection of radioactive environmental materials. Moreover, for appropriate health oversight of the public and of the concerned workers, the analysis has been extended to obtain a reliable quantification of the radionuclides present in environmental samples. For illustration, we consider the problem of the detection and quantification of 238U. A Monte Carlo simulated experiment is carried out, consisting of the detection, by an HPGe semiconductor junction, of the 63 keV gamma rays emitted by 234Th (a progeny of 238U). The generated EMS series are analyzed by Bayesian inference. The application of the sequential Bayesian approach in environmental radioactivity analysis offers the possibility of reducing the measurement time without requiring large environmental samples and consequently avoids the associated inconveniences. The work is still in progress.
Keywords: Bayesian approach, event mode sequence, gamma spectrometry, Monte Carlo method
Procedia PDF Downloads 497
259 Self-Image of Police Officers
Authors: Leo Carlo B. Rondina
Abstract:
Self-image is an important factor in improving the self-esteem of personnel. The purpose of the study is to determine the self-image of the police. The respondents were 503 policemen assigned to different police stations in Davao City, chosen using random sampling. With the use of Exploratory Factor Analysis (EFA), the latent construct variables of police image were identified as follows: professionalism, obedience, morality, and justice and fairness. Further, ordinal regression indicates statistical significance for ages 21-40, which means that the age of the respondent statistically improves self-image.
Keywords: police image, exploratory factor analysis, ordinal regression, Galatea effect
Procedia PDF Downloads 289