Search results for: Probability distribution
Paper Count: 2310

1770 Computer Aided X-Ray Diffraction Intensity Analysis for Spinels: Hands-On Computing Experience

Authors: Ashish R. Tanna, Hiren H. Joshi

Abstract:

The mineral with the chemical composition MgAl2O4 is called "spinel". Ferrites that crystallize in the spinel structure are known as spinel ferrites or ferro-spinels. The spinel structure has an fcc cage of oxygen ions, and the metallic cations are distributed among tetrahedral (A) and octahedral (B) interstitial voids (sites). The X-ray diffraction (XRD) intensity of each Bragg plane is sensitive to the distribution of cations in the interstitial voids of the spinel lattice. This leads to a method for determining the distribution of cations in spinel oxides through XRD intensity analysis. A computer program for XRD intensity analysis was developed in C and tested against a real experimental situation by synthesizing the spinel ferrite materials Mg0.6Zn0.4AlxFe2-xO4 and characterizing them by X-ray diffractometry. The compositions Mg0.6Zn0.4AlxFe2-xO4 (x = 0.0 to 0.6) were prepared by the ceramic method, and powder X-ray diffraction patterns were recorded. The authenticity of the program was thus checked by comparing the theoretically calculated data from the computer simulation with the experimental ones. Further, the deduced cation distributions were used to fit the magnetization data using a localized canting of spins approach to explain the "recovery" of the collinear spin structure upon Al3+ substitution in Mg-Zn ferrites, a case of A-site magnetic dilution and non-collinear spin structure. Since the distribution of cations in spinel ferrites plays a very important role in their electrical and magnetic properties, it is essential to determine the cation distribution in the spinel lattice.
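
As a rough illustration of the kind of computation such a program performs, the sketch below evaluates the relative intensity of one Bragg reflection from a trial cation distribution, I ∝ |F|² × multiplicity × Lorentz-polarization. It uses one representative atom per site with placeholder coordinates, scattering factors, occupancies, and Bragg angle, not the paper's data:

```python
import numpy as np

def structure_factor(hkl, sites):
    """F(hkl) = sum over atoms of occ * f * exp(2*pi*i*(h*x + k*y + l*z))."""
    h, k, l = hkl
    F = 0.0 + 0.0j
    for (x, y, z), f, occ in sites:
        F += occ * f * np.exp(2j * np.pi * (h * x + k * y + l * z))
    return F

def relative_intensity(hkl, sites, multiplicity, theta_deg):
    """I ~ |F|^2 * multiplicity * Lorentz-polarization factor."""
    th = np.radians(theta_deg)
    lp = (1 + np.cos(2 * th) ** 2) / (np.sin(th) ** 2 * np.cos(th))
    return abs(structure_factor(hkl, sites)) ** 2 * multiplicity * lp

# Toy inputs: one A-site cation, one B-site cation, one oxygen, with made-up
# scattering factors and a trial A-site occupancy (not the paper's data).
sites = [
    ((0.000, 0.000, 0.000), 20.0, 0.6),  # tetrahedral (A) site
    ((0.625, 0.625, 0.625), 24.0, 1.0),  # octahedral (B) site
    ((0.375, 0.375, 0.375), 8.0, 1.0),   # oxygen, u parameter ~ 0.375
]
# (220) plane, multiplicity 12, illustrative Bragg angle for a spinel cell
print(relative_intensity((2, 2, 0), sites, multiplicity=12, theta_deg=15.6))
```

Comparing such calculated intensities with measured ones over many (hkl) planes is what lets a trial cation distribution be accepted or rejected.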

Keywords: Spinel ferrites, Localized canting of spins, X-ray diffraction, Programming in Borland C.

1769 Approximations to the Distribution of the Sample Correlation Coefficient

Authors: John N. Haddad, Serge B. Provost

Abstract:

Given a bivariate normal sample of correlated variables, (Xi, Yi), i = 1, . . . , n, an alternative estimator of Pearson's correlation coefficient is obtained in terms of the ranges |Xi − Yi|. An approximate confidence interval for ρX,Y is then derived, and a simulation study reveals that the resulting coverage probabilities are in close agreement with the set confidence levels. In addition, a new approximant is provided for the density function of R, the sample correlation coefficient. A mixture involving the proposed approximate density of R, denoted by hR(r), and a density function determined from a known approximation due to R. A. Fisher is shown to accurately approximate the distribution of R. Finally, nearly exact density approximants are obtained by adjusting hR(r) with a seventh-degree polynomial.
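
The "known approximation due to R. A. Fisher" is presumably the classical result that z = atanh(R) is approximately normal with variance 1/(n − 3); a minimal sketch of the density of R this induces (the paper's own approximant hR(r) is not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def fisher_density_of_r(r, rho, n):
    """Approximate density of R via z = atanh(R) ~ N(atanh(rho), 1/(n-3))."""
    sigma = 1.0 / np.sqrt(n - 3)
    z = np.arctanh(r)
    # change of variables: dz/dr = 1 / (1 - r^2)
    return norm.pdf(z, loc=np.arctanh(rho), scale=sigma) / (1.0 - r ** 2)

r = np.linspace(-0.5, 0.9, 5)
print(fisher_density_of_r(r, rho=0.5, n=25))
```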

Keywords: Sample correlation coefficient, density approximation, confidence intervals.

1768 Study on Performance of Wigner Ville Distribution for Linear FM and Transient Signal Analysis

Authors: Azeemsha Thacham Poyil, Nasimudeen KM

Abstract:

This paper presents methods to assess the performance of the Wigner-Ville distribution (WVD) for the time-frequency representation of non-stationary signals, in comparison with other representations such as the STFT and the spectrogram. The simultaneous time-frequency resolution of the WVD is one of the important properties that makes it preferable for the analysis and detection of linear FM and transient signals. Two algorithms are proposed here to assess the resolution and to compare signal detection performance. The first method is based on measuring the area under the time-frequency plot, for the case of linear FM signal analysis. The second method is based on an instantaneous power calculation and is used for transient, non-stationary signals. The implementation of both methods is explained briefly with suitable diagrams. The accuracy of the measurements is validated to show the better performance of the WVD representation in comparison with the STFT and spectrograms.
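
For context, a minimal discrete Wigner-Ville distribution of the kind such measurements are computed on; the chirp parameters below are illustrative, and the paper's area- and power-based metrics would be evaluated on the resulting time-frequency matrix:

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """WVD via FFT of the instantaneous autocorrelation x[n+k]*conj(x[n-k])."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        kmax = min(n, N - 1 - n)
        k = np.arange(-kmax, kmax + 1)
        r = x[n + k] * np.conj(x[n - k])    # instantaneous autocorrelation
        row = np.zeros(N, dtype=complex)
        row[k % N] = r                      # wrap lags for the FFT
        W[:, n] = np.real(np.fft.fft(row))  # frequency along axis 0
    return W

fs, T = 1000.0, 1.0
t = np.arange(0, T, 1 / fs)
chirp = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))  # LFM: 50 -> 250 Hz
W = wigner_ville(hilbert(chirp))  # analytic signal reduces cross-terms
print(W.shape)
```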

Keywords: WVD: Wigner Ville Distribution, STFT: Short Time Fourier Transform, FT: Fourier Transform, TFR: Time-Frequency Representation, FM: Frequency Modulation, LFM Signal: Linear FM Signal, JTFA: Joint time frequency analysis.

1767 Spatial Distribution of Local Sheep Breeds in Antalya Province

Authors: Serife Gulden Yilmaz, Suleyman Karaman

Abstract:

Sheep breeding is important in Turkey for meeting the demand for red meat, supplying industrial raw materials, and providing employment in the rural sector. Ensuring proper selection and continuity of the breeds that are raised is also very important for increasing the quality and productivity of sheep products. The protection of local breeds and crossbreds likewise enables the development of the sector in the region and the reduction of imports. In this study, the data were obtained from the records of the Turkish Statistical Institute and the Antalya Sheep & Goat Breeders' Association. The spatial distribution of sheep breeds in Antalya is reviewed statistically in terms of concentration at the local level for the 2015 period. To this end, mapping, box plots, and linear regression are used. Concentration is presented by mapping studbook data on local breeds and total sheep farms. It is observed that the Pırlak breed (17.5%) and the Merinos crossbreed (16.3%) have the highest concentration in the region. These breeds are followed by the Akkaraman breed (11%), Pırlak crossbreed (8%), Merinos breed (7.9%), Akkaraman crossbreed (7.9%), and Ivesi breed (7.2%).

Keywords: Antalya, sheep breeds, spatial distribution, local.

1766 Comparison of Pore Space Features by Thin Sections and X-Ray Microtomography

Authors: H. Alves, J. T. Assis, M. Geraldes, I. Lima, R. T. Lopes

Abstract:

Microtomographic images and thin section (TS) images were analyzed and compared with respect to parameters of geological interest such as porosity and its distribution along the samples. The results show that microtomography (CT) analysis, although limited by its resolution, provides useful information about the distribution of porosity (homogeneous or not) and can also quantify both connected and non-connected pores, i.e., total porosity. TS images have no limitations concerning resolution, but are limited by the experimental data available, with only a few glass slides per sample for analysis, and can give information only about the connected pores, i.e., effective porosity. The two methods have their own virtues and flaws, but when paired together they complement one another, making for a more reliable and complete analysis.

Keywords: Microtomography, petrographical microscopy, sediments, thin sections.

1765 SVC and DSTATCOM Comparison for Voltage Improvement in RDS Using ANFIS

Authors: U. Ramesh Babu, V. Vijaya Kumar Reddy, S. Tara Kalyani

Abstract:

This paper investigates the performance of the SVC (Static VAR Compensator) and the DSTATCOM (Distribution Static Synchronous Compensator) for improving voltage stability in a Radial Distribution System (RDS). Both are efficient FACTS (Flexible AC Transmission System) devices capable of controlling the active and reactive power flows in a power system line by appropriately adjusting their parameters, here tuned using ANFIS. Simulations are carried out in the MATLAB/Simulink environment on the IEEE 4-bus system to test the ability to accommodate increasing load. It is found that these controllers significantly increase the load margin of the power system.

Keywords: SVC, DSTATCOM, voltage improvement, ANFIS.

1764 Plug and Play Interferometer Configuration using Single Modulator Technique

Authors: Norshamsuri Ali, Hafizulfika, Salim Ali Al-Kathiri, Abdulla Al-Attas, Suhairi Saharudin, Mohamed Ridza Wahiddin

Abstract:

We demonstrate single-photon interference over 10 km using a plug and play system for quantum key distribution. The quality of the interferometer is measured by its visibility. The signal is phase coded, and the visibility is derived from the interference effect, which results in a photon count. The setup gives full control of the polarization inside the interferometer. The quality measurement of the interferometer is based on the number of counts per second, and the system produces 94% visibility in one of the detectors.
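
For reference, the figure of merit quoted is presumably the standard fringe visibility computed from count rates:

```latex
V = \frac{C_{\max} - C_{\min}}{C_{\max} + C_{\min}}
```

with C_max and C_min the maximum and minimum single-photon count rates as the interferometer phase is scanned, so V = 0.94 corresponds to the 94% reported.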

Keywords: single photon, interferometer, quantum key distribution.

1763 Managing Iterations in Product Design and Development

Authors: K. Aravindhan, Trishit Bandyopadhyay, Mahesh Mehendale, Supriya Kumar De

Abstract:

The inherently iterative nature of product design and development (PD) poses a significant challenge to reducing PD time. In order to shorten the time to market, organizations have adopted concurrent development, where multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable time to completion and significant rework. Many products have missed the time-to-market window due to unanticipated, unplanned iteration and rework. The iterative and often overlapped processes introduce greater ambiguity into design and development, where the traditional methods and tools of project management provide less value. In this context, identifying critical metrics for understanding iteration probability is an open research area where significant contributions can be made, given that iteration has been the key driver of cost and schedule risk in PD projects. Two important questions the proposed study attempts to address are: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that can provide managerial insights into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and provide more clarity.

Keywords: Decision Points, Iteration, Product Design, Rework.

1762 Distributed Load Flow Analysis using Graph Theory

Authors: D. P. Sharma, A. Chaturvedi, G. Purohit, R. Shivarudraswamy

Abstract:

In today's scenario, meeting the enhanced demand imposed by domestic, commercial, and industrial consumers requires focused attention on the various operational and control activities of a Radial Distribution Network (RDN). Irrespective of the RDN research sub-domain, such as network reconfiguration, reactive power compensation, or economic load scheduling, network performance parameters are usually estimated by an iterative process commonly known as a load (power) flow algorithm. In this paper, a simple mechanism is presented to implement the load flow analysis (LFA) algorithm. The reported algorithm utilizes graph theory principles and is tested on a 69-bus RDN.
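
The abstract does not reproduce the algorithm; as a hedged sketch of the family it belongs to, the following backward/forward sweep works on a parent-array (tree) representation of a made-up 4-bus radial feeder, not the paper's 69-bus network:

```python
import numpy as np

parent = [-1, 0, 1, 1]  # radial tree as a parent array; bus 0 is the source
z = [0, 0.02 + 0.04j, 0.03 + 0.06j, 0.02 + 0.05j]  # branch impedance to parent (pu)
s_load = [0, 0.5 + 0.2j, 0.3 + 0.1j, 0.4 + 0.15j]  # complex bus loads (pu)

v = np.ones(4, dtype=complex)                  # flat start, slack held at 1.0 pu
for _ in range(20):
    i_bus = np.conj(np.array(s_load) / v)      # load current at each bus
    i_branch = i_bus.copy()
    for b in range(3, 1 - 1, -1):              # backward sweep: accumulate currents
        i_branch[parent[b]] += i_branch[b]
    v_new = v.copy()
    for b in range(1, 4):                      # forward sweep: update voltages
        v_new[b] = v_new[parent[b]] - z[b] * i_branch[b]
    if np.max(np.abs(v_new - v)) < 1e-8:
        v = v_new
        break
    v = v_new
print(np.abs(v))                               # converged voltage magnitudes
```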

Keywords: Radial Distribution network, Graph, Load-flow, Array.

1761 Numerical Study of Effects of Air Dam on the Flow Field and Pressure Distribution of a Passenger Car

Authors: Min Ye Koo, Ji Ho Ahn, Byung Il You, Gyo Woo Lee

Abstract:

Anything attached to the outside of a vehicle to improve driving performance by changing the flow characteristics of the surrounding air, or to give the vehicle a distinctive appearance, is called a tuning part. Typical tuning components include the front and rear air dams, also known as spoilers, the splitter, and the side air dams. In particular, the front air dam blocks the airflow into the lower portion of the vehicle and increases the airflow to the side and front of the vehicle body, thereby reducing the lift force acting on the body and improving the steering and driving performance of the vehicle. The purpose of this study was to investigate the role of the front air dam in the flow around a sedan passenger car using computational fluid dynamics. The effects of flow velocity and air dam size on the flow field, the trajectories of fluid particles, the static pressure distribution, and the pressure distribution on the body surface were investigated. The results confirm that the front air dam improves the flow characteristics and reduces the lift force on the vehicle, thereby aiding its steering and driving characteristics.

Keywords: Numerical study, computational fluid dynamics, air dam, tuning parts, drag, lift force.

1760 Secondary Ion Mass Spectrometry of Proteins

Authors: Santanu Ray, Alexander G. Shard

Abstract:

The adsorption of bovine serum albumin (BSA), immunoglobulin G (IgG), and fibrinogen (Fgn) on fluorinated self-assembled monolayers has been studied using time-of-flight secondary ion mass spectrometry (ToF-SIMS) and spectroscopic ellipsometry (SE). The objective of the work was to establish the utility of ToF-SIMS for determining the amount of protein adsorbed on the surface. Quantification of the surface-adsorbed proteins was carried out using SE, and a good correlation between the ToF-SIMS and SE results was achieved. The surface distribution of the proteins was also analysed using atomic force microscopy (AFM). We show that the surface distribution of proteins strongly affects the ToF-SIMS results.

Keywords: ToF-SIMS, Spectroscopic Ellipsometry, Protein, Atomic Force Microscopy.

1759 Statistical Modeling of Constituents in Ash Evolved From Pulverized Coal Combustion

Authors: Esam Jassim

Abstract:

Industries using conventional fossil fuels have an interest in better understanding the mechanism of particulate formation during combustion, since it is responsible for the emission of undesired inorganic elements that directly impact atmospheric pollution levels. Fine and ultrafine particulates tend to escape flue gas cleaning devices into the atmosphere. They also preferentially collect on surfaces in power systems, resulting in increased corrosion, reduced heat transfer in the thermal unit, and severe impacts on human health. These adverse effects are particularly pronounced in regions of the world where coal is the dominant source of energy. This study highlights the behavior of calcium transformation as mineral grains versus organically associated inorganic components during pulverized coal combustion. The influence of the type of calcium present on the coarse, fine, and ultrafine mode formation mechanisms is also presented, and the impact of two sub-bituminous coals on particle size and calcium composition evolution during combustion is assessed. Three blends, named Blends 1, 2, and 3, are selected according to the ratio of coal A to coal B by weight; the calcium percentage in the original coal increases from Blend 1 to Blend 3. A mathematical model and a new approach to describing constituent distribution are proposed, and the measured calcium distribution in ash is modeled using a Poisson distribution. A novel parameter, called the elemental index λ, is introduced as a measure of element distribution. Results show that calcium present in the coal as mineral grains has an index of 17, whereas organically associated calcium transformed to fly ash is best described by an elemental index λ of 7. As an alkaline-earth element, calcium is considered the element chiefly responsible for boiler deficiency, since it is the major player in the ash slagging process. The particle size distribution and mineral species of the ash particles are presented using CCSEM and size-segregated ash characteristics. Conclusions are drawn from the analysis of pulverized coal ash generated from a utility-scale boiler.
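
A minimal sketch of the Poisson description, with made-up per-particle counts standing in for measured calcium occurrences; the maximum-likelihood estimate of the Poisson parameter (the sample mean) plays the role of the elemental index λ:

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical per-particle calcium counts (illustrative, not the paper's data)
counts = np.array([5, 9, 7, 6, 8, 7, 10, 6, 7, 5])
lam = counts.mean()          # MLE of the Poisson parameter = elemental index
print(f"elemental index lambda = {lam:.1f}")

k = np.arange(0, 15)
print(poisson.pmf(k, lam))   # fitted distribution of counts across particles
```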

Keywords: Calcium transformation, Coal Combustion, Inorganic Element, Poisson distribution.

1758 Radiation Dose Distribution for Workers in South Korean Nuclear Power Plants

Authors: B. I. Lee, S. I. Kim, D. H. Suh, J. I. Kim, Y. K. Lim

Abstract:

A total of 33,680 nuclear power plant (NPP) workers were monitored and recorded from 1990 to 2007. According to the records, the average individual radiation dose decreased continually, from 3.20 mSv/man in 1990 to 1.12 mSv/man at the end of 2007. After the International Commission on Radiological Protection (ICRP) 60 recommendation was generalized in South Korea, no nuclear power plant worker received above 20 mSv, and the number of relatively highly exposed workers has decreased continuously. The age distribution of radiation workers in nuclear power plants was composed mainly of 20-30-year-olds (83%) for 1990-1994 and 30-40-year-olds (75%) for 2003-2007. The difference in individual average dose by age was not significant. Most (77%) of the NPP radiation exposures from 1990 to 2007 occurred during the refueling period. With regard to exposure type, the majority were external exposures, representing 95% of the total, while internal exposures represented only 5%. The external effective dose was driven mainly by gamma radiation, with an insignificant amount of neutron exposure. As for the internal effective dose, tritium (3H) in the pressurized heavy water reactor (PHWR) was the biggest cause of exposure.

Keywords: Dose distribution, External exposure, Nuclear power plant, Occupational radiation dose.

1757 Urban and Rural Population Pyramids in Georgia Since 1950s

Authors: Shorena Tsiklauri, Avtandil Sulaberidze, Nino Gomelauri

Abstract:

In the years following independence, an economic crisis and several conflicts led to the displacement of many people inside Georgia. Growing poverty, unemployment, low income and its unequal distribution, and limited access to basic social services have had a clear, direct impact on Georgian population dynamics and its age-sex structure. Factors influencing the changing population age structure and urbanization include mortality, fertility, migration, and the expansion of urban areas. This paper presents the main factors behind the changing distribution of population between urban and rural areas. How different are the urban and rural age and sex structures? Has Georgia had the same age-sex structure in its urban and rural populations since the 1950s?

Keywords: Age and sex structure of population, Georgia, migration, urban-rural population.

1756 Clustering Mixed Data Using Non-normal Regression Tree for Process Monitoring

Authors: Youngji Yoo, Cheong-Sool Park, Jun Seok Kim, Young-Hak Lee, Sung-Shick Kim, Jun-Geol Baek

Abstract:

In the semiconductor manufacturing process, large amounts of data are collected from various sensors in multiple facilities. The collected data have several different characteristics due to variables such as product type, preceding processes, and recipes. In general, Statistical Quality Control (SQC) methods assume normality of the data when detecting out-of-control states of processes. Since the collected data have different characteristics, using them directly as SQC inputs increases data variation, requires wide control limits, and decreases the ability to detect out-of-control states. It is therefore necessary to separate similar data groups from the mixed data for more accurate process control. In this paper, we propose a regression tree whose split algorithm is based on the Pearson distribution system, to handle non-normal distributions within a parametric method. The regression tree finds similar properties of the data across different variables. Experiments using real semiconductor manufacturing process data show improved fault detection performance.
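
The paper's split algorithm is not reproduced here; as a sketch of the Pearson-system side of the idea, the following classifies a sample into a main Pearson type from its sample moments via Pearson's κ criterion, using a synthetic lognormal sample:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def pearson_kappa(x):
    """Pearson's criterion from beta1 (squared skewness) and beta2 (kurtosis)."""
    b1 = skew(x) ** 2
    b2 = kurtosis(x, fisher=False)         # non-excess (Pearson) kurtosis
    return b1 * (b2 + 3) ** 2 / (4 * (4 * b2 - 3 * b1) * (2 * b2 - 3 * b1 - 6))

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=5000)  # a skewed, non-normal sample
print(pearson_kappa(x))  # kappa < 0 -> Type I, 0 < kappa < 1 -> Type IV,
                         # kappa > 1 -> Type VI
```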

Keywords: Semiconductor, non-normal mixed process data, clustering, Statistical Quality Control (SQC), regression tree, Pearson distribution system.

1755 Evaluating Probable Bending of Frames for Near-Field and Far-Field Records

Authors: Majid Saaly, Shahriar Tavousi Tafreshi, Mehdi Nazari Afshar

Abstract:

Most reinforced concrete structures designed only for heavy loads have large transverse reinforcement spacing values and therefore suffer severe failure after intense ground movements. The main goal of this paper is to compare the shear and axial failure of concrete bending frames in Tehran using Incremental Dynamic Analysis (IDA) under near- and far-field records. For this purpose, IDA of 5-, 10-, and 15-story concrete structures was performed under seven far-fault records and five near-fault records. The results show that in two-dimensional models of short-rise, mid-rise, and high-rise reinforced concrete frames located on Type-3 soil, increasing the transverse reinforcement spacing can increase the maximum inter-story drift ratio values by up to 37%. For the 5-, 10-, and 15-story reinforced concrete models on Type-3 soil, records with characteristics such as fling-step and directivity produce larger maximum inter-story drift values than far-fault earthquakes. The results also indicate that, for seismic excitation under earthquakes featuring directivity or fling-step, the probability of failure and the rate at which the failure probability increases are much smaller than the corresponding values for far-fault earthquakes; however, for near-fault records, the probability of exceedance occurs at lower seismic intensities than for far-fault records.

Keywords: Directivity, fling-step, fragility curve, IDA, inter-story drift ratio.

1754 Concept of Automation in Management of Electric Power Systems

Authors: Richard Joseph, Nerey Mvungi

Abstract:

An electric power system includes generation, transmission, distribution, and consumer subsystems. The electrical power network in Tanzania keeps growing larger and more complex by the day, so most utilities have long wished for real-time monitoring and remote control of electrical power system elements such as substations, intelligent devices, power lines, capacitor banks, feeder switches, fault analyzers, and other physical facilities. In this paper, the concept of automating the management of power systems from the generation level to the end-user level is examined using the Power System Simulator for Engineering (PSS/E) version 30.3.2.

Keywords: Automation, Distribution subsystem, Generating subsystem, PSS/E, TANESCO, Transmission subsystem.

1753 Medical Image Registration by Minimizing Divergence Measure Based on Tsallis Entropy

Authors: Shaoyan Sun, Liwei Zhang, Chonghui Guo

Abstract:

As the use of registration packages spreads, the number of aligned image pairs in image databases (aligned either by manual or automatic methods) increases dramatically. These image pairs can serve as a set of training data, and correspondingly, the images that are to be registered serve as testing data. In this paper, a novel medical image registration method is proposed based on a priori knowledge of the expected joint intensity distribution estimated from pre-aligned training images. The goal of the registration is to find the optimal transformation such that the distance between the observed joint intensity distribution of the testing image pair and the expected joint intensity distribution of the corresponding training image pair is minimized. The distance is measured using a divergence measure based on Tsallis entropy. Experimental results show that, compared with the widely used Shannon mutual information as well as Tsallis mutual information, the proposed method is computationally more efficient without sacrificing registration accuracy.
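
A minimal sketch of the distance computation, assuming the Tsallis relative entropy of order q between the observed and expected joint intensity histograms; the training stage and the search over transformations are simplified away, and all data below are synthetic:

```python
import numpy as np

def tsallis_divergence(p, q, alpha=1.5):
    """D_alpha(p || q) = (sum p^alpha * q^(1-alpha) - 1) / (alpha - 1)."""
    mask = (p > 0) & (q > 0)
    return (np.sum(p[mask] ** alpha * q[mask] ** (1 - alpha)) - 1) / (alpha - 1)

def joint_hist(a, b, bins=32):
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                             bins=bins, range=[[0, 256], [0, 256]])
    return h / h.sum()

rng = np.random.default_rng(1)
fixed = rng.integers(0, 256, (64, 64)).astype(float)
moving = fixed + rng.normal(0, 5, (64, 64))  # a roughly aligned pair
expected = joint_hist(fixed, fixed)          # stand-in for the trained prior
observed = joint_hist(fixed, moving)
print(tsallis_divergence(observed, expected))  # minimized over transformations
```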

Keywords: Multimodality images, image registration, Shannon entropy, Tsallis entropy, mutual information, Powell optimization.

1752 Springback Property and Texture Distribution of Grained Pure Copper

Authors: Takashi Sakai, Hitoshi Omata, Jun-Ichi Koyama

Abstract:

To improve the material characteristics of single crystals and polycrystals of pure copper, the relationships between crystallographic orientations and microstructures on the one hand and bending and mechanical properties on the other were examined, and the texture distribution was also analyzed. A grain refinement procedure was performed to obtain a grained structure. Furthermore, analytical results for crystal direction maps, inverse pole figures, and textures were obtained from SEM-EBSD analyses. The results show that these grained metallic materials have peculiar springback characteristics at various bending angles.

Keywords: Pure Copper, Grain Refinement, Environmental Materials, SEM-EBSD Analysis, Texture, Microstructure.

1751 High Impedance Fault Detection using LVQ Neural Networks

Authors: Abhishek Bansal, G. N. Pillai

Abstract:

This paper presents a new method to detect high impedance faults in radial distribution systems. Magnitudes of the third and fifth harmonic components of voltages and currents are used as a feature vector for fault discrimination. The proposed methodology uses a learning vector quantization (LVQ) neural network as a classifier for identifying high impedance arc-type faults. The network learns from data obtained from the simulation of a simple radial system under different fault and system conditions. Compared to a feed-forward neural network, a properly tuned LVQ network gives a quicker response.
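
A minimal LVQ1 training sketch under synthetic data; the four-dimensional feature vectors below stand in for the third- and fifth-harmonic magnitudes of voltages and currents named in the abstract:

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    """LVQ1: nudge the nearest prototype toward same-class samples, away otherwise."""
    P = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(np.linalg.norm(P - xi, axis=1))  # nearest prototype
            step = lr * (xi - P[j])
            P[j] += step if proto_labels[j] == yi else -step
    return P

rng = np.random.default_rng(0)
# two synthetic clusters: "normal" vs "high-impedance fault" feature vectors
X = np.vstack([rng.normal(0.2, 0.05, (50, 4)), rng.normal(0.6, 0.05, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
P = lvq1_train(X, y, prototypes=rng.uniform(0, 1, (2, 4)),
               proto_labels=np.array([0, 1]))
print(P)  # trained prototypes approach the class centroids
```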

Keywords: Fault identification, distribution networks, high impedance arc-faults, feature vector, LVQ networks.

1750 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation, and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s, until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis packages offer statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation, and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model that can then be analysed. Most factors are estimated 'approximately' or with allowances for some variability rather than 'exactly'. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods, and how those methods can yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
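
As a sketch of the Monte-Carlo route to PF mentioned above, the following draws strength parameters from assumed normal distributions and counts the fraction of factor-of-safety realizations below one; the infinite-slope FS expression and all statistics are illustrative, not the case study's model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
c = rng.normal(25.0, 5.0, n)                   # cohesion (kPa), assumed stats
phi = np.radians(rng.normal(35.0, 3.0, n))     # friction angle, assumed stats
gamma, H, beta = 20.0, 10.0, np.radians(35.0)  # unit weight, depth, slope angle

# FS for a dry infinite slope: shear strength / driving shear stress
fs = (c + gamma * H * np.cos(beta) ** 2 * np.tan(phi)) / (
    gamma * H * np.sin(beta) * np.cos(beta))
pf = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.3%}")
```

Swapping the sampler for point estimates or Latin-Hypercube draws, or treating some inputs as deterministic, is exactly the kind of choice that makes different PF methods disagree.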

Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.

1749 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and more recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect changes in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they decay over time. As a result, the collected time-series data will contain uncertainties and are sometimes conflicting. In this study, we present a framework that exploits the capabilities of Evidence Theory (a.k.a. the Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, to achieve fast change detection and deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors needed for combination, improving computational efficiency. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
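
A minimal sketch of the mass-combination step, assuming Dempster's rule over the two-hypothesis frame {no-change, change} plus an ignorance set; the KL-divergence-based construction of the masses and the CUSUM stage are simplified away, and the sensor masses below are made up:

```python
NO, CH = frozenset(["no-change"]), frozenset(["change"])
ALL = NO | CH  # the ignorance set

def dempster_combine(m1, m2):
    combined = {NO: 0.0, CH: 0.0, ALL: 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] += pa * pb
            else:
                conflict += pa * pb           # mass falling on the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def pignistic(m):
    """Spread the mass on the ignorance set equally over the singletons."""
    return {"no-change": m[NO] + m[ALL] / 2, "change": m[CH] + m[ALL] / 2}

s1 = {NO: 0.2, CH: 0.6, ALL: 0.2}   # a sensor fairly sure a change occurred
s2 = {NO: 0.4, CH: 0.3, ALL: 0.3}   # a noisier, partly conflicting sensor
print(pignistic(dempster_combine(s1, s2)))  # this ratio would feed the CUSUM
```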

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.

1748 Explicit Solution of an Investment Plan for a DC Pension Scheme with Voluntary Contributions and Return Clause under Logarithm Utility

Authors: Promise A. Azor, Avievie Igodo, Esabai M. Ase

Abstract:

The paper merges the return of premium clause and voluntary contributions to investigate retirees' investment plans in a defined contribution (DC) pension scheme with a portfolio comprising a risk-free asset and a risky asset whose price process is described by geometric Brownian motion (GBM). The paper considers additional voluntary contributions paid by members, the charge on balance levied by pension fund administrators, and the mortality risk of scheme members during the accumulation period, introduced through the return of premium clause. To achieve this, the Weibull mortality force function is used to establish the mortality rate of members during the accumulation phase. Furthermore, an optimization problem in the form of the Hamilton-Jacobi-Bellman (HJB) equation is obtained using a dynamic programming approach. The Legendre transformation method is then used to transform the HJB equation, a nonlinear partial differential equation, into a linear partial differential equation, and the resultant equation is solved for the value function and the optimal distribution plan under a logarithm utility function. Finally, numerical simulations of the impact of some important parameters on the optimal distribution plan were obtained, and it was observed that the optimal distribution plan is inversely proportional to the initial fund size, the predetermined interest rate, additional voluntary contributions, the charge on balance, and the instantaneous volatility.

Keywords: Legendre transform, logarithm utility, optimal distribution plan, return clause of premium, charge on balance, Weibull mortality function.

1747 Overview of Operational Risk Management Methods

Authors: Milan Rippel, Petr Teplý

Abstract:

Operational risk has become one of the most discussed topics in the financial industry in recent years. The reasons for this attention can be attributed to higher investments in information systems and technology, the increasing wave of mergers and acquisitions, and the emergence of new financial instruments. In addition, the New Basel Capital Accord (known as Basel II) demands a capital requirement for operational risk, further motivating financial institutions to measure and manage this type of risk more precisely. The aim of this paper is to shed light on the main characteristics of operational risk management and commonly applied methods: scenario analysis, key risk indicators, risk control self-assessment, and the loss distribution approach.
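
As a sketch of the last of these, the loss distribution approach is commonly implemented by Monte-Carlo compounding of a frequency and a severity distribution, with capital read off a high quantile of the aggregate loss; the Poisson/lognormal parameters below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(7)
years = 50_000
freq = rng.poisson(lam=12.0, size=years)               # loss events per year
annual = np.array([rng.lognormal(mean=10.0, sigma=1.2, size=n).sum()
                   for n in freq])                     # aggregate annual loss
var_999 = np.quantile(annual, 0.999)                   # Basel II-style quantile
print(f"mean annual loss = {annual.mean():,.0f}")
print(f"99.9% quantile (capital proxy) = {var_999:,.0f}")
```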

Keywords: Operational risk, economic capital, key risk indicators, loss distribution approach.

1746 Selection the Optimum Cooling Scheme for Generators based on the Electro-Thermal Analysis

Authors: Diako Azizi, Ahmad Gholami, Vahid Abbasi

Abstract:

Optimal selection of electrical insulation in electrical machinery ensures reliability during operation. From the insulation point of view, the stator is the most important part of an electrical machine. This fact reveals the need to inspect the electrical machine insulation together with the electro-thermal stresses. In the first step of the study, a part of the whole machine structure that covers the general characteristics of the machine is chosen; then, based on electromagnetic analysis (finite element method), the machine's operation is simulated. In the simulation results, the temperature distribution of the total structure is presented simultaneously by using electro-thermal analysis. The results of the electro-thermal analysis can be used to design an optimal cooling system. In order to design, review, and compare cooling systems, four winding structures in the stator slots are presented. The structures are compared to each other in terms of electrical behavior, thermal distribution, and remaining insulation life using finite element analysis. Following these steps, an optimization algorithm is presented for selecting the appropriate structure.

Keywords: Electrical field, field distribution, insulation, winding, finite element method, electro-thermal.

1745 Current Distribution and Cathode Flooding Prediction in a PEM Fuel Cell

Authors: A. Jamekhorshid, G. Karimi, I. Noshadi, A. Jahangiri

Abstract:

Non-uniform current distribution in polymer electrolyte membrane (PEM) fuel cells results in local over-heating, accelerated ageing, and lower power output than expected. This issue is critical when the fuel cell experiences water flooding. In this work, the performance of a PEM fuel cell is investigated under cathode flooding conditions. Two-dimensional partially flooded GDL models based on the conservation laws and electrochemical relations are proposed to study local current density distributions along the flow fields over a wide range of cell operating conditions. The model results show that increasing the cathode inlet humidity raises the average current density but makes the system more sensitive to flooding; the anode inlet relative humidity shows a similar effect. Operating the cell at higher temperatures leads to higher average current densities and a reduced chance of flooding. In addition, higher cathode stoichiometries prevent flooding while the average current density remains almost constant, whereas a higher anode stoichiometry leads to a higher average current density but a higher sensitivity to cathode flooding.

Keywords: Current distribution, Flooding, Hydrogen energy system, PEM fuel cell.

1744 Long Term Examination of the Profitability Estimation Focused on Benefits

Authors: Stephan Printz, Kristina Lahl, René Vossen, Sabina Jeschke

Abstract:

Strategic investment decisions are characterized by high innovation potential and long-term effects on the competitiveness of enterprises. Due to the uncertainty and risks involved in this complex decision-making process, the need arises for well-structured support activities. A method that considers costs and the long-term added value is the cost-benefit effectiveness estimation. One such method is the "profitability estimation focused on benefits" (PEFB) method developed at the Institute of Management Cybernetics at RWTH Aachen University. The method copes with the challenges associated with strategic investment decisions by integrating long-term non-monetary aspects whilst also mapping the chronological sequence of an investment within the organization's target system. Thus, this method is characterized as a holistic approach to evaluating the costs and benefits of an investment. This participation-oriented method was applied to business environments in many workshops. The results of the workshops are a library of more than 96 cost aspects, as well as 122 benefit aspects. These aspects are preprocessed and comparatively analyzed with regard to their alignment to a series of risk levels. For the first time, an accumulation and a distribution of cost and benefit aspects regarding their impact and probability of occurrence are given. The results give evidence that the PEFB method combines precise measures of financial accounting with the incorporation of benefits. Finally, the results constitute the basis for using information technology and data science for decision support when applying the PEFB method.

Keywords: Cost-benefit analysis, multi-criteria decision, profitability estimation focused on benefits, risk and uncertainty analysis.

1743 A Note on Penalized Power-Divergence Test Statistics

Authors: Aylin Alin

Abstract:

In this paper, penalized power-divergence test statistics are defined, and their exact size properties for testing a nested sequence of log-linear models are compared with those of the ordinary power-divergence test statistics for various penalization, λ, and main effect values. Since the ordinary and penalized power-divergence test statistics have the same asymptotic distribution, comparisons are made only for small and moderate samples. Three-way contingency tables distributed according to a multinomial distribution are considered. Simulation results reveal that the penalized power-divergence test statistics perform much better than their ordinary counterparts.
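
For reference, a sketch of the ordinary (unpenalized) Cressie-Read power-divergence statistic that the penalized variants build on; the penalization itself is specific to the paper and is not reproduced:

```python
import numpy as np

def power_divergence(obs, exp, lam):
    """Cressie-Read: (2 / (lam*(lam+1))) * sum O * ((O/E)^lam - 1)."""
    obs, exp = np.asarray(obs, float), np.asarray(exp, float)
    if abs(lam) < 1e-12:                       # limit lam -> 0: G^2 statistic
        return 2.0 * np.sum(obs * np.log(obs / exp))
    return (2.0 / (lam * (lam + 1))) * np.sum(obs * ((obs / exp) ** lam - 1))

obs = np.array([18, 22, 31, 29])               # toy one-way counts
exp = np.full(4, obs.sum() / 4)                # expected under uniformity
for lam in (1.0, 0.0, 2.0 / 3.0):              # Pearson X^2, G^2, Cressie-Read
    print(lam, power_divergence(obs, exp, lam))
```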

Keywords: Contingency table, Log-linear models, Penalization, Power-divergence measure, Penalized power-divergence measure.

1742 Advanced Hybrid Particle Swarm Optimization for Congestion and Power Loss Reduction in Distribution Networks with High Distributed Generation Penetration through Network Reconfiguration

Authors: C. Iraklis, G. Evmiridis, A. Iraklis

Abstract:

Renewable energy sources and distributed power generation units already play an important role in electrical power generation. A mixture of different technologies penetrating the electrical grid adds complexity to the management of distribution networks. High penetration of distributed power generation units creates node over-voltages, large power losses, unreliable power management, reverse power flow, and congestion. This paper presents an optimization algorithm capable of reducing congestion and power losses, both combined into a weighted-sum objective. Two factors that describe congestion are proposed. An upgraded selective particle swarm optimization (SPSO) algorithm is used as the solution tool, focusing on the technique of network reconfiguration. The upgrade of the SPSO algorithm is achieved by adding a heuristic algorithm specializing in the reduction of power losses, and several scenarios are tested. Results show significant improvement in the minimization of losses and congestion while achieving very small calculation times.
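
A minimal continuous PSO sketch minimizing a weighted sum of two toy objectives standing in for losses and congestion; the paper's selective PSO over discrete switch states, and its loss-reduction heuristic, are far more involved:

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    """Weighted sum: w1 * losses proxy + w2 * congestion proxy (both toy)."""
    return 0.7 * np.sum(x ** 2, axis=1) + 0.3 * np.sum(np.abs(x - 1), axis=1)

n, d = 30, 5
x = rng.uniform(-5, 5, (n, d))
v = np.zeros((n, d))
pbest, pbest_f = x.copy(), objective(x)
g = pbest[np.argmin(pbest_f)]
for _ in range(200):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (g - x)  # standard PSO
    x = x + v
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[np.argmin(pbest_f)]
print(g, objective(g[None, :]))  # best weighted-sum solution found
```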

Keywords: Congestion, distribution networks, loss reduction, particle swarm optimization, smart grid.

1741 Signing the First Packet in Amortization Scheme for Multicast Stream Authentication

Authors: Mohammed Shatnawi, Qusai Abuein, Susumu Shibusawa

Abstract:

Signature amortization schemes have been introduced for authenticating multicast streams, in which a single signature is amortized over several packets. The hash value of each packet is computed, and some hash values are appended to other packets, forming what is known as a hash chain. These schemes divide the stream into blocks, each block being a number of packets; the signature packet in these schemes is either the first or the last packet of the block. Amortization schemes are efficient solutions in terms of computation and communication overhead, especially in real-time environments. The main effective factor of an amortization scheme is its hash chain construction. Some studies show that signing the first packet of each block reduces the receiver's delay and prevents DoS attacks, while other studies show that signing the last packet reduces the sender's delay. To our knowledge, there are no studies that show which is better, signing the first or the last packet, in terms of authentication probability and resistance to packet loss. In this paper we introduce another scheme for authenticating multicast streams that is robust against packet loss, reduces the overhead, and at the same time prevents the DoS attacks experienced by the receiver. Our scheme, the Multiple Connected Chain signing the First packet (MCF), appends the hash values of specific packets to other packets, then appends some hashes to the signature packet, which is sent as the first packet in the block. This scheme is especially efficient in terms of receiver's delay. We discuss and evaluate the performance of our proposed scheme against those that sign the last packet of the block.
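
As a sketch of the general first-packet amortization idea (not the MCF construction itself, which uses multiple connected chains for loss robustness), the following builds a single hash chain and "signs" only the first packet, with a placeholder digest where a real scheme would use an asymmetric signature:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

packets = [b"payload-%d" % i for i in range(6)]

# build the chain backwards: packet i carries the hash of packet i+1
hashes = [b""] * len(packets)
for i in range(len(packets) - 2, -1, -1):
    hashes[i] = h(packets[i + 1] + hashes[i + 1])

# "sign" only the first packet (placeholder for e.g. an RSA signature)
signature = h(b"secret-key" + packets[0] + hashes[0])

# receiver: verify the signature once, then each packet by a single hash
assert signature == h(b"secret-key" + packets[0] + hashes[0])
for i in range(len(packets) - 1):
    assert hashes[i] == h(packets[i + 1] + hashes[i + 1])
print("stream authenticated with one signature on the first packet")
```

A single chain breaks if any packet is lost; appending each hash to several packets, as MCF does, is what buys resistance to packet loss.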

Keywords: multicast stream authentication, hash chain construction, signature amortization, authentication probability.
