Search results for: fisher-snedecor distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5083

4333 Modeling Loads Applied to Main and Crank Bearings in the Compression-Ignition Two-Stroke Engine

Authors: Marcin Szlachetka, Mateusz Paszko, Grzegorz Baranski

Abstract:

This paper discusses the AVL EXCITE Designer simulation research into loads applied to main and crank bearings in the compression-ignition two-stroke engine. A model of the engine lubrication system was created, covering the part of this system related to the particular nodes of the bearing system, i.e. the connection of the main bearings in the engine block with the crankshaft and the connection of the crank pins with the connecting rod. The analysis focused on the load given as a distribution of hydrodynamic oil film pressure corresponding to different values of radial internal clearance. The impact of gas force on the minimal oil film thickness in the main and crank bearings versus crankshaft rotational speed was also studied. Our model calculates oil film parameters, the oil film pressure distribution, the oil temperature change and the dimensions of the bearings, as well as the oil temperature distribution on the surfaces of the bearing seats. Accordingly, it was possible to select, for example, a correct clearance for each of the bearing nodes. The research was performed for several values of engine crankshaft speed ranging from 800 RPM to 4000 RPM. Bearing oil pressure was varied with engine speed between 1 bar and 5 bar, at an oil temperature of 90°C. The main bearing clearances assumed for the calculations and research were: 0.015 mm, 0.025 mm, 0.035 mm, 0.05 mm, 0.1 mm. The oil used for the research corresponded to the SAE 5W-40 classification. The paper presents selected research results referring to certain specific operating points and bearing radial internal clearances. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.

Keywords: crank bearings, diesel engine, oil film, two-stroke engine

Procedia PDF Downloads 211
4332 Bi-objective Network Optimization in Disaster Relief Logistics

Authors: Katharina Eberhardt, Florian Klaus Kaiser, Frank Schultmann

Abstract:

Last-mile distribution is one of the most critical parts of a disaster relief operation. Various uncertainties, such as infrastructure conditions, resource availability, and fluctuating beneficiary demand, render last-mile distribution challenging in disaster relief operations. The need to balance critical performance criteria like response time, meeting demand and cost-effectiveness further complicates the task. The occurrence of disasters cannot be controlled, and the magnitude is often challenging to assess. In summary, these uncertainties create a need for additional flexibility, agility, and preparedness in logistics operations. As a result, strategic planning and efficient network design are critical for an effective and efficient response. Furthermore, the increasing frequency of disasters and the rising cost of logistical operations amplify the need to provide robust and resilient solutions in this area. Therefore, we formulate a scenario-based bi-objective optimization model that integrates pre-positioning, allocation, and distribution of relief supplies extending the general form of a covering location problem. The proposed model aims to minimize underlying logistics costs while maximizing demand coverage. Using a set of disruption scenarios, the model allows decision-makers to identify optimal network solutions to address the risk of disruptions. We provide an empirical case study of the public authorities’ emergency food storage strategy in Germany to illustrate the potential applicability of the model and provide implications for decision-makers in a real-world setting. Also, we conduct a sensitivity analysis focusing on the impact of varying stockpile capacities, single-site outages, and limited transportation capacities on the objective value. The results show that the stockpiling strategy needs to be consistent with the optimal number of depots and inventory based on minimizing costs and maximizing demand satisfaction. The strategy has the potential for optimization, as network coverage is insufficient and relies on very high transportation and personnel capacity levels. As such, the model provides decision support for public authorities to determine an efficient stockpiling strategy and distribution network and provides recommendations for increased resilience. However, certain factors have yet to be considered in this study and should be addressed in future works, such as additional network constraints and heuristic algorithms.
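
As an illustration of how such a scenario-based bi-objective model can be expressed, the sketch below sets up a small covering-location problem in PuLP using a weighted-sum scalarization of cost and demand coverage; the depots, demand points, costs and coverage relations are hypothetical placeholders, not the German case-study data.

```python
# Sketch: weighted-sum scalarization of a bi-objective covering-location model.
# All data (depots, demand points, costs, coverage) are hypothetical placeholders.
import pulp

depots = ["D1", "D2", "D3"]
demand_points = ["P1", "P2", "P3", "P4"]
open_cost = {"D1": 100, "D2": 80, "D3": 120}          # fixed cost of opening a depot
demand = {"P1": 40, "P2": 25, "P3": 30, "P4": 20}     # beneficiary demand
covers = {("D1", "P1"), ("D1", "P2"), ("D2", "P2"),   # depot j can serve point i
          ("D2", "P3"), ("D3", "P3"), ("D3", "P4")}
w_cost, w_cov = 1.0, 5.0                               # weights of the two objectives

m = pulp.LpProblem("relief_network", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", depots, cat="Binary")
x = pulp.LpVariable.dicts("served", demand_points, cat="Binary")

# A demand point counts as covered only if some open depot can reach it.
for i in demand_points:
    m += x[i] <= pulp.lpSum(y[j] for j in depots if (j, i) in covers)

# Weighted sum: minimize fixed cost minus weighted covered demand.
m += (w_cost * pulp.lpSum(open_cost[j] * y[j] for j in depots)
      - w_cov * pulp.lpSum(demand[i] * x[i] for i in demand_points))

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([j for j in depots if y[j].value() == 1],
      sum(demand[i] * x[i].value() for i in demand_points))
```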

Keywords: humanitarian logistics, bi-objective optimization, pre-positioning, last mile distribution, decision support, disaster relief networks

Procedia PDF Downloads 79
4331 Food Foam Characterization: Rheology, Texture and Microstructure Studies

Authors: Rutuja Upadhyay, Anurag Mehra

Abstract:

Solid food foams/cellular foods are colloidal systems which impart structure, texture and mouthfeel to many food products such as bread, cakes, ice-cream, meringues, etc. Their heterogeneous morphology makes the quantification of structure/mechanical relationships complex. The porous structure of solid food foams is highly influenced by the processing conditions, ingredient composition, and their interactions. Sensory perception of food foams depends on bubble size, shape, orientation, quantity and distribution, which determine the texture of foamed foods. The state and structure of the solid matrix control the deformation behavior of the food, such as elasticity/plasticity or fracture, which in turn has an effect on the force-deformation curves. The obvious step in obtaining the relationship between the mechanical properties and the porous structure is to quantify them simultaneously. Here, we study food foams such as bread dough, baked bread and steamed rice cakes to determine the link between ingredients and the corresponding effect of each of them on the rheology, microstructure, bubble size and texture of the final product. Dynamic rheometry (SAOS), confocal laser scanning microscopy, flatbed scanning, image analysis and texture profile analysis (TPA) have been used to characterize the foods studied. In all the above systems, there was a common observation: when the mean bubble diameter is smaller, the product becomes harder, as evidenced by the increase in the storage and loss moduli (G′, G″), whereas when the mean bubble diameter is large, the product is softer, with a decrease in the moduli values (G′, G″). The bubble size distribution also affects the texture of foods. It was found that hydrocolloids (xanthan gum, alginate) give bread doughs a more uniform bubble size distribution. Bread baking experiments were done to study the rheological changes and mechanisms involved in the structural transition of dough to crumb. Steamed rice cakes with xanthan gum (XG) addition at 0.1% concentration resulted in lower hardness with a narrower pore size distribution and a larger mean pore diameter. Thus, control of bubble size could be an important parameter defining final food texture.

Keywords: food foams, rheology, microstructure, texture

Procedia PDF Downloads 332
4330 A Fully-Automated Disturbance Analysis Vision for the Smart Grid Based on Smart Switch Data

Authors: Bernardo Cedano, Ahmed H. Eltom, Bob Hay, Jim Glass, Raga Ahmed

Abstract:

The deployment of smart grid devices such as smart meters and smart switches (SS) supported by a reliable and fast communications system makes automated distribution possible, and thus, provides great benefits to electric power consumers and providers alike. However, more research is needed before the full utility of smart switch data is realized. This paper presents new automated switching techniques using SS within the electric power grid. A concise background of the SS is provided, and operational examples are shown. Organization and presentation of data obtained from SS are shown in the context of the future goal of total automation of the distribution network. The description of application techniques, the examples of success with SS, and the vision outlined in this paper serve to motivate future research pertinent to disturbance analysis automation.

Keywords: disturbance automation, electric power grid, smart grid, smart switches

Procedia PDF Downloads 307
4329 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options

Authors: Wajih Abbassi, Zouhaier Ben Khelifa

Abstract:

This research aims at the empirical validation of three option valuation models: the ad-hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976) and the Kou jump-diffusion model (2002). Our empirical analysis has been conducted on a sample of 26,974 options written on three indexes, the S&P 500, Nasdaq 100 and the Russell 2000, that were traded during the year 2007 just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the trust-region-reflective algorithm to estimate the structural parameters of these models from cross-sections of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indices. Indeed, the double-exponential distribution captures three interesting properties: the leptokurtic feature, the memoryless property and the psychological behavior of market participants. Numerous empirical studies have shown that markets tend to exhibit both overreaction and underreaction to good and bad news, respectively. Despite these advantages, there are few empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to have used nonlinear curve-fitting through the trust-region-reflective algorithm on cross-sections of options to estimate the structural parameters of the Kou jump-diffusion model.
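
A minimal sketch of the calibration step is given below: SciPy's least_squares with method="trf" is its trust-region-reflective solver, applied to the residuals between model and market prices across a cross-section of options. For illustration, a Black-Scholes pricer stands in for the Kou (2002) valuation formula, and all market data are hypothetical.

```python
# Sketch: nonlinear curve-fitting of structural parameters to a cross-section of option
# prices with SciPy's trust-region-reflective solver (method="trf" in least_squares).
# A Black-Scholes pricer stands in for the Kou (2002) valuation formula here;
# all market data below are hypothetical.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

def bs_call(sigma, S0, K, T, r):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def residuals(params, S0, strikes, maturities, r, market):
    (sigma,) = params            # the Kou model would expose (sigma, lambda, p, eta1, eta2)
    model = np.array([bs_call(sigma, K=K, T=T, S0=S0, r=r)
                      for K, T in zip(strikes, maturities)])
    return model - market

S0, r = 100.0, 0.03
strikes = np.array([90.0, 100.0, 110.0])
maturities = np.array([0.5, 0.5, 0.5])
market = np.array([13.0, 6.0, 2.3])     # observed mid prices (hypothetical)

fit = least_squares(residuals, x0=[0.3], bounds=([1e-4], [2.0]), method="trf",
                    args=(S0, strikes, maturities, r, market))
print("implied structural parameter(s):", fit.x)
```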

Keywords: jump-diffusion process, Kou model, Leptokurtic feature, trust-region-reflective algorithm, US index options

Procedia PDF Downloads 427
4328 Tunisian Dung Beetles Fauna: Composition and Biogeographic Affinities

Authors: Imen Labidi, Said Nouira

Abstract:

Dung beetles (Scarabaeides) of Tunisia constitute a major component of the soil fauna, especially in the Mediterranean region. In the first phase of the present study, an intensive investigation of this group, based on the gathering of all available bibliographic and museological data and on a recent collection of 17,020 specimens from 106 localities in Tunisia, confirmed with certainty the presence of 94 species distributed in 43 genera, 4 families and 3 sub-families. Only 81 species, distributed in 38 genera, 4 families and 3 sub-families, were found during our field surveys. The dung beetle assemblage is composed of 58% Aphodiidae, 39.51% Scarabaeidae, and 8.64% Geotrupidae. The biogeographic affinities of the species were determined and showed that 42% of the identified species have a wide Palaearctic distribution. Endemism is very low: only 3 species are endemic to Tunisia (Mecynodes demoflysi, Neobodilus marani, and Thorectes demoflysi); 29 species have a wide distribution, 35 are northern and 17 are southern species. Moreover, some species depend on very specific biotopes, such as Sisyphus schaefferi, linked to northwestern Tunisia, and Scarabaeus semipunctatus, restricted to the northern coastal area of Tunisia.

Keywords: dung beetles, Tunisia, composition, biogeography

Procedia PDF Downloads 248
4327 Location Choice: The Effects of Network Configuration upon the Distribution of Economic Activities in the Chinese City of Nanning

Authors: Chuan Yang, Jing Bie, Zhong Wang, Panagiotis Psimoulis

Abstract:

Contemporary studies investigating the association between the spatial configuration of the urban network and economic activities at the street level have mostly been conducted within the space syntax conceptual framework. Their findings supported the theory of 'movement economy' and demonstrated the impact of street configuration on the distribution of pedestrian movement and on land-use shaping, especially retail activities. However, the effects varied between different urban contexts. In this paper, the relationship between the distribution of economic activities and the configurational characteristics of the urban network was examined at the segment level. The study area included three neighbourhood types: urban, suburban, and rural. Among all neighbourhoods, three kinds of urban network form were recognised: 'tree-like', grid, and organic patterns. To investigate the nested effects of urban configuration, measured by the space syntax approach, and urban context, multilevel zero-inflated negative binomial (ZINB) regression models were constructed. Additionally, to account for spatial autocorrelation, a spatial lag term was included in the model as an independent variable. The random-effect ZINB model shows superiority over the single-level ZINB model and the multilevel linear (ML) model in explaining the pattern of economic activities across the urban environment. After adjusting for the neighbourhood type and network form effects, connectivity and syntactic centrality significantly affect the clustering of economic activities. The comparison between accumulated and newly established economic activities illustrated their different preferences in location choice.
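
A minimal single-level sketch of the ZINB regression step is shown below using statsmodels; the multilevel (random-effect) structure and the actual Nanning data are omitted, and the simulated covariates (connectivity, integration, spatial lag) are placeholders.

```python
# Sketch: a single-level zero-inflated negative binomial (ZINB) regression relating
# segment-level counts of economic activities to space syntax measures. The paper's
# model is multilevel (segments nested in neighbourhoods); that random-effect structure
# is omitted here, and all data below are simulated placeholders.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500
connectivity = rng.poisson(4, n).astype(float)        # segment connectivity
integration = rng.normal(1.0, 0.3, n)                  # syntactic centrality (integration)
spatial_lag = rng.normal(0.0, 1.0, n)                  # spatially lagged activity count

X = sm.add_constant(np.column_stack([connectivity, integration, spatial_lag]))
# Simulated counts with excess zeros, only for demonstration.
mu = np.exp(-1.0 + 0.2 * connectivity + 0.8 * integration + 0.3 * spatial_lag)
y = rng.poisson(mu) * (rng.uniform(size=n) > 0.4)

zinb = ZeroInflatedNegativeBinomialP(y, X, exog_infl=sm.add_constant(connectivity), p=2)
result = zinb.fit(method="bfgs", maxiter=500, disp=0)
print(result.summary())
```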

Keywords: space syntax, economic activities, multilevel model, Chinese city

Procedia PDF Downloads 124
4326 Estimation of Energy Losses of Photovoltaic Systems in France Using Real Monitoring Data

Authors: Mohamed Amhal, Jose Sayritupac

Abstract:

Photovoltaic (PV) systems have emerged as one of the modern renewable energy sources widely used to produce electricity and deliver it to the electrical grid. In parallel, monitoring systems have been deployed as a key element to track the energy production and to forecast the total production for the coming days. The reliability of PV energy production has become a crucial point in the analysis of PV systems. A deeper understanding of each phenomenon that causes a gain or a loss of energy is needed to better design, operate and maintain PV systems. This work analyzes the distribution of losses in PV systems, starting from the available solar energy, going through the DC side and the AC side, to the delivery point. Most of the phenomena linked to energy losses and gains are considered and modeled, based on real-time monitoring data and the datasheets of the PV system components. The order of magnitude of each loss is compared to the current literature and to commercial software. To date, the analysis of PV system performance based on a breakdown structure of energy losses and gains is not sufficiently covered in the literature, although the concept is common in some commercial software. The cutting edge of the current analysis is the implementation of software tools for energy loss estimation in PV systems based on several energy loss definitions and estimation techniques. The developed tools have been validated and tested on several PV plants in France, which have been operating for years. Among the major findings of the current study: first, PV plants in France show very low rates of soiling and aging; second, the distribution of the other losses is comparable to the literature; third, all reported losses are correlated with operational and environmental conditions. For future work, an extended analysis of further PV plants in France and abroad will be performed.
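
As a hedged illustration of the starting point of such a loss breakdown, the sketch below computes the performance ratio and a coarse DC/AC loss split from annual monitoring totals; all figures are hypothetical, not measurements from the French plants studied.

```python
# Sketch: a first step of a PV loss breakdown from monitoring data, using the standard
# performance-ratio definition PR = E_AC / (P_STC * H_POA / G_STC). All numbers are
# hypothetical placeholders, not measurements from the plants studied in the paper.
P_STC = 100.0      # installed DC capacity at STC [kWp]
G_STC = 1.0        # reference irradiance [kW/m^2]
H_POA = 1500.0     # annual plane-of-array irradiation [kWh/m^2]
E_DC = 135.0e3     # measured annual DC energy at the inverter input [kWh]
E_AC = 130.0e3     # measured annual AC energy at the delivery point [kWh]

reference_yield = H_POA / G_STC              # [h], ideal full-load hours
E_ideal = P_STC * reference_yield            # loss-free reference energy [kWh]

pr = E_AC / E_ideal                          # overall performance ratio
dc_losses = 1.0 - E_DC / E_ideal             # array-side (soiling, temperature, mismatch, ...)
ac_losses = 1.0 - E_AC / E_DC                # conversion and AC-side losses

print(f"PR = {pr:.2%}, DC-side losses = {dc_losses:.2%}, AC-side losses = {ac_losses:.2%}")
```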

Keywords: energy gains, energy losses, losses distribution, monitoring, photovoltaic, photovoltaic systems

Procedia PDF Downloads 174
4325 Wheeled Robot Stable Braking Process under Asymmetric Traction Coefficients

Authors: Boguslaw Schreyer

Abstract:

During the wheeled robot's braking process, extra dynamic vertical forces act on all wheels: left, right, front and rear. Those forces are directed downward on the front wheels and upward on the rear wheels. In order to maximize the deceleration, and therefore minimize the braking time and braking distance, a correct torque distribution must be calculated: the front braking torque should be increased, and the rear torque should be decreased. At the same time, we need to provide better transversal stability. In the simple case of the adhesion coefficient being the same under all wheels, this torque distribution may secure the optimal (maximal) control of the robot braking process, securing the minimum braking distance and minimum braking time, while the transversal stability remains relatively good. At any time, we control the transversal acceleration. In the case of transversal movement, we stop the braking process and re-apply the braking torque after a defined period of time. If the values of the torques are calculated correctly, the traction coefficient under the front and rear wheels can be kept close to its maximum. Also, in order to provide optimum braking control, the timing of the braking torque application and the timing of its release need to be calculated. The braking torques should be released shortly after the wheels pass the maximum of the traction coefficient (while wheel slip increases) and applied again after the wheels pass this maximum (while the slip decreases). A correct braking torque distribution ensures that the front and rear wheels pass this maximum at the same time. It guarantees optimum deceleration control and, therefore, minimum braking time. In order to calculate a correct torque distribution, a control unit should receive the input signals of the rear torque value (which changes independently), the robot's deceleration, and the values of the vertical front and rear forces. In order to calculate the timing of torque application and torque release, more signals are needed: the speed of the robot and the angular speed and angular deceleration of the wheels. In the case of different adhesion coefficients under the left and right wheels, but the same under each pair of wheels (the same under the right wheels and the same under the left wheels), the select-low (SL) and select-high (SH) methods are applied. The SL method is suggested if transversal stability is more important than braking efficiency. For a robot, braking efficiency is often more important; therefore, the SH method is applied with some control of the transversal stability. In the case that the adhesion coefficients are different under all wheels, the front-rear torque distribution is maintained as in all previous cases; however, the timing of the braking torque application and release is controlled by the lowest adhesion coefficient of the rear wheels. The Lagrange equations have been used to describe the robot dynamics. Matlab has been used to simulate the wheeled robot braking process, and on this basis the braking methods have been selected.

Keywords: wheeled robots, braking, traction coefficient, asymmetric

Procedia PDF Downloads 164
4324 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered to be a more holistic approach. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02 with a resolution of 2x2 meters and DEM05 with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is considered as the reference and DEM05 as the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). Once the process has been calibrated, the same process can be applied to compare the similarity of different DEM data sets (e.g. the DEM05 versus the DEM02). In summary, an innovative alternative for the comparison of DEM data sets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
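
A minimal sketch of the core comparison step, assuming the two-sample Kolmogorov-Smirnov test from SciPy and random placeholder samples in place of the real DEM02/DEM05 rasters:

```python
# Sketch: comparing a DEM product against a reference through the distribution functions
# of a terrain variable with the two-sample Kolmogorov-Smirnov test. Real DEM02/DEM05
# rasters would be read with a GIS library; random arrays stand in for them here.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Placeholder samples of a terrain variable (e.g. elevation) from the two data sets.
reference = rng.normal(500.0, 120.0, 10_000)   # e.g. DEM02 elevations [m]
product = rng.normal(502.0, 125.0, 10_000)     # e.g. DEM05 elevations [m]

stat, p_value = ks_2samp(reference, product)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4f}")
# A small p-value rejects the hypothesis that the two distribution functions coincide;
# the same test can be applied jointly to several variables via their convolution,
# as proposed in the abstract.
```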

Keywords: data quality, DEM, kolmogorov-smirnov test, multivariate DEM comparison

Procedia PDF Downloads 114
4323 Research on Spatial Allocation Optimization of Urban Elderly Care Facilities Based on ArcGIS Technology

Authors: Qiao Qiao

Abstract:

As society develops, the demand of the elderly for care facilities is increasing. Taking 26 street towns in the Jiangjin District of Chongqing as examples, ArcGIS spatial analysis methods were used to analyze the distribution of the elderly population, the kernel density of the elderly population, and the spatial layout characteristics of institutional elderly care facilities in the Jiangjin District of Chongqing. The results showed that there were differences in the structure and degree of aging of the elderly population in each street town, that there is a certain imbalance between the spatial distribution of the elderly population and the planning and construction of elderly care facilities, and that the accessibility of elderly care facilities is uneven. Therefore, a genetic algorithm is used to optimize the spatial layout of institutional elderly care facilities, improve the accessibility of facilities, strengthen the participation of multiple subjects, and provide a reference for the future construction planning of elderly care facilities.

Keywords: institutional pension facilities, spatial layout, accessibility, ArcGIS

Procedia PDF Downloads 6
4322 Influence of Maximum Fatigue Load on Probabilistic Aspect of Fatigue Crack Propagation Life at Specified Grown Crack in Magnesium Alloys

Authors: Seon Soon Choi

Abstract:

The principal purpose of this paper is to find the influence of the maximum fatigue load on the probabilistic aspect of the fatigue crack propagation life at a specified grown crack in magnesium alloys. Fatigue crack propagation experiments were carried out in laboratory air under different maximum fatigue load conditions to obtain fatigue crack propagation data for the statistical analysis. In order to analyze the probabilistic aspect of the fatigue crack propagation life, the goodness-of-fit test for the probability distribution of the fatigue crack propagation life at a specified grown crack is implemented through the Anderson-Darling test. The best-fitting probability distribution of the fatigue crack propagation life is also verified under the different maximum fatigue load conditions.
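
A minimal sketch of such a goodness-of-fit check is shown below, assuming a lognormal life model tested with scipy.stats.anderson on simulated propagation lives (the paper's actual data and candidate distributions are not reproduced here):

```python
# Sketch: an Anderson-Darling goodness-of-fit check for fatigue crack propagation lives
# at a specified crack length. scipy.stats.anderson supports a fixed set of reference
# distributions; here log-transformed lives are tested against the normal distribution
# (i.e. a lognormal life model). The life data below are simulated, not the paper's.
import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(1)
lives = rng.lognormal(mean=11.0, sigma=0.15, size=30)   # cycles to reach the specified crack

result = anderson(np.log(lives), dist="norm")
print("A^2 statistic:", result.statistic)
for level, crit in zip(result.significance_level, result.critical_values):
    print(f"{level:>4}% significance: critical value {crit:.3f}")
# If the statistic is below the critical value at the chosen level, the lognormal
# model for the propagation life is not rejected.
```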

Keywords: fatigue crack propagation life, magnesium alloys, maximum fatigue load, probability

Procedia PDF Downloads 386
4321 Distribution of Dynamical and Energy Parameters in Axisymmetric Air Plasma Jet

Authors: Vitas Valinčius, Rolandas Uscila, Viktorija Grigaitienė, Žydrūnas Kavaliauskas, Romualdas Kėželis

Abstract:

Determination of the integral dynamical and energy characteristics of high-temperature gas flows is a very important gas-dynamics task for hazardous substance destruction systems. These characteristics are also always necessary for the investigation of high-temperature turbulent flow dynamics and heat and mass transfer. It is well known that the distribution of the dynamical and thermal characteristics of high-temperature flows and jets is strongly related to the heat flux variation over the imposed heating area. As numerous experiments and theoretical considerations show, the fundamental properties of an isothermal jet are well investigated. However, establishing such regularities under high-temperature conditions reveals specific behavior compared with moderate-temperature jets and flows. Their structures have not been thoroughly studied yet, especially in the case of a plasma environment. It is well known that the distributions of local jet parameters in high-temperature and isothermal jets and flows may differ significantly. A high-temperature axisymmetric air jet generated by an atmospheric-pressure DC arc plasma torch was investigated employing an enthalpy probe of 3.8·10⁻³ m diameter. Distributions of velocity and temperature were established in different cross-sections of the plasma jet outflowing from a 42·10⁻³ m diameter pipe at a mean velocity of 700 m·s⁻¹ and an average temperature of 4000 K. It has been found that gas heating only slightly influences the shape and values of the dimensionless velocity and temperature profiles in the main zone of the plasma jet but has a significant influence in its initial zone. The width of the initial zone of the plasma jet has been found to be smaller than in the case of isothermal flow. The relation between the dynamic thickness and the turbulent Prandtl number has been established along the jet axis. The experimental results were generalized in dimensionless form. The presence of convective heating shows that heat transfer in a moving high-temperature jet also occurs through heat transport by the moving particles of the jet. In this case, the intensity of convective heat transfer is proportional to the instantaneous value of the flow velocity at a given point in space. Consequently, the configuration of the temperature field in moving jets and flows essentially depends on the configuration of the velocity field.

Keywords: plasma jet, plasma torch, heat transfer, enthalpy probe, turbulent number of Prandtl

Procedia PDF Downloads 181
4320 Pudhaiyal: A Maze-Based Treasure Hunt Game for Tamil Words

Authors: Aarthy Anandan, Anitha Narasimhan, Madhan Karky

Abstract:

Word-based games are popular in helping people improve their vocabulary skills. Games like ‘word search’ and crosswords provide a smart way of increasing vocabulary. Word search games are fun to play, but they are also educational and actually help in learning a language. Finding the words in a word search puzzle helps the player remember words more easily, and it also helps in learning the spellings of words. In this paper, we present a tile distribution algorithm for ‘Pudhaiyal’, a maze-based treasure hunt game for Tamil words, which describes how words can be distributed horizontally, vertically or diagonally in a 10 x 10 grid. Along with the tile distribution algorithm, we also present an algorithm for the scoring model of the game. The proposed game has been tested with 20,000 Tamil words.
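
A minimal sketch of the word-placement step of such a tile distribution algorithm is given below; it uses Latin strings and a simplified conflict rule purely for illustration, whereas the actual game works with Tamil grapheme clusters and the scoring model described in the paper:

```python
# Sketch: the core of a tile-distribution step that places words horizontally, vertically
# or diagonally in a 10 x 10 grid, as in the Pudhaiyal puzzle (shown here with Latin
# strings; Tamil words are sequences of grapheme clusters and would be split accordingly).
import random

SIZE = 10
DIRECTIONS = [(0, 1), (1, 0), (1, 1)]   # horizontal, vertical, diagonal

def try_place(grid, word):
    """Try random positions/directions until the word fits without conflicts."""
    letters = list(word)
    for _ in range(200):
        dr, dc = random.choice(DIRECTIONS)
        row = random.randrange(SIZE - dr * (len(letters) - 1))
        col = random.randrange(SIZE - dc * (len(letters) - 1))
        cells = [(row + i * dr, col + i * dc) for i in range(len(letters))]
        if all(grid[r][c] in ("", letters[i]) for i, (r, c) in enumerate(cells)):
            for i, (r, c) in enumerate(cells):
                grid[r][c] = letters[i]
            return True
    return False

grid = [["" for _ in range(SIZE)] for _ in range(SIZE)]
for w in ["TREASURE", "MAZE", "WORD", "GAME"]:
    print(w, "placed" if try_place(grid, w) else "skipped")
# Remaining empty cells would then be filled with random filler tiles.
```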

Keywords: Pudhaiyal, Tamil word game, word search, scoring, maze, algorithm

Procedia PDF Downloads 438
4319 Analysis of Cyclic Elastic-Plastic Loading of Shaft Based on Kinematic Hardening Model

Authors: Isa Ahmadi, Ramin Khamedi

Abstract:

In this paper, the elasto-plastic and cyclic torsion of a shaft is studied using a finite element method. The Prager kinematic hardening theory of plasticity with the Ramberg-Osgood stress-strain equation is used to evaluate the cyclic loading behavior of the shaft under torsional loading. The shaft material is assumed to follow non-linear strain hardening based on the Prager model. A finite element method with C1 continuity is developed and used for the solution of the governing equations of the problem. The successive substitution iterative method is used to calculate the distribution of stresses and plastic strains in the shaft due to cyclic loads. The shear stress, effective stress, residual stress and elastic and plastic shear strain distributions are presented in the numerical results.

Keywords: cyclic loading, finite element analysis, Prager kinematic hardening model, torsion of shaft

Procedia PDF Downloads 408
4318 Leverage Effect for Volatility with Generalized Laplace Error

Authors: Farrukh Javed, Krzysztof Podgórski

Abstract:

We propose a new model that accounts for the asymmetric response of volatility to positive ('good news') and negative ('bad news') shocks in economic time series, the so-called leverage effect. In the past, asymmetric powers of errors in conditionally heteroskedastic models have been used to capture this effect. Our model uses the gamma-difference representation of the generalized Laplace distribution, which efficiently models the asymmetry. It has one additional natural parameter, the shape, that is used instead of the power in asymmetric power models to capture the strength of a long-lasting effect of shocks. Some fundamental properties of the model are provided, including the formula for covariances and an explicit form for the conditional distribution of the 'bad' and 'good' news processes given the past, a property that is important for the statistical fitting of the model. Relevant features of volatility models are illustrated using S&P 500 historical data.
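
A minimal sketch of the gamma-difference construction is shown below: generalized Laplace errors are simulated as the difference of two gamma variables sharing the shape parameter, with asymmetry coming from different scales; the parameter values are illustrative only.

```python
# Sketch: simulating generalized (asymmetric) Laplace errors via their gamma-difference
# representation, X = G1 - G2 with G1 ~ Gamma(shape, scale1) and G2 ~ Gamma(shape, scale2).
# The shape parameter plays the role described in the abstract; asymmetry comes from
# scale1 != scale2. Parameter values here are illustrative only.
import numpy as np

rng = np.random.default_rng(7)

def generalized_laplace(shape, scale_pos, scale_neg, size):
    g1 = rng.gamma(shape, scale_pos, size)   # contribution of "good news"
    g2 = rng.gamma(shape, scale_neg, size)   # contribution of "bad news"
    return g1 - g2

x = generalized_laplace(shape=0.8, scale_pos=0.6, scale_neg=1.0, size=100_000)
print("mean:", x.mean(), "skewness sign:", np.sign(((x - x.mean()) ** 3).mean()))
# Negative skewness reflects a heavier response to bad news, the leverage-effect asymmetry
# that the proposed volatility model builds on.
```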

Keywords: heavy tails, volatility clustering, generalized asymmetric laplace distribution, leverage effect, conditional heteroskedasticity, asymmetric power volatility, GARCH models

Procedia PDF Downloads 383
4317 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transformation of the distribution function in the Fourier domain instead and inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills the niche in literature, to the best of our knowledge, of accurate numerical methods for risk allocation but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to the MC simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk.
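
To illustrate the underlying idea, the sketch below recovers a cumulative distribution function from a characteristic function by a Fourier-cosine (COS-type) expansion; a standard normal variable with a known CDF is used instead of an actual factor-copula portfolio-loss characteristic function, and the truncation range and number of terms are assumptions of the sketch.

```python
# Sketch: the COS-method idea of recovering a cumulative distribution function directly
# from a characteristic function by Fourier-cosine expansion, demonstrated on a standard
# normal variable (whose CDF is known) rather than an actual portfolio-loss characteristic
# function from a factor-copula model.
import numpy as np
from scipy.stats import norm

def cos_cdf(phi, x, a, b, n_terms=128):
    """Approximate P(X <= x) from the characteristic function phi on the truncation [a, b]."""
    k = np.arange(n_terms)
    u = k * np.pi / (b - a)
    F = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))   # cosine coefficients
    F[0] *= 0.5                                                  # first term weighted by 1/2
    # Term-wise integral of cos(k*pi*(t-a)/(b-a)) from a to x.
    integrals = np.empty(n_terms)
    integrals[0] = x - a
    integrals[1:] = (b - a) / (k[1:] * np.pi) * np.sin(u[1:] * (x - a))
    return float(F @ integrals)

phi_normal = lambda u: np.exp(-0.5 * u**2)
for x in (-1.0, 0.0, 1.5):
    print(x, cos_cdf(phi_normal, x, a=-10.0, b=10.0), norm.cdf(x))
```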

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 165
4316 Non-Revenue Water Management in Palestine

Authors: Samah Jawad Jabari

Abstract:

Water is the most important and valuable resource not only for human life but also for all living things on the planet. Water supply utilities should fulfill the water requirement both quantitatively and qualitatively. Drinking water systems are exposed to both natural hazards (hurricanes and floods) and man-made hazards (risks) that are common in Palestine. Non-Revenue Water (NRW) is a man-made risk which remains a major concern in Palestine, as NRW levels are estimated to be high. In this research, the Hebron city water distribution network was taken as a case study to estimate and audit the NRW levels. The research also investigated the state of the existing water distribution system in the study area by examining the water losses and gathering information on NRW prevention and management practices. Data and information have been collected from the Palestinian Water Authority (PWA) and the Hebron Municipality (HM) archives. In addition, a questionnaire was designed and administered by the researcher in order to collect the necessary data for water auditing. The questionnaire also assessed the views of stakeholders in the PWA and HM (staff) on the current status of NRW in the Hebron water distribution system. The important result obtained by this research is that NRW in Hebron city was high, in excess of 30%. The main factors contributing to NRW were inaccuracies in billed volumes, unauthorized consumption, and the estimation of consumption through faulty meters. A policy for NRW reduction is available in Palestine; however, the number of qualified staff available to carry out leak detection activities is low, and there is a lack of appropriate technologies to reduce water losses and undertake sufficient system maintenance. These aspects need to be improved to enhance the performance of the network and decrease the level of NRW losses.

Keywords: non-revenue water, water auditing, leak detection, water meters

Procedia PDF Downloads 296
4315 CO2 Gas Solubility and Foam Generation

Authors: Chanmoly Or, Kyuro Sasaki, Yuichi Sugai, Masanori Nakano, Motonao Imai

Abstract:

The cold drainage mechanism of oil production is a complicated process which involves solubility and foaming processes. Laboratory experiments were carried out to investigate the CO2 gas solubility in hexadecane (as a light oil) and the effect of depressurization processes on microbubble generation. The experimental study of the sensitivity of CO2 gas solubility in hexadecane to temperature and pressure was conducted at temperatures of 20 °C and 50 °C and pressures ranging from 2.0 to 7.0 MPa using a PVT apparatus (RUSKA Model 2370). Foamy hexadecane experiments were also prepared by depressurizing from a saturation pressure of 6.4 MPa and a temperature of 50 °C. The experimental results show that the CO2 gas solubility in hexadecane increases linearly with increasing pressure. At a pressure of 4.5 MPa, the CO2 solubility in hexadecane was 2.5 mmol·g⁻¹ at a temperature of 50 °C and 3.5 mmol·g⁻¹ at a temperature of 20 °C. Observation of the foamy hexadecane bubbles showed that most of the large bubbles coalesced shortly, whereas the small ones persisted. The experimental results for foamy hexadecane indicated that a large depressurization step (∆P) produces a high quality of foam with a dense microbubble distribution.

Keywords: CO2 gas solubility, depressurization process, foamy hexadecane, microbubble distribution

Procedia PDF Downloads 491
4314 Wear Characteristics of Al Based Composites Fabricated with Nano Silicon Carbide Particles

Authors: Mohammad Reza Koushki Ardestani, Saeed Daneshmand, Mohammad Heydari Vini

Abstract:

In the present study, AA7075/SiO2 composites have been fabricated via a liquid metallurgy process. Using a degassing process, the wettability of the molten aluminum alloy was increased, which improved the bonding between the aluminum matrix and the reinforcement (SiO2) particles. The AA7075 alloy and SiO2 particles were taken as the base matrix and reinforcement, respectively. Contents of 2.5 and 5 wt.% of SiO2 particles were then added into the AA7075 matrix. To improve wettability and distribution, the reinforcement particles were pre-heated to a temperature of 550°C for each composite sample. A uniform distribution of SiO2 particles throughout the matrix alloy was observed in the microstructural study. A pin-on-disc wear testing machine with a hardened EN32 steel disc as the counter face was used to evaluate the wear rate. The results showed that the wear rate of the AA/SiO2 composites was lower than that of the monolithic AA7075 samples. Finally, the worn surfaces of the samples were investigated by SEM.

Keywords: Al7075, SiO₂, wear, composites, stir casting

Procedia PDF Downloads 100
4313 Effect of Fuel Lean Reburning Process on NOx Reduction and CO Emission

Authors: Changyeop Lee, Sewon Kim

Abstract:

Reburning is a useful technology for reducing nitric oxide through the injection of a secondary hydrocarbon fuel. In this paper, an experimental study has been conducted to evaluate the effect of fuel lean reburning on NOx/CO reduction in an LNG flame. Experiments were performed in flames stabilized by a co-flow swirl burner, which was mounted at the bottom of the furnace. Tests were conducted using LNG gas as the reburn fuel as well as the main fuel. The effects of the reburn fuel fraction and of the manner of injection of the reburn fuel were studied when the fuel lean reburning system was applied. The paper reports data on flue gas emissions and the temperature distribution in the furnace for a wide range of experimental conditions. At steady state, the temperature distribution and emission formation in the furnace were measured and compared. This paper makes clear that, in order to decrease both NOx and CO concentrations in the exhaust when a pulsated fuel lean reburning system is adopted, it is important to control factors such as the pulsation frequency and the duty ratio. It also shows that fuel lean reburning is as effective a method for reducing NOx as conventional reburning.

Keywords: fuel lean reburn, NOx, CO, LNG flame

Procedia PDF Downloads 423
4312 Targeting Mineral Resources of the Upper Benue Trough, Northeastern Nigeria Using Linear Spectral Unmixing

Authors: Bello Yusuf Idi

Abstract:

The Gongola arm of the Upper Benue Trough, Northeastern Nigeria is predominantly covered by outcrops of limestone-bearing rocks in the form of sandstone with intercalations of carbonate clay, shale, basaltic, feldspathic and migmatite rocks at sub-pixel dimensions. In this work, a sub-pixel classification algorithm was used to classify data acquired from the Landsat 7 Enhanced Thematic Mapper (ETM+) satellite system with the aim of producing fractional distribution images for the three most economically important solid minerals of the area: limestone, basalt and migmatite. The Linear Spectral Unmixing (LSU) algorithm was used to produce fractional abundance images of the three mineral resources within a 100 km² portion of the area. The results show that the minerals occur in different proportions all over the area. The fractional maps could therefore serve as a guide for the ongoing reconnaissance of the economic potential of the formation.
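
A minimal sketch of the unmixing step for a single pixel is shown below, using non-negative least squares with a soft sum-to-one constraint; the endmember spectra and the mixed pixel are made-up numbers, not ETM+ reflectances from the study area.

```python
# Sketch: linear spectral unmixing of one pixel into fractional abundances of three
# endmembers (e.g. limestone, basalt, migmatite) using non-negative least squares with a
# sum-to-one row. The endmember spectra and the pixel below are made-up numbers, not
# Landsat 7 ETM+ reflectances from the study area.
import numpy as np
from scipy.optimize import nnls

# Columns = endmember spectra over 6 reflective ETM+ bands (hypothetical values).
E = np.array([[0.35, 0.10, 0.22],
              [0.40, 0.12, 0.25],
              [0.45, 0.15, 0.27],
              [0.50, 0.20, 0.30],
              [0.55, 0.28, 0.33],
              [0.52, 0.30, 0.35]])
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]   # synthetic mixed pixel

weight = 10.0                                            # enforces sum-to-one softly
A = np.vstack([E, weight * np.ones((1, E.shape[1]))])
b = np.append(pixel, weight)

fractions, residual = nnls(A, b)
print("fractional abundances (limestone, basalt, migmatite):", np.round(fractions, 3))
```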

Keywords: linear spectral un-mixing, upper benue trough, gongola arm, geological engineering

Procedia PDF Downloads 369
4311 Exploring Exposed Political Economy in Disaster Risk Reduction Efforts in Bangladesh

Authors: Shafiqul Islam, Cordia Chu

Abstract:

Bangladesh is one of the countries most vulnerable to climate-related disasters such as floods and cyclones. Drawing on semi-structured in-depth interviews with 38 stakeholders and a literature review, this study examined the distribution of public spending in disaster risk reduction (DRR). This paper demonstrates how the political economy processes of enclosure, exclusion, encroachment, and entrenchment hinder the DRR efforts of the Department of Disaster Management (DDM), such as the distribution of flood centres, cyclone centres and 40-day employment generation programs. Enclosure refers to situations in which DRR projects are allocated to less vulnerable areas or expand the roles of influential actors into the public sphere. Exclusion refers to situations in which DRR projects limit affected people’s access to resources or marginalize particular stakeholders in decision-making activities. Encroachment refers to situations in which the allocation of DRR projects and the selection of locations and issues degrade the environment or contribute to other forms of disaster risk. Entrenchment refers to situations in which DRR projects aggravate the disempowerment of ordinary people and worsen the concentration of wealth and income inequality within a community. In line with the United Nations (UN) Sustainable Development Goals (SDGs) and the Hyogo and Sendai Frameworks, DRR policies in Bangladesh are implemented under the country’s national five-year plan and disaster-related acts and rules. These policies and practices have, however, enabled influential elites to mobilize and distribute resources through bureaucracies. Exclusionary forms of DRR fund distribution exist at both the national and local scales. DRR-related allocations have encroached through lowland area development projects without consulting local needs. Most severely, unequal DRR-related allocations have entrenched social class divisions, trapping disadvantaged communities in vulnerability to climate-related disasters. Planners and practitioners of DRR need to take the necessary steps to eliminate the potential risks from the processes of enclosure, exclusion, encroachment, and entrenchment in project fund allocations.

Keywords: Bangladesh, disaster risk reduction, fund distribution, political economy

Procedia PDF Downloads 129
4310 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also attracted great scientific interest in terms of its regulation. It has been experimentally demonstrated in recent studies that LAC is dominantly composed of traffic and wood burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, they also open up novel possibilities for source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method of the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed on ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood burning aerosol has been demonstrated experimentally earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064nm)/AOC(266nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood burning aerosols. The method offers the possibility of replacing laborious chemical analysis with the simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
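
A minimal sketch of the two-component split is given below, assuming the familiar two-wavelength formulation in which source-specific Angström exponents scale each component between wavelengths; the absorption values and AAE choices are illustrative, not the 4λ-PAS measurements.

```python
# Sketch: the two-component apportionment step, splitting the measured absorption at two
# wavelengths into traffic (fossil fuel, ff) and wood-burning (wb) contributions from
# assumed Angstrom exponents AAE_ff and AAE_wb. The absorption coefficients and AAE
# values below are illustrative, not the 4λ-PAS measurements of the study.
import numpy as np

lam1, lam2 = 1064.0, 266.0          # wavelengths [nm]
b_meas = np.array([12.0, 95.0])     # measured absorption at lam1, lam2 [1/Mm]
AAE_ff, AAE_wb = 1.0, 2.0           # assumed source-specific Angstrom exponents

# Unknowns: absorption of each source at lam1; the second row scales them to lam2.
A = np.array([[1.0, 1.0],
              [(lam2 / lam1) ** -AAE_ff, (lam2 / lam1) ** -AAE_wb]])
b_ff_lam1, b_wb_lam1 = np.linalg.solve(A, b_meas)

print(f"traffic share at {lam1:.0f} nm: {b_ff_lam1 / b_meas[0]:.1%}")
print(f"wood-burning share at {lam1:.0f} nm: {b_wb_lam1 / b_meas[0]:.1%}")
```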

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 226
4309 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer

Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo

Abstract:

Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of production of sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. The crystallization quality is examined through the crystal size distribution at the end of the process, which is quantified by two parameters: the mean aperture (MA), the average crystal size of the distribution, and the coefficient of variation (CV), the width of the distribution. The lack of real-time measurement of the sugar crystal size hinders its feedback control and the eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables which are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using 7 input variables, namely initial crystal size (L₀), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial super-saturation (S₀) and crystallization time (t), preliminary studies were carried out using the Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model; the initial crystal size (L₀) was found not to play a significant role. The goodness of the resulting regression model was evaluated. The coefficient of determination, R², was obtained as 0.994, and the maximum absolute relative error (MARE) was obtained as 4.6%. The high R² (~1.0) and the reasonably low MARE values indicate that the model is able to predict the sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of the sugar crystal size during the sugar crystallization process in a fed-batch vacuum evaporative crystallizer.
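
A minimal sketch of how the two reported goodness measures can be computed from measured and predicted mean apertures is shown below; the numbers are placeholders, not the paper's datasets.

```python
# Sketch: evaluating a crystal-size regression model with the two statistics quoted in the
# abstract, the coefficient of determination R^2 and the maximum absolute relative error
# (MARE). The measured/predicted mean-aperture values below are placeholders.
import numpy as np

ma_measured = np.array([0.52, 0.61, 0.58, 0.70, 0.66, 0.55])   # MA from sieving [mm]
ma_predicted = np.array([0.53, 0.60, 0.59, 0.68, 0.67, 0.56])  # MA from the regression model

ss_res = np.sum((ma_measured - ma_predicted) ** 2)
ss_tot = np.sum((ma_measured - ma_measured.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

mare = np.max(np.abs((ma_predicted - ma_measured) / ma_measured))

print(f"R^2 = {r_squared:.3f}, MARE = {mare:.1%}")
```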

Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer

Procedia PDF Downloads 206
4308 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI

Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil

Abstract:

The paper is devoted to numerically investigating the influence of the air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in a room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that prescribes the direction and magnitude of the air flow into the room is developed. The effect of air distribution on the thermal comfort parameters was investigated by changing the air supply diffuser type, angles and velocity. The locations and number of the air supply diffusers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software “Fluent 6.3” is incorporated to solve the differential equations governing the conservation of mass, momentum (three components) and energy for the air flow distribution. Turbulence effects of the flow are represented by a well-developed two-equation turbulence model; in this work, the so-called standard k-ε turbulence model, one of the most widespread turbulence models for industrial applications, was utilized. Basic parameters included in this work, namely air dry-bulb temperature, air velocity, relative humidity and turbulence parameters, are used for the numerical predictions of indoor air distribution and thermal comfort. The thermal comfort predictions in this work were based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model and the PPD (Predicted Percentage Dissatisfied) model; the PMV and PPD were estimated using Fanger’s model.
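
A minimal sketch of the ADPI evaluation over sampled CFD points is shown below, assuming the ASHRAE effective draft temperature definition and made-up local temperatures and velocities:

```python
# Sketch: computing the ADPI from sampled points of the CFD solution, assuming the ASHRAE
# effective draft temperature definition EDT = (T_local - T_room) - 8.0*(V_local - 0.15),
# with comfort when -1.7 <= EDT <= 1.1 degC and V_local < 0.35 m/s. Point values are made up.
import numpy as np

t_room = 24.0                                            # average room air temperature [degC]
t_local = np.array([23.8, 24.5, 24.1, 25.0, 23.2])       # local temperatures at sample points
v_local = np.array([0.12, 0.20, 0.30, 0.40, 0.18])       # local velocities [m/s]

edt = (t_local - t_room) - 8.0 * (v_local - 0.15)
comfortable = (edt >= -1.7) & (edt <= 1.1) & (v_local < 0.35)

adpi = 100.0 * comfortable.mean()
print(f"ADPI = {adpi:.0f}% of sampled points in the comfort range")
```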

Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency

Procedia PDF Downloads 408
4307 Estimation of the Upper Tail Dependence Coefficient for Insurance Loss Data Using an Empirical Copula-Based Approach

Authors: Adrian O'Hagan, Robert McLoughlin

Abstract:

Considerable focus in the world of insurance risk quantification is placed on modeling loss values from lines of business (LOBs) that possess upper tail dependence. Copulas such as the Joe, Gumbel and Student-t copula may be used for this purpose. The copula structure imparts a desired level of tail dependence on the joint distribution of claims from the different LOBs. Alternatively, practitioners may possess historical or simulated data that already exhibit upper tail dependence, through the impact of catastrophe events such as hurricanes or earthquakes. In these circumstances, it is not desirable to induce additional upper tail dependence when modeling the joint distribution of the loss values from the individual LOBs. Instead, it is of interest to accurately assess the degree of tail dependence already present in the data. The empirical copula and its associated upper tail dependence coefficient are presented in this paper as robust, efficient means of achieving this goal.
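
A minimal sketch of such an empirical estimate is given below: losses are converted to pseudo-observations by ranking, and the upper tail dependence coefficient is approximated by the joint exceedance frequency above a high threshold; the simulated losses and the threshold choice are assumptions of the sketch.

```python
# Sketch: a simple empirical-copula estimate of the upper tail dependence coefficient for
# two lines of business, lambda_U ~= P(U > u, V > u) / (1 - u) evaluated at a high
# threshold u of the pseudo-observations. The simulated losses stand in for historical or
# catastrophe-model output; the threshold choice is an assumption of this sketch.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(3)
n = 5_000
# Simulated joint losses with some upper tail dependence (common shock plus noise).
shock = rng.pareto(3.0, n)
lob1 = shock + rng.exponential(1.0, n)
lob2 = shock + rng.exponential(1.0, n)

u = 0.95                                             # tail threshold on the copula scale
U = rankdata(lob1) / (n + 1)                         # pseudo-observations (empirical copula)
V = rankdata(lob2) / (n + 1)

lambda_u = np.mean((U > u) & (V > u)) / (1.0 - u)
print(f"estimated upper tail dependence at u={u}: {lambda_u:.2f}")
```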

Keywords: empirical copula, extreme events, insurance loss reserving, upper tail dependence coefficient

Procedia PDF Downloads 282
4306 The Distribution of rs5219 Polymorphism in the Non-Diabetic Elderly Jordanian Subject

Authors: Foad Alzoughool

Abstract:

Studies on the association between the rs5219 polymorphism and type 2 diabetes are conflicting: some studies have confirmed a strong relationship between this variant and type 2 diabetes, whereas many others have denied the existence of this association. This study aimed to provide evidence on whether or not the rs5219 polymorphism plays a role as a risk factor for diabetes, together with a meta-analysis investigating the role of the age of the control group in the association. Genotyping of the rs5219 polymorphism was performed in a cohort of 266 healthy elderly subjects with a mean age of 60.2 ± 5.1 years and no history of diabetes (HbA1c < 6%), using standard Sanger sequencing methods. The Lys/Lys genotype was detected in 20 persons (7.5%), Lys/Glu in 96 persons (36.1%), and Glu/Glu in 150 persons (56.4%). The genotype distribution was consistent with Hardy–Weinberg equilibrium (P = 0.7). Notably, the meta-analysis indicates no association between the rs5219 polymorphism and type 2 diabetes in all studies that used a control group younger than the patient group. In conclusion, our study sheds light on the importance of the age of the control group recruited in case-control studies.
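
A minimal sketch of the Hardy-Weinberg check on the reported genotype counts is shown below; since the abstract does not state which HWE test was applied, a chi-square version is used, so its p-value need not match the reported P = 0.7 exactly.

```python
# Sketch: checking Hardy-Weinberg equilibrium from the reported genotype counts
# (Lys/Lys = 20, Lys/Glu = 96, Glu/Glu = 150) with a chi-square goodness-of-fit test on
# one degree of freedom. The abstract does not state which HWE test was used, so the
# resulting p-value need not coincide exactly with the reported P = 0.7.
from scipy.stats import chi2

obs = {"Lys/Lys": 20, "Lys/Glu": 96, "Glu/Glu": 150}
n = sum(obs.values())

p_lys = (2 * obs["Lys/Lys"] + obs["Lys/Glu"]) / (2 * n)   # Lys (K) allele frequency
p_glu = 1.0 - p_lys

expected = {"Lys/Lys": n * p_lys**2,
            "Lys/Glu": 2 * n * p_lys * p_glu,
            "Glu/Glu": n * p_glu**2}

stat = sum((obs[g] - expected[g]) ** 2 / expected[g] for g in obs)
p_value = chi2.sf(stat, df=1)
print(f"Lys allele frequency = {p_lys:.3f}, chi-square = {stat:.2f}, p = {p_value:.2f}")
```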

Keywords: Type 2 diabetes, rs5219 polymorphism, E23K, KCNJ11 gene

Procedia PDF Downloads 157
4305 Supply Network Design for Production-Distribution of Fish: A Sustainable Approach Using Mathematical Programming

Authors: Nicolás Clavijo Buriticá, Laura Viviana Triana Sanchez

Abstract:

This research addresses a productive context associated with the aquaculture industry in northern Tolima, Colombia, specifically in the town of Lerida. Strategic aspects of the fish production-distribution chain are addressed, especially those related to the supply network design of an association devoted to the cultivation, farming, processing and marketing of fish. The research adopts a particular approach to Supply Chain Management (SCM) that directs management objectives towards system sustainability; this approach is called Sustainable Supply Chain Management (SSCM). The network design of the fish production-distribution system is obtained for the case study by two mathematical programming models that aim to maximize the economic benefits of the chain and minimize total supply chain costs, taking into account restrictions to protect the environment and their implications for system productivity. The results of the mathematical models, validated in the productive situation of the partnership under study (Asopiscinorte), show the variation in the number of open or closed locations in the supply network that determines the final network configuration. For the case study, the proposed configuration generates an increase of 31.5% in the partial productivity of storage and processing, in addition to possible favorable long-term implications, such as whether or not to serve a given consumer area with agility, whether or not to increase the level of sales in several areas, and the ability to meet the quantity, time and cost requirements of work in progress and finished goods for the various actors in the chain.

Keywords: Sustainable Supply Chain, mathematical programming, aquaculture industry, Supply Chain Design, Supply Chain Configuration

Procedia PDF Downloads 535
4304 Steady-State Behavior of a Multi-Phase M/M/1 Queue in Random Evolution Subject to Catastrophe Failure

Authors: Reni M. Sagayaraj, Anand Gnana S. Selvam, Reynald R. Susainathan

Abstract:

In this paper, we consider stochastic queueing models for the steady-state behavior of a multi-phase M/M/1 queue in random evolution subject to catastrophic failure. The arrival flow of customers is described by a marked Markovian arrival process. The service times of the different customer types have phase-type distributions with different parameters. To facilitate the investigation of the system, we use a generalized phase-type service time distribution. The model contains a repair state; when a catastrophe occurs, the system is transferred to the failure state. The paper focuses on the steady-state equations, and the steady-state behavior of the underlying queueing model, along with the average queue size, is analyzed.
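
As a hedged numerical illustration of the steady-state computation, the sketch below solves the balance equations of a much simpler single-phase M/M/1 queue with catastrophes on a truncated state space; the multi-phase service, marked Markovian arrivals and repair state of the paper's model are not reproduced.

```python
# Sketch: numerically solving the steady-state balance equations of a simplified M/M/1
# queue subject to catastrophes that empty the system (rate gamma), on a truncated state
# space. The paper's model is richer (multi-phase service, marked Markovian arrivals and a
# repair state); this only illustrates the "solve pi Q = 0" step. Rates are illustrative.
import numpy as np

lam, mu, gamma = 0.8, 1.0, 0.05      # arrival, service and catastrophe rates
N = 200                              # truncation level of the queue length

Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] += lam           # arrival
    if i > 0:
        Q[i, i - 1] += mu            # service completion
        Q[i, 0] += gamma             # catastrophe wipes out the whole queue
    Q[i, i] -= Q[i].sum()            # diagonal makes each row sum to zero

# Solve pi Q = 0 together with the normalization sum(pi) = 1.
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("P(empty) =", pi[0], " mean queue size =", pi @ np.arange(N + 1))
```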

Keywords: M/G/1 queuing system, multi-phase, random evolution, steady-state equation, catastrophe failure

Procedia PDF Downloads 327