Search results for: probability distribution
4941 Adaptation of Projection Profile Algorithm for Skewed Handwritten Text Line Detection
Authors: Kayode A. Olaniyi, Tola. M. Osifeko, Adeola A. Ogunleye
Abstract:
Text line segmentation is an important step in document image processing. It represents a labeling process that assigns the same label, based on a distance-metric probability, to spatially aligned units. Text line detection techniques have been implemented successfully mainly in printed documents. However, processing of handwritten texts, especially unconstrained documents, has remained a key problem. This is because unconstrained handwritten text lines are often not uniformly skewed. The spaces between text lines may not be obvious, complicated by the nature of handwriting and the overlapping ascenders and/or descenders of some characters. Hence, text line detection and segmentation remains a leading challenge in handwritten document image processing. Text line detection methods that rely on the traditional global projection profile of the text document cannot efficiently cope with the problem of variable skew angles between different text lines. Hence, formulating a horizontal line as a separator is often not effective. This paper presents a technique to segment a handwritten document into distinct lines of text. The proposed algorithm starts by partitioning the initial text image across its width into vertical strips of about 5% each. For each 5% vertical strip, the histogram of horizontal runs is projected. We have worked with the assumption that text lines appearing within a single strip are almost parallel to each other. The algorithm provides a sliding window through the first vertical strip on the left side of the page, running through it to identify each new minimum corresponding to a valley in the projection profile. Each valley represents the starting point of an orientation line, and the ending point is the minimum point on the projection profile of the next vertical strip. The derived text lines traverse around any obstructing handwritten connected component by associating it with either the line above or the line below. The decision to associate such a connected component is made using the probability obtained from a distance metric. The technique outperforms the global projection profile for text line segmentation, and it is robust in handling skewed documents and those with lines running into each other.
Keywords: connected-component, projection-profile, segmentation, text-line
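A minimal sketch of the strip-wise valley search described in the abstract is given below; it assumes a binarized page image (text pixels equal to 1) and is only an illustration of the idea, not the authors' implementation.

```python
# Strip-wise projection-profile valley search (illustrative sketch, not the authors' code).
import numpy as np

def strip_valleys(binary_page, strip_fraction=0.05, smooth=15):
    """Return, for each ~5% vertical strip, the row indices of projection-profile valleys."""
    height, width = binary_page.shape
    strip_width = max(1, int(width * strip_fraction))
    valleys_per_strip = []
    for x0 in range(0, width, strip_width):
        strip = binary_page[:, x0:x0 + strip_width]
        profile = strip.sum(axis=1).astype(float)            # horizontal projection profile
        kernel = np.ones(smooth) / smooth
        profile = np.convolve(profile, kernel, mode="same")  # light smoothing
        # a valley is a local minimum of the profile (gap between text lines)
        valleys = [y for y in range(1, height - 1)
                   if profile[y] <= profile[y - 1] and profile[y] < profile[y + 1]]
        valleys_per_strip.append(valleys)
    return valleys_per_strip
```

The valleys found in one strip would then be matched against the valleys of the neighbouring strip to trace the (possibly skewed) separator lines across the page.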
Procedia PDF Downloads 122
4940 Leverage Effect for Volatility with Generalized Laplace Error
Authors: Farrukh Javed, Krzysztof Podgórski
Abstract:
We propose a new model that accounts for the asymmetric response of volatility to positive ('good news') and negative ('bad news') shocks in economic time series, the so-called leverage effect. In the past, asymmetric powers of errors in conditionally heteroskedastic models have been used to capture this effect. Our model uses the gamma difference representation of the generalized Laplace distributions, which efficiently models the asymmetry. It has one additional natural parameter, the shape, that is used instead of the power in the asymmetric power models to capture the strength of a long-lasting effect of shocks. Some fundamental properties of the model are provided, including the formula for covariances and an explicit form for the conditional distribution of 'bad' and 'good' news processes given the past, a property that is important for the statistical fitting of the model. Relevant features of volatility models are illustrated using S&P 500 historical data.
Keywords: heavy tails, volatility clustering, generalized asymmetric Laplace distribution, leverage effect, conditional heteroskedasticity, asymmetric power volatility, GARCH models
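As an illustration of the gamma difference representation mentioned above, the sketch below simulates generalized asymmetric Laplace noise as the difference of two gamma variables sharing a common shape parameter; the parameter values (tau, b_pos, b_neg) are hypothetical and the snippet is not the authors' model specification.

```python
# Generalized asymmetric Laplace noise via the gamma-difference representation (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def gal_noise(n, tau=1.0, b_pos=1.0, b_neg=0.7):
    """GAL variates as a difference of two independent gamma variables with
    common shape tau; b_pos != b_neg yields an asymmetric distribution."""
    g_pos = rng.gamma(shape=tau, scale=b_pos, size=n)
    g_neg = rng.gamma(shape=tau, scale=b_neg, size=n)
    return g_pos - g_neg

sample = gal_noise(10_000, tau=0.8)
print("skewness direction:", np.sign(np.mean((sample - sample.mean()) ** 3)))
```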
Procedia PDF Downloads 383
4939 Astronomical Object Classification
Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan
Abstract:
We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of more than ∼65,000 color templates for comparison with observed objects. The method aims at extracting the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator which determines the redshift error from the redshift dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte-Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis
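The template-matching principle the abstract describes can be sketched as follows; the observed colors, errors and the tiny template library are hypothetical placeholders purely for illustration, not the CADIS pipeline or its library.

```python
# Chi-square template matching turned into normalized membership probabilities (illustrative).
import numpy as np

def classify(observed_colors, color_errors, template_colors, template_labels):
    """Pick the template (star/galaxy/quasar, possibly at some redshift) whose colors
    best match the observed object, and report normalized probabilities."""
    chi2 = np.sum(((template_colors - observed_colors) / color_errors) ** 2, axis=1)
    likelihood = np.exp(-0.5 * (chi2 - chi2.min()))   # unnormalized probabilities
    prob = likelihood / likelihood.sum()
    best = int(np.argmax(prob))
    return template_labels[best], prob

templates = np.array([[0.2, 0.5], [1.0, 1.4], [0.8, 0.1]])   # hypothetical (g-r, r-i) colors
labels = ["star", "galaxy z=0.3", "quasar z=2"]
obs, err = np.array([0.95, 1.30]), np.array([0.1, 0.1])
print(classify(obs, err, templates, labels))
```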
Procedia PDF Downloads 75
4938 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transform of the distribution function in the Fourier domain instead; inverting back to the real domain can then be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the "curse of dimensionality" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk.
Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
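To make the Fourier-inversion idea concrete, the sketch below recovers a cumulative distribution function from a known characteristic function with the COS cosine expansion; the standard normal characteristic function, the truncation range [a, b] and the number of terms N are illustrative stand-ins, not the paper's loss model or settings.

```python
# COS-method recovery of a CDF from a characteristic function (illustrative sketch).
import numpy as np

def cos_cdf(phi, x, a=-10.0, b=10.0, N=256):
    """Approximate F(x) on [a, b] from the characteristic function phi via the COS expansion."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    k = np.arange(N)
    u = k * np.pi / (b - a)
    A = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))  # cosine coefficients
    A[0] *= 0.5                                                  # first term is halved
    terms = np.empty((N, x.size))
    terms[0] = x - a                                             # integral of the k = 0 term
    for j in range(1, N):
        terms[j] = np.sin(u[j] * (x - a)) / u[j]                 # integral of cos(u_j (y - a))
    return A @ terms

phi_normal = lambda u: np.exp(-0.5 * u ** 2)                     # standard normal stand-in
print(cos_cdf(phi_normal, [0.0, 1.645]))                         # close to [0.5, 0.95]
```

Once the (conditional) CDF is available this way, quantile-based metrics such as Value-at-Risk follow by inverting it numerically.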
Procedia PDF Downloads 165
4937 Non-Revenue Water Management in Palestine
Authors: Samah Jawad Jabari
Abstract:
Water is the most important and valuable resource not only for human life but also for all living things on the planet. The water supply utilities should fulfill the water requirement quantitatively and qualitatively. Drinking water systems are exposed to both natural (hurricanes and floods) and manmade hazards (risks) that are common in Palestine. Non-Revenue Water (NRW) is a manmade risk which remains a major concern in Palestine, as NRW levels are estimated to be high. In this research, the Hebron city water distribution network was taken as a case study to estimate and audit the NRW levels. The research also investigated the state of the existing water distribution system in the study area by examining the water losses and obtaining more information on NRW prevention and management practices. Data and information have been collected from the Palestinian Water Authority (PWA) and Hebron Municipality (HM) archives. In addition to that, a questionnaire has been designed and administered by the researcher in order to collect the necessary data for water auditing. The questionnaire also assessed the views of stakeholders (staff) in the PWA and HM on the current status of NRW in the Hebron water distribution system. The main result obtained by this research shows that NRW in Hebron city was high, in excess of 30%. The main factors that contribute to NRW were the inaccuracies in billing volumes, unauthorized consumption, and the method of estimating consumption through faulty meters. Policy for NRW reduction is available in Palestine; however, it is clear that the number of qualified staff available to carry out the activities related to leak detection is low, and that there is a lack of appropriate technologies to reduce water losses and undertake sufficient system maintenance, which needs to be improved to enhance the performance of the network and decrease the level of NRW losses.
Keywords: non-revenue water, water auditing, leak detection, water meters
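The water-audit arithmetic behind an NRW figure such as "in excess of 30%" can be sketched as below; the volumes are made-up placeholders, not the Hebron audit data, and follow the usual definition of NRW as system input volume minus billed authorized consumption.

```python
# Non-revenue water from a simple water balance (illustrative numbers).
def non_revenue_water(system_input_m3, billed_authorized_m3):
    nrw_m3 = system_input_m3 - billed_authorized_m3
    return nrw_m3, 100.0 * nrw_m3 / system_input_m3

volume, share = non_revenue_water(system_input_m3=1_000_000, billed_authorized_m3=680_000)
print(f"NRW = {volume} m3 ({share:.1f} % of system input)")   # 32.0 %, i.e. 'in excess of 30%'
```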
Procedia PDF Downloads 296
4936 CO2 Gas Solubility and Foam Generation
Authors: Chanmoly Or, Kyuro Sasaki, Yuichi Sugai, Masanori Nakano, Motonao Imai
Abstract:
The cold drainage mechanism of oil production is a complicated process which involves solubility and foaming processes. Laboratory experiments were carried out to investigate the CO2 gas solubility in hexadecane (as a light oil) and the effect of depressurization processes on microbubble generation. The experimental study of the sensitivity of CO2 gas solubility in hexadecane to temperature and pressure was conducted at temperatures of 20 °C and 50 °C and pressures ranging from 2.0 to 7.0 MPa by using a PVT (RUSKA Model 2370) apparatus. Foamy hexadecane samples were also prepared by depressurizing from a saturation pressure of 6.4 MPa and a temperature of 50 °C. The experimental results show that the CO2 gas solubility in hexadecane increases linearly with increasing pressure. At a pressure of 4.5 MPa, 2.5 mmol·g⁻¹ of CO2 gas dissolved in hexadecane at a temperature of 50 °C and 3.5 mmol·g⁻¹ at 20 °C. Observation of the foamy hexadecane bubbles showed that most of the large bubbles coalesced shortly, whereas the small ones persisted. The experimental results for foamy hexadecane indicated that a large depressurization step (∆P) produces high-quality foam with a dense microbubble distribution.
Keywords: CO2 gas solubility, depressurization process, foamy hexadecane, microbubble distribution
Procedia PDF Downloads 491
4935 Wear Characteristics of Al Based Composites Fabricated with Nano Silicon Carbide Particles
Authors: Mohammad Reza Koushki Ardestani, Saeed Daneshmand, Mohammad Heydari Vini
Abstract:
In the present study, AA7075/SiO2 composites have been fabricated via a liquid metallurgy process. Using the degassing process, the wettability of the molten aluminum alloy increased, which improved the bonding between the aluminum matrix and the reinforcement (SiO2) particles. AA7075 alloy and SiO2 particles were taken as the base matrix and reinforcement, respectively. Then, contents of 2.5 and 5 wt. % of SiO2 particles were added into the AA7075 matrix. To improve wettability and distribution, the reinforcement particles were pre-heated to a temperature of 550 °C for each composite sample. A uniform distribution of SiO2 particles through the matrix alloy was observed in the microstructural study. The wear rate was evaluated on a pin-on-disc wear testing machine containing a hardened EN32 steel disc as the counter face. The results showed that the wear rate of the AA/SiO2 composites was lower than that of the monolithic AA7075 samples. Finally, the worn surfaces of the samples were investigated by SEM.
Keywords: Al7075, SiO₂, wear, composites, stir casting
Procedia PDF Downloads 100
4934 Effect of Fuel Lean Reburning Process on NOx Reduction and CO Emission
Authors: Changyeop Lee, Sewon Kim
Abstract:
Reburning is a useful technology for reducing nitric oxide through the injection of a secondary hydrocarbon fuel. In this paper, an experimental study has been conducted to evaluate the effect of fuel lean reburning on NOx/CO reduction in an LNG flame. Experiments were performed in flames stabilized by a co-flow swirl burner, which was mounted at the bottom of the furnace. Tests were conducted using LNG as the reburn fuel as well as the main fuel. The effects of the reburn fuel fraction and the injection manner of the reburn fuel were studied when the fuel lean reburning system was applied. The paper reports data on flue gas emissions and temperature distribution in the furnace for a wide range of experimental conditions. At steady state, the temperature distribution and emission formation in the furnace have been measured and compared. This paper makes clear that when the pulsated fuel lean reburning system is adopted, control of factors such as frequency and duty ratio is important in order to decrease both NOx and CO concentrations in the exhaust. It also shows that fuel lean reburning is as effective a method for reducing NOx as conventional reburning.
Keywords: fuel lean reburn, NOx, CO, LNG flame
Procedia PDF Downloads 423
4933 Targeting Mineral Resources of the Upper Benue trough, Northeastern Nigeria Using Linear Spectral Unmixing
Authors: Bello Yusuf Idi
Abstract:
The Gongola arm of the Upper Benue Trough, Northeastern Nigeria is predominantly covered by outcrops of limestone-bearing rocks in the form of sandstone with intercalations of carbonate clay, shale, basaltic, feldspathic and migmatite rocks at subpixel dimension. In this work, a subpixel classification algorithm was used to classify the data acquired from the Landsat 7 Enhanced Thematic Mapper (ETM+) satellite system with the aim of producing a fractional distribution image for the three most economically important solid minerals of the area: limestone, basalt and migmatite. The Linear Spectral Unmixing (LSU) algorithm was used to produce a fractional abundance image of the three mineral resources within a 100 km² portion of the area. The results show that the minerals occur in different proportions all over the area. The fractional map could therefore serve as a guide to the ongoing reconnaissance of the economic potential of the formation.
Keywords: linear spectral un-mixing, upper benue trough, gongola arm, geological engineering
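The linear spectral unmixing step can be sketched as a per-pixel least-squares problem as below; the endmember reflectance values are hypothetical placeholders rather than spectra derived from the ETM+ scene, and a fully constrained solver would normally replace the simple clip-and-renormalize shown here.

```python
# Per-pixel linear spectral unmixing for three endmembers (illustrative sketch).
import numpy as np

def unmix(pixel_spectrum, endmembers):
    """Least-squares abundance estimate, then clipped and renormalized so the
    fractions are non-negative and sum to one (a simple approximation of
    fully constrained LSU)."""
    fractions, *_ = np.linalg.lstsq(endmembers.T, pixel_spectrum, rcond=None)
    fractions = np.clip(fractions, 0.0, None)
    return fractions / fractions.sum()

endmembers = np.array([
    [0.30, 0.35, 0.40, 0.55, 0.60, 0.50],   # limestone (hypothetical reflectances)
    [0.10, 0.12, 0.15, 0.20, 0.25, 0.22],   # basalt
    [0.20, 0.22, 0.28, 0.35, 0.45, 0.40],   # migmatite
])
pixel = 0.5 * endmembers[0] + 0.3 * endmembers[1] + 0.2 * endmembers[2]
print(unmix(pixel, endmembers))   # approximately [0.5, 0.3, 0.2]
```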
Procedia PDF Downloads 369
4932 Exploring Exposed Political Economy in Disaster Risk Reduction Efforts in Bangladesh
Authors: Shafiqul Islam, Cordia Chu
Abstract:
Bangladesh is one of the countries most vulnerable to climate-related disasters such as floods and cyclones. Drawing on semi-structured in-depth interviews with 38 stakeholders and a literature review, this study examined the public spending distribution process in DRR. This paper demonstrates how the political economy processes of enclosure, exclusion, encroachment, and entrenchment hinder the Disaster Risk Reduction (DRR) efforts of the Department of Disaster Management (DDM), such as the distribution of flood centres, cyclone centres and 40-day employment generation programs. Enclosure refers to when DRR projects are allocated to less vulnerable areas or expand the roles of influencing actors into the public sphere. Exclusion refers to when DRR projects limit affected people’s access to resources or marginalize particular stakeholders in decision-making activities. Encroachment refers to when the allocation of DRR projects and the selection of locations and issues degrade the environment or contribute to other forms of disaster risk. Entrenchment refers to when DRR projects aggravate the disempowerment of common people and worsen the concentration of wealth and income inequality within a community. In line with the United Nations (UN) Sustainable Development Goals (SDGs) and the Hyogo and Sendai Frameworks, in the case of Bangladesh, DRR policies are implemented under the country’s national five-year plan and disaster-related acts and rules. These policies and practices have somehow enabled influential elites to mobilize and distribute resources through bureaucracies. Exclusionary forms of DRR fund distribution exist at both the national and local scales. DRR-related allocations have encroached through lowland area development projects without consulting local needs. Most severely, DRR-related unequal allocations have entrenched social class, trapping disadvantaged communities that are vulnerable to climate-related disasters. Planners and practitioners of DRR need to take the necessary steps to eliminate the potential risks from the processes of enclosure, exclusion, encroachment, and entrenchment that happen in project fund allocations.
Keywords: Bangladesh, disaster risk reduction, fund distribution, political economy
Procedia PDF Downloads 129
4931 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki
Abstract:
Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also become of great scientific interest in terms of its regulation. It has been experimentally demonstrated in recent studies that LAC is dominantly composed of traffic and wood burning aerosol, particularly under wintertime urban conditions when the photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles such as optical absorption and size distribution can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently a new method has been proposed for the apportionment of wood burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, it also opens up novel possibilities for source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method of the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous particulate selective source apportionment study was performed for ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood burning aerosol has been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064nm)/AOC(266nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified to correspond to traffic and wood burning aerosols. The method offers the possibility of replacing laborious chemical analysis with the simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical absorption based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous sources of emission.
Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol
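The two-component apportionment principle can be sketched for a pair of wavelengths as below; the wavelengths match those named in the abstract, but the Angström exponents and absorption values are hypothetical placeholders rather than the 4λ-PAS results.

```python
# Two-wavelength split of measured absorption into traffic and wood-burning parts (illustrative).
import numpy as np

def apportion(b_abs_uv, b_abs_ir, lam_uv=266.0, lam_ir=1064.0, aae_ff=1.0, aae_wb=2.0):
    """Split absorption measured at a UV and an IR wavelength into fossil-fuel
    (traffic) and wood-burning contributions using their assumed Angström exponents."""
    r = lam_uv / lam_ir
    # unknowns: absorption of each component at the IR wavelength
    A = np.array([[r ** (-aae_ff), r ** (-aae_wb)],
                  [1.0,            1.0          ]])
    b_ff_ir, b_wb_ir = np.linalg.solve(A, np.array([b_abs_uv, b_abs_ir]))
    return b_ff_ir, b_wb_ir

b_ff, b_wb = apportion(b_abs_uv=56.0, b_abs_ir=8.0)    # Mm^-1, made-up values
print(f"traffic: {b_ff:.1f} Mm^-1, wood burning: {b_wb:.1f} Mm^-1")
```

Dividing each component's absorption by its specific absorption cross section would then convert the split into mass concentrations.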
Procedia PDF Downloads 226
4930 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer
Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo
Abstract:
Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of production of sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. The crystallization quality is examined by the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). Lack of real-time measurement of the sugar crystal size hinders its feedback control and eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables which are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using 7 input variables, namely initial crystal size (Lo), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial super-saturation (S0) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models, and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model. It seems the initial crystal size (Lₒ) does not play a significant role. The goodness of the resulting regression model was evaluated. The coefficient of determination, R², was obtained as 0.994, and the maximum absolute relative error (MARE) was obtained as 4.6%. The high R² (~1.0) and the reasonably low MARE values are an indication that the model is able to predict sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during the sugar crystallization process in a fed-batch vacuum evaporative crystallizer.
Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer
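A schematic sketch of fitting and scoring such a regression model is given below; the synthetic inputs and coefficients are placeholders for the 128 factorial-design datasets, and only the R² and MARE bookkeeping follows the abstract.

```python
# Multiple linear regression scored with R^2 and maximum absolute relative error (illustrative).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(128, 6))            # T, P, Ff, Fs, S0, t (scaled)
true_coef = np.array([0.8, -0.3, 0.5, 0.2, 1.1, 0.6])
y = 0.4 + X @ true_coef + rng.normal(0.0, 0.01, 128)  # synthetic crystal size

A = np.column_stack([np.ones(len(X)), X])           # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
mare = 100.0 * np.max(np.abs((y - y_hat) / y))
print(f"R^2 = {r2:.3f}, MARE = {mare:.1f} %")
```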
Procedia PDF Downloads 206
4929 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI
Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil
Abstract:
The paper is devoted to numerically investigating the influence of the air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in the room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that supplies the direction and magnitude of air flow into the room is developed. The effect of air distribution on thermal comfort parameters was investigated by changing the air supply diffuser type, angles and velocity. Air supply diffuser locations and numbers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software Fluent 6.3 is incorporated to solve the differential equations governing the conservation of mass, momentum (three components) and energy in the processing of air flow distribution. Turbulence effects of the flow are represented by a well-developed two-equation turbulence model. In this work, the so-called standard k-ε turbulence model, one of the most widespread turbulence models for industrial applications, was utilized. The basic parameters included in this work, namely air dry-bulb temperature, air velocity, relative humidity and turbulence parameters, are used for numerical predictions of indoor air distribution and thermal comfort. The thermal comfort predictions in this work were based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model and the PPD (Percentage People Dissatisfied) model; the PMV and PPD were estimated using Fanger's model.
Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency
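A minimal sketch of the ADPI calculation referred to above is given below: the index is the fraction of sampled points whose effective draft temperature falls in the comfortable band and whose local velocity stays below 0.35 m/s; the sample points are made-up values, not results from the CFD model.

```python
# Air Diffusion Performance Index from point-wise temperature and velocity data (illustrative).
import numpy as np

def adpi(temps_c, velocities_ms, t_room_c):
    """ADPI (%) using the effective draft temperature theta = (T - T_room) - 8 (V - 0.15)."""
    temps = np.asarray(temps_c, dtype=float)
    vels = np.asarray(velocities_ms, dtype=float)
    theta = (temps - t_room_c) - 8.0 * (vels - 0.15)
    ok = (theta > -1.7) & (theta < 1.1) & (vels < 0.35)
    return 100.0 * ok.mean()

temps = [23.8, 24.2, 24.9, 23.5, 25.6]      # °C at sampled points
vels = [0.12, 0.20, 0.30, 0.10, 0.45]       # m/s at the same points
print(f"ADPI = {adpi(temps, vels, t_room_c=24.0):.0f} %")
```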
Procedia PDF Downloads 408
4928 Estimation of the Upper Tail Dependence Coefficient for Insurance Loss Data Using an Empirical Copula-Based Approach
Authors: Adrian O'Hagan, Robert McLoughlin
Abstract:
Considerable focus in the world of insurance risk quantification is placed on modeling loss values from lines of business (LOBs) that possess upper tail dependence. Copulas such as the Joe, Gumbel and Student-t copula may be used for this purpose. The copula structure imparts a desired level of tail dependence on the joint distribution of claims from the different LOBs. Alternatively, practitioners may possess historical or simulated data that already exhibit upper tail dependence, through the impact of catastrophe events such as hurricanes or earthquakes. In these circumstances, it is not desirable to induce additional upper tail dependence when modeling the joint distribution of the loss values from the individual LOBs. Instead, it is of interest to accurately assess the degree of tail dependence already present in the data. The empirical copula and its associated upper tail dependence coefficient are presented in this paper as robust, efficient means of achieving this goal.
Keywords: empirical copula, extreme events, insurance loss reserving, upper tail dependence coefficient
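A minimal sketch of an empirical-copula estimate of the upper tail dependence coefficient is shown below; the threshold u = 0.95 and the simulated loss series are illustrative choices, not the paper's estimator settings or data.

```python
# Empirical upper tail dependence coefficient lambda_U(u) = 2 - (1 - C_n(u, u)) / (1 - u).
import numpy as np

def upper_tail_dependence(x, y, u=0.95):
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1        # ranks 1..n
    ry = np.argsort(np.argsort(y)) + 1
    c_uu = np.mean((rx / n <= u) & (ry / n <= u))   # empirical copula at (u, u)
    return 2.0 - (1.0 - c_uu) / (1.0 - u)

rng = np.random.default_rng(7)
z = rng.normal(size=5000)
loss_a = np.exp(z + 0.5 * rng.normal(size=5000))   # correlated lognormal "LOB" losses
loss_b = np.exp(z + 0.5 * rng.normal(size=5000))
print(f"lambda_U(0.95) estimate: {upper_tail_dependence(loss_a, loss_b):.2f}")
```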
Procedia PDF Downloads 282
4927 The Distribution of rs5219 Polymorphism in the Non-Diabetic Elderly Jordanian Subject
Authors: Foad Alzoughool
Abstract:
Studies on the association between the rs5219 polymorphism and type 2 diabetes are conflicting: some studies have confirmed a strong relationship between this variant and type 2 diabetes, while many others have denied the existence of this association. This study aimed to provide evidence about whether or not the rs5219 polymorphism has a role as a risk factor for diabetes, together with a meta-analysis to investigate the role of the age of the control group in the association. Genotyping of the rs5219 polymorphism was performed in a cohort of 266 healthy elderly subjects with a mean age of 60.2 ± 5.1 years and no history of diabetes (HbA1c < 6%) using standard Sanger sequencing methods. Lys/Lys alleles were detected in 20 persons (7.5%), Lys/Glu alleles in 96 persons (36.1%), and Glu/Glu in 150 persons (56.4%). The genotype distribution was consistent with Hardy–Weinberg equilibrium (P = 0.7). The meta-analysis notably indicates no association between the rs5219 polymorphism and type 2 diabetes in all studies that used a control group younger than the patients. In conclusion, our study sheds light on the importance of the age factor in the control groups recruited in case-control studies.
Keywords: Type 2 diabetes, rs5219 polymorphism, E23K, KCNJ11 gene
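A minimal sketch of a Hardy–Weinberg equilibrium check from the reported genotype counts (Lys/Lys 20, Lys/Glu 96, Glu/Glu 150) is shown below; a simple chi-square goodness-of-fit is used here, which need not reproduce the exact test or P value the authors applied.

```python
# Chi-square test of Hardy-Weinberg equilibrium from genotype counts (illustrative).
from scipy.stats import chi2

n_kk, n_ke, n_ee = 20, 96, 150
n = n_kk + n_ke + n_ee
p_lys = (2 * n_kk + n_ke) / (2 * n)                  # Lys (K) allele frequency
expected = [n * p_lys ** 2, 2 * n * p_lys * (1 - p_lys), n * (1 - p_lys) ** 2]
observed = [n_kk, n_ke, n_ee]
stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p_value = chi2.sf(stat, df=1)                         # 3 classes - 1 - 1 estimated allele frequency
print(f"Lys allele frequency = {p_lys:.3f}, chi2 = {stat:.2f}, p = {p_value:.2f}")
```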
Procedia PDF Downloads 157
4926 Supply Network Design for Production-Distribution of Fish: A Sustainable Approach Using Mathematical Programming
Authors: Nicolás Clavijo Buriticá, Laura Viviana Triana Sanchez
Abstract:
This research develops a productive context associated with the aquaculture industry in northern Tolima, Colombia, specifically in the town of Lerida. Strategic aspects of the fish production-distribution chain are addressed, especially those related to the supply network design of an association devoted to the cultivating, farming, processing and marketing of fish. This research is approached from a special perspective on Supply Chain Management (SCM) which guides management objectives toward system sustainability; this approach is called Sustainable Supply Chain Management (SSCM). The network design of the fish production-distribution system is obtained for the case study by two mathematical programming models that aim to maximize the economic benefits of the chain and minimize total supply chain costs, taking into account restrictions to protect the environment and its implications on system productivity. The results of the mathematical models, validated in the productive situation of the partnership under study, called Asopiscinorte, show the variation in the number of open or closed locations in the supply network that determines the final network configuration. For the case study, this proposed result generates an increase of 31.5% in the partial productivity of storage and processing, in addition to possible favorable long-term implications, such as whether or not to serve a given consumer area in an agile way, whether or not to increase the level of sales in several areas, and meeting the quantity, time and cost requirements of work in progress and finished goods for various actors in the chain.
Keywords: Sustainable Supply Chain, mathematical programming, aquaculture industry, Supply Chain Design, Supply Chain Configuration
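A toy sketch of the kind of mixed-integer network-design model referred to above is given below, written with the open-source PuLP package as a convenience (the paper does not state its modeling tool); the facilities, capacities, demands and costs are made-up placeholders, not the Asopiscinorte data.

```python
# Toy facility-location / network-design MIP (illustrative data, PuLP modeling).
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

facilities = ["F1", "F2", "F3"]
customers = ["C1", "C2"]
fixed = {"F1": 100, "F2": 80, "F3": 120}                 # opening cost
cap = {"F1": 60, "F2": 40, "F3": 80}                     # capacity (t of fish)
demand = {"C1": 50, "C2": 45}
ship_cost = {("F1", "C1"): 2, ("F1", "C2"): 4, ("F2", "C1"): 3,
             ("F2", "C2"): 1, ("F3", "C1"): 5, ("F3", "C2"): 2}

model = LpProblem("fish_network_design", LpMinimize)
open_f = {j: LpVariable(f"open_{j}", cat=LpBinary) for j in facilities}
flow = {(j, k): LpVariable(f"flow_{j}_{k}", lowBound=0)
        for j in facilities for k in customers}

model += lpSum(fixed[j] * open_f[j] for j in facilities) + \
         lpSum(ship_cost[j, k] * flow[j, k] for j in facilities for k in customers)
for k in customers:
    model += lpSum(flow[j, k] for j in facilities) == demand[k]          # meet demand
for j in facilities:
    model += lpSum(flow[j, k] for k in customers) <= cap[j] * open_f[j]  # capacity if open

model.solve()
print({j: int(value(open_f[j])) for j in facilities}, "cost =", value(model.objective))
```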
Procedia PDF Downloads 535
4925 Steady-State Behavior of a Multi-Phase M/M/1 Queue in Random Evolution Subject to Catastrophe Failure
Authors: Reni M. Sagayaraj, Anand Gnana S. Selvam, Reynald R. Susainathan
Abstract:
In this paper, we consider stochastic queueing models for the steady-state behavior of a multi-phase M/M/1 queue in random evolution subject to catastrophe failure. The arrival flow of customers is described by a marked Markovian arrival process. The service times of the different customer types have a phase-type distribution with different parameters. To facilitate the investigation of the system we use a generalized phase-type service time distribution. This model contains a repair state; when a catastrophe occurs, the system is transferred to the failure state. The paper focuses on the steady-state equation, and the steady-state behavior of the underlying queueing model, along with the average queue size, is analyzed.
Keywords: M/G/1 queuing system, multi-phase, random evolution, steady-state equation, catastrophe failure
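A toy sketch of the steady-state computation behind such an analysis is given below: it solves pi Q = 0 with the normalization sum(pi) = 1 for a small birth-death queue augmented with a catastrophe/failure-and-repair state; all rates and the truncation level are made-up, and the model is far simpler than the marked-MAP, phase-type system of the paper.

```python
# Steady-state distribution of a small CTMC with catastrophe and repair (illustrative).
import numpy as np

lam, mu = 1.0, 1.5          # arrival and service rates
gamma, delta = 0.05, 0.5    # catastrophe and repair rates
K = 5                        # queue states 0..K-1, plus one failure state
n = K + 1
Q = np.zeros((n, n))
for i in range(K):
    if i < K - 1:
        Q[i, i + 1] = lam            # arrival
    if i > 0:
        Q[i, i - 1] = mu             # service completion
    Q[i, K] = gamma                  # catastrophe sends the system to the failure state
Q[K, 0] = delta                      # repair returns to the empty queue
np.fill_diagonal(Q, -Q.sum(axis=1))  # generator rows sum to zero

A = np.vstack([Q.T, np.ones(n)])     # pi Q = 0 together with the normalization
b = np.zeros(n + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
mean_queue = sum(i * pi[i] for i in range(K))
print(np.round(pi, 4), "mean queue size:", round(mean_queue, 3))
```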
Procedia PDF Downloads 327
4924 Optical and Double Folding Model Analysis for Alpha Particles Elastically Scattered from 9Be and 11B Nuclei at Different Energies
Authors: Ahmed H. Amer, A. Amar, Sh. Hamada, I. I. Bondouk, F. A. El-Hussiny
Abstract:
Elastic scattering of α-particles from 9Be and 11B nuclei at different alpha energies has been analyzed. Optical model parameters (OMPs) of α-particle elastic scattering by these nuclei at different energies have been obtained. In the present calculations, the real part of the optical potential is derived by folding the nucleon-nucleon (NN) interaction into the nuclear matter density distributions of the projectile and target nuclei using the computer code FRESCO. A density-dependent version of the M3Y interaction (CDM3Y6), which is based on the G-matrix elements of the Paris NN potential, has been used. Volume integrals of the real and imaginary potential depths (JR, JW) have been calculated and found to be energy dependent. Good agreement between the experimental data and the theoretical predictions is obtained over the whole angular range. In the double folding (DF) calculations, the obtained normalization coefficient Nr is in the range 0.70–1.32.
Keywords: elastic scattering, optical model, double folding model, density distribution
Procedia PDF Downloads 289
4923 On Crack Tip Stress Field in Pseudo-Elastic Shape Memory Alloys
Authors: Gulcan Ozerim, Gunay Anlas
Abstract:
In shape memory alloys, upon loading, stress increases around the crack tip and a martensitic phase transformation occurs in the early stages. In many studies the stress distribution in the vicinity of the crack tip is represented by using linear elastic fracture mechanics (LEFM), although the pseudo-elastic behavior results in a nonlinear stress-strain relation. In this study, the HRR singularity (Hutchinson, Rice and Rosengren), which uses Rice’s path-independent J-integral, is used to formulate the stress distribution around the crack tip. In the HRR approach, the Ramberg-Osgood model for the stress-strain relation of power-law hardening materials is used to represent the elastic-plastic behavior. Although it is recoverable, the inelastic portion of the deformation in the martensitic transformation (up to the end of transformation) resembles that of plastic deformation. To determine the constants of the Ramberg-Osgood equation, the material’s response is simulated in ABAQUS using a UMAT based on the ZM (Zaki-Moumni) thermo-mechanically coupled model, and the stress-strain curve of the material is plotted. An edge-cracked shape memory alloy (Nitinol) plate is loaded quasi-statically under mode I and modeled using ABAQUS; the opening stress values ahead of the crack tip are calculated. The stresses are also evaluated using the asymptotic equations of both LEFM and HRR. The results show that in the transformation zone around the crack tip, the stress values are much better represented when the HRR singularity is used, although the J-integral does not show path-independent behavior. For the nodes very close to the crack tip, the HRR singularity is not valid due to the non-proportional loading effect and high stress values that go beyond the transformation finish stress.
Keywords: crack, HRR singularity, shape memory alloys, stress distribution
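For reference, the standard textbook forms of the Ramberg-Osgood law and the HRR crack-tip field used in such comparisons are sketched below; the notation (σ₀, ε₀, α, n, Iₙ, σ̃ᵢⱼ) is ours and may differ slightly from the authors'.

```latex
% Ramberg-Osgood hardening law and the HRR asymptotic crack-tip stress field
% (standard forms; the notation is ours, not necessarily the authors').
\frac{\varepsilon}{\varepsilon_0} = \frac{\sigma}{\sigma_0}
    + \alpha\left(\frac{\sigma}{\sigma_0}\right)^{n},
\qquad
\sigma_{ij}(r,\theta) = \sigma_0
    \left(\frac{J}{\alpha\,\sigma_0\,\varepsilon_0\,I_n\,r}\right)^{\frac{1}{n+1}}
    \tilde{\sigma}_{ij}(\theta;n)
```

Here σ₀ and ε₀ are reference stress and strain, α and n the Ramberg-Osgood constants, Iₙ a dimensionless integration constant depending on n, and σ̃ᵢⱼ the angular distribution functions.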
Procedia PDF Downloads 324
4922 Optical Fiber Data Throughput in a Quantum Communication System
Authors: Arash Kosari, Ali Araghi
Abstract:
A mathematical model for an optical-fiber communication channel is developed which results in an expression that calculates the throughput and loss of the corresponding link. The data are assumed to be transmitted by using separate photons with different polarizations. The derived model also shows the dependence of data throughput on the channel length and the depolarization factor. It is observed that absorption of photons affects the throughput more strongly than depolarization does. Apart from that, the probability of depolarization and the absorption of radiated photons are obtained.
Keywords: absorption, data throughput, depolarization, optical fiber
Procedia PDF Downloads 284
4921 The Richtmyer-Meshkov Instability Impacted by the Interface with Different Components Distribution
Authors: Sheng-Bo Zhang, Huan-Hao Zhang, Zhi-Hua Chen, Chun Zheng
Abstract:
In this paper, the Richtmyer-Meshkov instability caused by the interaction between a shock wave and a helium circular light gas cylinder with different component distributions has been studied numerically by using the high-resolution Roe scheme based on the two-dimensional unsteady Euler equations. The numerical results further discuss the deformation process of the gas cylinder and the wave structure of the flow field, and quantitatively analyze the characteristic dimensions (length, height, and central axial width) of the gas cylinder and the volume compression ratio of the cylinder over time. In addition, the flow mechanism of shock-driven interface gas mixing is analyzed from multiple perspectives by combining it with the flow field pressure, velocity, circulation, and gas mixing rate. Then the effects of different initial component distribution conditions on interface instability are investigated. The results show that as the diffuse interface transitions to a sharp interface, the reflection coefficient gradually increases on both sides of the interface. When the incident shock wave interacts with the cylinder, the transmission of the shock wave transitions from conventional transmission to unconventional transmission. At the same time, the reflected shock wave is gradually strengthened and the transmitted shock wave is gradually weakened, which leads to an increase in the Richtmyer-Meshkov instability. Moreover, the Atwood number on both sides of the interface also increases as the diffuse interface transitions to a sharp interface, which leads to an increase in the Rayleigh-Taylor instability and the Kelvin-Helmholtz instability. Therefore, the increase in instability leads to an increase in the circulation, resulting in an increase in the growth rate of the gas mixing rate.
Keywords: shock wave, He light cylinder, Richtmyer-Meshkov instability, Gaussian distribution
Procedia PDF Downloads 76
4920 Select-Low and Select-High Methods for the Wheeled Robot Dynamic States Control
Authors: Bogusław Schreyer
Abstract:
The paper examines two methods of wheeled robot braking torque control. These two methods are applied when the adhesion coefficient under the left-side wheels is different from the adhesion coefficient under the right-side wheels. In the select-low (SL) method, the braking torque on both wheels is controlled by the signals originating from the wheels on the side of the lower adhesion. In the select-high (SH) method, the torque is controlled by the signals originating from the wheels on the side of the higher adhesion. The SL method secures stable and safe robot behavior during the braking process. However, the efficiency of this method is relatively low. The SH method is more efficient in terms of time and braking distance, but in some situations it may cause wheel locking. It is important to monitor the velocity of all wheels and then take a decision about the braking torque distribution accordingly. In the SH method, the braking torque slope may require a significant decrease in order to avoid wheel locking.
Keywords: select-high, select-low, torque distribution, wheeled robots
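A minimal control-logic sketch of the select-low / select-high idea is given below; the slip-to-torque back-off rule and all numbers are hypothetical placeholders, not the paper's controller.

```python
# Select-low vs. select-high braking torque command (illustrative placeholder controller).

def braking_torque_command(slip_left, slip_right, max_torque, method="SL"):
    """Pick the governing wheel slip and map it to a torque command.
    Higher slip indicates lower adhesion, so SL follows the wheel with the
    larger slip and SH follows the wheel with the smaller slip."""
    governing_slip = max(slip_left, slip_right) if method == "SL" else min(slip_left, slip_right)
    target_slip = 0.15                      # hypothetical slip setpoint
    # simple proportional back-off: reduce torque as slip exceeds the target
    scale = max(0.0, 1.0 - 4.0 * max(0.0, governing_slip - target_slip))
    return max_torque * scale

print(braking_torque_command(0.10, 0.30, max_torque=200.0, method="SL"))  # cautious, stable
print(braking_torque_command(0.10, 0.30, max_torque=200.0, method="SH"))  # aggressive, risks locking
```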
Procedia PDF Downloads 118
4919 Optimal Power Distribution and Power Trading Control among Loads in a Smart Grid Operated Industry
Authors: Vivek Upadhayay, Siddharth Deshmukh
Abstract:
In recent years, the utilization of renewable energy sources has increased greatly because of growing global warming concerns. Organizations these days are generally operated by a microgrid or smart grid on a small scale. Power optimization and optimal load tripping are possible in a smart-grid-based industry. In any plant or industry, loads can be divided into different categories based on their importance to the plant and their power requirement patterns on working days. Dividing loads into such categories and providing a different power management algorithm for each category of load can reduce the power cost and come in handy in balancing the stability and reliability of power. An objective function is defined in terms of a variable that is to be minimized. Constraint equations are formed by taking the difference between the power usage pattern of the present day and that of the same day of the previous week. By considering the objectives of minimal load tripping and optimal power distribution, the proposed problem formulation is a multi-objective optimization problem. Through normalization of each objective function, the multi-objective optimization is transformed into a single-objective optimization. As a result, the optimized values of power required by each load for the present day are obtained by use of the past values of the required power for the same day of last week. This is essentially demand-response scheduling of power. These minimized values will then be distributed to each load through an algorithm used to optimize the power distribution at a greater depth. In case the power storage exceeds the power requirement, profit can be made by selling the excess power to the main grid.
Keywords: power flow optimization, power trading enhancement, smart grid, multi-objective optimization
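The normalization step that turns the multi-objective formulation into a single objective can be sketched as a weighted sum of range-scaled objectives; the two objectives, their bounds and the weights below are hypothetical.

```python
# Weighted-sum scalarization of normalized objectives (illustrative).
def scalarize(objectives, bounds, weights):
    """objectives: current values f_i; bounds: (min_i, max_i) per objective;
    weights: importance of each normalized objective."""
    total = 0.0
    for f, (f_min, f_max), w in zip(objectives, bounds, weights):
        total += w * (f - f_min) / (f_max - f_min)   # each term scaled to [0, 1]
    return total

# objective 1: number of tripped loads, objective 2: deviation from last week's usage (kW)
print(scalarize(objectives=[3, 120.0],
                bounds=[(0, 10), (0.0, 500.0)],
                weights=[0.6, 0.4]))
```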
Procedia PDF Downloads 522
4918 Production Process of Coconut-Shell Product in Amphawa District
Authors: Wannee Sutthachaidee
Abstract:
The study of the production process of coconut-shell products in Amphawa, Samutsongkram Province aims to examine the pattern of the production process by focusing on the 3 main processes, which are the inbound logistics process, the production process and the outbound process. The result of the research: there were 4 main results from the study. Firstly, most manufacturers of coconut-shell products are owned by a single owner, the quantity of the finished product is quite low, and the main labor group is local people. Secondly, the production process can be divided into 4 stages, which are the pre-production process, the production process, the packaging process and the distribution process. Thirdly, problems can be found in each of the 3 logistics processes of the coconut-shell business, but the process with the most problems is the production process, because it needs skilled labor and the quantity of labor does not match the demand from the customers. Lastly, the factors which affect the production process of coconut-shell products can be found in almost every stage of the process, such as production design, packaging design, supply sourcing and distribution management.
Keywords: production process, coconut-shell product, Amphawa District, inbound logistics process
Procedia PDF Downloads 519
4917 Policy Views of Sustainable Integrated Solution for Increased Synergy between Light Railways and Electrical Distribution Network
Authors: Mansoureh Zangiabadi, Shamil Velji, Rajendra Kelkar, Neal Wade, Volker Pickert
Abstract:
The EU has set itself a long-term goal of reducing greenhouse gas emissions by 80-95% of the 1990 levels by 2050, as set out in the Energy Roadmap 2050. This paper reports on the European Union H2020 funded E-Lobster project, which demonstrates tools and technologies, software and hardware, for integrating the distribution grid and the railway power systems with power electronics technologies (Smart Soft Open Point - sSOP) and local energy storage. In this context, this paper describes the existing policies and regulatory frameworks of the energy market at the European level, with a special focus then at the national level on the countries where the members of the consortium are located and where the demonstration activities will be implemented. Taking into account the disciplinary approach of E-Lobster, the main policy areas investigated include electricity, the energy market, energy efficiency, transport and smart cities. Energy storage will play a key role in enabling the EU to develop a low-carbon electricity system. In recent years, Energy Storage Systems (ESSs) have been gaining importance due to emerging applications, especially the electrification of the transportation sector and the grid integration of volatile renewables. The need for storage systems has led to performance improvements in ESS technologies and a significant price decline. This allows for opening a new market where ESSs can be a reliable and economical solution. One such emerging market for ESSs is R+G management, which will be investigated and demonstrated within the E-Lobster project. The surplus of energy in one type of power system (e.g., due to metro braking) might be directly transferred to the other power system (or vice versa). However, this would usually happen at unfavourable instants when the recipient does not need additional power. Thus, the role of the ESS is to enhance the advantages coming from the interconnection of the railway power systems and distribution grids by offering an additional energy buffer. Consequently, the surplus/deficit of energy in, e.g., railway power systems is not to be immediately transferred to/from the distribution grid, but it can be stored and used when it is really needed. This will assure better energy exchange management between the railway power systems and distribution grids and lead to more efficient loss reduction. In this framework, identifying the existing policies and regulatory frameworks is crucial for the project activities and for the future development of business models for the E-Lobster solutions. The projections carried out by the European Commission, the Member States and stakeholders, and their analysis, indicated some trends, challenges, opportunities and structural changes needed to design the policy measures that provide the appropriate framework for investors. This study will be used as a reference for the discussion in the envisaged workshops with stakeholders (DSOs and Transport Managers) in the E-Lobster project.
Keywords: light railway, electrical distribution network, Electrical Energy Storage, policy
Procedia PDF Downloads 134
4916 Modeling of Drug Distribution in the Human Vitreous
Authors: Judith Stein, Elfriede Friedmann
Abstract:
The injection of a drug into the vitreous body for the treatment of retinal diseases like wet age-related macular degeneration (AMD) is the most common medical intervention worldwide. We develop mathematical models for drug transport in the vitreous body of a human eye to analyse the impact of different rheological models of the vitreous on drug distribution. In addition to the convection-diffusion equation characterizing the drug spreading, we use porous media modeling for the healthy vitreous with a dense collagen network and include the steady permeating flow of the aqueous humor described by Darcy's law driven by a pressure drop. Additionally, the vitreous body in a healthy human eye behaves like a viscoelastic gel through the collagen fibers suspended in the network of hyaluronic acid and acts as a drug depot for the treatment of retinal diseases. In a completely liquefied vitreous, we couple the drug diffusion with the classical Navier-Stokes flow equations. We prove the global existence and uniqueness of the weak solution of the developed initial-boundary value problem describing the drug distribution in the healthy vitreous, considering the permeating aqueous humor flow in the realistic three-dimensional setting. In particular, for the drug diffusion equation, results from the literature are extended from homogeneous Dirichlet boundary conditions to our mixed boundary conditions that describe the eye, with the Galerkin method, using the Cauchy-Schwarz inequality and the trace theorem. Because there is only a small effective drug concentration range and higher concentrations may be toxic, the ability to model the drug transport could improve the therapy by considering patient-individual differences and give a better understanding of the physiological and pathological processes in the vitreous.
Keywords: coupled PDE systems, drug diffusion, mixed boundary conditions, vitreous body
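A one-dimensional toy sketch of the convection-diffusion transport described above is given below; the domain size, coefficients, boundary conditions and injected profile are made-up numbers for illustration, not the eye model's parameters.

```python
# 1-D explicit finite-difference convection-diffusion of a drug bolus (illustrative).
import numpy as np

L, nx = 0.02, 200                  # 2 cm domain, grid points
dx = L / (nx - 1)
D = 1.0e-9                         # diffusion coefficient, m^2/s
v = 1.0e-8                         # small permeating (Darcy-like) velocity, m/s
dt = 0.4 * dx * dx / D             # stable explicit time step
c = np.zeros(nx)
c[nx // 2 - 5: nx // 2 + 5] = 1.0  # initial drug bolus

for _ in range(2000):
    dcdx = (c[2:] - c[:-2]) / (2 * dx)                 # convection term
    d2cdx2 = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2  # diffusion term
    c[1:-1] += dt * (D * d2cdx2 - v * dcdx)
    c[0] = c[1]; c[-1] = 0.0       # no-flux on one side, absorbing sink on the other

print("remaining drug fraction:", round(c.sum() / 10.0, 3))
```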
Procedia PDF Downloads 136
4915 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm
Authors: Suparman
Abstract:
The piecewise polynomial regression model is a very flexible model for modeling data. When the piecewise polynomial regression model is fitted to data, its parameters are generally not known. This paper studies the parameter estimation problem of the piecewise polynomial regression model. The method used to estimate the parameters of the piecewise polynomial regression model is the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically. The reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the limiting distribution of the posterior distribution of the piecewise polynomial regression model parameters. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.
Keywords: piecewise regression, Bayesian, reversible jump MCMC, segmentation
Procedia PDF Downloads 371
4914 A Numerical Study on the Effects of N2 Dilution on the Flame Structure and Temperature Distribution of Swirl Diffusion Flames
Authors: Yasaman Tohidi, Shidvash Vakilipour, Saeed Ebadi Tavallaee, Shahin Vakilipoor Takaloo, Hossein Amiri
Abstract:
Numerical modeling is performed to study the effects of N2 addition to the fuel stream on the flame structure and temperature distribution of methane-air swirl diffusion flames with different swirl intensities. The Open Source Field Operation and Manipulation (OpenFOAM) toolbox has been utilized as the computational tool. The flamelet approach, along with a modified k-ε model, is employed to model the flame characteristics. The results indicate that the presence of N2 in the fuel stream leads to a reduction in flame temperature. With increasing swirl intensity, the flame structure changes significantly. The flame has a conical shape at low swirl intensity; however, it has an hourglass shape with a shorter length at high swirl intensity. N2 dilution decreases the flame length at all swirl intensities; however, the rate of reduction is more noticeable at low swirl intensity.
Keywords: swirl diffusion flame, N2 dilution, OpenFOAM, swirl intensity
Procedia PDF Downloads 167
4913 Assessment of Memetic and Genetic Algorithm for a Flexible Integrated Logistics Network
Authors: E. Behmanesh, J. Pannek
Abstract:
The distribution-allocation problem is known as one of the most comprehensive strategic decisions. In real-world cases, it is impossible to solve a distribution-allocation problem in traditional ways within acceptable time. Hence researchers develop efficient non-traditional techniques for the long-term operation of the whole supply chain. These techniques provide near-optimal solutions, particularly for large-scale test problems. This paper presents an integrated supply chain model which is flexible in the delivery path. As the solution methodology, we apply a memetic algorithm with a novelty in the population representation. To illustrate the performance of the proposed memetic algorithm, the LINGO optimization software serves as a comparison basis for small-size problems. For the large-size cases that we are dealing with in the real world, the genetic algorithm is considered as a second metaheuristic algorithm to compare the results and show the efficiency of the memetic algorithm.
Keywords: integrated logistics network, flexible path, memetic algorithm, genetic algorithm
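The memetic idea (a genetic algorithm whose offspring are refined by local search) can be sketched on a toy binary decision problem as below; the cost function, operators and parameters are illustrative and unrelated to the paper's integrated logistics model.

```python
# Memetic algorithm = GA (selection, crossover, mutation) + local search refinement (illustrative).
import random

random.seed(3)
N = 12
fixed_cost = [random.uniform(5, 15) for _ in range(N)]
benefit = [random.uniform(8, 20) for _ in range(N)]

def cost(x):                      # minimize: fixed costs minus served benefit, plus
    penalty = 50.0 if sum(x) < 3 else 0.0        # a penalty if too few facilities are open
    return sum(f * xi for f, xi in zip(fixed_cost, x)) - \
           sum(b * xi for b, xi in zip(benefit, x)) + penalty

def local_search(x):              # first-improvement bit-flip hill climbing (the "meme")
    improved = True
    while improved:
        improved = False
        for i in range(N):
            y = x[:]; y[i] = 1 - y[i]
            if cost(y) < cost(x):
                x, improved = y, True
    return x

def memetic(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    pop = [local_search(ind) for ind in pop]
    for _ in range(generations):
        g1, g2 = random.sample(pop, 2), random.sample(pop, 2)
        p1, p2 = min(g1, key=cost), min(g2, key=cost)          # tournament selection
        child = [random.choice(bits) for bits in zip(p1, p2)]  # uniform crossover
        if random.random() < 0.2:                              # mutation
            i = random.randrange(N); child[i] = 1 - child[i]
        child = local_search(child)                            # memetic refinement
        pop.sort(key=cost); pop[-1] = child                    # replace the worst
    return min(pop, key=cost)

best = memetic()
print("best configuration:", best, "cost:", round(cost(best), 2))
```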
Procedia PDF Downloads 373
4912 Water Balance Components under Climate Change in Croatia
Authors: Jelena Bašić, Višnjica Vučetić, Mislav Anić, Tomislav Bašić
Abstract:
Lack of precipitation combined with high temperatures causes great damage to the agriculture and economy of Croatia. Therefore, it is important to understand water circulation and balance. We decided to gain a better insight into the spatial distribution of water balance components (WBC) and their long-term changes in Croatia. The WBC are precipitation (P), potential evapotranspiration (PET), actual evapotranspiration (ET), soil moisture content (S), runoff (RO), recharge (R), and soil moisture loss (L). Since measurements of the mentioned components in Croatia are very rare, the Palmer model has been applied to estimate them. We refined the method by taking into account a corrective factor to include the influence of wind, as well as the maximum soil capacity for specific soil types. We will present one-hundred-year time series of PET and ET showing the trends at a few meteorological stations, and a comparison of the components for two climatological periods. Meteorological data from 109 stations have been used for the spatial distribution maps of the WBC of Croatia.
Keywords: Croatia, long-term trends, the Palmer method, water balance components
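As an illustration of the kind of PET bookkeeping that feeds Palmer-type water balance models, a Thornthwaite-style monthly PET sketch is given below; the monthly temperatures are made-up values, and the unadjusted formula shown omits the day-length correction as well as the authors' wind and soil-capacity refinements.

```python
# Unadjusted Thornthwaite monthly potential evapotranspiration (illustrative).
def thornthwaite_pet(monthly_temp_c):
    """Unadjusted monthly PET (mm) from mean monthly air temperatures (°C)."""
    heat_index = sum((max(t, 0.0) / 5.0) ** 1.514 for t in monthly_temp_c)
    a = (6.75e-7 * heat_index ** 3 - 7.71e-5 * heat_index ** 2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * max(t, 0.0) / heat_index) ** a for t in monthly_temp_c]

temps = [1.0, 2.5, 7.0, 12.0, 17.0, 21.0, 23.0, 22.5, 18.0, 12.5, 7.0, 2.0]  # °C, made-up
pet = thornthwaite_pet(temps)
print([round(p, 1) for p in pet])
```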
Procedia PDF Downloads 139