Search results for: queue size distribution at a random epoch
11593 An Exploration of the Technical and Economic Feasibility of a Stand Alone Solar PV Generated DC Distribution System over AC Distribution System for Use in the Modern as Well as Future Houses of Isolated Areas
Authors: Alpesh Desai, Indrajit Mukhopadhyay
Abstract:
Standalone photovoltaic (PV) systems are designed and sized to supply certain AC and/or DC electrical loads. In computers, consumer electronics, many small appliances, and LED lighting, the power actually consumed is DC. A DC system, which requires only voltage control, has many advantages, such as straightforward connection of distributed energy sources and reduced conversion losses for DC-based loads. Moreover, using DC power directly reduces the required inverter and solar panel sizes, and hence the overall cost of the system. This paper explores the technical and economic feasibility of supplying electrical power to houses using DC voltage mains within the house. Theoretically calculated results are presented to demonstrate the advantage of a DC system over an AC system with PV for sustainable rural/isolated development. Keywords: distribution system, energy efficiency, off-grid, stand-alone PV system, sustainability, techno-socio-economic
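The sizing advantage claimed for the DC path can be illustrated with a back-of-the-envelope calculation. The efficiency figures below (95% charge regulation, 88% inverter) are illustrative assumptions, not values from the paper:

```python
def required_panel_watts(load_watts, *, charge_eff=0.95, inverter_eff=0.88):
    """Panel wattage needed to serve a load, with and without an inverter stage.
    Efficiency figures are illustrative assumptions, not measured values."""
    dc_direct = load_watts / charge_eff                       # DC load served directly
    via_inverter = load_watts / (charge_eff * inverter_eff)   # AC load via inverter
    return dc_direct, via_inverter
```

Serving the same load directly from DC always needs less panel capacity than serving it through an inverter stage, by exactly the inverter's efficiency factor.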
Procedia PDF Downloads 264
11592 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race
Authors: Joonas Pääkkönen
Abstract:
In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the well-known German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square errors than linear regression, mord regression and Gaussian process regression. Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling
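The two building blocks of the model, the Fenton-Wilkinson moment match for a sum of log-normal leg times and the German tank estimate of the number of teams, can be sketched generically; this is not the authors' full regression function:

```python
import math

def fenton_wilkinson(params):
    """Approximate a sum of independent log-normal variables as one log-normal
    by matching the first two moments (Fenton-Wilkinson).
    params is a list of (mu, sigma) pairs of the individual log-normals."""
    mean = sum(math.exp(mu + s ** 2 / 2) for mu, s in params)
    var = sum((math.exp(s ** 2) - 1) * math.exp(2 * mu + s ** 2) for mu, s in params)
    sigma2 = math.log(1 + var / mean ** 2)
    return math.log(mean) - sigma2 / 2, math.sqrt(sigma2)

def german_tank(max_observed, n_observed):
    """Classic minimum-variance unbiased estimate of the total count
    from the largest 'serial number' (here: place) observed."""
    return max_observed * (1 + 1 / n_observed) - 1
```

With a single input variable, `fenton_wilkinson` returns that variable's own parameters, which is a convenient sanity check.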
Procedia PDF Downloads 125
11591 Simulation of Glass Breakage Using Voronoi Random Field Tessellations
Authors: Michael A. Kraus, Navid Pourmoghaddam, Martin Botz, Jens Schneider, Geralt Siebert
Abstract:
Fragmentation analysis of tempered glass gives insight into the quality of the tempering process and also defines a certain degree of safety. Different standards such as the European EN 12150-1 or the American ASTM C 1048/CPSC 16 CFR 1201 define a minimum number of fragments required for soda-lime safety glass classification on the basis of fragmentation test results. This work presents an approach to glass breakage pattern prediction using a Voronoi tessellation over random fields. The random Voronoi tessellation is trained with and validated against data from several breakage patterns. The fragments in observation areas of 50 mm x 50 mm were used for training and validation. All glass specimens used in this study were commercially available soda-lime glasses at three different thickness levels of 4 mm, 8 mm and 12 mm. The results of this work form a Bayesian framework for the training and prediction of breakage patterns of tempered soda-lime glass using a Voronoi random field tessellation. Uncertainties occurring in this process can be well quantified, and several statistical measures of the pattern can be preserved with this method. Within this work it was found that different random fields as the basis for the Voronoi tessellation lead to differently well-fitted statistical properties of the glass breakage patterns. As the methodology is derived and kept general, the framework could also be applied to other random tessellations and crack pattern modelling purposes. Keywords: glass breakage prediction, Voronoi Random Field Tessellation, fragmentation analysis, Bayesian parameter identification
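A breakage pattern in a 50 mm x 50 mm observation area can be mimicked with a toy nearest-seed tessellation. The uniform random seeds and grid resolution below are illustrative simplifications; the paper drives the tessellation with trained random fields:

```python
import random

def voronoi_fragments(n_seeds, size=50.0, grid=100, rng=None):
    """Toy Voronoi tessellation of a size x size observation area:
    each grid cell is assigned to its nearest random seed point,
    mimicking fragments in a breakage pattern."""
    rng = rng or random.Random(0)
    seeds = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n_seeds)]
    counts = [0] * n_seeds
    step = size / grid
    for i in range(grid):
        for j in range(grid):
            x, y = (i + 0.5) * step, (j + 0.5) * step
            k = min(range(n_seeds),
                    key=lambda s: (seeds[s][0] - x) ** 2 + (seeds[s][1] - y) ** 2)
            counts[k] += 1   # cell belongs to fragment k
    return counts
```

The returned counts are fragment areas in grid cells; their distribution is the kind of pattern statistic a random field would be fitted to.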
Procedia PDF Downloads 161
11590 Parameter Estimation for the Mixture of Generalized Gamma Model
Authors: Wikanda Phaphan
Abstract:
The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution, presented by Suksaengrakcharoen and Bodhisuwan in 2014. Its probability density function (pdf) is fairly complex, which makes parameter estimation difficult: the estimators cannot be obtained in closed form, so numerical estimation must be used. In this study, we present parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. The data were generated by the acceptance-rejection method and used to estimate α, β, λ and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. A Monte Carlo technique was used to assess the estimators' performance, with sample sizes of 10, 30 and 100 and the simulations repeated 20 times in each case. The effectiveness of the estimators was evaluated by considering their mean squared errors and bias. The findings revealed that the estimates from the EM algorithm were closest to the actual values, while the maximum likelihood estimators via the conjugate gradient and quasi-Newton methods were less precise than those via the EM algorithm. Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method
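The acceptance-rejection step mentioned above works for any target density bounded by a scaled proposal density. A minimal generic sketch, demonstrated here on a Beta(2,2) target with a uniform proposal rather than on the mixture generalized gamma itself:

```python
import random

def accept_reject(target_pdf, propose, proposal_pdf, c, n, rng=None):
    """Acceptance-rejection sampling: draw x from the proposal and accept it
    with probability target_pdf(x) / (c * proposal_pdf(x)), requiring
    target_pdf(x) <= c * proposal_pdf(x) for all x."""
    rng = rng or random.Random(42)
    out = []
    while len(out) < n:
        x = propose(rng)
        if rng.random() * c * proposal_pdf(x) <= target_pdf(x):
            out.append(x)
    return out

# Beta(2, 2) target with density 6x(1-x) on [0, 1]; its maximum is 1.5,
# so c = 1.5 bounds it against a U(0, 1) proposal.
samples = accept_reject(lambda x: 6 * x * (1 - x),
                        lambda rng: rng.random(),
                        lambda x: 1.0, c=1.5, n=2000)
```

The expected acceptance rate is 1/c, so a tight envelope constant keeps the sampler efficient.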
Procedia PDF Downloads 220
11589 Optimal Simultaneous Sizing and Siting of DGs and Smart Meters Considering Voltage Profile Improvement in Active Distribution Networks
Authors: T. Sattarpour, D. Nazarpour
Abstract:
This paper investigates the effect of the simultaneous placement of DGs and smart meters (SMs) on voltage profile improvement in active distribution networks (ADNs). Responsive loads have recently become a substantial center of attention in power system studies, alongside distributed generations (DGs). The existence of responsive loads in ADNs has an undeniable effect on the sizing and siting of DGs. For this reason, an optimal framework is proposed for the sizing and siting of DGs and SMs in ADNs. SMs are taken into consideration for the sake of successfully implementing demand response programs (DRPs), such as direct load control (DLC), with end-side consumers. Targeting voltage profile improvement, the optimization procedure is solved by a genetic algorithm (GA) and tested on the IEEE 33-bus distribution test system. Different scenarios are established, with variations in the number of DG units, individual or simultaneous placement of DGs and SMs, and an adaptive power factor (APF) mode for DGs to support reactive power. The obtained results confirm the significant effect of the DRPs and the APF mode in determining the optimal size and site of the DGs to be connected to the ADN, resulting in an improved voltage profile as well. Keywords: active distribution network (ADN), distributed generations (DGs), smart meters (SMs), demand response programs (DRPs), adaptive power factor (APF)
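The GA loop can be sketched generically. Everything below is a toy: the fitness function is a stand-in for a load-flow-based voltage-profile index, and the bus count matches the IEEE 33-bus system in name only:

```python
import random

def toy_fitness(bus, size_kw, n_bus=33):
    """Illustrative stand-in for a load-flow-based voltage-profile index:
    rewards mid-feeder placement and a moderate DG size (NOT the paper's model)."""
    return -abs(bus - n_bus // 2) - abs(size_kw - 500) / 100

def ga_place_dg(pop=20, gens=40, n_bus=33, rng=None):
    """Minimal GA: elitist selection, one-point crossover on (bus, size),
    and mutation of the bus gene."""
    rng = rng or random.Random(1)
    population = [(rng.randrange(1, n_bus + 1), rng.randrange(100, 1001))
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: toy_fitness(*c), reverse=True)
        parents = population[:pop // 2]          # elitism: best half survives
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                 # crossover: bus from a, size from b
            if rng.random() < 0.2:               # mutate the bus gene
                child = (rng.randrange(1, n_bus + 1), child[1])
            children.append(child)
        population = parents + children
    return max(population, key=lambda c: toy_fitness(*c))
```

In the paper's setting the fitness evaluation would run a load flow per candidate; the loop structure stays the same.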
Procedia PDF Downloads 303
11588 Identification of Flooding Attack (Zero Day Attack) at Application Layer Using Mathematical Model and Detection Using Correlations
Authors: Hamsini Pulugurtha, V.S. Lakshmi Jagadmaba Paluri
Abstract:
Distributed denial of service (DDoS) attacks are among the top-rated cyber threats at present. A DDoS attack runs down victim server resources, such as bandwidth and buffer size, by preventing the server from supplying resources to legitimate clients. In this article, we propose a mathematical model of a DDoS attack and discuss its relation to features such as the inter-arrival time, or rate of arrival, of the attacking clients accessing the server. We further analyze the attack model in the context of exhausting the bandwidth and buffer size of the victim server. The proposed technique uses an unsupervised learning technique, the self-organizing map, to form clusters of similar features. Finally, it applies mathematical correlation and the normal probability distribution to the clusters and analyzes their behavior to detect a DDoS attack. Networked systems today interconnect not only small devices exchanging personal data but also critical infrastructures reporting the status of nuclear facilities. Although this interconnection brings many benefits and advantages, it also creates new vulnerabilities and threats which can be used to mount attacks. In such sophisticated interconnected systems, the ability to detect attacks as early as possible is of paramount importance. Keywords: application layer attack, bandwidth, buffer size, correlation, DDoS, flooding, intrusion prevention, normal probability distribution
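As a toy illustration of the normal-probability step, the mean inter-arrival time of incoming requests can be scored against a learned baseline; a strongly negative z-score (arrivals far faster than normal) is a flooding signal. The baseline parameters are assumptions, not values from the article:

```python
import math
from statistics import fmean

def flood_score(interarrival_times, baseline_mean, baseline_std):
    """z-score of the observed mean inter-arrival time against a normal
    baseline; a large negative score means arrivals are much faster than
    normal and suggests flooding."""
    m = fmean(interarrival_times)
    return (m - baseline_mean) / (baseline_std / math.sqrt(len(interarrival_times)))
```

A threshold on this score (e.g. flag traffic below -5) turns it into a simple detector; the article's method applies such statistics per SOM cluster rather than globally.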
Procedia PDF Downloads 226
11587 Titanium Nitride Nanoparticles for Biological Applications
Authors: Nicole Nazario Bayon, Prathima Prabhu Tumkur, Nithin Krisshna Gunasekaran, Krishnan Prabhakaran, Joseph C. Hall, Govindarajan T. Ramesh
Abstract:
Titanium nitride (TiN) nanoparticles have sparked interest over the past decade due to characteristics such as thermal stability, extreme hardness, low production cost, and optical properties similar to gold. In this study, TiN nanoparticles were synthesized via a thermal benzene route to obtain a black powder of nanoparticles. The final product was drop-cast onto conductive carbon tape and sputter-coated with gold/palladium at a thickness of 4 nm for characterization by field emission scanning electron microscopy (FE-SEM) with energy dispersive X-ray spectroscopy (EDX), which revealed the particles were spherical. ImageJ software determined the average size of the TiN nanoparticles to be 79 nm in diameter. EDX revealed the elements present in the sample and showed no impurities. Further characterization by X-ray diffraction (XRD) revealed characteristic peaks of cubic-phase titanium nitride, and the crystallite size was calculated to be 14 nm using the Debye-Scherrer method. Dynamic light scattering (DLS) analysis revealed the size and size distribution of the TiN nanoparticles, with an average size of 154 nm. Zeta potential measurements concluded that the surface of the TiN nanoparticles is negatively charged. Biocompatibility studies using the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay showed that TiN nanoparticles are not cytotoxic at low concentrations (2, 5, 10, 25, 50, 75 mcg/well); cell viability began to decrease at a concentration of 100 mcg/well. Keywords: biocompatibility, characterization, cytotoxicity, nanoparticles, synthesis, titanium nitride
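The Debye-Scherrer estimate follows directly from the XRD peak broadening. The peak position and width used below are plausible illustrative numbers for a TiN reflection with Cu K-alpha radiation, not the study's measured values:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Debye-Scherrer crystallite size D = K * lambda / (beta * cos(theta)),
    with beta the peak FWHM in radians and theta half the diffraction angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))
```

With Cu K-alpha (0.15406 nm), a 0.6 degree FWHM peak near 2-theta = 42.6 degrees gives a crystallite size on the order of 14 nm, the same order as reported above.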
Procedia PDF Downloads 180
11586 Similarities and Differences in Values of Young Women and Their Parents: The Effect of Value Transmission and Value Change
Authors: J. Fryt, K. Pietras, T. Smolen
Abstract:
Intergenerational similarities in values may be the effect of value transmission within families or of socio-cultural trends prevailing at a specific point in time. According to the salience hypothesis, salient family values may be transmitted more frequently. On the other hand, many value studies reveal a generational shift from social values (conservation and self-transcendence) to more individualistic values (openness to change and self-enhancement), suggesting that value transmission and value change are two different processes. The first aim of our study was to describe similarities and differences in the values of young women and their parents. The second aim was to determine which value similarities may be due to transmission within families. Ninety-seven Polish women aged 19-25 and both their mothers and fathers filled in the Portrait Value Questionnaire. Intergenerational similarities between the women were found in a strong preference for benevolence, universalism and self-direction as well as a low preference for power. Similarities between the younger women and the older men were found in a strong preference for universalism and hedonism as well as a lower preference for security and tradition. Young women differed from the older generation in a strong preference for stimulation and achievement as well as a low preference for conformity. To identify the origin of the intergenerational similarities (whether or not they are the effect of value transmission within families), we compared the correlations of values in family dyads (mother-daughter, father-daughter) with the distribution of correlations in random intergenerational dyads (random mother-daughter, random father-daughter) as well as in peer dyads (random daughter-daughter). Values representing conservation (security, tradition and conformity) as well as benevolence and power were transmitted between mothers and daughters. Achievement, power and security were transmitted between fathers and daughters.
Similarities in openness to change (self-direction, stimulation and hedonism) and universalism were not stronger within families than in random intergenerational and peer dyads. Taken together, our findings suggest that despite a noticeable generational shift from social to more individualistic values, we can observe transmission of parents' salient values such as security, tradition, benevolence and achievement. Keywords: value transmission, value change, intergenerational similarities, differences in values
Procedia PDF Downloads 429
11585 Board Composition and Performance of Listed Deposit Money Banks in Nigeria
Authors: Mary David, Denis Basila
Abstract:
This study assessed the impact of board composition on the performance of listed deposit money banks in Nigeria. Ten (10) deposit money banks formed the sample of this study. Board size, gender diversity, and board independence were used as the independent variables, with firm size as a control variable, while bank performance was proxied by Tobin's Q (TQ) as the dependent variable. Secondary data were collected from the annual reports and accounts of the banks and analyzed with the support of STATA version 14. Descriptive statistics, a correlation matrix, and an OLS multiple regression model were adopted for the study. The Breusch-Pagan Lagrangian multiplier test for random effects was conducted. The findings of the study reveal that board size has a positive and significant impact on Tobin's Q, and gender diversity has a positive and significant impact on Tobin's Q, while board independence had a negative and nonsignificant influence on Tobin's Q. Similarly, firm size was found to have a negative and nonsignificant impact on Tobin's Q of the studied banks. This study recommends that policy makers, stakeholders, and corporate managers of deposit money banks in Nigeria and related industries adopt board sizes and gender diversity that impact positively on bank performance. Keywords: board composition, performance, deposit money banks, Nigeria
Procedia PDF Downloads 75
11584 The Effect of Human Capital and Oil Revenue on Income Distribution in Real Sample
Authors: Marjan Majdi, MohammadAli Moradi, Elham Samarikhalaj
Abstract:
Income distribution is one of the most important topics in macroeconomic theory. Few economic categories are influenced by economic policies as strongly as income distribution. Human capital has an impact on economic growth, and it has a significant effect on income distribution. The results of this study confirm that the effects of oil revenue and human capital on income distribution are negative and significant, but the values of the estimated coefficients are very small, in a real sample for the period 1969-2006. Keywords: Gini coefficient, human capital, income distribution, oil revenue
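The Gini coefficient named in the keywords can be computed directly from an income sample. A standard mean-absolute-difference formulation, not tied to the paper's data:

```python
def gini(incomes):
    """Gini coefficient via the sorted-sample identity
    G = sum_i (2i - n - 1) * x_(i) / (n * sum_i x_(i)),
    equivalent to the mean-absolute-difference definition."""
    xs = sorted(incomes)
    n = len(xs)
    cum = 0.0
    for i, x in enumerate(xs, start=1):
        cum += (2 * i - n - 1) * x
    return cum / (n * sum(xs))
```

Perfect equality gives 0; concentrating all income in one of n recipients gives (n-1)/n, approaching 1 for large n.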
Procedia PDF Downloads 637
11583 Reliability Analysis in Power Distribution System
Authors: R. A. Deshpande, P. Chandhra Sekhar, V. Sankar
Abstract:
In this paper, we discuss the basic reliability evaluation techniques needed to evaluate the reliability of distribution systems, which are applied in distribution system planning and operation. The reliability study can also help to predict the reliability performance of the system after quantifying the impact of adding new components to the system. The number and locations of new components needed to improve the reliability indices to certain limits are identified and studied. Keywords: distribution system, reliability indices, urban feeder, rural feeder
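Two of the standard reliability indices (defined in IEEE Std 1366) can be computed from an outage log of (customers affected, outage duration) pairs; the helper names and the numbers in the check are ours, for illustration:

```python
def saifi(interruptions, customers_served):
    """System Average Interruption Frequency Index:
    total customer interruptions / total customers served."""
    return sum(n for n, _ in interruptions) / customers_served

def saidi(interruptions, customers_served):
    """System Average Interruption Duration Index:
    total customer interruption duration (h) / total customers served."""
    return sum(n * hours for n, hours in interruptions) / customers_served
```

Evaluating these indices before and after adding a component is exactly the kind of "quantifying the impact" step described above.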
Procedia PDF Downloads 777
11582 Real-Time Observation of Concentration Distribution for Mix Liquids including Water in Micro Fluid Channel with Near-Infrared Spectroscopic Imaging Method
Authors: Hiroki Takiguchi, Masahiro Furuya, Takahiro Arai
Abstract:
In order to quantitatively comprehend thermal flow in industrial applications such as nuclear and chemical reactors, detailed measurements of the temperature and abundance (concentration) of materials at high temporal and spatial resolution are required. Additionally, rigorous evaluation of the size effect is also important for practical realization. This paper introduces a real-time spectroscopic imaging method on the micro scale, which visualizes the temperature and concentration distribution of a liquid or mixed liquids in the near-infrared (NIR) wavelength region. The imaging principle is based on absorption in a pre-selected narrow band around an absorption spectrum peak of the target liquid in the NIR region, or on the temperature dependence of that peak. For example, water has a positive temperature sensitivity at a wavelength of 1905 nm; therefore, the temperature of water can be measured using this wavelength band. In the experiment, real-time imaging of the concentration distribution in a micro channel was demonstrated to investigate the applicability of the proposed method to micro-scale diffusion coefficient and temperature measurement. The effects of thermal diffusion and binary mutual diffusion were evaluated with time-series visualizations of the concentration distribution. Keywords: near-infrared spectroscopic imaging, micro fluid channel, concentration distribution, diffusion phenomenon
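For a single absorption band, converting measured intensity to concentration reduces to the Beer-Lambert law. A minimal sketch with illustrative values, not calibration data from the study:

```python
import math

def concentration_from_absorbance(intensity_in, intensity_out,
                                  molar_absorptivity, path_length_cm):
    """Beer-Lambert law: A = log10(I0 / I) = eps * c * l, solved for the
    concentration c. eps and l would come from a calibration step."""
    absorbance = math.log10(intensity_in / intensity_out)
    return absorbance / (molar_absorptivity * path_length_cm)
```

Applying this pixel-by-pixel to an NIR image of the channel yields the concentration map; temperature imaging uses the temperature dependence of the band instead.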
Procedia PDF Downloads 161
11581 Customer Service Marketing Mix: A Survey of Small Business around Campus, Suan Sunandha Rajabhat University
Authors: Chonlada Choovanichchanon
Abstract:
This research paper investigates the relationship between the customer service marketing mix and the level of customer satisfaction with goods and services purchased from small businesses around the campus of Suan Sunandha Rajabhat University, Bangkok, Thailand. Based on a survey of 200 customers who frequently purchased goods and services around campus, the level of satisfaction for each factor of the marketing mix was obtained. Accidental random sampling was applied, using a questionnaire to collect the data. The findings revealed that the mean values rank the variables from highest to lowest as follows: 1) forms and system of service, 2) physical environment of the service center, 3) service from staff and employees, 4) product quality and service, 5) market channel and distribution, 6) market price, and 7) market promotion and distribution. Keywords: service marketing mix, satisfaction, small business, survey
Procedia PDF Downloads 494
11580 Hedonic Price Analysis of Consumer Preference for Musa spp in Northern Nigeria
Authors: Yakubu Suleiman, S. A. Musa
Abstract:
The research was conducted to determine the physical characteristics of banana fruits that influence consumer preference for the fruit in Northern Nigeria. Socio-economic characteristics of the respondents were also identified. Simple descriptive statistics and a hedonic price model were used to analyze the data collected on socio-economic characteristics and consumer preference, respectively, with the aid of 1,000 structured questionnaires. The result revealed the value of R² to be 0.633, meaning that 63.3% of the variation in banana price was explained by the explanatory variables included in the model, namely: colour, size, degree of ripeness, softness, surface blemish, cleanliness of the fruits, weight, length, and cluster size of the fruits. The remaining 36.7% is attributable to the error term, or random disturbance, in the model. The calculated intercept was 1886.5 and statistically significant (P < 0.01), meaning that about N1,886.5 worth of banana fruits could be bought by consumers without considering the variables included in the model. Moreover, consumers showed significant preferences for the colour, size, degree of ripeness, softness, weight, length and cluster size of banana fruits, significant at P < 0.01, P < 0.05, or P < 0.1. Consumers did not show significant preferences regarding surface blemish, cleanliness or variety of the banana fruit, as all of these were nonsignificant with negative signs. Based on the findings of the research, it is recommended that plant breeders and research institutes concentrate on the production of banana fruits with the physical characteristics found to be statistically significant, namely cluster size, degree of ripeness, softness, length, size, and skin colour. Keywords: analysis, consumers, preference, variables
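A hedonic price model expresses price as an intercept plus a sum of implicit attribute prices. Only the intercept value below is taken from the abstract; the attribute names and coefficients are placeholders, not the study's estimates:

```python
def hedonic_price(attribute_levels, implicit_prices, intercept=1886.5):
    """Predicted price = intercept + sum of implicit price x attribute level.
    The intercept is the abstract's reported value; coefficients are
    hypothetical placeholders for illustration."""
    return intercept + sum(implicit_prices[name] * level
                           for name, level in attribute_levels.items())
```

Each estimated coefficient is interpreted as the price premium attached to a one-unit improvement in that characteristic.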
Procedia PDF Downloads 344
11579 Characterization of Coal Fly Ash with Potential Use in the Manufacture Geopolymers to Solidify/Stabilize Heavy Metal Ions
Authors: P. M. Fonseca Alfonso, E. A. Murillo Ruiz, M. Diaz Lagos
Abstract:
Understanding the physicochemical properties and mineralogy of fly ash from a particular source is essential both for protecting the environment and for considering its possible applications, specifically in the production of geopolymeric materials that solidify/stabilize heavy metal ions. The results of the characterization of three fly ash samples are shown in this paper. The samples were produced in the TERMOPAIPA IV thermal power plant in the State of Boyaca, Colombia. The particle size distribution, chemical composition, mineralogy, and molecular structure of the three samples were analyzed using laser diffraction, X-ray fluorescence, inductively coupled plasma mass spectrometry, X-ray diffraction, and infrared spectroscopy, respectively. The particle size distribution of the three samples ranges approximately from 0.128 to 211 μm. Approximately 59 elements were identified in the three samples. It is noticeable that the ashes are made up of aluminum and silicon compounds; an iron phase was also found in low content. According to the results of this study, the type F fly ash samples have great potential as raw material for the manufacture of geopolymers with potential use in the stabilization/solidification of heavy metals, mainly due to the presence of amorphous aluminosilicates typical of this type of ash, which react effectively with the alkali-activator. Keywords: fly ash, geopolymers, molecular structure, physicochemical properties
Procedia PDF Downloads 119
11578 Analysis of Two Phase Hydrodynamics in a Column Flotation by Particle Image Velocimetry
Authors: Balraju Vadlakonda, Narasimha Mangadoddy
Abstract:
The hydrodynamic behavior in a laboratory flotation column was analyzed using particle image velocimetry (PIV). For a complete characterization of column flotation, it is necessary to determine the flow velocity induced by the bubbles in the liquid phase, the bubble velocity, and the bubble characteristics: diameter, shape and bubble size distribution. An experimental procedure for analyzing simultaneous, phase-separated velocity measurements in two-phase flows is introduced. The non-invasive PIV technique was used to quantify the instantaneous flow field, as well as the time-averaged flow patterns, in selected planes of the column. Using a combination of fluorescent tracer particles, shadowgraphy, and digital phase separation with a masking technique, the bubble velocity as well as the Reynolds stresses in the column were measured. Axial and radial mean velocities as well as fluctuating components were determined for both phases by averaging a sufficient number of image pairs. The bubble size distribution was cross-validated with a high-speed video camera, and the average turbulent kinetic energy of the bubbles was analyzed. Different air flow rates were considered in the experiments. Keywords: particle image velocimetry (PIV), bubble velocity, bubble diameter, turbulent kinetic energy
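The Reynolds shear stress reported by PIV comes from averaging products of the fluctuating velocity components at each point. A minimal single-point sketch (water density assumed; the sample values used in the check are invented):

```python
from statistics import fmean

def reynolds_stress(u_samples, v_samples, density=998.0):
    """Reynolds shear stress -rho * <u'v'> from axial (u) and radial (v)
    velocity samples at one measurement point; u' and v' are fluctuations
    about the time-averaged means."""
    u_mean, v_mean = fmean(u_samples), fmean(v_samples)
    uv = fmean((u - u_mean) * (v - v_mean)
               for u, v in zip(u_samples, v_samples))
    return -density * uv
```

In a PIV post-processing chain, this computation is repeated for every interrogation window over the stack of image pairs.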
Procedia PDF Downloads 511
11577 The BNCT Project Using the Cf-252 Source: Monte Carlo Simulations
Authors: Marta Błażkiewicz-Mazurek, Adam Konefał
Abstract:
The project can be divided into three main parts: i. modeling the Cf-252 neutron source and conducting an experiment to verify the correctness of the obtained results, ii. design of the BNCT system infrastructure, iii. analysis of the results from the logical detector. Modeling of the Cf-252 source included designing the shape and size of the source as well as the energy and spatial distribution of the emitted neutrons. Two options were considered: a point source and a cylindrical spatial source. The energy distribution corresponded to various spectra taken from the specialized literature. Directionally isotropic neutron emission was simulated. The simulation results were compared with experimental values determined by the activation detector method, using indium foils and cadmium shields. The relative fluence rates of thermal and resonance neutrons were compared at chosen places in the vicinity of the source. The second part of the project, the modeling of the BNCT infrastructure, consisted of developing a simulation program taking into account all the essential components of this system. Materials with neutron-moderating, absorbing, and backscattering properties were adopted in the project. Additionally, a gamma radiation filter was introduced into the beam output system. The analysis of the simulation results obtained with a logical detector located at the beam exit of the BNCT infrastructure covered the neutron energies and their spatial distribution. Optimization of the system involved changing its size and materials to obtain a suitably collimated beam of thermal neutrons. Keywords: BNCT, Monte Carlo, neutrons, simulation, modeling
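The core of any such neutron transport simulation is sampling free paths between collisions. Below is a deliberately simplified single-energy-group sketch with a made-up macroscopic cross-section, not the project's model:

```python
import random

def transmitted_fraction(thickness_cm, sigma_total_cm1, n=100_000, rng=None):
    """Toy Monte Carlo estimate of the fraction of neutrons crossing a slab
    without any collision: free-path lengths are sampled from an exponential
    distribution with the (assumed, illustrative) macroscopic cross-section."""
    rng = rng or random.Random(7)
    uncollided = sum(1 for _ in range(n)
                     if rng.expovariate(sigma_total_cm1) > thickness_cm)
    return uncollided / n
```

For a slab of thickness t the uncollided fraction should approach exp(-sigma * t), a convenient sanity check for the sampler before scattering and absorption physics are added.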
Procedia PDF Downloads 34
11576 Transportation and Urban Land-Use System for the Sustainability of Cities, a Case Study of Muscat
Authors: Bader Eddin Al Asali, N. Srinivasa Reddy
Abstract:
Cities are dynamic in nature and are characterized by concentrations of people, infrastructure, services and markets, which offer opportunities for production and consumption. Growth and development in urban areas is often not systematic and is directed by a number of factors such as natural growth, land prices, housing availability, job locations (the central business districts, CBDs), transportation routes, distribution of resources, geographical boundaries, and administrative policies. One-sided spatial and geographical development in cities leads to an unequal spatial distribution of population and jobs, resulting in high transportation activity. City development can be measured by parameters such as urban size, urban form, urban shape, and urban structure. Urban size is defined by the population of the city; urban form is the location and size of the economic activity (CBD) over the geographical space; urban shape is the geometrical shape of the city over which the population and economic activity are distributed; and urban structure is the transport network within which the population and activity centers are connected by a hierarchy of roads. Among the urban land-use systems, transportation plays a significant role and is one of the largest energy-consuming sectors. Transportation interaction among land uses is measured in passenger-km and mean trip length, and is often used as a proxy for energy consumption in the transportation sector. Among the trips generated in cities, work trips constitute more than 70 percent; they originate at the place of residence and end at the place of employment. To understand the role of urban parameters in transportation interaction, theoretical cities of different sizes and urban specifications are generated through a building-block exercise using a specially developed interactive C++ program, and land-use transportation modeling is carried out.
The land-use transportation modeling exercise helps in understanding the role of urban parameters and also in classifying cities by their urban form, structure, and shape. Muscat, the capital city of Oman, which underwent rapid urbanization over the last four decades, is taken as a case study for such classification. A pilot survey was also carried out to capture urban travel characteristics. Analysis of the land-use transportation model together with the field data classified Muscat as a linear city with a polycentric CBD. Conclusions are drawn and suggestions are given for policy making for the sustainability of Muscat City. Keywords: land-use transportation, transportation modeling, urban form, urban structure, urban rule parameters
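The transportation-interaction measures named above are simple aggregates over an origin-destination trip table; the trip numbers used in the check below are invented for illustration:

```python
def mean_trip_length(trips):
    """Mean trip length (km) and total passenger-km from a list of
    (passengers, trip distance in km) pairs, e.g. one pair per
    origin-destination relation."""
    passenger_km = sum(p * km for p, km in trips)
    passengers = sum(p for p, _ in trips)
    return passenger_km / passengers, passenger_km
```

Since passenger-km serves as the proxy for transport energy use, comparing this aggregate across generated theoretical cities is what allows the urban forms to be ranked.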
Procedia PDF Downloads 270
11575 Investigating the Effects of Data Transformations on a Bi-Dimensional Chi-Square Test
Authors: Alexandru George Vaduva, Adriana Vlad, Bogdan Badea
Abstract:
In this research, we conduct a Monte Carlo analysis of a two-dimensional χ² test, which is used to determine the minimum distance required for independent sampling in the context of chaotic signals. We investigate the impact on the χ² test of transforming initial data sets from an arbitrary probability distribution into new signals with a uniform distribution by means of the Spearman rank transformation. This transformation removes the randomness of the data pairs, and as a result, the observed distribution of the χ² test values differs from the expected distribution. We propose a solution to this problem and evaluate it on another chaotic signal. Keywords: chaotic signals, logistic map, Pearson's test, chi-square test, bivariate distribution, statistical independence
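The Pearson χ² statistic behind the test is computed from a contingency table of observed pair counts; in the paper's setting the bivariate table would be built by binning the signal pairs. A generic sketch:

```python
def chi_square_independence(table):
    """Pearson chi-square statistic for independence on a 2D contingency
    table (list of rows of observed counts): sum over cells of
    (observed - expected)^2 / expected, with expected counts from the
    row and column marginals."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat
```

Comparing the statistic against the χ² quantile with (rows-1)(cols-1) degrees of freedom gives the independence decision at each candidate sampling distance.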
Procedia PDF Downloads 99
11574 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation
Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro
Abstract:
This study aimed to evaluate the implications of block size and testing order for the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. To record responses we used a nine-point hedonic scale, and we assumed a mixed linear model with random tester and treatment effects and a fixed test-order effect. The analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and the cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating the acceptance. However, providing a large number of samples can help to improve sample discrimination. Keywords: acceptance, block size, mixed linear model, testing order
Procedia PDF Downloads 32211573 Microstructural Evidences for Exhaustion Theory of Low Temperature Creep in Martensitic Steels
Authors: Nagarjuna Remalli, Robert Brandt
Abstract:
Down-sizing of combustion engines in automobiles prevails owing to the required increase in efficiency. This leads to a stress increment on valve springs, which affects their intended function due to an increase in relaxation. High strength martensitic steels are used for valve spring applications. Recent investigations revealed that low temperature creep (LTC) in martensitic steels obeys a logarithmic creep law. The exhaustion theory links the logarithmic creep behavior to an activation energy which is characteristic for any given time during creep. This activation energy increases with creep strain because barriers of low activation energy are exhausted during creep. The assumption of the exhaustion theory is that the material is inhomogeneous on the microscopic scale. According to these assumptions, it is anticipated that small obstacles (e.g. ε–carbides) with a wide size distribution are non-uniformly distributed in the material. X-ray diffraction studies revealed the presence of ε–carbides in high strength martensitic steels. In this study, high strength martensitic steels crept in the temperature range of 75 – 150 °C were investigated with the aid of a transmission electron microscope for evidence of an inhomogeneous distribution of obstacles of different sizes, in order to examine the validity of the exhaustion theory.Keywords: creep mechanisms, exhaustion theory, low temperature creep, martensitic steels
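The logarithmic creep law mentioned here is commonly written as ε(t) = a·ln(1 + t/t₀), whose rate decays as 1/t, consistent with progressively harder barriers remaining as the easy ones are exhausted. A short illustrative sketch with placeholder constants (a and t₀ are not values from this study):

```python
import math

def log_creep_strain(t: float, a: float = 1e-3, t0: float = 1.0) -> float:
    """Logarithmic creep law eps(t) = a * ln(1 + t/t0); a, t0 illustrative."""
    return a * math.log(1.0 + t / t0)

def creep_rate(t: float, a: float = 1e-3, t0: float = 1.0) -> float:
    """d(eps)/dt = a / (t0 + t): the rate decays as low-energy barriers
    are exhausted, roughly tenfold per decade of time."""
    return a / (t0 + t)

# Strain keeps growing, but the rate falls steadily.
assert log_creep_strain(100.0) > log_creep_strain(10.0)
assert creep_rate(1000.0) < 0.2 * creep_rate(100.0)
```

The steadily decreasing rate is the signature that distinguishes this regime from steady-state (power-law) creep.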
Procedia PDF Downloads 26411572 Modeling the Current and Future Distribution of Anthus Pratensis under Climate Change
Authors: Zahira Belkacemi
Abstract:
One of the most important tools in conservation biology is information on the geographic distribution of species and the variables determining those patterns. In this study, we used maximum-entropy niche modeling (Maxent) to predict the current and future distribution of Anthus pratensis using climatic variables. The results showed that the species would not shift its distribution strongly under climate change; however, the results of this study should be refined by taking other predictors into account. The NATURA 2000 protected sites would be only 42% efficient in protecting the species.Keywords: anthus pratensis, climate change, Europe, species distribution model
Procedia PDF Downloads 14611571 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable shapes of the hazard rate is introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution contains a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed "bathtub") hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other three-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3-parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competitive distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, Bayesian analysis and the performance of Gibbs sampling for the data set are also carried out.Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
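The abstract does not give the generalized parameterization, but the hazard shapes it describes can be seen already in the classical log-logistic sub-model, h(t) = (β/α)(t/α)^(β−1) / (1 + (t/α)^β): β > 1 gives a unimodal hazard, β ≤ 1 a monotone decreasing one. A minimal sketch of that sub-model (illustrative parameter values only):

```python
def loglogistic_hazard(t: float, alpha: float = 1.0, beta: float = 2.0) -> float:
    """Hazard of the classical log-logistic distribution:
    h(t) = (beta/alpha) * (t/alpha)**(beta-1) / (1 + (t/alpha)**beta)."""
    x = (t / alpha) ** beta
    return (beta / alpha) * (t / alpha) ** (beta - 1.0) / (1.0 + x)

# beta > 1: unimodal hazard (rises, then falls).
h = [loglogistic_hazard(t, alpha=1.0, beta=2.0) for t in (0.2, 1.0, 5.0)]
assert h[1] > h[0] and h[1] > h[2]

# beta <= 1: monotone decreasing hazard.
hd = [loglogistic_hazard(t, alpha=1.0, beta=0.8) for t in (0.5, 1.0, 2.0)]
assert hd[0] > hd[1] > hd[2]
```

The extra parameter of the generalized family then modulates these shapes further, e.g. toward bathtub forms.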
Procedia PDF Downloads 20211570 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes
Authors: J. J. Vargas, N. Prieto, L. A. Toro
Abstract:
Control charts are commonly used to monitor processes involving either variable or attribute quality characteristics, and determining the control limits is a critical task for quality engineers seeking to improve the processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to assess the efficiency of an industrial process was added to a control chart by incorporating a data envelopment analysis (DEA) approach. Specifically, Bayesian estimation was performed to calculate the posterior probability distribution of the parameters, namely the means and the variance-covariance matrix. This technique allows the data set to be analysed without the hypothetical large sample implied in the problem and treats the posterior as an approximation to the finite-sample distribution. A rejection simulation method was carried out to generate random variables from the parameter functions. Each resulting vector was used by the stochastic DEA model over several cycles to establish the distribution of the efficiency measures for each DMU (decision-making unit). A control limit was calculated with the model obtained, and if a DMU presents a low level of efficiency, the system efficiency is out of control. A global optimum was reached in the efficiency calculation, which ensures model reliability.Keywords: data envelopment analysis, DEA, multivariate control chart, rejection simulation method
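The rejection simulation step mentioned above follows the generic accept/reject pattern: propose a candidate, accept it with probability proportional to the target density. A self-contained sketch with a toy target density (the envelope and target here are illustrative, not the paper's posterior):

```python
import random

def rejection_sample(target_pdf, lo, hi, pdf_max, n, seed=42):
    """Draw n samples from target_pdf on [lo, hi] by accept/reject
    against a uniform envelope of height pdf_max >= max(target_pdf)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)              # propose from the envelope
        if rng.uniform(0.0, pdf_max) <= target_pdf(x):
            out.append(x)                    # accept with prob f(x)/pdf_max
    return out

# Toy target: triangular density f(x) = 2x on [0, 1], so E[X] = 2/3.
samples = rejection_sample(lambda x: 2.0 * x, 0.0, 1.0, 2.0, 5000)
mean = sum(samples) / len(samples)
assert abs(mean - 2.0 / 3.0) < 0.03
```

In the paper's setting, each accepted parameter vector would then be fed into the stochastic DEA model to build the efficiency distribution per DMU.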
Procedia PDF Downloads 37711569 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data
Authors: S. Jurado, E. Pazmino
Abstract:
Determination of the medial axis of a porous media sample is a non-trivial problem of interest to several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process then removes or 'burns' void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize the execution time and the use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and used to determine the pore-throat size distribution. A graphical user interface was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input, calculates porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats.
Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination of 100³, 320³, and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.Keywords: medial axis, pore-throat distribution, porosity, porous media
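The layer-by-layer burn described above is essentially a breadth-first search seeded at the solid boundary: void voxels adjacent to solid get layer 1, their void neighbors layer 2, and so on; the medial axis lies where burn fronts collide (local maxima of the layer number). A 2D sketch of this idea (the authors' implementation is 3D and optimized; this is only the core logic):

```python
from collections import deque

def burn_layers(grid):
    """Burn-algorithm sketch on a 2D binary grid (1 = solid, 0 = void):
    assign each void cell its burn layer via BFS from the solid phase."""
    rows, cols = len(grid), len(grid[0])
    # Solid cells seed the burn at layer 0; void cells start unassigned.
    layer = [[0 if grid[r][c] else None for c in range(cols)] for r in range(rows)]
    q = deque((r, c) for r in range(rows) for c in range(cols) if grid[r][c])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and layer[nr][nc] is None:
                layer[nr][nc] = layer[r][c] + 1  # next burn layer
                q.append((nr, nc))
    return layer

# A void channel between two solid walls: burn fronts collide at the
# center column, which is the medial axis (layer maximum).
grid = [[1, 0, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [1, 0, 0, 0, 1]]
assert burn_layers(grid)[1] == [0, 1, 2, 1, 0]
```

The layer value at a medial-axis voxel also approximates the local pore radius, which is how pore-throat sizes can be read off the axis.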
Procedia PDF Downloads 11611568 Applying the Crystal Model Approach on Light Nuclei for Calculating Radii and Density Distribution
Authors: A. Amar
Abstract:
A new model, namely the crystal model, has been developed to calculate the radius and density distribution of light nuclei up to ⁸Be. The crystal model is adapted from solid-state physics, using the analogy between the distribution of nucleons and the distribution of atoms in a crystal. The model provides an analytical expression for the radius, while the density distribution of light nuclei is obtained from the crystal-lattice analogy. The distribution of nucleons over the crystal is discussed in a general form. The equation used to calculate the binding energy was taken from the solid-state model of repulsive and attractive forces. The number of protons was taken to control the repulsive force, while the atomic number was responsible for the attractive force. The parameter calculated from the crystal model was found to be proportional to the radius of the nucleus. The density distribution of light nuclei was taken as a summation of two cluster distributions, as in the ⁶Li = alpha + deuteron configuration. A test was performed on the radius and density distribution data using double folding for d+⁶,⁷Li with the M3Y nucleon-nucleon interaction. Good agreement was obtained for both the radius and the density distribution of light nuclei. The model failed to calculate the radius of ⁹Be, so modifications should be made to overcome this discrepancy.Keywords: nuclear physics, nuclear lattice, study of the nucleus as a crystal, light nuclei up to ⁸Be
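The two-cluster summation mentioned for ⁶Li can be sketched numerically: each cluster contributes a density normalized to its nucleon number, and the total must integrate to A = 6. The Gaussian cluster shapes and widths below are hypothetical placeholders, not the paper's fitted distributions:

```python
import math

def gaussian_density(r: float, n_nucleons: int, width: float) -> float:
    """Gaussian cluster density normalized so its 3D integral is n_nucleons."""
    norm = n_nucleons / (math.pi ** 1.5 * width ** 3)
    return norm * math.exp(-(r / width) ** 2)

def li6_density(r: float, w_alpha: float = 1.4, w_d: float = 2.0) -> float:
    """Illustrative 6Li density as alpha (A=4) + deuteron (A=2) clusters.
    Widths (fm) are hypothetical, chosen only to make the sketch run."""
    return gaussian_density(r, 4, w_alpha) + gaussian_density(r, 2, w_d)

# Normalization check: integrating rho * 4*pi*r^2 dr recovers A = 6.
dr = 0.005
total = sum(4 * math.pi * r * r * li6_density(r) * dr
            for r in (i * dr for i in range(1, 4000)))
assert abs(total - 6.0) < 0.01
```

A density built this way is what would then enter a double-folding calculation with the M3Y interaction.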
Procedia PDF Downloads 17711567 Three-Stage Multivariate Stratified Sample Surveys with Probabilistic Cost Constraint and Random Variance
Authors: Sanam Haseen, Abdul Bari
Abstract:
In this paper, a three-stage multivariate stratified sampling problem with survey cost and variances as random variables has been formulated as a non-linear stochastic programming problem. The problem has been converted into an equivalent deterministic form using chance constraint programming and modified E-modeling. An empirical study of the problem is presented at the end of the paper using simulation in R.Keywords: chance constraint programming, modified E-model, stochastic programming, stratified sample surveys, three stage sample surveys
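The chance-constraint conversion used here has a standard deterministic equivalent when the random quantity is normal: P(cost ≤ C₀) ≥ p holds iff mean + z_p·sd ≤ C₀, where z_p is the standard normal quantile. A minimal sketch of that step (the numbers are illustrative, not the paper's survey costs):

```python
from statistics import NormalDist

def deterministic_cost_bound(mean_cost: float, sd_cost: float, p: float) -> float:
    """Deterministic equivalent of the chance constraint
    P(cost <= C0) >= p for normally distributed cost: the constraint
    holds iff mean + z_p * sd <= C0, so this returns the smallest
    feasible budget C0."""
    z = NormalDist().inv_cdf(p)  # standard normal quantile z_p
    return mean_cost + z * sd_cost

# With mean cost 100, sd 10, and confidence p = 0.95, any budget above
# roughly 116.4 satisfies the chance constraint.
bound = deterministic_cost_bound(100.0, 10.0, 0.95)
assert 116.0 < bound < 117.0
```

The modified E-model then replaces the random variances in the objective by suitable expectations, yielding a fully deterministic non-linear program.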
Procedia PDF Downloads 45811566 Residual Stress Around Embedded Particles in Bulk YBa2Cu3Oy Samples
Authors: Anjela Koblischka-Veneva, Michael R. Koblischka
Abstract:
To increase the flux pinning performance of bulk YBa2Cu3O7-δ (YBCO or Y-123) superconductors, it is common to employ secondary phase particles embedded in the superconducting Y-123 matrix, either Y2BaCuO5 (Y-211) particles created during the growth of the samples or additionally added (nano)particles of various types. As the crystallographic parameters of all these particles indicate a misfit to Y-123, there will be residual strain within the Y-123 matrix around such particles. With a dedicated analysis of electron backscatter diffraction (EBSD) data obtained on various bulk Y-123 superconductor samples, the strain distribution around such embedded secondary phase particles can be revealed. The results obtained are presented in the form of kernel average misorientation (KAM) mappings. Around large Y-211 particles, the strain can be so large that YBCO subgrains are formed. It is therefore essential to properly control the particle size as well as the particle distribution within the bulk sample to obtain the best performance. The impact of the strain distribution on the flux pinning properties is discussed.Keywords: bulk superconductors, EBSD, strain, YBa2Cu3Oy
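A KAM map assigns each measurement point the average misorientation to its neighboring points, so lattice rotation concentrated around an embedded particle appears as locally elevated KAM. A simplified 2D sketch of that computation, using scalar misorientation angles in place of the full quaternion orientations real EBSD software uses:

```python
def kam_map(orientations):
    """Kernel average misorientation sketch: for each pixel, average the
    absolute misorientation to its 4-connected neighbors. Real EBSD data
    requires full 3D orientations (quaternions) and crystal symmetry;
    scalar angles keep the idea visible."""
    rows, cols = len(orientations), len(orientations[0])
    kam = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            diffs = []
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    diffs.append(abs(orientations[nr][nc] - orientations[r][c]))
            kam[r][c] = sum(diffs) / len(diffs)
    return kam

# A localized lattice rotation (center pixel, e.g. next to an embedded
# particle) shows up as elevated KAM; the strain-free corner stays at zero.
field = [[0.0, 0.0, 0.0],
         [0.0, 5.0, 0.0],
         [0.0, 0.0, 0.0]]
kam = kam_map(field)
assert kam[1][1] == 5.0
assert kam[0][0] == 0.0
```

Thresholding such a map is one way to delineate the strained halo around a misfitting particle.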
Procedia PDF Downloads 15011565 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine, and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. Moreover, low actuation power is beneficial in aerosol generating devices, since it results in a reduced emission of toxic chemicals. For e-cigarettes, heating powers below 10 W can be considered low compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Because of its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSDs and velocities of e-cigarettes under standard testing conditions at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of undiluted aerosols of a recent fourth-generation e-cigarette at low powers, within 6.5 W, using a real-time particle counter (time-of-flight method). The temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets is also examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported.
In the present study, preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase of the heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation, and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression combining exponential, Gaussian, and polynomial (EGP) distributions is proposed and successfully describes the asymmetric PSD. The count median aerodynamic diameter and the geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, as the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet, along with a reduction of particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure, and a discussion of the results will be provided. Particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity, and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol generating devices.Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
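The two summary statistics reported above, count median diameter (CMD) and geometric standard deviation (GSD), are estimated from the logarithms of the measured diameters: the geometric mean (equal to the median for a log-normal sample) and the exponential of the log-scale standard deviation. A short sketch on synthetic data drawn to roughly match the abstract's ranges (the sample itself is simulated, not measured):

```python
import math
import random

def cmd_and_gsd(diameters):
    """Count median diameter (geometric mean, which equals the median for
    log-normal data) and geometric standard deviation of a diameter list."""
    logs = [math.log(d) for d in diameters]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return math.exp(mu), math.exp(math.sqrt(var))

# Simulated log-normal sample with CMD ~0.70 um and GSD ~1.35, i.e. within
# the ranges reported in the abstract.
rng = random.Random(0)
sample = [rng.lognormvariate(math.log(0.70), math.log(1.35)) for _ in range(20000)]
cmd, gsd = cmd_and_gsd(sample)
assert abs(cmd - 0.70) < 0.02
assert abs(gsd - 1.35) < 0.02
```

Deviations of a measured PSD from this log-normal fit are what motivate the EGP expression proposed in the study.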
Procedia PDF Downloads 4911564 Formulation of Famotidine Solid Lipid Nanoparticles (SLN): Preparation, Evaluation and Release Study
Authors: Rachmat Mauludin, Nurmazidah
Abstract:
Background and purpose: Famotidine is an H2 receptor blocker. Oral absorption is rapid enough, but famotidine can be degraded by stomach acid, causing a dose reduction of up to 35.8% after 50 minutes. The drug also undergoes first-pass metabolism, which reduces its bioavailability to only 40-50%. To overcome these problems, solid lipid nanoparticles (SLNs) can be formulated as an alternative delivery system. SLN is a lipid-based drug delivery technology with a 50-1000 nm particle size, in which the drug is incorporated into biocompatible lipids and the lipid particles are stabilized using appropriate stabilizers. When the particle size is 200 nm or below, the lipid containing famotidine can be absorbed through the lymphatic vessels to the subclavian vein, so first-pass metabolism can be avoided. Method: Famotidine SLNs with various stabilizer compositions were prepared using a high-speed homogenization and sonication method. The particle size distribution, zeta potential, entrapment efficiency, particle morphology, and in vitro release profiles were then evaluated, and the sonication time was optimized. Result: The particle size of the SLNs, measured by a particle size analyzer, ranged from 114.6 to 455.267 nm. SLNs ultrasonicated for 5 minutes had smaller particle sizes than SLNs ultrasonicated for 10 or 15 minutes. The entrapment efficiency of the SLNs ranged from 74.17% to 79.45%. The particle morphology of the SLNs was spherical, with individually distributed particles. The release study revealed that in acid medium, 28.89% up to 80.55% of the famotidine was released after 2 hours, while in basic medium, 40.5% up to 86.88% was released in the same period. Conclusion: The best formula was the SLNs stabilized by 4% Poloxamer 188 and 1% Span 20, which had a particle size of 114.6 nm in diameter, 77.14% of the famotidine entrapped, and spherical, individually distributed particles.
The SLNs with the best drug release profile were those stabilized by 4% Eudragit L 100-55 and 1% Tween 80, which released 36.34% in pH 1.2 solution and 74.13% in pH 7.4 solution after 2 hours. The optimum sonication time was 5 minutes.Keywords: famotidine, SLN, high speed homogenization, particle size, release study
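Entrapment efficiency figures like the 74.17-79.45% range above are conventionally computed as the fraction of loaded drug that is not found free in the dispersion medium (e.g. in the supernatant after ultracentrifugation). A minimal sketch of that arithmetic; the 100 mg / 22.86 mg figures are illustrative, chosen only to reproduce a value in the reported range:

```python
def entrapment_efficiency(total_drug_mg: float, free_drug_mg: float) -> float:
    """Entrapment efficiency (%): EE = (total - free) / total * 100,
    a common way EE is computed for SLN dispersions."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# E.g. 100 mg famotidine loaded with 22.86 mg found free gives EE = 77.14%,
# matching the best-formula value reported in this abstract.
assert round(entrapment_efficiency(100.0, 22.86), 2) == 77.14
```

The same total-minus-free bookkeeping underlies the cumulative release percentages quoted for the pH 1.2 and pH 7.4 media.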
Procedia PDF Downloads 861