Search results for: queue size distribution at a random epoch
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11461

11191 Performance Degradation for the GLR Test-Statistics for Spatial Signal Detection

Authors: Olesya Bolkhovskaya, Alexander Maltsev

Abstract:

Antenna arrays are widely used in modern radio systems such as sonar and communications. Detection of a useful signal against a noise background is based on the GLRT method. The resulting detection problems differ according to the a priori information that is assumed known. In this work, in contrast to the majority of previously solved problems, only differences in the spatial properties of the signal and noise are used for detection. We analyze how the degree of signal non-coherence and noise inhomogeneity influence the performance characteristics of different GLRT statistics. The signal and noise are described by spatial covariance matrices C for cases with different amounts of known information. The partially coherent signal is simulated as a plane wave whose angle of incidence relative to the array normal is random. Background noise is simulated as a random process with a uniform distribution function in each array element. The resulting degradation of the performance characteristics for the different cases is presented in this work.
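
As a hedged illustration (not the authors' exact statistics), a classical GLRT for detecting a spatially coherent signal in white noise of unknown power is the sphericity test on the sample spatial covariance matrix; a minimal NumPy sketch:

```python
import numpy as np

def sphericity_glrt(snapshots):
    """GLR sphericity statistic: tests H0 'noise only, C = sigma^2 I'
    against H1 'arbitrary spatial covariance C'. snapshots: (M, N) array,
    M array elements, N time snapshots."""
    M, N = snapshots.shape
    S = snapshots @ snapshots.conj().T / N          # sample spatial covariance
    eig = np.linalg.eigvalsh(S).clip(min=1e-12)
    # ratio of geometric to arithmetic mean of eigenvalues, in (0, 1];
    # values well below 1 indicate a coherent (low-rank) component, i.e., a signal
    return np.exp(np.log(eig).mean()) / eig.mean()

# toy example: plane wave with a random angle of incidence plus white noise
rng = np.random.default_rng(0)
M, N = 8, 200
theta = rng.uniform(-np.pi / 6, np.pi / 6)          # random incidence angle
steering = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))
sig = np.outer(steering, rng.standard_normal(N))
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
print(sphericity_glrt(noise))              # close to 1 under H0
print(sphericity_glrt(noise + 0.5 * sig))  # noticeably below 1 under H1
```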

Keywords: GLRT, Neyman-Pearson criterion, test statistics, degradation, spatial processing, multielement antenna array

Procedia PDF Downloads 362
11190 Estimation of a Finite Population Mean under Random Non-Response Using Improved Nadaraya-Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of error in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing the bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, and that full auxiliary information is available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error than existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
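
For reference, the classical Nadaraya-Watson kernel estimator that the proposed weights improve upon takes the form below; a minimal sketch with a Gaussian kernel (illustrative only, not the authors' improved weights):

```python
import numpy as np

def nadaraya_watson(x_query, x_obs, y_obs, bandwidth):
    """Classical Nadaraya-Watson estimate m(x) =
    sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h), with Gaussian K."""
    u = (x_query[:, None] - x_obs[None, :]) / bandwidth
    K = np.exp(-0.5 * u**2)              # Gaussian kernel weights
    return (K @ y_obs) / K.sum(axis=1)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)
grid = np.linspace(0, 1, 5)
print(nadaraya_watson(grid, x, y, bandwidth=0.1))
```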

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 112
11189 Assessing and Identifying Factors Affecting Customer Satisfaction of the Commercial Bank of Ethiopia: The Case of West Shoa Zone (Bako, Gedo, Ambo, Ginchi and Holeta), Ethiopia

Authors: Habte Tadesse Likassa, Bacha Edosa

Abstract:

Customer satisfaction is essential for banks to be productive and successful. The main goal of this study is to assess and identify factors that influence customer satisfaction in the West Shoa Zone branches of the Commercial Bank of Ethiopia (Holeta, Ginchi, Ambo, Gedo and Bako). A stratified random sampling procedure was used, and 520 customers were drawn from the target population by simple random sampling (lottery method). The sample size for each branch was allocated using probability-proportional-to-size techniques. Both descriptive and inferential statistical methods were used. A binary logistic regression model was fitted to assess the significance of factors affecting customer satisfaction, and the SPSS statistical package was used for data analysis. The results reveal that the overall level of customer satisfaction in the study area is low: 38.85% of customers were satisfied, compared with 61.15% who were not. Almost all factors included in the study were significantly associated with customer satisfaction. Comparing branches by odds ratios, customers of the Ambo and Bako branches were less satisfied than customers of the Holeta branch, while customers of the Ginchi and Gedo branches were more satisfied than those of Holeta. Since the level of customer satisfaction in the study area was low, the concerned bodies are advised to work cooperatively to maximize the satisfaction of their customers.
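
A hedged sketch of the analysis style described, fitting a binary logistic model and reading branch effects as odds ratios (all variable names and data are illustrative, not the study's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 520
# illustrative predictors: dummy-coded branch (Holeta as reference),
# waiting time (minutes), complaint-handling score
branch = rng.integers(0, 5, n)                   # 0 = Holeta reference
X = np.column_stack([
    (branch == 1).astype(float),                 # Ambo dummy
    (branch == 2).astype(float),                 # Bako dummy
    rng.normal(15, 5, n),                        # waiting time
    rng.normal(3, 1, n),                         # complaint handling
])
logit = -1.0 + X @ np.array([-0.6, -0.5, -0.05, 0.8])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # 1 = satisfied

model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = np.exp(model.coef_.ravel())        # OR < 1: less satisfied than reference
print(dict(zip(["Ambo", "Bako", "wait", "complaints"], odds_ratios.round(2))))
```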

Keywords: customers, satisfaction, binary logistic, complain handling process, waiting time

Procedia PDF Downloads 435
11188 Blocking of Random Chat Apps at Home Routers for Juvenile Protection in South Korea

Authors: Min Jin Kwon, Seung Won Kim, Eui Yeon Kim, Haeyoung Lee

Abstract:

Numerous anonymous chat apps that connect people with random strangers have been released in South Korea. However, they have become a serious problem for young people, who often use them as channels for prostitution or sexual violence. Although ISPs in South Korea are responsible for making inappropriate content inaccessible on their networks, they do not block the traffic of random chat apps, since (1) the use of random chat apps is entirely legal, and (2) they reportedly rely on HTTP proxy blocking, so non-HTTP traffic cannot be blocked. In this paper, we propose a service model that can block random chat apps at home routers. A service provider manages a blacklist that contains blocked apps' information. Home routers that subscribe to the service filter out the traffic of those apps using deep packet inspection. We have implemented a prototype of the proposed model, including a centralized server providing the blacklist, a Raspberry Pi-based home router that can filter out the apps' traffic, and an Android app used by the router's administrator to locally customize the blacklist.
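
A minimal sketch of the blacklist-driven deep-packet-inspection idea (the app names and payload signatures are hypothetical; the paper's actual router implementation is not shown):

```python
# Hypothetical DPI filter: drop packets whose payload matches a
# blacklisted app signature fetched from a central server.
import re

BLACKLIST = {                       # app name -> payload signature (assumed)
    "randomchat-app": re.compile(rb"randomchat\.example\.com"),
    "stranger-talk":  re.compile(rb"\x13STRGR-PROTO"),
}

def verdict(payload: bytes) -> str:
    for app, sig in BLACKLIST.items():
        if sig.search(payload):
            return f"DROP ({app})"
    return "ACCEPT"

print(verdict(b"GET / HTTP/1.1\r\nHost: randomchat.example.com\r\n"))
print(verdict(b"ordinary traffic"))
```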

Keywords: deep packet inspection, internet filtering, juvenile protection, technical blocking

Procedia PDF Downloads 323
11187 Attacks on Pre-Shared Key Distribution Algorithms for Body Area Networks: A Survey

Authors: Priti Kumari, Tricha Anjali

Abstract:

Body Area Networks (BANs) have emerged as the most promising technology for pervasive health care applications. Since they facilitate communication of very sensitive health data, information leakage in such networks can put human life at risk, and hence security inside BANs is a critical issue. Safe distribution and periodic refreshment of cryptographic keys are needed to ensure the highest level of security. In this paper, we focus on key distribution techniques and how they are categorized for BANs. State-of-the-art pre-shared key distribution algorithms are surveyed, and possible attacks on these algorithms are demonstrated with examples.

Keywords: attacks, body area network, key distribution, key refreshment, pre-shared keys

Procedia PDF Downloads 338
11186 Tabu Random Algorithm for Guiding Mobile Robots

Authors: Kevin Worrall, Euan McGookin

Abstract:

The use of optimization algorithms is common across a large number of diverse fields. This work presents a hybrid optimization algorithm applied to a mobile robot tasked with searching an unknown environment. The algorithm is then applied to the multiple-robot case, which reduces the time taken to carry out the search. The hybrid algorithm is a random search algorithm fused with a tabu mechanism. The work shows that the algorithm locates the desired points in less time than a brute-force search. The Tabu Random algorithm is shown to work within a simulated environment using a validated mathematical model. The simulation was run in three different environments with varying numbers of targets. As an algorithm, Tabu Random is small and clear and can be implemented with minimal resources. The power of the algorithm lies in the speed at which it locates points of interest and in its robustness to the number of robots involved: the number of robots can vary with no changes to the algorithm, making it flexible.
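
A minimal sketch of a tabu-augmented random search on a grid (illustrative only; the paper's validated robot model is not reproduced): random candidate moves are proposed, and recently visited cells are tabu for a fixed tenure.

```python
import random

def tabu_random_search(start, is_target, neighbors, tenure=20, max_steps=10000):
    """Random walk that avoids revisiting the last `tenure` cells."""
    tabu, pos = [], start
    for step in range(max_steps):
        if is_target(pos):
            return pos, step
        moves = [n for n in neighbors(pos) if n not in tabu] or neighbors(pos)
        pos = random.choice(moves)            # random move outside the tabu list
        tabu.append(pos)
        if len(tabu) > tenure:
            tabu.pop(0)                       # oldest cell becomes legal again
    return None, max_steps

grid = 30
target = (25, 7)
found, steps = tabu_random_search(
    (0, 0),
    lambda p: p == target,
    lambda p: [((p[0] + dx) % grid, (p[1] + dy) % grid)
               for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]],
)
print(found, steps)   # typically finds the target well before max_steps
```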

Keywords: algorithms, control, multi-agent, search and rescue

Procedia PDF Downloads 218
11185 Probability Sampling in Matched Case-Control Study in Drug Abuse

Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell

Abstract:

Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users, who then identified “friend controls,” and the other using a random sample of non-drug users (controls), who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using a bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
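
A hedged sketch of the bootstrap comparison described (an ordinary logistic model stands in for the study's conditional logistic regression; data are synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_coef_se(X, y, n_boot=100, seed=0):
    """Standard errors of model coefficients across bootstrap resamples."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))    # resample with replacement
        coefs.append(LogisticRegression().fit(X[idx], y[idx]).coef_.ravel())
    return np.std(coefs, axis=0)

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 4))
y = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([0.8, -0.5, 0.0, 0.3]))))
print(bootstrap_coef_se(X, y))   # wide variation signals unstable estimates
```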

Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling

Procedia PDF Downloads 471
11184 Occupational Diseases in the Automotive Industry in Czechia

Authors: J. Jarolímek, P. Urban, P. Pavlínek, D. Dzúrová

Abstract:

Industry constitutes a dominant economic sector in Czechia, and the automotive industry is the most important industrial sector in terms of gross value added and the number of employees. The objective of this study was to analyse the occurrence of occupational diseases (OD) in the automotive industry in Czechia during the 2001-2014 period. Whereas the occurrence of OD in other sectors has generally been decreasing, it has been increasing in the automotive industry, with growing spatial discrepancies. Data on OD cases were retrieved from the National Registry of Occupational Diseases. Further, we conducted a survey in automotive companies with a focus on occupational health services and the companies' positions in global production networks (GPNs). An analysis of OD distribution in the automotive industry was performed (age, gender, company size and its role in GPNs, regional distribution of the studied companies, and regional unemployment rate), accompanied by an assessment of the quality and range of occupational health services. Employees older than 40 years had nearly 2.5 times higher probability of OD occurrence compared with employees younger than 40 years (OR 2.41; 95% CI: 2.05-2.85). The OD occurrence probability was 3 times higher for women than for men (OR 3.01; 95% CI: 2.55-3.55). The OD incidence rate increased with the size of the company. An association between OD incidence and the unemployment rate was not confirmed.

Keywords: occupational diseases, automotive industry, health geography, unemployment

Procedia PDF Downloads 226
11183 Multi-Scale Modeling of Ti-6Al-4V Mechanical Behavior: Effects of Grain Size, Dispersion, and Crystallographic Texture

Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vidal, Farhad Rezai-Aria, Christine Boher

Abstract:

Ti-6Al-4V titanium alloy is one of the most widely used materials in the aeronautical and aerospace industries. Because of its high specific strength and good fatigue and corrosion resistance, this alloy is very suitable for moderate-temperature applications. At room temperature, the mechanical behavior of Ti-6Al-4V is generally controlled by the behavior of the alpha phase (the beta phase fraction is less than 8%). The plastic strain of this phase, which is mainly carried by crystallographic slip, can be hindered by various obstacles and mechanisms (crystal lattice friction, sessile dislocations, strengthening by solute atoms, grain boundaries, etc.). The grain characteristics of the alpha phase (morphology and texture) and the nature of its crystal lattice (hexagonal close-packed) make plastic strain heterogeneous, discontinuous, and anisotropic at the local scale. The aim of this work is to develop a multi-scale model of Ti-6Al-4V mechanical behavior using a crystal plasticity approach; this multi-scale model is then used to investigate the effects of grain size, dispersion of grain size, crystallographic texture, and slip system activation on Ti-6Al-4V mechanical behavior under monotonic quasi-static loading. Nine representative elementary volumes (REVs) are built to account for the physical factors mentioned above (grain size, dispersion, and crystallographic texture), and the boundary conditions of a tension test are applied. Finally, the simulated mechanical behavior of Ti-6Al-4V and a study of slip system activation in the alpha phase are reported. The results show that the macroscopic mechanical behavior of Ti-6Al-4V is strongly linked to the active slip system family (prismatic, basal, or pyramidal). The crystallographic texture determines which family of slip systems can be activated; it therefore gives the plastic strain a heterogeneous character and hence an anisotropic macroscopic mechanical behavior of the modeled Ti-6Al-4V alloy. The grain size also influences the mechanical properties of Ti-6Al-4V, especially the yield stress: as the grain size decreases, the yield strength increases. Finally, the grain-size distribution, which characterizes the morphology (homogeneous or heterogeneous), makes the deformation fields markedly heterogeneous, because crystallographic slip is easier in large grains than in small ones; this localizes plastic deformation in certain areas and concentrates stresses in others.
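
The reported grain-size effect on the yield stress is consistent with the classical Hall-Petch relation, stated here for reference (the abstract does not give fitted constants):

```latex
\sigma_y = \sigma_0 + k_y \, d^{-1/2}
```

where σ_y is the yield stress, σ_0 the lattice friction stress, k_y the Hall-Petch coefficient, and d the average grain diameter.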

Keywords: multi-scale modeling, Ti-6Al-4V alloy, crystal plasticity, grains size, crystallographic texture

Procedia PDF Downloads 137
11182 Programming with Grammars

Authors: Peter M. Maurer

Abstract:

DGL is a context-free grammar-based tool for generating random data. Many types of simulator input data require some computation to be placed in the proper format. For example, it might be necessary to generate ordered triples in which the third element is the sum of the first two elements, or it might be necessary to generate random numbers in some sorted order. Although DGL is universal in computational power, generating these types of data is extremely difficult. To overcome this problem, we have enhanced DGL to include features that permit direct computation within the structure of a context-free grammar. The features have been implemented as special types of productions, preserving the context-free flavor of DGL specifications.
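
A hedged sketch of the idea (a toy generator, not DGL's actual syntax): a production can carry a computation, such as emitting an ordered triple whose third element is the sum of the first two, or emitting random numbers in sorted order.

```python
import random

def triple():
    """Production with embedded computation: (a, b, a + b)."""
    a, b = random.randint(0, 9), random.randint(0, 9)
    return (a, b, a + b)

def sorted_numbers(n=5):
    """Production emitting random numbers in sorted order."""
    return sorted(random.random() for _ in range(n))

grammar = {"TRIPLE": triple, "SORTED": sorted_numbers}
print(grammar["TRIPLE"]())   # e.g. (3, 7, 10)
print(grammar["SORTED"]())
```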

Keywords: DGL, enhanced context-free grammars, programming constructs, random data generation

Procedia PDF Downloads 123
11181 Chitosan-Whey Protein Isolate Core-Shell Nanoparticles as Delivery Systems

Authors: Zahra Yadollahi, Marjan Motiei, Natalia Kazantseva, Petr Saha

Abstract:

Chitosan (CS)-whey protein isolate (WPI) core-shell nanoparticles were synthesized through self-assembly of whey protein isolate polyanions and chitosan polycations in the presence of tripolyphosphate (TPP) as a crosslinker. The formation of this type of nanostructure with a narrow particle size distribution is crucial for developing delivery systems, since the functional characteristics depend strongly on particle size. To achieve this goal, the nanostructure was optimized by varying the concentrations of WPI, CS, and TPP in the reaction mixture. The chemical characteristics, surface morphology, and particle size of the nanoparticles were evaluated.

Keywords: whey protein isolate, chitosan, nanoparticles, delivery system

Procedia PDF Downloads 67
11180 Effects of Particle Size Distribution of Binders on the Performance of Slag-Limestone Ternary Cement

Authors: Zhuomin Zou, Thijs Van Landeghem, Elke Gruyaert

Abstract:

Using supplementary cementitious materials, such as blast-furnace slag and limestone, to replace cement clinker is a promising method to reduce the carbon emissions from cement production. To use slag and limestone efficiently, it is necessary to carefully select the particle size distribution (PSD) of the binders. This study investigated the effects of the PSD of the binders on the performance of slag-limestone ternary cement. The Portland cement (PC) was prepared by grinding 95% clinker + 5% gypsum. Based on the PSD parameters of the binders, three types of ternary cements with a similar overall PSD were designed: NO.1 fine slag, medium PC, and coarse limestone; NO.2 fine limestone, medium PC, and coarse slag; NO.3 fine PC, medium slag, and coarse limestone. The binder contents in the ternary cements were (a) 50% PC, 40% slag, and 10% limestone (the high-cement group) or (b) 35% PC, 55% slag, and 10% limestone (the low-cement group). Pure PC and a binary cement with 50% slag and 50% PC, prepared with the same binders as the ternary cements, were used as reference cements. All these cements were used to investigate mortar performance in terms of workability, strength at 2, 7, 28, and 90 days, carbonation resistance, and non-steady-state chloride migration resistance at 28 and 56 days. Results show that blending medium PC with fine slag can achieve performance comparable to blending fine PC with medium/coarse slag in binary cement. For the three ternary cements in the high-cement group, the ternary cement with fine limestone (NO.2) shows the lowest strength, carbonation, and chloride migration performance. The ternary cements with fine slag (NO.1) and with fine PC (NO.3) show the highest flexural strength at early and late ages, respectively. In addition, compared with the ternary cement with fine PC (NO.3), the ternary cement with fine slag (NO.1) has similar carbonation resistance and better chloride migration resistance. For the low-cement group, the three ternary cements have similar flexural and compressive strength before 7 days. After 28 days, the ternary cement with fine limestone (NO.2) shows the highest flexural strength, while that with fine PC (NO.3) has the highest compressive strength. In addition, the ternary cement with fine slag (NO.1) shows better chloride migration resistance but lower carbonation resistance compared with the other two ternary cements. Moreover, the durability performance of the ternary cement with fine PC (NO.3) is better than that of the cement with fine limestone (NO.2).

Keywords: limestone, particle size distribution, slag, ternary cement

Procedia PDF Downloads 99
11179 Image Inpainting Model with Small Sample Size Based on Generative Adversarial Network and Genetic Algorithm

Authors: Jiawen Wang, Qijun Chen

Abstract:

The performance of most machine-learning methods for image inpainting depends on the quantity and quality of the training samples. However, it is very expensive or even impossible to obtain a large number of training samples in many scenarios. In this paper, an image inpainting model based on a generative adversarial network (GAN) is constructed for cases when the number of training samples is small. First, a feature extraction network (F-net) is incorporated into the GAN to utilize the available information of the image to be inpainted. The weighted sum of the extracted features and random noise acts as the input to the generative network (G-net). The proposed network can be trained well even when the sample size is very small. Second, in the completion phase for each damaged image, a genetic algorithm is designed to search for an optimized noise input for the G-net; based on this optimized input, the parameters of the G-net and F-net are further tuned (once the completion of a given damaged image ends, the parameters are restored to the original values obtained in the training phase) to generate an image patch that not only fills the missing part of the damaged image smoothly but also has visual semantics.
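
A hedged NumPy sketch of the genetic search over the generator's noise input (the G-net is mocked by a fixed function, and the fitness is a stand-in; the paper's networks are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)
target = rng.standard_normal(16)        # stand-in for "fits the damaged image"
g_net = lambda z: np.tanh(z)            # mock generator
fitness = lambda z: -np.sum((g_net(z) - target) ** 2)

pop = rng.standard_normal((40, 16))     # initial population of noise vectors
for gen in range(200):
    scores = np.array([fitness(z) for z in pop])
    parents = pop[np.argsort(scores)[-10:]]      # selection: keep the 10 fittest
    kids = []
    for _ in range(30):
        a, b = parents[rng.integers(0, 10, 2)]
        mask = rng.random(16) < 0.5              # uniform crossover
        child = np.where(mask, a, b) + 0.1 * rng.standard_normal(16)  # mutation
        kids.append(child)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(z) for z in pop])]
print(fitness(best))   # rises toward its maximum as the search converges
```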

Keywords: image inpainting, generative adversarial nets, genetic algorithm, small sample size

Procedia PDF Downloads 105
11178 Unbalanced Distribution Optimal Power Flow to Minimize Losses with Distributed Photovoltaic Plants

Authors: Malinwo Estone Ayikpa

Abstract:

Electric power systems should operate with minimum losses and with voltages meeting international standards. This is generally made possible by control actions provided by automatic voltage regulators, capacitors, and transformers with on-load tap changers (OLTC). With the development of photovoltaic (PV) system technology, their integration into distribution networks has increased in recent years, to the point of replacing the above-mentioned techniques. The conventional analysis and simulation tools used for electrical networks are no longer able to take into account the control actions necessary for studying the impact of distributed PV generation. This paper presents an unbalanced optimal power flow (OPF) model that minimizes losses by combining active power generation with reactive power control of single-phase and three-phase PV systems. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. The unbalanced OPF is formulated with current balance equations and solved by a primal-dual interior point method. Several simulation cases have been carried out varying the size and location of the PV systems, and the results give a detailed view of the impact of distributed PV generation on distribution systems.
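
One concrete element of the control described, stated as a hedged sketch: the reactive power an inverter can inject or absorb is bounded by its apparent-power rating and its current active output (a standard capability relation, not a formula from the paper):

```python
import math

def reactive_capacity(s_rated_kva, p_out_kw):
    """Available reactive power of an inverter: |Q| <= sqrt(S^2 - P^2)."""
    return math.sqrt(max(s_rated_kva**2 - p_out_kw**2, 0.0))

print(reactive_capacity(10.0, 8.0))   # 6.0 kvar available at 8 kW output
```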

Keywords: distribution system, loss, photovoltaic generation, primal-dual interior point method

Procedia PDF Downloads 306
11177 A New Concept for Deriving the Expected Value of Fuzzy Random Variables

Authors: Liang-Hsuan Chen, Chia-Jung Chang

Abstract:

Fuzzy random variables (FRVs) have been introduced as an imprecise concept of numeric values for characterizing imprecise knowledge. Descriptive parameters can be used to describe the primary features of a set of fuzzy random observations. In fuzzy environments, expected values are usually represented as fuzzy-valued, interval-valued, or numeric-valued descriptive parameters using various metrics. Instead of the area metric usually adopted in the relevant studies, a numeric expected value is proposed in this study based on a distance metric that captures the two characteristics (fuzziness and randomness) of FRVs. Compared with existing measures, the results show that the proposed numeric expected value coincides with those obtained using the other metrics when only triangular membership functions are used; however, the proposed approach has the advantages of intuitiveness and computational efficiency when the membership functions are not triangular. An example with three datasets is provided to verify the proposed approach.

Keywords: fuzzy random variables, distance measure, expected value, descriptive parameters

Procedia PDF Downloads 318
11176 Research on Placement Method of the Magnetic Flux Leakage Sensor Based on Online Detection of the Transformer Winding Deformation

Authors: Wei Zheng, Mao Ji, Zhe Hou, Meng Huang, Bo Qi

Abstract:

The transformer is a key piece of equipment in the power system. Winding deformation is one of the main transformer defects, and timely and effective detection of winding deformation can ensure the safe and stable operation of the transformer to the maximum extent. When winding deformation occurs, the size, shape, and spatial position of the winding change, which directly changes the magnetic flux leakage distribution. Therefore, it is promising to study online detection methods for transformer winding deformation based on magnetic flux leakage characteristics, in which the key step is to determine the optimal placement of magnetic flux leakage sensors inside the transformer. In this paper, a simulation model of transformer winding deformation is established to obtain the internal magnetic flux leakage distribution of the transformer under normal operation and under different winding deformation conditions, and the way the magnetic flux leakage distribution changes with winding deformation is analyzed. The results show that different winding deformations lead to different characteristics of the magnetic flux leakage distribution. On this basis, an optimized placement of magnetic flux leakage sensors inside the transformer is proposed to provide a basis for online detection of transformer winding deformation based on magnetic flux leakage characteristics.

Keywords: magnetic flux leakage, sensor placement method, transformer, winding deformation

Procedia PDF Downloads 160
11175 The Effect of Information Technology on the Quality of Accounting Information

Authors: Mohammad Hadi Khorashadi Zadeh, Amin Karkon, Hamid Golnari

Abstract:

This study, conducted in 2014, investigated the impact of information technology on the quality of accounting information. From a population of 425 executives of companies listed on the Tehran Stock Exchange, a sample of 84 managers was drawn by simple random sampling, with the sample size determined by the Cochran formula. Data were collected by questionnaire: some questions on the impact of information technology came from standardized questionnaires, and the rest were designed according to existing components. After distribution and collection of the questionnaires, data analysis and hypothesis testing were conducted in two parts using structural equation modeling in the SmartPLS 2 software: the measurement model and the structural model. In the first part, the technical characteristics of the questionnaire, including reliability and convergent and divergent validity, were checked for PLS; in the second part, the significance coefficients were used to examine the research hypotheses. The results showed that information technology and its dimensions (timeliness, relevance, accuracy, adequacy, and actual transfer rate) affect the quality of accounting information of companies listed on the Tehran Stock Exchange.
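
For reference, a standard form of the Cochran sample-size formula with finite-population correction (the abstract does not state its exact parameter choices, so the values below are illustrative):

```python
import math

def cochran_n(N, z=1.96, p=0.5, e=0.1):
    """Cochran sample size with finite-population correction:
    n0 = z^2 p(1-p) / e^2, then n = n0 / (1 + (n0 - 1)/N)."""
    n0 = z**2 * p * (1 - p) / e**2
    return math.ceil(n0 / (1 + (n0 - 1) / N))

print(cochran_n(425))   # 79 for N=425 with these illustrative parameters
```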

Keywords: information technology, information quality, accounting, transfer speed

Procedia PDF Downloads 252
11174 Probabilistic Gathering of Agents with Simple Sensors: Distributed Algorithm for Aggregation of Robots Equipped with Binary On-Board Detectors

Authors: Ariel Barel, Rotem Manor, Alfred M. Bruckstein

Abstract:

We present a probabilistic gathering algorithm for agents that can only detect the presence of other agents in front of or behind them. The agents act in the plane and are identical and indistinguishable, oblivious, and lack any means of direct communication. They do not have a common frame of reference in the plane and choose their orientation (direction of possible motion) at random. The analysis of the gathering process assumes that the agents act synchronously in selecting random orientations that remain fixed during each unit time-interval. Two algorithms are discussed. The first one assumes discrete jumps based on the sensing results given the randomly selected motion direction, and in this case, extensive experimental results exhibit probabilistic clustering into a circular region with radius equal to the step-size in time proportional to the number of agents. The second algorithm assumes agents with continuous sensing and motion, and in this case, we can prove gathering into a very small circular region in finite expected time.
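
A hedged simulation sketch of the discrete-jump variant. The jump rule used here (step forward only when all other agents are sensed in front) is an assumption consistent with the description, not the paper's exact rule:

```python
import numpy as np

rng = np.random.default_rng(5)
pos = rng.uniform(-10, 10, (20, 2))              # 20 agents in the plane
step = 0.5

for t in range(300):
    theta = rng.uniform(0, 2 * np.pi, len(pos))  # fresh random orientations
    heading = np.column_stack([np.cos(theta), np.sin(theta)])
    rel = pos[None, :, :] - pos[:, None, :]      # vectors from each agent to the others
    proj = np.einsum('ijk,ik->ij', rel, heading) # projection onto own heading
    np.fill_diagonal(proj, 0.0)
    sees_front = (proj > 0).any(axis=1)          # binary "someone ahead" sensor
    sees_back = (proj < 0).any(axis=1)           # binary "someone behind" sensor
    move = sees_front & ~sees_back               # assumed jump rule
    pos[move] += step * heading[move]

print(np.ptp(pos, axis=0))   # spread should shrink as the agents cluster
```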

Keywords: control, decentralized, gathering, multi-agent, simple sensors

Procedia PDF Downloads 143
11173 A Study of the Alumina Distribution in the Lab-Scale Cell during Aluminum Electrolysis

Authors: Olga Tkacheva, Pavel Arkhipov, Alexey Rudenko, Yurii Zaikov

Abstract:

The aluminum electrolysis process in a conventional cryolite-alumina electrolyte with a cryolite ratio of 2.7 was carried out at an initial temperature of 970 °C and an anode current density of 0.5 A/cm2 in a 15 A lab-scale cell in order to study the formation of the side ledge during electrolysis and the alumina distribution between the electrolyte and the side ledge. The alumina contained 35.97% α-phase and 64.03% γ-phase, with particle sizes in the range of 10-120 μm. The cryolite ratio and the alumina concentration were determined in the molten electrolyte during electrolysis and in the frozen bath after electrolysis. The side ledge in the electrolysis cell formed only by the 13th hour of electrolysis; with a slight temperature decrease, a significant increase in the side ledge thickness was observed. The basic components of the side ledge, obtained by XRD phase analysis, were Na3AlF6, Na5Al3F14, Al2O3, and NaF.5CaF2.AlF3. As in industrial cells, an increased alumina concentration was found in the side ledge formed on the cell walls and at the ledge-electrolyte-aluminum three-phase boundary during aluminum electrolysis in the lab cell (FTP No 05.604.21.0239, IN RFMEFI60419X0239).

Keywords: alumina distribution, aluminum electrolyzer, cryolite-alumina electrolyte, side ledge

Procedia PDF Downloads 248
11172 Evaluation of Best-Fit Probability Distribution for Prediction of Extreme Hydrologic Phenomena

Authors: Karim Hamidi Machekposhti, Hossein Sedghi

Abstract:

Probability distributions are the best method for forecasting extreme hydrologic phenomena such as rainfall and flood flows. In this research, in order to determine suitable probability distributions for estimating annual extreme rainfall and flood flow (discharge) series with different return periods, 40 years of precipitation data and 58 years of discharge data were collected from the Karkheh River in Iran. After homogeneity and adequacy tests, the data were analyzed with the Stormwater Management and Design Aid (SMADA) software and by the residual sum of squares (R.S.S.). The best probability distribution was Log Pearson Type III, with R.S.S. values of 145.91 and 13.67 for peak discharge and 141.08 and 8.95 for maximum discharge at the Jelogir Majin and Pole Zal stations, respectively. The best distribution for maximum precipitation at the Jelogir Majin and Pole Zal stations was the Log Pearson Type III distribution, with R.S.S. values of 1.74 and 1.90, followed by the Pearson Type III distribution, with R.S.S. values of 1.53 and 1.69. Overall, the Log Pearson Type III distribution is an acceptable distribution type for representing statistics of extreme hydrologic phenomena on the Karkheh River in Iran, with the Pearson Type III distribution as a potential alternative.
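
A hedged sketch of fitting a Log Pearson Type III distribution and scoring it by a residual sum of squares (scipy's pearson3 applied to log-transformed data; the flow series is synthetic, not the Karkheh record):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
flows = rng.lognormal(mean=6.0, sigma=0.5, size=58)   # synthetic annual peaks

log_q = np.log10(flows)
skew, loc, scale = stats.pearson3.fit(log_q)          # LP3 = Pearson III on log data

# residual sum of squares between empirical and fitted CDF
x = np.sort(log_q)
emp = (np.arange(1, len(x) + 1) - 0.44) / (len(x) + 0.12)  # Gringorten plotting position
rss = np.sum((emp - stats.pearson3.cdf(x, skew, loc, scale)) ** 2)
print(rss)

# e.g., 100-year event: quantile at non-exceedance probability 0.99
print(10 ** stats.pearson3.ppf(0.99, skew, loc, scale))
```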

Keywords: Karkheh River, Log Pearson Type III, probability distribution, residual sum of squares

Procedia PDF Downloads 176
11171 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest

Authors: Bharatendra Rai

Abstract:

Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. The presence of too many features in machine learning not only slows algorithms down but can also decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used to develop predictive random forest models. The study also explores five different data-partitioning ratios, and their impact on model accuracy is captured using the coefficient of determination (r-square) and root mean square error (RMSE).
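
A hedged sketch of the Boruta-then-random-forest workflow (assuming the third-party boruta package, BorutaPy; data are synthetic, not the housing dataset):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from boruta import BorutaPy   # assumed: pip install Boruta

X, y = make_regression(n_samples=500, n_features=20, n_informative=8,
                       noise=10, random_state=7)

rf = RandomForestRegressor(n_estimators=200, random_state=7)
selector = BorutaPy(rf, n_estimators='auto', random_state=7)
selector.fit(X, y)                     # wrapper: compares features to shadow copies
X_sel = X[:, selector.support_]        # keep confirmed features only

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=7)
model = RandomForestRegressor(n_estimators=200, random_state=7).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(r2_score(y_te, pred), mean_squared_error(y_te, pred) ** 0.5)  # r-square, RMSE
```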

Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error

Procedia PDF Downloads 291
11170 Solving Process Planning, Weighted Apparent Tardiness Cost Dispatching, and Weighted Processing plus Weight Due-Date Assignment Simultaneously Using a Hybrid Search

Authors: Halil Ibrahim Demir, Caner Erden, Abdullah Hulusi Kokcam, Mumtaz Ipek

Abstract:

Process planning, scheduling, and due-date assignment are three important manufacturing functions that are usually studied independently in the literature. There are hundreds of works on the IPPS and SWDDA problems but only a few on the IPPSDDA problem. Integrating these three functions is crucial because of the strong interdependence among them. Since the scheduling problem alone is NP-hard, the integrated problem is even harder to solve. This study focuses on the integration of these functions. The sum of weighted tardiness, earliness, and due-date-related costs is used as the penalty function. Random search and hybrid metaheuristics are used to solve the integrated problem. The marginal improvement from random search is very high in the early iterations and shrinks enormously in later iterations; at that point, directed search contributes more marginal improvement than random search. In this study, random and genetic search methods are therefore combined to find better solutions. Results show that overall performance improves as the integration level increases.
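
A hedged sketch of the kind of penalty function described (the weights and the due-date cost term are illustrative, not the paper's exact formulation):

```python
def schedule_penalty(jobs):
    """Sum of weighted tardiness, earliness, and due-date-related costs.
    Each job: (completion_time, due_date, w_tardy, w_early, w_due)."""
    total = 0.0
    for c, d, wt, we, wd in jobs:
        total += wt * max(c - d, 0)    # weighted tardiness
        total += we * max(d - c, 0)    # weighted earliness
        total += wd * d                # cost of quoting a long due date
    return total

print(schedule_penalty([(12, 10, 3, 1, 0.5), (8, 10, 3, 1, 0.5)]))  # 18.0
```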

Keywords: process planning, genetic algorithm, hybrid search, random search, weighted due-date assignment, weighted scheduling

Procedia PDF Downloads 339
11169 KSVD-SVM Approach for Spontaneous Facial Expression Recognition

Authors: Dawood Al Chanti, Alice Caplier

Abstract:

Sparse representations of signals have received a great deal of attention in recent years. In this paper, the interest of using sparse representation as a means of performing sparse discriminative analysis between spontaneous facial expressions is demonstrated. An automatic facial expression recognition system is presented. It uses a KSVD-SVM approach made of three main stages: a pre-processing and feature extraction stage, which solves the problem of shared subspace distribution based on random projection theory to obtain low-dimensional discriminative and reconstructive features; a dictionary learning and sparse coding stage, which uses the KSVD model to learn discriminative under- or over-complete dictionaries for sparse coding; and finally a classification stage, which uses an SVM classifier for facial expression recognition. Our main concern is to be able to recognize non-basic affective states and non-acted expressions. Extensive experiments on the JAFFE static acted facial expression database, and also on the DynEmo dynamic spontaneous facial expression database, exhibit very good recognition rates.
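
A hedged sklearn sketch of the three-stage pipeline (sklearn's dictionary learner stands in for KSVD, which sklearn does not implement; the descriptors and labels are synthetic):

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
X = rng.standard_normal((300, 1024))            # stand-in face descriptors
y = rng.integers(0, 6, 300)                     # 6 expression classes

# stage 1: random projection to a low-dimensional subspace
X_rp = GaussianRandomProjection(n_components=128, random_state=8).fit_transform(X)

# stage 2: dictionary learning + sparse coding (KSVD surrogate)
dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm='omp',
                                   transform_n_nonzero_coefs=5, random_state=8)
codes = dico.fit(X_rp).transform(X_rp)          # sparse codes as features

# stage 3: SVM classification on the sparse codes
X_tr, X_te, y_tr, y_te = train_test_split(codes, y, random_state=8)
print(SVC(kernel='linear').fit(X_tr, y_tr).score(X_te, y_te))  # chance-level on noise
```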

Keywords: dictionary learning, random projection, pose and spontaneous facial expression, sparse representation

Procedia PDF Downloads 277
11168 Characteristics of the Particle Size Distribution and Exposure Concentrations of Nanoparticles Generated from the Laser Metal Deposition Process

Authors: Yu-Hsuan Liu, Ying-Fang Wang

Abstract:

The objectives of the present study are to characterize nanoparticles generated from the laser metal deposition (LMD) process and to estimate the particle concentrations deposited in the head (H), tracheobronchial (TB), and alveolar (A) regions of the respiratory tract. The studied LMD chamber (3.6 m × 3.8 m × 2.9 m) is equipped with a robotic laser metal deposition machine. A direct-reading scanning mobility particle sizer (SMPS, Model 3082, TSI Inc., St. Paul, MN, USA) was used for static sampling inside the chamber to measure nanoparticle number concentrations and particle size distributions. The SMPS recorded particle number concentrations every 3 minutes over a diameter range of 11-372 nm, with the aerosol and sheath flow rates set at 0.6 and 6 L/min, respectively. The resulting size distributions were used to predict the deposition of nanoparticles in the H, TB, and A regions of the respiratory tract using the UK National Radiological Protection Board's (NRPB's) LUDEP software. Results show that the nanoparticle number concentrations in the indoor background and the LMD chamber were 4.8×10³ and 4.3×10⁵ #/cm³, respectively. The nanoparticles emitted from the LMD process followed a unimodal distribution with a number median diameter (NMD) of 142 nm and a geometric standard deviation (GSD) of 1.86. The fraction of nanoparticles deposited in the alveolar region (A: 69.8%) was higher than in the other two regions (H: 10.9%; TB: 19.3%). This study used static sampling to measure the nanoparticles generated in the LMD process, and the results show that the fraction of particles deposited in the A region was the highest. The characteristics of nanoparticles emitted from the LMD process could therefore provide valuable scientific evidence for future exposure assessments.
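
For reference, an NMD of 142 nm with a GSD of 1.86 specifies a lognormal size distribution; a minimal sketch of sampling it (illustrative, independent of the study's data):

```python
import numpy as np

rng = np.random.default_rng(9)
nmd, gsd = 142.0, 1.86         # number median diameter (nm), geometric std dev
diam = rng.lognormal(mean=np.log(nmd), sigma=np.log(gsd), size=100_000)

print(np.median(diam))                 # ~142 nm
print(np.exp(np.std(np.log(diam))))    # ~1.86
```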

Keywords: exposure assessment, laser metal deposition process, nanoparticle, respiratory region

Procedia PDF Downloads 264
11167 Classification for Obstructive Sleep Apnea Syndrome Based on Random Forest

Authors: Cheng-Yu Tsai, Wen-Te Liu, Shin-Mei Hsu, Yin-Tzu Lin, Chi Wu

Abstract:

Background: Obstructive sleep apnea syndrome (OSAS) is a common respiratory disorder during sleep. Body parameters have been identified as highly predictive of OSAS severity, yet their effects on OSAS severity remain unclear. Objective: The objective of this study is to establish a prediction model for OSAS using body parameters and to investigate the effects of body parameters on OSAS. Methodologies: Severity was quantified by polysomnography as the mean hourly number of dips in oxygen saturation greater than 3% during examination at a hospital in New Taipei City, Taiwan. Four levels of OSAS severity were classified by the apnea-hypopnea index (AHI) following the American Academy of Sleep Medicine (AASM) guideline. Body parameters, including neck circumference, waist size, and body mass index (BMI), were obtained from a questionnaire. The subjects were divided into two groups: a training group used to build the random forest (RF) classifier and a testing group used to evaluate classification accuracy. Results: 3,330 subjects who had undergone polysomnography for the evaluation of OSAS severity were recruited in this study. An RF of 1,000 trees correctly classified 79.94% of test cases. When further evaluated on the test cohort, the RF showed waist size and BMI to be the most important factors for OSAS. Conclusion: It is possible to prescreen patients using body parameters, which can pre-evaluate health risks.

Keywords: apnea and hypopnea index, Body parameters, obstructive sleep apnea syndrome, Random Forest

Procedia PDF Downloads 121
11166 Long Term Love Relationships Analyzed as a Dynamic System with Random Variations

Authors: Nini Johana Marín Rodríguez, William Fernando Oquendo Patino

Abstract:

In this work, we model a coupled system in which we explore the effects of steady and random behavior on a linear system, as an extension of the classic Strogatz model. This is exemplified by modeling the love dynamics of a couple as a linear system of two coupled differential equations and studying its stability for four types of lovers, chosen as CC = 'Cautious-Cautious', OO = 'Only other feelings', OP = 'Opposites', and RR = 'Romeo the Robot'. We explore the effects of, first, introducing saturation and, second, adding a random variation to one of the CC-type lovers, shaping his character so as to model how his variability influences the dynamics between love and hate in a long-run relationship. This work could also be useful for modeling other kinds of systems whose interactions can be described as linear systems with external or internal random influence. We found that the final results are not easy to predict, and a strong dependence on initial conditions appears, which is a signature of chaos.
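
A hedged sketch of the kind of model described: the classic linear Strogatz system dR/dt = aR + bJ, dJ/dt = cR + dJ, integrated with a small random perturbation on one lover (the coefficients and noise level are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(10)
a, b, c, d = -0.2, 0.8, -0.8, -0.2    # 'cautious' coupling (illustrative)
dt, steps, noise = 0.01, 5000, 0.3

R, J = 1.0, 0.0                        # initial feelings
for _ in range(steps):
    # Euler-Maruyama step: only R is randomly perturbed
    dR = (a * R + b * J) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    dJ = (c * R + d * J) * dt
    R, J = R + dR, J + dJ

print(R, J)   # long-run state fluctuates around the deterministic stable spiral
```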

Keywords: differential equations, dynamical systems, linear system, love dynamics

Procedia PDF Downloads 325
11165 Bayesian Approach for Moving Extremes Ranked Set Sampling

Authors: Said Ali Al-Hadhrami, Amer Ibrahim Al-Omari

Abstract:

In this paper, Bayesian estimation of the mean of the exponential distribution is considered using Moving Extremes Ranked Set Sampling (MERSS). Three priors are used: Jeffreys, conjugate, and constant priors, under both MERSS and simple random sampling (SRS). Some properties of the proposed estimators are investigated. It is found that the suggested estimators based on MERSS are more efficient than their counterparts based on SRS.
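
For the conjugate case under plain random sampling, the mechanics are standard (a hedged sketch; the paper's MERSS-specific posteriors are not reproduced): with an exponential likelihood with rate λ and a Gamma(α, β) prior on λ, the posterior is Gamma(α + n, β + Σx):

```python
import numpy as np

rng = np.random.default_rng(11)
true_rate = 2.0
x = rng.exponential(1 / true_rate, size=50)       # exponential sample

alpha0, beta0 = 1.0, 1.0                          # Gamma prior on the rate
alpha_post = alpha0 + len(x)                      # conjugate update
beta_post = beta0 + x.sum()

post_rate_mean = alpha_post / beta_post           # posterior mean of lambda
post_mean_of_mean = beta_post / (alpha_post - 1)  # posterior mean of 1/lambda
print(post_rate_mean, post_mean_of_mean)
```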

Keywords: Bayesian, efficiency, moving extreme ranked set sampling, ranked set sampling

Procedia PDF Downloads 486
11164 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of progressive Type-I right censoring on the design of simple step-stress accelerated life testing using a Bayesian approach for Weibull-life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. Moreover, the results show that the use of direct or indirect priors affects the precision of the test.

Keywords: reliability, accelerated life testing, cumulative exposure model, Bayesian estimation, progressive type-I censoring, Weibull distribution

Procedia PDF Downloads 479
11163 Optimal Placement and Sizing of Energy Storage System in Distribution Network with Photovoltaic Based Distributed Generation Using Improved Firefly Algorithms

Authors: Ling Ai Wong, Hussain Shareef, Azah Mohamed, Ahmad Asrul Ibrahim

Abstract:

The installation of photovoltaic-based distributed generation (PVDG) in an active distribution system can lead to voltage fluctuation due to the intermittent and unpredictable PVDG output power. This paper presents a method for mitigating the voltage rise by optimally locating and sizing the battery energy storage system (BESS) in a PVDG-integrated distribution network. An improved firefly algorithm is used to perform the optimal placement and sizing. Three objective functions are presented, considering the voltage deviation and BESS off-time, with the state of charge as a constraint. The performance of the proposed method is compared with other optimization methods, namely the original firefly algorithm and the gravitational search algorithm. Simulation results show that the proposed optimal BESS location and size improve voltage stability.
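
A hedged sketch of the core firefly update rule in its standard form (the paper's specific improvements are not reproduced): a dimmer firefly moves toward a brighter one with attractiveness decaying in distance, plus a random step.

```python
import numpy as np

rng = np.random.default_rng(12)
objective = lambda x: np.sum((x - 3.0) ** 2)    # toy problem: minimize distance to 3

n, dim = 15, 4
beta0, gamma, alpha = 1.0, 1.0, 0.1             # attractiveness, absorption, randomness
X = rng.uniform(-5, 5, (n, dim))                # firefly positions

for it in range(200):
    brightness = -np.array([objective(x) for x in X])
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:   # move i toward brighter j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)

best = X[np.argmin([objective(x) for x in X])]
print(best)   # approaches [3, 3, 3, 3]
```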

Keywords: BESS, firefly algorithm, PVDG, voltage fluctuation

Procedia PDF Downloads 300
11162 Wireless Sensor Network for Forest Fire Detection and Localization

Authors: Tarek Dandashi

Abstract:

WSNs may provide a fast and reliable solution for the early detection of environmental events such as forest fires, which is crucial for alerting and calling for fire brigade intervention. Sensor nodes communicate sensor data to a host station, which enables a global analysis and the generation of a reliable decision on a potential fire and its location. A WSN implemented with TinyOS and nesC is presented for capturing and transmitting a variety of sensor information with controlled source, data rates, and duration, and for recording and displaying activity traces. We propose a similarity distance (SD) between the distribution of the currently sensed data and that of a reference. At any given time, a fire causes diverging opinions in the reported data, which alters the usual data distribution. Essentially, SD is a metric on the cumulative distribution function (CDF). SD is designed to be invariant to day-to-day temperature changes, changes due to the surrounding environment, and normal changes in weather, which preserve the data locality. Evaluation shows that SD sensitivity is quadratic in the increase of sensor node temperature for groups of sensors of different sizes and neighborhoods. Simulation of fire spreading, with ignition placed at random locations under some wind speed, shows that SD takes a few minutes to reliably detect fires and locate them. We also discuss false negatives and false positives and their impact on decision reliability.
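
A hedged sketch of a CDF-based similarity distance in the spirit described (a Kolmogorov-Smirnov-style statistic between current readings and a reference; the data and threshold behavior are illustrative, not the paper's exact metric):

```python
import numpy as np

def sd_cdf_distance(current, reference):
    """Max gap between the empirical CDFs of two samples (KS-style metric)."""
    grid = np.sort(np.concatenate([current, reference]))
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side='right') / len(s)
    return np.max(np.abs(cdf(current) - cdf(reference)))

rng = np.random.default_rng(13)
reference = rng.normal(25, 2, 500)              # normal-day temperatures
normal_day = rng.normal(25.5, 2, 100)
fire_event = rng.normal(40, 5, 100)             # diverging readings
print(sd_cdf_distance(normal_day, reference))   # small: no alarm
print(sd_cdf_distance(fire_event, reference))   # near 1: raise alarm
```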

Keywords: forest fire, WSN, wireless sensor network, algorithm

Procedia PDF Downloads 242