Search results for: weighted approximation
Paper Count: 1033


913 Statistical Convergence of the Szasz-Mirakjan-Kantorovich-Type Operators

Authors: Rishikesh Yadav, Ramakanta Meher, Vishnu Narayan Mishra

Abstract:

The main aim of this article is to investigate the statistical convergence of the summation of integral type operators and to obtain the weighted statistical convergence. The rate of statistical convergence by means of the modulus of continuity and of functions belonging to the Lipschitz class is also studied. We discuss the convergence of the defined operators by graphical representation and obtain a better rate of convergence than that of the Szasz-Mirakjan-Kantorovich operators. In the last section, we extend the said operators to bivariate operators in order to study the rate of convergence in the sense of the modulus of continuity and by means of the Lipschitz class, using functions of two variables.
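
As a rough illustration of the operators under study, the sketch below numerically evaluates a Kantorovich-type Szasz-Mirakjan operator K_n(f; x) = n e^{-nx} sum_k ((nx)^k / k!) int_{k/n}^{(k+1)/n} f(t) dt; the series truncation bound, the midpoint quadrature, and the test function f(t) = t^2 are illustrative choices, not taken from the paper.

```python
import math

def smk_operator(f, n, x, quad=20):
    """Kantorovich-type Szasz-Mirakjan operator
    K_n(f; x) = n e^{-nx} sum_k (nx)^k/k! * int_{k/n}^{(k+1)/n} f(t) dt,
    with the series truncated well past the Poisson mean nx."""
    if x == 0.0:
        ks = [0]                     # only the k = 0 term survives at x = 0
    else:
        mean = n * x
        ks = range(int(mean + 12 * math.sqrt(mean) + 25))
    total = 0.0
    for k in ks:
        # Poisson weight computed in log space to avoid overflow
        w = 1.0 if x == 0.0 else math.exp(-n * x + k * math.log(n * x)
                                          - math.lgamma(k + 1))
        h = 1.0 / (n * quad)         # midpoint rule on [k/n, (k+1)/n]
        integral = sum(f(k / n + (j + 0.5) * h) for j in range(quad)) * h
        total += w * n * integral
    return total

f = lambda t: t * t
for n in (10, 50, 250):
    print(n, smk_operator(f, n, 1.0))   # approaches f(1) = 1 as n grows
```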

Keywords: The Szasz-Mirakjan-Kantorovich operators, statistical convergence, modulus of continuity, Peetre's K-functional, weighted modulus of continuity

Procedia PDF Downloads 170
912 First Principle Calculations of Magnetic and Electronic Properties of Double Perovskite Ba2MnMoO6

Authors: B. Bouadjemi, S. Bentata, W. Benstaali, A. Souidi, A. Abbad, T. Lantri, Z. Aziz, A. Zitouni

Abstract:

The electronic and magnetic structures of the double perovskite Ba2MnMoO6 are systematically investigated using the first-principles method of Full Potential Linear Augmented Plane Waves Plus Local Orbitals (FP-LAPW+LO) within the Local Spin Density Approximation (LSDA) and the Generalized Gradient Approximation (GGA). In order to take into account the strong on-site Coulomb interaction, we included the Hubbard correlation terms in the LSDA+U and GGA+U approaches. Whereas a half-metallic ferromagnetic character is observed, due to dominant Mn spin-up and Mo spin-down contributions, an insulating ground state is also obtained. The LSDA+U and GGA+U calculations yield better agreement with the theoretical and experimental results than LSDA and GGA do.

Keywords: electronic structure, double perovskite, first principles, Ba2MnMoO6, half-metallic

Procedia PDF Downloads 411
911 Hybrid Artificial Bee Colony and Least Squares Method for Rule-Based Systems Learning

Authors: Ahcene Habbi, Yassine Boudouaoui

Abstract:

This paper deals with the problem of automatic rule generation for fuzzy systems design. The proposed approach is based on hybrid artificial bee colony (ABC) optimization and the weighted least squares (LS) method, and it aims to find the structure and parameters of fuzzy systems simultaneously. More precisely, two ABC-based fuzzy modeling strategies are presented and compared. The first strategy uses global optimization to learn fuzzy models; the second hybridizes ABC and the weighted least squares estimation method. The performances of the proposed ABC and ABC-LS fuzzy modeling strategies are evaluated on complex modeling problems and compared to other advanced modeling methods.
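
For the weighted least squares step that the hybrid strategy relies on, a minimal sketch follows; the normal-equations formulation is standard, while the synthetic data and the uniform random weights (standing in for, e.g., rule firing strengths) are assumptions for illustration.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve min_beta sum_i w_i (y_i - x_i . beta)^2 via the normal
    equations (X^T W X) beta = X^T W y."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=100)
w = rng.uniform(0.5, 1.0, size=100)     # e.g. fuzzy rule firing strengths
print(weighted_least_squares(X, y, w))  # close to [1, 2]
```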

Keywords: automatic design, learning, fuzzy rules, hybrid, swarm optimization

Procedia PDF Downloads 409
910 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text

Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni

Abstract:

The problem of entity relation discovery, a well-covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. The entities can be drawn from a whole dictionary or from a specific collection of named items. In many cases, machine learning and/or text mining techniques are used for this goal. These approaches might be unfeasible in computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the cooccurrences of any two words (entities) in order to create a graph of relations - a cooccurrence graph. Indeed, each cooccurrence indicates some degree of semantic correlation between the words, because related words are more likely to appear close to each other than at opposite ends of the text. Some authors have used sliding windows for this problem: they count all the cooccurrences within a sliding window running over the whole text. In this paper we generalise this technique into a Weighted-Distance Sliding Window, where each occurrence of two named items within the window is accounted with a weight depending on the distance between the items: a closer distance implies stronger evidence of a relationship. We develop an experiment in order to support this intuition, by applying the technique to a data set consisting of the text of the Bible, split into verses.
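
A minimal sketch of the proposed weighting idea, assuming a 1/d decay with distance d within the window (the paper's actual weight function may differ):

```python
from collections import defaultdict

def weighted_cooccurrence_graph(tokens, window=5):
    """Build a cooccurrence graph where each pair of tokens appearing
    within `window` positions adds an edge weight that decays with the
    distance between them (here 1/d, an illustrative choice)."""
    edges = defaultdict(float)
    for i, u in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            v = tokens[j]
            if u != v:
                edges[tuple(sorted((u, v)))] += 1.0 / (j - i)
    return edges

verse = "in the beginning god created the heaven and the earth".split()
graph = weighted_cooccurrence_graph(verse)
for pair, w in sorted(graph.items(), key=lambda kv: -kv[1])[:5]:
    print(pair, round(w, 3))
```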

Keywords: cooccurrence graph, entity relation graph, unstructured text, weighted distance

Procedia PDF Downloads 117
909 Breast Cancer Survivability Prediction via Classifier Ensemble

Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia

Abstract:

This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a features selection component and a classifier ensemble component. The features selection component divides the features in the SEER database into four groups. After that, it tries to find the most important features among the four groups, those that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER selected through the features selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same SEER data (period of 1973-2002). It gives an 87.39% weighted average F-score, compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
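
A hedged sketch of the overall architecture using scikit-learn's StackingClassifier; the synthetic data, the hyperparameters, and the omission of the Bayesian network base learner (scikit-learn has no such estimator) are stand-ins, not the paper's exact setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for SEER features; each base learner would normally
# see a different feature subset chosen by the selection module
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=5)),
                ("nb", GaussianNB())],
    final_estimator=GaussianNB(),    # meta-classifier on base outputs
    stack_method="predict_proba")    # pass confidence scores upward
ensemble.fit(X_tr, y_tr)
print(f1_score(y_te, ensemble.predict(X_te), average="weighted"))
```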

Keywords: classifier ensemble, breast cancer survivability, data mining, SEER

Procedia PDF Downloads 295
908 Using Scale Invariant Feature Transform Features to Recognize Characters in Natural Scene Images

Authors: Belaynesh Chekol, Numan Çelebi

Abstract:

The main purpose of this work is to recognize individual characters extracted from natural scene images using scale invariant feature transform (SIFT) features as input to a K-nearest neighbor (KNN) classifier. For this task, 1,068 and 78 images of English alphabet characters taken from the Chars74k data set are used to train and test the classifier, respectively. For each character image, we generated descriptive features using the SIFT algorithm. This set of features is fed to the learner so that it can recognize and label new images of English characters. Two types of KNN (fine KNN and weighted KNN) were trained, and the resulting classification accuracies are 56.9% and 56.5%, respectively. The training time taken was the same for both fine and weighted KNN.
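
A rough sketch of the pipeline with OpenCV and scikit-learn, assuming an OpenCV build that includes SIFT (cv2.SIFT_create). Mean-pooling the variable-length SIFT descriptor sets into one fixed-length vector and the synthetic images are assumptions, since the abstract does not specify the aggregation; 'fine' and 'weighted' KNN are mapped to uniform and distance weighting, respectively.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_feature(img):
    """Mean-pooled 128-dimensional SIFT summary of one character image."""
    _, desc = sift.detectAndCompute(img, None)
    if desc is None:                       # no keypoints found
        return np.zeros(128, dtype=np.float32)
    return desc.mean(axis=0)

# toy stand-ins for Chars74k crops: random grayscale images with labels
rng = np.random.default_rng(0)
images = [rng.integers(0, 255, (64, 64), dtype=np.uint8) for _ in range(20)]
labels = [chr(ord("A") + i % 4) for i in range(20)]
X = np.array([sift_feature(im) for im in images])

fine_knn = KNeighborsClassifier(n_neighbors=5, weights="uniform")
weighted_knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
for clf in (fine_knn, weighted_knn):
    clf.fit(X, labels)
    print(clf.predict(X[:3]))
```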

Keywords: character recognition, KNN, natural scene image, SIFT

Procedia PDF Downloads 255
907 Cognitive Weighted Polymorphism Factor: A New Cognitive Complexity Metric

Authors: T. Francis Thamburaj, A. Aloysius

Abstract:

Polymorphism is one of the main pillars of the object-oriented paradigm. It induces hidden forms of class dependencies which may impact software quality, resulting in a higher cost factor for comprehending, debugging, testing, and maintaining the software. In this paper, a new cognitive complexity metric called the Cognitive Weighted Polymorphism Factor (CWPF) is proposed. Apart from the structural complexity of the software, it includes the cognitive complexity on the basis of polymorphism type. The cognitive weights are calibrated based on 27 empirical studies with 120 persons. A case study and experimentation with the new software metric show positive results. Further, a comparative study is made, and the correlation test proves that the CWPF complexity metric is a better, more comprehensive, and more realistic indicator of software complexity than Abreu's Polymorphism Factor (PF) complexity metric.

Keywords: cognitive complexity metric, object-oriented metrics, polymorphism factor, software metrics

Procedia PDF Downloads 413
906 On the PTC Thermistor Model with a Hyperbolic Tangent Electrical Conductivity

Authors: M. O. Durojaye, J. T. Agee

Abstract:

This paper is on the one-dimensional, positive temperature coefficient (PTC) thermistor model with a hyperbolic tangent function approximation for the electrical conductivity. The method of asymptotic expansion was adopted to obtain the steady-state solution, and the unsteady-state response was obtained using the method of lines (MOL), a well-established numerical technique. The approach is to reduce the partial differential equation to a vector system of ordinary differential equations and solve it numerically. Our analysis shows that the hyperbolic tangent approximation introduced is well suited for the electrical conductivity. The numerical solutions obtained also exhibit the correct physical characteristics of the thermistor and are in good agreement with the exact steady-state solutions.
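
A minimal method-of-lines sketch for a PDE of this flavor; the specific equation u_t = u_xx + sigma(u), the tanh constants, and the boundary data are illustrative stand-ins for the thermistor model, not the paper's formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# semi-discretise u_t = u_xx + sigma(u) on (0, 1) with u = 0 at both ends;
# the source term mimics Joule heating with a tanh-switched conductivity
N, h = 101, 1.0 / 100
sigma = lambda u: 0.5 * (1.0 - np.tanh(8.0 * (u - 1.0)))  # illustrative

def rhs(t, u):
    du = np.zeros_like(u)
    # central second difference on interior nodes plus the source term
    du[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 + sigma(u[1:-1])
    return du          # du[0] = du[-1] = 0 keeps the Dirichlet ends fixed

sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(N), method="BDF",
                t_eval=[0.5, 2.0])          # BDF handles the stiff system
print(sol.y[N // 2, :])   # centre temperature approaching steady state
```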

Keywords: electrical conductivity, hyperbolic tangent function, PTC thermistor, method of lines

Procedia PDF Downloads 298
905 Extended Constraint Mask Based One-Bit Transform for Low-Complexity Fast Motion Estimation

Authors: Oğuzhan Urhan

Abstract:

In this paper, an improved motion estimation (ME) approach based on the weighted constrained one-bit transform is proposed for block-based ME employed in video encoders. Binary ME approaches utilize a low bit-depth representation of the original image frames with a Boolean exclusive-OR based, hardware-efficient matching criterion to decrease the computational burden of the ME stage. The weighted constrained one-bit transform (WC-1BT) based approach improves the performance of conventional C-1BT based ME by employing a 2-bit depth constraint mask instead of a 1-bit depth mask. In this work, the range of the constraint mask is further extended to increase the ME performance of the WC-1BT approach. Experiments reveal that the proposed method provides better ME accuracy compared to similar existing ME methods in the literature.
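
A toy sketch of the binary ME building blocks: the classic one-bit transform and the XOR-based number of non-matching points (NNMP) criterion. The box filter stands in for the original multi-bandpass kernel, and no constraint mask is modeled.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit_transform(frame):
    """Classic 1BT: binarise each pixel against a low-pass filtered
    version of the frame (box filter as a stand-in kernel)."""
    return frame >= uniform_filter(frame.astype(float), size=17)

def nnmp(block, candidate):
    """Number of non-matching points: XOR-based 1BT matching criterion."""
    return np.count_nonzero(block ^ candidate)

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (64, 64))
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))   # synthetic global motion
b_prev, b_curr = one_bit_transform(prev), one_bit_transform(curr)

block = b_curr[16:32, 16:32]
best = min(((dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)),
           key=lambda d: nnmp(block,
                              b_prev[16 + d[0]:32 + d[0],
                                     16 + d[1]:32 + d[1]]))
print(best)   # (-2, -3): where the block sat in the previous frame
```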

Keywords: fast motion estimation, low-complexity motion estimation, video coding

Procedia PDF Downloads 292
904 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using PINN

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on the strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of introducing adjustable weight coefficients into the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics would be useful for studying various optical phenomena.
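
A minimal sketch of the weighted-loss idea in PyTorch; a toy ODE u'(t) = u(t), u(0) = 1 stands in for the Fokas-Lenells system, the weights w_res and w_ic play the role of the adjustable coefficients, and the abstract's ADAM-then-L-BFGS schedule is reduced to Adam only.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
w_res, w_ic = 1.0, 10.0          # adjustable loss weight coefficients
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    t = torch.rand(64, 1, requires_grad=True)       # collocation points
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    loss_res = ((du - u) ** 2).mean()               # equation residual
    loss_ic = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
    loss = w_res * loss_res + w_ic * loss_ic        # weighted total loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.ones(1, 1)).item())   # should approach e ~ 2.718
```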

Keywords: deep learning, optical soliton, neural network, partial differential equation

Procedia PDF Downloads 89
903 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice (once for model estimation and once for testing), a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results, avoiding expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three that are two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
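
A small numerical sketch of the quantities discussed (pointwise lppd, WAIC, and plain IS-LOO from the raw importance weights); the fake posterior draws are illustrative, and TIS/PSIS would additionally truncate or smooth the weights before averaging.

```python
import numpy as np
from scipy.special import logsumexp

def waic_and_is_loo(log_lik):
    """log_lik: (S draws, N observations) pointwise log-likelihoods.
    Returns total lppd, WAIC, and plain IS-LOO, whose raw importance
    weights are 1 / p(y_i | theta_s)."""
    S, _ = log_lik.shape
    lppd = logsumexp(log_lik, axis=0) - np.log(S)     # per observation
    p_waic = log_lik.var(axis=0, ddof=1)              # effective parameters
    waic = -2 * (lppd.sum() - p_waic.sum())
    # IS-LOO elpd_i = -log( mean_s exp(-log_lik[s, i]) )
    loo_i = -(logsumexp(-log_lik, axis=0) - np.log(S))
    return lppd.sum(), waic, loo_i.sum()

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=50)
mu = rng.normal(y.mean(), 0.1, size=(1000, 1))        # fake posterior draws
log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (y - mu) ** 2
print(waic_and_is_loo(log_lik))
```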

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 364
902 Estimation and Forecasting with a Quantile AR Model for Financial Returns

Authors: Yuzhi Cai

Abstract:

This talk presents a Bayesian approach to quantile autoregressive (QAR) time series model estimation and forecasting. We establish that the joint posterior distribution of the model parameters and future values is well defined. The associated MCMC algorithm for parameter estimation and forecasting converges to the posterior distribution quickly. We also present a forecast combining technique to produce more accurate out-of-sample forecasts by using a weighted sequence of fitted QAR models. A moving window method to check the quality of the estimated conditional quantiles is developed. We verify our methodology using simulation studies and then apply it to currency exchange rate data. An application of the method to USD to GBP daily currency exchange rates will also be discussed. The results obtained show that an unequally weighted combining method performs better than the other forecasting methodologies considered.
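
One simple version of unequally weighted forecast combination, scoring candidate quantile forecasts by the pinball loss and weighting them inversely to it; the weighting rule and the data are illustrative assumptions, not the talk's exact scheme.

```python
import numpy as np

def pinball(y, q, tau):
    """Quantile (pinball) loss of forecast q for outcomes y at level tau."""
    e = y - q
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(0)
y = rng.standard_t(df=5, size=500)            # fat-tailed 'returns'
tau = 0.05
# three constant candidate forecasts of the 5% quantile
forecasts = np.stack([np.full(500, np.quantile(y[:250], tau)),
                      np.full(500, -1.5),
                      np.full(500, -2.5)])
# weights inversely proportional to in-sample pinball loss
losses = np.array([pinball(y[:250], f[:250], tau) for f in forecasts])
w = (1 / losses) / (1 / losses).sum()
combined = (w[:, None] * forecasts).sum(axis=0)
print(w, pinball(y[250:], combined[250:], tau))   # out-of-sample check
```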

Keywords: combining forecasts, MCMC, quantile modelling, quantile forecasting, predictive density functions

Procedia PDF Downloads 315
901 Detecting Efficient Enterprises via Data Envelopment Analysis

Authors: S. Turkan

Abstract:

In this paper, Turkey's Top 500 Industrial Enterprises data for 2014 were analyzed by data envelopment analysis. Data envelopment analysis is used to detect efficient decision-making units, such as universities, hospitals, and schools, by using inputs and outputs. The decision-making units in this study are enterprises. To detect efficient enterprises, some financial ratios are determined as inputs and outputs. For this reason, financial indicators related to the productivity of enterprises are considered. The efficient enterprises with weighted foreign-owned capital are detected via the super-efficiency model. According to the results, Mercedes-Benz is the most efficient enterprise with weighted foreign-owned capital in Turkey.
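
A sketch of the underlying DEA building block, the input-oriented CCR multiplier model solved as a linear program; the toy data are invented, and the super-efficiency variant used in the paper would additionally exclude the evaluated enterprise from the constraint set.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR multiplier model for DMU o.
    X: (n, m) inputs, Y: (n, s) outputs. Efficiency = max u.y_o
    subject to v.x_o = 1, u.y_j - v.x_j <= 0 for all j, u, v >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])          # maximise u.y_o
    A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# toy data: 5 enterprises, 2 financial-ratio inputs, 1 output
X = np.array([[2, 3], [4, 2], [3, 3], [5, 5], [4, 4]], dtype=float)
Y = np.array([[1], [1], [1.2], [1.1], [1.4]], dtype=float)
for o in range(5):
    print(o, round(ccr_efficiency(X, Y, o), 3))       # 1.0 = efficient
```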

Keywords: data envelopment analysis, super efficiency, logistic regression, financial ratios

Procedia PDF Downloads 303
900 Application of RS and GIS Technique for Identifying Groundwater Potential Zone in Gomukhi Nadhi Sub Basin, South India

Authors: Punitha Periyasamy, Mahalingam Sudalaimuthu, Sachikanta Nanda, Arasu Sundaram

Abstract:

India holds 17.5% of the world's population but has only 2% of the total geographical area of the world, and 27.35% of this area is categorized as wasteland due to lack of, or limited, groundwater. So there is excessive demand for groundwater for agricultural and non-agricultural activities to balance the growth rate. With this in mind, an attempt is made to find the groundwater potential zones in the Gomukhi river sub-basin of the Vellar river basin, Tamil Nadu, India, covering an area of 1146.6 sq. km and consisting of 9 blocks, from Peddanaickanpalayam to Villupuram, that fall in the sub-basin. The thematic maps of geology, geomorphology, lineaments, land use and land cover, and drainage are prepared for the study area using IRS P6 data. The collateral data, including rainfall, water level, and a soil map, are collected for analysis and inference. The digital elevation model (DEM) is generated using Shuttle Radar Topographic Mission (SRTM) data, and the slope of the study area is obtained. ArcGIS 10.1 acts as a powerful spatial analysis tool to find the groundwater potential zones in the study area by means of weighted overlay analysis. The individual parameters of the thematic maps are ranked and weighted in accordance with their influence on increasing the groundwater level. The potential zones in the study area are classified as Very Good, Good, Moderate, and Poor, with areal extents of 15.67, 381.06, 575.38, and 174.49 sq. km, respectively.
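
A compact sketch of the weighted overlay step on toy rasters; the layer weights and class breaks are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)
# each raster holds per-cell suitability ranks (1-4) for one thematic layer
layers = {"geology": rng.integers(1, 5, shape),
          "geomorphology": rng.integers(1, 5, shape),
          "lineament": rng.integers(1, 5, shape),
          "drainage": rng.integers(1, 5, shape),
          "slope": rng.integers(1, 5, shape)}
weights = {"geology": 0.25, "geomorphology": 0.30, "lineament": 0.15,
           "drainage": 0.15, "slope": 0.15}   # illustrative, sum to 1

score = sum(weights[k] * layers[k].astype(float) for k in layers)
# class 0 = Poor ... class 3 = Very Good (illustrative breaks)
classes = np.digitize(score, bins=[1.75, 2.5, 3.25])
print(np.round(score, 2), classes, sep="\n")
```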

Keywords: ArcGIS, DEM, groundwater, recharge, weighted overlay

Procedia PDF Downloads 416
899 Modelling and Mapping Malnutrition among Toddlers in Bojonegoro Regency with a Mixed Geographically Weighted Regression Approach

Authors: Elvira Mustikawati P.H., Iis Dewi Ratih, Dita Amelia

Abstract:

Bojonegoro has proclaimed a policy of zero malnutrition. Therefore, as an effort to solve the cases of child malnutrition in Bojonegoro, this study used the Mixed Geographically Weighted Regression (MGWR) approach to determine the factors that influence the percentage of malnourished children under five, where the factors can be divided into local factors that are influential in individual districts and global factors that are influential across all districts. Based on the goodness-of-fit tests, the R2 and AIC values of the GWR models are better than those of the MGWR models: the R2 and AIC values of the MGWR models are 84.37% and 14.28, while those of the GWR models are 91.04% and -62.04, respectively. Based on the analysis with the GWR models, the districts of Sekar, Bubulan, Gondang, and Dander are districts where three predictor variables (the percentage of vitamin A, the percentage of births assisted by health personnel, and the percentage of clean water) significantly influence the percentage of malnourished children under five.

Keywords: GWR, MGWR, R2, AIC

Procedia PDF Downloads 254
898 The Construction of the Semigroup Which Is Chernoff Equivalent to Statistical Mixture of Quantizations for the Case of the Harmonic Oscillator

Authors: Leonid Borisov, Yuri Orlov

Abstract:

We obtain explicit formulas for finitely multiple approximations of the equilibrium density matrix for the case of the harmonic oscillator using Chernoff's theorem and the notion of a semigroup which is Chernoff equivalent to the averaged semigroup. We also find explicit formulas for the corresponding approximate Wigner functions and average values of the observable. We consider a superposition of τ-quantizations representing a wide class of linear quantizations. We show that the convergence of the approximations of the average values of the observable is not uniform with respect to the Gibbs parameter. This does not allow the approximate expression to be represented as the sum of the exact limit and small deviations uniformly throughout the temperature range with a given order of approximation.

Keywords: Chernoff theorem, Feynman formulas, finitely multiple approximation, harmonic oscillator, Wigner function

Procedia PDF Downloads 410
897 Neutral Sugar Contents of Laurel-leaved and Cryptomeria japonica Forests

Authors: Ayuko Itsuki, Sachiyo Aburatani

Abstract:

Soil neutral sugar contents in the Kasuga-yama Hill Primeval Forest (Nara, Japan) were examined using Waksman's approximation analysis to clarify the relations between the neutral sugars constituting the soil organic matter and the microbial biomass. Samples were selected from the soil surrounding laurel-leaved (BB-1) and Carpinus japonica (BB-2) trees for analysis. The water- and HCl-soluble neutral sugars increased the microbial biomass of the laurel-leaved forest soil. Arabinose, xylose, and galactose of the HCl-soluble fraction were used immediately in comparison with other neutral sugars. Rhamnose, glucose, and fructose of the HCl-soluble fraction were re-composed by the microbes.

Keywords: forest soil, neutral sugars, soil organic matter, Waksman's approximation analysis

Procedia PDF Downloads 281
896 An Axisymmetric Finite Element Method for Compressible Swirling Flow

Authors: Raphael Zanella, Todd A. Oliver, Karl W. Schulz

Abstract:

This work deals with the finite element approximation of axisymmetric compressible flows with swirl velocity. We are interested in problems where the flow, while weakly dependent on the azimuthal coordinate, may have a strong azimuthal velocity component. We describe the approximation of the compressible Navier-Stokes equations with H1-conformal spaces of axisymmetric functions. The weak formulation is implemented in a C++ solver with explicit time marching. The code is first verified with a convergence test on a manufactured solution. The verification is completed by comparing the numerical and analytical solutions in a Poiseuille flow case and a Taylor-Couette flow case. The code is finally applied to the problem of a swirling subsonic air flow in a plasma torch geometry.

Keywords: axisymmetric problem, compressible Navier-Stokes equations, continuous finite elements, swirling flow

Procedia PDF Downloads 148
895 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on the strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of introducing adjustable weight coefficients into the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics would be useful in studying various optical phenomena.

Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation

Procedia PDF Downloads 47
894 Analytical Design of IMC-PID Controller for Ideal Decoupling Embedded in Multivariable Smith Predictor Control System

Authors: Le Hieu Giang, Truong Nguyen Luan Vu, Le Linh

Abstract:

In this paper, analytical tuning rules for the IMC-PID controller are presented for the multivariable Smith predictor that involves ideal decoupling. Accordingly, the decoupler is first introduced into the multivariable Smith predictor control system by the well-known approach of ideal decoupling, which is compactly extended to general n×n multivariable processes, and the multivariable Smith predictor controller is then obtained in terms of multiple single-loop Smith predictor controllers. The tuning rules for the PID controller in series with a filter are found by using the Maclaurin approximation. Many multivariable industrial processes are employed to demonstrate the simplicity and effectiveness of the presented method. The simulation results show the superior performance of the presented method compared with other methods.

Keywords: ideal decoupler, IMC-PID controller, multivariable smith predictor, Padé approximation

Procedia PDF Downloads 390
893 Applying Element Free Galerkin Method on Beam and Plate

Authors: Mahdad M’hamed, Belaidi Idir

Abstract:

This paper develops a meshless approach, called the Element Free Galerkin (EFG) method, which is based on the weak form of the governing partial differential equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG method, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight function property based on the randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the governing partial differential equations is obtained by the weighted residual method within a simple local quadrature domain. A spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it does not require a mesh, either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the present EFG method. They show that the EFG method is highly efficient to implement and highly accurate in computation. The present method is used to analyze the static deflection of beams and of a plate with a hole.
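
A minimal 1-D sketch of the MLS shape functions at the heart of EFG, using a linear basis and a cubic spline weight function; the node layout and support radius are illustrative choices.

```python
import numpy as np

def mls_shape_functions(x, nodes, radius):
    """1-D MLS shape functions phi(x) = p(x)^T A^{-1} B with linear basis
    p = [1, x] and a cubic spline weight of support `radius`."""
    r = np.abs(x - nodes) / radius
    w = np.where(r <= 0.5, 2/3 - 4*r**2 + 4*r**3,
        np.where(r <= 1.0, 4/3 - 4*r + 4*r**2 - (4/3)*r**3, 0.0))
    P = np.column_stack([np.ones_like(nodes), nodes])  # basis at the nodes
    A = P.T @ (w[:, None] * P)                         # moment matrix
    B = w * P.T                                        # shape (2, n)
    return np.array([1.0, x]) @ np.linalg.solve(A, B)

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.37, nodes, radius=0.25)
print(phi.sum())    # partition of unity: equals 1
print(phi @ nodes)  # linear reproduction: equals 0.37
```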

Keywords: numerical computation, element-free Galerkin (EFG), moving least squares (MLS), meshless methods

Procedia PDF Downloads 261
892 Model Order Reduction for Frequency Response and Effect of Order of Method for Matching Condition

Authors: Aref Ghafouri, Mohammad javad Mollakazemi, Farhad Asadi

Abstract:

In this paper, a model order reduction method is used to approximate linear and nonlinear aspects of some experimental data. This method can be used to obtain an offline reduced model that approximates the experimental data, reproduces and follows the data and the order of the system, and matches the experimental data at some frequency ratios. In this study, the method is compared on different experimental data sets, and the influence of the choice of reduction order on obtaining the best and sufficient matching condition for following the data is investigated in terms of the imaginary and real parts of the frequency response curve. Finally, the effect of the reduction order, an important parameter, on nonlinear experimental data is explained further.

Keywords: frequency response, order of model reduction, frequency matching condition, nonlinear experimental data

Procedia PDF Downloads 370
891 Proposals of Exposure Limits for Infrasound From Wind Turbines

Authors: M. Pawlaczyk-Łuszczyńska, T. Wszołek, A. Dudarewicz, P. Małecki, M. Kłaczyński, A. Bortkiewicz

Abstract:

Human tolerance to infrasound is defined by the hearing threshold. Infrasound that cannot be heard (or felt) is not annoying and is not thought to have other adverse health effects. Recent research has largely confirmed earlier findings. ISO 7196:1995 recommends the use of the G-weighted characteristic for the assessment of infrasound. There is a strong correlation between G-weighted SPL and annoyance perception. The aim of this study was to propose exposure limits for infrasound from wind turbines. However, only a few countries have set limits for infrasound. These limits are usually no higher than 85-92 dBG, and none of them are specific to wind turbines. Over the years, a number of studies have been carried out to determine hearing thresholds below 20 Hz. It has been recognized that 10% of young people are able to perceive 10 Hz at around 90 dB, and it has also been found that the difference in median hearing thresholds between young adults aged around 20 years and older adults aged over 60 years is around 10 dB, irrespective of frequency. This shows that older people (up to about 60 years of age) retain good hearing in the low frequency range, while their sensitivity to higher frequencies is often significantly reduced. In terms of exposure limits for infrasound, the average hearing threshold corresponds to a tone with a G-weighted SPL of about 96 dBG. In contrast, infrasound at Lp,G levels below 85-90 dBG is usually inaudible. The individual hearing threshold can, therefore, be 10-15 dB lower than the average threshold, so the recommended limits for environmental infrasound could be 75 dBG or 80 dBG. It is worth noting that the G86 curve has been taken as the threshold of auditory perception of infrasound reached by 90-95% of the population, so the G75 and G80 curves can be taken as criterion curves for wind turbine infrasound. Finally, two assessment methods and corresponding exposure limit values have been proposed for wind turbine infrasound: method I, based on G-weighted sound pressure level measurements, and method II, based on frequency analysis in 1/3-octave bands in the frequency range 4-20 Hz. Separate limit values have been set for outdoor living areas in the open countryside (Area A) and for noise-sensitive areas (Area B). In the case of method I, infrasound limit values of 80 dBG (for Area A) and 75 dBG (for Area B) have been proposed, while in the case of method II, the criterion curves G80 and G75 have been chosen (for Areas A and B, respectively).

Keywords: infrasound, exposure limit, hearing thresholds, wind turbines

Procedia PDF Downloads 49
890 E-Hailing Taxi Industry Management Mode Innovation Based on the Credit Evaluation

Authors: Yuan-lin Liu, Ye Li, Tian Xia

Abstract:

There are some shortcomings in China's existing taxi management modes. This paper suggests establishing a third-party comprehensive information management platform and puts forward an evaluation model based on credit. Four indicators are used to evaluate drivers' credit: the passengers' evaluation score, a driving behavior evaluation, the driver's average number of bad records, and a personal credit score. A weighted clustering method is used to achieve credit level evaluation for taxi drivers. The management of the taxi industry is based on the credit level, and drivers are graded according to their credit rating. The credit rating determines costs, income levels, market access, the useful period of the license, wage and bonus levels, as well as violation fines. These methods can make the credit evaluation effective. In conclusion, more credit data will help to set up a more accurate and detailed classification standard library.

Keywords: credit, mobile internet, e-hailing taxi, management mode, weighted cluster

Procedia PDF Downloads 288
889 The Moment of the Optimal Average Run Length of the Multivariate Exponentially Weighted Moving Average Control Chart for Equally Correlated Variables

Authors: Edokpa Idemudia Waziri, Salisu S. Umar

Abstract:

Hotelling's T² is a well-known statistic for detecting a shift in the mean vector of a multivariate normal distribution. Control charts based on T² have been widely used in statistical process control for monitoring a multivariate process. Although it is a powerful tool, the T² statistic is deficient when the shift to be detected in the mean vector of a multivariate process is small and consistent. The Multivariate Exponentially Weighted Moving Average (MEWMA) control chart is one of the control statistics used to overcome this drawback of Hotelling's T² statistic. In this paper, the probability distribution of the Average Run Length (ARL) of the MEWMA control chart, when the quality characteristics exhibit substantial cross correlation and when the process is in control and out of control, was derived using the Markov chain algorithm. The probability functions and the moments of the run length distribution were also obtained, and they were consistent with some existing results for the in-control and out-of-control situations. By a simulation process, the procedure identified a class of ARLs for the MEWMA control chart when the process is in control and out of control. From our study, it was observed that the MEWMA scheme is quite adequate for detecting a small shift and a good way to improve the quality of goods and services in a multivariate situation. It was also observed that as the in-control average run length ARL0 or the number of variables (p) increases, the optimal value ARLopt increases asymptotically, and as the magnitude of the shift σ increases, the optimal ARLopt decreases. Finally, we use an example from the literature to illustrate our method and demonstrate its efficiency.
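
A small sketch of the MEWMA statistic itself for equally correlated variables; the smoothing parameter, the control limit h, and the data are illustrative (in practice h is chosen to achieve a target in-control ARL, which is what the paper's Markov chain analysis quantifies).

```python
import numpy as np

def mewma_statistics(X, Sigma, lam=0.1):
    """MEWMA statistic T2_i = Z_i' Cov(Z_i)^{-1} Z_i with
    Z_i = lam * X_i + (1 - lam) * Z_{i-1}, Z_0 = 0."""
    Z = np.zeros(X.shape[1])
    out = []
    for i, x in enumerate(X, start=1):
        Z = lam * x + (1 - lam) * Z
        cov = lam * (1 - (1 - lam) ** (2 * i)) / (2 - lam) * Sigma
        out.append(Z @ np.linalg.solve(cov, Z))
    return np.array(out)

rng = np.random.default_rng(0)
rho, p = 0.5, 3                                  # equally correlated case
Sigma = np.full((p, p), rho) + (1 - rho) * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=100)
X[50:] += 0.5                                    # small sustained mean shift
t2 = mewma_statistics(X, Sigma)
# first sample past an illustrative limit h (argmax returns 0 if none)
print(np.argmax(t2 > 10.0))
```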

Keywords: average run length, markov chain, multivariate exponentially weighted moving average, optimal smoothing parameter

Procedia PDF Downloads 392
888 Analysis of Spatial Heterogeneity of Residential Prices in Guangzhou: An Empirical Study Based on a POI Geographically Weighted Regression Model

Authors: Zichun Guo

Abstract:

Guangzhou's housing prices have declined for a long time compared with those of the other three major cities. As Guangzhou's housing price ladder increases, the factors influencing housing prices have gradually attracted attention. This article uses housing price data and POI data, together with the Kriging spatial interpolation method and the geographically weighted regression model, to explore the distribution of housing prices and the impact of the influencing factors, especially in Huadu District and the city center. The response is mainly evident in the surrounding areas, which may be related to housing positioning; economic POIs close to the city center show stronger responses. Identifying the factors affecting housing prices in this way is conducive to management and macro-control by the relevant departments, better meets the demand for home purchases, and supports financing-side reforms.

Keywords: housing prices, spatial heterogeneity, Guangzhou, POI

Procedia PDF Downloads 9
887 A Hybrid Adomian Decomposition Method for the Solution of Logistic Abelian Ordinary Differential Equations and Its Comparison with Some Standard Numerical Schemes

Authors: F. J. Adeyeye, D. Eni, K. M. Okedoye

Abstract:

In this paper, we present a hybrid of the Adomian decomposition method (ADM). This is the substitution of a one-step method of Taylor series approximation of orders I and II into the nonlinear part of the Adomian decomposition method, resulting in a convergent series scheme. The scheme is applied to solve some logistic problems represented as Abelian differential equations, and the results are compared with the exact solution and with the Runge-Kutta method of order IV in order to ascertain the accuracy and efficiency of the scheme. The findings show that the scheme is efficient enough to solve the logistic problems considered in this paper.
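
For reference, a sketch of the classical ADM applied to the logistic problem u' = u - u², u(0) = 1/2, with the nonlinearity expanded in Adomian polynomials; the paper's hybrid replaces the role of these polynomials with one-step Taylor approximations, which is not reproduced here.

```python
import sympy as sp

t, lam = sp.symbols("t lam")
u0 = sp.Rational(1, 2)            # initial condition u(0) = 1/2
N = 6
u = [sp.Integer(0)] * N
u[0] = u0

# Adomian polynomials for N(u) = u^2 via the standard generating formula
# A_n = (1/n!) d^n/d lam^n [ (sum_k u_k lam^k)^2 ] evaluated at lam = 0;
# the recursion u_{n+1} = int_0^t (u_n - A_n) follows from u' = u - u^2
for n in range(1, N):
    phi = sum(u[k] * lam**k for k in range(n))
    A = sp.diff(phi**2, lam, n - 1).subs(lam, 0) / sp.factorial(n - 1)
    u[n] = sp.integrate(u[n - 1] - A, (t, 0, t))

approx = sum(u)
exact = 1 / (1 + sp.exp(-t))      # closed-form logistic solution
val = sp.Rational(1, 2)
print(float(approx.subs(t, val)), float(exact.subs(t, val)))
```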

Keywords: Adomian decomposition method, nonlinear part, one-step method, Taylor series approximation, hybrid of Adomian polynomial, logistic problem, Malthusian parameter, Verhulst Model

Procedia PDF Downloads 371
886 Two-Dimensional Hardy Type Inequalities on Time Scales via the Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Pólya were the first significant systematic treatment of the subject. That work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used in studying the solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving Copson and Hardy inequalities on time scales have appeared, yielding new special versions of them. A time scale is defined as an arbitrary closed subset of the real numbers. Inequalities in their time scale versions have received a lot of attention and form a major field in both pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale inequalities of Copson type driven by the Steklov operator. They will be applied in the solution of the Cauchy problem for the wave equation. The proofs can be done by introducing restrictions on the operator in several cases. In addition, the obtained inequalities are derived using some concepts of the time scale calculus, such as Fubini's theorem and Hölder's inequality.
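
For orientation, the classical continuous Hardy inequality that these time-scale and Steklov-operator results generalize (p > 1, f ≥ 0, with the sharp constant):

```latex
\int_0^{\infty} \left( \frac{1}{x} \int_0^{x} f(t)\, dt \right)^{p} dx
  \;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f^{p}(x)\, dx .
```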

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 44
885 First Principles Calculation of Structural, Elastic and Thermodynamic Properties of Yttrium-Copper Intermetallic Compound

Authors: Ammar Benamrani

Abstract:

This work investigates the equation of state parameters, elastic constants, and several other physical properties of the (B2-type) Yttrium-Copper (YCu) rare earth intermetallic compound using the projector augmented wave (PAW) pseudopotential method as implemented in the Quantum Espresso code. Using both the local density approximation (LDA) and the generalized gradient approximation (GGA), the findings of this research on the lattice parameter of the YCu intermetallic compound agree very well with the experimental ones. The obtained results for the elastic constants and the Debye temperature are also in generally good agreement with the theoretical ones reported previously in the literature. Furthermore, several thermodynamic properties of the YCu intermetallic compound have been studied using the quasi-harmonic approximation (QHA). The calculated data on the thermodynamic properties show that the free energy and both the isothermal and adiabatic bulk moduli decrease gradually with increasing temperature, while all other thermodynamic quantities increase with temperature.

Keywords: Yttrium-Copper intermetallic compound, thermo_pw package, elastic constants, thermodynamic properties

Procedia PDF Downloads 124
884 Optimization of Smart Beta Allocation by Momentum Exposure

Authors: J. B. Frisch, D. Evandiloff, P. Martin, N. Ouizille, F. Pires

Abstract:

Smart Beta strategies are intended to be an asset management revolution with respect to classical cap-weighted indices. Indeed, these strategies allow better control of portfolio risk factors and an optimized asset allocation, by taking into account specific risks or the wish to generate alpha by outperforming the indices called 'Beta'. Among the many strategies used independently, this paper focuses on four of them: the Minimum Variance Portfolio, the Equal Risk Contribution Portfolio, the Maximum Diversification Portfolio, and the Equal-Weighted Portfolio. Their efficiency has been proven under constraints like momentum or market phenomena, suggesting a reconsideration of cap-weighting. To further increase strategy return efficiency, it is proposed here to compare their strengths and weaknesses inside time intervals corresponding to specific identifiable market phases, in order to define adapted strategies depending on pre-specified situations. Results are presented as performance curves from different combinations compared to a benchmark. If a combination outperforms the applicable benchmark in well-defined actual market conditions, it will be preferred. It is mainly shown that such investment 'rules', based on both historical data and the evolution of Smart Beta strategies, and implemented according to available specific market data, provide very interesting optimal results with higher return performance and lower risk. Such combinations have not been fully exploited yet, which justifies the present approach aimed at identifying the relevant elements characterizing them.
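
A minimal sketch of one of the four building blocks, the closed-form (unconstrained) Minimum Variance Portfolio, compared against the Equal-Weighted Portfolio on fake returns; real Smart Beta implementations add long-only and other constraints.

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum variance portfolio (long/short allowed):
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
R = rng.normal(0.0005, 0.01, size=(500, 4))   # fake daily returns, 4 assets
R[:, 0] *= 2.0                                # one asset twice as volatile
cov = np.cov(R, rowvar=False)

w_mv = min_variance_weights(cov)
w_ew = np.full(4, 0.25)                       # Equal-Weighted benchmark
for name, w in (("MV", w_mv), ("EW", w_ew)):
    print(name, w.round(3), float(w @ cov @ w) ** 0.5)   # daily volatility
```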

Keywords: smart beta, minimum variance portfolio, equal risk contribution portfolio, maximum diversification portfolio, equal weighted portfolio, combinations

Procedia PDF Downloads 315