Paper Count: 1399

Search results for: sample weights

1399 Sample-Weighted Fuzzy Clustering with Regularizations

Authors: Miin-Shen Yang, Yee-Shan Pan

Abstract:

Although much research in cluster analysis has considered feature weights, little effort has been devoted to sample weights. Recently, Yu et al. (2011) represented the sample weights of a data set by a probability distribution over it and proposed sample-weighted clustering algorithms. In this paper, we give a sample-weighted version of generalized fuzzy clustering regularization (GFCR), called the sample-weighted GFCR (SW-GFCR). Several experiments are reported, and the experimental results and comparisons demonstrate that the proposed SW-GFCR is more effective than most clustering algorithms.
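
For readers unfamiliar with how sample weights enter a fuzzy clustering objective, the following minimal Python sketch shows a generic sample-weighted fuzzy c-means update, not the SW-GFCR algorithm itself; the weight vector w, the fuzzifier m, and the random initialization are illustrative assumptions.

```python
import numpy as np

def sample_weighted_fcm(X, w, c=2, m=2.0, n_iter=100, seed=0):
    """Generic sample-weighted fuzzy c-means (illustrative sketch, not SW-GFCR).

    X : (n, d) data matrix
    w : (n,) nonnegative sample weights (e.g., a probability distribution)
    c : number of clusters, m : fuzzifier (> 1)
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = rng.random((c, n))
    U /= U.sum(axis=0)                       # fuzzy memberships, columns sum to 1
    for _ in range(n_iter):
        Um = w * U**m                        # sample weights scale each membership
        V = Um @ X / Um.sum(axis=1, keepdims=True)           # weighted cluster centers
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = D ** (-2.0 / (m - 1))
        U /= U.sum(axis=0)                   # standard FCM membership update
    return U, V

if __name__ == "__main__":
    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
    w = np.full(len(X), 1.0 / len(X))        # uniform sample weights as a baseline
    U, V = sample_weighted_fcm(X, w, c=2)
    print(V)                                 # approximate cluster centers
```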

Keywords: Clustering, fuzzy c-means, fuzzy clustering, sample weights, regularization.

1398 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models

Authors: Yoonsuh Jung

Abstract:

Because DNA microarray data have a relatively small sample size compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection; it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the ‘optimal’ value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidates for the tuning parameter first, and then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. Using real and simulated data sets, we show that the value selected by the suggested methods often leads to more stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
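
As an illustration of the general idea of averaging several tuning-parameter candidates weighted by their cross-validated performance (a sketch of the concept only, not the authors' estimator), consider the Python fragment below; the lasso model, the top-5 candidate rule, and the inverse-CV-error weights are assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

# Small-sample, high-dimensional toy data (40 samples, 200 features).
X, y = make_regression(n_samples=40, n_features=200, n_informative=5,
                       noise=1.0, random_state=0)

lambdas = np.logspace(-2, 1, 30)
cv_mse = np.array([
    -cross_val_score(Lasso(alpha=lam, max_iter=10000), X, y,
                     scoring="neg_mean_squared_error", cv=5).mean()
    for lam in lambdas
])

# Instead of keeping only the single minimizer, keep a few good candidates
# and average them with performance-based weights.
top = np.argsort(cv_mse)[:5]                  # 5 best candidates (illustrative choice)
weights = 1.0 / cv_mse[top]                   # smaller CV error -> larger weight
weights /= weights.sum()

lam_single = lambdas[cv_mse.argmin()]         # traditional CV choice
lam_avg = float(np.dot(weights, lambdas[top]))  # weighted average of candidates
print(lam_single, lam_avg)
```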

Keywords: Cross Validation, Parameter Averaging, Parameter Selection, Regularization Parameter Search.

1397 A Straightforward Approach for Determining the Weights of Decision Makers Based on Angle Cosine and Projection Method

Authors: Qiang Yang, Ping-An Du

Abstract:

Group decision making with multiple attributes has attracted intensive interest in the decision analysis area. This paper assumes that, because of differences in knowledge and experience, the decision makers (DMs) do not contribute equally to the decision process in a group setting. The aim of this paper is to develop a novel approach to determining the weights of DMs in group decision making problems. In this paper, the weights of the DMs are determined in the group decision environment via the angle cosine and projection method. First, the average of all individual decisions is defined as the ideal decision. After that, we define the weight of each decision maker (DM) by aggregating the angle cosine and the projection between the individual decision and the ideal decision with an associated direction indicator μ. By using the weights of the DMs, all individual decisions are aggregated into a collective decision. Further, the preference order of the alternatives is ranked in accordance with the overall row values of the collective decision. Finally, an example from a chemical company is provided to illustrate the developed approach.
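
A minimal Python sketch of the general mechanism is given below; the way the angle cosine and the projection are combined through μ, and the equal balance μ = 0.5, are assumptions for illustration and need not match the paper's exact aggregation rule.

```python
import numpy as np

def dm_weights(decisions, mu=0.5):
    """Illustrative DM weights from angle cosine and projection onto the ideal decision.

    decisions : (q, m, n) array of q individual decision matrices (m alternatives, n attributes).
    mu        : direction indicator balancing the two similarity measures (assumed 0.5 here).
    """
    ideal = decisions.mean(axis=0)                    # average decision = ideal decision
    f = ideal.ravel()
    scores = []
    for D in decisions:
        d = D.ravel()
        cosine = d @ f / (np.linalg.norm(d) * np.linalg.norm(f))   # angle cosine
        projection = d @ f / np.linalg.norm(f)                     # projection on the ideal
        scores.append(mu * cosine + (1 - mu) * projection)
    scores = np.array(scores)
    return scores / scores.sum()                      # normalized DM weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    decisions = rng.random((3, 4, 5))                 # 3 DMs, 4 alternatives, 5 attributes
    w = dm_weights(decisions)
    collective = np.tensordot(w, decisions, axes=1)   # weighted collective decision
    ranking = np.argsort(-collective.sum(axis=1))     # rank alternatives by overall row value
    print(w, ranking)
```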

Keywords: Angle cosine, ideal decision, projection method, weights of decision makers.

1396 A Neighborhood Condition for Fractional k-deleted Graphs

Authors: Sizhong Zhou, Hongxia Liu

Abstract:

Let k ≥ 3 be an integer, and let G be a graph of order n with n ≥ 9k + 3 − 4√(2(k − 1)² + 2). A spanning subgraph F of G is called a k-factor if dF(x) = k for each x ∈ V(G). A fractional k-factor is a way of assigning weights between 0 and 1 to the edges of a graph G such that, for each vertex, the sum of the weights of the edges incident with that vertex is k. A graph G is a fractional k-deleted graph if a fractional k-factor exists after deleting any edge of G. In this paper, it is proved that G is a fractional k-deleted graph if G satisfies δ(G) ≥ k + 1 and |NG(x) ∪ NG(y)| ≥ (n + k − 2)/2 for each pair of nonadjacent vertices x, y of G.
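
In symbols, the fractional k-factor condition described above can be restated as follows (standard notation, simply formalizing the abstract's definition):

```latex
% A fractional k-factor is an edge-weight function h with every vertex sum equal to k:
\[
  h \colon E(G) \to [0,1],
  \qquad
  \sum_{e \ni x} h(e) = k \quad \text{for every } x \in V(G);
\]
% G is fractional k-deleted if G - e admits such an h for every edge e of G.
```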

Keywords: Graph, minimum degree, neighborhood union, fractional k-factor, fractional k-deleted graph.

1395 Ranking Alternatives in Multi-Criteria Decision Analysis using Common Weights Based on Ideal and Anti-ideal Frontiers

Authors: Saber Saati Mohtadi, Ali Payan, Azizallah Kord

Abstract:

One of the most important issues in multi-criteria decision analysis (MCDA) is determining the weights of the criteria so that all alternatives can be compared based on the collective performance of the criteria. In this paper, one of the popular methods in data envelopment analysis (DEA), known as common weights (CWs), is used to determine the weights in MCDA. Two frontiers, named the ideal and anti-ideal frontiers, are defined based on two newly proposed CWs models, instead of ideal and anti-ideal alternatives. Ideal and anti-ideal frontiers are more flexible than ideal and anti-ideal alternatives. From the optimal solutions of these two models, the distances of an alternative from the ideal and anti-ideal frontiers are derived. Then, a relative distance is introduced to measure the value of each alternative. The suggested models are linear and remain feasible despite the weight restrictions. An example is presented to explain the method and to compare it with the existing literature.
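
The final ranking step, turning the two distances into a single value per alternative, can be illustrated by a relative-closeness ratio of the kind used in TOPSIS-style methods; the formula below is only a generic sketch, and the paper's exact relative distance may differ.

```latex
% Relative value of alternative j, given its distance d_j^+ to the ideal frontier
% and d_j^- to the anti-ideal frontier (larger R_j = better alternative):
\[
  R_j \;=\; \frac{d_j^{-}}{\,d_j^{+} + d_j^{-}\,},
  \qquad 0 \le R_j \le 1 .
\]
```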

Keywords: Anti-ideal frontier, Common weights (CWs), Ideal frontier, Multi-criteria decision analysis (MCDA)

1394 Easy-Interactive Ordering of the Pareto Optimal Set with Imprecise Weights

Authors: Maria Kalinina, Aron Larsson, Leif Olsson

Abstract:

In multi-objective optimization, when the generated set of Pareto optimal solutions is large, the problem of selecting the best solution from this set arises. In this paper, a method for ordering the Pareto set is suggested. The Pareto optimal set is ordered according to an introduced distance function between each solution and a selected reference point, where the reference point may be adjusted to represent the preferences of a decision-making agent. Preference information about objective weights from a decision maker may be expressed imprecisely. The developed elicitation procedure provides an opportunity to obtain surrogate numerical weights for the objectives and thus to manage the impreciseness of preferences. The proposed method is scalable to many objectives and can be used independently or as a complement to various visualization techniques in the multidimensional case.
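
A minimal sketch of the ordering idea is shown below, assuming rank-order centroid (ROC) surrogate weights for the imprecisely ranked objectives and a weighted Euclidean distance to the reference point; both choices are illustrative and are not the authors' elicitation procedure.

```python
import numpy as np

def roc_weights(k):
    """Rank-order centroid surrogate weights for k objectives ranked 1..k
    (one common way to turn an imprecise ranking into numerical weights)."""
    return np.array([sum(1.0 / j for j in range(i, k + 1)) / k for i in range(1, k + 1)])

def order_pareto_set(points, reference, weights):
    """Order Pareto-optimal points by weighted distance to a reference point
    (minimization of all objectives assumed)."""
    d = np.sqrt(((points - reference) ** 2 * weights).sum(axis=1))
    return np.argsort(d), d

if __name__ == "__main__":
    pareto = np.array([[1.0, 9.0], [3.0, 5.0], [6.0, 2.0], [9.0, 1.0]])  # toy Pareto front
    ref = pareto.min(axis=0)            # reference point: component-wise best values
    w = roc_weights(2)                  # objective 1 ranked more important than objective 2
    order, dist = order_pareto_set(pareto, ref, w)
    print(order, np.round(dist, 3))     # indices of solutions, best-ordered first
```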

Keywords: Imprecise weights, Multiple objectives, Pareto optimality, Visualization.

1393 Mathematical Programming on Multivariate Calibration Estimation in Stratified Sampling

Authors: Dinesh Rao, M.G.M. Khan, Sabiha Khan

Abstract:

Calibration estimation is a method of adjusting the original design weights to improve the survey estimates by using auxiliary information such as the known population total (or mean) of the auxiliary variables. A calibration estimator uses calibrated weights that are determined to minimize a given distance measure to the original design weights while satisfying a set of constraints related to the auxiliary information. In this paper, we propose a new multivariate calibration estimator for the population mean in the stratified sampling design, which incorporates information available for more than one auxiliary variable. The problem of determining the optimum calibrated weights is formulated as a Mathematical Programming Problem (MPP) that is solved using the Lagrange multiplier technique.
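
For orientation, the single-constraint-set, chi-square-distance version of the calibration problem and its Lagrange-multiplier solution are shown below; this is a standard textbook form given only to fix ideas, and the paper's multivariate stratified formulation generalizes it.

```latex
% Chi-square-distance calibration: design weights d_i, calibrated weights w_i,
% auxiliary vectors x_i, known totals X, and tuning constants q_i.
\[
  \min_{w}\ \sum_{i \in s} \frac{(w_i - d_i)^2}{d_i q_i}
  \quad \text{s.t.} \quad
  \sum_{i \in s} w_i \mathbf{x}_i = \mathbf{X}
  \;\;\Longrightarrow\;\;
  w_i = d_i\bigl(1 + q_i \mathbf{x}_i^{\top}\boldsymbol{\lambda}\bigr),
  \quad
  \boldsymbol{\lambda}
  = \Bigl(\sum_{i \in s} d_i q_i \mathbf{x}_i \mathbf{x}_i^{\top}\Bigr)^{-1}
    \Bigl(\mathbf{X} - \sum_{i \in s} d_i \mathbf{x}_i\Bigr).
\]
```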

Keywords: Calibration estimation, Stratified sampling, Multivariate auxiliary information, Mathematical programming problem, Lagrange multiplier technique.

1392 Effect of Neighborhood Size on Negative Weights in Punctual Kriging Based Image Restoration

Authors: Asmatullah Chaudhry, Anwar M. Mirza

Abstract:

We present a general comparison of punctual kriging based image restoration for different neighbourhood sizes. The formulation of the technique under consideration is based on punctual kriging and fuzzy concepts for image restoration in the spatial domain. Three different neighbourhood windows are considered for estimating the semivariance at different lags, in order to study their effect on reducing the negative weights that arise in punctual kriging and, consequently, on the restoration of degraded images. Our results show that the effect of neighbourhood sizes larger than 5x5 on the reduction of negative weights is insignificant. In addition, image quality measures such as structural similarity indices, peak signal-to-noise ratios, and the new variogram-based quality measures show that a 3x3 window size gives better performance than larger window sizes.

Keywords: Image restoration, punctual kriging, semi-variance, structure similarity index, negative weights in punctual kriging.

1391 Using the Combined Model of PROMETHEE and Fuzzy Analytic Network Process for Determining Question Weights in Scientific Exams through Data Mining Approach

Authors: Hassan Haleh, Amin Ghaffari, Parisa Farahpour

Abstract:

The need for an appropriate system for evaluating students' educational development is a key problem in achieving predefined educational goals. The volume of papers in recent years that attempt to prove or disprove the necessity and adequacy of student assessment corroborates this. Some of these studies have tried to increase the precision of determining question weights in scientific examinations, but all of them attempt to adjust the initial question weights while the accuracy and precision of those initial weights remain in question. Thus, in order to increase the precision of assessing students' educational development, the present study proposes a new method for determining the initial question weights by considering question factors such as difficulty, importance, and complexity, and by implementing a combined method of PROMETHEE and fuzzy analytic network process using a data mining approach to improve the model's inputs. The results of the implemented case study demonstrate the improved performance and precision of the proposed model.

Keywords: Assessing students, Analytic network process, Clustering, Data mining, Fuzzy sets, Multi-criteria decision making, Preference function.

1390 Evolving Neural Networks using Moment Method for Handwritten Digit Recognition

Authors: H. El Fadili, K. Zenkouar, H. Qjidaa

Abstract:

This paper proposes neural network weight and topology optimization using genetic evolution and the backpropagation training algorithm. The proposed crossover and mutation operators aim to adapt the network architectures and weights during the evolution process. Through a specific inheritance procedure, the weights are transmitted from the parents to their offspring, which allows re-exploitation of the already trained networks and hence accelerates the global convergence of the algorithm. In the preprocessing phase, a new feature extraction method is proposed based on Legendre moments with the maximum entropy principle (MEP) as a selection criterion. This allows a global reduction of the search space in the design of the networks. The proposed method has been applied and tested on the well-known MNIST database of handwritten digits.

Keywords: Genetic algorithm, Legendre Moments, MEP, Neural Network.

1389 Weighted Clustering Coefficient for Identifying Modular Formations in Protein-Protein Interaction Networks

Authors: Zelmina Lubovac, Björn Olsson, Jonas Gamalielsson

Abstract:

This paper describes a novel approach for deriving modules from protein-protein interaction networks, which combines functional information with topological properties of the network. The approach is based on a weighted clustering coefficient, which uses weights representing the functional similarities between the proteins. These weights are calculated according to the semantic similarity between the proteins, which is based on their Gene Ontology terms. We recently proposed an algorithm for the identification of functional modules, called SWEMODE (Semantic WEights for MODule Elucidation), that identifies dense sub-graphs containing functionally similar proteins. The rationale underlying this approach is that each module can be reduced to a set of triangles (protein triplets connected to each other). Here, we propose considering the semantic similarity weights of all triangle-forming edges between proteins. We also apply varying semantic similarity thresholds between neighbours of each node that are not neighbours of each other (and hence do not form a triangle), to derive new potential triangles to include in the module-defining procedure. The results show an improvement over the purely topological approach in terms of the number of predicted modules that match known complexes.
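
To make the notion of a weighted clustering coefficient concrete, the small Python example below uses one common definition (the geometric mean of triangle edge weights, as implemented in networkx); the edge weights stand in for GO-based semantic similarities, the graph is invented, and SWEMODE's own semantic weighting is related in spirit but not identical to this definition.

```python
import networkx as nx

# Toy protein-interaction graph; edge weights stand in for GO-based semantic
# similarities in [0, 1] (illustrative values, not real data).
G = nx.Graph()
G.add_weighted_edges_from([
    ("P1", "P2", 0.9), ("P2", "P3", 0.8), ("P1", "P3", 0.7),   # a strong triangle
    ("P3", "P4", 0.3), ("P4", "P5", 0.2), ("P3", "P5", 0.1),   # a weak triangle
    ("P5", "P6", 0.6),                                          # a dangling edge
])

# One common weighted clustering coefficient (geometric mean of triangle edge weights).
cc = nx.clustering(G, weight="weight")
for protein, value in sorted(cc.items(), key=lambda kv: -kv[1]):
    print(f"{protein}: {value:.3f}")        # nodes in strong triangles score highest
```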

Keywords: Modules, systems biology, protein interaction networks, yeast.

1388 Redefining the Croatian Economic Sentiment Indicator

Authors: I. Lolic, P. Soric, M. Cizmesija

Abstract:

Based on Business and Consumer Survey (BCS) data, the European Commission (EC) regularly publishes the monthly Economic Sentiment Indicator (ESI) for each EU member state. The ESI is conceptualized as a leading indicator, aimed at tracking overall economic activity. In calculating the ESI, the EC employs arbitrarily chosen weights on 15 BCS response balances. This paper raises the predictive quality of the ESI by applying nonlinear programming to find weights that maximize the correlation coefficient between the ESI and year-on-year GDP growth. The obtained results show that the highest weights are assigned to the response balances of industrial sector questions, followed by questions from the retail trade sector. This comes as no surprise, since the existing literature shows that industrial production is a plausible proxy for overall Croatian economic activity and since Croatian GDP is largely influenced by aggregate personal consumption.
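
The weight-optimization step can be sketched as a constrained nonlinear program: maximize the correlation between the weighted composite and GDP growth under weights that are nonnegative and sum to one. The data below are synthetic and the constraint set is an assumption; the paper's exact weight restrictions may differ.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, K = 60, 15                                    # 60 months, 15 BCS response balances
balances = rng.standard_normal((T, K))           # synthetic response balances
gdp_growth = balances[:, :3].sum(axis=1) + rng.standard_normal(T)  # synthetic target series

def neg_corr(w):
    esi = balances @ w                           # composite indicator for weights w
    return -np.corrcoef(esi, gdp_growth)[0, 1]   # maximize correlation = minimize its negative

res = minimize(
    neg_corr,
    x0=np.full(K, 1.0 / K),                      # start from equal weights
    method="SLSQP",
    bounds=[(0.0, 1.0)] * K,                     # nonnegative weights (assumed restriction)
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # weights sum to one
)
print(np.round(res.x, 3), -res.fun)              # optimized weights, achieved correlation
```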

Keywords: Business and Consumer Survey, Economic Sentiment Indicator, Leading Indicator, Nonlinear Optimization with Constraints.

1387 Rating the Importance of Customer Requirements for Green Product Using Analytic Hierarchy Process Methodology

Authors: Lara F. Horani, Shurong Tong

Abstract:

Identification of customer requirements and their preferences is the starting point in the process of product design. Most design methodologies focus on traditional requirements, but over the previous decade green products and environmental requirements have attracted increasing attention with the constant rise in consumer awareness of environmental problems (such as the greenhouse effect, global warming, pollution, the energy crisis, and waste management). Determining the importance weights for the customer requirements is an essential and crucial process. This paper used the analytic hierarchy process (AHP) approach to evaluate and rate the customer requirements for green products. With respect to the ultimate goal of customer satisfaction, surveys are conducted using a five-point scale analysis, from which the weight vectors can be derived. This approach can improve the imprecise ranking of customer requirements inherited from studies based on the conventional AHP. Furthermore, AHP with extent analysis is simple and easy to implement for prioritizing customer requirements. The research is based on data collected through a questionnaire survey of a sample of 160 people from different age, marital status, education, and income groups, in order to identify customer preferences for green product requirements.
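
As a reminder of how the AHP turns pairwise comparisons into importance weights, here is a small Python sketch using the geometric-mean (row) approximation of the priority vector; the comparison matrix is invented for illustration, and the paper's fuzzy extent-analysis variant adds further steps not shown here.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four green-product requirements,
# on Saaty's 1-9 scale (A[i, j] = importance of requirement i relative to j).
A = np.array([
    [1.0,   3.0,   5.0, 7.0],
    [1/3.,  1.0,   3.0, 5.0],
    [1/5.,  1/3.,  1.0, 3.0],
    [1/7.,  1/5., 1/3., 1.0],
])

# Geometric-mean approximation of the AHP priority (importance weight) vector.
g = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = g / g.sum()

# Consistency ratio check (random index 0.90 for a 4x4 matrix).
lam_max = (A @ weights / weights).mean()
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
cr = ci / 0.90
print(np.round(weights, 3), round(cr, 3))   # importance weights and consistency ratio
```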

Keywords: Analytic hierarchy process, green product, customer requirements for green design, importance weights for the customer requirements.

1386 Implementation of SU-MIMO and MU-MIMO GTD-System under Imperfect CSI Knowledge

Authors: Parit Kanjanavirojkul, Kiatwarakorn Keeratishananond, Prapun Suksompong

Abstract:

We study the performance of a compressed beamforming weights feedback technique in a generalized triangular decomposition (GTD) based MIMO system. GTD is a beamforming technique that enjoys QoS flexibility. The technique, however, performs at its optimum only when full knowledge of the channel state information (CSI) is available at the transmitter, which is impossible in a real system with channel estimation errors and limited feedback. We suggest a way to implement quantized beamforming weights feedback, which can significantly reduce the feedback data, in a GTD-based MIMO system and investigate the performance of the system. Interestingly, we found that compressed beamforming weights feedback does not degrade the BER performance of the system at low input power, while channel estimation error and quantization do. For comparison, GTD is more sensitive to compression and quantization, while SVD is more sensitive to channel estimation error. We also explore the performance of the GTD-based MU-MIMO system and find that the BER performance starts to degrade significantly at around -20 dB channel estimation error.

Keywords: MIMO, MU-MIMO, GTD, Imperfect CSI.

1385 Microstructure Parameters of a Super-Ionic Sample (CsAg2I3)

Authors: Samir Osman M., Mohammed Hassan S.

Abstract:

A sample of CsAg2I3 was prepared by solid-state reaction. The microstructure parameters of this sample were then determined using the wide-angle X-ray scattering (WAXS) method, and the cell parameters of the crystal structure were refined using the CHEKCELL program. The analysis shows that the intrinsic lattice strain of the sample is very small and that the crystal size is on the order of 559 Å.

Keywords: WAXS, Microstructure parameters, super-ionic conductor.

1384 An Adequate Choice of Initial Sample Size for Selection Approach

Authors: Mohammad H. Almomani, Rosmanjawati Abdul Rahman

Abstract:

In this paper, we consider the effect of the initial sample size on the performance of a sequential approach used for selecting a good enough simulated system when the number of alternatives is very large. We implement the sequential approach on an M/M/1 queuing system under several parameter settings, with different choices of the initial sample size, to explore its impact on the performance of the approach. The results show that the choice of the initial sample size does affect the performance of our selection approach.

Keywords: Ranking and Selection, Ordinal Optimization, Optimal Computing Budget Allocation, Subset Selection, Indifference-Zone, Initial Sample Size.

1383 2D and 3D Unsteady Simulation of the Heat Transfer in the Sample during Heat Treatment by Moving Heat Source

Authors: Z. Veselý, M. Honner, J. Mach

Abstract:

The aim of this work is to establish 2D and 3D models of the direct unsteady problem of sample heat treatment by a moving heat source, using a computer model based on the finite element method. A complex boundary condition on the heat-loaded sample surface is the essential feature of the problem. The computer model describes the heat treatment of the sample as the heat source moves over the sample surface. The work starts from a 2D model of the sample cross-section as the basic model, and possibilities of extending it from 2D to 3D are discussed. The effect of adding the third model dimension on the temperature distribution in the sample is shown, and the influence of various model parameters on the sample temperatures is compared. The influence of the heat source motion on the depth of material heat treatment is shown for several velocities of movement. The presented computer model is intended for use in laser treatment of machine parts.
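
For orientation, the governing equation of such a model is the transient heat conduction equation with a moving surface source; a generic form is given below, which is not the paper's exact complex boundary formulation but shows where the source motion enters.

```latex
% Transient heat conduction with a moving surface flux q travelling at velocity v
% and lumped convective/radiative losses h (generic statement only):
\[
  \rho c_p \frac{\partial T}{\partial t} = \nabla \cdot \bigl(k \nabla T\bigr),
  \qquad
  -k \left.\frac{\partial T}{\partial n}\right|_{\Gamma}
   = q\bigl(x - v t,\, y\bigr) - h\,\bigl(T - T_\infty\bigr).
\]
```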

Keywords: Computer simulation, unsteady model, heat treatment, complex boundary condition, moving heat source.

1382 The Effect of Nonnormality on CB-SEM and PLS-SEM Path Estimates

Authors: Z. Jannoo, B. W. Yap, N. Auchoybur, M. A. Lazim

Abstract:

The two common approaches to Structural Equation Modeling (SEM) are Covariance-Based SEM (CB-SEM) and Partial Least Squares SEM (PLS-SEM). There is much debate on the performance of CB-SEM and PLS-SEM for small sample sizes and when distributions are nonnormal. This study evaluates the performance of CB-SEM and PLS-SEM under normality and nonnormality conditions via simulation. Monte Carlo simulation in the R programming language was employed to generate data based on a theoretical model with one endogenous and four exogenous variables, each latent variable having three indicators. For normal distributions, CB-SEM estimates were found to be inaccurate for small sample sizes, while PLS-SEM could still produce the path estimates. Meanwhile, for larger sample sizes, CB-SEM estimates have lower variability than PLS-SEM. Under nonnormality, CB-SEM path estimates were inaccurate for small sample sizes; however, CB-SEM estimates are more accurate than those of PLS-SEM for sample sizes of 50 and above. The PLS-SEM estimates are not accurate unless the sample size is very large.

Keywords: CB-SEM, Monte Carlo simulation, Normality conditions, Nonnormality, PLS-SEM.

1381 On the Oil Repellency of Nanotextured Aluminum Surface

Authors: G. Momen, R. Jafari, M. Farzaneh

Abstract:

Two different superhydrophobic surfaces were elaborated and their oil repellency was evaluated using several liquids with different surface tensions. Sample A was a silicone rubber/SiO2 nanocomposite coated on an aluminum substrate by spin-coating, and sample B was an anodized aluminum surface covered by a Teflon-like coating. A high static contact angle of about 162° was measured for both prepared surfaces, on which water droplets roll off. Scanning electron microscopy (SEM) showed the presence of micro/nanostructures on both samples A and B, similar to those of the lotus leaf. However, sample A presented significantly different wettability behaviour against low-surface-tension liquids: it was totally wetted by an oil (dodecane) droplet, while sample B showed oleophobic behaviour. The oleophobic property of the Teflon-like coating can be attributed to the presence of CF2 and CF3 functional groups, as shown by XPS analysis.

Keywords: Oleophobic, Superhydrophobic, Aluminum surface, Nano-texture.

1380 Approximations to the Distribution of the Sample Correlation Coefficient

Authors: John N. Haddad, Serge B. Provost

Abstract:

Given a bivariate normal sample of correlated variables, (Xi, Yi), i = 1, . . . , n, an alternative estimator of Pearson’s correlation coefficient is obtained in terms of the ranges, |Xi − Yi|. An approximate confidence interval for ρX,Y is then derived, and a simulation study reveals that the resulting coverage probabilities are in close agreement with the set confidence levels. As well, a new approximant is provided for the density function of R, the sample correlation coefficient. A mixture involving the proposed approximate density of R, denoted by hR(r), and a density function determined from a known approximation due to R. A. Fisher is shown to accurately approximate the distribution of R. Finally, nearly exact density approximants are obtained on adjusting hR(r) by a 7th degree polynomial.
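
To see how the ranges |Xi − Yi| can carry information about ρ, note that for a standardized bivariate normal pair X − Y ~ N(0, 2 − 2ρ), which gives E|X − Y| = 2√((1 − ρ)/π); inverting this relation yields one possible range-based estimator. This is shown only to illustrate the idea, and the authors' estimator and its confidence interval are developed more carefully in the paper.

```latex
% Mean absolute difference of a standardized bivariate normal pair, and the
% range-based estimator obtained by inverting it:
\[
  \mathbb{E}\lvert X - Y\rvert
  \;=\; \sqrt{\tfrac{2}{\pi}}\,\sqrt{2 - 2\rho}
  \;=\; 2\sqrt{\tfrac{1-\rho}{\pi}}
  \qquad\Longrightarrow\qquad
  \hat\rho \;=\; 1 - \frac{\pi}{4}\Bigl(\frac{1}{n}\sum_{i=1}^{n}\lvert X_i - Y_i\rvert\Bigr)^{2}.
\]
```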

Keywords: Sample correlation coefficient, density approximation, confidence intervals.

1379 Effect of Pectinase on the Physico-Chemical Properties of Juice from Pawpaw (Carica papaya) Fruits

Authors: Idoko J. O., Achusi N.

Abstract:

A procedure for the preparation of clarified pawpaw juice was developed. About 750 ml of pawpaw pulp was measured into two 1-litre measuring cylinders, A and B, heated to 40°C and then cooled to 20°C. 30 ml of pectinase was added to cylinder A, while 30 ml of distilled water was added to cylinder B. The enzyme-treated sample (A) was allowed to digest for 5 hours, after which it was heated to 90°C for 15 minutes to inactivate the enzyme. The heated sample was cooled and, with the aid of a muslin cloth, the pulp was filtered to obtain the clarified pawpaw juice. The juice was filled into 100 ml plastic bottles, pasteurized at 95°C for 45 minutes, cooled, and stored at room temperature. The sample treated with 30 ml of distilled water underwent the same process. The freshly pasteurized samples were analyzed for specific gravity, titratable acidity, pH, sugars, and ascorbic acid. The remaining samples were then stored for 2 weeks and the above analyses repeated. There were differences between the freshly pasteurized and stored samples in pH and ascorbic acid levels; also, the sample treated with pectinase yielded a higher volume of juice than the one treated with distilled water.

Keywords: Juice, pawpaw, pectinase.

1378 Performance Evaluation of Universities as Groups of Decision Making Units

Authors: Ali Payan, Bijan Rahmani Parchicolaie

Abstract:

Universities have different offices, such as educational, research, student, administrative, and financial offices. This paper considers universities as groups of decision making units (DMUs) in which the DMUs are their offices. This approach gives a fairer evaluation of universities than evaluating their offices separately. The proposed approach for evaluating the group performance of universities is based on the common set of weights method in DEA. The suggested method can not only compare groups and measure their efficiencies, but also calculate the efficiency of the units within a group and the efficiency spread of the groups. Finally, the suggested method is applied to analyze the performance of the universities in the 14th district of Islamic Azad University as the groups under evaluation.

Keywords: Common set of weights, group efficiency, performance analysis, spread efficiency.

1377 Analysis of Gas Disturbance Characteristics in Lunar Sample Storage

Authors: Lv Shizeng, Han Xiao, Zhang Yi, Ding Wenjing

Abstract:

The lunar sample storage device is mainly used for the preparation, observation, and physical analysis of lunar samples and other related work. The lunar samples and operating equipment are placed directly inside the storage device, whose interior is a high-purity nitrogen environment that ensures the samples are not contaminated by the Earth's environment. In order to ensure that the water and oxygen indicators in the storage device meet the sample requirements, a dynamic gas cycle is required between the storage device and the external purification equipment. However, internal gas disturbance in the storage device can affect the handling of the samples. In this paper, a model of the storage device is established and a tetrahedral mesh is generated with the Tetra/Mixed method. The influence of different inlet positions and gas flows on the disturbance of the internal flow field is calculated, identifying the disturbed flow regions that should be avoided during sampling operations.

Keywords: Lunar samples, gas disturbance, storage device, characteristic analysis.

1376 Dependent Weighted Aggregation Operators of Hesitant Fuzzy Numbers

Authors: Jing Liu

Abstract:

In this paper, motivated by the ideas of dependent weighted aggregation operators, we develop some new hesitant fuzzy dependent weighted aggregation operators to aggregate input arguments taking the form of hesitant fuzzy numbers rather than exact numbers or intervals. In particular, we propose three hesitant fuzzy dependent weighted averaging (HFDWA) operators and three hesitant fuzzy dependent weighted geometric (HFDWG) operators based on different weight vectors. The most prominent characteristic of these operators is that the associated weights depend only on the aggregated hesitant fuzzy numbers and can relieve the influence of unfair hesitant fuzzy numbers on the aggregated results by assigning low weights to those “false” and “biased” ones. Some examples are given to illustrate the efficiency of the proposed operators.
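
The dependent-weighting idea, in its crisp form, assigns each argument a weight based on how similar it is to the overall mean, so outlying ("false" or "biased") inputs are down-weighted. The Python sketch below shows this mechanism on plain numbers only; extending it to hesitant fuzzy numbers requires the score and distance functions defined in the paper, which are not reproduced here, and the similarity function used is an assumption.

```python
import numpy as np

def dependent_weighted_average(values):
    """Crisp illustration of a dependent weighted averaging operator:
    the weights depend only on the aggregated values themselves."""
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    similarity = 1.0 / (1.0 + np.abs(values - mean))   # closer to the mean -> more similar
    weights = similarity / similarity.sum()            # normalized, data-dependent weights
    return float(weights @ values), weights

if __name__ == "__main__":
    agg, w = dependent_weighted_average([0.62, 0.60, 0.58, 0.95])   # one "biased" input
    print(round(agg, 4), np.round(w, 3))   # the outlier 0.95 receives the lowest weight
```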

Keywords: Hesitant fuzzy numbers, hesitant fuzzy dependent weighted averaging (HFDWA) operators, hesitant fuzzy dependent weighted geometric (HFDWG) operators.

1375 System Performance Comparison of Turbo and Trellis Coded Optical CDMA Systems

Authors: M. Kulkarni, R. K. Sinha, D. R. Bhaskar

Abstract:

In this paper, we have compared the performance of a Turbo and Trellis coded optical code division multiple access (OCDMA) system. The comparison of the two codes has been accomplished by employing optical orthogonal codes (OOCs). The Bit Error Rate (BER) performances have been compared by varying the code weights of address codes employed by the system. We have considered the effects of optical multiple access interference (OMAI), thermal noise and avalanche photodiode (APD) detector noise. Analysis has been carried out for the system with and without double optical hard limiter (DHL). From the simulation results it is observed that a better and distinct comparison can be drawn between the performance of Trellis and Turbo coded systems, at lower code weights of optical orthogonal codes for a fixed number of users. The BER performance of the Turbo coded system is found to be better than the Trellis coded system for all code weights that have been considered for the simulation. Nevertheless, the Trellis coded OCDMA system is found to be better than the uncoded OCDMA system. Trellis coded OCDMA can be used in systems where decoding time has to be kept low, bandwidth is limited and high reliability is not a crucial factor as in local area networks. Also the system hardware is less complex in comparison to the Turbo coded system. Trellis coded OCDMA system can be used without significant modification of the existing chipsets. Turbo-coded OCDMA can however be employed in systems where high reliability is needed and bandwidth is not a limiting factor.

Keywords: avalanche photodiode, optical code division multiple access, optical multiple access interference, Trellis coded modulation, Turbo code.

1374 MIMO Broadcast Scheduling for Weighted Sum-rate Maximization

Authors: Swadhin Kumar Mishra, Sidhartha Panda, C. Ardil

Abstract:

Multiple-Input-Multiple-Output (MIMO) is one of the most important communication techniques that allow wireless systems to achieve higher data rates. To overcome the practical difficulties in implementing Dirty Paper Coding (DPC), various suboptimal MIMO Broadcast (MIMO-BC) scheduling algorithms are employed, which choose the best set of users among all the users. In this paper we discuss such a sub-optimal MIMO-BC scheduling algorithm, which employs antenna selection at the receiver side. The channels of the users considered here are not independent and identically distributed (IID), so the users at the receiver side do not get an equal opportunity for communication. We therefore introduce a method of applying weights to the channels of the users that are not IID, in such a way that each user gets an equal opportunity for communication. The effect of the weights on the overall sum rate achieved by the system has been investigated and is presented.
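
The scheduling objective referred to here is the weighted sum rate; a generic form in per-user SINR notation is shown below, independent of the particular antenna-selection scheme used in the paper.

```latex
% Weighted sum-rate scheduling: choose the set S of served users to maximize
\[
  \max_{\mathcal{S} \subseteq \{1,\dots,K\}}\;
  \sum_{k \in \mathcal{S}} w_k \log_2\!\bigl(1 + \mathrm{SINR}_k\bigr),
\]
% where w_k is the weight applied to user k's channel.
```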

Keywords: Antenna selection, Independent and Identically Distributed (IID), Sum-rate capacity, Weighted sum rate.

1373 Bootstrap and MLS Methods-based Individual Bioequivalence Assessment

Authors: Kongsheng Zhang, Li Ge

Abstract:

Assessing bioequivalence is a one-sided hypothesis testing process. Bootstrap and modified large-sample (MLS) methods are considered for studying individual bioequivalence (IBE); the type I error and power of the hypothesis tests are simulated and compared with FDA (2001). The results show that the modified large-sample method is equivalent to the method of FDA (2001).

Keywords: Individual bioequivalence, bootstrap, Bayesian bootstrap, modified large-sample.

1372 Entropic Measures of a Probability Sample Space and Exponential Type (α, β) Entropy

Authors: Rajkumar Verma, Bhu Dev Sharma

Abstract:

Entropy is a key measure in studies related to information theory and its many applications. Campbell was the first to recognize that the exponential of Shannon's entropy is just the size of the sample space when the distribution is uniform. The idea here is to study exponentials of Shannon's entropy and of other entropy generalizations that involve a logarithmic function, for a probability distribution in general. In this paper, we introduce a measure of sample space, called the ‘entropic measure of a sample space’, with respect to the underlying distribution. It is shown in both the discrete and continuous cases that this new measure depends on the parameters of the distribution on the sample space - the same sample space having different ‘entropic measures’ depending on the distributions defined on it. It is noted that Campbell's idea also applies to Rényi's parametric entropy of a given order. Knowing that parameters play a role in providing suitable choices and extended applications, the paper also studies parametric entropic measures of sample spaces. Exponential entropies related to Shannon's entropy and to those generalizations that have logarithmic functions, i.e. are additive, have been studied for wider understanding and applications. We propose and study exponential entropies corresponding to non-additive entropies of type (α, β), which include the Havrda–Charvát entropy as a special case.
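
The observation attributed to Campbell can be stated compactly: for the uniform distribution on n points, the exponential of Shannon's entropy recovers the size of the sample space, which is the starting point the paper generalizes to non-uniform, parametric, and non-additive cases.

```latex
% Shannon entropy and Campbell's observation for the uniform distribution on n points:
\[
  H(P) = -\sum_{i=1}^{n} p_i \ln p_i,
  \qquad
  p_i = \tfrac{1}{n} \;\Longrightarrow\; e^{H(P)} = n .
\]
```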

Keywords: Sample space, Probability distributions, Shannon's entropy, Rényi's entropy, Non-additive entropies.

1371 Cyanide and Heavy Metal Concentration of Fermented Cassava Flour (Lafun) Available in the Markets of Ogun and Oyo States of Nigeria

Authors: Adebayo-Oyetoro A. O., Oyewole O. B., Obadina A. O, Omemu M. A.

Abstract:

Fermented cassava flours (lafun) sold in Ogun and Oyo States of Nigeria were collected from 10 markets over a period of two months and analysed to determine their safety status. The presence of trace metals was attributed to high vehicular movement around the drying sites and markets. The cyanide and moisture contents of the samples were also determined to assess the adequacy of fermentation and drying. The results showed that sample OWO had the highest cyanide content of 16.02 ± 0.12 mg/kg, while the lowest was found in sample OJO with 10.51 ± 0.10 mg/kg. The results also indicated that sample TVE had the highest moisture content of 18.50 ± 0.20%, while sample OWO had the lowest at 12.46 ± 0.47%. Copper and lead levels were found to be highest in TVE, with values of 28.10 mg/kg and 1.1 mg/kg respectively, while sample BTS had the lowest values of 20.6 mg/kg and 0.05 mg/kg respectively. The high cyanide values indicated inadequate fermentation.

Keywords: Cyanide, fermented, heavy metal, lafun.

1370 Determination of Vitamin C (Ascorbic Acid) in Orange Juices Product

Authors: Wanida Wonsawat

Abstract:

This research describes a voltammetric approach to determine the amount of vitamin C (ascorbic acid) in orange juice samples using a screen-printed three-electrode system. The anodic currents of vitamin C were proportional to the vitamin C concentration in the range of 0 – 10.0 mM, with a limit of detection of 1.36 mM. The method was successfully employed with 2 µL of the working solution dropped on the electrode surface. The proposed method was applied to the analysis of vitamin C in packed orange juice without sample purification or a complicated sample preparation step.

Keywords: Ascorbic acid, Vitamin C, Juice, Voltammetry
