Search results for: weighted dissimilarity measure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1361

1271 Application of RS and GIS Technique for Identifying Groundwater Potential Zone in Gomukhi Nadhi Sub Basin, South India

Authors: Punitha Periyasamy, Mahalingam Sudalaimuthu, Sachikanta Nanda, Arasu Sundaram

Abstract:

India holds 17.5% of the world's population but only 2% of its geographical area, and 27.35% of that area is categorized as wasteland owing to scarce or absent groundwater. There is therefore heavy demand on groundwater for agricultural and non-agricultural activities to sustain the country's growth. With this in mind, an attempt is made to delineate the groundwater potential zones in the Gomukhi Nadhi sub-basin of the Vellar River basin, Tamil Nadu, India, which covers an area of 1146.6 sq. km and comprises 9 blocks from Peddanaickanpalayam to Virudhachalam. Thematic maps of geology, geomorphology, lineaments, land use/land cover, and drainage are prepared for the study area using IRS P6 data. Collateral data, including rainfall, water level, and a soil map, are collected for analysis and inference. A digital elevation model (DEM) is generated from Shuttle Radar Topographic Mission (SRTM) data, from which the slope of the study area is derived. ArcGIS 10.1 serves as the spatial analysis tool for identifying groundwater potential zones by means of weighted overlay analysis. Each parameter of the thematic maps is ranked and weighted according to its influence on groundwater recharge. The potential zones in the study area are classified as Very Good, Good, Moderate, and Poor, with areal extents of 15.67, 381.06, 575.38, and 174.49 sq. km, respectively.
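
As a rough illustration of the weighted-overlay step described above, the sketch below combines reclassified thematic rasters by a per-cell weighted sum and bins the result into potential classes (Python/NumPy; the layer names, weights, ranks, and class breaks are hypothetical, not the study's values):

```python
import numpy as np

# Hypothetical thematic layers, already reclassified to ranks 1 (poor) .. 5 (very good).
rng = np.random.default_rng(0)
layers = {
    "geology":       rng.integers(1, 6, size=(100, 100)),
    "geomorphology": rng.integers(1, 6, size=(100, 100)),
    "lineament":     rng.integers(1, 6, size=(100, 100)),
    "landuse":       rng.integers(1, 6, size=(100, 100)),
    "drainage":      rng.integers(1, 6, size=(100, 100)),
    "slope":         rng.integers(1, 6, size=(100, 100)),
}
# Hypothetical influence weights (sum to 1); the study's actual weights differ.
weights = {"geology": 0.20, "geomorphology": 0.25, "lineament": 0.15,
           "landuse": 0.15, "drainage": 0.15, "slope": 0.10}

# Weighted overlay: per-cell weighted sum of ranks.
score = sum(weights[name] * layers[name].astype(float) for name in layers)

# Classify the composite score into potential zones (illustrative class breaks).
zones = np.digitize(score, bins=[2.0, 3.0, 4.0])  # 0=Poor, 1=Moderate, 2=Good, 3=Very Good
for k, label in enumerate(["Poor", "Moderate", "Good", "Very Good"]):
    print(label, int((zones == k).sum()), "cells")
```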

Keywords: ArcGIS, DEM, Groundwater, Recharge, Weighted Overlay.

1270 Clustering in WSN Based on Minimum Spanning Tree Using Divide and Conquer Approach

Authors: Uttam Vijay, Nitin Gupta

Abstract:

Because of the severe energy constraints in wireless sensor networks (WSNs), clustering is an efficient way to manage the energy of the sensors. Many clustering methods have already been proposed, and research continues on making clustering more energy efficient. In this paper we propose minimum spanning tree (MST) based clustering using a divide and conquer approach. MST-based clustering was first proposed in the 1970s for large databases; here we adapt the divide and conquer approach to wireless sensor networks and the constraints attached to them. The divide and conquer approach is implemented so that the whole MST need not be constructed before clustering: we find the edge that would belong to the MST of the corresponding graph and, judging by certain constraints, divide the graph into clusters at that point if the edge can be removed, thereby saving a large amount of computation.
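
For orientation, the following is a minimal baseline of MST-based clustering (Kruskal's algorithm, then cutting the k-1 heaviest tree edges); it does not implement the paper's divide-and-conquer variant, which avoids building the full MST:

```python
def mst_clusters(n, edges, k):
    """Cluster n nodes into k groups: build an MST (Kruskal), then drop the
    k-1 heaviest MST edges. edges: list of (weight, u, v) with 0 <= u, v < n."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))

    # Keep the lightest MST edges; cutting the k-1 heaviest splits it into k components.
    keep = sorted(mst)[: max(len(mst) - (k - 1), 0)]
    parent = list(range(n))
    for w, u, v in keep:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return [find(i) for i in range(n)]  # cluster label per node

# Toy sensor layout: two tight groups joined by one long edge.
edges = [(1.0, 0, 1), (1.2, 1, 2), (9.0, 2, 3), (1.1, 3, 4), (1.3, 4, 5)]
print(mst_clusters(6, edges, k=2))
```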

Keywords: Algorithm, Clustering, Edge-Weighted Graph, Weighted-LEACH.

1269 Optimal Design for SARMA(P,Q)L Process of EWMA Control Chart

Authors: Y. Areepong

Abstract:

The main goal of this paper is to study statistical process control (SPC) with the exponentially weighted moving average (EWMA) control chart when observations are serially correlated. The key characteristic of a control chart is the average run length (ARL), the average number of samples taken before an action signal is given. Ideally, the ARL of an in-control process (ARL0) should be sufficiently large, whereas it should be small when the process is out of control; the latter is the average delay time (ARL1), or mean time to a true alarm. We derive explicit formulas of the ARL of the EWMA control chart for seasonal autoregressive moving average (SARMA) processes with exponential white noise. The ARL values obtained from the explicit formulas and from the integral equation are in good agreement. In particular, the formulas for evaluating ARL0 and ARL1 make it possible to obtain a set of optimal parameters, namely the smoothing parameter (λ) and the control-limit width (H), for designing an EWMA chart with minimum ARL1.
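
The ARL notion can be illustrated by a small Monte Carlo sketch for an EWMA chart on i.i.d. normal observations (not the SARMA process with exponential white noise treated in the paper); the values of λ, H, and the shift size below are arbitrary illustrative choices:

```python
import numpy as np

def ewma_run_length(lam, H, mu0=0.0, sigma=1.0, shift=0.0, max_n=100000, rng=None):
    """Length of one run until the EWMA statistic exits the control limits.
    lam: smoothing parameter; H: control-limit width in asymptotic sigma_z units."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_z = sigma * np.sqrt(lam / (2.0 - lam))      # asymptotic EWMA standard deviation
    ucl, lcl = mu0 + H * sigma_z, mu0 - H * sigma_z
    z = mu0
    for n in range(1, max_n + 1):
        x = rng.normal(mu0 + shift, sigma)
        z = lam * x + (1.0 - lam) * z
        if z > ucl or z < lcl:
            return n
    return max_n

rng = np.random.default_rng(1)
arl0 = np.mean([ewma_run_length(lam=0.1, H=2.7, rng=rng) for _ in range(2000)])
arl1 = np.mean([ewma_run_length(lam=0.1, H=2.7, shift=1.0, rng=rng) for _ in range(2000)])
print(f"estimated ARL0 ~ {arl0:.0f}, ARL1 (1-sigma shift) ~ {arl1:.1f}")
```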

Keywords: Average Run Length, Optimal parameters, Exponentially Weighted Moving Average (EWMA) control chart.

1268 Detection of Efficient Enterprises via Data Envelopment Analysis

Authors: S. Turkan

Abstract:

In this paper, data on Turkey's Top 500 Industrial Enterprises in 2014 are analyzed by data envelopment analysis (DEA). DEA is used to identify efficient decision-making units, such as universities, hospitals, and schools, on the basis of their inputs and outputs; the decision-making units in this study are enterprises. To detect efficient enterprises, financial ratios related to the productivity of the enterprises are chosen as inputs and outputs. The efficient enterprises with foreign-weighted owned capital are then identified via the super-efficiency model. According to the results, Mercedes-Benz is the most efficient foreign-weighted owned capital enterprise in Turkey.
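
As a hedged illustration of the underlying DEA machinery, the sketch below solves the standard input-oriented CCR envelopment model with SciPy's linprog on made-up data; the paper's super-efficiency model and its financial-ratio inputs and outputs are not reproduced here:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (m_inputs, n_dmus), Y: (s_outputs, n_dmus)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  X @ lam - theta * x_o <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # Outputs: -Y @ lam <= -y_o   (i.e. Y @ lam >= y_o)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Toy data: 2 inputs, 1 output, 5 hypothetical enterprises (not the Top-500 data set).
X = np.array([[20., 30., 40., 20., 50.],
              [15., 10., 25., 30., 20.]])
Y = np.array([[100., 120., 150., 90., 130.]])
for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```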

Keywords: Data envelopment analysis, super efficiency, financial ratios, BCC model.

1267 Image Segmentation Based on Graph Theoretical Approach to Improve the Quality of Image Segmentation

Authors: Deepthi Narayan, Srikanta Murthy K., G. Hemantha Kumar

Abstract:

Graph-based image segmentation techniques are considered among the most efficient segmentation techniques and are mainly used as time- and space-efficient methods for real-time applications. However, there is a need to improve the quality of the segmented images obtained from the earlier graph-based methods. This paper proposes an improvement to the graph-based image segmentation methods already described in the literature. We contribute to the existing method by using a weighted Euclidean distance to calculate the edge weights, which are the key elements in building the graph. We also propose a slight modification of the segmentation method described in the literature, which results in the selection of more prominent edges in the graph. The experimental results show an improvement in segmentation quality compared to the existing methods, with a slight compromise in efficiency.
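
A minimal sketch of the edge-weighting step, assuming a 4-connected pixel graph and illustrative per-channel weights (the actual feature space and weights used in the paper may differ):

```python
import numpy as np

def weighted_edges(img, channel_weights=(0.30, 0.59, 0.11)):
    """4-connected pixel graph with weighted Euclidean colour distances as edge weights.
    img: H x W x 3 float array; channel_weights: illustrative per-channel weights."""
    a = np.asarray(channel_weights, dtype=float)
    H, W, _ = img.shape
    edges = []
    for y in range(H):
        for x in range(W):
            for dy, dx in ((0, 1), (1, 0)):          # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < H and nx < W:
                    diff = img[y, x] - img[ny, nx]
                    w = float(np.sqrt(np.sum(a * diff ** 2)))
                    edges.append((w, (y, x), (ny, nx)))
    return edges

img = np.random.default_rng(0).random((4, 4, 3))
edges = sorted(weighted_edges(img))   # ascending weights, as consumed by the segmenter
print(len(edges), "edges; lightest weight:", round(edges[0][0], 4))
```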

Keywords: Graph based image segmentation, threshold, Weighted Euclidean distance.

1266 Application of Data Envelopment Analysis to Assess Quality Management Efficiency

Authors: Chuen Tse Kuah, Kuan Yew Wong, Farzad Behrouzi

Abstract:

This paper illustrates the application of data envelopment analysis (DEA) as a tool for assessing quality management (QM) efficiency. A variant of DEA, the slack-based measure (SBM), is used for this purpose. The study finds that DEA is suitable for measuring QM efficiency and for suggesting improvements to inefficient QM units.

Keywords: Quality Management, Data Envelopment Analysis, Slack Based Measure, Efficiency Measurement.

1265 Design of Two-Channel Quadrature Mirror Filter Banks Using Digital All-Pass Filters

Authors: Ju-Hong Lee, Yi-Lin Shieh

Abstract:

The paper deals with the minimax design of two-channel linear-phase (LP) quadrature mirror filter (QMF) banks using infinite impulse response (IIR) digital all-pass filters (DAFs). Based on the theory of two-channel QMF banks using two IIR DAFs, the design problem is formulated as a Chebyshev approximation of the desired group delay responses of the IIR DAFs and the magnitude response of the low-pass analysis filter. Through a frequency sampling and iterative approximation method, the design problem can be solved with a weighted least squares approach. The resulting two-channel QMF banks possess an approximately LP response without magnitude distortion. Simulation results are presented for illustration and comparison.
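
The weighted least squares step can be illustrated generically; the sketch below solves a real-valued WLS problem via the weighted normal equations on synthetic data, not the actual all-pass filter design variables:

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min_x ||W^(1/2) (A x - b)||^2 via the weighted normal equations."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Toy frequency-sampled design: fit a small parameter vector to a desired response,
# emphasising some grid points more than others (weights are illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))            # design matrix (basis evaluated at grid points)
x_true = np.array([1.0, -0.5, 0.25, 2.0])
b = A @ x_true + 0.05 * rng.standard_normal(50)
w = np.where(np.arange(50) < 25, 10.0, 1.0) # weight the first half of the grid more heavily
print(weighted_least_squares(A, b, w))
```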

Keywords: Chebyshev approximation, Digital All-Pass Filter, Quadrature Mirror Filter, Weighted Least Squares.

1264 A New Weighted LDA Method in Comparison to Some Versions of LDA

Authors: Delaram Jarchi, Reza Boostani

Abstract:

Linear Discriminant Analysis (LDA) is a linear solution for the classification of two classes. In this paper, we propose a variant of LDA for the multi-class problem that redefines the between-class and within-class scatter matrices by incorporating a weight function into each of them. The aim is to separate the classes as much as possible; when one class is already well separated from the others, that class should have little influence on the classification. To alleviate the influence of well-separated classes, a weight is introduced into both the between-class and the within-class scatter matrices. To obtain a simple and effective weight function, ordinary LDA is run between every pair of classes to find the Fisher discrimination value, which is passed as an input to two weight functions used to redefine the between-class and within-class scatter matrices. Experimental results show that the new LDA method improves the classification rate on the glass, iris, and wine datasets in comparison to different versions of LDA.
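
A hedged sketch of the weighting idea, assuming pairwise between-class terms are down-weighted by a decreasing function of the pairwise separation (the paper's exact weight functions, and its weighting of the within-class scatter, are not reproduced):

```python
import numpy as np

def weighted_lda(X, y, weight=lambda d: 1.0 / (d ** 2 + 1e-12)):
    """Sketch of a weighted multi-class LDA: each pairwise between-class term is
    scaled by a weight that decreases with the pair's separation, so classes that
    are already well separated contribute less."""
    classes = np.unique(y)
    n, d = X.shape
    means = {c: X[y == c].mean(axis=0) for c in classes}
    priors = {c: (y == c).mean() for c in classes}

    # Pooled within-class scatter matrix.
    Sw = sum(np.cov(X[y == c], rowvar=False, bias=True) * (y == c).sum() for c in classes)
    Sw_inv = np.linalg.pinv(Sw / n)      # used as a Mahalanobis metric between class means

    # Weighted pairwise between-class scatter matrix.
    Sb = np.zeros((d, d))
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            diff = (means[ci] - means[cj])[:, None]
            dist = float(np.sqrt((diff.T @ Sw_inv @ diff).item()))
            Sb += priors[ci] * priors[cj] * weight(dist) * (diff @ diff.T)

    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order]          # discriminant directions, most discriminative first

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, size=(30, 3)) for m in ([0, 0, 0], [3, 0, 0], [0, 6, 0])])
y = np.repeat([0, 1, 2], 30)
print(weighted_lda(X, y)[:, :2])         # top two directions
```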

Keywords: Discriminant vectors, weighted LDA, uncorrelation, principal components, Fisher-face method, Bootstrap method.

1263 Standard Fuzzy Sets for Aircraft Selection using Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This study uses two-dimensional standard fuzzy sets to enhance multiple criteria decision-making analysis for passenger aircraft selection, allowing decision-makers to express judgments with uncertain and vague information. Using two-dimensional fuzzy numbers, three decision makers evaluated three aircraft alternatives according to seven decision criteria. A validity analysis based on two-dimensional standard fuzzy weighted geometric (SFWG) and two-dimensional standard fuzzy weighted average (SFGA) operators is conducted to test the proposed approach's robustness and effectiveness in the fuzzy multiple criteria decision making (MCDM) evaluation process. 
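
For a flavour of such aggregation operators, the sketch below implements the classical intuitionistic fuzzy weighted averaging and weighted geometric operators on (membership, non-membership) pairs; the paper's two-dimensional standard fuzzy SFWG/SFGA operators are assumed to follow the same pattern but are not reproduced here:

```python
import numpy as np

def ifwa(pairs, w):
    """Intuitionistic fuzzy weighted average of (membership, non-membership) pairs."""
    mu, nu = np.array(pairs).T
    w = np.asarray(w, dtype=float)
    return 1.0 - np.prod((1.0 - mu) ** w), np.prod(nu ** w)

def ifwg(pairs, w):
    """Intuitionistic fuzzy weighted geometric aggregation."""
    mu, nu = np.array(pairs).T
    w = np.asarray(w, dtype=float)
    return np.prod(mu ** w), 1.0 - np.prod((1.0 - nu) ** w)

# One aircraft's ratings on three criteria by one decision maker (hypothetical values).
ratings = [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)]
weights = [0.5, 0.3, 0.2]       # criteria weights, summing to 1
print("weighted average aggregation :", ifwa(ratings, weights))
print("weighted geometric aggregation:", ifwg(ratings, weights))
```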

Keywords: Standard fuzzy sets (SFSs), aircraft selection, multiple criteria decision making, intuitionistic fuzzy sets (IFSs), SFWG, SFGA, MCDM

1262 Project Complexity Indices based on Topology Features

Authors: Amer A. Boushaala

Abstract:

The heuristic decision rules used for project scheduling vary depending upon the project's size, complexity, duration, personnel, and owner requirements. The concept of project complexity has received little detailed attention. The need to differentiate between easy and hard problem instances, and the interest in isolating the fundamental factors that determine the computing effort required by scheduling procedures, have inspired a number of researchers to develop various complexity measures. In this study, the most common measures of project complexity are presented and a new measure of project complexity is developed. The main advantage of the proposed measure is that it considers size, shape and logic characteristics, time characteristics, resource demands and availability, as well as the number of critical activities and critical paths. The sensitivity of the proposed measure to the complexity of project networks has been tested and evaluated against the other complexity measures on the fifty project networks considered in this study. The developed measure shows more sensitivity to changes in the network data and gives accurate quantified results when comparing the complexities of networks.

Keywords: Activity networks, Complexity index, Network complexity measure, Network topology, Project Network.

1261 MIMO Broadcast Scheduling for Weighted Sum-rate Maximization

Authors: Swadhin Kumar Mishra, Sidhartha Panda, C. Ardil

Abstract:

Multiple-Input-Multiple-Output (MIMO) is one of the most important communication techniques that allow wireless systems to achieve higher data rates. To overcome the practical difficulties in implementing Dirty Paper Coding (DPC), various suboptimal MIMO Broadcast (MIMO-BC) scheduling algorithms are employed which choose the best set of users among all users. In this paper we discuss such a suboptimal MIMO-BC scheduling algorithm which employs antenna selection at the receiver side. The channels of the users considered here are not independent and identically distributed (IID), so the users at the receiver side do not get an equal opportunity for communication. We therefore introduce a method of applying weights to the channels of the non-IID users such that each user gets an equal opportunity for communication. The effect of the weights on the overall sum rate achieved by the system is investigated and presented.

Keywords: Antenna selection, Independent and Identically Distributed (IID), Sum-rate capacity, Weighted sum rate.

1260 Documents Emotions Classification Model Based on TF-IDF Weighting Measure

Authors: Amr Mansour Mohsen, Hesham Ahmed Hassan, Amira M. Idrees

Abstract:

Emotion classification of text documents is applied to reveal whether a document expresses a particular emotion of its writer. While various supervised methods have previously been used to classify documents by emotion, in this research we present a novel model that supports the classification algorithms for more accurate results with the help of the TF-IDF measure. Several experiments were conducted to assess the applicability of the proposed model. The model succeeds in raising accuracy, according to the chosen metrics (precision, recall, and F-measure), by refining the lexicon, integrating lexicons from different perspectives, and applying TF-IDF weighting to the classification features. The proposed model has also been compared with other research to demonstrate its competence in raising accuracy.
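
A minimal sketch of the TF-IDF weighting itself (plain term-frequency times inverse document frequency on a toy corpus; the paper's lexicon refinement and classifiers are not shown):

```python
import math
from collections import Counter

def tfidf(docs):
    """Term frequency - inverse document frequency weights for a small corpus."""
    N = len(docs)
    tokenised = [doc.lower().split() for doc in docs]
    df = Counter(term for doc in tokenised for term in set(doc))   # document frequencies
    weights = []
    for doc in tokenised:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(N / df[t]) for t in tf})
    return weights

docs = ["i am so happy today", "this news makes me sad and angry", "happy happy joy"]
for w in tfidf(docs):
    print({t: round(v, 3) for t, v in w.items()})
```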

Keywords: Emotion detection, TF-IDF, WEKA tool, classification algorithms.

1259 Application of a Similarity Measure for Graphs to Web-based Document Structures

Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian, Max Mühlhauser

Abstract:

Due to the tremendous amount of information provided by the World Wide Web (WWW), developing methods for mining the structure of web-based documents is of considerable interest. In this paper we present a similarity measure for graphs representing web-based hypertext structures. Our similarity measure is based on a novel representation of a graph as linear integer strings whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. We thus apply the well-known technique of sequence alignment to a novel and challenging problem: measuring the structural similarity of generalized trees. In other words, we first transform our graphs, considered as high-dimensional objects, into linear structures, and then derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we turn a graph similarity problem into a string similarity problem in order to develop an efficient graph similarity measure. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs that represent web-based document structures.
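
As a simplified stand-in for the property strings used in the paper, the sketch below encodes each rooted structure as its breadth-first out-degree sequence and scores two structures by a standard sequence alignment ratio:

```python
from collections import deque
from difflib import SequenceMatcher

def degree_string(adj, root=0):
    """Encode a rooted hypertext graph as a string of out-degrees in BFS order --
    a crude stand-in for the paper's structural property strings."""
    seen, order, q = {root}, [], deque([root])
    while q:
        u = q.popleft()
        order.append(len(adj.get(u, [])))
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return ",".join(map(str, order))

def structural_similarity(adj_a, adj_b):
    """Similarity in [0, 1] obtained by aligning the two property strings."""
    return SequenceMatcher(None, degree_string(adj_a), degree_string(adj_b)).ratio()

site_a = {0: [1, 2, 3], 1: [4, 5], 2: [], 3: [6], 4: [], 5: [], 6: []}
site_b = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}
print(round(structural_similarity(site_a, site_b), 3))
```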

Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.

1258 The Performance Analysis of Error Saturation Nonlinearity LMS in Impulsive Noise based on Weighted-Energy Conservation

Authors: T Panigrahi, G Panda, Mulgrew

Abstract:

This paper introduces a new approach to the performance analysis of adaptive filters with an error saturation nonlinearity in the presence of impulsive noise. The performance analysis of adaptive filters includes both transient analysis, which shows how fast a filter learns, and steady-state analysis, which shows how well it learns. Recursive expressions for the mean-square deviation (MSD) and the excess mean-square error (EMSE) are derived from weighted-energy conservation arguments, which describe the transient behavior of the adaptive algorithm. The steady-state behavior is also analyzed for correlated input regressor data, so the approach yields new performance results without restricting the input regression data to be white.
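
A minimal sketch of LMS adaptation with a saturated (clipped) error nonlinearity under impulsive noise; the clipping function, step size, and noise model are illustrative, not the paper's analytical setting:

```python
import numpy as np

def saturation_lms(x, d, taps=4, mu=0.01, sat=1.0):
    """LMS adaptation with a clipped (saturated) error nonlinearity, which limits the
    influence of impulsive-noise outliers on the weight update."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1: n + 1][::-1]        # regressor [x[n], x[n-1], ..., x[n-taps+1]]
        e = d[n] - w @ u
        w += mu * np.clip(e, -sat, sat) * u     # saturated error drives the update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
true_w = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(x, true_w)[: len(x)] + 0.01 * rng.standard_normal(5000)
d += (rng.random(5000) < 0.01) * rng.standard_normal(5000) * 20   # sparse impulsive noise
print("estimated taps:", np.round(saturation_lms(x, d), 3))
```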

Keywords: Error saturation nonlinearity, transient analysis, impulsive noise.

1257 CFD Modeling of a Radiator Axial Fan for Air Flow Distribution

Authors: S. Jain, Y. Deshpande

Abstract:

The fluid mechanics principle is used extensively in designing axial flow fans and their associated equipment. This paper presents a computational fluid dynamics (CFD) modeling of air flow distribution from a radiator axial flow fan used in an acid pump truck Tier4 (APT T4) Repower. This axial flow fan augments the transfer of heat from the engine mounted on the APT T4. CFD analysis was performed for an area weighted average static pressure difference at the inlet and outlet of the fan. Pressure contours, velocity vectors, and path lines were plotted for detailing the flow characteristics for different orientations of the fan blade. The results were then compared and verified against known theoretical observations and actual experimental data. This study shows that a CFD simulation can be very useful for predicting and understanding the flow distribution from a radiator fan for further research work.
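
The area-weighted average used for the pressure difference is simply sum(p_i * A_i) / sum(A_i); a tiny sketch with made-up face data:

```python
import numpy as np

def area_weighted_avg(p, a):
    """Area-weighted average: sum(p_i * A_i) / sum(A_i)."""
    return float(np.sum(p * a) / np.sum(a))

# Hypothetical face data: static pressure (Pa) and face area (m^2) at the fan inlet and outlet.
p_in,  a_in  = np.array([101300., 101310., 101305.]), np.array([0.02, 0.03, 0.05])
p_out, a_out = np.array([101450., 101470., 101460.]), np.array([0.02, 0.03, 0.05])

dp = area_weighted_avg(p_out, a_out) - area_weighted_avg(p_in, a_in)
print(f"area-weighted static pressure rise across the fan: {dp:.1f} Pa")
```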

Keywords: Computational fluid dynamics (CFD), acid pump truck (APT) Tier4 Repower, axial flow fan, area weighted average static pressure difference, and contour plots.

1256 Application of Feed-Forward Neural Networks Autoregressive Models in Gross Domestic Product Prediction

Authors: E. Giovanis

Abstract:

In this paper we present an autoregressive model with neural network modeling and standard error back-propagation training optimization in order to predict the gross domestic product (GDP) growth rate of four countries. Specifically, we propose a kind of weighted regression, usable for econometric purposes, in which the initial inputs are multiplied by the neural network's final optimum input-to-hidden-layer weights obtained after the training process. The forecasts are compared with those of the ordinary autoregressive model, and we conclude that the proposed regression's forecasts significantly outperform those of the autoregressive model in the out-of-sample period. The idea behind this approach is to propose a parametric regression with weighted variables in order to test the statistical significance and magnitude of the estimated autoregressive coefficients and simultaneously to estimate the forecasts.

Keywords: Autoregressive model, Error back-propagation, Feed-Forward neural networks, Gross Domestic Product.

1255 A New Approach for Controlling Overhead Traveling Crane Using Rough Controller

Authors: Mazin Z. Othman

Abstract:

This paper presents the idea of a rough controller with application to control the overhead traveling crane system. The structure of such a controller is based on a suggested concept of a fuzzy logic controller. A measure of fuzziness in rough sets is introduced. A comparison between fuzzy logic controller and rough controller has been demonstrated. The results of a simulation comparing the performance of both controllers are shown. From these results we infer that the performance of the proposed rough controller is satisfactory.

Keywords: Accuracy measure, Fuzzy Logic Controller (FLC), Overhead Traveling Crane (OTC), Rough Set Theory (RST), Roughness measure

1254 Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System

Authors: Rahman Ali, Muhammad Sajjad, Farkhund Iqbal, Muhammad Sadiq Hassan Zada, Mohammed Hussain

Abstract:

Emission of carbon dioxide (CO2) has adversely affected the environment, and transportation is one of its major sources. In the last few decades, the increased mobility of people using vehicles has enormously increased CO2 emission into the environment. Reducing CO2 emission requires a sustainable transportation system, in which smart parking is one of the important measures to be established. To contribute to reducing CO2 emission, this research proposes a smart parking system. A cloud-based solution automatically searches for and recommends the most preferred parking slots to drivers. To determine the preference of parking areas, the methodology exploits a number of unique parking features which ultimately lead to selecting the parking that results in the minimum level of CO2 emission from the vehicle's current position. To realize the methodology, a scenario-based implementation is considered. During the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features, and a list of parking areas with parking features are used; sorting, multi-level filtering, exploratory data analysis (EDA), the Analytical Hierarchy Process (AHP), and the weighted sum model (WSM) rank the parking areas and recommend the top-k most preferred parking areas to the drivers. In the EDA process, "2020testcar-2020-03-03", a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, the results of the proposed system are compared with the conventional approach, which reveals that the proposed methodology supersedes the conventional one in reducing the emission of CO2 into the atmosphere.
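
A minimal sketch of the weighted sum model (WSM) ranking step on made-up parking data; the feature set, normalisation, and criteria weights below are hypothetical, and the EDA/AHP stages are omitted:

```python
import numpy as np

# Hypothetical parking areas scored on three features; the first two are cost criteria
# (lower is better), the third is a benefit criterion (higher is better).
parkings = ["P1", "P2", "P3", "P4"]
# columns: distance to destination (km), expected search time (min), free slots
raw = np.array([[1.2, 4.0, 10.],
                [0.5, 9.0,  2.],
                [0.8, 3.0,  6.],
                [2.0, 2.0, 20.]])
cost = [True, True, False]
weights = np.array([0.5, 0.3, 0.2])        # illustrative criteria weights (sum to 1)

norm = raw / raw.max(axis=0)                              # benefit criteria: linear scaling
norm[:, cost] = raw[:, cost].min(axis=0) / raw[:, cost]   # cost criteria: inverted scaling

scores = norm @ weights                                   # weighted sum model
for name, s in sorted(zip(parkings, scores), key=lambda t: -t[1]):
    print(name, round(float(s), 3))
```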

Keywords: CO2 emission, IoT, EDA, Weighted Sum Model, WSM, regression, smart parking system.

1253 The Induced Generalized Hybrid Averaging Operator and its Application in Financial Decision Making

Authors: José M. Merigó, Montserrat Casanovas

Abstract:

We present the induced generalized hybrid averaging (IGHA) operator. It is a new aggregation operator that generalizes the hybrid averaging (HA) by using generalized means and order inducing variables. With this formulation, we get a wide range of mean operators such as the induced HA (IHA), the induced hybrid quadratic averaging (IHQA), the HA, etc. The ordered weighted averaging (OWA) operator and the weighted average (WA) are included as special cases of the HA operator. Therefore, with this generalization we can obtain a wide range of aggregation operators such as the induced generalized OWA (IGOWA), the generalized OWA (GOWA), etc. We further generalize the IGHA operator by using quasi-arithmetic means. Then, we get the Quasi-IHA operator. Finally, we also develop an illustrative example of the new approach in a financial decision making problem. The main advantage of the IGHA is that it gives a more complete view of the decision problem to the decision maker because it considers a wide range of situations depending on the operator used.
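
For reference, the two special cases mentioned above can be sketched directly; the OWA operator attaches weights to ordered positions, while the plain weighted average attaches them to the arguments themselves (values and weights below are illustrative):

```python
import numpy as np

def owa(values, w):
    """Ordered weighted average: weights attach to positions after sorting descending."""
    return float(np.sort(values)[::-1] @ np.asarray(w))

def wa(values, w):
    """Plain weighted average: weights attach to the arguments themselves."""
    return float(np.asarray(values) @ np.asarray(w))

returns = [0.08, 0.02, 0.12, 0.05]   # hypothetical payoffs of one investment under 4 scenarios
w = [0.1, 0.2, 0.3, 0.4]             # weights summing to 1
print("WA :", wa(returns, w))        # importance-weighted mean
print("OWA:", owa(returns, w))       # pessimistic: larger weights fall on the lower payoffs
```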

Keywords: Decision making, Aggregation operators, OWA operator, Generalized means, Selection of investments.

1252 W3-Miner: Mining Weighted Frequent Subtree Patterns in a Collection of Trees

Authors: R. AliMohammadzadeh, M. Haghir Chehreghani, A. Zarnani, M. Rahgozar

Abstract:

Mining frequent tree patterns has many useful applications in XML mining, bioinformatics, network routing, etc. Most frequent subtree mining algorithms (i.e., FREQT, TreeMiner and CMTreeMiner) use the anti-monotone property in the candidate subtree generation phase. However, none of these algorithms has verified the correctness of this property for tree-structured data. In this research it is shown that anti-monotonicity does not generally hold when weighted support is used in tree pattern discovery. As a result, tree mining algorithms based on this property would probably miss some valid frequent subtree patterns in a collection of trees. In this paper, we investigate the correctness of the anti-monotone property for the problem of weighted frequent subtree mining. In addition, we propose W3-Miner, a new algorithm for the full extraction of frequent subtrees. The experimental results confirm that W3-Miner finds some frequent subtrees that the previously proposed algorithms are not able to discover.

Keywords: Semi-Structured Data Mining, Anti-Monotone Property, Trees.

1251 Genetic Algorithm Application in a Dynamic PCB Assembly with Carryover Sequence- Dependent Setups

Authors: M. T. Yazdani Sabouni, Rasaratnam Logendran

Abstract:

We consider a typical problem in the assembly of printed circuit boards (PCBs) in a two-machine flow shop system, with the objective of simultaneously minimizing the weighted sum of weighted tardiness and weighted flow time. The investigated problem is a group scheduling problem in which PCBs are assembled in groups, and the interest is in finding the best sequence of groups, as well as of the boards within each group, to minimize the objective function value. The setup operation between any two board groups is characterized by carryover sequence-dependent setup times, which exactly matches the real application of this problem. As a technical constraint, all of the boards must be kitted before the assembly operation starts (the kitting operation), normally by kitting staff. The main idea developed in this paper is to completely eliminate the role of the kitting staff by assigning the kitting task to the machine operator during his idle time, which amounts to integrating the internal (machine) and external (kitting) setup times. Performing the kitting operation, i.e. preparing the next set of boards while the other boards are being assembled, causes boards to enter the system continuously, i.e. with dynamic arrival times. Consequently, a dynamic PCB assembly system is introduced for the first time in PCB assembly, with characteristics similar to those of just-in-time manufacturing. The investigated problem is computationally very complex, meaning that finding optimal solutions becomes impossible as the problem size grows. Thus, a heuristic based on a Genetic Algorithm (GA) is employed. An example problem demonstrating the application of the developed GA is presented, and numerical results of applying the GA to several instances are provided.

Keywords: Genetic algorithm, Dynamic PCB assembly, Carryover sequence-dependent setup times, Multi-objective.

1250 Application of Intuitionistic Fuzzy Cross Entropy Measure in Decision Making for Medical Diagnosis

Authors: Shikha Maheshwari, Amit Srivastava

Abstract:

In medical investigations, uncertainty is a major challenge for doctors and experts when deciding among diseases that share a common set of symptoms, and it has been extensively increasing in medical diagnosis problems. The theory of cross entropy for intuitionistic fuzzy sets (IFS) is an effective approach for coping with uncertainty in decision making for medical diagnosis. The main focus of this paper is to propose a new intuitionistic fuzzy cross entropy measure (IFCEM), which aids in reducing uncertainty so that doctors and experts can decide more easily on a patient's disease. It is shown that the proposed measure has some elegant properties, which demonstrate its potency. Furthermore, the efficiency and utility of the proposed measure are exemplified in detail using a real-life case study of disease diagnosis in medical science.

Keywords: Intuitionistic fuzzy cross entropy (IFCEM), intuitionistic fuzzy set (IFS), medical diagnosis, uncertainty.

1249 A Bootstrap's Reliability Measure on Tests of Hypotheses

Authors: Al Jefferson J. Pabelic, Dennis A. Tarepe

Abstract:

Bootstrapping has gained popularity in various tests of hypotheses as an alternative to using the asymptotic distribution when one is not sure of the distribution of the test statistic under the null hypothesis. This method, in general, has two variants: the parametric and the nonparametric approaches. However, issues about the reliability of this method always arise in applications. This paper addresses the reliability issue by establishing a reliability measure in terms of quantiles with respect to the asymptotic distribution, when the latter is approximately correct. The test of hypotheses used is the F-test. The simulation results show that using nonparametric bootstrapping in the F-test gives better reliability than parametric bootstrapping with relatively higher degrees of freedom.
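
A minimal sketch of a nonparametric bootstrap for the F-test of equal variances, resampling the null distribution from pooled, mean-centred data (one of several reasonable bootstrap schemes; not necessarily the authors' exact setup):

```python
import numpy as np

def f_stat(a, b):
    return np.var(a, ddof=1) / np.var(b, ddof=1)

def bootstrap_f_test(a, b, n_boot=5000, rng=None):
    """Nonparametric bootstrap p-value for H0: equal variances. The null distribution
    is built by resampling both groups from the pooled, mean-centred observations."""
    if rng is None:
        rng = np.random.default_rng()
    observed = f_stat(a, b)
    pooled = np.concatenate([a - a.mean(), b - b.mean()])
    stats = np.array([
        f_stat(rng.choice(pooled, size=len(a), replace=True),
               rng.choice(pooled, size=len(b), replace=True))
        for _ in range(n_boot)
    ])
    return observed, float(np.mean(stats >= observed))   # one-sided p-value

rng = np.random.default_rng(0)
a = rng.normal(0, 1.5, 30)    # group with the (truly) larger variance placed first
b = rng.normal(0, 1.0, 30)
f, p = bootstrap_f_test(a, b, rng=rng)
print(f"F = {f:.2f}, bootstrap p = {p:.3f}")
```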

Keywords: F-test, nonparametric bootstrapping, parametric bootstrapping, reliability measure, tests of hypotheses.

1248 Optimal Convolutive Filters for Real-Time Detection and Arrival Time Estimation of Transient Signals

Authors: Michal Natora, Felix Franke, Klaus Obermayer

Abstract:

Linear convolutive filters are fast in calculation and in application, and thus, often used for real-time processing of continuous data streams. In the case of transient signals, a filter has not only to detect the presence of a specific waveform, but to estimate its arrival time as well. In this study, a measure is presented which indicates the performance of detectors in achieving both of these tasks simultaneously. Furthermore, a new sub-class of linear filters within the class of filters which minimize the quadratic response is proposed. The proposed filters are more flexible than the existing ones, like the adaptive matched filter or the minimum power distortionless response beamformer, and prove to be superior with respect to that measure in certain settings. Simulations of a real-time scenario confirm the advantage of these filters as well as the usefulness of the performance measure.
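
A small sketch of the baseline matched (correlation) filter performing joint detection and arrival-time estimation of a transient; the template, noise level, and threshold are illustrative, and the proposed filter sub-class is not implemented here:

```python
import numpy as np

rng = np.random.default_rng(0)
template = np.hanning(32) * np.sin(np.linspace(0, 4 * np.pi, 32))   # transient waveform
signal = 0.3 * rng.standard_normal(1000)
true_arrival = 400
signal[true_arrival:true_arrival + 32] += template                  # embed one transient

# Matched (correlation) filter: convolve with the time-reversed, normalised template.
h = template[::-1] / np.linalg.norm(template)
out = np.convolve(signal, h, mode="valid")

threshold = 5 * np.std(out)          # illustrative detection threshold
peak = int(np.argmax(out))
if out[peak] > threshold:
    print("detected; estimated arrival index:", peak, "(true:", true_arrival, ")")
else:
    print("no detection")
```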

Keywords: Adaptive matched filter, minimum variance distortionless response, beam forming, Capon beam former, linear filters, performance measure.

1247 Iterative Image Reconstruction for Sparse-View Computed Tomography via Total Variation Regularization and Dictionary Learning

Authors: XianYu Zhao, JinXu Guo

Abstract:

Recently, low-dose computed tomography (CT) has become highly desirable due to increasing attention to the potential risks of excessive radiation. For low-dose CT imaging, ensuring image quality while reducing radiation dose is a major challenge. To facilitate low-dose CT imaging, we propose an improved statistical iterative reconstruction scheme based on the penalized weighted least squares (PWLS) criterion combined with total variation (TV) minimization and sparse dictionary learning (DL) to improve reconstruction performance. We call this method "PWLS-TV-DL". To evaluate the PWLS-TV-DL method, we performed experiments on digital and physical phantoms. The experimental results show that our method is superior to other methods in image quality and computational efficiency, which confirms its potential for low-dose CT imaging.

Keywords: Low dose computed tomography, penalized weighted least squares, total variation, dictionary learning.

1246 Multiple Criteria Decision Making for Turkish Air Force Stealth Fighter Aircraft Selection

Authors: C. Ardil

Abstract:

Neutrosophic logic decision analysis is proposed as a method of stealth fighter aircraft selection for the Turkish Air Force. The opinions of experts are employed to rank the alternatives across a set of criteria. The analyst uses neutrosophic logic numbers to describe the experts' preferences. This approach can handle situations in which precise data are unavailable, which is most commonly the case in stealth fighter aircraft selection. Neutrosophic logic numbers can capture the imprecision of the factors affecting decision making, such as stealth analysis, survivability analysis, and performance analysis. Neutrosophic ranking is achieved using the weighted arithmetic operator and the weighted geometric operator, and the alternatives are ranked from best to worst. An example is also presented to illustrate the applicability and effectiveness of the proposed method.

Keywords: Neutrosophic set theory, stealth fighter aircraft selection, multiple criteria decision-making, neutrosophic logic decision making, Turkish Air Force, MCDM

1245 Influence of Parameters of Modeling and Data Distribution for Optimal Condition on Locally Weighted Projection Regression Method

Authors: Farhad Asadi, Mohammad Javad Mollakazemi, Aref Ghafouri

Abstract:

Recent research in neural network science and neuroscience on modeling complex time series data and statistical learning has focused mostly on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in data series. Locally weighted projection regression (LWPR) is a flexible and powerful algorithm for nonlinear approximation in high-dimensional signal spaces. In this paper, different learning scenarios for one- and two-dimensional data series with different distributions are investigated by simulation; noise is then added to the data to create differently disordered distributions in the time series, in order to evaluate the algorithm's ability to predict local nonlinearity. The performance of the algorithm is simulated, and its sensitivity to the data distribution, together with the influence of the important local-validity parameter, is explained for cases where the data distribution is wide or the number of data points is small.
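
As a simplified stand-in for LWPR, the sketch below performs locally weighted linear regression at a query point with a Gaussian locality kernel, whose bandwidth plays the role of the local-validity parameter discussed above:

```python
import numpy as np

def locally_weighted_prediction(x_query, X, y, bandwidth=0.4):
    """Locally weighted linear regression at one query point: each training sample is
    weighted by a Gaussian kernel of its distance to the query (a simplified stand-in
    for locally weighted projection regression)."""
    w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)   # locality weights
    A = np.column_stack([np.ones_like(X), X])             # local linear model
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0] + beta[1] * x_query

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, 200))                      # 1-D input distribution
y = np.sin(X) + 0.1 * rng.standard_normal(200)            # nonlinear target
grid = np.linspace(-3, 3, 7)
print([round(locally_weighted_prediction(q, X, y), 3) for q in grid])
print([round(v, 3) for v in np.sin(grid)])                # compare with the true function
```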

Keywords: Local nonlinear estimation, LWPR algorithm, Online training method.

1244 Advanced Jet Trainer and Light Attack Aircraft Selection Using Composite Programming in Multiple Criteria Decision Making Analysis Method

Authors: C. Ardil

Abstract:

In this paper, composite programming is discussed for aircraft evaluation and selection problem using the multiple criteria decision analysis method. The decision criteria and aircraft alternatives were identified from the literature review. The importance of criteria weights was determined by the standard deviation method. The proposed model is applied to a practical decision problem for evaluating and selecting advanced jet trainer and light attack aircraft. The proposed technique gives robust and efficient results in modeling multiple criteria decisions. As a result of composite programming analysis, Hürjet, an advanced jet trainer and light attack aircraft alternative (a3), was chosen as the most suitable aircraft candidate.  

Keywords: composite programming, additive weighted model, multiplicative weighted model, multiple criteria decision making analysis, MCDMA, aircraft selection, advanced jet trainer and light attack aircraft, M-346, FA-50, Hürjet

1243 Video-Based Face Recognition Based On State-Space Model

Authors: Cheng-Chieh Chiang, Yi-Chia Chan, Greg C. Lee

Abstract:

This paper proposes a video-based framework for face recognition to identify which faces appear in a video sequence. Our basic idea is like a tracking task - to track a selection of person candidates over time according to the observing visual features of face images in video frames. Hence, we employ the state-space model to formulate video-based face recognition by dividing this problem into two parts: the likelihood and the transition measures. The likelihood measure is to recognize whose face is currently being observed in video frames, for which two-dimensional linear discriminant analysis is employed. The transition measure estimates the probability of changing from an incorrect recognition at the previous stage to the correct person at the current stage. Moreover, extra nodes associated with head nodes are incorporated into our proposed state-space model. The experimental results are also provided to demonstrate the robustness and efficiency of our proposed approach.

Keywords: 2DLDA, face recognition, state-space model, likelihood measure, transition measure.

1242 Robust Statistics Based Algorithm to Remove Salt and Pepper Noise in Images

Authors: V.R.Vijaykumar, P.T.Vanathi, P.Kanagasabapathy, D.Ebenezer

Abstract:

In this paper, a robust statistics based filter to remove salt and pepper noise in digital images is presented. The algorithm first detects the corrupted pixels, since impulse noise affects only certain pixels in the image while the remaining pixels are uncorrupted. The corrupted pixels are then replaced by an estimated value obtained from the proposed robust statistics based filter. The proposed method performs well in removing low to medium density impulse noise with detail preservation up to a noise density of 70%, compared to the standard median filter, weighted median filter, recursive weighted median filter, progressive switching median filter, signal-dependent rank-ordered mean filter, adaptive median filter, and a recently proposed decision-based algorithm. The visual and quantitative results show that the proposed algorithm outperforms these methods in restoring the original image, with superior preservation of edges and better suppression of impulse noise.
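
A minimal sketch of the detect-then-estimate idea, assuming impulses take the extreme values 0/255 and using a plain median of uncorrupted neighbours as the estimator (the paper's robust-statistics estimator is not reproduced):

```python
import numpy as np

def remove_salt_pepper(img):
    """Detect pixels at the extreme values (0 or 255) and replace each with the median of
    the uncorrupted pixels in its 3x3 neighbourhood -- a simplified decision-based filter."""
    out = img.astype(float).copy()
    H, W = img.shape
    corrupted = (img == 0) | (img == 255)
    for y, x in zip(*np.nonzero(corrupted)):
        y0, y1 = max(y - 1, 0), min(y + 2, H)
        x0, x1 = max(x - 1, 0), min(x + 2, W)
        window = img[y0:y1, x0:x1]
        good = window[(window != 0) & (window != 255)]
        if good.size:                        # otherwise leave the pixel unchanged
            out[y, x] = np.median(good)
    return out.astype(img.dtype)

rng = np.random.default_rng(0)
clean = rng.integers(30, 220, size=(64, 64)).astype(np.uint8)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.2                            # 20% impulse noise
noisy[mask] = rng.choice([0, 255], size=int(mask.sum())).astype(np.uint8)
restored = remove_salt_pepper(noisy)
print("mean abs error, noisy   :", float(np.mean(np.abs(noisy.astype(int) - clean.astype(int)))))
print("mean abs error, filtered:", float(np.mean(np.abs(restored.astype(int) - clean.astype(int)))))
```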

Keywords: Image denoising, Nonlinear filter, Robust Statistics, and Salt and Pepper Noise.
