Search results for: Absolute entropy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 402

342 Application of the Statistical Conditional Entropy Function for the Definition of Cause-and-Effect Relations during Primary Soil Formation

Authors: Vladimir K. Mukhomorov

Abstract:

Within the framework of information theory, a statistical and probabilistic model is proposed for defining cause-and-effect relations in coupled multicomponent subsystems. A quantitative parameter defined through the conditional and unconditional entropy functions is introduced. The method is applied to the analysis of experimental data on the dynamics of change in the chemical element composition of plant organs (roots, reproductive organs, leaves and stems). The experiment is directed at studying the temporal processes of primary soil formation and their connection with the dynamics of redistribution of chemical elements in plant organs. This statistical and probabilistic model also makes it possible to specify, quantitatively and unambiguously, the directions of the information flows between plant organs.

Keywords: Chemical elements, entropy function, information, plants.
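
For illustration of the entropy quantities involved, here is a minimal numpy sketch (not the authors' code) computing the unconditional entropy H(Y), the conditional entropy H(Y|X) and a normalized coupling parameter from a discrete joint distribution; the particular coupling definition and the toy contingency table are assumptions made for the example.

```python
import numpy as np

def entropies(joint_counts):
    """Entropy quantities from a discrete joint contingency table of two variables."""
    p_xy = joint_counts / joint_counts.sum()     # joint probabilities p(x, y)
    p_x = p_xy.sum(axis=1)                       # marginal p(x)
    p_y = p_xy.sum(axis=0)                       # marginal p(y)

    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    H_y = H(p_y)                                 # unconditional entropy H(Y)
    H_y_given_x = H(p_xy.ravel()) - H(p_x)       # conditional entropy H(Y|X) = H(X,Y) - H(X)
    # Illustrative coupling parameter: relative reduction of uncertainty in Y due to X.
    coupling = (H_y - H_y_given_x) / H_y if H_y > 0 else 0.0
    return H_y, H_y_given_x, coupling

# Toy contingency table, e.g. element-content classes in roots (rows) vs. leaves (columns).
counts = np.array([[30.0, 5.0, 1.0],
                   [4.0, 25.0, 6.0],
                   [2.0, 7.0, 20.0]])
print(entropies(counts))
```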

341 Development of Maximum Entropy Method for Prediction of Droplet-size Distribution in Primary Breakup Region of Spray

Authors: E. Movahednejad, F. Ommi

Abstract:

Droplet size distributions in the cold spray of a fuel are important to the observed combustion behavior. Specification of droplet size and velocity distributions immediately downstream of injectors is also essential as boundary conditions for advanced computational fluid dynamics (CFD) and two-phase spray transport calculations. This paper describes the development of a new model to be incorporated into the maximum entropy principle (MEP) formalism for prediction of the droplet size distribution in the droplet formation region. The MEP approach can predict the most likely droplet size and velocity distributions under a set of constraints expressing the available information related to the distribution. In this article, by considering the mechanisms of turbulence generation inside the nozzle and wave growth on the jet surface, a logical framework is provided that couples the flow inside the nozzle to the resulting atomization process. The purpose of this paper is to describe the formulation of this new model and to incorporate it into the maximum entropy principle (MEP) by coupling sub-models together using source terms of momentum and energy. Comparisons between the model predictions and experimental data for a gas turbine swirling nozzle and an annular spray indicate good agreement between model and experiment.

Keywords: Droplet, instability, size distribution, turbulence, maximum entropy.
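
As an illustration of the MEP formalism only (not the authors' full sub-model coupling), the sketch below computes a maximum-entropy discrete droplet-size distribution under a normalization constraint and a single mean-diameter constraint; with one moment constraint the solution takes the exponential form p_i ∝ exp(−λ d_i), and λ is found by root finding. The diameter grid and target mean are made-up values.

```python
import numpy as np
from scipy.optimize import brentq

d = np.linspace(5.0, 200.0, 100)      # droplet-diameter bins in micrometres (illustrative)
target_mean = 60.0                    # assumed constraint on the mean diameter

def mean_for(lam):
    """Mean diameter of the maximum-entropy distribution p_i ~ exp(-lam * d_i)."""
    w = np.exp(-lam * (d - d.min()))  # shifted exponent for numerical stability
    p = w / w.sum()
    return np.sum(p * d)

# Solve for the Lagrange multiplier that satisfies the mean-diameter constraint.
lam = brentq(lambda x: mean_for(x) - target_mean, -1.0, 1.0)
p = np.exp(-lam * (d - d.min()))
p /= p.sum()
print(f"lambda = {lam:.4f}, mean = {np.sum(p * d):.2f} um, "
      f"entropy = {-np.sum(p * np.log(p)):.3f}")
```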

340 The Buffer Gas Influence on Absolute Cu Atom Density with Regard to Deposition Rate

Authors: S. Sobhanian, H. Naghshara, N. Sadeghi, S. Khorram

Abstract:

The absolute density of Cu atoms in the Cu(2S1/2 ← 2P1/2) ground state has been measured by the Resonance Optical Absorption (ROA) technique in a DC magnetron sputtering deposition with argon. We measured these densities under a variety of operating conditions: pressure from 0.6 μbar to 14 μbar, input power from 10 W to 200 W, and N2 admixture from 0% to 100%. To measure the gas temperature, we used the simulation of N2 rotational spectra with a special computer code. The absolute number density of Cu atoms decreases with increasing N2 percentage of the buffer gas under all conditions of this work. The deposition rate, however, does not decrease in the same manner; its variation is very small and within the accuracy limit of the quartz balance measuring equipment. We therefore conclude that the decrease in the absolute number density of Cu atoms in the magnetron plasma does not have a large effect on the deposition rate, because the diffusion of Cu atoms into the chamber volume and the deviation of Cu atoms from the direct path towards the substrate decrease with increasing N2 percentage of the buffer gas. This is because of the lower mass of N2 compared to argon.

Keywords: Deposition rate, Resonance Optical Absorption, Sputtering.

339 Wavelet Entropy Based Algorithm for Fault Detection and Classification in FACTS Compensated Transmission Line

Authors: Amany M. El-Zonkoly, Hussein Desouki

Abstract:

Distance protection of transmission lines that include advanced flexible AC transmission system (FACTS) devices has been a very challenging task. The FACTS devices of interest in this paper are the static synchronous series compensator (SSSC) and the unified power flow controller (UPFC). In this paper, a new algorithm is proposed to detect and classify the fault and to identify the fault position in a transmission line with respect to a FACTS device placed at the midpoint of the line. Discrete wavelet transformation and wavelet entropy calculations are used to analyze the during-fault current and voltage signals of the compensated transmission line. The proposed algorithm is very simple and accurate in fault detection and classification. A variety of fault cases and simulation results are presented to show the effectiveness of the algorithm.

Keywords: Entropy calculation, FACTS, SSSC, UPFC, wavelet transform.
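
A minimal sketch of a wavelet-entropy feature for one measurement window, assuming the PyWavelets package and a generic energy-based definition (the paper's exact entropy formulation and thresholds may differ):

```python
import numpy as np
import pywt

def wavelet_entropy(signal, wavelet="db4", level=4):
    """Shannon entropy of the relative DWT sub-band energies of one signal window."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()                         # relative energy per sub-band
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Toy example: a 50 Hz current with a sudden high-frequency transient (simulated fault).
fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy.copy()
faulty[500:] += 0.8 * np.sin(2 * np.pi * 1500 * t[500:])
print("healthy:", wavelet_entropy(healthy), "faulty:", wavelet_entropy(faulty))
```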

338 Upgraded Rough Clustering and Outlier Detection Method on Yeast Dataset by Entropy Rough K-Means Method

Authors: P. Ashok, G. M. Kadhar Nawaz

Abstract:

Rough set theory is used to handle uncertainty and incomplete information by applying two exact sets, the lower approximation and the upper approximation. In this paper, rough clustering is improved by adopting similarity-, dissimilarity-similarity- and entropy-based initial centroid selection: three clustering algorithms, namely Entropy-based Rough K-Means (ERKM), Similarity-based Rough K-Means (SRKM) and Dissimilarity-Similarity-based Rough K-Means (DSRKM), were developed and executed on the yeast dataset. The rough clustering algorithms are validated by cluster validity indexes, namely the Rand and Adjusted Rand indexes. Experimental results show that the ERKM clustering algorithm performs effectively and delivers better results than the other clustering methods. Outlier detection is an important task in data mining; outliers are objects very different from the rest of the objects in the clusters. The Entropy-based Rough Outlier Factor (EROF) method is suitable for detecting outliers effectively in the yeast dataset. In the rough K-Means method, tuning the epsilon (ε) value from 0.8 to 1.08 detects outliers in the boundary region, and the RKM algorithm delivers better results when the epsilon (ε) value is chosen in this range. Experimental results show that the EROF method performed very well with the clustering algorithms and is suitable for detecting outliers effectively in all datasets. Further, experimental readings show that the ERKM clustering method outperformed the other methods.

Keywords: Clustering, Entropy, Outlier, Rough K-Means, validity index.

337 Combination of Tensile Strength and Elongation of Reverse Rolled TaNbHfZrTi Refractory High Entropy Alloy

Authors: M. Veerasham

Abstract:

Refractory high entropy alloys are potential materials for high-temperature applications because of their ability to retain high strength up to 1600°C. However, their practical applications are limited by poor elongation at room temperature. Decreasing the average valence electron concentration (VEC) is therefore an effective design strategy for improving the intrinsic ductility of refractory high entropy alloys. In this work, the high-entropy alloy TaNbHfZrTi was processed at room temperature by stepwise reverse rolling up to a 90% reduction in thickness. Subsequently, the 90% reverse-rolled samples were annealed at 800°C and 1000°C for 1 h to understand phase stability, microstructure, texture and mechanical properties. The 90% reverse-rolled condition contains a body-centered cubic (BCC) single phase; upon annealing at 800°C, a secondary BCC-2 phase forms. Partially recrystallized and completely recrystallized microstructures develop after annealing at 800°C and 1000°C, respectively. The reverse-rolled condition and the 1000°C-annealed condition exhibit extraordinary room-temperature tensile properties, with high ultimate tensile strength (UTS) achieved without loss of ductility, overcoming the usual "strength-ductility" trade-off. The 90% reverse-rolled condition and the condition annealed at 1000°C for 1 h show UTS of 1430 MPa and 1556 MPa with appreciable elongations of 21% and 20%, respectively. A hierarchical microstructure develops in the sample annealed at 1000°C, which leads to the simultaneous increase in tensile strength and elongation.

Keywords: refractory high entropy alloys, reverse rolling, recrystallization, microstructure, tensile properties
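
For reference, the average valence electron concentration used as the ductility design criterion is simply a composition-weighted average of the elemental group numbers; a small sketch for equiatomic TaNbHfZrTi (group numbers Ta = 5, Nb = 5, Hf = 4, Zr = 4, Ti = 4):

```python
# Composition-weighted valence electron concentration (VEC) of an equiatomic alloy.
group_number = {"Ta": 5, "Nb": 5, "Hf": 4, "Zr": 4, "Ti": 4}
composition = {element: 0.20 for element in group_number}   # equiatomic atomic fractions

vec = sum(composition[el] * group_number[el] for el in composition)
print(f"average VEC of TaNbHfZrTi = {vec:.2f}")             # 4.40
```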

336 An Improved C-Means Model for MRI Segmentation

Authors: Ying Shen, Weihua Zhu

Abstract:

Medical images are important in helping to identify different diseases; for example, magnetic resonance imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. In medical image analysis, image segmentation is usually the first step to find regions with similar color, intensity or texture, so that the diagnosis can be further carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model is based on information entropy to evaluate the segmentation results while achieving global optimization. Several contributions are significant. Firstly, a Genetic Algorithm (GA) is used to achieve global optimization in this model, which the fuzzy C-means clustering algorithm (FCMA) is not capable of. Secondly, the information entropy after segmentation is used to measure the effectiveness of the MRI image processing. Experimental results show that the proposed model outperforms traditional approaches.

Keywords: Magnetic Resonance Image, C-means model, image segmentation, information entropy.
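
A minimal sketch of the information-entropy evaluation idea (the GA-driven optimization of cluster centres is omitted): pixels are hard-assigned to the nearest intensity centre and the entropy of the resulting class-occupancy distribution is reported as a segmentation summary. The toy image and centres are made up.

```python
import numpy as np

def assign_to_centres(image, centres):
    """Hard assignment of each pixel intensity to the nearest cluster centre."""
    distances = np.abs(image[..., None] - np.asarray(centres, dtype=float))
    return np.argmin(distances, axis=-1)

def segmentation_entropy(label_map):
    """Shannon entropy of the class-occupancy distribution of a segmented image."""
    _, counts = np.unique(label_map, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
mri_like = np.concatenate([rng.normal(60, 8, 2000),    # three synthetic tissue classes
                           rng.normal(120, 8, 2000),
                           rng.normal(200, 8, 2000)]).reshape(60, 100)
labels = assign_to_centres(mri_like, centres=[60, 120, 200])
print(segmentation_entropy(labels))
```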

335 Quantification of Heart Rate Variability: A Measure based on Unique Heart Rates

Authors: V. I. Thajudin Ahamed, P. Dhanasekaran, A. Naseem, N. G. Karthick, T. K. Abdul Jaleel, Paul K. Joseph

Abstract:

It is established that the instantaneous heart rate (HR) of healthy humans keeps changing. Analysis of heart rate variability (HRV) has become a popular non-invasive tool for assessing the activity of the autonomic nervous system. Depressed HRV has been found in several disorders characterised by autonomic nervous dysfunction, like diabetes mellitus (DM) and coronary artery disease. A new technique, which searches for pattern repeatability in a time series, is proposed specifically for the analysis of heart rate data. This set of indices, termed the pattern repeatability measure and the pattern repeatability ratio, is compared with approximate entropy and sample entropy. In our analysis, based on the method developed, it is observed that heart rate variability is significantly different for DM patients, particularly for patients with diabetic foot ulcer.

Keywords: Autonomic nervous system, diabetes mellitus, heart rate variability, pattern identification, sample entropy
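
As a point of reference for the comparison baselines, a minimal numpy implementation of sample entropy in its standard definition (not the authors' code), applied to a toy RR-interval series:

```python
import numpy as np

def _match_pairs(templates, r):
    """Number of template pairs whose Chebyshev distance is within r (no self-matches)."""
    count = 0
    for i in range(len(templates) - 1):
        dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
        count += np.sum(dist <= r)
    return count

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) with r = r_factor * std(x), standard definition."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)
    # Use the same number (n - m) of templates for lengths m and m + 1.
    tm = np.array([x[i:i + m] for i in range(n - m)])
    tm1 = np.array([x[i:i + m + 1] for i in range(n - m)])
    b, a = _match_pairs(tm, r), _match_pairs(tm1, r)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rr = np.random.default_rng(0).normal(0.8, 0.05, 300)   # toy RR intervals in seconds
print(sample_entropy(rr))
```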

334 Secure Secret Recovery by using Weighted Personal Entropy

Authors: Leau Y. B., Dinna Nina M. N., Habeeb S. A. H., Jetol B.

Abstract:

Authentication plays a vital role in many secure systems. Most of these systems require the user to log in with a secret password or passphrase before entering the system. This is to ensure that all valuable information is kept confidential, while also guaranteeing its integrity and availability. To achieve this goal, however, users are required to memorize high-entropy passwords or passphrases, and it is often difficult for users to remember such meaningless strings of data. This paper presents a new scheme that assigns a weight to each personal question given to the user for revealing the encrypted secrets or password. The focus of this scheme is to offer fault tolerance to users by allowing them to forget the answers to a subset of questions and still recover the secret and achieve successful authentication. A comparison of the level of security of the weight-based and weightless secret recovery schemes is also discussed. The paper concludes with a few areas that require further investigation.

Keywords: Secret Recovery, Personal Entropy, Cryptography, Secret Sharing and Key Management.

333 A Self-Organizing Map Method to Classify Auditory-Color Synesthesia from Frontal Lobe Brain Blood Volume

Authors: Takashi Kaburagi, Takamasa Komura, Yosuke Kurihara

Abstract:

Absolute pitch is the ability to identify a musical note without a reference tone. Training for absolute pitch often occurs in preschool education. It is necessary to clarify how well the trainee can make use of synesthesia in order to evaluate the effect of the training. To the best of our knowledge, there are no existing methods for objectively confirming whether the subject is using synesthesia. Therefore, in this study, we present a method to distinguish the use of color-auditory synesthesia from the separate use of color and audition during absolute pitch training. This method measures blood volume in the prefrontal cortex using functional Near-infrared spectroscopy (fNIRS) and assumes that the cognitive step has two parts, a non-linear step and a linear step. For the linear step, we assume a second order ordinary differential equation. For the non-linear part, it is extremely difficult, if not impossible, to create an inverse filter of such a complex system as the brain. Therefore, we apply a method based on a self-organizing map (SOM) and are guided by the available data. The presented method was tested using 15 subjects, and the estimation accuracy is reported.

Keywords: Absolute pitch, functional near-infrared spectroscopy, prefrontal cortex, synesthesia.
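
A minimal self-organizing map sketch in numpy (a generic SOM, not the authors' fNIRS pipeline): feature vectors, for example summary statistics of prefrontal blood-volume traces, are mapped onto a small 2-D grid whose best-matching units can then be inspected per subject or class. All data here are made up.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small 2-D self-organizing map on row-wise feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([[i, j] for i in range(h) for j in range(w)], dtype=float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5      # decaying neighbourhood radius
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            neighbourhood = np.exp(-dist2 / (2 * sigma ** 2))
            weights += lr * neighbourhood[:, None] * (x - weights)
    return weights

def bmu_index(weights, x):
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Toy data: 15 subjects x 6 features (e.g. mean/variance of fNIRS channels) -- made up.
data = np.random.default_rng(1).normal(size=(15, 6))
weights = train_som(data)
print([bmu_index(weights, x) for x in data])
```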

332 An Attribute-Centre Based Decision Tree Classification Algorithm

Authors: Gökhan Silahtaroğlu

Abstract:

Decision tree algorithms have a very important place among the classification models of data mining. In the literature, algorithms use the entropy concept or the Gini index to form the tree. The shape of the classes and their closeness to each other are some of the factors that affect the performance of the algorithm. In this paper we introduce a new decision tree algorithm which employs a data (attribute) folding method and the variation of the class variable over the branches to be created. A comparative performance analysis has been carried out between the proposed algorithm and C4.5.

Keywords: Classification, decision tree, split, pruning, entropy, gini.
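
For reference, the two split criteria mentioned, entropy and the Gini index, are computed from the class proportions at a node; a short sketch using their standard definitions (independent of the proposed attribute-folding algorithm):

```python
import numpy as np
from collections import Counter

def class_probs(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    return counts / counts.sum()

def entropy(labels):
    p = class_probs(labels)
    return -np.sum(p * np.log2(p))     # Shannon entropy of the node's class distribution

def gini(labels):
    p = class_probs(labels)
    return 1.0 - np.sum(p ** 2)        # Gini impurity of the node

node = ["A", "A", "A", "B", "B", "C"]
print(f"entropy = {entropy(node):.3f} bits, gini = {gini(node):.3f}")
```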

331 An Advanced Time-Frequency Domain Method for PD Extraction with Non-Intrusive Measurement

Authors: Guomin Luo, Daming Zhang, Yong Kwee Koh, Kim Teck Ng, Helmi Kurniawan, Weng Hoe Leong

Abstract:

Partial discharge (PD) detection is an important method for evaluating the insulation condition of metal-clad apparatus. Non-intrusive sensors, which are easy to install and do not interrupt operation, are preferred in on-site PD detection. However, they often lack accuracy due to interference in the PD signals. In this paper a novel PD extraction method that uses frequency analysis and entropy-based time-frequency (TF) analysis is introduced. The repetitive pulses from the converter are first removed via frequency analysis. Then, the relative entropy and relative peak frequency of each pulse (i.e. its time-indexed TF spectrum) are calculated and all pulses with similar parameters are grouped. According to the characteristics of the non-intrusive sensor and the frequency distribution of PDs, the pulses of PD and interference are separated. Finally the PD signal and interference are recovered via the inverse TF transform. The de-noised result of noisy PD data demonstrates that the combination of frequency and time-frequency techniques can discriminate PDs from interference with various frequency distributions.

Keywords: Entropy, Fourier analysis, non-intrusive measurement, time-frequency analysis, partial discharge
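
A sketch of the per-pulse grouping parameters under assumed definitions (the paper's exact formulas may differ): the relative entropy is taken here as the KL divergence between a pulse's normalized power spectrum and a reference spectrum, and the peak frequency is the bin of maximum power.

```python
import numpy as np

def pulse_features(pulse, reference_spectrum, fs):
    """Relative entropy and peak frequency of one pulse, from its power spectrum."""
    spectrum = np.abs(np.fft.rfft(pulse)) ** 2
    p = spectrum / spectrum.sum()                        # normalized pulse spectrum
    q = reference_spectrum / reference_spectrum.sum()    # normalized reference spectrum
    mask = (p > 0) & (q > 0)
    relative_entropy = np.sum(p[mask] * np.log(p[mask] / q[mask]))   # KL divergence
    freqs = np.fft.rfftfreq(len(pulse), d=1 / fs)
    return relative_entropy, freqs[np.argmax(spectrum)]

fs = 100e6                                               # 100 MHz sampling (illustrative)
t = np.arange(256) / fs
pd_like = np.exp(-t * 5e6) * np.sin(2 * np.pi * 10e6 * t)     # fast damped oscillation
reference = np.abs(np.fft.rfft(np.random.default_rng(0).normal(size=256))) ** 2
print(pulse_features(pd_like, reference, fs))
```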

330 Effect of Aging on the Second Law Efficiency, Exergy Destruction and Entropy Generation in the Skeletal Muscles during Exercise

Authors: Jale Çatak, Bayram Yılmaz, Mustafa Ozilgen

Abstract:

The second law muscle work efficiency is obtained by multiplying the metabolic and mechanical work efficiencies. Thermodynamic analyses are carried out with 19 sets of arm and leg exercise data obtained from healthy young people. These data are used to simulate the changes occurring during aging. The muscle work efficiency decreases with aging as a result of the reduction of metabolic energy generation in the mitochondria. The reduction of the mitochondrial energy efficiency makes it difficult to carry out the maintenance of the muscle tissue, which in turn causes a decline of the muscle work efficiency. When the muscle attempts to produce more work, entropy generation and exergy destruction increase. Increasing exergy destruction may be regarded as the result of the deterioration of the muscles. When the exergetic efficiency is 0.42, the exergy destruction becomes 1.49 times the work performance. This proportionality becomes 2.50 and 5.21 times when the exergetic efficiency decreases to 0.30 and 0.17, respectively.

Keywords: Aging mitochondria, entropy generation, exergy destruction, muscle work performance, second law efficiency.

329 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data: what products were often purchased together? Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data grow, the amount of time and resources required to mine them increases at an exponential rate. In this investigation a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER for frequent itemset mining can produce a speed-up of 3.1 times compared to the original algorithm while maintaining an accuracy of 71%.

Keywords: Rough-sets, Classification, Feature Selection, Entropy, Outliers, Frequent itemset mining.

328 Spectral Entropy Employment in Speech Enhancement based on Wavelet Packet

Authors: Talbi Mourad, Salhi Lotfi, Chérif Adnen

Abstract:

In this work, we are interested in developing a speech denoising tool using the discrete wavelet packet transform (DWPT). This speech denoising tool will be employed in applications of recognition, coding and synthesis. For noise reduction, instead of applying the classical thresholding technique, some wavelet packet nodes are set to zero and the others are thresholded. To estimate the non-stationary noise level, we employ the spectral entropy. A comparison of our proposed technique with classical denoising methods based on thresholding and spectral subtraction is made in order to evaluate our approach. The experimental implementation uses speech signals corrupted by two sorts of noise, white and Volvo noise. The results of listening tests show that our proposed technique is better than spectral subtraction. The results of SNR computation show the superiority of our technique when compared to the classical thresholding method using the modified hard thresholding function based on the u-law algorithm.

Keywords: Enhancement, spectral subtraction, SNR, discrete wavelet packet transform, spectral entropy, histogram.
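
A minimal sketch of spectral entropy as a frame-wise noise indicator, using a generic short-time Fourier definition (the paper integrates it with the wavelet packet nodes, which is not reproduced here): noise-like frames have a nearly flat spectrum and entropy close to 1, while voiced speech frames are peakier and score lower.

```python
import numpy as np

def spectral_entropy(frame):
    """Normalized Shannon entropy of a frame's power spectral distribution (0..1)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    p = spectrum / spectrum.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(len(spectrum))

def framewise_entropy(signal, frame_len=256, hop=128):
    return np.array([spectral_entropy(signal[i:i + frame_len])
                     for i in range(0, len(signal) - frame_len, hop)])

rng = np.random.default_rng(0)
noise = rng.normal(size=4000)                                     # noise-like segment
tone = np.sin(2 * np.pi * 200 * np.arange(4000) / 8000)           # voiced-like segment
print(framewise_entropy(noise).mean(), framewise_entropy(tone).mean())
```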

327 Angle Analyzer of an Encoder using the LabVIEW

Authors: Hyun-Min Kim, Yun-Seok Lim, Hyeok-Jin Yun, Jang-Mok Kim, Hee-je Kim

Abstract:

As industries demand higher speed and resolution from developments in robotics and precise control systems, the concept of control feedback is becoming more important. Across a range of industrial developments, this concept is largely responsible for the high reliability of a device. We explain an efficient method for analyzing rotary encoders, such as incremental and absolute encoders, using the LabVIEW program.

Keywords: LabVIEW, PFI Function, Angle analyzer, Incremental encoder, Absolute encoder

326 Influence of Mass Flow Rate on Forced Convective Heat Transfer through a Nanofluid Filled Direct Absorption Solar Collector

Authors: Salma Parvin, M. A. Alim

Abstract:

The convective and radiative heat transfer performance and entropy generation of forced convection through a direct absorption solar collector (DASC) are investigated numerically. Four different fluids, including Cu-water nanofluid, Al2O3-water nanofluid, TiO2-water nanofluid, and pure water, are used as the working fluid. Entropy production has been taken into account in addition to the collector efficiency and heat transfer enhancement. The penalty finite element method with Galerkin's weighted residual technique is used to solve the governing non-linear partial differential equations. Numerical simulations are performed for variation of the mass flow rate. The outcomes are presented in the form of isotherms, average output temperature, average Nusselt number, collector efficiency, average entropy generation, and Bejan number. The results show that the heat transfer rate and collector efficiency increase significantly as the mass flow rate m is raised up to a certain range.

Keywords: DASC, forced convection, mass flow rate, nanofluid.

325 DNA Computing for an Absolute 1-Center Problem: An Evolutionary Approach

Authors: Zuwairie Ibrahim, Yusei Tsuboi, Osamu Ono, Marzuki Khalid

Abstract:

Deoxyribonucleic acid, or DNA, computing has emerged as an interdisciplinary field that draws together chemistry, molecular biology, computer science and mathematics. In this paper, the possibility of DNA-based computing to solve an absolute 1-center problem by molecular manipulations is presented. This is truly the first attempt to solve such a problem by a DNA-based computing approach. Since part of the procedure involves shortest-path computation, research works on DNA computing for the shortest-path Traveling Salesman Problem (TSP) are reviewed. These approaches are studied and only the appropriate one is adapted in designing the computation procedures. The DNA-based computation is designed in such a way that every path is encoded by oligonucleotides and the path's length is directly proportional to the length of the oligonucleotides. Using these properties, gel electrophoresis is performed in order to separate the respective DNA molecules according to their length. One expectation arising from this paper is that it is possible to verify an instance of the absolute 1-center problem using DNA computing by laboratory experiments.

Keywords: DNA computing, operation research, 1-center problem.

324 Finger Vein Recognition using PCA-based Methods

Authors: Sepehr Damavandinejadmonfared, Ali Khalili Mobarakeh, Mohsen Pashna, Jiangping Gou, Sayedmehran Mirsafaie Rizi, Saba Nazari, Shadi Mahmoodi Khaniabadi, Mohamad Ali Bagheri

Abstract:

In this paper a novel algorithm is proposed to improve the accuracy of finger vein recognition. The performances of Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), and Kernel Entropy Component Analysis (KECA) within this algorithm are validated and compared with each other in order to determine which one is the most appropriate for finger vein recognition.

Keywords: Biometrics, finger vein recognition, Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), Kernel Entropy Component Analysis (KECA).
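
A minimal scikit-learn sketch of the kind of comparison described, using PCA and KPCA only (KECA has no scikit-learn implementation) on random stand-in data, so the printed accuracies are meaningless; it only shows how such a pipeline can be wired up.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Stand-in for flattened finger-vein images: X (subjects x pixels), y identity labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 400))
y = np.repeat(np.arange(12), 10)

for name, reducer in [("PCA", PCA(n_components=20)),
                      ("KPCA", KernelPCA(n_components=20, kernel="rbf", gamma=1e-3))]:
    pipeline = make_pipeline(reducer, KNeighborsClassifier(n_neighbors=1))
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(name, scores.mean())
```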

323 Energy Efficiency Index Applied to Reactive Systems

Authors: P. Góes, J. Manzi

Abstract:

This paper focuses on the development of an energy efficiency index to be applied to reactive systems, based on the First and Second Laws of Thermodynamics and giving particular consideration to the concept of maximum entropy. Among the requirements of such an energy efficiency index, practical feasibility is essential. To illustrate the performance of the proposed index, it was used as the decisive evaluation factor in the optimization of an industrial reactor. The results allow the conclusion that the energy efficiency index applied to the reactive system is consistent, because it extracts the information expected of an efficient indicator, and that it is useful as an analytical tool besides being feasible from a practical standpoint. Furthermore, it has proved to be much simpler to use than tools based on traditional methodologies.

Keywords: Energy efficiency, maximum entropy, reactive systems.

322 Improving Classification Accuracy with Discretization on Datasets Including Continuous Valued Features

Authors: Mehmet Hacibeyoglu, Ahmet Arslan, Sirzat Kahramanli

Abstract:

This study analyzes the effect of discretization on the classification of datasets containing continuous valued features. Six datasets from UCI that contain continuous valued features are discretized with an entropy-based discretization method. The performance on the dataset with the original features and on the dataset with discretized features is compared using the k-nearest neighbors, Naive Bayes, C4.5 and CN2 data mining classification algorithms. As a result, the classification accuracies of the six datasets are improved on average by 1.71% to 12.31%.

Keywords: Data mining classification algorithms, entropy-based discretization method
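
A minimal sketch of one step of entropy-based discretization (the core idea behind Fayyad-Irani-style methods, shown here without the MDL stopping criterion, and possibly differing from the exact method used): choose the cut point of a continuous feature that minimizes the weighted class entropy of the two resulting intervals.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut(values, labels):
    """Cut point minimizing the weighted class entropy of the two induced intervals."""
    order = np.argsort(values)
    v, y = np.asarray(values)[order], np.asarray(labels)[order]
    best_point, best_entropy = None, np.inf
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        cut = (v[i] + v[i - 1]) / 2
        left, right = y[:i], y[i:]
        h = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if h < best_entropy:
            best_point, best_entropy = cut, h
    return best_point, best_entropy

values = [4.8, 5.1, 5.7, 6.0, 6.3, 6.9, 7.2]   # one continuous feature
labels = ["s", "s", "s", "v", "v", "v", "v"]   # class labels
print(best_cut(values, labels))                 # cut near 5.85 with zero class entropy
```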

321 Medical Image Registration by Minimizing Divergence Measure Based on Tsallis Entropy

Authors: Shaoyan Sun, Liwei Zhang, Chonghui Guo

Abstract:

As the use of registration packages spreads, the number of the aligned image pairs in image databases (either by manual or automatic methods) increases dramatically. These image pairs can serve as a set of training data. Correspondingly, the images that are to be registered serve as testing data. In this paper, a novel medical image registration method is proposed which is based on the a priori knowledge of the expected joint intensity distribution estimated from pre-aligned training images. The goal of the registration is to find the optimal transformation such that the distance between the observed joint intensity distribution obtained from the testing image pair and the expected joint intensity distribution obtained from the corresponding training image pair is minimized. The distance is measured using the divergence measure based on Tsallis entropy. Experimental results show that, compared with the widely-used Shannon mutual information as well as Tsallis mutual information, the proposed method is computationally more efficient without sacrificing registration accuracy.

Keywords: Multimodality images, image registration, Shannon entropy, Tsallis entropy, mutual information, Powell optimization.
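
For reference, the Tsallis entropy of order q and one standard Tsallis-type divergence between an observed and an expected joint intensity distribution; a minimal sketch only (the optimization over spatial transformations and the paper's exact divergence form are not reproduced):

```python
import numpy as np

def tsallis_entropy(p, q=0.8):
    """Tsallis entropy S_q(p) = (1 - sum p^q) / (q - 1); tends to Shannon as q -> 1."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def tsallis_divergence(p, r, q=0.8):
    """Tsallis relative entropy D_q(p || r) = (sum p^q r^(1-q) - 1) / (q - 1)."""
    mask = (p > 0) & (r > 0)
    return (np.sum(p[mask] ** q * r[mask] ** (1.0 - q)) - 1.0) / (q - 1.0)

# Made-up 8x8 joint intensity histograms: observed (testing pair) vs. expected (training pair).
rng = np.random.default_rng(0)
observed = rng.random((8, 8)); observed /= observed.sum()
expected = rng.random((8, 8)); expected /= expected.sum()
print(tsallis_entropy(observed.ravel()),
      tsallis_divergence(observed.ravel(), expected.ravel()))
```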

320 APPLE: Providing Absolute and Proportional Throughput Guarantees in Wireless LANs

Authors: Zhijie Ma, Qinglin Zhao, Hongning Dai, Huan Zhang

Abstract:

This paper proposes an APPLE scheme that aims at providing absolute and proportional throughput guarantees while simultaneously maximizing system throughput for wireless LANs with homogeneous and heterogeneous traffic. We formulate our objectives as an optimization problem, present its exact and approximate solutions, and prove the existence and uniqueness of the approximate solution. Simulations validate that the APPLE scheme is accurate and that the approximate solution achieves the desired objectives well.

Keywords: IEEE 802.11e, throughput guarantee, priority.

319 1/Sigma Term Weighting Scheme for Sentiment Analysis

Authors: Hanan Alshaher, Jinsheng Xu

Abstract:

Large amounts of data on the web can provide valuable information. For example, product reviews help business owners measure customer satisfaction. Sentiment analysis classifies texts into two polarities: positive and negative. This paper examines movie reviews and tweets using a new term weighting scheme, called one-over-sigma (1/sigma), on benchmark datasets for sentiment classification. The proposed method aims to improve the performance of sentiment classification. The results show that 1/sigma is more accurate than the popular term weighting schemes. In order to verify if the entropy reflects the discriminating power of terms, we report a comparison of entropy values for different term weighting schemes.

Keywords: Sentiment analysis, term weighting scheme, 1/sigma.

318 Phenomenological and Theoretical Analysis of Relativistic Temperature Transformation and Relativistic Entropy

Authors: Marko Popovic

Abstract:

There are three possible effects of the Special Theory of Relativity (STR) on a thermodynamic system. Planck and Einstein looked upon this process as isobaric; Ott, on the other hand, saw it as an adiabatic process. However, plenty of logical reasons show that the process is isothermal. Our phenomenological consideration demonstrates that the temperature is invariant under the Lorentz transformation. In that case the process is isothermal, so volume and pressure are Lorentz covariant. If the process is isothermal, Boyle's law is Lorentz invariant. Also, the equilibrium constant, Gibbs energy, activation energy, enthalpy, entropy and extent of the reaction become Lorentz invariant.

Keywords: STR, relativistic temperature transformation, Boyle's law, equilibrium constant, Gibbs energy.

317 Conjugate Mixed Convection Heat Transfer and Entropy Generation of Cu-Water Nanofluid in an Enclosure with Thick Wavy Bottom Wall

Authors: Sanjib Kr Pal, S. Bhattacharyya

Abstract:

Mixed convection of a Cu-water nanofluid in an enclosure with a thick wavy bottom wall has been investigated numerically. A coordinate transformation method is used to transform the computational domain into an orthogonal coordinate system. The governing equations in the computational domain are solved through a pressure-correction-based iterative algorithm. The fluid flow and heat transfer characteristics are analyzed for a wide range of Richardson number (0.1 ≤ Ri ≤ 5), nanoparticle volume concentration (0.0 ≤ ϕ ≤ 0.2), amplitude (0.0 ≤ α ≤ 0.1) of the wavy thick bottom wall and wave number (ω) at a fixed Reynolds number. The obtained results show that the heat transfer rate increases remarkably when nanoparticles are added. The heat transfer rate depends on the wavy wall amplitude and wave number and decreases with increasing Richardson number for fixed amplitude and wave number. The Bejan number and the entropy generation are determined to analyze the thermodynamic optimization of the mixed convection.

Keywords: Entropy generation, mixed convection, conjugate heat transfer, numerical, nanofluid, wall waviness.

316 Highly Accurate Target Motion Compensation Using Entropy Function Minimization

Authors: Amin Aghatabar Roodbary, Mohammad Hassan Bastani

Abstract:

One of the defects of stepped frequency radar systems is their sensitivity to target motion. In such systems, target motion causes range cell shift, false peaks, Signal to Noise Ratio (SNR) reduction and range profile spreading, because the power spectrum of each range cell interferes with adjacent range cells; this induces distortion in the High Resolution Range Profile (HRRP) and disrupts the target recognition process. Thus, compensation of Target Motion Parameter (TMP) effects should be employed. In this paper, a method for estimating the TMPs (velocity and acceleration), and consequently eliminating or suppressing their unwanted effects on the HRRP, based on entropy minimization, is proposed. The method is carried out in two major steps: in the first step, a discrete search is performed over the whole acceleration-velocity lattice within a specific interval, seeking a coarse minimum point of the entropy function. In the second step, a 1-D search over velocity is done in the vicinity of that minimum along several constant-acceleration lines, in order to enhance the accuracy of the minimum point found in the first step. The provided simulation results demonstrate the effectiveness of the proposed method.

Keywords: ATR, HRRP, motion compensation, SFW, TMP.
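
A sketch of the two-step search structure around an entropy cost (the stepped-frequency phase compensation itself depends on waveform parameters and is abstracted into a hypothetical make_profile callback): the entropy of a normalized range profile is low when the profile is well focused, so the TMP estimate is the grid point that minimizes it.

```python
import numpy as np

def profile_entropy(hrrp):
    """Entropy of a range profile; focused profiles (few strong peaks) give low values."""
    power = np.abs(hrrp) ** 2
    p = power / power.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def coarse_then_fine_search(make_profile, v_grid, a_grid):
    """Step 1: coarse 2-D grid search; step 2: finer 1-D velocity search near the minimum.
    make_profile(v, a) is a hypothetical callback that motion-compensates the echoes
    with candidate velocity v and acceleration a and returns the resulting HRRP."""
    costs = np.array([[profile_entropy(make_profile(v, a)) for v in v_grid] for a in a_grid])
    ia, iv = np.unravel_index(np.argmin(costs), costs.shape)
    a_hat = a_grid[ia]
    fine_v = np.linspace(v_grid[max(iv - 1, 0)], v_grid[min(iv + 1, len(v_grid) - 1)], 101)
    v_hat = fine_v[np.argmin([profile_entropy(make_profile(v, a_hat)) for v in fine_v])]
    return v_hat, a_hat

# Smoke test with a dummy callback whose profile is sharpest at (v, a) = (12, 3).
demo = lambda v, a: np.exp(-((np.arange(64) - 32) ** 2) / 50.0) + 0.05 * (abs(v - 12) + abs(a - 3))
print(coarse_then_fine_search(demo, np.linspace(0, 25, 26), np.linspace(0, 6, 7)))
```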

315 Recent Trends in Nonlinear Methods of HRV Analysis: A Review

Authors: Ramesh K. Sunkaria

Abstract:

Linear methods of heart rate variability analysis, both non-parametric (e.g. fast Fourier transform analysis) and parametric (e.g. autoregressive modeling), have become an established non-invasive tool for assessing cardiac health, but their sensitivity and specificity were found to be lower than expected, with a positive predictive value <30%. This may be due to treating the RR-interval series as stationary and re-sampling it prior to analysis, whereas it is actually not stationary. This paper reviews the non-linear methods of HRV analysis, such as correlation dimension, largest Lyapunov exponent, power law slope, fractal analysis, detrended fluctuation analysis and complexity measures, which are currently becoming popular as they use the actual RR-interval series. These methods are expected to provide highly accurate cardiac health prognosis.

Keywords: chaos, nonlinear dynamics, sample entropy, approximate entropy, detrended fluctuation analysis.

314 Data-driven Multiscale Tsallis Complexity: Application to EEG Analysis

Authors: Young-Seok Choi

Abstract:

This work proposes data-driven, multiscale-based quantitative measures to reveal the underlying complexity of the electroencephalogram (EEG), applied to a rodent model of hypoxic-ischemic brain injury and recovery. Motivated by the fact that real EEG recordings are nonlinear and non-stationary over different frequencies or scales, a more suitable approach than the conventional single-scale tools is needed for analyzing EEG data. Here, we present a new framework of complexity measures considering changing dynamics over multiple oscillatory scales. The proposed multiscale complexity is obtained by calculating entropies of the probability distributions of the intrinsic mode functions extracted by the empirical mode decomposition (EMD) of the EEG. To quantify EEG recordings of a rat model of hypoxic-ischemic brain injury following cardiac arrest, the multiscale version of the Tsallis entropy is examined. To validate the proposed complexity measure, actual EEG recordings from rats (n=9) experiencing 7 min of cardiac arrest followed by resuscitation were analyzed. Experimental results demonstrate that the use of the multiscale Tsallis entropy leads to better discrimination of injury levels and improved correlations with the neurological deficit evaluation 72 hours after cardiac arrest, thus suggesting an effective metric as a prognostic tool.

Keywords: Electroencephalogram (EEG), multiscale complexity, empirical mode decomposition, Tsallis entropy.

313 Video Coding Algorithm for Video Sequences with Abrupt Luminance Change

Authors: Sang Hyun Kim

Abstract:

In this paper, a fast motion compensation algorithm is proposed that improves coding efficiency for video sequences with brightness variations. We also propose a cross entropy measure between histograms of two frames to detect brightness variations. The framewise brightness variation parameters, a multiplier and an offset field for image intensity, are estimated and compensated. Simulation results show that the proposed method yields a higher peak signal to noise ratio (PSNR) compared with the conventional method, with a greatly reduced computational load, when the video scene contains illumination changes.

Keywords: Motion estimation, Fast motion compensation, Brightness variation compensation, Brightness change detection, Cross entropy.
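
A minimal sketch of the histogram cross-entropy idea for detecting abrupt luminance changes (generic definition; the histogram settings and threshold are assumptions, not the paper's values):

```python
import numpy as np

def histogram(frame, bins=256):
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return (h + 1e-12) / (h.sum() + bins * 1e-12)      # smoothed, normalized histogram

def cross_entropy(p, q):
    return -np.sum(p * np.log(q))

def brightness_change(prev_frame, curr_frame, threshold=0.2):
    """Flag an abrupt luminance change when the symmetric cross-entropy excess is large."""
    p, q = histogram(prev_frame), histogram(curr_frame)
    # Excess over the self-entropies; equals 0 when the two histograms coincide.
    d = (cross_entropy(p, q) + cross_entropy(q, p)
         - cross_entropy(p, p) - cross_entropy(q, q))
    return d > threshold, d

rng = np.random.default_rng(0)
frame1 = rng.integers(0, 200, size=(64, 64))
frame2 = np.clip(frame1 + 40, 0, 255)                  # same scene with a 40-level offset
print(brightness_change(frame1, frame2))
```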
