Search results for: Kolmogorov entropy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 408

258 Conjugate Mixed Convection Heat Transfer and Entropy Generation of Cu-Water Nanofluid in an Enclosure with Thick Wavy Bottom Wall

Authors: Sanjib Kr Pal, S. Bhattacharyya

Abstract:

Mixed convection of a Cu-water nanofluid in an enclosure with a thick wavy bottom wall has been investigated numerically. A co-ordinate transformation method is used to transform the computational domain into an orthogonal co-ordinate system. The governing equations in the computational domain are solved through a pressure-correction based iterative algorithm. The fluid flow and heat transfer characteristics are analyzed for a wide range of Richardson number (0.1 ≤ Ri ≤ 5), nanoparticle volume concentration (0.0 ≤ ϕ ≤ 0.2), amplitude (0.0 ≤ α ≤ 0.1) of the wavy thick bottom wall, and wave number (ω) at a fixed Reynolds number. The results show that the heat transfer rate increases remarkably when nanoparticles are added. The heat transfer rate depends on the wavy wall amplitude and wave number, and it decreases with increasing Richardson number for fixed amplitude and wave number. The Bejan number and the entropy generation are determined to analyze the thermodynamic optimization of the mixed convection.

Keywords: conjugate heat transfer, mixed convection, nanofluid, wall waviness

Procedia PDF Downloads 234
257 Thermodynamic Approach of Lanthanide-Iron Double Oxides Formation

Authors: Vera Varazashvili, Murman Tsarakhov, Tamar Mirianashvili, Teimuraz Pavlenishvili, Tengiz Machaladze, Mzia Khundadze

Abstract:

The standard Gibbs energy of formation ΔG_for(298.15) of lanthanide-iron double oxides with the garnet-type crystal structure R₃Fe₅O₁₂ (RIG, where R is a rare-earth ion) from the constituent oxides is evaluated. The calculation is based on the standard entropies S°(298.15) and standard enthalpies of formation ΔH°(298.15) of the compounds involved in the garnet synthesis. The Gibbs energy of formation is presented as a function of temperature, ΔG_for(T), over the range 300-1600 K. The necessary starting thermodynamic data were obtained from a calorimetric study of the heat capacity as a function of temperature and by using a semi-empirical method for calculating ΔH°(298.15) of formation. The thermodynamic functions at the standard temperature (enthalpy, entropy, and Gibbs energy) are recommended as reference data for technological evaluations. Across the isostructural series of rare-earth iron garnets, the correlation between the thermodynamic properties and the characteristics of the lanthanide ions is elucidated.
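For reference, the relation used to extend the 298.15 K data to higher temperatures has the standard form below; the heat-capacity integrals are shown generically, and the specific Cp(T) fits are not reproduced from the paper.

```latex
\Delta G_{\mathrm{for}}(T) = \Delta H_{\mathrm{for}}(T) - T\,\Delta S(T), \qquad
\Delta H_{\mathrm{for}}(T) = \Delta H^{\circ}_{298.15} + \int_{298.15}^{T} \Delta C_p\, dT, \qquad
\Delta S(T) = \Delta S^{\circ}_{298.15} + \int_{298.15}^{T} \frac{\Delta C_p}{T}\, dT
```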

Keywords: calorimetry, entropy, enthalpy, heat capacity, Gibbs energy of formation, rare earth iron garnets

Procedia PDF Downloads 352
256 Content-Based Image Retrieval Using HSV Color Space Features

Authors: Hamed Qazanfari, Hamid Hassanpour, Kazem Qazanfari

Abstract:

In this paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches an image database using the visual content of a query image to retrieve similar images. With the aim of simulating the human visual system's sensitivity to edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to human visual perception, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in the HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria, so that the final feature set represents the image content most efficiently. The proposed method has been evaluated on three standard databases: Corel-5k, Corel-10k, and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods.
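As a rough illustration of this pipeline, the sketch below builds a normalized HSV color histogram per image and then keeps only features that are informative (high entropy across the dataset) and not redundant (low correlation with features already kept). The bin counts, thresholds, and the RGB-to-HSV conversion via matplotlib are assumptions of the sketch, not the authors' implementation.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hsv_histogram(rgb_image, bins=(8, 4, 4)):
    """Concatenated, normalized H/S/V histograms of an RGB image with values in [0, 1]."""
    hsv = rgb_to_hsv(rgb_image)
    feats = []
    for channel, n_bins in enumerate(bins):
        counts, _ = np.histogram(hsv[..., channel], bins=n_bins, range=(0.0, 1.0))
        feats.append(counts / counts.sum())
    return np.concatenate(feats)

def select_features(feature_matrix, entropy_floor=0.1, corr_ceiling=0.95):
    """Keep informative (high-entropy) features, then drop near-duplicates (high correlation).

    feature_matrix has shape (n_images, n_features)."""
    X = np.asarray(feature_matrix, dtype=float)
    keep = []
    for j in range(X.shape[1]):
        # Shannon entropy of feature j, estimated from its histogram over the dataset.
        p, _ = np.histogram(X[:, j], bins=16)
        p = p / p.sum()
        p = p[p > 0]
        if -(p * np.log2(p)).sum() >= entropy_floor:
            keep.append(j)
    selected = []
    for j in keep:
        # Greedily reject features highly correlated with an already-selected feature.
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_ceiling for k in selected):
            selected.append(j)
    return selected

# Usage sketch with random images standing in for the database:
# images = [np.random.rand(64, 64, 3) for _ in range(20)]
# X = np.vstack([hsv_histogram(im) for im in images]); idx = select_features(X)
```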

Keywords: content-based image retrieval, color difference histogram, efficient features selection, entropy, correlation

Procedia PDF Downloads 221
255 Applied Complement of Probability and Information Entropy for Prediction in Student Learning

Authors: Kennedy Efosa Ehimwenma, Sujatha Krishnamoorthy, Safiya Al-Sharji

Abstract:

The probability of an event lies in the interval [0, 1], with values determined by the number of outcomes of events in a sample space S. If an event A can never occur, its probability Pr(A) is 0; if an event B is certain to occur, its probability Pr(B) is 1. Both cases are deterministic: the non-occurrence of A and the occurrence of B are certainties. Furthermore, the sum of probabilities Pr(E₁) + Pr(E₂) + … + Pr(Eₙ) of a finite set of mutually exclusive and exhaustive events in a given sample space S equals 1, and the difference between two probabilities that are each certain is 0. This paper first discusses Bayes' rule, the complement of probability, and the difference of probabilities for occurrences of learning events before applying them to the prediction of learning objects in student learning. Given that the probabilities sum to 1, the paper proposes that the difference between argmax Pr(S) and the probability of student performance quantifies the weight of learning objects for students. Using a skill-set dataset, the computational procedure demonstrates i) the probability of skill-set events that have occurred and would lead to higher-level learning; ii) the probability of the events that have not occurred and require subject-matter relearning; iii) the accuracy of the decision tree in the prediction of student performance into class labels; and iv) the information entropy of the skill-set data and its implication for student cognitive performance and the recommendation of learning.
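A minimal numerical sketch of the two quantities the abstract leans on, the complement of a probability and the information entropy of skill-set outcomes, is shown below; the skill names and probabilities are invented for illustration, and the weighting rule is one reading of the difference described above, not the paper's exact formula.

```python
import numpy as np

def complement(p):
    """Complement rule: probability that the event does not occur."""
    return 1.0 - p

def shannon_entropy(probs):
    """Information entropy H = -sum(p * log2(p)) of a discrete distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical pass probabilities for three prerequisite skills of one student.
pr_pass = {"prerequisite_A": 0.9, "prerequisite_B": 0.6, "prerequisite_C": 0.3}

# One reading of the proposed weighting: the gap between certain mastery (probability 1)
# and the predicted performance, i.e. the complement of the pass probability.
weights = {skill: complement(p) for skill, p in pr_pass.items()}
print("relearning weights:", weights)   # larger weight -> stronger relearning recommendation

# Entropy of each skill's pass/fail outcome quantifies how uncertain the prediction is.
for skill, p in pr_pass.items():
    print(skill, "entropy (bits):", round(shannon_entropy([p, 1.0 - p]), 3))
```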

Keywords: complement of probability, Bayes’ rule, prediction, pre-assessments, computational education, information theory

Procedia PDF Downloads 127
254 Gait Biometric for Person Re-Identification

Authors: Lavanya Srinivasan

Abstract:

Biometric identification recognizes unique features of a person, such as fingerprints, iris, ear, and voice, and typically requires the subject's permission and physical contact. Gait biometrics identify the unique gait of a person by extracting moving features, and their main advantage is that the gait can be identified at a distance, without any physical contact. In this work, gait biometrics are used for person re-identification. A person walking naturally is compared with the same person walking with a bag, coat, and case, recorded using longwave infrared, shortwave infrared, medium-wave infrared, and visible cameras. The videos are recorded in rural and urban environments. Pre-processing includes human detection using YOLO, background subtraction, silhouette extraction, and synthesis of the Gait Entropy Image by averaging the silhouettes. The moving features are extracted from the Gait Entropy Image, dimensionality-reduced by principal component analysis, and recognised using different classifiers. The comparative results with the different classifiers show that linear discriminant analysis outperforms the others, with 95.8% for visible in the rural dataset and 94.8% for longwave infrared in the urban dataset.
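The Gait Entropy Image step above can be sketched as follows: averaging the binary silhouettes over a sequence gives each pixel a foreground probability, and the image is the per-pixel Shannon entropy of that probability. The array sizes, the random stand-in data, and the plain-SVD PCA are assumptions of the sketch.

```python
import numpy as np

def gait_entropy_image(silhouettes):
    """silhouettes: array of shape (n_frames, H, W) with binary values {0, 1}.

    Averaging the silhouettes gives, per pixel, the probability p of being foreground;
    the Gait Entropy Image is the per-pixel binary entropy of p."""
    p = np.clip(np.mean(silhouettes, axis=0), 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def pca_reduce(features, n_components=8):
    """Project feature vectors (n_samples, n_features) onto the top principal components."""
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

# Usage sketch with random data standing in for extracted silhouette sequences.
sequences = [(np.random.rand(30, 64, 44) > 0.5).astype(float) for _ in range(10)]
geni_vectors = np.vstack([gait_entropy_image(s).ravel() for s in sequences])
reduced = pca_reduce(geni_vectors, n_components=8)   # (10, 8) feature matrix for a classifier
```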

Keywords: biometric, gait, silhouettes, YOLO

Procedia PDF Downloads 149
253 A Study on the Assessment of Prosthetic Infection after Total Knee Replacement Surgery

Authors: Chun-Lang Chang, Chun-Kai Liu

Abstract:

In this study, patients who underwent total knee replacement surgery, drawn from the 2010 National Health Insurance database, were adopted as the study participants. The important factors were screened and selected through literature collection and interviews with physicians. Through the Cross Entropy Method (CE), Genetic Algorithm Logistic Regression (GALR), and Particle Swarm Optimization (PSO), the weights of the factors were obtained. In addition, the weights from the respective algorithms, coupled with Excel VBA, were used to construct a Case Based Reasoning (CBR) system. Statistical tests show that GALR and PSO produced no significant differences, and the accuracy of both models was above 97%. Moreover, the area under the ROC curve for these two models also exceeded 0.87. This study can serve as a reference to assist medical staff in the clinical assessment of infections, in order to effectively enhance medical service quality and efficiency, avoid unnecessary medical waste, and substantially contribute to resource allocation in medical institutions.

Keywords: Case Based Reasoning, Cross Entropy Method, Genetic Algorithm Logistic Regression, Particle Swarm Optimization, Total Knee Replacement Surgery

Procedia PDF Downloads 296
252 Nonstationary Increments and Causality in the Aluminum Market

Authors: Andrew Clark

Abstract:

McCauley, Bassler, and Gunaratne show that integrated I(d) processes, as used in economics and finance, do not necessarily produce stationary increments, which are required to determine causality in both the short term and the long term. This paper follows their lead and shows that I(d) aluminum cash and futures log prices at daily and weekly intervals do not have stationary increments, which means prior causality studies using I(d) processes need to be re-examined. Wavelets based on undifferenced cash and futures log prices do have stationary increments and are used along with transfer entropy (rather than cointegration) to measure causality. The wavelets exhibit causality at most daily time scales out to 1 year, and at weekly time scales out to 1 year and beyond. To determine stationarity, localized stationary wavelets (LSW) are used; they have the benefit, compared to other tests for stationarity, of using multiple hypothesis tests. As informational flows exist between cash and futures at daily and weekly intervals, the aluminum market is efficient. Therefore, producers and consumers of aluminum who hedge need not be greatly concerned about underestimating hedge ratios. Questions about arbitrage, given this efficiency, are addressed in the paper.
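A minimal sketch of a histogram-based transfer entropy estimate between two discretized series (standing in for cash and futures log-price increments) is shown below; the equal-frequency binning, single-lag conditioning, and synthetic data are simplifying assumptions, not the paper's estimator.

```python
import numpy as np

def discretize(x, n_bins=4):
    """Map a real-valued series to integer symbols using equal-frequency bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def transfer_entropy(y, x, n_bins=4):
    """Transfer entropy TE(Y -> X) in bits, with one lag of conditioning."""
    xs, ys = discretize(np.asarray(x), n_bins), discretize(np.asarray(y), n_bins)
    xf, xp, yp = xs[1:], xs[:-1], ys[:-1]          # future of X, past of X, past of Y
    joint = np.zeros((n_bins, n_bins, n_bins))
    for a, b, c in zip(xf, xp, yp):
        joint[a, b, c] += 1
    joint /= joint.sum()
    p_xp = joint.sum(axis=(0, 2))                  # p(x_t)
    p_xf_xp = joint.sum(axis=2)                    # p(x_{t+1}, x_t)
    p_xp_yp = joint.sum(axis=0)                    # p(x_t, y_t)
    te = 0.0
    for a in range(n_bins):
        for b in range(n_bins):
            for c in range(n_bins):
                if joint[a, b, c] > 0:
                    te += joint[a, b, c] * np.log2(
                        joint[a, b, c] * p_xp[b] / (p_xp_yp[b, c] * p_xf_xp[a, b]))
    return te

# Usage sketch: does the (synthetic) futures series help predict the cash series?
rng = np.random.default_rng(0)
futures = rng.standard_normal(2000)
cash = 0.6 * np.roll(futures, 1) + 0.4 * rng.standard_normal(2000)
print(transfer_entropy(futures, cash), transfer_entropy(cash, futures))
```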

Keywords: transfer entropy, nonstationary increments, wavelets, localized stationary wavelets

Procedia PDF Downloads 174
251 Normalized Compression Distance Based Scene Alteration Analysis of a Video

Authors: Lakshay Kharbanda, Aabhas Chauhan

Abstract:

In this paper, an application of the Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos is presented. Several research groups have been developing methods to perform image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes where significant aberrations between the frames of a video occur are identified by thresholding the NCD value, computed with two compressors (LZMA and BZIP2), and by defining scene alterations using Pixel Difference Percentage metrics.
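A minimal sketch of the NCD computation with the two compressors named above is given below, using Python's standard lzma and bz2 modules as stand-ins for the LZMA and BZIP2 implementations; the byte strings standing in for serialized frames and the thresholding note are assumptions.

```python
import bz2
import lzma

def ncd(x: bytes, y: bytes, compress=lzma.compress) -> float:
    """Normalized Compression Distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = len(compress(x)), len(compress(y))
    cxy = len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Usage sketch with two byte strings standing in for serialized video frames.
frame_a = bytes(range(256)) * 40
frame_b = frame_a[::-1]
for name, comp in (("LZMA", lzma.compress), ("BZIP2", bz2.compress)):
    print(name, round(ncd(frame_a, frame_b, comp), 3))
# A scene alteration would be flagged when the NCD between consecutive frames exceeds a chosen threshold.
```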

Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error

Procedia PDF Downloads 304
250 Structural, Magnetic and Magnetocaloric Properties of Iron-Doped Nd₀.₆Sr₀.₄MnO₃ Perovskite

Authors: Ismail Al-Yahmadi, Abbasher Gismelseed, Fatma Al-Mammari, Ahmed Al-Rawas, Ali Yousif, Imaddin Al-Omari, Hisham Widatallah, Mohamed Elzain

Abstract:

The influence of Fe doping on the structural, magnetic, and magnetocaloric properties of Nd₀.₆Sr₀.₄FeₓMn₁₋ₓO₃ (0 ≤ x ≤ 0.5) was investigated. The samples were synthesized by the auto-combustion sol-gel method. The phase purity, crystallinity, and structural properties of all prepared samples were examined by X-ray diffraction. XRD refinement indicates that the samples crystallize in the orthorhombic single phase with the Pnma space group. Temperature-dependent magnetization measurements under an applied magnetic field of 0.02 T reveal that the samples with x = 0.0, 0.1, 0.2, and 0.3 exhibit a paramagnetic (PM) to ferromagnetic (FM) transition with decreasing temperature. The Curie temperature decreases with increasing Fe content, from 256 K for x = 0.0 to 80 K for x = 0.3, due to the increasing antiferromagnetic superexchange (SE) coupling. Moreover, magnetization as a function of applied magnetic field (M-H) was measured at 2 K and 300 K; the results of these measurements confirm the temperature-dependent magnetization data. The magnetic entropy change |ΔS_M| was evaluated using Maxwell's relation. The maximum values of the magnetic entropy change |ΔS_M| for x = 0.0, 0.1, 0.2, and 0.3 are found to be 15.35, 5.13, 3.36, and 1.08 J/(kg·K) for an applied magnetic field of 9 T. Our results on the magnetocaloric properties suggest that the parent sample Nd₀.₆Sr₀.₄MnO₃ could be a good refrigerant for low-temperature magnetic refrigeration.
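The magnetic entropy change is conventionally evaluated from the Maxwell relation below, computed numerically from isothermal magnetization curves measured at neighboring temperatures; this is the standard form of the relation, not a detail reproduced from the paper.

```latex
\Delta S_M(T, H_{\max}) \;=\; \mu_0 \int_0^{H_{\max}} \left( \frac{\partial M}{\partial T} \right)_{H} dH
\;\approx\; \mu_0 \sum_i \frac{M(T + \Delta T, H_i) - M(T, H_i)}{\Delta T}\, \Delta H_i
```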

Keywords: manganite perovskite, magnetocaloric effect, X-ray diffraction, relative cooling power

Procedia PDF Downloads 120
249 Stochastic Repair and Replacement with a Single Repair Channel

Authors: Mohammed A. Hajeeh

Abstract:

This paper examines the behavior of a system which, upon failure, is either replaced with probability p or imperfectly repaired with probability q. The system is analyzed using Kolmogorov's forward equations; the analytical expression for the steady-state availability is derived as an indicator of the system's performance. It is found that the analysis becomes more complex as the number of imperfect repairs increases. It is also observed that the availability increases as the number of states and the replacement probability increase. Using such an approach in more complex configurations and in dynamic systems is cumbersome; therefore, it is advisable to resort to simulation or heuristics. An example is provided for demonstration.
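As an illustration of the forward-equations approach, the sketch below builds a small working/repair/replacement Markov model and solves the steady-state balance equations π Q = 0 for the availability; the three-state structure and all rate values are hypothetical and are not the configuration analyzed in the paper.

```python
import numpy as np

# Hypothetical rates: failure lam, imperfect-repair rate mu_r, replacement rate mu_s,
# and branching probabilities p (replace) and q = 1 - p (imperfect repair).
lam, mu_r, mu_s, p = 0.02, 0.5, 0.1, 0.3
q = 1.0 - p

# States: 0 = working, 1 = under imperfect repair, 2 = under replacement.
# Generator matrix Q (rows sum to zero); Kolmogorov's forward equations read dpi/dt = pi Q.
Q = np.array([
    [-lam,     q * lam,  p * lam],
    [ mu_r,   -mu_r,     0.0    ],
    [ mu_s,    0.0,     -mu_s   ],
])

# Steady state: solve pi Q = 0 together with the normalization sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]   # long-run fraction of time the system is operational
print("steady-state availability:", round(float(availability), 4))
```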

Keywords: repairable models, imperfect, availability, exponential distribution

Procedia PDF Downloads 262
248 Multi-Criteria Test Case Selection Using Ant Colony Optimization

Authors: Niranjana Devi N.

Abstract:

Test case selection chooses the subset of fit test cases and removes the unfit, ambiguous, redundant, and unnecessary ones, which in turn improves the quality and reduces the cost of software testing. Test case optimization is the problem of finding, from a pool of test cases to be audited, the best subset that meets all testing objectives concurrently. However, most research has evaluated the fitness of test cases on a single parameter, fault-detection capability, and has optimized the test cases using a single objective. In the proposed approach, nine parameters are considered for test case selection, and the best subset of parameters is obtained using an Interval Type-2 Fuzzy Rough Set. Test case selection is done in two stages. The first stage is a fuzzy entropy-based filtration technique, used for estimating and reducing the ambiguity in test case fitness evaluation and selection. The second stage is an ant colony optimization-based wrapper technique with a forward search strategy, employed to select test cases from the reduced test suite of the first stage. The results are evaluated using coverage parameters, Precision, Recall, F-Measure, APSC, APDC, and SSR. The experimental evaluation demonstrates that considerable computational effort can be avoided by this approach.

Keywords: ant colony optimization, fuzzy entropy, interval type-2 fuzzy rough set, test case selection

Procedia PDF Downloads 630
247 Developing a Cause-Effect Model of Urban Resilience versus Flood in Karaj City Using TOPSIS and Shannon Entropy Techniques

Authors: Mohammad Saber Eslamlou, Manouchehr Tabibian, Mahta Mirmoghtadaei

Abstract:

The history of urban development and the increasing complexities of urban life have long been intertwined with different natural and man-made disasters. Sometimes, these unpleasant events have destroyed cities forever. The growth of the urban population and the increase of social and economic resources in cities have increased the importance of developing a holistic approach to dealing with unknown urban disasters. As a result, interest in resilience has increased in most scientific fields, and the urban planning literature has been enriched with studies of the social, economic, infrastructural, and physical capacities of cities. In this regard, different conceptual frameworks and patterns have been developed focusing on dimensions of resilience and different kinds of disasters. As the most frequent and likely natural disaster in Iran is flooding, the present study aims to develop a cause-effect model of urban resilience against flood in Karaj City. In this theoretical study, desk research and documentary studies were used to find the elements and dimensions of urban resilience. In this regard, 6 dimensions and 32 elements of urban resilience were identified, and a questionnaire was constructed considering the requirements of the TOPSIS technique (pairwise comparison). The sample of the research consisted of 10 participants who were faculty members, academicians, board members of research centers, managers of the Ministry of Road and Urban Development, board members of New Towns Development Company, experts, and practitioners of consulting companies with scientific and research backgrounds. The gathered data were analyzed using the TOPSIS and Shannon Entropy techniques. The results show that the Infrastructure/Physical, Social, Organizational/Institutional, Structural/Physical, Economic, and Environmental dimensions are the most effective factors in urban resilience against floods in Karaj, in that order. Finally, a comprehensive model and a systematic framework of factors that affect the urban resilience of Karaj against floods were developed. This cause-effect model shows how different factors are related to and influence each other, based on their connected structure and preferences.
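As a sketch of how the two techniques fit together, the code below derives Shannon entropy weights from a decision matrix and ranks alternatives with TOPSIS; the matrix values, the number of criteria, and the all-benefit treatment of criteria are invented for illustration and do not reproduce the study's survey data.

```python
import numpy as np

def entropy_weights(X):
    """Shannon entropy weights for a decision matrix X (alternatives x criteria)."""
    P = X / X.sum(axis=0)                                # column-wise proportions
    with np.errstate(divide="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(X.shape[0])     # normalized entropy per criterion
    d = 1.0 - E                                          # degree of diversification
    return d / d.sum()

def topsis(X, weights):
    """Closeness coefficient to the ideal solution (all criteria treated as benefit criteria)."""
    R = X / np.sqrt((X ** 2).sum(axis=0))                # vector normalization
    V = R * weights
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

# Hypothetical scores of 4 resilience dimensions on 3 criteria.
X = np.array([[7.0, 6.0, 8.0],
              [5.0, 9.0, 6.0],
              [8.0, 5.0, 7.0],
              [6.0, 7.0, 5.0]])
w = entropy_weights(X)
print("weights:", np.round(w, 3))
print("ranking (best first):", np.argsort(-topsis(X, w)))
```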

Keywords: urban resilience, TOPSIS, Shannon entropy, cause-effect model of resilience, flood

Procedia PDF Downloads 30
246 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks

Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz

Abstract:

Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a large number of unnecessary handovers, which in turn increase the signalling overhead. To guarantee service continuity, minimize unnecessary handovers, and reduce signalling overhead in heterogeneous networks, it is essential to properly model the handover decision problem. In this paper, we model the handover decision as a Multiple Attribute Decision Making (MADM) problem, specifically using the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and we propose a hybrid TOPSIS method to control handover in heterogeneous networks. The proposed method adopts a hybrid weighting, which is a combination of entropy and standard deviation. A hybrid weighting control parameter is introduced to balance the impact of the standard deviation and entropy weightings on the network selection process and the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, compared to existing methods.
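A small sketch of the hybrid weighting step described above, combining entropy-based and standard-deviation-based weights through a control parameter, is given below; the attribute matrix, attribute names, and the value of the control parameter are illustrative assumptions.

```python
import numpy as np

def entropy_w(X):
    """Entropy-based attribute weights for a matrix of candidate cells x attributes."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(np.clip(P, 1e-12, None))).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - E
    return d / d.sum()

def std_w(X):
    """Standard-deviation-based attribute weights."""
    s = X.std(axis=0)
    return s / s.sum()

def hybrid_w(X, beta=0.5):
    """Hybrid weight = beta * entropy weight + (1 - beta) * standard-deviation weight."""
    return beta * entropy_w(X) + (1.0 - beta) * std_w(X)

# Hypothetical candidate cells (rows) scored on signal-, load-, and interference-related attributes.
X = np.array([[0.8, 0.4, 0.6],
              [0.5, 0.7, 0.9],
              [0.9, 0.6, 0.3]])
print(hybrid_w(X, beta=0.3))   # weights still sum to 1 for any beta in [0, 1]
```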

Keywords: handover, HetNets, interference, MADM, small cells, TOPSIS, weight

Procedia PDF Downloads 113
245 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand

Authors: Chukiat Chaiboonsri, Satawat Wannapan

Abstract:

This paper applies an asymmetric information concept to macroeconomic model estimation for the tourism sector in Thailand. The variables analyzed statistically are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by private sectors, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the Maximum Entropy Bootstrapping (MEboot) approach, based on a process that attempts to deal with imperfect information and reduce uncertainty in the data observations (asymmetrical data). In addition, tourism leakages were investigated by a simple model based on the injections and leakages concept. The empirical findings show that the parameters computed from the MEboot approach differ from those of the GMM method. However, both the MEboot estimation and the GMM model suggest that Thailand's tourism sectors are in a period capable of stimulating the economy.

Keywords: Thailand tourism, Maximum Entropy Bootstrapping approach, macroeconomic model, asymmetric information

Procedia PDF Downloads 269
244 Accelerated Molecular Simulation: A Convolution Approach

Authors: Jannes Quer, Amir Niknejad, Marcus Weber

Abstract:

Computational Drug Design is often based on Molecular Dynamics simulations of molecular systems. Molecular Dynamics can be used to simulate, e.g., the binding and unbinding event of a small drug-like molecule with regard to the active site of an enzyme or a receptor. However, the time scale of the overall binding event is many orders of magnitude longer than the time scale of the simulation. Thus, there is a need to speed up molecular simulations. In order to do so, the molecular dynamics trajectories have to be "steered" out of the local minimizers of the potential energy surface, the so-called metastabilities, of the molecular system. Increasing the kinetic energy (temperature) is one possibility to accelerate simulated processes. However, with temperature the entropy of the molecular system increases too, and this kind of steering is not directed enough to drive the molecule out of the minimum toward the saddle point. In this article, we give a new mathematical idea of how a potential energy surface can be changed in such a way that entropy is kept under control while the trajectories are still steered out of the metastabilities. In order to compute the unsteered transition behaviour from a steered simulation, we propose to use extrapolation methods. In the end, we show mathematically that our method accelerates the simulations along the direction in which the curvature of the potential energy surface changes the most, i.e., from local minimizers towards saddle points.

Keywords: extrapolation, Eyring-Kramers, metastability, multilevel sampling

Procedia PDF Downloads 296
243 A Brief Study about Nonparametric Adherence Tests

Authors: Vinicius R. Domingues, Luan C. S. M. Ozelim

Abstract:

Statistical study has become indispensable in various fields of knowledge. Geotechnics is no different: probabilistic and statistical methods have gained ground there because of their use in characterizing the uncertainties inherent in soil properties. One of the situations engineers constantly face is the definition of a probability distribution that adequately represents the sampled data. To be able to discard poor distributions, goodness-of-fit tests are necessary. In this paper, three non-parametric goodness-of-fit tests are applied to a computationally generated data set to test its goodness of fit to a series of known distributions. It is shown that the use of the normal distribution does not always provide satisfactory results regarding the physical and behavioral representation of the modeled parameters.
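A short sketch of applying the three tests named in the keywords with SciPy is shown below; the lognormal sample, its size, and the normal distribution under test are arbitrary choices for illustration, and fitting the normal parameters from the same data makes the nominal p-values only approximate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Generated data: lognormal, standing in for a sampled soil property that is not normal.
data = rng.lognormal(mean=0.0, sigma=0.5, size=200)

# Kolmogorov-Smirnov test against a normal distribution fitted to the data.
ks = stats.kstest(data, "norm", args=(data.mean(), data.std(ddof=1)))
print("KS:", ks.statistic, ks.pvalue)

# Anderson-Darling test against the normal family (returns critical values, not a p-value).
ad = stats.anderson(data, dist="norm")
print("AD:", ad.statistic, ad.critical_values)

# Cramer-von Mises test against the fitted normal.
cvm = stats.cramervonmises(data, "norm", args=(data.mean(), data.std(ddof=1)))
print("CvM:", cvm.statistic, cvm.pvalue)
```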

Keywords: Kolmogorov-Smirnov test, Anderson-Darling test, Cramer-Von-Mises test, nonparametric adherence tests

Procedia PDF Downloads 414
242 Energy Conservation in Heat Exchangers

Authors: Nadia Allouache

Abstract:

Energy conservation is one of the major concerns in the modern high-tech era due to the limited amount of energy resources and the increasing cost of energy. Predicting an efficient use of energy in thermal systems like heat exchangers can only be achieved if the second law of thermodynamics is accounted for. The performance of heat exchangers can be substantially improved by many passive heat transfer augmentation techniques. These techniques improve the heat transfer rate and increase the exchange surface, but they also increase the friction factor associated with the flow. This raises the question of how to employ these passive techniques so as to minimize the destruction of useful energy. The objective of the present study is to use a porous substrate attached to the walls as a passive enhancement technique in heat exchangers and to find the compromise between the hydrodynamic and thermal performances under turbulent flow conditions, using a second law approach. A modified k-ε model is used to simulate the turbulent flow in the porous medium, and the turbulent shear flow is accounted for in the entropy generation equation. A numerical model based on the finite volume method is employed to discretize the governing equations. The effects of several parameters are investigated, such as the porous substrate properties and the flow conditions. Results show that, for certain values of the porous layer thickness, permeability, and effective thermal conductivity, the rate of entropy production is minimized.
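The local volumetric entropy generation rate referred to above has the standard two-term form shown below, with a heat-transfer contribution and a fluid-friction contribution; writing it with effective (laminar plus turbulent) properties is one common way to account for the turbulent shear flow and is given here only as a generic form, not the paper's exact equation.

```latex
S'''_{\mathrm{gen}} \;=\; \frac{k_{\mathrm{eff}}}{T^{2}}\,(\nabla T)^{2}
\;+\; \frac{\mu_{\mathrm{eff}}}{T}\,\Phi
```

Here Φ denotes the viscous dissipation function.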

Keywords: second law approach, annular heat exchanger, turbulent flow, porous medium, modified model, numerical analysis

Procedia PDF Downloads 255
241 Direct Measurements of the Electrocaloric Effect in Solid Ferroelectric Materials via Thermoreflectance

Authors: Layla Farhat, Mathieu Bardoux, Stéphane Longuemart, Ziad Herro, Abdelhak Hadj Sahraoui

Abstract:

The electrocaloric (EC) effect refers to the isothermal entropy or adiabatic temperature change of a dielectric material induced by an external electric field. This phenomenon has been largely ignored for applications because only modest EC effects (2.6

Keywords: electrocaloric effect, thermoreflectance, ferroelectricity, cooling system

Procedia PDF Downloads 153
240 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. About one percent of the world's population experiences epileptic seizure attacks. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and many spikes and sharp waves appear in the EEG signals. Detection of epileptic seizures by conventional methods is time-consuming, so many methods have been developed to detect them automatically. The initial part of this paper reviews the techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns, and for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, and cross-correlation, to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review is to present the variations in the EEG signals at both stages, (i) interictal (recording between epileptic seizure attacks) and (ii) ictal (recording during the epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on the subjects by studying the EEG signals using recent signal processing techniques. The study has been conducted with Reiki as the healing technique, considered beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session, and post-intervention to bring about effective epileptic seizure control or its elimination altogether.
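Among the parameters listed above, sample entropy is simple to illustrate; the sketch below is a direct (non-optimized) implementation applied to a synthetic segment standing in for EEG data, with the embedding dimension and tolerance chosen only as common defaults.

```python
import numpy as np

def sample_entropy(signal, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal, with tolerance r = r_factor * std(signal)."""
    x = np.asarray(signal, dtype=float)
    r = r_factor * x.std()
    n = len(x)
    num_templates = n - m          # use the same template count for lengths m and m + 1

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(num_templates)])
        count = 0
        for i in range(num_templates - 1):
            # Chebyshev distance between template i and all later templates (self-matches excluded).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# Usage sketch: a noisy sine standing in for a short EEG segment.
t = np.linspace(0, 4 * np.pi, 500)
segment = np.sin(t) + 0.2 * np.random.default_rng(1).standard_normal(500)
print("SampEn:", round(sample_entropy(segment), 3))
```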

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 377
239 Key Parameters Analysis of the Stirring Systems in the Optimization Procedures

Authors: T. Gomes, J. Manzi

Abstract:

The inclusion of stirring systems in calculation and optimization procedures has received little attention, which can be reflected in the results, because such systems provide additional energy to the process and promote a better distribution of mass and energy. This is meaningful for reactive systems, particularly for the Continuous Stirred Tank Reactor (CSTR), for which the key variables and parameters, as well as the operating conditions of the stirring system, can play a pivotal role; it has been shown in the literature that neglecting these factors can lead to sub-optimal results. It is also well known that the sole use of the First Law of Thermodynamics as an optimization tool cannot yield satisfactory results, whereas the joint use of the First and Second Laws, condensed into a procedure called entropy generation minimization (EGM), has shown itself able to drive the system towards better results. Therefore, the main objective of this paper is to determine the effects of the key parameters of the stirring system on the optimization procedure by means of EGM applied to reactive systems. These considerations are made possible by dimensional analysis according to the Rayleigh and Buckingham method, which takes into account the physical and geometric parameters and the variables of the reactive system. For a simulation based on the production of propylene glycol, the results show a significant increase in the conversion rate, from 36% (non-optimized system) to 95% (optimized system), with a consequent reduction of by-products. In addition, it has been possible to establish the influence of the stirrer work on the optimization procedure, which can be described as a function of the fluid viscosity and, consequently, of the temperature. The conclusions also indicate that entropic analysis as an optimization tool is simple, easy to apply, and requires low computational effort.

Keywords: stirring systems, entropy, reactive system, optimization

Procedia PDF Downloads 221
238 Response Solutions of 2-Dimensional Elliptic Degenerate Quasi-Periodic Systems With Small Parameters

Authors: Song Ni, Junxiang Xu

Abstract:

This paper concerns quasi-periodic perturbations, with parameters, of 2-dimensional degenerate systems in which the equilibrium point of the unperturbed system is of elliptic degenerate type. Assume that the perturbation is real analytic and quasi-periodic with Diophantine frequency. Without imposing any further assumption on the perturbation, we use a path of equilibrium points to handle the Melnikov non-resonance condition; then, by the Leray-Schauder continuation theorem and the Kolmogorov-Arnold-Moser technique, it is proved that the equation has a small response solution for many sufficiently small parameters.

Keywords: quasi-periodic systems, KAM-iteration, degenerate equilibrium point, response solution

Procedia PDF Downloads 55
237 Catalytic Thermodynamics of Nanocluster Adsorbates from Informational Statistical Mechanics

Authors: Forrest Kaatz, Adhemar Bultheel

Abstract:

We use an informational statistical mechanics approach to study the catalytic thermodynamics of platinum and palladium cuboctahedral nanoclusters. Nanoclusters and their adatoms are viewed as chemical graphs with a nearest-neighbor adjacency matrix. We use the Morse potential to determine bond energies between cluster atoms in a coordination-type calculation, and adsorbate energies calculated from density functional theory (DFT) to study the adatom effects on the thermodynamic quantities, which are derived from a Hamiltonian. Oxygen radical and molecular adsorbates are studied on platinum clusters, and hydrogen on palladium clusters. We calculate the entropy, free energy, and total energy as the coverage of adsorbates on bridge and hollow surface sites increases. The thermodynamic behavior versus adatom coverage is related to the structural distribution of adatoms on the nanocluster surfaces. The thermodynamic functions are characterized using a simple adsorption model, with linear trends as the coverage of adatoms increases. The data exhibit size effects for the measured thermodynamic properties, with cluster diameters between 2 and 5 nm. Entropy and enthalpy calculations of Pt-O₂ compare well with previous theoretical data for Pt(111)-O₂, and our Pd-H results show trends similar to experimental measurements for Pd-H₂ nanoclusters. Our methods are general and may be applied to a wide variety of nanocluster-adsorbate systems.
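The Morse potential used above for the bond energies has the standard form shown below, with well depth D_e, equilibrium bond length r_e, and width parameter a; the parameter values for Pt and Pd are not reproduced here.

```latex
V(r) \;=\; D_e \left( 1 - e^{-a (r - r_e)} \right)^{2}
```

In this convention V(r_e) = 0 at the equilibrium bond length and V(r) approaches the dissociation energy D_e as r grows large.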

Keywords: catalytic thermodynamics, palladium nanocluster adsorbates, platinum nanocluster adsorbates, statistical mechanics

Procedia PDF Downloads 125
236 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This newly burgeoning field calls for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, 'CombinedScore', as the evaluation criterion. This proposed score was created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through the equality-of-opportunity violation measurement. We also improved machine learning model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
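A rough sketch of this kind of scoring is shown below: an entropy-based confidence score over a candidate model's predicted probabilities on the OOD data is harmonically combined with a score from the labeled source domain. The exact definition of 'CombinedScore' is not given in the abstract, so the formulas and example values below are an illustrative reading rather than the authors' implementation.

```python
import numpy as np

def prediction_confidence_score(probs):
    """1 minus the mean normalized predictive entropy over OOD samples (higher = more confident)."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    ent = -(p * np.log(p)).sum(axis=1) / np.log(p.shape[1])
    return 1.0 - float(ent.mean())

def combined_score(probs_ood, source_accuracy):
    """Harmonic mean of the entropy-based confidence score and the labeled-source accuracy."""
    u = prediction_confidence_score(probs_ood)
    return 2.0 * u * source_accuracy / (u + source_accuracy + 1e-12)

# Usage sketch: pick the candidate model with the highest combined score.
candidates = {
    "model_a": (np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]), 0.85),
    "model_b": (np.array([[0.4, 0.3, 0.3], [0.34, 0.33, 0.33]]), 0.90),
}
best = max(candidates, key=lambda name: combined_score(*candidates[name]))
print("selected:", best)
```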

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 101
235 Groundwater Potential Mapping using Frequency Ratio and Shannon’s Entropy Models in Lesser Himalaya Zone, Nepal

Authors: Yagya Murti Aryal, Bipin Adhikari, Pradeep Gyawali

Abstract:

The Lesser Himalaya zone of Nepal consists of thrusting and folding belts, which play an important role in the sustainable management of groundwater in the Himalayan regions. The study area is located in the Dolakha and Ramechhap Districts of Bagmati Province, Nepal. Geologically, these districts are situated in the Lesser Himalayas and partly encompass the Higher Himalayan rock sequence, which includes low-grade to high-grade metamorphic rocks. Following the Gorkha Earthquake in 2015, numerous springs dried up, and many others are currently experiencing depletion due to the distortion of the natural groundwater flow. The primary objective of this study is to identify potential groundwater areas and determine suitable sites for artificial groundwater recharge. Two distinct statistical approaches were used to develop models: The Frequency Ratio (FR) and Shannon Entropy (SE) methods. The study utilized both primary and secondary datasets and incorporated significant role and controlling factors derived from field works and literature reviews. Field data collection involved spring inventory, soil analysis, lithology assessment, and hydro-geomorphology study. Additionally, slope, aspect, drainage density, and lineament density were extracted from a Digital Elevation Model (DEM) using GIS and transformed into thematic layers. For training and validation, 114 springs were divided into a 70/30 ratio, with an equal number of non-spring pixels. After assigning weights to each class based on the two proposed models, a groundwater potential map was generated using GIS, classifying the area into five levels: very low, low, moderate, high, and very high. The model's outcome reveals that over 41% of the area falls into the low and very low potential categories, while only 30% of the area demonstrates a high probability of groundwater potential. To evaluate model performance, accuracy was assessed using the Area under the Curve (AUC). The success rate AUC values for the FR and SE methods were determined to be 78.73% and 77.09%, respectively. Additionally, the prediction rate AUC values for the FR and SE methods were calculated as 76.31% and 74.08%. The results indicate that the FR model exhibits greater prediction capability compared to the SE model in this case study.
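A compact sketch of the frequency-ratio step described above is given below: for each class of a conditioning factor, FR is the percentage of springs in the class divided by the percentage of the study area in the class; the slope classes and counts are invented for illustration.

```python
import numpy as np

def frequency_ratio(spring_counts, pixel_counts):
    """FR per class = (% of springs in the class) / (% of area pixels in the class)."""
    springs = np.asarray(spring_counts, dtype=float)
    pixels = np.asarray(pixel_counts, dtype=float)
    return (springs / springs.sum()) / (pixels / pixels.sum())

# Hypothetical slope classes (gentle to steep): spring occurrences and pixel counts per class.
springs_per_class = [40, 30, 20, 10]
pixels_per_class = [20000, 30000, 35000, 15000]
print(np.round(frequency_ratio(springs_per_class, pixels_per_class), 2))
# FR > 1 marks a class favorable to groundwater occurrence; class FR values are summed
# across conditioning factors to produce the groundwater potential index for each pixel.
```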

Keywords: groundwater potential mapping, frequency ratio, Shannon’s Entropy, Lesser Himalaya Zone, sustainable groundwater management

Procedia PDF Downloads 45
234 Extended Intuitionistic Fuzzy VIKOR Method in Group Decision Making: The Case of Vendor Selection Decision

Authors: Nastaran Hajiheydari, Mohammad Soltani Delgosha

Abstract:

Vendor (supplier) selection is a group decision-making (GDM) process in which, based on some predetermined criteria, the experts' preferences are provided in order to rank and choose the most desirable suppliers. In the real business environment, our attitudes or choices are often made in uncertain and indecisive situations and cannot be expressed in a crisp framework. Intuitionistic fuzzy sets (IFSs) can handle such situations in the best way. The VIKOR method was developed to solve multi-criteria decision-making (MCDM) problems. This method, which is used to determine the compromise feasible solution with respect to conflicting criteria, introduces a multi-criteria ranking index based on a particular measure of 'closeness' to the 'ideal solution'. Until now, there has been little investigation of VIKOR with IFSs; therefore, we extend the intuitionistic fuzzy (IF) VIKOR to solve the vendor selection problem under an IF GDM environment. The present study develops an IF VIKOR method for a GDM situation. A model is presented to calculate the criterion weights based on an entropy measure. Then, the interval-valued intuitionistic fuzzy weighted geometric (IFWG) operator is utilized to obtain the total decision matrix. In the next stage, an approach based on the positive ideal intuitionistic fuzzy number (PIIFN) and the negative ideal intuitionistic fuzzy number (NIIFN) is developed. Finally, the application of the proposed method to solve a vendor selection problem is illustrated.

Keywords: group decision making, intuitionistic fuzzy set, intuitionistic fuzzy entropy measure, vendor selection, VIKOR

Procedia PDF Downloads 122
233 Behaviours of Energy Spectrum at Low Reynolds Numbers in Grid Turbulence

Authors: Md Kamruzzaman, Lyazid Djenidi, R. A. Antonia

Abstract:

This paper reports an experimental investigation of the energy spectrum of turbulent velocity fields at low Reynolds numbers (Rλ) in grid turbulence. Hot-wire measurements are carried out in grid turbulence subjected to a 1.36:1 contraction of the wind tunnel. Three different grids are used: (i) a large square perforated grid (mesh size 43.75 mm), (ii) a small square perforated grid (mesh size 14 mm), and (iii) a woven mesh grid (mesh size 5 mm). The results indicate that the energy spectrum at small Rλ does not follow Kolmogorov's universal scaling. It is further found that the critical Reynolds number, Rλ,c, below which the scaling breaks down is around 25.

Keywords: energy spectrum, Taylor microscale, Reynolds number, turbulent kinetic energy, decay exponent

Procedia PDF Downloads 264
232 Probabilistic Simulation of Triaxial Undrained Cyclic Behavior of Soils

Authors: Arezoo Sadrinezhad, Kallol Sett, S. I. Hariharan

Abstract:

In this paper, a probabilistic framework based on the Fokker-Planck-Kolmogorov (FPK) approach is applied to simulate the triaxial cyclic constitutive behavior of uncertain soils. The framework builds upon the writers' previous work and has been extended here to cyclic probabilistic simulation of the triaxial undrained behavior of soils. A von Mises elastic-perfectly plastic material model is considered. It is shown that, by using the probabilistic framework, some of the most important aspects of soil behavior under cyclic loading can be captured even with a simple elastic-perfectly plastic constitutive model.

Keywords: elasto-plasticity, uncertainty, soils, Fokker-Planck equation, Fourier spectral method, finite difference method

Procedia PDF Downloads 339
231 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound, and fetal fibronectin. However, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording the uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used, with 300 signals acquired from two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). EHG signals from 38 term-labor and 38 preterm-labor recordings were preprocessed with band-pass Butterworth filters of 0.08-4 Hz. EHG signal features were then extracted, comprising classical time-domain descriptors including the root mean square and zero-crossing number, spectral parameters including the peak frequency, mean frequency, and median frequency, wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy, and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results show that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term. The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term-labor and preterm-labor groups. Any future work on classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent, and sample entropy being among the prime candidates. Even if these methods are not yet ready for clinical practice, they do provide the most promising indicators for preterm labor.

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 529
230 Analyzing the Results of Buildings Energy Audit by Using Grey Set Theory

Authors: Tooraj Karimi, Mohammadreza Sadeghi Moghadam

Abstract:

Grey set theory has the advantage of using fewer data to analyze many factors, and it is therefore more appropriate for system study than traditional statistical regression, which requires massive data, normally distributed data, and few variant factors. In this paper, grey clustering and the entropy of the coefficient vector of grey evaluations are used to analyze energy consumption in buildings of the Oil Ministry in Tehran. In fact, this article intends to analyze the results of energy audit reports and to define the most favorable system characteristic, which is the energy consumption of buildings, and the most favorable factors affecting this characteristic, in order to modify and improve them. According to the results of the model, 'the real Building Load Coefficient' has been selected as the most important system characteristic, and 'uncontrolled area of the building' has been diagnosed as the most favorable factor with the greatest effect on the energy consumption of buildings. Grey clustering in this study has been used for two purposes: first, all the building variables related to the energy audit are clustered into two main groups of indicators, reducing the number of variables; second, grey clustering with variable weights is used to classify all buildings into three categories named 'no standard deviation', 'low standard deviation', and 'non-standard'. The entropy of the coefficient vector of grey evaluations is calculated to investigate the greyness of the results. It shows that, among the 38 buildings surveyed in terms of energy consumption, 3 cases are in the standard group, 24 cases are in the 'low standard deviation' group, and 11 buildings are completely non-standard. In addition, the clustering greyness of 13 buildings is less than 0.5, and the average uncertainty of the clustering results is 66%.

Keywords: energy audit, grey set theory, grey incidence matrixes, grey clustering, Iran oil ministry

Procedia PDF Downloads 343
229 Kinetic Alfvén Wave Localization and Turbulent Spectrum

Authors: Anju Kumari, R. P. Sharma

Abstract:

The localization of the Kinetic Alfvén Wave (KAW) caused by finite-amplitude background density fluctuations has been studied in an intermediate-beta plasma. The KAW breaks up into localized large-amplitude structures when perturbed by MHD fluctuations of the medium in the form of magnetosonic waves. Numerical simulation has been performed to analyse the localized structures and the resulting turbulent spectrum of the KAW, applicable to the magnetopause. Simulation results reveal that the power spectrum deviates from Kolmogorov scaling at a transverse KAW scale equal to the ion gyroradius. Steepening of the power spectrum at shorter wavelengths may account for heating and acceleration of the plasma particles. The obtained results are compared with observations collected by the THEMIS spacecraft at the magnetopause.

Keywords: Kinetic Alfvén Wave (KAW), localization, turbulence, turbulent spectrum

Procedia PDF Downloads 465