Search results for: multi-scale entropy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 409

229 Gait Biometric for Person Re-Identification

Authors: Lavanya Srinivasan

Abstract:

Biometric identification relies on unique characteristics of a person, such as fingerprints, iris, ear shape, and voice, which generally require the subject's permission and physical contact. Gait biometrics instead identifies a person by their unique manner of walking, captured as moving features. Its main advantage is that the gait of a person can be observed at a distance, without any physical contact. In this work, gait biometrics is used for person re-identification. A person walking naturally is compared with the same person walking with a bag, a coat, and a case, recorded using longwave infrared, shortwave infrared, medium wave infrared, and visible cameras. The videos are recorded in rural and urban environments. The pre-processing stage includes human detection using YOLO, background subtraction, silhouette extraction, and synthesis of the Gait Entropy Image by averaging the silhouettes. The moving features are extracted from the Gait Entropy Image, reduced in dimensionality by principal component analysis, and recognised using different classifiers. The comparative results show that linear discriminant analysis outperforms the other classifiers, reaching 95.8% for the visible camera in the rural dataset and 94.8% for longwave infrared in the urban dataset.
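A minimal sketch of the silhouette-averaging step described above (an illustration only; the exact alignment and normalisation used by the author are not stated in the abstract): the aligned binary silhouettes of one gait cycle are averaged, and a per-pixel binary Shannon entropy turns that average into a Gait Entropy Image.

    import numpy as np

    def gait_entropy_image(silhouettes):
        # silhouettes: array of shape (n_frames, H, W) of aligned binary masks in {0, 1}
        p = np.clip(silhouettes.mean(axis=0), 1e-6, 1 - 1e-6)   # average silhouette
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))     # per-pixel binary entropy

The resulting H x W image can then be flattened, reduced with PCA, and passed to the classifiers compared in the study.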

Keywords: biometric, gait, silhouettes, YOLO

Procedia PDF Downloads 156
228 A Study on the Assessment of Prosthetic Infection after Total Knee Replacement Surgery

Authors: Chun-Lang Chang, Chun-Kai Liu

Abstract:

In this study, patients who had undergone total knee replacement surgery, drawn from the 2010 National Health Insurance database, were adopted as the study participants. The important factors were screened and selected through literature collection and interviews with physicians. Through the Cross Entropy Method (CE), Genetic Algorithm Logistic Regression (GALR), and Particle Swarm Optimization (PSO), the weights of the factors were obtained. In addition, the weights from the respective algorithms, coupled with Excel VBA, were used to construct the Case Based Reasoning (CBR) system. The results of statistical tests show that GALR and PSO produced no significant differences, and the accuracy of both models was above 97%. Moreover, the area under the ROC curve for these two models also exceeded 0.87. This study can serve as a reference to assist medical staff in the clinical assessment of infections, in order to effectively enhance medical service quality and efficiency, avoid unnecessary medical waste, and substantially contribute to resource allocation in medical institutions.

Keywords: Case Based Reasoning, Cross Entropy Method, Genetic Algorithm Logistic Regression, Particle Swarm Optimization, Total Knee Replacement Surgery

Procedia PDF Downloads 304
227 Nonstationary Increments and Causality in the Aluminum Market

Authors: Andrew Clark

Abstract:

McCauley, Bassler, and Gunaratne show that integrated I(d) processes, as used in economics and finance, do not necessarily produce stationary increments, which are required to determine causality in both the short term and the long term. This paper follows their lead and shows that I(d) aluminum cash and futures log prices at daily and weekly intervals do not have stationary increments, which means prior causality studies using I(d) processes need to be re-examined. Wavelets based on undifferenced cash and futures log prices do have stationary increments and are used along with transfer entropy (rather than cointegration) to measure causality. Wavelets exhibit causality at most daily time scales out to 1 year, and at weekly time scales out to 1 year and beyond. To determine stationarity, localized stationary wavelets (LSWs) are used. LSWs have the benefit, versus other means of testing for stationarity, of using multiple hypothesis tests. As informational flows exist between cash and futures at daily and weekly intervals, the aluminum market is efficient. Therefore, producers and consumers of aluminum who hedge need not be greatly concerned about the underestimation of hedge ratios. Questions about arbitrage given efficiency are addressed in the paper.
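A minimal sketch of a lag-1, histogram-based transfer entropy estimate between two increment series (an illustration of the general technique only; the paper's wavelet-based pipeline and estimator settings are not specified in the abstract):

    import numpy as np

    def transfer_entropy(source, target, bins=8):
        # Lag-1 histogram estimate of TE(source -> target), in bits.
        xt1, xt, yt = target[1:], target[:-1], source[:-1]
        counts, _ = np.histogramdd((xt1, xt, yt), bins=bins)
        p = counts / counts.sum()                # p(x_{t+1}, x_t, y_t)
        p_xy = p.sum(axis=0)                     # p(x_t, y_t)
        p_x1x = p.sum(axis=2)                    # p(x_{t+1}, x_t)
        p_x = p.sum(axis=(0, 2))                 # p(x_t)
        num = p * p_x[None, :, None]
        den = p_xy[None, :, :] * p_x1x[:, :, None]
        mask = (p > 0) & (den > 0)
        return float(np.sum(p[mask] * np.log2(num[mask] / den[mask])))

Applied to the increments of cash and futures log prices, a clearly positive value in one direction and not the other indicates an informational flow between the two series.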

Keywords: transfer entropy, nonstationary increments, wavelets, localized stationary wavelets

Procedia PDF Downloads 181
226 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered. Further, this approach can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the mesoscale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows, so that their interaction and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites. For the mesoscale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used: one based on the finite element method, and one that makes use of an embedded semi-analytical approach. The goal will be the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 327
225 Structural, Magnetic and Magnetocaloric Properties of Iron-Doped Nd₀.₆Sr₀.₄MnO₃ Perovskite

Authors: Ismail Al-Yahmadi, Abbasher Gismelseed, Fatma Al-Mammari, Ahmed Al-Rawas, Ali Yousif, Imaddin Al-Omari, Hisham Widatallah, Mohamed Elzain

Abstract:

The influence of Fe-doping on the structural, magnetic, and magnetocaloric properties of Nd₀.₆Sr₀.₄FeₓMn₁₋ₓO₃ (0 ≤ x ≤ 0.5) was investigated. The samples were synthesized by the auto-combustion sol-gel method. The phase purity, crystallinity, and structural properties of all prepared samples were examined by X-ray diffraction. XRD refinement indicates that the samples crystallize in the orthorhombic single phase with the Pnma space group. Temperature-dependent magnetization measurements under an applied magnetic field of 0.02 T reveal that the samples with x = 0.0, 0.1, 0.2, and 0.3 exhibit a paramagnetic (PM) to ferromagnetic (FM) transition with decreasing temperature. The Curie temperature decreases with increasing Fe content, from 256 K for x = 0.0 to 80 K for x = 0.3, due to the increase of antiferromagnetic superexchange (SE) interaction coupling. Moreover, magnetization versus applied magnetic field (M-H) curves were measured at 2 K and 300 K; the results of these measurements confirm the temperature-dependent magnetization measurements. The magnetic entropy change |∆SM| was evaluated using Maxwell's relation. The maximum values of the magnetic entropy change |∆SM|max for x = 0.0, 0.1, 0.2, and 0.3 are found to be 15.35, 5.13, 3.36, and 1.08 J/kg·K for an applied magnetic field of 9 T. Our results on the magnetocaloric properties suggest that the parent sample Nd₀.₆Sr₀.₄MnO₃ could be a good refrigerant for low-temperature magnetic refrigeration.
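For reference, the standard form of Maxwell's relation used to extract the magnetic entropy change from isothermal magnetization curves (the exact unit convention used by the authors is not stated in the abstract) reads, in LaTeX notation:

    \Delta S_M(T, H_{\max}) \;=\; \mu_0 \int_0^{H_{\max}} \left( \frac{\partial M}{\partial T} \right)_{H} dH
    \;\approx\; \mu_0 \sum_i \frac{M(T_{i+1}, H_i) - M(T_i, H_i)}{T_{i+1} - T_i}\, \Delta H_i ,

where the sum runs over the discrete fields of the measured M-H isotherms at neighbouring temperatures T_i and T_{i+1}.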

Keywords: manganite perovskite, magnetocaloric effect, X-ray diffraction, relative cooling power

Procedia PDF Downloads 130
224 Multi-Criteria Test Case Selection Using Ant Colony Optimization

Authors: Niranjana Devi N.

Abstract:

Test case selection chooses the subset of fit test cases and removes the unfit, ambiguous, redundant, and unnecessary ones, which in turn improves the quality and reduces the cost of software testing. Test case optimization is the problem of finding, from a pool of candidate test cases, the best subset that meets all the objectives of testing concurrently. However, most research has evaluated the fitness of test cases on a single parameter only, fault-detecting capability, and has optimized the test cases using a single objective. In the proposed approach, nine parameters are considered for test case selection, and the best subset of parameters for test case selection is obtained using an Interval Type-2 Fuzzy Rough Set. Test case selection is done in two stages. The first stage is a fuzzy entropy-based filtration technique, used for estimating and reducing the ambiguity in test case fitness evaluation and selection. The second stage is an ant colony optimization-based wrapper technique with a forward search strategy, employed to select test cases from the reduced test suite of the first stage. The results are evaluated using the coverage parameters Precision, Recall, F-Measure, APSC, APDC, and SSR. The experimental evaluation demonstrates that considerable computational effort can be avoided with this approach.
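A minimal sketch of the first-stage idea, entropy-based filtration of ambiguous test cases (using a classical type-1 De Luca-Termini fuzzy entropy for illustration; the authors' interval type-2 fuzzy rough set formulation is more involved, and the keep_ratio parameter below is hypothetical):

    import numpy as np

    def fuzzy_entropy(memberships):
        # De Luca-Termini fuzzy entropy of a vector of membership grades in [0, 1]
        mu = np.clip(np.asarray(memberships, dtype=float), 1e-12, 1 - 1e-12)
        return float(-np.mean(mu * np.log2(mu) + (1 - mu) * np.log2(1 - mu)))

    def filter_test_cases(fitness_memberships, keep_ratio=0.7):
        # One row per test case, one column per selection parameter; keep the
        # test cases whose parameter-wise fitness grades are least ambiguous.
        scores = np.array([fuzzy_entropy(row) for row in fitness_memberships])
        keep = np.argsort(scores)[: int(keep_ratio * len(scores))]
        return np.sort(keep)   # indices of the reduced suite passed to the ACO wrapper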

Keywords: ant colony optimization, fuzzy entropy, interval type-2 fuzzy rough set, test case selection

Procedia PDF Downloads 643
223 Key Drivers for Nighttime Construction under the EPC Contract

Authors: Aditya Pal, S. Z. S. Tabish, Kumar Neeraj Jha

Abstract:

In the construction industry, engineering, procurement, and construction (EPC) projects are becoming increasingly prevalent; they provide clients with benefits such as a decreased workload, streamlined execution, and a singular point of accountability. EPC projects entail round-the-clock operations, which calls for an analysis of the variables that affect productivity during nocturnal hours. The current body of research on the distinctions between daytime and nighttime construction lacks a comprehensive examination of nocturnal attributes. The objective of this research is to ascertain the critical factors that influence the productivity of nighttime construction by conducting site investigations and reviewing the relevant literature. The influence of factors such as illumination conditions, equipment deployment, quality procedures, and government regulations on productivity is carefully examined. The study ranks the significance of these factors using the relative importance index (RII) and the entropy weighted method (EWM). The primary determinants identified in the study are temperature (RII: 0.8444), weather conditions (RII: 0.8222), and material and apparatus maintenance (RII: 0.8222). The findings serve as recommendations for project managers and EPC contractors to reduce setbacks and increase efficiency. Comparing the outcomes of the EWM and RII identifies the most effective approach for addressing the most critical factors.
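For illustration, the two ranking schemes named in the abstract can be sketched as follows (a generic formulation; the questionnaire scale and matrix layout are assumptions, not taken from the abstract):

    import numpy as np

    def relative_importance_index(ratings, max_scale=5):
        # RII per factor = sum of ratings / (highest rating * number of respondents)
        r = np.asarray(ratings, dtype=float)          # shape (n_respondents, n_factors)
        return r.sum(axis=0) / (max_scale * r.shape[0])

    def entropy_weights(ratings):
        # Entropy weighted method: more dispersed (informative) factors get larger weights
        x = np.asarray(ratings, dtype=float)
        p = np.clip(x / x.sum(axis=0, keepdims=True), 1e-12, 1.0)
        e = -np.sum(p * np.log(p), axis=0) / np.log(x.shape[0])
        d = 1.0 - e
        return d / d.sum()

As a worked check, if a 1-5 Likert scale was used, an RII of 0.8444 for temperature corresponds to an average rating of about 4.2 out of 5.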

Keywords: productivity, nighttime work, statistical methods, construction, entropy weighted method, relative importance indexing

Procedia PDF Downloads 17
222 Underwater Image Enhancement and Reconstruction Using CNN and the MultiUNet Model

Authors: Snehal G. Teli, R. J. Shelke

Abstract:

CNN and MultiUNet models are the framework for the proposed method for enhancing and reconstructing underwater images. Multiscale merging of features and regeneration are both performed by the MultiUNet. CNN collects relevant features. Extensive tests on benchmark datasets show that the proposed strategy performs better than the latest methods. As a result of this work, underwater images can be represented and interpreted in a number of underwater applications with greater clarity. This strategy will advance underwater exploration and marine research by enhancing real-time underwater image processing systems, underwater robotic vision, and underwater surveillance.

Keywords: convolutional neural network, image enhancement, machine learning, multiunet, underwater images

Procedia PDF Downloads 48
221 Multiscale Modelling of Citrus Black Spot Transmission Dynamics along the Pre-Harvest Supply Chain

Authors: Muleya Nqobile, Winston Garira

Abstract:

We present a compartmental deterministic multi-scale model which encompasses the internal plant defensive mechanism and pathogen interaction, and we then nest this model into the epidemiological model. The objective is to improve our understanding of the within-host and between-host transmission dynamics of Guignardia citricarpa Kiely. The inflow into the infected class is scaled down to the individual level, while the outflow is scaled up to the average population level. A conceptual model and a mathematical model were constructed to provide a theoretical framework which can be used to predict or identify disease patterns.

Keywords: epidemiological model, mathematical modelling, multi-scale modelling, immunological model

Procedia PDF Downloads 434
220 Developing a Cause-Effect Model of Urban Resilience versus Flood in Karaj City Using TOPSIS and Shannon Entropy Techniques

Authors: Mohammad Saber Eslamlou, Manouchehr Tabibian, Mahta Mirmoghtadaei

Abstract:

The history of urban development and the increasing complexities of urban life have long been intertwined with different natural and man-made disasters. Sometimes, these unpleasant events have destroyed the cities forever. The growth of the urban population and the increase of social and economic resources in the cities increased the importance of developing a holistic approach to dealing with unknown urban disasters. As a result, the interest in resilience has increased in most of the scientific fields, and the urban planning literature has been enriched with the studies of the social, economic, infrastructural, and physical abilities of the cities. In this regard, different conceptual frameworks and patterns have been developed focusing on dimensions of resilience and different kinds of disasters. As the most frequent and likely natural disaster in Iran is flooding, the present study aims to develop a cause-effect model of urban resilience against flood in Karaj City. In this theoretical study, desk research and documentary studies were used to find the elements and dimensions of urban resilience. In this regard, 6 dimensions and 32 elements were found for urban resilience, and a questionnaire was made by considering the requirements of TOPSIS techniques (pairwise comparison). The sample of the research consisted of 10 participants who were faculty members, academicians, board members of research centers, managers of the Ministry of Road and Urban Development, board members of New Towns Development Company, experts, and practitioners of consulting companies who had scientific and research backgrounds. The gathered data in this survey were analyzed using TOPSIS and Shannon Entropy techniques. The results show that Infrastructure/Physical, Social, Organizational/Institutional, Structural/Physical, Economic, and Environmental dimensions are the most effective factors in urban resilience against floods in Karaj, respectively. Finally, a comprehensive model and a systematic framework of factors that affect the urban resilience of Karaj against floods were developed. This cause-effect model shows how different factors are related and influence each other, based on their connected structure and preferences.
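The two techniques named in the abstract can be combined in a short sketch (a generic entropy-weighted TOPSIS, not the authors' exact questionnaire pipeline; the benefit-criterion mask, positive-valued decision matrix, and layout are assumptions):

    import numpy as np

    def entropy_topsis(decision_matrix, benefit=None):
        # Shannon-entropy criterion weights followed by TOPSIS closeness to the ideal solution.
        x = np.asarray(decision_matrix, dtype=float)           # (n_alternatives, n_criteria)
        m, n = x.shape
        benefit = np.ones(n, dtype=bool) if benefit is None else np.asarray(benefit, dtype=bool)

        p = np.clip(x / x.sum(axis=0, keepdims=True), 1e-12, 1.0)
        e = -np.sum(p * np.log(p), axis=0) / np.log(m)         # entropy per criterion
        w = (1 - e) / (1 - e).sum()                            # entropy weights

        r = x / np.sqrt((x ** 2).sum(axis=0, keepdims=True))   # vector-normalised matrix
        v = r * w                                              # weighted normalised matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
        d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))
        return d_minus / (d_plus + d_minus)                    # closeness per alternative

The returned closeness coefficients rank the alternatives (here, resilience dimensions or factors) from most to least effective.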

Keywords: urban resilience, TOPSIS, Shannon entropy, cause-effect model of resilience, flood

Procedia PDF Downloads 38
219 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks

Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz

Abstract:

Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn result in an increase in the signalling overhead. To guarantee service continuity, minimize unnecessary handovers, and reduce signalling overhead in heterogeneous networks, it is essential to properly model the handover decision problem. In this paper, we model the handover decision according to a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and we propose a hybrid TOPSIS method to control the handover in heterogeneous networks. The proposed method adopts a hybrid weighting, which is a combination of entropy and standard deviation. A hybrid weighting control parameter is introduced to balance the impact of the standard deviation and entropy weighting on the network selection process and the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, compared to the existing methods.
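A minimal sketch of the hybrid weighting step (the blending rule and the name alpha for the control parameter are assumptions; only the combination of entropy and standard-deviation weights is taken from the abstract):

    import numpy as np

    def hybrid_weights(attributes, alpha=0.5):
        # attributes: (n_candidate_cells, n_attributes); alpha balances the two weightings
        x = np.asarray(attributes, dtype=float)
        p = np.clip(x / x.sum(axis=0, keepdims=True), 1e-12, 1.0)
        e = -np.sum(p * np.log(p), axis=0) / np.log(x.shape[0])
        w_entropy = (1 - e) / (1 - e).sum()
        w_std = x.std(axis=0) / x.std(axis=0).sum()
        return alpha * w_entropy + (1 - alpha) * w_std

The resulting weight vector would then be used in the TOPSIS ranking of candidate cells for the handover decision.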

Keywords: handover, HetNets, interference, MADM, small cells, TOPSIS, weight

Procedia PDF Downloads 121
218 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand

Authors: Chukiat Chaiboonsri, Satawat Wannapan

Abstract:

This paper applies an asymmetric information concept to the macroeconomic model estimation of the tourism sector in Thailand. The variables used in the statistical analysis are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by private sectors, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments model (GMM), based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the Maximum Entropy Bootstrapping approach (MEboot), based on a process that attempts to deal with imperfect information and reduce uncertainty in data observations (asymmetrical data). In addition, the tourism leakages were investigated by a simple model based on the injections and leakages concept. The empirical findings show that the parameters computed from the MEboot approach differ from those of the GMM method. However, both the MEboot estimation and the GMM model suggest that Thailand's tourism sectors are in a period capable of stimulating the economy.

Keywords: Thailand tourism, Maximum Entropy Bootstrapping approach, macroeconomic model, asymmetric information

Procedia PDF Downloads 275
217 Accelerated Molecular Simulation: A Convolution Approach

Authors: Jannes Quer, Amir Niknejad, Marcus Weber

Abstract:

Computational Drug Design is often based on Molecular Dynamics simulations of molecular systems. Molecular Dynamics can be used to simulate, e.g., the binding and unbinding event of a small drug-like molecule with regard to the active site of an enzyme or a receptor. However, the time-scale of the overall binding event is many orders of magnitude longer than the time-scale of the simulation. Thus, there is a need to speed up molecular simulations. In order to speed up simulations, the molecular dynamics trajectories have to be "steered" out of local minimizers of the potential energy surface – the so-called metastabilities – of the molecular system. Increasing the kinetic energy (temperature) is one possibility to accelerate simulated processes. However, with temperature the entropy of the molecular system increases, too. But this kind of "steering" is not directed enough to steer the molecule out of the minimum toward the saddle point. In this article, we give a new mathematical idea of how a potential energy surface can be changed in such a way that entropy is kept under control while the trajectories are still steered out of the metastabilities. In order to compute the unsteered transition behaviour based on a steered simulation, we propose to use extrapolation methods. In the end we show mathematically that our method accelerates the simulations along the direction in which the curvature of the potential energy surface changes the most, i.e., from local minimizers towards saddle points.

Keywords: extrapolation, Eyring-Kramers, metastability, multilevel sampling

Procedia PDF Downloads 303
216 Energy Conservation in Heat Exchangers

Authors: Nadia Allouache

Abstract:

Energy conservation is one of the major concerns in the modern high-tech era due to the limited amount of energy resources and the increasing cost of energy. Predicting an efficient use of energy in thermal systems like heat exchangers can only be achieved if the second law of thermodynamics is accounted for. The performance of heat exchangers can be substantially improved by many passive heat transfer augmentation techniques. These techniques improve the heat transfer rate and increase the exchange surface, but on the other hand, they also increase the friction factor associated with the flow. This raises the question of how to employ these passive techniques in order to minimize the loss of useful energy. The objective of the present study is to use a porous substrate attached to the walls as a passive enhancement technique in heat exchangers and to find the compromise between the hydrodynamic and thermal performances under turbulent flow conditions, by using a second law approach. A modified k-ε model is used to simulate the turbulent flow in the porous medium, and the turbulent shear flow is accounted for in the entropy generation equation. A numerical model based on the finite volume method is employed to discretize the governing equations. The effects of several parameters are investigated, such as the porous substrate properties and the flow conditions. Results show that under certain conditions of the porous layer thickness, its permeability, and its effective thermal conductivity, a minimum rate of entropy production is obtained.
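For context, a generic Bejan-type expression for the local volumetric entropy generation rate in convective flow through a porous layer (the exact turbulent closure terms used by the author are not given in the abstract) reads, in LaTeX notation:

    \dot{S}'''_{\mathrm{gen}} \;=\; \frac{k_{\mathrm{eff}}}{T^{2}}\,(\nabla T)^{2}
    \;+\; \frac{\mu}{T}\,\Phi
    \;+\; \frac{\mu}{K\,T}\,\lvert \mathbf{u} \rvert^{2},

where the three terms represent heat-transfer irreversibility, viscous dissipation (Φ), and the Darcy contribution of the porous substrate with permeability K; minimizing the integral of this quantity over the exchanger is the second-law objective described above.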

Keywords: second law approach, annular heat exchanger, turbulent flow, porous medium, modified model, numerical analysis

Procedia PDF Downloads 262
215 Direct Measurements of the Electrocaloric Effect in Solid Ferroelectric Materials via Thermoreflectance

Authors: Layla Farhat, Mathieu Bardoux, Stéphane Longuemart, Ziad Herro, Abdelhak Hadj Sahraoui

Abstract:

Electrocaloric (EC) effect refers to the isothermal entropy or adiabatic temperature changes of a dielectric material induced by an external electric field. This phenomenon has been largely ignored for application because only modest EC effects (2.6

Keywords: electrocaloric effect, thermoreflectance, ferroelectricity, cooling system

Procedia PDF Downloads 159
214 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the total world population gets epileptic seizure attacks. Due to the abrupt flow of charge, EEG (Electroencephalogram) waveforms change, and many spikes and sharp waves appear in the EEG signals. Detection of epileptic seizures by conventional methods is time-consuming. Many methods have evolved that detect them automatically. The initial part of this paper provides a review of techniques used to detect epileptic seizures automatically. The automatic detection is based on feature extraction and classification patterns. For better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g. approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review paper is to present the variations in the EEG signals at both stages: (i) interictal (recording between the epileptic seizure attacks) and (ii) ictal (recording during the epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. This research paper then investigates the effects of a noninvasive healing therapy on the subjects by studying the EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as the healing technique, considered beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session, and post intervention to bring about effective epileptic seizure control or its elimination altogether.
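Among the features listed above, sample entropy is representative; a minimal implementation sketch follows (a standard definition; the embedding dimension m, tolerance r, and any preprocessing applied to the EEG records are not stated in the abstract, so the defaults here are assumptions):

    import numpy as np

    def sample_entropy(x, m=2, r=None):
        # SampEn(m, r): negative log of the conditional probability that sequences
        # matching for m points (within tolerance r) also match for m + 1 points.
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        n = len(x)

        def count_matches(mm):
            templates = np.array([x[i:i + mm] for i in range(n - mm)])
            matches = 0
            for i in range(len(templates) - 1):
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                matches += np.sum(d <= r)
            return matches

        b, a = count_matches(m), count_matches(m + 1)
        return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

Lower values indicate a more regular, predictable signal; studies of this kind typically compare such values between interictal, ictal, and post-intervention recordings.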

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 384
213 Key Parameters Analysis of the Stirring Systems in the Optimization Procedures

Authors: T. Gomes, J. Manzi

Abstract:

The inclusion of stirring systems in calculation and optimization procedures has suffered from a significant lack of attention, which can be reflected in the results, because such systems provide additional energy to the process and promote a better distribution of mass and energy. This is meaningful for reactive systems, particularly for the Continuous Stirred Tank Reactor (CSTR), for which the key variables and parameters, as well as the operating conditions of the stirring system, can play a pivotal role; it has been shown in the literature that neglecting these factors can lead to sub-optimal results. It is also well known that the sole use of the First Law of Thermodynamics as an optimization tool cannot yield satisfactory results, whereas the joint use of the First and Second Laws, condensed into a procedure called entropy generation minimization (EGM), has shown itself able to drive the system towards better results. Therefore, the main objective of this paper is to determine the effects of the key parameters of the stirring system on the optimization procedures by means of EGM applied to reactive systems. Such considerations have been made possible by dimensional analysis according to Rayleigh and Buckingham's method, which takes into account the physical and geometric parameters and the variables of the reactive system. For a simulation based on the production of propylene glycol, the results show a significant increase in the conversion rate from 36% (non-optimized system) to 95% (optimized system), with a consequent reduction of by-products. In addition, it has been possible to establish the influence of the work of the stirrer in the optimization procedure, which can be described as a function of the fluid viscosity and, consequently, of the temperature. The conclusions also indicate that the use of entropic analysis as an optimization tool is simple, easy to apply, and requires low computational effort.

Keywords: stirring systems, entropy, reactive system, optimization

Procedia PDF Downloads 229
212 Catalytic Thermodynamics of Nanocluster Adsorbates from Informational Statistical Mechanics

Authors: Forrest Kaatz, Adhemar Bultheel

Abstract:

We use an informational statistical mechanics approach to study the catalytic thermodynamics of platinum and palladium cuboctahedral nanoclusters. Nanoclusters and their adatoms are viewed as chemical graphs with a nearest neighbor adjacency matrix. We use the Morse potential to determine bond energies between cluster atoms in a coordination type calculation. We use adsorbate energies calculated from density functional theory (DFT) to study the adatom effects on the thermodynamic quantities, which are derived from a Hamiltonian. Oxygen radical and molecular adsorbates are studied on platinum clusters and hydrogen on palladium clusters. We calculate the entropy, free energy, and total energy as the coverage of adsorbates increases from bridge and hollow sites on the surface. Thermodynamic behavior versus adatom coverage is related to the structural distribution of adatoms on the nanocluster surfaces. The thermodynamic functions are characterized using a simple adsorption model, with linear trends as the coverage of adatoms increases. The data exhibit size effects for the measured thermodynamic properties with cluster diameters between 2 and 5 nm. Entropy and enthalpy calculations of Pt-O2 compare well with previous theoretical data for Pt(111)-O2, and our Pd-H results show similar trends as experimental measurements for Pd-H2 nanoclusters. Our methods are general and may be applied to a wide variety of nanocluster adsorbate systems.
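The thermodynamic quantities named above follow from the standard canonical-ensemble relations on which such a Hamiltonian-based informational approach rests (the cluster-specific Hamiltonian built from Morse bond energies and DFT adsorbate energies is not reproduced here); in LaTeX notation:

    Z(T) = \sum_i g_i\, e^{-E_i / k_B T}, \qquad
    F = -k_B T \ln Z, \qquad
    S = -\left( \frac{\partial F}{\partial T} \right)_{N,V}, \qquad
    U = F + T S .

Evaluating these sums as adsorbates are added to bridge and hollow sites gives the entropy and free-energy trends versus coverage described in the abstract.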

Keywords: catalytic thermodynamics, palladium nanocluster adsorbates, platinum nanocluster adsorbates, statistical mechanics

Procedia PDF Downloads 144
211 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This burgeoning field calls for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, "CombinedScore," as the evaluation criterion. This proposed score was created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through the equality-of-opportunity violation measurement. We also improved machine learning model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
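A minimal sketch of the uncertainty-entropy side of this idea and of a harmonic-mean combination (the exact scaling and the inputs of the authors' CombinedScore are not given in the abstract, so both quantities below are illustrative assumptions):

    import numpy as np

    def mean_prediction_entropy(probs):
        # Average Shannon entropy of predicted class probabilities on unlabeled OOD data
        p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
        return float(np.mean(-np.sum(p * np.log(p), axis=1)))

    def combined_score(ood_entropy, source_accuracy):
        # Fold labeled-source performance into the uncertainty score with a harmonic mean
        certainty = 1.0 / (1.0 + ood_entropy)        # map entropy to (0, 1]
        return 2.0 * certainty * source_accuracy / (certainty + source_accuracy)

Candidate models would then be ranked by this score, with the highest-scoring model selected for deployment on the unseen domain.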

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 106
210 Groundwater Potential Mapping using Frequency Ratio and Shannon’s Entropy Models in Lesser Himalaya Zone, Nepal

Authors: Yagya Murti Aryal, Bipin Adhikari, Pradeep Gyawali

Abstract:

The Lesser Himalaya zone of Nepal consists of thrusting and folding belts, which play an important role in the sustainable management of groundwater in the Himalayan regions. The study area is located in the Dolakha and Ramechhap Districts of Bagmati Province, Nepal. Geologically, these districts are situated in the Lesser Himalayas and partly encompass the Higher Himalayan rock sequence, which includes low-grade to high-grade metamorphic rocks. Following the Gorkha Earthquake in 2015, numerous springs dried up, and many others are currently experiencing depletion due to the distortion of the natural groundwater flow. The primary objective of this study is to identify potential groundwater areas and determine suitable sites for artificial groundwater recharge. Two distinct statistical approaches were used to develop models: the Frequency Ratio (FR) and Shannon Entropy (SE) methods. The study utilized both primary and secondary datasets and incorporated the significant controlling factors derived from fieldwork and literature reviews. Field data collection involved spring inventory, soil analysis, lithology assessment, and hydro-geomorphology study. Additionally, slope, aspect, drainage density, and lineament density were extracted from a Digital Elevation Model (DEM) using GIS and transformed into thematic layers. For training and validation, the 114 springs were divided in a 70/30 ratio, with an equal number of non-spring pixels. After assigning weights to each class based on the two proposed models, a groundwater potential map was generated using GIS, classifying the area into five levels: very low, low, moderate, high, and very high. The model outcome reveals that over 41% of the area falls into the low and very low potential categories, while only 30% of the area demonstrates a high probability of groundwater potential. To evaluate model performance, accuracy was assessed using the area under the curve (AUC). The success rate AUC values for the FR and SE methods were determined to be 78.73% and 77.09%, respectively. Additionally, the prediction rate AUC values for the FR and SE methods were calculated as 76.31% and 74.08%. The results indicate that the FR model exhibits greater prediction capability than the SE model in this case study.
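A minimal sketch of the frequency ratio computation for one thematic layer (the class coding and raster layout are assumptions; the authors' GIS workflow is not reproduced):

    import numpy as np

    def frequency_ratio(class_map, spring_mask):
        # FR per class = (share of spring pixels in the class) / (share of area the class occupies)
        fr = {}
        total_area = class_map.size
        total_springs = spring_mask.sum()
        for c in np.unique(class_map):
            in_class = class_map == c
            area_share = in_class.sum() / total_area
            spring_share = (spring_mask & in_class).sum() / total_springs
            fr[int(c)] = float(spring_share / area_share)
        return fr

Summing the class values assigned by FR (or by the entropy weighting) across all thematic layers for each pixel gives the groundwater potential index that is then sliced into the five levels listed above.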

Keywords: groundwater potential mapping, frequency ratio, Shannon’s Entropy, Lesser Himalaya Zone, sustainable groundwater management

Procedia PDF Downloads 53
209 Extended Intuitionistic Fuzzy VIKOR Method in Group Decision Making: The Case of Vendor Selection Decision

Authors: Nastaran Hajiheydari, Mohammad Soltani Delgosha

Abstract:

Vendor (supplier) selection is a group decision-making (GDM) process in which, based on some predetermined criteria, the experts' preferences are provided in order to rank and choose the most desirable suppliers. In the real business environment, our attitudes and the choices we make in uncertain and indecisive situations cannot be expressed in a crisp framework. Intuitionistic fuzzy sets (IFSs) can handle such situations in the best way. The VIKOR method was developed to solve multi-criteria decision-making (MCDM) problems. This method, which is used to determine the compromise feasible solution with respect to conflicting criteria, introduces a multi-criteria ranking index based on a particular measure of 'closeness' to the 'ideal solution'. Until now, there has been little investigation of VIKOR with IFSs; therefore, we extend the intuitionistic fuzzy (IF) VIKOR method to solve the vendor selection problem in an IF GDM environment. The present study intends to develop an IF VIKOR method for a GDM situation. Therefore, a model is presented to calculate the criterion weights based on an entropy measure. Then, the interval-valued intuitionistic fuzzy weighted geometric (IFWG) operator is utilized to obtain the total decision matrix. In the next stage, an approach based on the positive ideal intuitionistic fuzzy number (PIIFN) and the negative ideal intuitionistic fuzzy number (NIIFN) is developed. Finally, the application of the proposed method to solve a vendor selection problem is illustrated.

Keywords: group decision making, intuitionistic fuzzy set, intuitionistic fuzzy entropy measure, vendor selection, VIKOR

Procedia PDF Downloads 131
208 Ab Initio Multiscale Catalytic Synthesis/Cracking Reaction Modelling of Ammonia as Liquid Hydrogen Carrier

Authors: Blaž Likozar, Andraž Pavlišič, Matic Pavlin, Taja Žibert, Aleksandra Zamljen, Sašo Gyergyek, Matej Huš

Abstract:

Ammonia is gaining recognition as a carbon-free fuel for energy-intensive applications, particularly transportation, industry, and power generation. Due to its physical properties, high energy density of 3 kWh kg-1, and high gravimetric hydrogen capacity of 17.6 wt%, ammonia is an efficient energy vector for green hydrogen, capable of mitigating hydrogen's storage, distribution, and infrastructure deployment limitations. Chemical storage in the form of ammonia provides an efficient and affordable solution for energy storage, which is currently a critical step in overcoming the intermittency of abundant renewable energy sources with minimal or no environmental impact. Experiments carried out in a packed bed reactor were used to validate the modelling, and good agreement was found.

Keywords: hydrogen, ammonia, catalysis, modelling, kinetics

Procedia PDF Downloads 37
207 Multiscale Connected Component Labelling and Applications to Scientific Microscopy Image Processing

Authors: Yayun Hsu, Henry Horng-Shing Lu

Abstract:

In this paper, a new method is proposed to extend connected component labeling from processing binary images to multi-scale modeling of images. By using adaptive thresholds on multi-scale attributes, this approach minimizes the possibility of missing important components with weak intensities. In addition, the computational cost of this approach remains similar to that of the typical approach to component labeling. The methodology is then applied to grain boundary detection and Drosophila Brainbow neuron segmentation. These applications demonstrate the feasibility of the proposed approach in the analysis of challenging microscopy images for scientific discovery.

Keywords: microscopic image processing, scientific data mining, multi-scale modeling, data mining

Procedia PDF Downloads 416
206 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound, and fetal fibronectin. However, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used, with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) or preterm (38 cases). Among them, EHG signals from 38 term labors and 38 preterm labors were preprocessed with band-pass Butterworth filters of 0.08-4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency, and median frequency, wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy, and correlation dimension. Their statistical significance for recognition of the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five coefficients of the AR model showed significant differences between term labor and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term. The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term labor and preterm labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent, and sample entropy being among the prime candidates. Even if these methods are not yet useful for clinical practice, they do provide the most promising indicators for preterm labor.

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 545
205 Automatic Segmentation of the Clean Speech Signal

Authors: M. A. Ben Messaoud, A. Bouzid, N. Ellouze

Abstract:

Speech segmentation is the detection of change points for partitioning an input speech signal into regions, each of which corresponds to only one speaker. In this paper, we apply two features based on the multi-scale product (MP) of the clean speech, namely the spectral centroid of the MP and the zero-crossing rate of the MP. We focus on multi-scale product analysis as an important tool for segmentation extraction. The multi-scale product is obtained by taking the product of the speech wavelet transform coefficients at three successive dyadic scales. We have evaluated our method on the Keele database. Experimental results show the effectiveness of our method, which presents good performance. They show that the two simple features can find word boundaries and extract the segments of the clean speech.
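A minimal sketch of the multi-scale product itself (using the stationary wavelet transform from PyWavelets with a 'db2' wavelet as illustrative choices; the authors' wavelet and implementation details are not stated in the abstract):

    import numpy as np
    import pywt

    def multiscale_product(signal, wavelet="db2", levels=3):
        # Product of the detail coefficients at three successive dyadic scales.
        x = np.asarray(signal, dtype=float)
        n = len(x) - len(x) % (2 ** levels)          # swt needs a length divisible by 2**levels
        coeffs = pywt.swt(x[:n], wavelet, level=levels)
        details = np.vstack([cD for (_cA, cD) in coeffs])
        return np.prod(details, axis=0)

The spectral centroid and zero-crossing rate are then computed frame by frame on this product signal to locate candidate segment boundaries.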

Keywords: multiscale product, spectral centroid, speech segmentation, zero crossings rate

Procedia PDF Downloads 480
204 Analyzing the Results of Buildings Energy Audit by Using Grey Set Theory

Authors: Tooraj Karimi, Mohammadreza Sadeghi Moghadam

Abstract:

Grey set theory has the advantage of using fewer data to analyze many factors, and it is therefore more appropriate for system study than traditional statistical regression, which requires massive data, normally distributed data, and few variant factors. In this paper, grey clustering and the entropy of the coefficient vector of grey evaluations are used to analyze energy consumption in buildings of the Oil Ministry in Tehran. In fact, this article intends to analyze the results of energy audit reports, define the most favorable characteristic of the system, which is the energy consumption of buildings, and identify the most favorable factors affecting this characteristic in order to modify and improve them. According to the results of the model, 'the real Building Load Coefficient' has been selected as the most important system characteristic, and 'uncontrolled area of the building' has been diagnosed as the most favorable factor with the greatest effect on the energy consumption of buildings. Grey clustering in this study has been used for two purposes: first, all the variables of the building related to the energy audit are clustered in two main groups of indicators, and the number of variables is reduced; second, grey clustering with variable weights has been used to classify all buildings into three categories named 'no standard deviation', 'low standard deviation', and 'non-standard'. The entropy of the coefficient vector of grey evaluations is calculated to investigate the greyness of the results. It shows that among the 38 buildings surveyed in terms of energy consumption, 3 cases are in the standard group, 24 cases are in the 'low standard deviation' group, and 11 buildings are completely non-standard. In addition, the clustering greyness of 13 buildings is less than 0.5, and the average uncertainty of the clustering results is 66%.

Keywords: energy audit, grey set theory, grey incidence matrixes, grey clustering, Iran oil ministry

Procedia PDF Downloads 353
203 Numerical Investigation of the Transverse Instability in Radiation Pressure Acceleration

Authors: F. Q. Shao, W. Q. Wang, Y. Yin, T. P. Yu, D. B. Zou, J. M. Ouyang

Abstract:

The Radiation Pressure Acceleration (RPA) mechanism is very promising in laser-driven ion acceleration because of its high laser-ion energy conversion efficiency. Although some experiments have shown the characteristics of RPA, the energy of the ions is quite limited. The ion energy obtained in experiments is only several MeV/u, which is much lower than the theoretical prediction. One possible limiting factor is the transverse instability incited in the RPA process. The transverse instability is basically considered to be the Rayleigh-Taylor (RT) instability, which is a kind of interfacial instability and occurs when a light fluid pushes against a heavy fluid. Multi-dimensional particle-in-cell (PIC) simulations show that the onset of the transverse instability will destroy the acceleration process and broaden the energy spectrum of fast ions during RPA-dominant ion acceleration processes. Evidence of the RT instability driven by radiation pressure has been observed in a laser-foil interaction experiment in a typical RPA regime, and the dominant scale of the RT instability is close to the laser wavelength. The development of the transverse instability in radiation-pressure-acceleration-dominant laser-foil interaction is numerically examined by two-dimensional particle-in-cell simulations. When a laser interacts with a foil with a modulated surface, the instability is quickly incited and develops. The linear growth and saturation of the transverse instability are observed, and the growth rate is numerically diagnosed. In order to optimize the interaction parameters, a method based on information entropy is put forward to describe the degree of chaos of the transverse instability. With moderate modulation, the transverse instability shows a low degree of chaos, and a quasi-monoenergetic proton beam is produced.

Keywords: information entropy, radiation pressure acceleration, Rayleigh-Taylor instability, transverse instability

Procedia PDF Downloads 325
202 A Multi-Scale Contact Temperature Model for Dry Sliding Rough Surfaces

Authors: Jamal Choudhry, Roland Larsson, Andreas Almqvist

Abstract:

A multi-scale flash temperature model has been developed and validated against existing work. The core strength of the proposed model is that it can be adapted to predict the flash contact temperatures occurring in various types of sliding systems. In this paper, it is used to investigate how different surface roughness parameters affect the flash temperatures. The results show that the maximum flash temperature increases for decreasing Hurst exponents as well as for increasing values of the high-frequency cut-off. It was also shown that surface roughness does not influence the average interface temperature. The model predictions were validated against data from an experiment conducted in a pin-on-disc machine. This also showed the importance of including a wear model when simulating flash temperature development in a sliding system.

Keywords: multiscale, pin-on-disc, finite element method, flash temperature, surface roughness

Procedia PDF Downloads 92
201 Multiscale Structures and Their Evolution in a Screen Cylinder Wake

Authors: Azlin Mohd Azmi, Tongming Zhou, Akira Rinoshika, Liang Cheng

Abstract:

The turbulent structures in the wake (x/d = 10 to 60) of a screen cylinder have been examined to understand the roles of the various structures as they evolve downstream, by comparing them with those obtained in a solid circular cylinder wake at a Reynolds number Re of 7000. Using a wavelet multi-resolution technique, the flow structures are decomposed into a number of wavelet components based on their central frequencies. It is observed that in the solid cylinder wake, large-scale structures (of frequency f0 and 1.2 f0) make the largest contribution to the Reynolds stresses, although they start to lose their roles significantly at x/d > 20. In the screen cylinder wake, the intermediate-scale structures (2f0 and 4f0) contribute the most to the Reynolds stresses at x/d = 10 before being taken over by the large-scale structures (f0) further downstream.

Keywords: turbulent structure, screen cylinder, vortex, wavelet multi-resolution analysis

Procedia PDF Downloads 439
200 Multifunctional Composite Structural Elements for Sensing and Energy Harvesting

Authors: Amir H. Alavi, Kaveh Barri, Qianyun Zhang

Abstract:

This study presents a new generation of lightweight and mechanically tunable structural composites with sensing and energy harvesting functionalities. This goal is achieved by integrating metamaterial and triboelectric energy harvesting concepts. Proof-of-concept polymeric beam prototypes are fabricated using 3D printing methods based on the proposed concept. Experiments and theoretical analyses are conducted to quantitatively investigate the mechanical and electrical properties of the designed multifunctional beams. The results show that these integrated structural elements can serve as nanogenerators and distributed sensing mediums without the need to incorporate any external sensing modules or electronics. The feasibility of designing self-sensing and self-powering structural elements at multiple scales for next-generation infrastructure systems is further discussed.

Keywords: multifunctional structures, composites, metamaterial, triboelectric nanogenerator, sensors, structural health monitoring, energy harvesting

Procedia PDF Downloads 181