Search results for: reconstruction entropy.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 382


292 Feature Level Fusion of Multimodal Images Using Haar Lifting Wavelet Transform

Authors: Sudipta Majumdar, Jayant Bharadwaj

Abstract:

This paper presents feature-level image fusion using the Haar lifting wavelet transform. The fused features are edge and boundary information, obtained using the wavelet transform modulus maxima criterion. Simulation results show the superiority of the fused image, as its entropy, gradient, and standard deviation are increased compared with the input images. The proposed method has the advantages of simple implementation, a fast algorithm, perfect reconstruction, and reduced computational complexity. (The computational cost of the Haar wavelet is very small compared with other lifting wavelets.)
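
For readers unfamiliar with the lifting scheme, the following minimal Python sketch shows the split/predict/update steps of one Haar lifting level and its perfect-reconstruction inverse; it assumes an even-length signal, and the fusion rule itself (modulus-maxima feature selection) is not shown.

```python
import numpy as np

def haar_lifting_forward(x):
    """One Haar lifting level: split into even/odd, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even             # predict: odd samples from their even neighbours
    approx = even + detail / 2.0    # update: preserve the running mean
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order (perfect reconstruction)."""
    even = approx - detail / 2.0
    odd = even + detail
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4, 6, 10, 12, 8, 6, 5, 7])
a, d = haar_lifting_forward(x)
assert np.allclose(haar_lifting_inverse(a, d), x)   # perfect reconstruction holds
```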

Keywords: Lifting wavelet transform, wavelet transform modulus maxima.

291 AC Signals Estimation from Irregular Samples

Authors: Predrag B. Petrović

Abstract:

The paper deals with the estimation of the amplitude and phase of an analogue multi-harmonic band-limited signal from irregularly spaced sampling values. To this end, assuming the signal fundamental frequency is known in advance (i.e., estimated at an independent stage), a complexity-reduced algorithm for signal reconstruction in the time domain is proposed. The reduction in complexity is achieved owing to completely new analytical and summarized expressions that enable a quick estimation with a low numerical error. The proposed algorithm for calculating the unknown parameters requires O((2M+1)²) flops, while the straightforward solution of the obtained equations takes O((2M+1)³) flops (M is the number of harmonic components). It is applicable to signal reconstruction, spectral estimation, and system identification, as well as to other important signal processing problems. The proposed method of processing can be used for precise RMS measurements (for power and energy) of a periodic signal based on the presented signal reconstruction. The paper investigates the errors related to signal parameter estimation, and a computer simulation demonstrates the accuracy of the algorithms.
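
As a point of reference, the sketch below recovers the harmonic coefficients by a plain least-squares fit from irregular samples, assuming the fundamental frequency f0 is known; this is essentially the straightforward O((2M+1)³) baseline mentioned above, not the authors' reduced-complexity analytical algorithm.

```python
import numpy as np

def fit_harmonics(t, y, f0, M):
    """Least-squares fit of DC + M harmonics (2M+1 unknowns) at irregular times t."""
    cols = [np.ones_like(t)]
    for m in range(1, M + 1):
        cols.append(np.cos(2 * np.pi * m * f0 * t))
        cols.append(np.sin(2 * np.pi * m * f0 * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                                # [c0, a1, b1, ..., aM, bM]

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 40))         # irregular sampling instants
y = 1.0 + 2.0 * np.cos(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
coef = fit_harmonics(t, y, f0=5.0, M=2)
amp1 = np.hypot(coef[1], coef[2])              # amplitude of the fundamental, ≈ 2.0
```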

Keywords: Band-limited signals, Fourier coefficient estimation, analytical solutions, signal reconstruction, time.

290 Combination of Tensile Strength and Elongation of Reverse Rolled TaNbHfZrTi Refractory High Entropy Alloy

Authors: M. Veerasham

Abstract:

Refractory high entropy alloys are potential materials for high-temperature applications because of their ability to retain high strength up to 1600°C. However, their practical applications are limited by poor elongation at room temperature. Decreasing the average valence electron concentration (VEC) is therefore an effective design strategy to improve the intrinsic ductility of refractory high entropy alloys. In this work, the high-entropy alloy TaNbHfZrTi was processed at room temperature by stepwise reverse rolling up to a 90% reduction in thickness. Subsequently, the 90% reverse-rolled samples were annealed at 800°C and 1000°C for 1 h to understand the phase stability, microstructure, texture, and mechanical properties. The 90% reverse-rolled condition contains a single body-centered cubic (BCC) phase; upon annealing at 800°C, a secondary BCC-2 phase forms. Partially and completely recrystallized microstructures develop after annealing at 800°C and 1000°C, respectively. The reverse-rolled and 1000°C-annealed conditions exhibit outstanding room-temperature tensile properties, with high ultimate tensile strength (UTS) and no accompanying loss of ductility, thereby overcoming the "strength-ductility" trade-off. The 90% reverse-rolled condition and the 1000°C/1 h annealed condition reach UTS values of 1430 MPa and 1556 MPa with appreciable elongations of 21% and 20%, respectively. A hierarchical microstructure develops after annealing at 1000°C, which leads to the simultaneous increase in tensile strength and elongation.

Keywords: refractory high entropy alloys, reverse rolling, recrystallization, microstructure, tensile properties

289 Automatic 3D Reconstruction of Coronary Artery Centerlines from Monoplane X-ray Angiogram Images

Authors: Ali Zifan, Panos Liatsis, Panagiotis Kantartzis, Manolis Gavaises, Nicos Karcanias, Demosthenes Katritsis

Abstract:

We present a new method for the fully automatic 3D reconstruction of coronary artery centerlines, using two X-ray angiogram projection images from a single rotating monoplane acquisition system. In the first stage, the input images are smoothed using curve evolution techniques. Next, a simple yet efficient multiscale method for enhancing the vascular structure, based on the information in the Hessian matrix, is introduced. Hysteresis thresholding using different image quantiles is used to threshold the arteries. This stage is followed by a thinning procedure to extract the centerlines. The resulting skeleton image is then pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines combined with disparity map information allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
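
The sketch below illustrates a single-scale, Frangi-style vesselness measure built from the eigenvalues of the image Hessian, the kind of Hessian-based enhancement referred to above; the scale sigma and the parameters beta and c are illustrative assumptions, and the paper's own multiscale measure may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(img, sigma=2.0, beta=0.5, c=15.0):
    """Frangi-style vesselness at one scale from Hessian eigenvalues."""
    img = img.astype(float)
    Ixx = gaussian_filter(img, sigma, order=(0, 2)) * sigma ** 2
    Iyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma ** 2
    Ixy = gaussian_filter(img, sigma, order=(1, 1)) * sigma ** 2
    # Eigenvalues of the 2x2 Hessian, ordered by absolute value (|l1| <= |l2|).
    root = np.sqrt((Ixx - Iyy) ** 2 + 4 * Ixy ** 2)
    l1, l2 = 0.5 * (Ixx + Iyy - root), 0.5 * (Ixx + Iyy + root)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)       # blob-vs-line measure
    s = np.sqrt(l1 ** 2 + l2 ** 2)               # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)              # bright tubes; invert the image for dark vessels

img = np.zeros((64, 64))
img[30:34, :] = 1.0                              # synthetic bright "vessel"
response = vesselness_2d(img)
```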

Keywords: Vessel enhancement, centerline extraction, symbiotic reconstruction.

288 MAP-Based Image Super-resolution Reconstruction

Authors: Xueting Liu, Daojin Song, Chuandai Dong, Hongkui Li

Abstract:

From a set of shifted, blurred, and decimated images, super-resolution image reconstruction can recover a high-resolution image, so it has become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem. Prior knowledge about the image can be incorporated to make the problem well-posed, which leads to regularization methods. In current regularization methods, however, the regularization parameter is in some cases selected by experience, while other techniques incur a heavy computational cost in computing it. In this paper, we construct a new super-resolution algorithm by transforming the solution of the underlying system of equations into the solution of the nonlinear matrix equation X + A*X⁻¹A = I, and we propose an inverse iterative method.
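
To make the matrix equation concrete, the sketch below runs a generic fixed-point iteration X_{k+1} = I − A*X_k⁻¹A starting from the identity; this is only an illustration of the equation and of its Hermitian positive definite solutions, not necessarily the inverse iterative method proposed in the paper.

```python
import numpy as np

def solve_x_plus_ah_xinv_a(A, iters=200, tol=1e-10):
    """Fixed-point iteration for X + A* X^{-1} A = I, started from X = I."""
    n = A.shape[0]
    I = np.eye(n)
    X = I.copy()
    for _ in range(iters):
        X_new = I - A.conj().T @ np.linalg.inv(X) @ A
        if np.linalg.norm(X_new - X) < tol:
            break
        X = X_new
    return X_new

A = np.array([[0.2, 0.1],
              [0.0, 0.3]])
X = solve_x_plus_ah_xinv_a(A)
residual = np.linalg.norm(X + A.conj().T @ np.linalg.inv(X) @ A - np.eye(2))  # ≈ 0
```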

Keywords: High-resolution MAP image, Reconstruction, Image interpolation, Motion Estimation, Hermitian positive definite solutions.

287 Implementation of Vertical Neutron Camera (VNC) for ITER Fusion Plasma Neutron Source Profile Reconstruction

Authors: V. Amosov, Yu. Kashchuk, A. Krasilnikov, A. Kostin, A. Khovanskiy, A. Leonov, N. Rodionov, R. Rodionov

Abstract:

In the present work, the problem of reconstructing the ITER fusion plasma neutron source parameters using only Vertical Neutron Camera data is solved. The feasibility of neutron source parameter reconstruction was estimated by numerical simulations, and the adequacy of the mathematical model was analysed. The neutron source was specified in parametric form, and the stability of the solution with respect to data distortion was analysed numerically. The influence of the data errors on the reconstructed parameters is shown:
• one parameter is reconstructed with errors of less than 4% at all examined values of δ (up to 60%);
• another is determined with errors of less than 10% when δ does not exceed 5%;
• a third is reconstructed with a relative error of more than 10%;
• the integral intensity of the neutron source is determined with a 10% error while the δ error is less than 15%;
where δ is the error of the signal measurements, (R0, Z0) is the plasma center position, and the remaining symbols are parameters of the neutron source profile.

Keywords: ITER, neutron source, neutron source profile reconstruction, Vertical Neutron Camera.

286 An Improved C-Means Model for MRI Segmentation

Authors: Ying Shen, Weihua Zhu

Abstract:

Medical images are important for identifying different diseases; for example, magnetic resonance imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. In medical image analysis, image segmentation is usually the first step, finding regions with similar color, intensity, or texture so that diagnosis can then be carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model uses information entropy to evaluate the segmentation results while achieving global optimization. The contributions are twofold. First, a Genetic Algorithm (GA) is used to achieve global optimization, which the fuzzy C-means clustering algorithm (FCMA) alone cannot do. Second, the information entropy after segmentation is used to measure the effectiveness of the MRI image processing. Experimental results show that the proposed model outperforms traditional approaches.
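
The information-entropy score used here to evaluate a segmentation can be illustrated with the short sketch below; the label map is synthetic, and the GA and fuzzy C-means machinery is not shown.

```python
import numpy as np

def segmentation_entropy(labels):
    """Shannon entropy (in bits) of the label distribution of a segmented image."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

labels = np.random.default_rng(1).integers(0, 4, size=(64, 64))  # 4 tissue classes
H = segmentation_entropy(labels)    # close to 2 bits for a near-uniform labelling
```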

Keywords: Magnetic Resonance Image, C-means model, image segmentation, information entropy.

285 Quantification of Heart Rate Variability: A Measure based on Unique Heart Rates

Authors: V. I. Thajudin Ahamed, P. Dhanasekaran, A. Naseem, N. G. Karthick, T. K. Abdul Jaleel, Paul K. Joseph

Abstract:

It is established that the instantaneous heart rate (HR) of healthy humans varies continuously. Analysis of heart rate variability (HRV) has become a popular non-invasive tool for assessing the activity of the autonomic nervous system. Depressed HRV has been found in several disorders characterised by autonomic dysfunction, such as diabetes mellitus (DM) and coronary artery disease. A new technique, which searches for pattern repeatability in a time series, is proposed specifically for the analysis of heart rate data. The resulting indices, termed the pattern repeatability measure and the pattern repeatability ratio, are compared with approximate entropy and sample entropy. Using the developed method, it is observed that heart rate variability is significantly different for DM patients, particularly for patients with diabetic foot ulcers.
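
For context, the sketch below gives a compact (slightly simplified) sample entropy computation, one of the reference measures the proposed pattern-repeatability indices are compared against; m and r are the usual template length and tolerance.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Compact SampEn: -ln(A/B), A and B being template-match counts of length m+1 and m."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def matches(length):
        tpl = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=2)
        return (d <= r).sum() - len(tpl)          # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B)

rr = np.random.default_rng(2).normal(0.8, 0.05, 300)   # synthetic RR intervals (s)
print(sample_entropy(rr, m=2))
```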

Keywords: Autonomic nervous system, diabetes mellitus, heart rate variability, pattern identification, sample entropy

284 Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction

Authors: Sajid Abbas, Joon Pyo Hong, Jung-Ryun Lee, Seungryong Cho

Abstract:

Computed tomography and laminography are heavily investigated within compressive sensing based image reconstruction frameworks to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Researchers are now actively working on optimizing compressive sensing based iterative reconstruction algorithms to obtain better quality images. However, the effects of the sampled data's properties on reconstructed image quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigate the effects of two data properties, i.e., sampling density and data incoherence, on images reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We find that, in a compressive sensing based image reconstruction framework, the image quality mainly depends upon the data incoherence when the data are uniformly sampled.
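
One common numerical proxy for the "data incoherence" discussed above is the mutual coherence of the sampling/system matrix, sketched below; the incoherence measure actually used for laminography geometries in the paper may be defined differently.

```python
import numpy as np

def mutual_coherence(A):
    """Largest normalized inner product between distinct columns of A (lower = more incoherent)."""
    G = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return float(gram.max())

A = np.random.default_rng(3).standard_normal((64, 256))  # a random sensing matrix
mu = mutual_coherence(A)
```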

Keywords: Computed tomography, Computed laminography, Compressive sensing, Low-dose.

283 Secure Secret Recovery by using Weighted Personal Entropy

Authors: Leau Y. B., Dinna Nina M. N., Habeeb S. A. H., Jetol B.

Abstract:

Authentication plays a vital role in many secure systems. Most of these systems require the user to log in with a secret password or passphrase before entry. This ensures that all valuable information is kept confidential while also guaranteeing its integrity and availability. To achieve this goal, however, users are required to memorize high-entropy passwords or passphrases, and they often find it difficult to remember such meaningless strings of data. This paper presents a new scheme that assigns a weight to each personal question posed to the user when revealing the encrypted secret or password. The focus of the scheme is to offer fault tolerance by allowing users to forget the answers to a subset of questions and still recover the secret and achieve successful authentication. A comparison of the level of security of the weight-based and weightless secret recovery schemes is also discussed. The paper concludes with a few areas that require further investigation.

Keywords: Secret Recovery, Personal Entropy, Cryptography, Secret Sharing and Key Management.

282 Visual Hull with Imprecise Input

Authors: Peng He

Abstract:

Imprecision is a long-standing problem in CAD design and in high-accuracy image-based reconstruction applications. The visual hull, which is the closed silhouette-equivalent shape of the objects of interest, is an important concept in image-based reconstruction. We extend the domain-theoretic framework, a robust geometric model that captures imprecision, to analyze the imprecision in the output shape when the input vertices are given imprecisely. Under this framework, we present an efficient algorithm to generate the 2D partial visual hull, which represents the exact information of the visual hull under only basic imprecision assumptions. We also show how the visual hull from polyhedra problem can be solved efficiently in the context of imprecise input.

Keywords: Geometric Domain, Computer Vision, Computational Geometry, Visual Hull, Image-Based reconstruction, Imprecise Input, CAD object

281 River Flow Prediction Using Nonlinear Prediction Method

Authors: N. H. Adenan, M. S. M. Noorani

Abstract:

River flow prediction is essential to ensure that proper management of water resources can optimally distribute water to consumers. This study presents an analysis and prediction using a nonlinear prediction method with monthly river flow data for Tanjung Tualang from 1976 to 2006. The nonlinear prediction method involves phase space reconstruction and a local linear approximation approach. Phase space reconstruction embeds the one-dimensional series (the observed 287 months of data) in a multidimensional phase space to reveal the dynamics of the system. The reconstructed phase space is then used to predict the next 72 months. Prediction performance based on the correlation coefficient (CC) and root mean square error (RMSE) was compared for the nonlinear prediction method, ARIMA, and SVM. The comparisons show that the prediction results obtained using the nonlinear prediction method are better than those of ARIMA and SVM. Therefore, the results of this study could be used to develop an efficient water management system that optimizes the allocation of water resources.
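
A minimal sketch of delay-coordinate phase space reconstruction followed by a nearest-neighbour local-linear one-step prediction is given below; the embedding dimension, delay, and neighbourhood size are illustrative choices, not the values used in the study.

```python
import numpy as np

def embed(x, dim=3, tau=1):
    """Reconstruct the phase space of a scalar series by delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def local_linear_predict(x, dim=3, tau=1, k=10):
    """Predict the next value from the k nearest neighbours of the last embedded point."""
    x = np.asarray(x, dtype=float)
    X = embed(x, dim, tau)
    query, history = X[-1], X[:-1]
    idx = np.argsort(np.linalg.norm(history - query, axis=1))[:k]
    A = np.column_stack([history[idx], np.ones(k)])       # local affine model
    y = x[idx + (dim - 1) * tau + 1]                      # the neighbours' successors
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.append(query, 1.0) @ coef)

flow = np.sin(np.linspace(0, 40, 400)) + 0.05 * np.random.default_rng(4).standard_normal(400)
print(local_linear_predict(flow))          # one-step-ahead prediction of the series
```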

Keywords: River flow, nonlinear prediction method, phase space, local linear approximation.

280 An Attribute-Centre Based Decision Tree Classification Algorithm

Authors: Gökhan Silahtaroğlu

Abstract:

Decision tree algorithms have a very important place among classification models in data mining. In the literature, algorithms use the entropy concept or the Gini index to form the tree. The shape of the classes and their closeness to each other are some of the factors that affect the performance of the algorithm. In this paper we introduce a new decision tree algorithm which employs a data (attribute) folding method and the variation of the class variable over the branches to be created. A comparative performance analysis is carried out between the proposed algorithm and C4.5.
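
For reference, the two split criteria mentioned above, entropy and the Gini index, can be computed from the class counts reaching a candidate node as in this small sketch.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of a class-count vector."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def gini(counts):
    """Gini impurity of a class-count vector."""
    p = np.asarray(counts, dtype=float) / np.sum(counts)
    return float(1.0 - np.sum(p ** 2))

node = [40, 10]                              # class counts reaching a node
print(entropy(node), gini(node))             # ≈ 0.722 bits, 0.32
```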

Keywords: Classification, decision tree, split, pruning, entropy, gini.

279 Monthly River Flow Prediction Using a Nonlinear Prediction Method

Authors: N. H. Adenan, M. S. M. Noorani

Abstract:

River flow prediction is an essential tool to ensure proper management of water resources and the optimal distribution of water to consumers. This study presents an analysis and prediction using a nonlinear prediction method with monthly river flow data for Tanjung Tualang from 1976 to 2006. The nonlinear prediction method involves phase space reconstruction and a local linear approximation approach. Phase space reconstruction embeds the one-dimensional series (the observed 287 months of data) in a multidimensional phase space to reveal the dynamics of the system. The reconstructed phase space is then used to predict the next 72 months. A comparison of prediction performance based on the correlation coefficient (CC) and root mean square error (RMSE) was employed for the nonlinear prediction method, ARIMA, and SVM. The comparisons show that the prediction results using the nonlinear prediction method are better than those of ARIMA and SVM. Therefore, the results of this study could be used to develop an efficient water management system to optimize the allocation of water resources.

Keywords: River flow, nonlinear prediction method, phase space, local linear approximation.

278 An Advanced Time-Frequency Domain Method for PD Extraction with Non-Intrusive Measurement

Authors: Guomin Luo, Daming Zhang, Yong Kwee Koh, Kim Teck Ng, Helmi Kurniawan, Weng Hoe Leong

Abstract:

Partial discharge (PD) detection is an important method for evaluating the insulation condition of metal-clad apparatus. Non-intrusive sensors, which are easy to install and do not interrupt operation, are preferred for onsite PD detection. However, such detection often lacks accuracy due to interference in the PD signals. In this paper a novel PD extraction method that uses frequency analysis and entropy-based time-frequency (TF) analysis is introduced. The repetitive pulses from the converter are first removed via frequency analysis. Then, the relative entropy and relative peak frequency of each pulse (i.e., its time-indexed vector TF spectrum) are calculated and all pulses with similar parameters are grouped. According to the characteristics of the non-intrusive sensor and the frequency distribution of PDs, the pulses of PD and interference are separated. Finally, the PD signal and the interference are recovered via the inverse TF transform. The de-noised result on noisy PD data demonstrates that the combination of frequency and time-frequency techniques can discriminate PDs from interference with various frequency distributions.

Keywords: Entropy, Fourier analysis, non-intrusive measurement, time-frequency analysis, partial discharge

277 Effect of Aging on the Second Law Efficiency, Exergy Destruction and Entropy Generation in the Skeletal Muscles during Exercise

Authors: Jale Çatak, Bayram Yılmaz, Mustafa Ozilgen

Abstract:

The second law muscle work efficiency is obtained by multiplying the metabolic and mechanical work efficiencies. Thermodynamic analyses are carried out with 19 sets of arm and leg exercise data obtained from healthy young people. These data are used to simulate the changes occurring during aging. The muscle work efficiency decreases with aging as a result of the reduction of metabolic energy generation in the mitochondria. The reduction of the mitochondrial energy efficiency makes it difficult to maintain the muscle tissue, which in turn causes a decline of the muscle work efficiency. When the muscle attempts to produce more work, entropy generation and exergy destruction increase. Increasing exergy destruction may be regarded as the result of the deterioration of the muscles. When the exergetic efficiency is 0.42, exergy destruction becomes 1.49 times the work performance. This ratio becomes 2.50 and 5.21 when the exergetic efficiency decreases to 0.30 and 0.17, respectively.

Keywords: Aging mitochondria, entropy generation, exergy destruction, muscle work performance, second law efficiency.

276 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data, such as which products are often purchased together. Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data grow, the amount of time and resources required to mine them increases at an exponential rate. In this investigation a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. For frequent itemset mining, FASTER can produce a speed-up of 3.1 times compared with the original algorithm while maintaining an accuracy of 71%.

Keywords: Rough-sets, Classification, Feature Selection, Entropy, Outliers, Frequent itemset mining.

275 The Efficiency of Cytochrome Oxidase Subunit 1 Gene (cox1) in Reconstruction of Phylogenetic Relations among Some Crustacean Species

Authors: Yasser M. Saad, Heba El-Sebaie Abd El-Sadek

Abstract:

Some Metapenaeus monoceros cox1 gene fragments were isolated, purified, sequenced, and comparatively analyzed against other crustacean cox1 gene sequences (obtained from the National Center for Biotechnology Information). This work was designed to test the efficiency of this system in reconstructing phylogenetic relations among crustacean species belonging to four genera (Metapenaeus, Artemia, Daphnia and Calanus). The single nucleotide polymorphism and haplotype diversity were calculated for all estimated mtDNA fragments. The genetic distance values were 0.292, 0.015, 0.151, and 0.09 within the Metapenaeus, Calanus, Artemia, and Daphnia species, respectively. The reconstructed phylogenetic tree clusters into distinct clades. The cytochrome oxidase subunit 1 gene (cox1) proved to be a powerful system for reconstructing phylogenetic relations among the evaluated crustacean species.

Keywords: Crustacean, Genetics, cox1, phylogeny.

274 Optimal Model Order Selection for Transient Error Autoregressive Moving Average (TERA) MRI Reconstruction Method

Authors: Abiodun M. Aibinu, Athaur Rahman Najeeb, Momoh J. E. Salami, Amir A. Shafie

Abstract:

An alternative to the use of the Discrete Fourier Transform (DFT) for Magnetic Resonance Imaging (MRI) reconstruction is the use of parametric modeling techniques. This method is suitable for problems in which the image can be modeled by explicit, known source functions with a few adjustable parameters. Despite the reported success of the modeling technique as an alternative MRI reconstruction approach, two important problems challenge its applicability: model order estimation and model coefficient determination. In this paper, five suggested criteria for evaluating the model order are assessed: the Final Prediction Error (FPE), Akaike Information Criterion (AIC), Residual Variance (RV), Minimum Description Length (MDL), and the Hannan-Quinn (HNQ) criterion. These criteria were evaluated on MRI data sets using the Transient Error Reconstruction Algorithm (TERA). The result for each criterion is compared with that obtained by a fixed-order technique, using three measures of similarity. The results show that MDL gives the highest similarity to the fixed-order technique.
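
As an illustration of how such order-selection criteria are computed, the sketch below evaluates AIC and MDL for an AR model fitted by least squares on synthetic data; the exact criterion definitions and the ARMA/TERA fitting used in the paper may differ slightly.

```python
import numpy as np

def ar_residual_variance(x, p):
    """Fit an AR(p) model by least squares and return the residual variance."""
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ coef).var()

def aic(x, p):
    n = len(x) - p
    return n * np.log(ar_residual_variance(x, p)) + 2 * p

def mdl(x, p):
    n = len(x) - p
    return n * np.log(ar_residual_variance(x, p)) + p * np.log(n)

rng = np.random.default_rng(5)
x = np.zeros(500)
for t in range(2, 500):                      # synthetic AR(2) data
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
best_order = min(range(1, 11), key=lambda p: mdl(x, p))   # typically selects p = 2
```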

Keywords: Autoregressive Moving Average (ARMA), Magnetic Resonance Imaging (MRI), Parametric modeling, Transient Error.

273 Spectral Entropy Employment in Speech Enhancement based on Wavelet Packet

Authors: Talbi Mourad, Salhi Lotfi, Chérif Adnen

Abstract:

In this work, we are interested in developing a speech denoising tool using the discrete wavelet packet transform (DWPT). This tool will be employed in recognition, coding, and synthesis applications. For noise reduction, instead of applying the classical thresholding technique, some wavelet packet nodes are set to zero and the others are thresholded. To estimate the non-stationary noise level, we employ the spectral entropy. A comparison of our proposed technique with classical denoising methods based on thresholding and spectral subtraction is made in order to evaluate our approach. The experimental implementation uses speech signals corrupted by two kinds of noise, white noise and Volvo noise. The results of listening tests show that our proposed technique is better than spectral subtraction, and the SNR computations show its superiority over the classical thresholding method using the modified hard thresholding function based on the µ-law algorithm.

Keywords: Enhancement, spectral subtraction, SNR, discrete wavelet packet transform, spectral entropy, histogram.

272 Recognition and Reconstruction of Partially Occluded Objects

Authors: Michela Lecca, Stefano Messelodi

Abstract:

A new automatic system for the recognition and reconstruction of rescaled and/or rotated partially occluded objects is presented. The objects to be recognized are described by 2D views, and each view is occluded by several half-planes. The whole object views and their visible parts (linear cuts) are then stored in a database. To establish whether a region R of an input image represents a possibly occluded object, the system generates a set of linear cuts of R and compares them with the elements in the database. Each linear cut of R is associated with the most similar database linear cut. R is recognized as an instance of the object O if the majority of the linear cuts of R are associated with linear cuts of views of O. In the case of recognition, the system reconstructs the occluded part of R and determines the scale factor and the orientation in the image plane of the recognized object view. The system has been tested on two different datasets of objects, showing good performance both in terms of recognition and reconstruction accuracy.

Keywords: Occluded Object Recognition, Shape Reconstruction, Automatic Self-Adaptive Systems, Linear Cut.

271 Influence of Mass Flow Rate on Forced Convective Heat Transfer through a Nanofluid Filled Direct Absorption Solar Collector

Authors: Salma Parvin, M. A. Alim

Abstract:

The convective and radiative heat transfer performance and the entropy generation of forced convection through a direct absorption solar collector (DASC) are investigated numerically. Four different fluids, namely Cu-water nanofluid, Al2O3-water nanofluid, TiO2-water nanofluid, and pure water, are used as the working fluid. Entropy production is taken into account in addition to the collector efficiency and heat transfer enhancement. The penalty finite element method with Galerkin's weighted residual technique is used to solve the governing non-linear partial differential equations. Numerical simulations are performed for varying mass flow rate. The outcomes are presented in the form of isotherms, average output temperature, average Nusselt number, collector efficiency, average entropy generation, and Bejan number. The results show that the heat transfer rate and collector efficiency increase significantly as the mass flow rate is raised, up to a certain range.

Keywords: DASC, forced convection, mass flow rate, nanofluid.

270 Mean Shift-based Preprocessing Methodology for Improved 3D Buildings Reconstruction

Authors: Nikolaos Vassilas, Theocharis Tsenoglou, Djamchid Ghazanfarpour

Abstract:

In this work, we explore the capability of the mean shift algorithm as a powerful preprocessing tool for improving the quality of spatial data acquired from airborne scanners over densely built urban areas. On the one hand, high-resolution image data corrupted by noise from lossy compression techniques are appropriately smoothed while preserving the optical edges; on the other hand, low-resolution LiDAR data in the form of a normalized Digital Surface Model (nDSM) are upsampled through the joint mean shift algorithm. Experiments on both the edge-preserving smoothing and upsampling capabilities using synthetic RGB-z data show that the mean shift algorithm is superior to bilateral filtering as well as to other classical smoothing and upsampling algorithms. Application of the proposed methodology for the 3D reconstruction of buildings in a pilot region of Athens, Greece, results in a significant visual improvement of the 3D building block model.

Keywords: 3D buildings reconstruction, data fusion, data upsampling, mean shift.

269 Finger Vein Recognition using PCA-based Methods

Authors: Sepehr Damavandinejadmonfared, Ali Khalili Mobarakeh, Mohsen Pashna, Jiangping Gou, Sayedmehran Mirsafaie Rizi, Saba Nazari, Shadi Mahmoodi Khaniabadi, Mohamad Ali Bagheri

Abstract:

In this paper a novel algorithm is proposed to improve the accuracy of finger vein recognition. The performances of Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), and Kernel Entropy Component Analysis (KECA) within this algorithm are validated and compared with each other in order to determine which is the most appropriate for finger vein recognition.

Keywords: Biometrics, finger vein recognition, Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), Kernel Entropy Component Analysis (KECA).

268 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers that combines sensor data (temperature, humidity, power) with deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score, by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses a single autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
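
A minimal per-sensor LSTM autoencoder in the spirit of the model described above is sketched below, with the anomaly score taken from the per-window reconstruction error; the layer sizes, window length, and synthetic data are illustrative assumptions, and the random forest classification stage is omitted.

```python
import numpy as np
import tensorflow as tf

window, n_features = 60, 1                   # 60 time steps of one sensor

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, n_features)),     # encoder
    tf.keras.layers.RepeatVector(window),
    tf.keras.layers.LSTM(32, return_sequences=True),                # decoder
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")

normal = np.random.default_rng(6).normal(22.0, 0.3, (500, window, n_features))
model.fit(normal, normal, epochs=5, batch_size=32, verbose=0)        # train on normal windows only

test = normal[:10].copy()
test[0, 30:] += 5.0                          # inject a temperature-like anomaly
recon = model.predict(test, verbose=0)
score = np.mean((test - recon) ** 2, axis=(1, 2))   # reconstruction error per window
```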

Keywords: Anomaly detection, autoencoder, data centers, deep learning.

267 Beta-spline Surface Fitting to Multi-slice Images

Authors: Normi Abdul Hadi, Arsmah Ibrahim, Fatimah Yahya, Jamaludin Md. Ali

Abstract:

The Beta-spline is built on G2 continuity, which guarantees the smoothness of curves and surfaces generated with it. This curve is usually preferred for object design rather than reconstruction. This study, however, employs the Beta-spline in reconstructing a 3-dimensional G2 image of the Stanford Rabbit. The original data consist of multi-slice binary images of the rabbit. The result is then compared with related works that use other techniques.

Keywords: Beta-spline, multi-slice image, rectangular surface, 3D reconstruction

266 Energy Efficiency Index Applied to Reactive Systems

Authors: P. Góes, J. Manzi

Abstract:

This paper focuses on the development of an energy efficiency index for reactive systems, based on the First and Second Laws of Thermodynamics and giving particular consideration to the concept of maximum entropy. Among the requirements of such an energy efficiency index, practical feasibility is essential. To illustrate the performance of the proposed index, it was used as the decisive evaluation factor in the optimization of an industrial reactor. The results allow the conclusion that the energy efficiency index applied to the reactive system is consistent, because it extracts the information expected of an efficient indicator, and that it is useful as an analytical tool besides being feasible from a practical standpoint. Furthermore, it has proved to be much simpler to use than tools based on traditional methodologies.

Keywords: Energy efficiency, maximum entropy, reactive systems.

265 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm

Authors: Xiang Jianhong, Wang Cong, Wang Linyu

Abstract:

With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to the joint block sparse model (JBSM), and a comparative study of sparse transformations and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction in this transmission mode is increased by nearly 10%, while the runtime is reduced by about 30%.

Keywords: telemedicine, fetal electrocardiogram, compressed sensing, joint sparse reconstruction, block sparse signal

264 Face Texture Reconstruction for Illumination Variant Face Recognition

Authors: Pengfei Xiong, Lei Huang, Changping Liu

Abstract:

In illumination-variant face recognition, existing methods that extract the face albedo as a light-normalized image may lose extensive facial details, since the lighting template is discarded. To improve on this, a novel approach for realistic facial texture reconstruction that combines the original image and the albedo image is proposed. First, light subspaces of different identities are established from the given reference face images; then, by projecting the original and albedo images into each light subspace respectively, texture reference images with the corresponding lighting are reconstructed and two texture subspaces are formed. From the projections in the texture subspaces, facial texture under normal lighting can be synthesized. Owing to the inclusion of the original image, facial details are preserved along with the face albedo. In addition, image partitioning is applied to improve the synthesis performance. Experiments on the Yale B and CMU PIE databases demonstrate that this algorithm outperforms the others both in image representation and in face recognition.

Keywords: texture reconstruction, illumination, face recognition, subspaces

263 Improving Classification Accuracy with Discretization on Datasets Including Continuous Valued Features

Authors: Mehmet Hacibeyoglu, Ahmet Arslan, Sirzat Kahramanli

Abstract:

This study analyzes the effect of discretization on the classification of datasets containing continuous valued features. Six UCI datasets containing continuous valued features are discretized with an entropy-based discretization method. The performance improvement between the datasets with original features and the datasets with discretized features is compared using the k-nearest neighbors, Naive Bayes, C4.5, and CN2 data mining classification algorithms. As a result, the classification accuracies of the six datasets improve on average by 1.71% to 12.31%.
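
A single step of entropy-based discretization, choosing the cut point on a continuous feature that minimizes the class entropy of the two resulting bins, can be sketched as follows; the recursive MDLP stopping rule of the full method is not shown.

```python
import numpy as np

def class_entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def best_cut_point(values, labels):
    """Return the cut point with the lowest weighted class entropy of the two bins."""
    order = np.argsort(values)
    v, y = np.asarray(values)[order], np.asarray(labels)[order]
    best_cut, best_h = None, np.inf
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        w = i / len(v)
        h = w * class_entropy(y[:i]) + (1 - w) * class_entropy(y[i:])
        if h < best_h:
            best_cut, best_h = (v[i] + v[i - 1]) / 2.0, h
    return best_cut, best_h

values = [1.2, 1.5, 1.7, 3.1, 3.4, 3.8]
labels = [0, 0, 0, 1, 1, 1]
print(best_cut_point(values, labels))        # cut ≈ 2.4 with zero remaining class entropy
```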

Keywords: Data mining classification algorithms, entropy-based discretization method
