Search results for: normalization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 164

164 Applying Spanning Tree Graph Theory for Automatic Database Normalization

Authors: Chetneti Srisa-an

Abstract:

In the knowledge and data engineering field, the relational database is the most widely used repository for storing real-world data, and it has served that role around the world for decades. Normalization is the most important process in the analysis and design of relational databases. It aims at creating a set of relational tables with minimum data redundancy that preserve consistency and facilitate correct insertion, deletion, and modification. Despite its importance, very few algorithms have been developed for use in commercial automatic normalization tools, and normalization is still usually carried out manually rather than automatically. Moreover, for the large and complex databases common today, manual normalization is harder still. This paper presents a new, fully automated relational database normalization method. It first constructs a directed graph and its spanning tree, and then generates the 2NF, 3NF, and BCNF normal forms. The benefit of this new algorithm is that it can cope with a large set of complex functional dependencies.
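
The closure computation that underlies any automatic normalizer, graph-based or not, is compact enough to sketch. The following is a minimal, hypothetical illustration of attribute-set closure under functional dependencies, the test used when checking candidate keys and 2NF/3NF/BCNF violations; it is not the paper's spanning-tree algorithm itself, and the relation and FDs are invented for illustration.

```python
def closure(attrs, fds):
    """Compute the closure of a set of attributes under functional dependencies.

    fds is a list of (lhs, rhs) pairs, each a frozenset of attribute names.
    """
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the left side is already determined, absorb the right side.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Hypothetical relation R(A, B, C, D) with FDs A -> B and B -> C.
fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
print(sorted(closure({"A"}, fds)))  # A determines A, B, C but not D
```

An attribute set is a candidate key exactly when its closure covers the whole relation; repeated closure tests of this kind drive the decomposition into higher normal forms.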

Keywords: relational database, functional dependency, automatic normalization, primary key, spanning tree

Procedia PDF Downloads 353
163 Pose Normalization Network for Object Classification

Authors: Bingquan Shen

Abstract:

Convolutional Neural Networks (CNNs) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one has only limited viewpoints of a particular object for classification, we present a pose normalization architecture that transforms the object to viewpoints existing in the training dataset before classification, yielding better classification performance. We have demonstrated that this Pose Normalization Network (PNN) can capture the style of the target object and re-render it to a desired viewpoint. Moreover, we have shown that the PNN improves classification results on the 3D chairs dataset and the ShapeNet airplanes dataset when given only images at limited viewpoints, compared to a CNN baseline.

Keywords: convolutional neural networks, object classification, pose normalization, viewpoint invariant

Procedia PDF Downloads 352
162 Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements

Authors: Shagufta Tabassum

Abstract:

The study of the dielectric properties of a binary mixture of liquids is very useful for understanding the liquid structure, molecular interactions, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperative and molecular dynamics of H-bonded systems. In this paper, we discuss the basic calibration and normalization procedures for time-domain reflectometry measurements. Our approach is to explain the different types of errors that occur during TDR measurements and how these errors can be eliminated or minimized.

Keywords: time domain reflectometry measurement technique, cable and connector loss, oscilloscope loss, normalization technique

Procedia PDF Downloads 206
161 Normalizing Scientometric Indicators of Individual Publications Using Local Cluster Detection Methods on Citation Networks

Authors: Levente Varga, Dávid Deritei, Mária Ercsey-Ravasz, Răzvan Florian, Zsolt I. Lázár, István Papp, Ferenc Járai-Szabó

Abstract:

One of the major shortcomings of widely used scientometric indicators is that different disciplines cannot be compared with each other. The issue of cross-disciplinary normalization has long been discussed, but even the classification of publications into scientific domains poses problems. Structural properties of citation networks offer new possibilities; however, the large size and constant growth of these networks call for caution. Here we present a new tool that relies on the structural properties of citation networks to perform cross-field normalization of scientometric indicators of individual publications. Due to the large size of the networks, a systematic procedure for identifying scientific domains based on a local community detection algorithm is proposed. The algorithm is tested on different benchmark and real-world networks. Then, using this algorithm, the mechanism of the scientometric indicator normalization process is shown for a few indicators, such as the citation number, the P-index, and a local version of the PageRank indicator. The fat-tailed distribution of the article indicators enables us to perform the indicator normalization process successfully.

Keywords: citation networks, cross-field normalization, local cluster detection, scientometric indicators

Procedia PDF Downloads 203
160 Investigating Data Normalization Techniques in Swarm Intelligence Forecasting for Energy Commodity Spot Price

Authors: Yuhanis Yusof, Zuriani Mustaffa, Siti Sakira Kamaruddin

Abstract:

Data mining is a fundamental technique for identifying patterns in large data sets. The extracted facts and patterns contribute to various domains such as marketing, forecasting, and medicine. Prior to mining, data are consolidated so that the resulting mining process is more efficient. This study investigates the effect of different data normalization techniques, namely min-max, z-score, and decimal scaling, on swarm-based forecasting models. The recent swarm intelligence algorithms employed include the Grey Wolf Optimizer (GWO) and Artificial Bee Colony (ABC). Forecasting models are then developed to predict the daily spot prices of crude oil and gasoline. Results showed that GWO works better with z-score normalization, while ABC produces better accuracy with min-max. Nevertheless, GWO is superior to ABC, as its model generates the highest accuracy for both crude oil and gasoline prices. This result indicates that GWO is a promising competitor in the family of swarm intelligence algorithms.
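
The three normalization techniques compared in this study are simple to state. A minimal sketch, with hypothetical price data and NumPy for brevity (the decimal-scaling digit count assumes magnitudes of at least 1):

```python
import numpy as np

def min_max(x, lo=0.0, hi=1.0):
    """Rescale values linearly into [lo, hi]."""
    x = np.asarray(x, dtype=float)
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

def z_score(x):
    """Center to zero mean and scale to unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    """Divide by the smallest power of ten that brings all |values| below 1."""
    x = np.asarray(x, dtype=float)
    j = len(str(int(np.abs(x).max())))  # digit count of the largest magnitude
    return x / 10.0 ** j

prices = [55.2, 61.8, 47.9, 70.3]      # hypothetical daily spot prices
print(min_max(prices))                 # values span [0, 1]
print(decimal_scaling(prices))         # all magnitudes below 1
```

Which rescaling suits which optimizer is exactly the empirical question the abstract addresses; the transforms themselves are interchangeable one-liners.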

Keywords: artificial bee colony, data normalization, forecasting, Grey Wolf optimizer

Procedia PDF Downloads 475
159 Normalizing Logarithms of Realized Volatility in an ARFIMA Model

Authors: G. L. C. Yap

Abstract:

Modelling realized volatility with high-frequency returns is popular, as realized volatility is an unbiased and efficient estimator of return volatility. A computationally simple model fits the logarithms of the realized volatilities with a fractionally integrated long-memory Gaussian process. The Gaussianity assumption simplifies parameter estimation using the Whittle approximation. Nonetheless, this assumption may not be met in finite samples, and there may be a need to normalize the financial series. Based on the empirical indices S&P500 and DAX, this paper examines the performance of the linear volatility model pre-treated with normalization compared to its existing counterpart. The empirical results show that including normalization as a pre-treatment procedure yields forecast performance that outperforms the existing model in terms of both statistical and economic evaluations.

Keywords: Gaussian process, long-memory, normalization, value-at-risk, volatility, Whittle estimator

Procedia PDF Downloads 354
158 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations

Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra

Abstract:

The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak), or dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they could misrepresent the absolute magnitude of force generated by the muscle and affect the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS); the bioelectric methods normalized to the mean and to the peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability was compared between disparate cohorts of OLD (76.6 yrs, N=11) and YOUNG (26.6 yrs, N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb.
EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 with p < 0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated-measures ANOVA showed a main effect of age in MG for EMGPeak, but no other main effects were observed. Age-by-phase interactions in EMG amplitude between YOUNG and OLD differed by method, leading to different statistical interpretations: EMGTS normalization characterized the fewest differences (four phases across all 5 muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, representing inter-individual variability, was greatest for EMGTS and lowest for EMGMean, with EMGPeak slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization retains inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
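
The two bioelectric normalizations compared in this study reduce to dividing the signal by one of its own statistics. A minimal sketch, assuming a hypothetical 16-bin phase-averaged envelope rather than real EMG:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical phase-averaged EMG envelope: 16 bins over one gait cycle (mV).
emg_bins = rng.uniform(0.05, 0.6, size=16)

emg_peak = emg_bins / emg_bins.max()    # EMGPeak: normalize to the dynamic peak
emg_mean = emg_bins / emg_bins.mean()   # EMGMean: normalize to the dynamic mean

print(emg_peak.max())    # 1.0 by construction
print(emg_mean.mean())   # approximately 1.0 by construction
```

Because both divisors come from the task itself, each subject's profile is rescaled relative to their own activity, which is precisely why between-subject amplitude differences can be flattened, the limitation the abstract discusses.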

Keywords: electromyography, EMG normalization, functional EMG, older adults

Procedia PDF Downloads 91
157 A New Scheme for Chain Code Normalization in Arabic and Farsi Scripts

Authors: Reza Shakoori

Abstract:

This paper presents a structural correction of Arabic and Persian strokes by manipulating their chain codes, in order to improve the rate and performance of Persian and Arabic handwritten word recognition systems. It collects pure and effective features to represent a character with one consolidated feature vector, and it reduces variations in order to decrease the number of training samples and increase the chance of successful classification. Our results also show how the proposed approaches can simplify classification, and consequently recognition, by reducing variations and possible noise in the chain code while preserving the orientation of characters and their backbone structures.
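
The paper's normalization scheme is its own contribution, but two classical chain-code normalization steps can be sketched for context: rotation normalization via the first-difference code and start-point normalization via the lexicographically smallest rotation. The contour below is a hypothetical Freeman-coded square, not data from the paper:

```python
def first_difference(code, directions=8):
    """Rotation-normalize a Freeman chain code via its first difference."""
    return [(code[(i + 1) % len(code)] - code[i]) % directions
            for i in range(len(code))]

def start_normalize(code):
    """Start-point-normalize: pick the lexicographically smallest rotation."""
    rotations = [code[i:] + code[:i] for i in range(len(code))]
    return min(rotations)

square = [0, 0, 6, 6, 4, 4, 2, 2]    # hypothetical 8-connected contour
rotated = [2, 2, 0, 0, 6, 6, 4, 4]   # the same shape rotated 90 degrees
print(start_normalize(first_difference(square)) ==
      start_normalize(first_difference(rotated)))  # True: same normalized code
```

After both steps, the same stroke yields the same code regardless of rotation or where the contour trace began, which is the kind of variation reduction the abstract aims at.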

Keywords: Arabic, chain code normalization, OCR systems, image processing

Procedia PDF Downloads 404
156 Extensions of Schwarz Lemma in the Half-Plane

Authors: Nicolae Pascu

Abstract:

Aside from being a fundamental tool in complex analysis, the Schwarz Lemma, which was finalized in its most complete form at the beginning of the last century, generated an important area of research in various fields of mathematics, one that continues to advance even today. We present some properties of analytic functions in the half-plane which satisfy the conditions of the classical Schwarz Lemma (Carathéodory functions) and obtain a generalization of the well-known Aleksandrov-Sobolev Lemma for analytic functions in the half-plane (the correspondent of the Schwarz-Pick Lemma from the unit disk). Using this Schwarz-type lemma, we obtain a characterization of the entire class of Carathéodory functions, which might be of independent interest. We prove two monotonicity properties for Carathéodory functions that do not depend upon their normalization at infinity (the hydrodynamic normalization). The method is based on conformal mapping arguments for analytic functions in the half-plane satisfying appropriate conditions, in the spirit of the Schwarz Lemma. Our main results give estimates for the modulus and the argument of the entire class of Carathéodory functions. As applications, we give several extensions of the Julia-Wolff-Carathéodory Lemma in a half-strip and show that our results are sharp.
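
For context, the half-plane transcription of the Schwarz-Pick Lemma alluded to above can be stated in its standard textbook form (this is the classical statement, not the paper's generalization): for an analytic self-map $f$ of the upper half-plane $\mathbb{H}$,

```latex
\frac{|f(z)-f(w)|}{|f(z)-\overline{f(w)}|}
\;\le\;
\frac{|z-w|}{|z-\overline{w}|},
\qquad
|f'(z)| \;\le\; \frac{\operatorname{Im} f(z)}{\operatorname{Im} z},
\qquad z, w \in \mathbb{H},
```

with equality if and only if $f$ is a Möbius automorphism of $\mathbb{H}$.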

Keywords: Schwarz lemma, Julia-Wolff-Carathéodory lemma, analytic function, normalization condition, Carathéodory function

Procedia PDF Downloads 218
155 Evaluating the Performance of Color Constancy Algorithm

Authors: Damanjit Kaur, Avani Bhatia

Abstract:

Color constancy is significant for human vision since color is a pictorial cue that helps in solving different vision tasks such as tracking, object recognition, or categorization. Therefore, several computational methods have tried to simulate human color constancy abilities to stabilize machine color representations. Two different kinds of methods have been used, i.e., normalization and constancy. While color normalization creates a new representation of the image by canceling illuminant effects, color constancy directly estimates the color of the illuminant in order to map the image colors to a canonical version. Color constancy is the capability to determine the colors of objects independent of the color of the light source. This research work studies most of the well-known color constancy algorithms, such as white patch and gray world.
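
The two algorithms named above are short enough to sketch. A hedged illustration on hypothetical data; the scale-to-the-global-mean variant of gray world shown here is one common formulation among several:

```python
import numpy as np

def gray_world(img):
    """Gray-world: scale each channel so the channel means become equal."""
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)
    return img * (means.mean() / means)

def white_patch(img):
    """White-patch (max-RGB): scale so each channel's maximum maps to 1."""
    img = img.astype(float)
    maxima = img.reshape(-1, 3).max(axis=0)
    return img / maxima

# Hypothetical image rendered under a reddish illuminant.
rng = np.random.default_rng(1)
img = rng.uniform(0, 1, size=(8, 8, 3)) * np.array([1.0, 0.7, 0.5])
corrected = gray_world(img)
print(corrected.reshape(-1, 3).mean(axis=0))  # near-equal channel means
```

Both methods estimate the illuminant from a single global statistic, which is what makes them fast baselines and also what limits them on scenes whose true average is far from gray.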

Keywords: color constancy, gray world, white patch, modified white patch

Procedia PDF Downloads 319
154 Author Name Disambiguation for Biomedical Literature

Authors: Parthiban Srinivasan

Abstract:

PubMed provides online access to the National Library of Medicine database (MEDLINE) and other publications, which together contain close to 25 million scientific citations from 1865 to the present, comprising close to 80 million author name instances. For any work of literature, a fundamental issue is to identify the individual(s) who wrote it and, conversely, to identify all of the works that belong to a given individual. Due to the lack of universal standards for name information, there are two aspects of name ambiguity: name synonymy (a single author with multiple name representations) and name homonymy (multiple authors sharing the same name representation). In this talk, we present some results from our extensive work on author name disambiguation for PubMed citations. Information will be presented on the effectiveness and shortcomings of the different aspects of successful name disambiguation, such as parsing, validation, standardization, and normalization.

Keywords: disambiguation, normalization, parsing, PubMed

Procedia PDF Downloads 300
153 Assessment of Pre-Processing Influence on Near-Infrared Spectra for Predicting the Mechanical Properties of Wood

Authors: Aasheesh Raturi, Vimal Kothiyal, P. D. Semalty

Abstract:

We studied the mechanical properties of Eucalyptus tereticornis using FT-NIR spectroscopy. First, the spectra were pre-processed to eliminate useless information; then a prediction model was constructed by partial least squares regression. To study the influence of pre-processing on the prediction of mechanical properties in NIR analysis of wood samples, we applied various pre-treatment methods: straight line subtraction, constant offset elimination, vector normalization, min-max normalization, multiplicative scatter correction, first derivative, second derivative, and their combinations with other treatments, such as first derivative + straight line subtraction, first derivative + vector normalization, and first derivative + multiplicative scatter correction. For each combination of pre-processing method and NIR region, the RMSECV, RMSEP, and optimum number of factors (rank) were obtained during the optimization process of model development. More than 350 combinations were evaluated during this process. More than one pre-processing method gave good calibration/cross-validation and prediction/test models, but only the best calibration/cross-validation and prediction/test models are reported here. The results show that one can safely use the NIR region between 4,000 and 7,500 cm-1 with the straight line subtraction, constant offset elimination, first derivative, and second derivative pre-processing methods, which were found to be the most appropriate for model development.
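
Two of the pre-treatments named above are easy to sketch. A hedged illustration on a synthetic spectrum; the exact variants implemented in commercial NIR software may differ in detail (for instance, in how vector normalization handles centering):

```python
import numpy as np

def straight_line_subtraction(spectrum, wavenumbers):
    """Remove a linear baseline fitted to the whole spectrum."""
    slope, intercept = np.polyfit(wavenumbers, spectrum, 1)
    return spectrum - (slope * wavenumbers + intercept)

def vector_normalize(spectrum):
    """Center the spectrum, then scale it to unit Euclidean norm."""
    centered = spectrum - spectrum.mean()
    return centered / np.linalg.norm(centered)

def first_derivative(spectrum, wavenumbers):
    """Simple finite-difference first derivative."""
    return np.gradient(spectrum, wavenumbers)

# Synthetic spectrum on the 4,000-7,500 cm-1 region used in the paper:
# one absorption peak plus a linear baseline drift.
wn = np.linspace(4000, 7500, 500)
spec = np.exp(-((wn - 5200) / 300) ** 2) + 1e-4 * wn
pre = vector_normalize(straight_line_subtraction(spec, wn))
print(round(float(np.linalg.norm(pre)), 6))  # 1.0
```

Chaining treatments, as in the "first derivative + vector normalization" combinations above, is just function composition over each spectrum before the PLS step.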

Keywords: FT-NIR, mechanical properties, pre-processing, PLS

Procedia PDF Downloads 360
152 Enhancement of Underwater Haze Image with Edge Reveal Using Pixel Normalization

Authors: M. Dhana Lakshmi, S. Sakthivel Murugan

Abstract:

As light passes from source to observer in the water medium, it is scattered by suspended particulate matter. This scattering effect plagues the captured images with non-uniform illumination, blurred details, halo artefacts, weak edges, etc. To overcome this, pixel normalization with an Amended Unsharp Mask (AUM) filter is proposed to enhance the degraded image. To validate the robustness of the proposed technique irrespective of atmospheric light, the considered datasets were collected at two locations. For those images, the maximum and minimum pixel intensity values are computed and the intensities normalized; then the AUM filter is applied to strengthen the blurred edges. Finally, the enhanced image is obtained with good illumination and contrast. Thus, the proposed technique removes the effect of scattering, i.e., performs de-hazing, and restores the perceptual information with enhanced edge detail. Both qualitative and quantitative analyses were performed using the standard no-reference metrics underwater image sharpness measure (UISM) and underwater image quality measure (UIQM), which assess color, sharpness, and contrast, for images from both locations. It is observed that the proposed technique shows overwhelming performance compared to deep-learning-based enhancement networks and traditional techniques, in an adaptive manner.
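
The Amended Unsharp Mask (AUM) is the paper's own contribution; as a hedged sketch of the pipeline's shape, min-max pixel normalization followed by a plain (unamended) unsharp mask with a box blur looks like this, on hypothetical data:

```python
import numpy as np

def pixel_normalize(img):
    """Stretch intensities to [0, 1] using the image's min and max."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min())

def unsharp_mask(img, amount=1.0, radius=1):
    """Plain unsharp mask: img + amount * (img - blurred)."""
    k = 2 * radius + 1
    kernel = np.ones((k, k)) / (k * k)
    pad = np.pad(img, radius, mode="edge")
    blurred = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = (pad[i:i + k, j:j + k] * kernel).sum()
    return img + amount * (img - blurred)

rng = np.random.default_rng(2)
hazy = rng.uniform(40, 90, size=(16, 16))  # hypothetical low-contrast patch
enhanced = unsharp_mask(pixel_normalize(hazy))
```

Normalization restores the dynamic range compressed by scattering; the high-boost term then amplifies local differences, which is what strengthens the weak edges the abstract describes.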

Keywords: underwater drone imagery, pixel normalization, thresholding, masking, unsharp mask filter

Procedia PDF Downloads 194
151 A Normalized Non-Stationary Wavelet Based Analysis Approach for a Computer Assisted Classification of Laryngoscopic High-Speed Video Recordings

Authors: Mona K. Fehling, Jakob Unger, Dietmar J. Hecker, Bernhard Schick, Joerg Lohscheller

Abstract:

Voice disorders originate from disturbances of the vibration patterns of the two vocal folds located within the human larynx. Consequently, the visual examination of vocal fold vibrations is an integral part of the clinical diagnostic process. For an objective analysis of the vocal fold vibration patterns, the two-dimensional vocal fold dynamics are captured during sustained phonation using an endoscopic high-speed camera. In this work, we present an approach allowing a fully automatic analysis of the high-speed video data, including a computerized classification of healthy and pathological voices. The approach is based on a wavelet-based analysis of so-called phonovibrograms (PVG), which are extracted from the high-speed videos and comprise the entire two-dimensional vibration pattern of each vocal fold individually. Using a principal component analysis (PCA) strategy, a low-dimensional feature set is computed from each phonovibrogram. From the PCA space, clinically relevant measures can be derived that objectively quantify vibration abnormalities. In the first part of the work, it is shown that, using a machine learning approach, the derived measures are suitable for distinguishing automatically between healthy and pathological voices. Within the approach, the formation of the PCA space, and consequently the extracted quantitative measures, depend on the clinical data used to compute the principal components. Therefore, in the second part of the work, we propose a strategy to achieve a normalization of the PCA space by registering it to a coordinate system using a set of synthetically generated vibration patterns. The results show that, owing to the normalization step, potential ambiguity of the parameter space can be eliminated. The normalization further allows a direct comparison of research results based on PCA spaces obtained from different clinical subjects.

Keywords: wavelet-based analysis, multiscale product, normalization, computer-assisted classification, high-speed laryngoscopy, vocal fold analysis, phonovibrogram

Procedia PDF Downloads 265
150 Improved Pitch Detection Using Fourier Approximation Method

Authors: Balachandra Kumaraswamy, P. G. Poonacha

Abstract:

Automatic Music Information Retrieval has been one of the challenging topics of research for a few decades now, with several interesting approaches reported in the literature. In this paper, we develop a pitch extraction method based on a finite Fourier series approximation to the given window of samples, estimating pitch as the fundamental period of that approximation. The method uses an analysis of the strength of the harmonics present in the signal to reduce octave as well as harmonic errors. Its performance is compared with three well-known methods for pitch extraction, namely Yin, Windowed Special Normalization of the Auto-Correlation Function, and the Harmonic Product Spectrum. Our study with artificially created signals as well as music files shows that the Fourier approximation method gives a much better estimate of pitch, with fewer octave and harmonic errors.
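
Of the baselines mentioned, the Harmonic Product Spectrum is compact enough to sketch. This is a textbook formulation rather than the paper's Fourier approximation method, and the test signal and parameters are hypothetical:

```python
import numpy as np

def hps_pitch(signal, fs, harmonics=4):
    """Estimate pitch via the harmonic product spectrum (HPS):
    multiply the magnitude spectrum by its downsampled copies so that
    the fundamental, where all harmonics align, stands out."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    hps = spectrum.copy()
    for h in range(2, harmonics + 1):
        n = len(spectrum) // h
        hps[:n] *= spectrum[::h][:n]
    peak = np.argmax(hps[1:]) + 1   # skip the DC bin
    return peak * fs / len(signal)

fs = 8000
t = np.arange(0, 0.25, 1 / fs)
# Hypothetical 220 Hz tone with decaying harmonics.
tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 5))
print(hps_pitch(tone, fs))  # close to 220 Hz
```

HPS is prone to the octave errors the abstract mentions when harmonics are weak or missing, which is exactly the failure mode harmonic-strength analysis tries to suppress.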

Keywords: pitch, fourier series, yin, normalization of the auto-correlation function, harmonic product, mean square error

Procedia PDF Downloads 412
149 A Neural Network Classifier for Identifying Duplicate Image Entries in Real-Estate Databases

Authors: Sergey Ermolin, Olga Ermolin

Abstract:

A deep convolutional neural network with triplet loss is used to identify duplicate images in real-estate advertisements in the presence of image artifacts such as watermarking, cropping, hue/brightness adjustment, and others. The effects of batch normalization, spatial dropout, and various convergence methodologies on the resulting detection accuracy are discussed. For a comparative return-on-investment study (per industry request), end-to-end performance is benchmarked on both Nvidia Titan GPUs and Intel Xeon CPUs. A new real-estate dataset from the San Francisco Bay Area is used for this work. Sufficient duplicate detection accuracy is achieved to supplement other database-grounded methods of duplicate removal. The implemented method is used in a proof-of-concept project in the real-estate industry.
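
Batch normalization, one of the ingredients whose effect the study measures, can be sketched in a few lines. This shows training-mode statistics only; a real layer additionally tracks running averages for inference, and the data here are hypothetical activations:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization (training mode): normalize each feature over the
    batch, then apply the learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(3)
activations = rng.normal(5.0, 3.0, size=(32, 8))  # batch of 32, 8 features
out = batch_norm(activations)
print(out.mean(axis=0))  # per-feature means near zero
```

By keeping the distribution of each feature stable across minibatches, the layer tends to smooth optimization, which is why its interaction with dropout and convergence schedules is worth benchmarking as the abstract does.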

Keywords: visual recognition, convolutional neural networks, triplet loss, spatial batch normalization with dropout, duplicate removal, advertisement technologies, performance benchmarking

Procedia PDF Downloads 338
148 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on an encoding scheme (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) applied to low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects of differing size, category, layout, number, and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme, while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the two CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the scales are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different numbers of features are extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
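
The scale-wise normalization step described above can be sketched as follows, assuming the common power + L2 Fisher vector post-processing; the paper's exact formulation may differ, and the per-scale vectors here are synthetic:

```python
import numpy as np

def power_l2_normalize(fv, alpha=0.5):
    """Standard Fisher vector post-processing: signed power, then L2 norm."""
    fv = np.sign(fv) * np.abs(fv) ** alpha
    return fv / (np.linalg.norm(fv) + 1e-12)

def scale_wise_merge(fvs_per_scale):
    """Normalize the Fisher vector of each scale separately, then
    average-pool, so no scale dominates merely because it contributed
    more local features."""
    normalized = [power_l2_normalize(fv) for fv in fvs_per_scale]
    return np.mean(normalized, axis=0)

rng = np.random.default_rng(4)
# Hypothetical Fisher vectors from two scales with very different magnitudes.
fv_scales = [rng.normal(0, 1, 256), rng.normal(0, 50, 256)]
merged = scale_wise_merge(fv_scales)
print(merged.shape)  # (256,)
```

Normalizing per scale before pooling is the design choice that balances scales with unequal feature counts, the motivation the abstract gives for the step.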

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 331
147 Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases

Authors: N. W. U. D. Chathurani, Shlomo Geva, Vinod Chandran, Proboda Rajapaksha

Abstract:

Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distribution of features and on relative feature weights computed at query-processing time. It is a simple yet effective approach that is free from the effects of the features' dimensions, ranges, internal feature normalization, and the choice of distance measure, and it can easily be adopted with any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset, and Oliva and Torralba) and compared with existing approaches. Its performance is confirmed by significantly improved results in comparison with the independently evaluated baselines of previously proposed feature fusion approaches.

Keywords: feature fusion, image retrieval, membership function, normalization

Procedia PDF Downloads 345
146 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying

Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra

Abstract:

Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples, and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured by an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. The calibration and validation sets were designed for the conception and evaluation of the method's adequacy in the moisture content range of 10 to 15 percent (w.b.) of the pasta. The prediction models, based on partial least squares (PLS) regression, were developed in the near-infrared. Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered in selecting among three pre-processing methods (vector normalization, minimum-maximum normalization, and multiplicative scatter correction). Spectra of the pasta samples were treated with the different mathematical pre-treatments before being used to build models between the spectral information and moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined by traditional methods (R2 = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775). The MMN pre-processing method was found most suitable, and a maximum coefficient of determination (R2) of 0.9875 was obtained for the calibration model developed.

Keywords: FT-NIR, pasta, moisture determination, food engineering

Procedia PDF Downloads 258
145 Preliminary Design of Maritime Energy Management System: Naval Architectural Approach to Resolve Recent Limitations

Authors: Seyong Jeong, Jinmo Park, Jinhyoun Park, Boram Kim, Kyoungsoo Ahn

Abstract:

Energy management in the maritime industry is being driven by economics and by new legislative actions taken by the International Maritime Organization (IMO) and the European Union (EU). In response, various performance-monitoring methodologies and data collection practices have been examined by different stakeholders. While many assorted advancements in operation and technology are applicable, their adoption in the shipping industry remains limited. This slow uptake can be attributed to many different barriers, such as data analysis problems, misreported data, and feedback problems. This study presents a conceptual design of an energy management system (EMS) and proposes a methodology to resolve these limitations (e.g., data normalization using naval architectural evaluation, management of misrepresented data, and feedback from shore to ship through management of performance analysis history). We expect this system to enable even short-term charterers to assess ship performance properly and to implement sustainable fleet control.

Keywords: data normalization, energy management system, naval architectural evaluation, ship performance analysis

Procedia PDF Downloads 449
144 Computer Aide Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging

Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi

Abstract:

Introduction: Thyroid nodules have an incidence of 33-68% in the general population. More than 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and provide optimal treatment. Between the medical imaging methods, Ultrasound is the chosen imaging technique for assessment of thyroid nodules. The confirming of the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB). So, current management has morbidity and non-zero mortality. Objective: To explore diagnostic potential of automatic texture analysis (TA) methods in differentiation benign and malignant thyroid nodules by ultrasound imaging in order to help for reliable diagnosis and monitoring of the thyroid nodules in their early stages with no need biopsy. Material and Methods: The thyroid US image database consists of 70 patients (26 benign and 44 malignant) which were reported by Radiologist and proven by the biopsy. Two slices per patient were loaded in Mazda Software version 4.6 for automatic texture analysis. Regions of interests (ROIs) were defined within the abnormal part of the thyroid nodules ultrasound images. Gray levels within an ROI normalized according to three normalization schemes: N1: default or original gray levels, N2: +/- 3 Sigma or dynamic intensity limited to µ+/- 3σ, and N3: present intensity limited to 1% - 99%. Up to 270 multiscale texture features parameters per ROIs per each normalization schemes were computed from well-known statistical methods employed in Mazda software. From the statistical point of view, all calculated texture features parameters are not useful for texture analysis. So, the features based on maximum Fisher coefficient and the minimum probability of classification error and average correlation coefficients (POE+ACC) eliminated to 10 best and most effective features per normalization schemes. 
We analyzed these features under two standardization states, standard (S) and non-standard (NS), with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors, and the confusion matrix and receiver operating characteristic (ROC) curve analysis were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors of discrimination power and on the classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction, and NDA texture analysis yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method that can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
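The three gray-level normalization schemes (N1-N3) described above can be sketched as a single intensity-windowing routine. This is an illustrative Python sketch, not Mazda's implementation; the function name and the 0-255 output range are assumptions:

```python
import numpy as np

def normalize_roi(roi, scheme="n1"):
    """Rescale ROI gray levels under one of the three schemes described
    in the abstract (illustrative sketch; output bit depth is assumed)."""
    roi = roi.astype(float)
    if scheme == "n1":    # N1: original gray levels, full dynamic range
        lo, hi = roi.min(), roi.max()
    elif scheme == "n2":  # N2: dynamic intensity limited to mean +/- 3 sigma
        mu, sigma = roi.mean(), roi.std()
        lo, hi = mu - 3 * sigma, mu + 3 * sigma
    elif scheme == "n3":  # N3: intensity limited to the 1%-99% percentile range
        lo, hi = np.percentile(roi, [1, 99])
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    clipped = np.clip(roi, lo, hi)
    # map the chosen intensity window onto 0..255
    return (clipped - lo) / max(hi - lo, 1e-12) * 255.0
```

Each scheme changes which intensity window the texture statistics see, which is why the abstract reports results per normalization scheme.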

Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA

Procedia PDF Downloads 279
143 Impact Tensile Mechanical Properties of 316L Stainless Steel at Different Strain Rates

Authors: Jiawei Chen, Jia Qu, Dianwei Ju

Abstract:

316L stainless steel has good mechanical and technological properties and has been widely used in shipbuilding and aerospace manufacturing. In order to understand the effect of strain rate on the yield limit of 316L stainless steel and the constitutive relationship of the material at different strain rates, this paper used an INSTRON-4505 electronic universal testing machine to study the mechanical properties of tensile specimens under quasi-static conditions. Meanwhile, a Zwick-Roell RKP450 instrumented impact tester was used to test tensile specimens at different strain rates. Through these two experimental studies, the relationship between the true stress-strain and the engineering stress-strain curves at different strain rates was obtained. The results show that the tensile yield point of 316L stainless steel increases with increasing strain rate, and that the true stress-strain curves of 316L stainless steel normalize better across strain rates than the engineering stress-strain curves. The true stress-strain curves can therefore be used in practical impact-tension engineering to improve safety.
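The conversion from engineering to true stress-strain that underlies this comparison can be sketched as follows, under the usual constant-volume assumption (valid only up to the onset of necking); the function name is illustrative:

```python
import math

def true_stress_strain(eng_stress, eng_strain):
    """Convert engineering stress/strain to true stress/strain assuming
    constant specimen volume (valid only up to necking).

    true strain  = ln(1 + engineering strain)
    true stress  = engineering stress * (1 + engineering strain)
    """
    true_strain = math.log(1.0 + eng_strain)
    true_stress = eng_stress * (1.0 + eng_strain)
    return true_stress, true_strain
```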

Keywords: impact tension, 316L stainless steel, strain rate, true stress-strain, normalization

Procedia PDF Downloads 280
142 Suppression Subtractive Hybridization Technique for Identification of the Differentially Expressed Genes

Authors: Tuhina-khatun, Mohamed Hanafi Musa, Mohd Rafii Yosup, Wong Mui Yun, Aktar-uz-Zaman, Mahbod Sahebi

Abstract:

The suppression subtractive hybridization (SSH) method is a valuable tool for identifying differentially regulated genes, such as disease-specific or tissue-specific genes important for cellular growth and differentiation. It is a widely used method for separating DNA molecules that distinguish two closely related DNA samples, and it is one of the most powerful and popular methods for generating subtracted cDNA or genomic DNA libraries. It is based primarily on a suppression polymerase chain reaction (PCR) technique and combines normalization and subtraction in a single procedure. The normalization step equalizes the abundance of DNA fragments within the target population, and the subtraction step excludes sequences that are common to the populations being compared. This dramatically increases the probability of obtaining low-abundance, differentially expressed cDNAs or genomic DNA fragments and simplifies analysis of the subtracted library. The SSH technique is applicable to many comparative and functional genetic studies for the identification of disease-related, developmental, tissue-specific, or other differentially expressed genes, as well as for the recovery of genomic DNA fragments distinguishing the samples under comparison.

Keywords: suppression subtractive hybridization, differentially expressed genes, disease specific genes, tissue specific genes

Procedia PDF Downloads 433
141 An Improved Convolutional Deep Learning Model for Predicting Trip Mode Scheduling

Authors: Amin Nezarat, Naeime Seifadini

Abstract:

Trip mode selection is a behavioral characteristic of passengers with immense importance for travel demand analysis, transportation planning, and traffic management. Identification of the trip mode distribution allows transportation authorities to adopt appropriate strategies to reduce travel time, traffic, and air pollution. The majority of existing trip mode inference models operate on human-selected features and traditional machine learning algorithms. However, human-selected features are sensitive to changes in traffic and environmental conditions and susceptible to personal biases, which can make them inefficient. One way to overcome these problems is to use neural networks capable of extracting high-level features from raw input. In this study, a convolutional neural network (CNN) architecture is used to predict the trip mode distribution from raw GPS trajectory data. The key innovation of this paper is the design of the layout of the CNN's input layer, together with a normalization operation, in a way that is not only compatible with the CNN architecture but can also represent the fundamental features of motion, including speed, acceleration, jerk, and bearing rate. The highest prediction accuracy achieved with the proposed configuration for the convolutional neural network with batch normalization is 85.26%.
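The motion features named above (speed, acceleration, jerk, bearing rate) can be derived from raw GPS fixes by successive time differencing. A minimal Python sketch, assuming a haversine distance model; the helper names and the zero-padding of the first sample are illustrative assumptions, not the paper's code:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in meters

def _haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def _bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from the first fix to the second."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
    return math.degrees(math.atan2(y, x)) % 360.0

def _diff(series, times):
    """First difference with respect to time (index 0 padded with 0)."""
    out = [0.0]
    for i in range(1, len(series)):
        dt = max(times[i] - times[i - 1], 1e-9)
        out.append((series[i] - series[i - 1]) / dt)
    return out

def motion_features(lats, lons, times):
    """Speed, acceleration, jerk, and bearing-rate channels from a trajectory."""
    n = len(lats)
    speed, brg = [0.0] * n, [0.0] * n
    for i in range(1, n):
        dt = max(times[i] - times[i - 1], 1e-9)
        speed[i] = _haversine(lats[i - 1], lons[i - 1], lats[i], lons[i]) / dt
        brg[i] = _bearing(lats[i - 1], lons[i - 1], lats[i], lons[i])
    accel = _diff(speed, times)
    jerk = _diff(accel, times)
    bearing_rate = [abs(v) for v in _diff(brg, times)]
    return speed, accel, jerk, bearing_rate
```

Stacking these per-point channels side by side yields the kind of image-like input layer a CNN can consume.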

Keywords: predicting, deep learning, neural network, urban trip

Procedia PDF Downloads 138
140 Dynamic Gabor Filter Facial Features-Based Recognition of Emotion in Video Sequences

Authors: T. Hari Prasath, P. Ithaya Rani

Abstract:

In the world of visual technology, recognizing emotions from face images is a challenging task, and several related methods have not utilized dynamic facial features effectively enough for high performance. This paper proposes a high-performance method for emotion recognition using dynamic facial features. Initially, local features are captured by Gabor filters at different scales and orientations in each frame to find the position and scale of the face parts against different backgrounds. The Gabor features are sent to an ensemble classifier for detecting Gabor facial features. The regions of dynamic features are captured from the Gabor facial features in consecutive frames, representing the dynamic variations of facial appearance. Each region of dynamic features is normalized using the Z-score normalization method and further encoded into binary pattern features with the help of threshold values. The binary features are passed to a multi-class AdaBoost classifier, trained on a database containing happiness, sadness, surprise, fear, anger, disgust, and neutral expressions, to classify the discriminative dynamic features for emotion recognition. The developed method is evaluated on the Ryerson Multimedia Research Lab and Cohn-Kanade databases and shows significant performance improvement, owing to its dynamic features, when compared with existing methods.
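The Z-score normalization and threshold-encoding steps described above can be sketched as follows; the threshold value and function names are illustrative assumptions, not the paper's implementation:

```python
import statistics

def z_score(values):
    """Z-score normalization: center on the mean, scale by the standard deviation."""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values) or 1.0  # guard against zero variance
    return [(v - mu) / sigma for v in values]

def binarize(values, threshold=0.0):
    """Encode normalized features as a binary pattern via a threshold."""
    return [1 if v > threshold else 0 for v in values]
```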

Keywords: detecting face, Gabor filter, multi-class AdaBoost classifier, Z-score normalization

Procedia PDF Downloads 278
139 A Case Study of User Rating Prediction in an Automobile Recommendation System Using MapReduce

Authors: Jiao Sun, Li Pan, Shijun Liu

Abstract:

Recommender systems have been widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The Collaborative Filtering (CF) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems, despite the sharp increase in the number of automobiles. What is more, computational speed is a major weakness of collaborative filtering technology, so using the MapReduce framework to optimize the CF algorithm is a vital solution to this performance problem. In this paper, based on real-world industrial datasets of user-automobile comment data, we present a recommendation of users' comments on industrial automobiles with various properties, provide recommendations for automobile providers, and help them predict users' comments on automobiles with new-coming properties. Firstly, we address the sparseness of the matrix through prior construction of the score matrix. Secondly, we solve the data normalization problem by removing dimensional effects from the raw automobile data, since different dimensions of automobile properties introduce large errors into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, and the computational speed is improved substantially. UV decomposition, used in this paper, is a commonly used matrix factorization technique in CF algorithms that does not require calculating the interpolation weights of neighbors, which makes it more convenient in industry.
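The dimensional-effect removal in the second step amounts to rescaling each property column of the automobile matrix onto a common scale. A minimal sketch, assuming column-wise min-max scaling; the exact normalization used in the paper may differ:

```python
def normalize_columns(rows):
    """Column-wise min-max scaling: strips units/dimensional effects so that
    no single automobile property dominates the CF similarity computation."""
    cols = list(zip(*rows))
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0  # a constant column maps to all zeros
        scaled_cols.append([(v - lo) / span for v in col])
    return [list(r) for r in zip(*scaled_cols)]
```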

Keywords: collaborative filtering, recommendation, data normalization, mapreduce

Procedia PDF Downloads 217
138 Comparison of EMG Normalization Techniques Recommended for Back Muscles Used in Ergonomics Research

Authors: Saif Al-Qaisi, Alif Saba

Abstract:

Normalization of electromyography (EMG) data in ergonomics research is a prerequisite for interpreting the data. Normalizing accounts for variability in the data due to differences in participants' physical characteristics, electrode placement protocols, time of day, and other nuisance factors. Typically, normalized data are reported as a percentage of the muscle's isometric maximum voluntary contraction (%MVC). Various MVC techniques have been recommended in the literature for normalizing the EMG activity of back muscles. This research tests and compares the MVC techniques recommended in the literature for three back muscles commonly used in ergonomics research: the lumbar erector spinae (LES), latissimus dorsi (LD), and thoracic erector spinae (TES). Six healthy males from a university population participated in this research. Five different MVC exercises were compared for each muscle using the Trigno wireless EMG system (Delsys Inc.). Since the LES and TES share similar functions in controlling trunk movements, their MVC exercises were the same: trunk extension at -60°, trunk extension at 0°, trunk extension while standing, hip extension, and the arch test. The MVC exercises identified in the literature for the LD were chest-supported shoulder extension, prone shoulder extension, lat pull-down, internal shoulder rotation, and abducted shoulder flexion. The maximum EMG signal was recorded during each MVC trial, and the averages were then computed across participants. A one-way analysis of variance (ANOVA) was utilized to determine the effect of MVC technique on muscle activity, with post-hoc analyses performed using the Tukey test. The MVC technique effect was statistically significant for each of the muscles (p < 0.05); however, a larger sample of participants would be needed to detect significant differences in the Tukey tests.
The arch test was associated with the highest average EMG at the LES, and it also elicited the maximum EMG activity more often than the other techniques (three out of six participants). For the TES, trunk extension at 0° was associated with the largest average EMG and elicited the maximum EMG activity most often (three out of six participants). For the LD, participants obtained their maximum EMG either from chest-supported shoulder extension (three out of six participants) or prone shoulder extension (three out of six participants); chest-supported shoulder extension, however, had a larger average than prone shoulder extension (0.263 and 0.240, respectively). Although the aforementioned techniques had the highest averages, they did not always elicit the maximum EMG activity. If an accurate estimate of the true MVC is desired, more than one technique may have to be performed. This research provides additional MVC techniques for each muscle that may elicit the maximum EMG activity.
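The %MVC normalization described above, combined with the recommendation to perform more than one MVC technique, can be sketched as follows (function names are illustrative):

```python
def mvc_peak(mvc_trials):
    """Best estimate of the true MVC: the largest EMG peak observed across
    all candidate MVC exercises, per the recommendation to perform more
    than one technique."""
    return max(max(trial) for trial in mvc_trials)

def to_percent_mvc(emg_trial, peak):
    """Express a working trial's EMG samples as a percentage of the MVC peak."""
    if peak <= 0:
        raise ValueError("MVC peak must be positive")
    return [100.0 * v / peak for v in emg_trial]
```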

Keywords: electromyography, maximum voluntary contraction, normalization, physical ergonomics

Procedia PDF Downloads 193
137 Decision Making System for Clinical Datasets

Authors: P. Bharathiraja

Abstract:

Computer-aided decision making systems are used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning into decision making systems. Fuzzy rule-based inference can be used for classification in order to incorporate human reasoning into the decision making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease dataset from the University of California at Irvine (UCI) Machine Learning Repository, split into training and testing data using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
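The equal-width partitioning of a (min-max normalized) attribute into fuzzy intervals can be sketched as follows; the triangular membership function is an illustrative assumption, since the abstract does not name the membership shape:

```python
def equal_width_partitions(lo, hi, k):
    """Split a (min-max normalized) attribute range into k equal-width intervals."""
    width = (hi - lo) / k
    return [(lo + i * width, lo + (i + 1) * width) for i in range(k)]

def triangular_membership(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set rising from a,
    peaking at b, and falling to c (an assumed membership shape)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
```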

Keywords: decision making, data mining, normalization, fuzzy rule, classification

Procedia PDF Downloads 517
136 Madness in Susanna Kaysen’s Girl, Interrupted: A Foucauldian Reading

Authors: Somaye Sabetnia

Abstract:

This paper probes Susanna Kaysen's memoir Girl, Interrupted in the light of Michel Foucault's theory of madness, comprehensively set forth in his History of Madness (1961). It is an endeavor to analyze the memoir on the basis of Foucault's idea of madness. In his archaeological study of madness, Foucault introduces a way of perceiving madness and its association with dominant discourses. He argues that the concept of madness is constructed within the social context and that different institutions affect its definition. Furthermore, he considers how each era treats madness, and affirms that in modern times people considered mad were exiled from cities and confined in madhouses, and later in clinics where they were treated with drugs. Set after World War II, the text under observation highlights women's condition of either becoming a housewife or following their own desires; choosing the latter results in being labeled mad. The protagonist is labeled 'mad' and is hence impelled to go to asylums where the so-called patients are kept under the vigilant surveillance of the authorities to undergo the process of 'normalization.' To discern how she comes to be considered 'mad,' this article probes the dominant discourse of the time in which the story takes place, to provide a better understanding of madness under the impact of social, cultural, and political conditions. It examines how the so-called mad are treated as 'Other' after being confined by the disciplinary system of the asylum in a panoptic world, and describes how the aim of treatment is to punish and control the patient, not to cure. This article aims to show that Susanna Kaysen pictures what is defined as women's madness as the result of the patriarchal society of post-war America, and that the mental illness has nothing to do with blood; it is rather the result of the social inequality of the age.

Keywords: clinical treatment, disciplining and punishment, dominant discourse, normalization, other, panoptic world, reason vs. unreason

Procedia PDF Downloads 320
135 Task Scheduling and Resource Allocation in the Cloud Based on the AHP Method

Authors: Zahra Ahmadi, Fazlollah Adibnia

Abstract:

Scheduling of tasks and the optimal allocation of resources in the cloud must account for the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used applications in this field and are characterized by high processing power and storage requirements. To increase their efficiency, it is necessary to schedule the tasks properly and to select the best virtual machine in the cloud. The goals of the system are effective factors in task scheduling and resource selection, which depend on various criteria such as time, cost, current workload, and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task scheduling and resource allocation in a heterogeneous environment, based on a modified AHP algorithm, is proposed. In this method, the scheduling of input tasks is based on two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served policy, and resources are prioritized by the criteria of main memory size, processor speed, and bandwidth. To modify the AHP algorithm, the Linear Max-Min and Linear Max normalization methods, which have a great impact on the ranking, are adopted in this system. The simulation results show a decrease in the average response time, turnaround time, and execution time of input tasks in the proposed method compared to similar (basic) methods.
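The two normalization methods named for the modified AHP can be sketched per criterion column as follows (a sketch of standard AHP pre-processing; the benefit/cost flag is an illustrative assumption):

```python
def linear_max(col):
    """Linear Max normalization: divide each value by the column maximum
    (suits benefit criteria such as processor speed and bandwidth)."""
    m = max(col)
    return [v / m for v in col]

def linear_max_min(col, benefit=True):
    """Linear Max-Min normalization: rescale a criterion column to [0, 1],
    inverting the scale for cost criteria such as execution time."""
    lo, hi = min(col), max(col)
    span = (hi - lo) or 1.0  # a constant column maps to all zeros
    if benefit:
        return [(v - lo) / span for v in col]
    return [(hi - v) / span for v in col]
```

Because both methods map each criterion onto a comparable scale before weighting, the choice between them directly affects the final resource ranking, as the abstract notes.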

Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow

Procedia PDF Downloads 145