Search results for: harmonic Ritz vector.
286 Exploiting Kinetic and Kinematic Data to Plot Cyclograms for Managing the Rehabilitation Process of BKAs by Applying Neural Networks
Authors: L. Parisi
Abstract:
Kinematic data correlate vector quantities in space with scalar parameters in time to assess the degree of symmetry between the intact limb and the amputated limb with respect to a normal model derived from the gait of control group participants. Furthermore, these data allow a doctor to preliminarily evaluate the usefulness of a given rehabilitation therapy. Kinetic curves allow the analysis of ground reaction forces (GRFs) to assess the appropriateness of human motion. Electromyography (EMG) allows the analysis of the fundamental lower-limb force contributions to quantify the level of gait asymmetry. However, the use of this technological tool is expensive and requires the patient's hospitalization. This research work suggests overcoming the above limitations by applying artificial neural networks.
Keywords: Kinetics, kinematics, cyclograms, neural networks.
285 Pricing Strategy Selection Using Fuzzy Linear Programming
Authors: Elif Alaybeyoğlu, Y. Esra Albayrak
Abstract:
Marketing establishes a communication network between producers and consumers. Nowadays, the marketing approach is customer-focused and products are directly oriented to meet customer needs. Marketing, which is a long process, needs organization and management. Therefore, strategic marketing planning becomes more and more important in today's competitive conditions. The main focus of this paper is to evaluate pricing strategies and select the best pricing strategy solution while considering the internal and external factors influencing the company's pricing decisions associated with new product development. To reflect the decision maker's subjective preference information and to determine the weight vector of the factors (attributes), the fuzzy linear programming technique for multidimensional analysis of preference (LINMAP) under intuitionistic fuzzy (IF) environments is used.
Keywords: IF Sets, LINMAP, MAGDM, Marketing.
284 Using Spectral Vectors and M-Tree for Graph Clustering and Searching in Graph Databases of Protein Structures
Authors: Do Phuc, Nguyen Thi Kim Phung
Abstract:
In this paper, we represent protein structures as graphs, so that a protein structure database becomes a graph database. Each graph is represented by a spectral vector. We use the Jacobi rotation algorithm to calculate the eigenvalues of the normalized Laplacian representation of the graph's adjacency matrix. To measure the similarity between two graphs, we calculate the Euclidean distance between their spectral vectors. To cluster the graphs, we use an M-tree with the Euclidean distance to cluster the spectral vectors. Besides, the M-tree can be used for graph searching in the graph database. Our proposed method was tested on a graph database of 100 graphs representing 100 protein structures downloaded from the Protein Data Bank (PDB), and we compare the results with the SCOP hierarchical structure.
Keywords: Eigenvalues, M-tree, graph database, protein structure, spectral graph theory.
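A minimal sketch (not from the paper) of the spectral-vector pipeline the abstract describes, in Python with NumPy: build the normalized Laplacian of an adjacency matrix, take its eigenvalues as the graph's spectral vector, and compare two graphs by the Euclidean distance between their vectors. The fixed vector length, the zero padding, and the use of `numpy.linalg.eigvalsh` in place of an explicit Jacobi rotation routine are assumptions.

```python
import numpy as np

def spectral_vector(adjacency, k=10):
    """Spectral vector of a graph: the k smallest eigenvalues of the
    normalized Laplacian of its adjacency matrix (zero-padded when the
    graph has fewer than k nodes)."""
    A = np.asarray(adjacency, dtype=float)
    deg = A.sum(axis=1)
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    # Normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # eigvalsh stands in for the Jacobi rotation algorithm: both return
    # the eigenvalues of a symmetric matrix.
    eig = np.sort(np.linalg.eigvalsh(L))
    vec = np.zeros(k)
    vec[:min(k, len(eig))] = eig[:k]
    return vec

def graph_distance(adj1, adj2, k=10):
    """Euclidean distance between two graph spectral vectors."""
    return float(np.linalg.norm(spectral_vector(adj1, k) - spectral_vector(adj2, k)))

# Toy example: a 3-node path graph vs. a triangle
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(graph_distance(path, tri))
```

These distances are exactly what an M-tree would index for similarity search over the graph database.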
283 Computation of Probability Coefficients using Binary Decision Diagram and their Application in Test Vector Generation
Authors: Ashutosh Kumar Singh, Anand Mohan
Abstract:
This paper deals with the efficient computation of probability coefficients, which offers computational simplicity compared to spectral coefficients. It eliminates the need for inner product evaluations when determining the signature of a combinational circuit realizing a given Boolean function. Methods for computing probability coefficients using the transform matrix, the fast transform method, and BDDs are given. Theoretical relations for the achievable computational advantage, in terms of the additions required to compute all 2^n probability coefficients of an n-variable function, have been developed. It is shown that for n ≥ 5, only 50% of the additions are needed to compute all probability coefficients compared to spectral coefficients. Fault detection techniques based on the spectral signature can also be used with the probability signature to offer a computational advantage.
Keywords: Binary Decision Diagrams, Spectral Coefficients, Fault detection.
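The abstract does not define the probability coefficients themselves, but the BDD-based computation it refers to rests on evaluating output probabilities by Shannon expansion over the decision diagram, with no inner products. The sketch below illustrates only that traversal on a hand-built toy diagram; the node encoding and the uniform input probability are assumptions.

```python
from functools import lru_cache

# A tiny BDD-style node: either the constant 0/1 or (var_index, low, high),
# where `low` is the cofactor for var = 0 and `high` for var = 1.
def node(var, low, high):
    return (var, low, high) if low != high else low

@lru_cache(maxsize=None)
def probability(bdd, p=0.5):
    """Output probability of the function rooted at `bdd`, assuming every
    input is 1 with probability p (Shannon expansion: one diagram traversal,
    no inner-product evaluations)."""
    if bdd in (0, 1):
        return float(bdd)
    _, low, high = bdd
    return (1 - p) * probability(low, p) + p * probability(high, p)

# Example: f = x0 AND x1 built as a decision diagram
f = node(0, 0, node(1, 0, 1))
print(probability(f))   # 0.25 for p = 0.5
```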
282 Fast Complex Valued Time Delay Neural Networks
Authors: Hazem M. El-Bakry, Qiangfu Zhao
Abstract:
Here, a new idea to speed up the operation of complex-valued time delay neural networks is presented. The whole data set is collected together in one long vector and then tested as a single input pattern. The proposed fast complex-valued time delay neural networks use cross correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically that the number of computation steps required by the presented fast complex-valued time delay neural networks is less than that needed by classical time delay neural networks. Simulation results using MATLAB confirm the theoretical computations.
Keywords: Fast complex-valued time delay neural networks, cross correlation, frequency domain.
281 Comparison of Different Data Acquisition Techniques for Shape Optimization Problems
Authors: Attila Vámosi, Tamás Mankovits, Dávid Huri, Imre Kocsis, Tamás Szabó
Abstract:
Non-linear FEM calculations are indispensable when important technical information, such as the operating performance of a rubber component, is desired. For example, rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself shows non-linear behavior. The changing contact range between the parts and the incompressibility of the rubber increase this non-linear behavior further. The material characterization of an elastomeric component is also a demanding engineering task. The shape optimization problem of rubber parts led to the study of FEM-based calculation processes. This type of problem has been posed and investigated by several authors. In this paper, the time demand of certain calculation methods is studied and the possibilities for time reduction are presented.
Keywords: Rubber bumper, data acquisition, finite element analysis, support vector regression.
280 Investigation on Feature Extraction and Classification of Medical Images
Authors: P. Gnanasekar, A. Nagappan, S. Sharavanan, O. Saravanan, D. Vinodkumar, T. Elayabharathi, G. Karthik
Abstract:
In this paper, we present a detailed study of biomedical images and tag them with basic extracted features (e.g., color, pixel value). The classification is done using a nearest neighbor classifier with various distance measures, as well as the automatic combination of classifier results. This process selects a subset of relevant features from a group of image features. It also helps to acquire a better understanding of the image by describing which features are important. The accuracy can be improved by increasing the number of features selected. Various types of classification have evolved for medical images, such as the Support Vector Machine (SVM), which is used for classifying bacterial types, the Ant Colony Optimization method, which is used for optimal results and offers high approximation capability and much faster convergence, and texture feature extraction methods based on Gabor wavelets.
Keywords: ACO (Ant Colony Optimization), correlogram, CCM (Co-Occurrence Matrix), RTS (Rough-Set theory).
279 Advanced Information Extraction with n-gram based LSI
Authors: Ahmet Güven, Ö. Özgür Bozkurt, Oya Kalıpsız
Abstract:
The number of documents being created increases at an increasing pace, while most of them cover already known topics and few introduce new concepts. This fact has started a new era in the information retrieval discipline, where the requirements have their own specialties: digging into topics and concepts and finding subtopics or relations between topics. Up to now, IR research has been interested in retrieving documents about a general topic or clustering documents under generic subjects. However, these conventional approaches can't go deep into the content of documents, which makes it difficult for people to reach the right documents they are searching for. So we need new ways of mining document sets, where the critical point is to know much about the contents of the documents. As a solution, we propose to enhance LSI, one of the proven IR techniques, by supporting its vector space with n-gram forms of words. The positive results we have obtained are shown in two different application areas of the IR domain: querying a document database and clustering documents in the document database.
Keywords: Document clustering, information extraction, information retrieval, LSI, n-gram.
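A hedged sketch of an n-gram-supported LSI pipeline of the kind the abstract proposes, using scikit-learn: the term-document matrix is built over word n-grams and reduced with a truncated SVD, and a query is ranked against the documents in the latent space. The n-gram range, the number of latent components, and the toy documents are assumptions, not the authors' settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "latent semantic indexing maps documents into a concept space",
    "word n-grams make the vector space sensitive to multi-word concepts",
    "clustering groups documents that share latent concepts",
]

# Word n-grams (here unigrams and bigrams) enrich the LSI vector space,
# in the spirit of the abstract; the exact range is an assumption.
tfidf = TfidfVectorizer(ngram_range=(1, 2))
X = tfidf.fit_transform(docs)

# LSI = truncated SVD of the term-document matrix.
lsi = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsi.fit_transform(X)

# Querying the document database: project the query into the same latent
# space and rank documents by cosine similarity.
query = lsi.transform(tfidf.transform(["semantic concept space"]))
print(cosine_similarity(query, doc_topics).ravel())
```

The same `doc_topics` matrix can feed any clustering routine for the second application area mentioned above.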
278 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets Using an Open-Source Energy System Optimization Model
Authors: A. Balbo, G. Colucci, M. Nicoli, L. Savoldi
Abstract:
Hydrogen is expected to become an undisputed player in the ecological transition throughout the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called “hard-to-abate” sectors, including the industrial production of iron and steel, glass, refineries and heavy-duty transport. In this regard, Italy, in the framework of decarbonization plans for the whole European Union, has been considering a wider use of hydrogen to provide an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options concerning the pathway to be followed in the development of the future Italian energy system in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to provide a techno-economic analysis of the required asset alternatives to be used in that perspective. To accomplish this objective, the energy system optimization model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies in a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040 and 2050, without any other specific target. The second one (inspired by the national objectives on the development of the sector) promotes the deployment of the hydrogen value chain. These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry and synfuel production. Furthermore, the decarbonization scenario where hydrogen production is not imposed makes use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, which appears to rely on Carbon Capture and Utilization technologies as a fundamental element. In line with the European Commission's open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
Keywords: Decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA.
277 Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques
Authors: Hossein Nezamabadi-pour, Saeid Saryazdi
Abstract:
In this paper, we present a new and effective image indexing technique that extracts features directly from the DCT domain. Our proposed approach is object-based image indexing. For each 8×8 block in the DCT domain, a feature vector is extracted. Then, the feature vectors of all blocks of the image are clustered into groups using a k-means algorithm. Each cluster represents a particular object in the image. We then select the clusters that have the most members after clustering. The centroids of the selected clusters are taken as image feature vectors and indexed into the database. We also propose an approach for using the proposed image indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.
Keywords: Object-based image retrieval, DCT domain, Image indexing, Image classification.
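A rough Python sketch of the indexing steps listed in the abstract above: an 8×8 block DCT, one feature vector per block, k-means over the block vectors, and the centroids of the largest clusters kept as the image's index. The choice of the 3×3 low-frequency coefficients as the block feature and the cluster counts are assumptions.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

def image_index_vectors(image, n_clusters=4, n_keep=2):
    """Object-based DCT-domain indexing: extract a feature vector from every
    8x8 DCT block, cluster the block vectors with k-means, and keep the
    centroids of the largest clusters as the image's index vectors."""
    h, w = image.shape
    feats = []
    for i in range(0, h - h % 8, 8):
        for j in range(0, w - w % 8, 8):
            block = dctn(image[i:i + 8, j:j + 8], norm="ortho")
            feats.append(block[:3, :3].ravel())        # low-frequency features
    feats = np.array(feats)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    sizes = np.bincount(km.labels_, minlength=n_clusters)
    largest = np.argsort(sizes)[::-1][:n_keep]          # clusters with most members
    return km.cluster_centers_[largest]

img = np.random.rand(64, 64)              # stand-in for a grayscale image
print(image_index_vectors(img).shape)     # (2, 9)
```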
276 Road Vehicle Recognition Using Magnetic Sensing Feature Extraction and Classification
Authors: Xiao Chen, Xiaoying Kong, Min Xu
Abstract:
This paper presents a road vehicle detection approach for intelligent transportation systems. The approach uses a low-cost magnetic sensor and an associated data collection system to collect magnetic signals. This system can measure changes in the magnetic field, and it can also detect and count vehicles. We extend Mel Frequency Cepstral Coefficients to analyze vehicle magnetic signals. Vehicle type features are extracted using the cepstrum representation, frame energy, and gap cepstrum of the magnetic signals. We design a two-dimensional map algorithm using Vector Quantization to classify vehicle magnetic features into four typical types of vehicles in Australian suburbs: sedan, van, truck, and bus. Experimental results show that our approach achieves a high level of accuracy for vehicle detection and classification.
Keywords: Vehicle classification, signal processing, road traffic model, magnetic sensing.
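A hedged sketch of the feature/classification chain the abstract outlines: frame-wise cepstrum-style coefficients plus frame energy from a 1-D magnetic signal, followed by a k-means codebook standing in for the Vector Quantization step. The frame length and coefficient count are assumptions, and the Mel filter bank and gap-cepstrum feature are omitted for brevity.

```python
import numpy as np
from scipy.fft import rfft, dct
from sklearn.cluster import KMeans

def cepstral_features(signal, frame_len=64, n_ceps=8):
    """Frame-wise cepstrum-style features of a 1-D magnetic signal:
    DCT of the log magnitude spectrum, plus the frame energy."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    feats = []
    for f in frames:
        spec = np.abs(rfft(f)) + 1e-12
        ceps = dct(np.log(spec), norm="ortho")[:n_ceps]
        feats.append(np.concatenate([ceps, [np.sum(f ** 2)]]))  # + frame energy
    return np.array(feats)

# Vector quantization: a k-means codebook with one cluster per vehicle class
signals = [np.random.randn(640) for _ in range(20)]         # stand-in signals
X = np.vstack([cepstral_features(s).mean(axis=0) for s in signals])
codebook = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(codebook.predict(X[:5]))    # quantized class indices
```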
275 Modelling Export Dynamics in the CSEE Countries Using GVAR Model
Abstract:
The paper investigates the key factors of export dynamics for a set of Central and Southeast European (CSEE) countries in the context of the current economic and financial crisis. In order to model the export dynamics, a Global Vector Autoregressive (GVAR) model is defined. As opposed to models which treat each country separately, the GVAR combines all country models in a global model, which enables obtaining important information on spillover effects in the context of globalisation and rising international linkages. The results of the study indicate that for most of the CSEE countries, exports are mainly driven by domestic shocks, both in the short run and in the long run. This study is the first application of the GVAR model to the export dynamics of the CSEE countries, and therefore its results present an important empirical contribution.
Keywords: Export, GFEVD, Global VAR, International trade, weak exogeneity.
274 The First Integral Approach in Stability Problem of Large Scale Nonlinear Dynamical Systems
Authors: M. Kidouche, H. Habbi, M. Zelmat, S. Grouni
Abstract:
In analyzing large scale nonlinear dynamical systems, it is often desirable to treat the overall system as a collection of interconnected subsystems. Solution properties of the large scale system are then deduced from the solution properties of the individual subsystems and the nature of the interconnections. In this paper, a new approach is proposed for the stability analysis of large scale systems, based on the concept of vector Lyapunov functions and decomposition methods. The present results make use of graph-theoretic decomposition techniques in which the overall system is partitioned into a hierarchy of strongly connected components. We then show that, under very reasonable assumptions, the overall system is stable once the strongly connected subsystems are stable. Finally, an example is given to illustrate the proposed constructive methodology.
Keywords: Comparison principle, first integral, large scale system, Lyapunov stability.
273 Hospital Facility Location Selection Using Permanent Analytics Process
Authors: C. Ardil
Abstract:
In this paper, a new MCDMA approach, the permanent analytics process, is proposed to assess the immovable valuation criteria and their significance in the placement of a healthcare facility. Five decision factors are considered for the valuation and selection of immovables. In multiple factor selection problems, the priority vector of the criteria used to compare several immovables is first determined using the permanent analytics method, a mathematical model for the multiple criteria decision-making process. Then, to demonstrate the viability and efficacy of the suggested approach, twenty potential candidate locations were evaluated using the hospital site selection problem's decision criteria. The ranking accuracy of the estimation was evaluated using composite programming, which takes into account both the permanent analytics process and the weighted multiplicative model.
Keywords: Hospital Facility Location Selection, Permanent Analytics Process, Multiple Criteria Decision Making (MCDM)
272 Automatic Classification of Initial Categories of Alzheimer's Disease from Structural MRI Phase Images: A Comparison of PSVM, KNN and ANN Methods
Authors: Ahsan Bin Tufail, Ali Abidi, Adil Masood Siddiqui, Muhammad Shahzad Younis
Abstract:
An early and accurate detection of Alzheimer's disease (AD) is an important stage in the treatment of individuals suffering from AD. We present an approach based on the use of structural magnetic resonance imaging (sMRI) phase images to distinguish between normal controls (NC), mild cognitive impairment (MCI) and AD patients with a clinical dementia rating (CDR) of 1. The independent component analysis (ICA) technique is used to extract useful features, which form the inputs to the support vector machine (SVM), K-nearest neighbour (kNN) and multilayer artificial neural network (ANN) classifiers that discriminate between the three classes. The obtained results are encouraging in terms of classification accuracy and effectively ascertain the usefulness of phase images for the classification of different stages of Alzheimer's disease.
Keywords: Biomedical image processing, classification algorithms, feature extraction, statistical learning.
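A minimal scikit-learn sketch of the pipeline described in the abstract above: ICA-extracted features feeding SVM, kNN and ANN classifiers for a three-class problem. The synthetic data, the number of independent components and the classifier hyperparameters are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: rows play the role of flattened sMRI phase images,
# labels are NC / MCI / AD encoded as 0 / 1 / 2 (synthetic here).
rng = np.random.default_rng(0)
X = rng.standard_normal((90, 500))
y = np.repeat([0, 1, 2], 30)

# ICA extracts the features that feed each of the three classifiers.
classifiers = {
    "SVM": SVC(kernel="rbf"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), FastICA(n_components=10, random_state=0), clf)
    scores = cross_val_score(pipe, X, y, cv=3)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```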
271 Building a Trend Based Segmentation Method with SVR Model for Stock Turning Detection
Authors: Jheng-Long Wu, Pei-Chann Chang, Yi-Fang Pan
Abstract:
This research focuses on developing a new segmentation method, called the trend based segmentation method (TBSM), for improving the forecasting model. Generally, piece-wise linear representation (PLR) can find pairs of trading points well for time series data, but in the complicated stock environment it does not work well for stock forecasting, because stock prices exhibit many trading trends. Considering the trading trends of the stock price in the trading signal improves the precision of the forecasting model. Therefore, the TBSM with an SVR model is used to detect the trading points for various Taiwanese and American stocks under different trend tendencies. The experimental results show that our trading system is more profitable and can be implemented in the stock market in real time.
Keywords: Trend based segmentation method, support vector machine, turning detection, stock forecasting.
270 The Fiscal-Monetary Policy and Economic Growth in Algeria: VECM Approach
Authors: K. Bokreta, D. Benanaya
Abstract:
The objective of this study is to examine the relative effectiveness of monetary and fiscal policy in Algeria using the econometric modelling techniques of cointegration and vector error correction modelling to analyse and draw policy inferences. The chosen variables of fiscal policy are government expenditure and net taxes on products, while the effect of monetary policy is represented by the inflation rate and the official exchange rate. From the results, we find that in the long run the impact of government expenditure is positive, while the effect of taxes on growth is negative. Additionally, we find that the inflation rate has little effect on GDP per capita, and the impact of the exchange rate is insignificant. We conclude that fiscal policy is more powerful than monetary policy in promoting economic growth in Algeria.
Keywords: Economic growth, fiscal policy, monetary policy, VECM.
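A hedged illustration of the modelling technique named in the abstract, a vector error correction model fitted with statsmodels; the synthetic cointegrated series, the lag order and the cointegration rank are assumptions and do not reproduce the study's Algerian data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Stand-in quarterly data; the real study uses Algerian GDP per capita,
# government expenditure, net taxes, inflation and the exchange rate.
rng = np.random.default_rng(0)
n = 120
common_trend = np.cumsum(rng.normal(size=n))          # shared stochastic trend
data = pd.DataFrame({
    "gdp_pc":  common_trend + rng.normal(scale=0.5, size=n),
    "gov_exp": common_trend + rng.normal(scale=0.5, size=n),
    "net_tax": common_trend + rng.normal(scale=0.5, size=n),
})

# Vector error correction model: one cointegrating relation, one lagged
# difference; both choices are illustrative assumptions.
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci")
res = model.fit()
print(res.alpha)   # adjustment (loading) coefficients
print(res.beta)    # long-run cointegrating vector
```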
269 A Stereo Vision System for Top View Book Scanners
Authors: Erik Lilienblum, Robert Niese, Bernd Michaelis
Abstract:
This paper proposes a novel stereo vision technique for top view book scanners which provides dense 3D point clouds of page surfaces. This is a precondition for dewarping bound volumes independently of 2D information on the page. Our method is based on algorithms which normally require the projection of pattern sequences with structured light. Instead of an additional light projection, we use image sequences of the moving stripe lighting of the top view scanner. Thus the stereo vision setup is simplified without losing measurement accuracy. Furthermore, we improve a surface model dewarping method by introducing a difference vector based on real measurements. Although our proposed method is inexpensive in both calculation time and hardware requirements, we present good dewarping results even for difficult examples.
Keywords: Stereo vision, 3D surface reconstruction, dewarping documents, book scanner.
268 Optimal Efficiency Control of Pulse Width Modulation - Inverter Fed Motor Pump Drive Using Neural Network
Authors: O. S. Ebrahim, M. A. Badr, A. S. Elgendy, K. O. Shawky, P. K. Jain
Abstract:
This paper demonstrates an improved Loss Model Control (LMC) for a 3-phase induction motor (IM) driving a pump load. Compared with other power loss reduction algorithms for IMs, the presented one has the advantages of fast and smooth flux adaptation, high accuracy, and versatile implementation. The performance of LMC depends mainly on the accuracy of modeling the motor drive and losses. A loss model for the IM drive has been developed that considers, using closed-form equations, the surplus power loss caused by inverter voltage harmonics and also includes magnetic saturation. Further, an Artificial Neural Network (ANN) controller is synthesized and trained offline to determine the optimal flux level that achieves maximum drive efficiency. The drive's voltage and speed control loops are connected via the stator frequency to avoid the possibility of excessive magnetization. Besides, the resistance change due to temperature is considered by a first-order thermal model. The obtained thermal information enhances motor protection and control. Together, these have the potential to make the proposed algorithm reliable. Simulation and experimental studies are performed on a 5.5 kW test motor using the proposed control method. The test results are provided and compared with fixed flux operation to validate the effectiveness.
Keywords: Artificial neural network, ANN, efficiency optimization, induction motor, IM, Pulse Width Modulated, PWM, harmonic losses.
267 A Study on Architectural Characteristics of Traditional Iranian Ordinary Houses in Mashhad, Iran
Authors: Rana Daneshvar Salehi
Abstract:
In many Iranian cities, including Mashhad, the capital of Razavi Khorasan Province, ordinary examples of small-scale domestic architecture are not considered heritage, even though the principles of house formation are respected in all traditional Iranian houses, from moderate to great ones. During the past decade, Mashhad has lost its identity and has become a modern city. Its designation as the capital of Islamic Culture in 2017 by ISESCO, and the consequent search for new developments and transfiguration, led to the demolition of a large number of modest traditional dwellings. For this reason, the present paper aims to introduce three undiscovered houses of historical and monumental value located in the oldest neighborhoods of Mashhad which have been neglected in the cultural heritage field. The preliminary phase of this approach is a measured survey to identify the significant characteristics of the selected dwellings and understand the challenges, focusing on building form, orientation, room function, space proportion and details of ornamental elements. A comparison between the case studies and the wealthy domestic buildings shows that a house belonging to inhabitants with an average income can exhibit the same accurate, regular, harmonic and proportionate design found in great mansions. It reveals that an ordinary traditional house can be regarded as a valuable construction, not only for its historical characteristics but also for its aesthetic and architectural features, which could prevent further destruction in the future.
Keywords: Traditional ordinary house, architectural characteristic, proportion, heritage.
266 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups
Authors: Naushad Mamode Khan
Abstract:
The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), handling equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint likelihood CMP function is difficult to specify and thus restricts the likelihood-based estimation methodology. The joint generalized quasi-likelihood approach (GQL-I) was instead considered, but it is rather computationally intensive and may not even estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments and is shown to yield estimates as efficient as GQL-I while being far more computationally stable.
Keywords: Longitudinal, Com-Poisson, Ill-conditioned, INAR(1), GLMS, GQL.
265 Numerical Investigation of Soft Clayey Soil Improved by Soil-Cement Columns under Harmonic Load
Authors: R. Ziaie Moayed, E. Ghanbari Alamouty
Abstract:
Deep soil mixing is one of the ground improvement methods in geotechnical engineering and is widely used in soft soils. This article investigates the consolidation behavior of a soft clay soil improved by a soil-cement column (SCC) through numerical modeling with the Plaxis2D program. This behavior is simulated under vertical static and cyclic loads applied at the soil surface. The static load problem is a simulation of a physical model test under axisymmetric conditions with a single SCC at the model center. The results of the numerical modeling, consisting of the settlement of the soft soil composite, the stresses in the soft soil and the column, and the excess pore water pressure in the soil, show good correspondence with the test results. The response of the soft soil composite to the cyclic load in the vertical direction is also compared with the static results. In addition, the effects of two variables are investigated, namely the cement content used in an SCC and the area ratio a (the ratio of the diameter of the SCC to the diameter of the composite soil model). The results show that the stress on the column with the higher value of a is lower than the stress on the other columns. A different rate of consolidation and excess pore pressure distribution is observed in the cyclic load problem. Comparing the settlement results also shows higher compressibility in the cyclic load problem.
Keywords: Area ratio, consolidation behavior, cyclic load, numerical modeling, soil-cement column.
264 Liver Tumor Detection by Classification through FD Enhancement of CT Image
Authors: N. Ghatwary, A. Ahmed, H. Jalab
Abstract:
In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of target liver cells as either tumor or non-tumor. Fractional differential (FD) is applied for the enhancement of liver CT images, with the aim of enhancing texture and edge features. Later on, a fusion method is applied to merge the various enhanced images and produce a variety of feature improvements, which increases the accuracy of classification. Each image is divided into N×N non-overlapping blocks to extract the desired features. A support vector machine (SVM) classifier is then trained on a supplied dataset different from the tested one. Finally, the block cells are identified as tumor or non-tumor. Our approach is validated on a group of patients' CT liver tumor datasets. The experimental results demonstrate the detection efficiency of the proposed technique.
Keywords: Fractional differential (FD), computed tomography (CT), fusion.
263 An Approach Based on Statistics and Multi-Resolution Representation to Classify Mammograms
Authors: Nebi Gedik
Abstract:
One of the significant and continual public health problems in the world is breast cancer. Early detection is very important to fight the disease, and mammography has been one of the most common and reliable methods to detect the disease in its early stages. However, it is a difficult task, and computer-aided diagnosis (CAD) systems are needed to assist radiologists in providing both accurate and uniform evaluation of masses in mammograms. In this study, a multiresolution statistical method to classify digitized mammograms as normal or abnormal is used to construct a CAD system. The mammogram images are represented by the wave atom transform, and this representation is formed by certain groups of coefficients, treated independently. The CAD system is designed by calculating statistical features from each group of coefficients. The classification is performed using a support vector machine (SVM).
Keywords: Wave atom transform, statistical features, multi-resolution representation, mammogram.
262 Optimal Duty-Cycle Modulation Scheme for Analog-To-Digital Conversion Systems
Authors: G. Sonfack, J. Mbihi, B. Lonla Moffo
Abstract:
This paper presents an optimal duty-cycle modulation (ODCM) scheme for analog-to-digital conversion (ADC) systems. The overall ODCM-based ADC problem is decoupled into optimal DCM and digital filtering sub-problems, while taking into account the constraints of mutual design parameters between the two. Using a set of three lemmas and four morphological theorems, the ODCM sub-problem is modelled as a nonlinear cost function with nonlinear constraints. Then, a weighted least pth norm of the error between the ideal and predicted frequency responses is used as a cost function for the digital filtering sub-problem. In addition, the MATLAB fmincon and MATLAB iirlpnorm tools are used as the optimal DCM and least pth norm solvers, respectively. Furthermore, the virtual simulation scheme of an overall prototype ODCM-based ADC system is implemented and tested with the help of Simulink, according to a relevant set of design data, i.e., 3 kHz of modulating bandwidth, 172 kHz of maximum modulation frequency and 25 MHz of sampling frequency. Finally, the results obtained show that the ODCM-based ADC achieves, over the 3 kHz modulating bandwidth: 57 dBc of SINAD (signal-to-noise and distortion ratio), 58 dB of SFDR (spurious-free dynamic range), -80 dBc of THD (total harmonic distortion), and a minimum resolution of 10 bits. These performance levels appear to be a great challenge within the class of oversampling ADC topologies with a 2nd order IIR (infinite impulse response) decimation filter.
Keywords: Digital IIR filter, morphological lemmas and theorems, optimal DCM-based DAC, virtual simulation, weighted least pth norm.
261 Development of the Academic Model to Predict Student Success at VUT-FSASEC Using Decision Trees
Authors: Langa Hendrick Musawenkosi, Twala Bhekisipho
Abstract:
The success or failure of students is a concern for every academic institution, college, university, government, and for students themselves. Several approaches have been researched to address this concern. In this paper, the view is held that when a student enters a university, college or other academic institution, he or she enters an academic environment. The academic environment is a unique concept used to develop the solution for making predictions effectively. This paper presents a model to determine the propensity of a student to succeed or fail at the French South African Schneider Electric Education Center (FSASEC) at the Vaal University of Technology (VUT). The Decision Tree algorithm is used to implement the model at FSASEC.
Keywords: Academic environment model, decision trees, FSASEC, K-nearest neighbor, machine learning, popularity index, support vector machine.
260 Real-Time Hand Tracking and Gesture Recognition System Using Neural Networks
Authors: Tin Hninn Hninn Maung
Abstract:
This paper introduces a hand gesture recognition system that recognizes gestures in real time in unconstrained environments. Efforts should be made to adapt computers to our natural means of communication: speech and body language. A simple and fast algorithm using orientation histograms is developed. It recognizes a subset of static hand gestures of the Myanmar Alphabet Language (MAL). The pattern recognition system uses a transform that converts an image into a feature vector, which is then compared with the feature vectors of a training set of gestures. The final system is a perceptron implementation in MATLAB. This paper includes experiments with 33 hand postures and discusses the results. The experiments show that the system can achieve a 90% average recognition rate and is suitable for real-time applications.
Keywords: Hand gesture recognition, orientation histogram, Myanmar Alphabet Language, Perceptron network, MATLAB.
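A small Python sketch of the orientation-histogram feature and the nearest-match comparison against a training set, as outlined in the abstract above; the bin count, the magnitude weighting and the random stand-in images are assumptions, and the final perceptron classifier mentioned in the abstract is not reproduced here.

```python
import numpy as np

def orientation_histogram(image, n_bins=36):
    """Orientation-histogram feature vector of a grayscale hand image:
    gradient orientations weighted by gradient magnitude, binned over
    0..360 degrees and normalized."""
    gy, gx = np.gradient(image.astype(float))
    angles = np.degrees(np.arctan2(gy, gx)) % 360.0
    weights = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, 360.0), weights=weights)
    total = hist.sum()
    return hist / total if total > 0 else hist

def nearest_gesture(feature, training_features, labels):
    """Compare a feature vector against the training set and return the
    label of the closest stored gesture (Euclidean distance)."""
    dists = [np.linalg.norm(feature - t) for t in training_features]
    return labels[int(np.argmin(dists))]

# Toy usage with random stand-ins for hand images
rng = np.random.default_rng(0)
train = [orientation_histogram(rng.random((64, 64))) for _ in range(5)]
labels = ["A", "B", "C", "D", "E"]
test = orientation_histogram(rng.random((64, 64)))
print(nearest_gesture(test, train, labels))
```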
259 Novel Approach for Promoting the Generalization Ability of Neural Networks
Authors: Naiqin Feng, Fang Wang, Yuhui Qiu
Abstract:
A new approach to promote the generalization ability of neural networks is presented. It is based on the point of view of fuzzy theory. The approach is implemented by shrinking or magnifying the input vector, thereby reducing the difference between the training set and the testing set. It is called the "shrinking-magnifying approach" (SMA). At the same time, a new algorithm, the α-algorithm, is presented to find the appropriate shrinking-magnifying factor (SMF) α and obtain better generalization ability of neural networks. Quite a few simulation experiments serve to study the effect of SMA and the α-algorithm. The experimental results are discussed in detail, and the functional principle of SMA is analyzed in theory. The results of the experiments and analyses show that the new approach is not only simpler and easier, but also very effective for many neural networks and many classification problems. In our experiments, the proportion of cases in which the generalization ability of neural networks was improved has even reached 90%.
Keywords: Fuzzy theory, generalization, misclassification rate, neural network.
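A hedged sketch of the shrinking-magnifying idea described above: the input vectors are scaled by a factor α, and the value of α that gives the best generalization is kept, standing in for the paper's α-algorithm. Treating SMA as plain multiplication by α and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def shrink_magnify(X, alpha):
    """Shrinking-magnifying approach (SMA): scale every input vector by the
    factor alpha, reducing the spread between training and testing data.
    Plain multiplication is an assumption about the exact transform."""
    return alpha * np.asarray(X, dtype=float)

# Synthetic two-class data standing in for the paper's experiments
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, :4].sum(axis=1) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stand-in for the alpha-algorithm: pick the SMF alpha that gives the best
# generalization (test accuracy) of the neural network.
best = max(
    (MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
       .fit(shrink_magnify(X_tr, a), y_tr)
       .score(shrink_magnify(X_te, a), y_te), a)
    for a in (0.25, 0.5, 1.0, 2.0, 4.0)
)
print(f"best accuracy {best[0]:.2f} at alpha = {best[1]}")
```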
258 Outlier Pulse Detection and Feature Extraction for Wrist Pulse Analysis
Authors: Bhaskar Thakker, Anoop Lal Vyas
Abstract:
Wrist pulse analysis for the identification of health status is found in ancient Indian as well as Chinese literature. Preprocessing of the wrist pulse is necessary to remove outlier pulses and fluctuations prior to the analysis of the pulse pressure signal. This paper discusses the identification of irregular pulses present in the pulse series and the intricacies associated with the extraction of time-domain pulse features. A Dynamic Time Warping (DTW) approach has been utilized for the identification of outlier pulses in the wrist pulse series. The ambiguity present in the identification of pulse features is resolved with the help of the first derivative of the ensemble average of the wrist pulse series. An algorithm for detecting the tidal and dicrotic notches in individual wrist pulse segments is proposed.
Keywords: Wrist pulse segment, ensemble average, Dynamic Time Warping (DTW), pulse similarity vector.
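A minimal Python sketch of DTW-based outlier pulse detection in the spirit of the abstract above: each pulse is compared by dynamic time warping against an ensemble-average reference and flagged when its distance exceeds a threshold. The reference construction and the mean-plus-two-sigma threshold are assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two
    1-D pulse segments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def flag_outlier_pulses(pulses, threshold=None):
    """Flag irregular pulses in a wrist-pulse series: compute each pulse's
    DTW distance to an ensemble-average reference (built from pulses of the
    median length) and mark distances above a threshold."""
    ref_len = int(np.median([len(p) for p in pulses]))
    same_len = [p for p in pulses if len(p) == ref_len]
    reference = np.mean(same_len, axis=0) if same_len else pulses[0]
    dists = np.array([dtw_distance(p, reference) for p in pulses])
    if threshold is None:
        threshold = dists.mean() + 2 * dists.std()
    return dists > threshold, dists

# Toy series: nine similar pulses and one distorted outlier
t = np.linspace(0, 1, 50)
pulses = [np.sin(np.pi * t) + 0.02 * np.random.randn(50) for _ in range(9)]
pulses.append(np.sin(3 * np.pi * t))
flags, dists = flag_outlier_pulses(pulses)
print(flags)
```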
257 An Optimal Feature Subset Selection for Leaf Analysis
Authors: N. Valliammal, S.N. Geethalakshmi
Abstract:
This paper describes an optimal approach for feature subset selection to classify leaves, based on a Genetic Algorithm (GA) and Kernel-Based Principal Component Analysis (KPCA). Due to the high complexity of selecting the optimal features, classification has become a critical task in analysing leaf image data. Initially, shape, texture and colour features are extracted from the leaf images. These extracted features are optimized separately by the GA and KPCA. The approach then performs an intersection operation over the subsets obtained from the optimization process. Finally, the most common matching subset is forwarded to train the Support Vector Machine (SVM). Our experimental results successfully prove that the application of GA and KPCA for feature subset selection with an SVM classifier is computationally effective and improves the accuracy of the classifier.
Keywords: Optimization, feature extraction, feature subset, classification, GA, KPCA, SVM, computation.