Search results for: vector processing.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2238

1398 A Study of Under Actuator Dynamic System by Comparing between Minimum Energy and Minimum Jerk Problems

Authors: Tawiwat V., Phermsak S., Noppasit C.

Abstract:

This paper deals with under-actuated dynamic systems, such as a spring-mass-damper system, in which the number of control variables is less than the number of state variables. Before optimal control can be applied, controllability must be checked. Many objective functions can be selected as the goal of the optimal control, such as minimum energy, maximum energy and minimum jerk. Since only one objective function can take first priority, a secondary goal often cannot be fitted into the objective function format without resorting to a vector-valued cost. To keep the formulation simple, this paper illustrates the under-actuated dynamic system problem by comparing the minimum energy and minimum jerk formulations.
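For reference, a minimal sketch of the two cost functionals usually meant by these terms, in generic notation assumed here rather than taken from the paper: with control input u(t) and position x(t) over a horizon [0, T],

\[
J_{\text{energy}} = \int_0^T u^2(t)\,dt,
\qquad
J_{\text{jerk}} = \int_0^T \big(\dddot{x}(t)\big)^2\,dt,
\]

each minimized subject to the under-actuated system dynamics and the boundary conditions; the comparison amounts to solving the optimal control problem once with each functional and contrasting the resulting trajectories.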

Keywords: Under actuator, Dynamic optimal control, Minimum jerk, Minimum energy.

1397 SVM Based Model as an Optimal Classifier for the Classification of Sonar Signals

Authors: Suresh S. Salankar, Balasaheb M. Patre

Abstract:

Research into the classification of sonar signals has been taken up as a challenging task for neural networks. This paper investigates the design of an optimal classifier using a Multilayer Perceptron Neural Network (MLP NN) and Support Vector Machines (SVM). Results obtained using sonar data sets suggest that the SVM classifier performs well in comparison with the well-known MLP NN classifier. An average classification accuracy of 91.974% is achieved with the SVM classifier and 90.3609% with the MLP NN classifier on the test instances. The area under the Receiver Operating Characteristics (ROC) curve for the proposed SVM classifier on the test data set is found to be 0.981183, which is very close to unity and clearly confirms the excellent quality of the proposed classifier. The SVM classifier employed in this paper is implemented using the kernel Adatron algorithm and is seen to be robust and relatively insensitive to parameter initialization in comparison to the MLP NN.
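A minimal sketch of this kind of comparison, assuming a generic RBF-kernel SVM from scikit-learn and synthetic data in place of the sonar set (the paper's kernel Adatron implementation is not reproduced here):

```python
# Minimal sketch: RBF-kernel SVM vs. an MLP baseline on synthetic 2-class data
# standing in for the sonar set (not the authors' kernel Adatron implementation).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=208, n_features=60, n_informative=20,
                           random_state=0)          # sonar-like dimensions
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

for name, clf in [("SVM", svm), ("MLP", mlp)]:
    acc = accuracy_score(y_te, clf.predict(X_te))
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: accuracy={acc:.3f}, ROC AUC={auc:.3f}")
```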

Keywords: Classification, MLP NN, backpropagation algorithm, SVM, Receiver Operating Characteristics.

1396 Analytical Comparison of Conventional Algorithms with Vedic Algorithm for Digital Multiplier

Authors: Akhilesh G. Naik, Dipankar Pal

Abstract:

In today’s scenario, the complexity of digital signal processing (DSP) applications and of various microcontroller architectures has been increasing to such an extent that the traditional approaches to multiplier design in most processors are becoming outdated for being comparatively slow. Modern processing applications require suitable pipelined approaches, and therefore algorithms that are friendlier to pipelined architectures. Traditional algorithms and architectures such as the Wallace Tree, Radix-4 Booth, Radix-8 Booth and Dadda have proven to be comparatively slow for pipelined architectures. These architectures therefore need to be optimized, or combined with one another, to enhance their performance and make them suitable for pipelined hardware. Recently, the Vedic algorithm has been shown mathematically to be efficient, being less complex and requiring fewer steps to establish its output, and has assumed renewed importance. This paper describes and shows how the Vedic algorithm is better suited to pipelined architectures and how it can be combined with traditional architectures and algorithms to enhance its ability even further. We also establish that for complex applications on DSP and other microcontroller architectures, the Vedic approach to multiplication proves to be the most efficient option available.
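For illustration, a minimal sketch of the Urdhva Tiryagbhyam ("vertically and crosswise") pattern on which Vedic multipliers are commonly based; digit lists and base-10 arithmetic are assumptions for readability, whereas a hardware multiplier would work on binary partial products:

```python
# Minimal sketch of the Urdhva Tiryagbhyam ("vertically and crosswise") pattern
# behind Vedic multipliers: all cross products of one digit column are
# generated in parallel, then carries are resolved.  Digit lists are
# little-endian (least significant digit first); base 10 here, base 2 in hardware.
def vedic_multiply(a, b, base=10):
    n, m = len(a), len(b)
    cols = [0] * (n + m - 1)
    for i in range(n):                 # "crosswise": every a[i]*b[j] with i+j=k
        for j in range(m):
            cols[i + j] += a[i] * b[j]
    out, carry = [], 0                 # carry propagation (ripple step)
    for s in cols:
        s += carry
        out.append(s % base)
        carry = s // base
    while carry:
        out.append(carry % base)
        carry //= base
    return out

# 123 * 45 = 5535  ->  digits [5, 3, 5, 5] little-endian
print(vedic_multiply([3, 2, 1], [5, 4]))
```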

Keywords: Wallace tree, Radix-4 Booth, Radix-8 Booth, Dadda, Vedic, Single-Stage Karatsuba, Looped Karatsuba.

1395 Organization Model of Semantic Document Repository and Search Techniques for Studying Information Technology

Authors: Nhon Do, Thuong Huynh, An Pham

Abstract:

Nowadays, organizing a repository of documents and resources for learning in a specialized field such as Information Technology (IT), together with search techniques based on domain knowledge or on document content, is an urgent need in the practice of teaching, learning and research. There have been several works on methods of organization and search by content; however, the results are still limited and insufficient to meet users' demand for semantic document retrieval. This paper presents a solution for organizing a repository that supports semantic representation and processing in search. The proposed solution is a model that integrates components such as an ontology describing domain knowledge, a document repository database, semantic representations for documents and a file system, together with semantic processing techniques and advanced search techniques based on measuring semantic similarity. The solution is applied to build an IT learning materials management system for a university, with a semantic search function serving students, teachers and managers alike. The application has been implemented and tested at the University of Information Technology, Ho Chi Minh City, Vietnam, and has achieved good results.
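As one hedged illustration of measuring semantic similarity over a domain ontology, a Wu-Palmer-style concept similarity on a toy IT concept hierarchy is sketched below; the paper's actual ontology model and similarity measure are not reproduced here.

```python
# Minimal sketch of one way to measure concept similarity over a small domain
# ontology (a Wu-Palmer-style measure on a toy IT concept tree).  This is an
# assumed illustration, not the paper's measure.
parent = {                      # child -> parent in a toy concept hierarchy
    "Programming": "IT", "Databases": "IT", "Networks": "IT",
    "Python": "Programming", "Java": "Programming",
    "SQL": "Databases", "NoSQL": "Databases",
}

def depth(c):
    d = 1
    while c in parent:
        c, d = parent[c], d + 1
    return d

def ancestors(c):
    out = [c]
    while c in parent:
        c = parent[c]
        out.append(c)
    return out

def wu_palmer(a, b):
    anc_a = ancestors(a)
    lca = next(c for c in ancestors(b) if c in anc_a)   # lowest common ancestor
    return 2 * depth(lca) / (depth(a) + depth(b))

print(wu_palmer("Python", "Java"))   # ~0.67 (siblings under Programming)
print(wu_palmer("Python", "SQL"))    # ~0.33 (related only through IT)
```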

Keywords: document retrieval system, knowledge representation, document representation, semantic search, ontology.

1394 Tidal Data Analysis using ANN

Authors: Ritu Vijay, Rekha Govil

Abstract:

The design of a complete expansion that allows a compact representation of certain relevant classes of signals is a central problem in signal processing applications. Achieving such a representation means knowing the signal features for the purposes of denoising, classification, interpolation and forecasting. Multilayer neural networks are a relatively new class of techniques that are mathematically proven to approximate any continuous function arbitrarily well. Radial Basis Function (RBF) networks, which make use of a Gaussian activation function, are also shown to be universal approximators. In this age of ever-increasing digitization in the storage, processing, analysis and communication of information, there are numerous applications where one needs to construct a continuously defined function or numerical algorithm to approximate, represent and reconstruct the given discrete data of a signal. Often one wishes to manipulate the data in a way that requires information not included explicitly in the data, which is done through interpolation and/or extrapolation. Tidal data are a good example of a time series, and many statistical techniques have been applied to tidal data analysis and representation; ANNs are a recent addition to these techniques. In the present paper we describe the time series representation capabilities of a special type of ANN, the Radial Basis Function network, and present the results of tidal data representation using RBF networks. Tidal data analysis and representation is one of the important requirements in marine science for forecasting.
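A minimal sketch of the representation step, assuming Gaussian RBFs with fixed centres and a linear least-squares fit of the output weights on a synthetic tide-like series (not the measured tidal records used in the paper):

```python
# Minimal sketch of Gaussian RBF representation of a tide-like time series
# (synthetic data; the paper uses measured tidal records).
import numpy as np

t = np.linspace(0, 48, 200)                       # hours
tide = 1.2*np.sin(2*np.pi*t/12.42) + 0.4*np.sin(2*np.pi*t/24.0)   # toy tide

centers = np.linspace(t.min(), t.max(), 25)       # RBF centres on a grid
width = 2.0                                       # Gaussian width (assumed)
Phi = np.exp(-((t[:, None] - centers[None, :])**2) / (2*width**2))

w, *_ = np.linalg.lstsq(Phi, tide, rcond=None)    # linear output weights
approx = Phi @ w
print("max abs error:", np.max(np.abs(approx - tide)))
```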

Keywords: ANN, RBF, Tidal Data.

1393 Influence of Microstructural Features on Wear Resistance of Biomedical Titanium Materials

Authors: Mohsin T. Mohammed, Zahid A. Khan, Arshad N. Siddiquee

Abstract:

The field of biomedical materials plays an imperative and critical role in manufacturing a variety of artificial biological replacements in the modern world. Recently, titanium (Ti) materials have been used as biomaterials because of their superior corrosion resistance, tremendous specific strength, freedom from allergic problems and the greatest biocompatibility compared to competing biomaterials such as stainless steel, Co-Cr alloys, ceramics, polymers, and composite materials. However, regardless of these excellent properties, implantable Ti materials have poor shear strength and wear resistance, which limits their applications as biomaterials. Even though the wear properties of Ti alloys have shown some improvement, the effective use of biomedical Ti alloys as wear components requires a comprehensive understanding of the wear causes, mechanisms, and techniques that can be used to improve wear behavior. This review examines current information on the effect of thermal and thermomechanical processing of implantable Ti materials on the long-term prosthetic requirements related to wear behavior. The paper focuses mainly on the evolution, evaluation and development of effective microstructural features that can improve the wear properties of bio-grade Ti materials using thermal and thermomechanical treatments.

Keywords: Wear Resistance, Heat Treatment, Thermomechanical Processing, Biomedical Titanium Materials.

1392 Identification of Cardiac Arrhythmias using Natural Resonance Complex Frequencies

Authors: Moustafa A. Bani-Hasan, Yasser M. Kadah, Fatma M. El-Hefnawi

Abstract:

An electrocardiogram (ECG) feature extraction system based on the calculation of complex resonance frequencies employing Prony's method is developed. Prony's method is applied to five different classes of ECG arrhythmia signals, modelling each as a finite sum of exponentials that depends on the signal's poles and resonant complex frequencies. These poles and resonance frequencies are evaluated for a large number of records of each arrhythmia. The ECG signals of lead II (ML II) were taken from the MIT-BIH database for five different types: ventricular couplet (VC), ventricular tachycardia (VT), ventricular bigeminy (VB), ventricular fibrillation (VF) and normal rhythm (NR). This novel method can be extended to any number of arrhythmias. Different classification techniques were tried, using neural networks (NN), K nearest neighbors (KNN), linear discriminant analysis (LDA) and a multi-class support vector machine (MC-SVM).
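A minimal numpy sketch of Prony's method on a synthetic two-mode signal, illustrating how the complex resonance frequencies (poles) are recovered; the paper's ECG preprocessing and MIT-BIH data handling are not reproduced:

```python
# Minimal sketch of Prony's method: fit a signal as a finite sum of complex
# exponentials and recover its natural resonance (complex) frequencies.
# Synthetic test signal; not the authors' ECG pipeline or MIT-BIH data.
import numpy as np

def prony(x, p, dt):
    N = len(x)
    # 1) linear-prediction coefficients (least squares)
    A = np.column_stack([x[p - 1 - m : N - 1 - m] for m in range(p)])
    a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
    # 2) poles = roots of the characteristic polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    s = np.log(z) / dt                       # complex resonance frequencies
    # 3) amplitudes from a Vandermonde least-squares fit
    V = np.vander(z, N, increasing=True).T
    amp = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return s, amp

dt = 0.01
n = np.arange(200)
x = np.exp(-0.5*n*dt)*np.cos(2*np.pi*5*n*dt) + 0.5*np.exp(-1.0*n*dt)*np.cos(2*np.pi*9*n*dt)
s, amp = prony(x, p=4, dt=dt)
print(np.sort_complex(s))   # expect poles near -0.5 +/- j2*pi*5 and -1.0 +/- j2*pi*9
```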

Keywords: Arrhythmias analysis, electrocardiogram, feature extraction, statistical classifiers.

1391 Simulink Model of Reference Frame Theory Based Three Phase Shunt Active Filter

Authors: P. Nammalvar, P. Meganathan, A. Balamuguran

Abstract:

Among various active filters, the shunt active filter is a viable solution for reactive power and harmonics compensation. In this paper, the synchronous reference frame (SRF) scheme is used to generate the current reference for compensation, and conventional PI controllers are used to compensate the reactive power. The design of the closed-loop controllers is kept simple by modeling them as first-order systems. A computationally simple and efficient space vector modulation (SVM) scheme is used in the present work for better utilization of the DC bus voltage. The rating of the shunt active filter has been finalized based on the reactive power demand of the selected reactive load. The proposed control and SVM technique are validated by simulation in MATLAB.
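For reference, a minimal sketch of the abc-to-dq (synchronous reference frame) transformation on which SRF current-reference generation relies; the amplitude-invariant convention is assumed here, and conventions differ between texts:

```python
# Minimal sketch of the abc -> dq (synchronous reference frame) transform that
# the SRF method relies on: at the synchronous angle theta, the fundamental
# positive-sequence component appears as a DC quantity in the d-q frame.
# Amplitude-invariant form assumed.
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    d = (2/3) * (ia*np.cos(theta)
                 + ib*np.cos(theta - 2*np.pi/3)
                 + ic*np.cos(theta + 2*np.pi/3))
    q = -(2/3) * (ia*np.sin(theta)
                  + ib*np.sin(theta - 2*np.pi/3)
                  + ic*np.sin(theta + 2*np.pi/3))
    return d, q

# balanced fundamental currents of amplitude 10 A mapped to the rotating frame
t = np.linspace(0, 0.04, 400)
theta = 2*np.pi*50*t
ia = 10*np.cos(theta)
ib = 10*np.cos(theta - 2*np.pi/3)
ic = 10*np.cos(theta + 2*np.pi/3)
d, q = abc_to_dq(ia, ib, ic, theta)
print(d.mean(), q.mean())   # expect d ~ 10 (DC), q ~ 0
```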

Keywords: Shunt Active Filter, Space vector pulse width modulation, Voltage Source Converter, Reactive Power, Synchronous Reference Frame, Point of common coupling.

1390 Reduced Order Modeling of Natural Gas Transient Flow in Pipelines

Authors: M. Behbahani-Nejad, Y. Shekari

Abstract:

A reduced order modeling approach for natural gas transient flow in pipelines is presented. The Euler equations are considered as the governing equations and solved numerically using the implicit Steger-Warming flux vector splitting method. Next, the linearized form of the equations is derived and the corresponding eigensystem is obtained. Then, a few dominant flow eigenmodes are used to construct an efficient reduced-order model. A well-known test case is presented to demonstrate the accuracy and the computational efficiency of the proposed method. The results obtained are in good agreement with those of the direct numerical method and field data. Moreover, it is shown that the present reduced-order model is more efficient than the conventional numerical techniques for transient flow analysis of natural gas in pipelines.
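For reference, a hedged sketch of the Steger-Warming splitting idea in generic one-dimensional notation (not necessarily the exact formulation used in the paper): writing the Euler system as

\[
\frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} = 0,
\qquad
F(U) = A(U)\,U,
\quad
A = \frac{\partial F}{\partial U} = R\,\Lambda\,R^{-1},
\]
\[
\Lambda^{\pm} = \tfrac{1}{2}\left(\Lambda \pm |\Lambda|\right),
\qquad
F^{\pm} = R\,\Lambda^{\pm}R^{-1}\,U,
\qquad
F = F^{+} + F^{-},
\]

so that F+ (non-negative wave speeds) is differenced backward and F- forward in the implicit scheme; the reduced-order model is then obtained by projecting the linearized system onto a few dominant eigenmodes of the resulting discrete operator.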

Keywords: Eigenmode, Natural Gas, Reduced Order Modeling, Transient Flow.

1389 Exploiting Kinetic and Kinematic Data to Plot Cyclograms for Managing the Rehabilitation Process of BKAs by Applying Neural Networks

Authors: L. Parisi

Abstract:

Kinematic data correlate vector quantities in space with scalar parameters in time to assess the degree of symmetry between the intact limb and the amputated limb with respect to a normal model derived from the gait of control group participants. Furthermore, these data allow a doctor to preliminarily evaluate the usefulness of a certain rehabilitation therapy. Kinetic curves allow the analysis of ground reaction forces (GRFs) to assess the appropriateness of human motion. Electromyography (EMG) allows the analysis of the fundamental lower limb force contributions to quantify the level of gait asymmetry. However, the use of this technological tool is expensive and requires the patient's hospitalization. This research work suggests overcoming the above limitations by applying artificial neural networks.

Keywords: Kinetics, kinematics, cyclograms, neural networks.

1388 Pricing Strategy Selection Using Fuzzy Linear Programming

Authors: Elif Alaybeyoğlu, Y. Esra Albayrak

Abstract:

Marketing establishes a communication network between producers and consumers. Nowadays, the marketing approach is customer-focused and products are directly oriented to meet customer needs. Marketing, which is a long process, needs organization and management; therefore, strategic marketing planning becomes more and more important in today’s competitive conditions. The main focus of this paper is to evaluate pricing strategies and select the best pricing strategy while considering the internal and external factors influencing the company’s pricing decisions associated with new product development. To reflect the decision maker’s subjective preference information and to determine the weight vector of the factors (attributes), the fuzzy linear programming technique for multidimensional analysis of preference (LINMAP) under intuitionistic fuzzy (IF) environments is used.

Keywords: IF Sets, LINMAP, MAGDM, Marketing.

1387 Using Spectral Vectors and M-Tree for Graph Clustering and Searching in Graph Databases of Protein Structures

Authors: Do Phuc, Nguyen Thi Kim Phung

Abstract:

In this paper, we represent protein structures by using graphs, so that a protein structure database becomes a graph database. Each graph is represented by a spectral vector. We use the Jacobi rotation algorithm to calculate the eigenvalues of the normalized Laplacian of the graph's adjacency matrix. To measure the similarity between two graphs, we calculate the Euclidean distance between their spectral vectors. To cluster the graphs, we use an M-tree with the Euclidean distance to cluster the spectral vectors. In addition, the M-tree can be used for graph searching in the graph database. Our proposed method was tested on a graph database of 100 graphs representing 100 protein structures downloaded from the Protein Data Bank (PDB), and we compare the results with the SCOP hierarchical structure.
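A minimal sketch of the spectral-vector idea, assuming toy adjacency matrices of equal size in place of protein graphs and numpy's symmetric eigensolver in place of the Jacobi rotation routine:

```python
# Minimal sketch: spectral vector of a graph = sorted eigenvalues of its
# normalized Laplacian; graphs are then compared by Euclidean distance
# between spectral vectors.  Equal-sized toy graphs; unequal sizes would need
# a fixed-length spectral vector (e.g. truncation or padding).
import numpy as np

def spectral_vector(A):
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0/np.sqrt(deg), 0.0)
    L = np.eye(len(A)) - (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
    return np.sort(np.linalg.eigvalsh(L))      # symmetric -> real spectrum

A1 = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], float)  # toy graph 1
A2 = np.array([[0,1,0,0],[1,0,1,1],[0,1,0,1],[0,1,1,0]], float)  # toy graph 2

v1, v2 = spectral_vector(A1), spectral_vector(A2)
print("distance:", np.linalg.norm(v1 - v2))
```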

Keywords: Eigenvalues, m-tree, graph database, protein structure, spectral graph theory.

1386 Computation of Probability Coefficients using Binary Decision Diagram and their Application in Test Vector Generation

Authors: Ashutosh Kumar Singh, Anand Mohan

Abstract:

This paper deals with the efficient computation of probability coefficients, which offers computational simplicity compared to spectral coefficients. It eliminates the need for inner product evaluations in determining the signature of a combinational circuit realizing a given Boolean function. Methods for computing the probability coefficients using a transform matrix, a fast transform method and a binary decision diagram (BDD) are given. Theoretical relations for the achievable computational advantage, in terms of the additions required to compute all 2^n probability coefficients of an n-variable function, have been developed. It is shown that for n ≥ 5, only 50% of the additions are needed to compute all probability coefficients as compared to spectral coefficients. Fault detection techniques based on the spectral signature can also be used with the probability signature to offer a computational advantage.
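For illustration, a hedged sketch of one natural probability analogue of the spectral coefficients, computed by exhaustive truth-table enumeration; the paper's exact definition may differ, and it uses BDDs precisely to avoid this 2^n enumeration:

```python
# Minimal sketch: probability analogues of spectral coefficients for a Boolean
# function, computed by exhaustive truth-table enumeration.  For each input
# subset S, p_S = Prob[f(x) != parity of x over S] under uniform random inputs;
# it relates to the Walsh coefficient r_S by r_S = 1 - 2*p_S.  (Assumed
# definition for illustration only.)
from itertools import product

def probability_coefficients(f, n):
    coeffs = {}
    inputs = list(product((0, 1), repeat=n))
    for s in range(2 ** n):                          # bitmask selecting subset S
        mismatches = 0
        for x in inputs:
            parity = 0
            for i in range(n):
                if s >> i & 1:
                    parity ^= x[i]
            mismatches += f(x) ^ parity
        coeffs[s] = mismatches / 2 ** n
    return coeffs

maj = lambda x: int(x[0] + x[1] + x[2] >= 2)         # 3-input majority function
for s, p in probability_coefficients(maj, 3).items():
    print(f"S={s:03b}  p_S={p:.3f}")
```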

Keywords: Binary Decision Diagrams, Spectral Coefficients, Fault detection

1385 Application of Medium High Hydrostatic Pressure in Preserving Textural Quality and Safety of Pineapple Compote

Authors: Nazim Uddin, Yohiko Nakaura, Kazutaka Yamamoto

Abstract:

Compote (fruit in syrup) of pineapple (Ananas comosus L. Merrill) is expected to have high market potential as one of the convenient ready-to-eat (RTE) foods worldwide. High hydrostatic pressure (HHP) in combination with low temperature (LT) was applied to the processing of pineapple compote, as well as medium HHP (MHHP) in combination with medium-high temperature (MHT), since both processes can enhance liquid impregnation and inactivate microbes. The MHHP+MHT (55 or 65 °C) process, as well as the HHP+LT process, successfully inactivated the microbes in the compote to a non-detectable level. Although the compotes processed by MHHP+MHT or HHP+LT lost their fresh texture in a similar manner to those processed solely by heat, it was indicated that the texture degradation caused by heat was suppressed under MHHP. The degassing process reduced the hardness, while calcium (Ca) contributed to retaining hardness in the MHT and MHHP+MHT processes. Electrical impedance measurements supported the damage due to degassing and heat. The color, Brix, and appearance were not significantly affected by the processing methods. MHHP+MHT and HHP+LT processes may be applicable to produce high-quality, safe RTE pineapple compotes. Further studies on the optimization of packaging and storage conditions will be indispensable for commercialization.

Keywords: Compote of pineapple, ready-to-eat, medium high hydrostatic pressure, postharvest loss, and texture.

1384 Fast Complex Valued Time Delay Neural Networks

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

Here, a new idea to speed up the operation of complex-valued time delay neural networks is presented. The whole data set is collected together in a long vector and then tested as one input pattern. The proposed fast complex-valued time delay neural networks use cross-correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically that the number of computation steps required by the presented fast complex-valued time delay neural networks is less than that needed by classical time delay neural networks. Simulation results using MATLAB confirm the theoretical computations.
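A minimal numpy sketch of the core identity being exploited: cross-correlating a long complex-valued input with a weight vector directly and via the frequency domain gives the same result, but the FFT route needs far fewer operations for long inputs:

```python
# Minimal sketch: cross-correlation of a long complex-valued input with a
# short complex weight vector, computed directly and via the frequency domain
# (FFT); both give the same result up to floating-point error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096) + 1j*rng.standard_normal(4096)   # long input
w = rng.standard_normal(64) + 1j*rng.standard_normal(64)       # weights

# direct (valid-mode) cross-correlation: sum_k conj(w[k]) * x[n+k]
direct = np.correlate(x, w, mode="valid")

# same thing through the frequency domain (circular correlation, then trimmed)
N = len(x)
fast = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(w, N)))[: N - len(w) + 1]

print(np.allclose(direct, fast))   # True
```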

Keywords: Fast Complex Valued Time Delay Neural Networks, Cross Correlation, Frequency Domain

1383 Comparison of Different Data Acquisition Techniques for Shape Optimization Problems

Authors: Attila Vámosi, Tamás Mankovits, Dávid Huri, Imre Kocsis, Tamás Szabó

Abstract:

Non-linear FEM calculations are indispensable when important technical information, such as the operating performance of a rubber component, is desired. For example, rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself is non-linear behavior. The changing contact range between the parts and the incompressibility of the rubber increase this non-linearity further. The material characterization of an elastomeric component is also a demanding engineering task. The shape optimization problem of rubber parts has led to the study of FEM-based calculation processes, and this type of problem has been posed and investigated by several authors. In this paper, the time demand of certain calculation methods is studied and possibilities for time reduction are presented.

Keywords: Rubber bumper, data acquisition, finite element analysis, support vector regression.

1382 Investigation on Feature Extraction and Classification of Medical Images

Authors: P. Gnanasekar, A. Nagappan, S. Sharavanan, O. Saravanan, D. Vinodkumar, T. Elayabharathi, G. Karthik

Abstract:

In this paper we present a detailed study of biomedical images and tag them with some basic extracted features (e.g., color, pixel value). The classification is done using a nearest neighbor classifier with various distance measures, as well as an automatic combination of the classifier results. This process selects a subset of relevant features from a group of features of the image. It also helps to acquire a better understanding of the image by describing which features are important. The accuracy can be improved by increasing the number of features selected. Various types of classifiers have been developed for medical images, such as the Support Vector Machine (SVM), which is used for classifying bacterial types, and the Ant Colony Optimization method, which is used for optimal results and has high approximation capability and much faster convergence; texture feature extraction is based on Gabor wavelets.
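A minimal sketch of nearest-neighbour classification under several distance measures with a simple majority-vote combination, using synthetic feature vectors in place of the extracted image features (parameter choices are assumptions, not the paper's):

```python
# Minimal sketch: nearest-neighbour classification of feature vectors under
# different distance measures, with a per-sample majority vote combining the
# per-metric results.  Synthetic features stand in for the image features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=12, n_classes=3,
                           n_informative=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

preds = []
for metric in ("euclidean", "manhattan", "chebyshev"):
    knn = KNeighborsClassifier(n_neighbors=5, metric=metric).fit(X_tr, y_tr)
    preds.append(knn.predict(X_te))
    print(metric, knn.score(X_te, y_te))

# automatic combination of classifier results: per-sample majority vote
combined = np.array([np.bincount(col).argmax() for col in np.array(preds).T])
print("combined accuracy:", (combined == y_te).mean())
```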

Keywords: ACO Ant Colony Optimization, Correlogram, CCM Co-Occurrence Matrix, RTS Rough-Set theory

1381 Advanced Information Extraction with n-gram based LSI

Authors: Ahmet Güven, Ö. Özgür Bozkurt, Oya Kalıpsız

Abstract:

The number of documents being created increases at an ever faster pace, while most of them cover already known topics and few of them introduce new concepts. This has started a new era in the information retrieval discipline with its own specialized requirements: digging into topics and concepts and finding subtopics or relations between topics. Until now, IR research has focused on retrieving documents about a general topic or clustering documents under generic subjects. However, these conventional approaches cannot go deep into the content of documents, which makes it difficult for people to reach the right documents they are searching for. So we need new ways of mining document sets, where the critical point is knowing much about the contents of the documents. As a solution, we propose to enhance LSI, one of the proven IR techniques, by supporting its vector space with n-gram forms of words. The positive results we have obtained are shown in two different application areas of the IR domain: querying a document database and clustering the documents in it.
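A minimal sketch of the n-gram-supported LSI idea using scikit-learn, with a toy corpus and assumed parameter choices rather than the paper's settings:

```python
# Minimal sketch of n-gram-supported LSI: documents are vectorized with
# character n-grams instead of whole words only, then projected into a latent
# semantic space via truncated SVD.  Toy corpus and parameters are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "latent semantic indexing for information retrieval",
    "document clustering with latent semantic analysis",
    "support vector machines for text classification",
    "n-gram language models for text retrieval",
]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))  # character n-grams
X = vec.fit_transform(docs)

lsi = TruncatedSVD(n_components=2, random_state=0)             # the "LSI" step
Z = lsi.fit_transform(X)

q = lsi.transform(vec.transform(["semantic retrieval of documents"]))
print(cosine_similarity(q, Z))    # similarity of the query to each document
```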

Keywords: Document clustering, Information Extraction, Information Retrieval, LSI, n-gram.

1380 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets Using an Open-Source Energy System Optimization Model

Authors: A. Balbo, G. Colucci, M. Nicoli, L. Savoldi

Abstract:

Hydrogen is expected to become an undisputed player in the ecological transition throughout the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called “hard-to-abate” sectors, including the industrial production of iron and steel, glass, refineries and heavy-duty transport. In this regard, Italy, in the framework of decarbonization plans for the whole European Union, has been considering a wider use of hydrogen to provide an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options concerning the pathway to be followed in the development of the future Italian energy system in order to meet the decarbonization targets established by the Paris Agreement and by the European Green Deal, and to provide a techno-economic analysis of the asset alternatives required in that perspective. To accomplish this objective, the Energy System Optimization Model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies in a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040 and 2050, without any other specific target. The second one, inspired by the national objectives on the development of the sector, promotes the deployment of the hydrogen value chain. These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry and synfuel production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements in various steps of the decarbonization pathway, for which Carbon Capture and Utilization technologies appear to be a fundamental element. In line with the European Commission open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.

Keywords: Decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA.

1379 Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques

Authors: Hossein Nezamabadi-pour, Saeid Saryazdi

Abstract:

In this paper, we present a new and effective image indexing technique that extracts features directly in the DCT domain. Our proposed approach is object-based image indexing. For each 8×8 block in the DCT domain, a feature vector is extracted. Then, the feature vectors of all blocks of the image are clustered into groups using a k-means algorithm; each cluster represents a particular object in the image. We then select the clusters with the largest membership after clustering. The centroids of the selected clusters are taken as the image feature vectors and indexed into the database. We also propose an approach for using the proposed image indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.
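A minimal sketch of the indexing pipeline, assuming a random grayscale array in place of a real image and arbitrary parameter choices for the block DCT, the number of retained coefficients, the number of clusters and the number of selected centroids:

```python
# Minimal sketch of the indexing idea: split a grayscale image into 8x8 blocks,
# take a few low-frequency 2-D DCT coefficients of each block as its feature
# vector, cluster the block features with k-means, and keep the centroids of
# the largest clusters as the image signature.  Parameters are placeholders.
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = rng.random((64, 64))                       # stand-in for a real image

feats = []
for r in range(0, img.shape[0], 8):
    for c in range(0, img.shape[1], 8):
        block = dctn(img[r:r+8, c:c+8], norm="ortho")
        feats.append(block[:3, :3].ravel())      # 9 low-frequency coefficients
feats = np.array(feats)

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(feats)
sizes = np.bincount(km.labels_)
top = np.argsort(sizes)[::-1][:3]                # 3 most populated clusters
signature = km.cluster_centers_[top]             # indexed into the database
print(signature.shape)                           # (3, 9)
```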

Keywords: Object-based image retrieval, DCT domain, Image indexing, Image classification.

1378 Modelling Export Dynamics in the CSEE Countries Using GVAR Model

Authors: S. Jakšić, B. Žmuk

Abstract:

The paper investigates the key factors of export dynamics for a set of Central and Southeast European (CSEE) countries in the context of the current economic and financial crisis. In order to model the export dynamics, a Global Vector Autoregressive (GVAR) model is defined. As opposed to models which treat each country separately, the GVAR combines all country models into a global model, which makes it possible to obtain important information on spillover effects in the context of globalisation and rising international linkages. The results of the study indicate that for most of the CSEE countries, exports are mainly driven by domestic shocks, both in the short run and in the long run. This study is the first application of the GVAR model to the export dynamics of the CSEE countries, and its results therefore present an important empirical contribution.

Keywords: Export, GFEVD, Global VAR, International trade, weak exogeneity.

1377 The First Integral Approach in Stability Problem of Large Scale Nonlinear Dynamical Systems

Authors: M. Kidouche, H. Habbi, M. Zelmat, S. Grouni

Abstract:

In analyzing large scale nonlinear dynamical systems, it is often desirable to treat the overall system as a collection of interconnected subsystems. Solution properties of the large scale system are then deduced from the solution properties of the individual subsystems and the nature of the interconnections. In this paper a new approach is proposed for the stability analysis of large scale systems, based upon the concept of vector Lyapunov functions and decomposition methods. The present results make use of graph-theoretic decomposition techniques in which the overall system is partitioned into a hierarchy of strongly connected components. We then show that, under very reasonable assumptions, the overall system is stable once the strongly connected subsystems are stable. Finally, an example is given to illustrate the constructive methodology proposed.
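A minimal sketch of the graph-theoretic decomposition step, assuming a toy interconnection digraph and the networkx routines for strongly connected components and their condensation (the paper's example system is not reproduced):

```python
# Minimal sketch of the decomposition step: partition the interconnection
# digraph of subsystems into strongly connected components (SCCs); stability
# is then argued SCC by SCC along the resulting acyclic hierarchy.
import networkx as nx

# edge i -> j means "subsystem i feeds subsystem j through an interconnection"
G = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 4), (4, 3), (4, 5)])

sccs = list(nx.strongly_connected_components(G))
print("strongly connected components:", sccs)      # {1,2}, {3,4}, {5}

# condensation: the acyclic hierarchy of SCCs used for the stability argument
H = nx.condensation(G)
print("hierarchy (topological order):", list(nx.topological_sort(H)))
```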

Keywords: Comparison principle, First integral, Large scale system, Lyapunov stability.

1376 Hospital Facility Location Selection Using Permanent Analytics Process

Authors: C. Ardil

Abstract:

In this paper, a new MCDMA approach, the permanent analytics process, is proposed to assess the immovable valuation criteria and their significance in the placement of a healthcare facility. Five decision factors are considered for the valuation and selection of immovables. In the multiple factor selection problem, the priority vector of the criteria used to compare several immovables is first determined using the permanent analytics method, a mathematical model for the multiple criteria decision-making process. Then, to demonstrate the viability and efficacy of the suggested approach, twenty potential candidate locations were evaluated using the hospital site selection problem's decision criteria. The ranking accuracy of the estimation was evaluated using composite programming, which took into account both the permanent analytics process and the weighted multiplicative model.
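The name suggests the method builds on the matrix permanent; assuming so, a minimal sketch of computing a permanent directly from its definition is given below, and only the permanent computation itself (not the paper's full ranking procedure) is illustrated:

```python
# Minimal sketch: the permanent of a (pairwise-comparison) matrix, computed
# straight from its definition, perm(A) = sum over permutations sigma of
# prod_i A[i, sigma(i)].  Toy matrix; the full permanent analytics procedure
# is not reproduced here.
from itertools import permutations
import numpy as np

def permanent(A):
    n = len(A)
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

# toy 3x3 reciprocal comparison matrix over three criteria
A = np.array([[1.0, 2.0, 0.5],
              [0.5, 1.0, 0.25],
              [2.0, 4.0, 1.0]])
print(permanent(A))   # 6.0 for this perfectly consistent matrix
```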

Keywords: Hospital Facility Location Selection, Permanent Analytics Process, Multiple Criteria Decision Making (MCDM)

1375 Building a Trend Based Segmentation Method with SVR Model for Stock Turning Detection

Authors: Jheng-Long Wu, Pei-Chann Chang, Yi-Fang Pan

Abstract:

This research focuses on developing a new segmentation method, called the trend-based segmentation method (TBSM), for improving forecasting models. Generally, the piece-wise linear representation (PLR) can find suitable pairs of trading points for time series data, but in a complicated stock environment it does not work well for stock forecasting because the stock exhibits many trading trends. Considering the trading trends of the stock price in the trading signal improves the precision of the forecasting model. Therefore, a TBSM with an SVR model is used to detect the trading points of various Taiwanese and American stocks under different trend tendencies. The experimental results show that our trading system is more profitable and can be implemented in the stock market in real time.
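A minimal sketch of fitting an SVR on lagged prices of a synthetic series, the kind of regression component such a system could build on; the TBSM segmentation itself and the authors' trading rules are not reproduced:

```python
# Minimal sketch: support vector regression (SVR) fitted on lagged prices to
# predict the next value, from which simple turning points could be flagged.
# Synthetic price series and parameters are placeholders, not the TBSM setup.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
price = np.cumsum(rng.standard_normal(300)) + 100      # toy price series

lags = 5
X = np.array([price[i:i+lags] for i in range(len(price) - lags)])
y = price[lags:]

svr = SVR(kernel="rbf", C=10.0).fit(X[:-20], y[:-20])
pred = svr.predict(X[-20:])
print(np.mean(np.abs(pred - y[-20:])))                 # out-of-sample MAE
```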

Keywords: Trend based segmentation method, support vector machine, turning detection, stock forecasting.

1374 The Fiscal-Monetary Policy and Economic Growth in Algeria: VECM Approach

Authors: K. Bokreta, D. Benanaya

Abstract:

The objective of this study is to examine the relative effectiveness of monetary and fiscal policy in Algeria using the econometric modelling techniques of cointegration and vector error correction modelling to analyse and draw policy inferences. The chosen variables of fiscal policy are government expenditure and net taxes on products, while the effect of monetary policy is represented by the inflation rate and the official exchange rate. From the results, we find that in the long run the impact of government expenditure on growth is positive, while the effect of taxes is negative. Additionally, we find that the inflation rate has little effect on GDP per capita and that the impact of the exchange rate is insignificant. We conclude that fiscal policy is more powerful than monetary policy in promoting economic growth in Algeria.
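A minimal statsmodels sketch of estimating a VECM on synthetic cointegrated series; the variable names, lag order and cointegration rank are placeholders, not the Algerian data or the paper's specification:

```python
# Minimal sketch of a VECM estimation on synthetic cointegrated series using
# statsmodels; assumed placeholder setup, not the paper's model.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(3)
n = 200
common = np.cumsum(rng.standard_normal(n))            # shared stochastic trend
gdp  = common + rng.standard_normal(n) * 0.3
gexp = 0.8 * common + rng.standard_normal(n) * 0.3    # "government expenditure"
data = pd.DataFrame({"gdp": gdp, "gexp": gexp})

model = VECM(data, k_ar_diff=1, coint_rank=1)         # 1 cointegrating relation
res = model.fit()
print(res.alpha)    # adjustment (loading) coefficients
print(res.beta)     # cointegrating vector
```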

Keywords: Economic growth, fiscal policy, monetary policy, VECM.

1373 A Stereo Vision System for Top View Book Scanners

Authors: Erik Lilienblum, Robert Niese, Bernd Michaelis

Abstract:

This paper proposes a novel stereo vision technique for top view book scanners which provides dense 3D point clouds of page surfaces. This is a precondition for dewarping bound volumes independently of 2D information on the page. Our method is based on algorithms which normally require the projection of pattern sequences with structured light. We use image sequences of the moving stripe lighting of the top view scanner instead of an additional light projection; thus the stereo vision setup is simplified without losing measurement accuracy. Furthermore, we improve a surface-model dewarping method by introducing a difference vector based on real measurements. Although our proposed method is inexpensive in both calculation time and hardware requirements, we obtain good dewarping results even for difficult examples.

Keywords: stereo vision, 3d surface reconstruction, dewarping documents, book scanner

1372 Hot Workability of High Strength Low Alloy Steels

Authors: Seok Hong Min, Jung Ho Moon, Woo Young Jung, Tae Kwon Ha

Abstract:

The hot deformation behavior of high strength low alloy (HSLA) steels with different chemical compositions under hot working conditions in the temperature range of 900 to 1100 °C and the strain rate range from 0.1 to 10 s⁻¹ has been studied by performing a series of hot compression tests. The dynamic materials model has been employed to develop the processing maps, which show the variation of the efficiency of power dissipation with temperature and strain rate. Kumar's model has also been used to develop the instability map, which shows the variation of plastic instability with temperature and strain rate. The efficiency of power dissipation increased with decreasing strain rate and increasing temperature in the steel with higher Cr and Ti content. A high efficiency of power dissipation, over 20%, was obtained at a finite strain level of 0.1 under conditions of strain rate lower than 1 s⁻¹ and temperature higher than 1050 °C. Plastic instability is expected in the regime of temperatures lower than 1000 °C and strain rates lower than 0.3 s⁻¹. The steel with lower Cr and Ti contents showed high efficiency of power dissipation at higher strain rates and lower temperatures.
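For reference, the standard dynamic-materials-model quantities that underlie such processing maps (the exact instability criterion attributed to Kumar in the paper is not reproduced here):

\[
m = \left.\frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}\right|_{\varepsilon,\,T},
\qquad
\eta = \frac{2m}{m+1},
\]

where \(\sigma\) is the flow stress, \(\dot{\varepsilon}\) the strain rate, \(m\) the strain rate sensitivity and \(\eta\) the efficiency of power dissipation plotted over the temperature-strain rate domain; the instability map marks regions where an instability parameter derived from \(m\) becomes negative.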

Keywords: High strength low alloys steels, hot workability, Dynamic materials model, Processing maps.

1371 Performance of an Improved Fluidized System for Processing Green Tea

Authors: Nickson Kipng’etich Lang’at, Thomas Thoruwa, John Abraham, John Wanyoko

Abstract:

Green tea is made from the top two leaves and buds of a shrub, Camellia sinensis, of the family Theaceae and the order Theales. The green tea leaves are picked and immediately sent to be dried or steamed to prevent fermentation. Fluid bed drying is a common method for drying green tea because of its ease of design and construction and the fluidization of fine tea particles. Major problems with this method are significant loss of the chemical content and green appearance of the leaf, retention of high moisture content in the leaves, and bed channeling and defluidization. The energy associated with the drying technology has been shown to be a vital factor in determining the quality of green tea. As part of the implementation, a prototype dryer was built that facilitated a sequence of operations involving steaming, cooling, pre-drying and final drying. The major findings of the project were in terms of the quality characteristics of the tea leaves and the energy consumption during processing. The optimal design achieved a moisture content of 4.2 ± 0.84%. At the optimum drying temperature of 100 °C, the specific energy consumption was 1697.8 kJ·kg⁻¹ and the evaporation rate was 4.272 × 10⁻⁴ kg·m⁻²·s⁻¹. The energy consumption in a fluidized system can be further reduced by focusing on energy-saving designs.

Keywords: Evaporation rate, fluid bed dryer, maceration, specific energy consumption.

1370 Timescape-Based Panoramic View for Historic Landmarks

Authors: H. Ali, A. Whitehead

Abstract:

Providing a panoramic view of famous landmarks around the world offers artistic and historic value for historians, tourists, and researchers. Exploring the history of famous landmarks by presenting a comprehensive view of a temporal panorama merged with geographical and historical information presents the unique challenge of dealing with images that span a long period, from the 1800s up to the present. This work presents the concept of a temporal panorama through a timeline display of aligned historic and modern images for many famous landmarks. Building this panorama requires a collection of hundreds of thousands of landmark images from the Internet, comprising both historic images and modern images of the digital age. These images have to be classified for subset selection in order to keep the more suitable images that chronologically document a landmark’s history. Processing historic images captured using older analog technology under a variety of capture conditions represents a big challenge when they have to be used together with modern digital images. Successful processing of historic images to prepare them for the next steps of temporal panorama creation represents an active contribution to cultural heritage preservation, fulfilling one of UNESCO's goals in preserving and displaying famous worldwide landmarks.

Keywords: Cultural heritage, image registration, image subset selection, registered image similarity, temporal panorama, timescapes.

1369 A Study to Assess the Energy Saving Potential and Economic Analysis of an Agro Based Industry in Karnataka, India

Authors: Sangamesh G. Sakri, Akash N. Patil, Sadashivappa M. Kotli

Abstract:

Agro based industries in India are considered micro, small and medium enterprises (MSME). In India, MSMEs contribute approximately 8 percent of the country’s GDP, 42 percent of the manufacturing output and 40 percent of exports. Toor dal (scientific name Cajanus cajan, commonly known as yellow gram or pigeon pea) is the second largest pulse crop in India, accounting for about 20% of total pulse production. The toor dal milling industry in India is one of the major agro-processing industries in the country. Most of the dal mills are concentrated in pulse producing areas, which are spread all over the country. In Karnataka state, Gulbarga is a district where toor dal is the main crop and is grown extensively. There are more than 500 dal mills in and around the Gulbarga district to process dal. However, the majority of these dal milling units use traditional methods of processing which are energy and capital intensive. There exists a huge energy saving potential in these mills. An energy audit is conducted on a dal mill in Gulbarga to understand the energy consumption pattern and assess the energy saving potential, and an economic analysis is conducted to identify energy conservation opportunities.

Keywords: Conservation, demand side management, load curve, toor dal.
