Search results for: Hybrid evolutionary optimization algorithm
147 A Novel Hopfield Neural Network for Perfect Calculation of Magnetic Resonance Spectroscopy
Authors: Hazem M. El-Bakry
Abstract:
In this paper, an automatic determination algorithm for nuclear magnetic resonance (NMR) spectra of the metabolites in the living body by magnetic resonance spectroscopy (MRS) without human intervention or complicated calculations is presented. In this method, the problem of NMR spectrum determination is transformed into the determination of the parameters of a mathematical model of the NMR signal. To calculate these parameters efficiently, a new model called the modified Hopfield neural network is designed. The main achievement of this paper over the work in the literature [30] is that the speed of the modified Hopfield neural network is accelerated. This is done by applying cross correlation in the frequency domain between the input values and the input weights. The modified Hopfield neural network can process complex signals perfectly without any additional computation steps. This is a valuable advantage, as NMR signals are complex-valued. In addition, a technique called "modified sequential extension of section (MSES)" that takes into account the damping rate of the NMR signal is developed to be faster than that presented in [30]. Simulation results show that the calculation precision of the spectrum improves when MSES is used along with the neural network. Furthermore, MSES is found to reduce the local minimum problem in Hopfield neural networks. Moreover, the performance of the proposed method is evaluated, and the modified Hopfield neural network is found to have no adverse effect on the accuracy of the calculations.
Keywords: Hopfield Neural Networks, Cross Correlation, Nuclear Magnetic Resonance, Magnetic Resonance Spectroscopy, Fast Fourier Transform.
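The speed-up claimed above rests on a standard identity: cross-correlation computed in the frequency domain matches the direct time-domain result while costing O(n log n) instead of O(n^2). A minimal sketch of that identity (the general principle, not the authors' network itself) follows.

```python
import numpy as np

def xcorr_fft(x, w):
    # Cross-correlation via the frequency domain:
    # corr(x, w) = IFFT(FFT(x) * conj(FFT(w))), zero-padded to full length.
    n = len(x) + len(w) - 1
    c = np.fft.ifft(np.fft.fft(x, n) * np.conj(np.fft.fft(w, n))).real
    return np.roll(c, len(w) - 1)   # align circular lags with 'full' mode

x = np.random.randn(1024)           # input values
w = np.random.randn(64)             # input weights
assert np.allclose(xcorr_fft(x, w), np.correlate(x, w, mode="full"))
print("frequency-domain and direct cross-correlation agree")
```

For complex-valued inputs, as NMR signals are, the same identity holds once the `.real` cast is dropped.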
146 Prediction of the Epileptic Events 'Epileptic Seizures' by Neural Networks and Expert Systems
Authors: Kifah Tout, Nisrine Sinno, Mohamad Mikati
Abstract:
Many studies have focused on the nonlinear analysis of electroencephalography (EEG), mainly for the characterization of epileptic brain states. It is assumed that at least two states of the epileptic brain are possible: the interictal state, characterized by a normal, apparently random, steady-state EEG ongoing activity; and the ictal state, characterized by the paroxysmal occurrence of synchronous oscillations and generally called, in neurology, a seizure. The spatial and temporal dynamics of the epileptogenic process are still not completely clear, especially for the most challenging aspect of epileptology, the anticipation of the seizure. Despite all efforts, we still do not know how, when, and why a seizure occurs. However, current studies bring strong evidence that the interictal-ictal state transition is not an abrupt phenomenon. Findings also indicate that it is possible to detect a preseizure phase. Our approach is to use the neural network tool to detect interictal states and to predict from those states the upcoming seizure (ictal state). Analysis of the EEG signal based on neural networks is used for the classification of EEG as either seizure or non-seizure. By applying prediction methods, it will be possible to predict the upcoming seizure from non-seizure EEG. We will study patients admitted to the epilepsy monitoring unit for the purpose of recording their seizures. Preictal, ictal, and postictal EEG recordings are available on such patients for analysis. The system will be trained by taking one body of samples and validated using another. A third body of samples, distinct from the first two, is taken to test the network for the achievement of optimum prediction. Several methods will be tried, including backpropagation ANN and RBF networks.
Keywords: Artificial neural network (ANN), automatic prediction, epileptic seizures analysis, genetic algorithm.
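As a rough illustration of the classification step described above, the sketch below trains a backpropagation MLP on synthetic feature vectors standing in for EEG windows; the three-way split mirrors the train/validate/test bodies of samples. All data and layer sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 400
X_inter = rng.normal(0.0, 1.0, (n, 8))   # interictal-like feature windows
X_ictal = rng.normal(1.5, 1.2, (n, 8))   # ictal-like feature windows
X = np.vstack([X_inter, X_ictal])
y = np.array([0] * n + [1] * n)          # 0 = non-seizure, 1 = seizure

# three bodies of samples: train, validate, and a distinct test set
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=1)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=1)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1)
clf.fit(X_tr, y_tr)
print("validation accuracy:", clf.score(X_val, y_val))
print("test accuracy      :", clf.score(X_te, y_te))
```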
145 Decision Support System for Flood Crisis Management using Artificial Neural Network
Authors: Muhammad Aqil, Ichiro Kita, Akira Yano, Nishiyama Soichi
Abstract:
This paper presents an alternate approach that uses an artificial neural network to simulate the flood level dynamics in a river basin. The algorithm was developed in a decision support system environment in order to enable users to process the data. The decision support system is found to be useful due to its interactive nature, flexibility in approach, and evolving graphical features, and can be adopted for any similar situation to predict the flood level. The main data processing includes the gauging station selection, input generation, lead-time selection/generation, and length of prediction. This program enables users to process the flood level data, to train/test the model using various inputs, and to visualize results. The program code consists of a set of files, which can be modified to match other purposes. The running results indicate that the decision support system applied to the flood level seems to have reached encouraging results for the river basin under examination. The comparison of the model predictions with the observed data was satisfactory, where the model is able to forecast the flood level up to 5 hours in advance with reasonable prediction accuracy. Finally, this program may also serve as a tool for real-time flood monitoring and process control.
Keywords: Decision Support System, Neural Network, Flood Level.
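A hedged sketch of the lead-time forecasting idea (lagged gauge readings in, the level several hours ahead out) follows; the hourly level series, lag count, and network size are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.arange(2000)                     # synthetic hourly flood-level record
level = 2.0 + np.sin(2 * np.pi * t / 240) + 0.1 * rng.standard_normal(2000)

lags, lead = 6, 5                       # 6 past hours in, 5-hour-ahead target
X = np.array([level[i - lags:i] for i in range(lags, len(level) - lead)])
y = level[lags + lead:]

split = int(0.8 * len(X))               # train on earlier hours, test on later
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("R^2 on held-out hours:", model.score(X[split:], y[split:]))
```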
144 A Propagator Method like Algorithm for Estimation of Multiple Real-Valued Sinusoidal Signal Frequencies
Authors: Sambit Prasad Kar, P. Palanisamy
Abstract:
In this paper, a novel method for multiple one-dimensional real-valued sinusoidal signal frequency estimation in the presence of additive Gaussian noise is postulated. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine the subspace-based technique and a simple peak search approach. This paper presents a variant of the Propagator Method (PM), where a collaborative approach of SUMWE (subspace-based method without eigendecomposition) and the propagator method is applied in order to estimate the multiple real-valued sine wave frequencies. A new data model is proposed in which the dimension of the signal subspace equals the number of frequencies present in the observation, whereas in the conventional MUSIC method for estimating frequencies of real-valued sinusoidal signals the signal subspace dimension is twice the number of frequencies. The statistical analysis of the proposed method is studied, and the explicit expression of the asymptotic (large-sample) mean-squared error (MSE), or variance of the estimation error, is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated through numerical examples. The proposed method can achieve consistently high estimation accuracy and frequency resolution at lower SNR, which is verified by simulations comparing it with conventional MUSIC, ESPRIT, and the Propagator Method.
Keywords: Frequency estimation, peak search, subspace-based method without eigen decomposition, quadratic convex function.
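To make the subspace-plus-peak-search idea concrete, here is a minimal sketch that estimates real sinusoid frequencies from a pseudospectrum; for brevity it uses standard MUSIC with an eigendecomposition, whereas the paper's propagator/SUMWE approach specifically avoids that step. Window length, grid size, and the test signal are illustrative.

```python
import numpy as np

def music_freqs(x, q, L=32, grid=4096):
    # Sliding-window data matrix and sample covariance
    N = len(x)
    Y = np.array([x[i:i + L] for i in range(N - L + 1)]).T
    R = Y @ Y.T / Y.shape[1]
    _, vecs = np.linalg.eigh(R)
    # Each real sinusoid spans two complex exponentials: signal dim = 2q
    En = vecs[:, :L - 2 * q]                          # noise subspace
    f = np.linspace(0.0, 0.5, grid)
    a = np.exp(-2j * np.pi * np.outer(np.arange(L), f))
    pseudo = 1.0 / np.sum(np.abs(En.T @ a) ** 2, axis=0)
    # Simple peak search: the q largest local maxima
    peaks = [i for i in range(1, grid - 1)
             if pseudo[i - 1] < pseudo[i] > pseudo[i + 1]]
    peaks = sorted(peaks, key=lambda i: pseudo[i], reverse=True)[:q]
    return np.sort(f[peaks])

t = np.arange(512)
x = (np.sin(2 * np.pi * 0.10 * t) + 0.8 * np.sin(2 * np.pi * 0.23 * t + 1.0)
     + 0.2 * np.random.randn(512))
print(music_freqs(x, q=2))    # expected near [0.10, 0.23]
```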
143 Cold Flow Investigation of Primary Zone Characteristics in Combustor Utilizing Axial Air Swirler
Authors: Yehia A. Eldrainy, Mohammad Nazri Mohd. Jaafar, Tholudin Mat Lazim
Abstract:
This paper presents a cold flow simulation study of a small gas turbine combustor performed using a laboratory-scale test rig. The main objective of this investigation is to obtain physical insight into the main vortex, responsible for the efficient mixing of fuel and air. Such models are necessary for predictions and optimization of real gas turbine combustors. An air swirler can control the combustor performance by assisting in the fuel-air mixing process and by producing a recirculation region which can act as a flame holder and influence residence time. Thus, proper selection of a swirler is needed to enhance combustor performance and to reduce NOx emissions. Three different axial air swirlers were used based on their vane angles, i.e., 30°, 45°, and 60°. Three-dimensional, viscous, turbulent, isothermal flow characteristics of the combustor model operating at room temperature were simulated via a Reynolds-Averaged Navier-Stokes (RANS) code. The model geometry has been created using a solid modeler, and the meshing has been done using the GAMBIT preprocessing package. Finally, the solution and analysis were carried out in the FLUENT solver. This serves to demonstrate the capability of the code for design and analysis of a real combustor. The effects of swirlers and mass flow rate were examined. Details of the complex flow structure, such as vortices and recirculation zones, were obtained by the simulation model. The computational model predicts a major recirculation zone in the central region immediately downstream of the fuel nozzle and a second recirculation zone in the upstream corner of the combustion chamber. It is also shown that changes in swirler angle have significant effects on the combustor flowfield as well as pressure losses.
Keywords: Cold flow, numerical simulation, combustor, turbulence, axial swirler.
142 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has been traditionally very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on the future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and PowerBI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful to predict hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) will be presented, and results and performance metrics discussed.
Keywords: Time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning.
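A hedged sketch of the boosted-tree forecasting setup follows, using scikit-learn's gradient boosting as an open-source stand-in for the Azure ML Boosted Decision Tree module; the lagged-load and weather features are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
hours = np.arange(24 * 200)                          # ~200 days, hourly
temp = 15 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, len(hours))
load = (50 + 0.8 * temp + 10 * np.sin(2 * np.pi * hours / 24)
        + rng.normal(0, 2, len(hours)))

lag = 24                                             # previous-day load as feature
X = np.column_stack([load[:-lag], temp[lag:], hours[lag:] % 24])
y = load[lag:]

split = int(0.8 * len(X))
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])
print("R^2 on held-out hours:", model.score(X[split:], y[split:]))
```

Dropping the temperature column and refitting gives a quick check of how much the weather feature contributes, in the spirit of the finding above.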
141 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer
Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved
Abstract:
Background: Skin cancer has become a pressing topic in medical science, and the rising incidence of skin lesions is seriously affecting health and well-being worldwide. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, since the stored image contains artifacts. The proposed approach first locates the relevant part of the extracted skin image, and image partitioning (segmentation) models are presented to sort out the disturbance in the picture. Results: After partitioning, feature extraction is performed using a genetic algorithm (GA), and finally classification is carried out between the trained and test data to evaluate images at large scale, which helps doctors make the right prediction. To improve on the existing system, we set our objectives around the efficiency of the natural selection process and histogram enrichment, and the GA is applied to reduce the false-positive rate. Conclusions: The objective of this work is to improve effectiveness, and the GA accomplishes its task of bringing down the false-positive rate. The concluding part of the paper deals with the combination of deep learning and medical image processing, which provides superior accuracy; such combined processing also supports reusability with fewer errors.
Keywords: Computer-aided system, detection, image segmentation, morphology.
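A hedged sketch of GA-driven feature selection of the kind invoked above: bitstring individuals encode which features enter a classifier, and cross-validated accuracy serves as fitness. Population size, rates, and the synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 20))
y = ((X[:, 2] + X[:, 5] - X[:, 11]) > 0).astype(int)   # 3 informative features

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    cols = mask.astype(bool)
    return cross_val_score(LogisticRegression(max_iter=500), X[:, cols], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, 20))                # random initial population
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]            # selection: keep best half
    cuts = rng.integers(1, 19, size=10)
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cuts)])  # one-point crossover
    children[rng.random(children.shape) < 0.05] ^= 1    # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```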
140 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography
Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi
Abstract:
Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for early detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue by using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes the light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue, such as the absorption coefficient, scattering coefficient, and optical flux, are processed by the standard regularization technique called Levenberg-Marquardt regularization. The reconstruction algorithms Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. The parameters signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE), and CPU time for reconstructing images are analyzed to assess performance.
Keywords: Diffuse optical tomography, ill-posedness, Levenberg Marquardt method, Split Bregman, the Gradient projection for sparse reconstruction.
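The GPSR method named above targets an l1-regularized least-squares objective; a minimal proximal-gradient (ISTA) sketch of that same objective is shown below as a stand-in for the full DOT reconstruction pipeline, with a random forward operator as a placeholder.

```python
import numpy as np

def ista(A, b, lam=0.1, iters=500):
    # Solves  min_x 0.5*||A x - b||^2 + lam*||x||_1  by proximal gradient
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L      # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 200))             # placeholder forward model
x_true = np.zeros(200)
x_true[[20, 90, 150]] = [1.0, -0.7, 0.5]   # sparse inclusion, "tumour"-like
b = A @ x_true + 0.01 * rng.normal(size=80)
x_hat = ista(A, b)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```

Split Bregman attacks a similar objective through variable splitting; the abstract reports it performing better than GPSR for this application.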
139 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks
Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin
Abstract:
Least Developed Countries (LDCs) like Bangladesh, which earn 25% of their revenue from textile exports, need to produce less defective textiles to minimize production cost and time. Inspection processes in these industries are mostly manual and time-consuming. Reducing errors in identifying fabric defects requires a more automated and accurate inspection process. Considering this gap, this research implements a Textile Defect Recognizer which uses computer vision methodology with the combination of multi-layer neural networks to identify four classifications of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at economical cost and provides a less error-prone inspection system in real time. In order to generate the input set for the neural network, the recognizer first captures digital fabric images by an image acquisition device and converts the RGB images into binary images by restoration and local threshold techniques. Later, the outputs of the processed image (the area of the faulty portion, the number of objects in the image, and the sharpness factor of the image) are fed as the input layer to the neural network, which uses the backpropagation algorithm to compute the weighting factors and generate the desired classifications of defects as output.
Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.
138 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images
Authors: SP. Chokkalingam, K. Komathy
Abstract:
Advances in the field of image processing envision a new era of evaluation techniques and application of procedures in various different fields, one of which is the biomedical field for prognosis as well as diagnosis of diseases. Although this plethora of methods provides a wide range of options to select from, it also creates confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Amongst other techniques existing in the field, our proposed system tends to be more effective as it depends on new methodologies that have proved to be better and more consistent than others. Computer-aided diagnosis provides a more accurate and consistent rate of detection that will help to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge-following algorithm, and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. Using preprocessing, noise is removed from the images; using segmentation, the region of interest is found; and histogram smoothing is applied to a specific portion of the images. Gray-level co-occurrence matrix (GLCM) features such as mean, median, energy, and correlation, along with bone mineral density (BMD), are then extracted and stored in the database. This dataset is trained with inflamed and non-inflamed values, and with the help of a neural network all new images are checked properly for their status, while rough set theory is implemented for further reduction.
Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.
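A hedged sketch of the GLCM feature step follows, using scikit-image (recent releases spell the functions graycomatrix/graycoprops; older ones use the 'grey' spelling). The 8-bit patch is synthetic, and BMD would in practice come from densitometry rather than the GLCM.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in bone ROI

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("energy", "correlation", "contrast", "homogeneity")}
features["mean"] = float(patch.mean())        # first-order statistics
features["median"] = float(np.median(patch))
print(features)                               # rows like these go to the database
```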
137 Optimization of Samarium Extraction via Nanofluid-Based Emulsion Liquid Membrane Using Cyanex 272 as Mobile Carrier
Authors: Maliheh Raji, Hossein Abolghasemi, Jaber Safdari, Ali Kargari
Abstract:
Samarium as a rare-earth element is playing an increasingly important role in high technology. Traditional methods for extraction of rare earth metals, such as ion exchange and solvent extraction, have the disadvantages of high investment and high energy consumption. The emulsion liquid membrane (ELM), as an improved solvent extraction technique, is an effective transport method for separation of various compounds from aqueous solutions. In this work, the extraction of samarium from aqueous solutions by ELM was investigated using response surface methodology (RSM). The organic membrane phase of the ELM was a nanofluid consisting of multiwalled carbon nanotubes (MWCNT), Span 80 as surfactant, Cyanex 272 as mobile carrier, and kerosene as base fluid. A 1 M nitric acid solution was used as the internal aqueous phase. The effects of the important process parameters on samarium extraction were investigated, and the values of these parameters were optimized using the Central Composite Design (CCD) of RSM. These parameters were the concentration of MWCNT in the nanofluid, the carrier concentration, and the volume ratio of organic membrane phase to internal phase (Roi). The three-dimensional (3D) response surfaces of samarium extraction efficiency were obtained to visualize the individual and interactive effects of the process variables. A regression model for % extraction was developed, and its adequacy was evaluated. The results show that % extraction improves by using MWCNT nanofluid in the organic membrane phase, and an extraction efficiency of 98.92% can be achieved under the optimum conditions. In addition, demulsification was successfully performed, and the recycled membrane phase proved to be effective under the optimum conditions.
Keywords: Cyanex 272, emulsion liquid membrane, multiwalled carbon nanotubes, nanofluid, response surface methodology, Samarium.
136 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications
Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber
Abstract:
Availability of high-dimensional biological datasets, such as from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate between predefined classes, such as patients with a special disease versus healthy controls. However, most of the existing research only focuses on a specific dataset. There is a lack of generic comparison between classifiers, which might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, based on mock datasets. We mimic common biological scenarios, simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC compared to other methods. This may be explained by the ability of SVM to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, strongly depend on the ratio of discriminators and perform better with a higher number of discriminators.
Keywords: Classification, high dimensional data, machine learning.
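The comparison protocol above is easy to reproduce in outline: build a mock high-dimensional dataset with a small fraction of informative features and score each classifier by cross-validated AUC. The proportions and effect size below are illustrative choices, not the paper's exact simulation design.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# few real discriminators among many noise features, as in the mock datasets
X, y = make_classification(n_samples=200, n_features=1000, n_informative=20,
                           n_redundant=0, class_sep=0.8, random_state=0)

models = {
    "SVM": SVC(kernel="linear"),
    "LogisticRegression": LogisticRegression(max_iter=2000),
    "k-NN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:>18}: AUC = {auc:.3f}")
```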
135 Development of Manufacturing Simulation Model for Semiconductor Fabrication
Authors: Syahril Ridzuan Ab Rahim, Ibrahim Ahmad, Mohd Azizi Chik, Ahmad Zafir Md. Rejab, and U. Hashim
Abstract:
This research presents the development of simulation modeling for WIP management in semiconductor fabrication. Manufacturing simulation modeling is needed for productivity optimization analysis due to complex process flows, in which more than 35 percent of processing steps are re-entrant, revisiting the same equipment more than 15 times. Furthermore, semiconductor fabrication is required to produce a high product mix, with total processing steps varying from 300 to 800 and cycle times between 30 and 70 days. Besides this complexity, the expensive wafer cost, which potentially impacts the company profit margin once a due date is missed, is another motivation to explore simulation modeling for analysis. In this paper, the simulation model is developed using the existing commercial software platform AutoSched AP, with customized integration with Manufacturing Execution Systems (MES) and the Advanced Productivity Family (APF) for the data collection used to configure the model parameters and data source. Model parameters such as processing step cycle times, equipment performance, handling time, and operator efficiency are collected through this customization. Once the parameters are validated, a few customizations are made to ensure the model executes properly. The accuracy of the simulation model is validated against the actual output per day for all equipment. The comparison of the simulation model results with actual output achieved 95 percent accuracy over 30 days. This model was later used to perform various what-if analyses to understand impacts on cycle time and overall output. By using this simulation model, complex manufacturing environments like semiconductor fabrication (fab) now have an alternative source of validation for impact analysis of any new requirements.
Keywords: Advanced Productivity Family (APF), Complementary Metal Oxide Semiconductor (CMOS), Manufacturing Execution Systems (MES), Work In Progress (WIP).
134 Optimization of Acid Treatments by Assessing Diversion Strategies in Carbonate and Sandstone Formations
Authors: Ragi Poyyara, Vijaya Patnana, Mohammed Alam
Abstract:
When acid is pumped into damaged reservoirs for damage removal/stimulation, distorted inflow of acid into the formation occurs, caused by acid preferentially traveling into highly permeable regions over low-permeability regions, or (in general) into the path of least resistance. This can lead to poor zonal coverage and hence warrants diversion to carry out an effective placement of acid. Diversion is desirably a reversible technique of temporarily reducing the permeability of high-permeability zones, thereby forcing the acid into lower-permeability zones. The uniqueness of each reservoir can pose several challenges to engineers attempting to devise optimum and effective diversion strategies. Diversion techniques include mechanical placement and/or chemical diversion of treatment fluids, further sub-classified into ball sealers, bridge plugs, packers, particulate diverters, viscous gels, crosslinked gels, relative permeability modifiers (RPMs), foams, and/or the use of placement techniques, such as coiled tubing (CT) and the maximum pressure difference and injection rate (MAPDIR) methodology. It is not always realized that the effectiveness of diverters greatly depends on reservoir properties, such as formation type, temperature, reservoir permeability, heterogeneity, and physical well characteristics (e.g., completion type, well deviation, length of treatment interval, multiple intervals, etc.). This paper reviews the mechanisms by which each variety of diverter functions and discusses the effect of various reservoir properties on the efficiency of diversion techniques. Guidelines are recommended to help enhance productivity from zones of interest by choosing the best methods of diversion while pumping an optimized amount of treatment fluid. The success of an overall acid treatment often depends on the effectiveness of the diverting agents.
Keywords: Acid treatment, carbonate, diversion, sandstone.
133 A Spatial Repetitive Controller Applied to an Aeroelastic Model for Wind Turbines
Authors: Riccardo Fratini, Riccardo Santini, Jacopo Serafini, Massimo Gennaretti, Stefano Panzieri
Abstract:
This paper presents a nonlinear differential model for a three-bladed horizontal axis wind turbine (HAWT) suited for control applications. It is based on an 8-dof, lumped-parameter structural dynamics coupled with a quasi-steady sectional aerodynamics. In particular, using the Euler-Lagrange equation (energetic variation approach), the authors derive, and successively validate, such a model. For the derivation of the aerodynamic model, Greenberg's theory, an extension of the theory proposed by Theodorsen to the case of thin airfoils undergoing pulsating flows, is used. Specifically, in this work, the authors restricted that theory under the hypothesis of low perturbation reduced frequency k, which causes the lift deficiency function C(k) to be real and equal to 1. Furthermore, the expressions of the aerodynamic loads are obtained using the quasi-steady strip theory (Hodges and Ormiston), as a function of the chordwise and normal components of relative velocity between flow and airfoil Ut, Up, their derivatives, and the section angular velocity ε˙. For the validation of the proposed model, the authors carried out open- and closed-loop simulations of a 5 MW HAWT, characterized by radius R = 61.5 m and mean chord c = 3 m, with a nominal angular velocity Ωn = 1.266 rad/s. The first analysis performed is the steady-state solution, where a uniform wind Vw = 11.4 m/s is considered and a collective pitch angle θ = 0.88° is imposed. During this step, the authors noticed that the proposed model is intrinsically periodic due to the effect of the wind and of the gravitational force. In order to reject this periodic trend in the model dynamics, the authors propose a collective repetitive control algorithm coupled with a PD controller. In particular, when the reference command to be tracked and/or the disturbance to be rejected are periodic signals with a fixed period, repetitive control strategies can be applied due to their high precision, simple implementation, and little performance dependency on system parameters. The functional scheme of a repetitive controller is quite simple and, given a periodic reference command, is composed of a control block Crc(s) usually added to an existing feedback control system. The control block contains a free time-delay system e^(−τs) in a positive feedback loop, and a low-pass filter q(s). It should be noticed that, while the time-delay term reduces the stability margin, the low-pass filter is added to ensure stability. It is worth noting that, in this work, the authors propose a phase shifting for the controller, and the delay system has been modified as e^(−(T−γk)s), where T is the period of the signal and γk is a phase shift of k samples of the same periodic signal. It should be noticed that the phase-shifting technique is particularly useful in non-minimum-phase systems, such as flexible structures. In fact, using the phase shifting, the iterative algorithm can reach convergence also at high frequencies. Notice that, in our case study, the shifting of k samples depends both on the rotor angular velocity Ω and on the rotor azimuth angle Ψ: we refer to this controller as a spatial repetitive controller. The collective repetitive controller has also been coupled with C(s) = PD(s), in order to dampen oscillations of the blades. The performance of the spatial repetitive controller is compared with an industrial PI controller.
In particular, starting from a wind speed Vw = 11.4 m/s, the controller is asked to maintain the nominal angular velocity Ωn = 1.266 rad/s after an instantaneous increase of wind speed (Vw = 15 m/s). Then, a purely periodic external disturbance is introduced in order to stress the capabilities of the repetitive controller. The results of the simulations show that, contrary to a simple PI controller, the spatial repetitive-PD controller has the capability to reject both external disturbances and the periodic trend in the model dynamics. Finally, the nominal value of the angular velocity is reached, in accordance with results obtained with commercial software for a turbine of the same type.
Keywords: Wind turbines, aeroelasticity, repetitive control, periodic systems.
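A toy discrete-time sketch of the repetitive-plus-PD structure described above follows: a delay line holding one period of control action, a leakage factor playing the role of the low-pass filter q(s), and a PD term, all regulating a simple stable first-order plant against a periodic disturbance. Plant, gains, and period are illustrative assumptions, not the aeroelastic model.

```python
import numpy as np

N = 100                       # samples per disturbance period
kr, q = 0.5, 0.95             # repetitive gain and low-pass (leakage) factor
kp, kd = 2.0, 0.1             # PD gains
a, b = 0.95, 0.05             # plant: y[k+1] = a*y[k] + b*u[k] + d[k]

y, e_prev = 0.0, 0.0
u_rc = np.zeros(N)            # internal model: one period of RC action
errs = []
for k in range(60 * N):
    d = 0.2 * np.sin(2 * np.pi * k / N)   # periodic disturbance
    e = 0.0 - y                           # regulate output to zero
    i = k % N
    u_rc[i] = q * u_rc[i] + kr * e        # update uses value one period back
    u = kp * e + kd * (e - e_prev) + u_rc[i]
    y = a * y + b * u + d
    e_prev = e
    errs.append(abs(e))

print("mean |e| over first period:", np.mean(errs[:N]))
print("mean |e| over last period :", np.mean(errs[-N:]))
```

With the repetitive term disabled (kr = 0), the PD loop alone leaves a persistent periodic error, mirroring the PI-versus-repetitive comparison reported above.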
132 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems
Authors: Nyeng P. Gyang
Abstract:
Even though past, current, and future trends suggest that multicore and cloud computing systems are increasingly prevalent/ubiquitous, this class of parallel systems is nonetheless underutilized, in general, and barely used for research on employing parallel Delaunay triangulation for parallel surface modeling and generation, in particular. The performances of actual/physical and virtual/cloud multicore systems/machines at executing various algorithms, which implement various parallelization strategies of the incremental insertion technique of the Delaunay triangulation algorithm, were evaluated. T-tests were run on the data collected, in order to determine whether various performance metric differences (including execution time, speedup, and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs for the various parallelization strategies. Results, which furnish the scalability behaviors of the various parallelization strategies, also show that some of the differences between the performances of these systems, during different runs of the algorithms on the systems, were statistically significant. A few pseudo-superlinear speedup results, which were computed from the raw data collected, are not true superlinear speedup values. These pseudo-superlinear speedup values, which arise as a result of one way of computing speedups, disappear and give way to asymmetric speedups, which are the accurate kind of speedups that occur in the experiments performed.
Keywords: Cloud computing systems, multicore systems, parallel Delaunay triangulation, parallel surface modeling and generation.
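A hedged sketch of the metric and significance computations named above (speedup S = T1/Tp, efficiency E = S/p, and a t-test across repeated runs); all timing numbers are made up for illustration.

```python
import numpy as np
from scipy import stats

p = 4                                                    # cores in the parallel runs
actual_Tp = np.array([10.2, 10.5, 10.1, 10.4, 10.3])     # s, physical machine
virtual_Tp = np.array([20.9, 21.4, 20.7, 21.1, 21.6])    # s, cloud VM
T1 = {"actual": 38.0, "virtual": 77.5}                   # serial baselines, s

for name, Tp in [("actual", actual_Tp), ("virtual", virtual_Tp)]:
    S = T1[name] / Tp.mean()                             # speedup
    print(f"{name}: speedup = {S:.2f}, efficiency = {S / p:.2f}")

t, pval = stats.ttest_ind(actual_Tp, virtual_Tp, equal_var=False)
print(f"Welch t-test on run times: t = {t:.2f}, p = {pval:.4g}")
```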
131 Text Mining Technique for Data Mining Application
Authors: M. Govindarajan
Abstract:
Text mining is about applying knowledge discovery techniques to unstructured text; this is termed knowledge discovery in text (KDT), text data mining, or text mining. The decision tree approach is most useful in classification problems. With this technique, a tree is constructed to model the classification process. There are two basic steps in the technique: building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that adds rulesets, cross-validation, and boosting to the original C5.0 in order to reduce the error ratio. The feasibility and the benefits of the proposed approach are demonstrated by means of a medical data set, hypothyroid. It is shown that the performance of a classifier on the training cases from which it was constructed gives a poor estimate of its accuracy on new cases; by sampling or using a separate test file, either way, the classifier is evaluated on cases that were not used to build it, which gives a reliable estimate when both sets are large. If the cases in hypothyroid.data and hypothyroid.test were to be shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets. The ruleset has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of results. One way to get a more reliable estimate of predictive accuracy is by f-fold cross-validation. The error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous datasets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
Keywords: C5.0, error ratio, text mining, training data, test data.
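Since C5.0/See5 itself is commercial, the sketch below reproduces the evaluation protocol with open-source stand-ins: a CART decision tree in place of C5.0, 10-fold cross-validation for the error estimate, and 10-classifier boosting via AdaBoost (the `estimator` parameter name assumes a recent scikit-learn). The dataset is a placeholder for hypothyroid.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)      # stand-in for hypothyroid data

tree = DecisionTreeClassifier(random_state=0)
cv_err = 1 - cross_val_score(tree, X, y, cv=10).mean()
print(f"single tree, 10-fold CV error rate: {cv_err:.3f}")

stump = DecisionTreeClassifier(max_depth=3, random_state=0)
boosted = AdaBoostClassifier(estimator=stump, n_estimators=10, random_state=0)
cv_err_boost = 1 - cross_val_score(boosted, X, y, cv=10).mean()
print(f"10-classifier boosting error rate  : {cv_err_boost:.3f}")
```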
130 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band
Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant Kumar Srivastava
Abstract:
An approach was evaluated for the retrieval of the soil moisture of bare soil surfaces using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. A linear regression analysis was done between the scattering coefficients and the soil moisture content to select the suitable incidence angle for retrieval of soil moisture content. The 25° incidence angle was found more suitable. Support vector regression analysis was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of the soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and the estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE), and Nash-Sutcliffe Efficiency (NSE). The values of %Bias, RMSE, and NSE were found to be 2.9451, 1.0986, and 0.9214, respectively, at HH-polarization. At VV-polarization, the values of %Bias, RMSE, and NSE were found to be 3.6186, 0.9373, and 0.9428, respectively.
Keywords: Bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE.
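The three indices are simple to state; the sketch below implements them under one common sign convention for %Bias (conventions vary) and fits an SVR on synthetic scattering-coefficient data standing in for the X-band measurements.

```python
import numpy as np
from sklearn.svm import SVR

def pbias(obs, pred):
    return 100.0 * np.sum(obs - pred) / np.sum(obs)

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def nse(obs, pred):
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
sigma0 = rng.uniform(-25, -5, 60)          # scattering coefficient, dB (synthetic)
moisture = 0.02 * (sigma0 + 25) + 0.10 + rng.normal(0, 0.02, 60)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(sigma0.reshape(-1, 1), moisture)
pred = svr.predict(sigma0.reshape(-1, 1))
print(f"%Bias={pbias(moisture, pred):.4f}  "
      f"RMSE={rmse(moisture, pred):.4f}  NSE={nse(moisture, pred):.4f}")
```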
129 Rotation Invariant Fusion of Partial Image Parts in Vista Creation using Missing View Regeneration
Authors: H. B. Kekre, Sudeep D. Thepade
Abstract:
The automatic construction of large, high-resolution image vistas (mosaics) is an active area of research in the fields of photogrammetry [1,2], computer vision [1,4], medical image processing [4], computer graphics [3] and biometrics [8]. Image stitching is one of the possible options to get image mosaics. Vista creation in image processing is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to transforming and stitching multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas. The vista creation process aligns two partial images over each other and blends them together. Image mosaics allow one to compensate for differences in viewing geometry. Thus, they can be used to simplify tasks by simulating the condition in which the scene is viewed from a fixed position with a single camera. While obtaining partial images, geometric anomalies like rotation and scaling are bound to happen. To nullify the effect of rotation of partial images on the process of vista creation, we propose a rotation-invariant vista creation algorithm in this paper. Rotation of partial image parts in the proposed method of vista creation may introduce some missing region in the vista. To correct this error, that is, to fill the missing region, we have further used an image inpainting method on the created vista. This missing view regeneration method also overcomes the problem of missing views [31] in vistas due to cropping, irregular boundaries of partial image parts, and errors in digitization [35]. The method of missing view regeneration generates the missing view of the vista using the information present in the vista itself.
Keywords: Vista, Overlap Estimation, Rotation Invariance, Missing View Regeneration.
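A hedged sketch of the two stages described above, rotation-tolerant alignment of two partial images followed by inpainting of the missing regions, is given below using OpenCV; ORB features with a RANSAC homography are one standard route to rotation invariance, not necessarily the paper's exact alignment method, and the file names are placeholders.

```python
import cv2
import numpy as np

left = cv2.imread("part_left.jpg")          # placeholder partial images
right = cv2.imread("part_right.jpg")

orb = cv2.ORB_create(2000)                  # rotation-invariant features
k1, d1 = orb.detectAndCompute(left, None)
k2, d2 = orb.detectAndCompute(right, None)
matches = sorted(cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1),
                 key=lambda m: m.distance)[:200]
src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

h, w = left.shape[:2]
vista = cv2.warpPerspective(right, H, (2 * w, h))        # right part in left's frame
vista[:h, :w] = np.where(left > 0, left, vista[:h, :w])  # naive overlap blend

# missing view regeneration: fill holes using the vista's own content
mask = (vista.max(axis=2) == 0).astype(np.uint8)
vista = cv2.inpaint(vista, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("vista.jpg", vista)
```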
128 Cumulative Learning based on Dynamic Clustering of Hierarchical Production Rules (HPRs)
Authors: Kamal K. Bharadwaj, Rekha Kandwal
Abstract:
An important structuring mechanism for knowledge bases is building clusters based on the content of their knowledge objects. The objects are clustered based on the principle of maximizing the intraclass similarity and minimizing the interclass similarity. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that group similar events together. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rules (HPRs) system. A HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality
Keywords: Cumulative learning, clustering, data mining, hierarchical production rules.
127 A Control Strategy Based on UTT and ISCT for 3P4W UPQC
Authors: Yash Pal, A. Swarup, Bhim Singh
Abstract:
This paper presents a novel control strategy for a three-phase four-wire Unified Power Quality Conditioner (UPQC) for an improvement in power quality. The UPQC is realized by integration of series and shunt active power filters (APFs) sharing a common dc bus capacitor. The shunt APF is realized using a three-phase, four-leg voltage source inverter (VSI) and the series APF is realized using a three-phase, three-leg VSI. A control technique based on the unit vector template technique (UTT) is used to get the reference signals for the series APF, while instantaneous sequence component theory (ISCT) is used for the control of the shunt APF. The performance of the implemented control algorithm is evaluated in terms of power-factor correction, load balancing, neutral source current mitigation, and mitigation of voltage and current harmonics, voltage sag, and swell in a three-phase four-wire distribution system for different combinations of linear and non-linear loads. In this proposed control scheme of the UPQC, the current/voltage control is applied over the fundamental supply currents/voltages instead of the fast-changing APF currents/voltages, thereby reducing the computational delay and the required sensors. MATLAB/Simulink based simulations are obtained, which support the functionality of the UPQC.
Keywords: Power quality, UPQC, harmonics, load balancing, power factor correction, voltage harmonic mitigation, current harmonic mitigation, voltage sag, swell.
126 Profile Calculation in Water Phantom of Symmetric and Asymmetric Photon Beam
Authors: N. Chegeni, M. J. Tahmasebi Birgani
Abstract:
Nowadays, in most radiotherapy departments, the commercial treatment planning systems (TPS) used to calculate dose distributions need to be verified; therefore, quick, easy-to-use, and low-cost dose distribution algorithms are desirable to test and verify the performance of the TPS. In this paper, we put forth an analytical method to calculate the phantom scatter contribution and depth dose on the central axis based on the equivalent square concept. Then, this method was generalized to calculate the profiles at any depth and for several field shapes, regular or irregular, under symmetric and asymmetric photon beam conditions. Varian 2100 C/D and Siemens Primus Plus linacs with 6 and 18 MV photon beams were used for irradiations. Percentage depth doses (PDDs) were measured for a large number of square fields for both energies, and for 45° wedges, which were employed to obtain the profiles at any depth. To assess the accuracy of the calculated profiles, several profile measurements were carried out for some treatment fields. The calculated and measured profiles were compared by gamma-index calculation. All γ-index calculations were based on a 3% dose criterion and a 3 mm distance-to-agreement (DTA) acceptance criterion. The γ values were less than 1 at most points. However, the maximum γ observed was about 1.10 in the penumbra region in most fields and in the central area for the asymmetric fields. This analytical approach provides a generally quick and fairly accurate algorithm to calculate dose distribution for some treatment fields in conventional radiotherapy.
Keywords: Dose distribution, equivalent field, asymmetric field, irregular field.
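A minimal sketch of the 1D γ-index evaluation with the 3%/3 mm criteria quoted above follows; the dose profiles are synthetic placeholders, and global normalization to the reference maximum is assumed.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dta_mm=3.0):
    # gamma(r) = min over r' of sqrt((dist/DTA)^2 + (dose diff/tolerance)^2)
    d_max = d_ref.max()
    g = np.empty(len(x_ref))
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta_mm) ** 2
        dose2 = ((d_eval - dr) / (dose_tol * d_max)) ** 2
        g[i] = np.sqrt(np.min(dist2 + dose2))
    return g

x = np.linspace(-50, 50, 201)                     # mm
measured = np.exp(-((x / 30.0) ** 6))             # flat profile with penumbra
calculated = np.exp(-(((x - 1.0) / 30.0) ** 6))   # model profile, 1 mm shift
g = gamma_1d(x, measured, x, calculated)
print(f"max gamma = {g.max():.2f}, pass rate = {100 * np.mean(g <= 1):.1f}%")
```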
125 Products in Early Development Phases: Ecological Classification and Evaluation Using an Interval Arithmetic Based Calculation Approach
Authors: Helen L. Hein, Joachim Schwarte
Abstract:
As a pillar of sustainable development, ecology has become an important milestone in the research community, especially due to global challenges like climate change. The ecological performance of products can be scientifically assessed with life cycle assessments. In the construction sector, significant amounts of CO2 emissions are assigned to the energy used for building heating purposes. Therefore, sustainable construction materials for insulating purposes are substantial, and aerogels have been explored intensively in the last years due to their low thermal conductivity. In this context, the WALL-ACE project aims to develop an aerogel-based thermal insulating plaster that would achieve very low thermal conductivities. But because in the early development phases a lot of information is still missing or not yet accessible, the ecological performance of innovative products is based increasingly on uncertain data that can lead to significant deviations in the results. To be able to predict realistically how meaningful the results are and how viable the developed products may be with regard to their respective markets, these deviations have to be considered. Therefore, a classification method is presented in this study, which may allow comparing the ecological performance of modern products with already established and competitive materials. In order to achieve this, an alternative calculation method was used that allows computing with lower and upper bounds to consider all possible values without precise data. The life cycle analysis of the considered products was conducted with an interval arithmetic based calculation method. The results lead to the conclusion that the interval solutions describing the possible environmental impacts are so wide that the result usability is limited. Nevertheless, a further optimization in reducing the environmental impacts of aerogels seems to be needed to become more competitive in the future.
Keywords: Aerogel-based, insulating material, early development phase, interval arithmetic.
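The bound-propagating calculation is easy to illustrate: every uncertain quantity is carried as a [lower, upper] interval and arithmetic operations propagate the bounds. The sketch below uses illustrative values, not project data.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# e.g., GWP impact = material mass * emission factor, both uncertain early on
mass_kg = Interval(10.0, 14.0)       # plaster per m^2 of wall (assumed)
gwp_per_kg = Interval(1.2, 3.5)      # kg CO2-eq per kg material (assumed)
print(mass_kg * gwp_per_kg)          # Interval(lo=12.0, hi=49.0)
```

Wide input intervals multiply into even wider output intervals, which is exactly the limited-usability effect the abstract reports.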
124 Experimental Analyses of Thermoelectric Generator Behavior Using Two Types of Thermoelectric Modules for Marine Application
Authors: A. Nour Eddine, D. Chalet, L. Aixala, P. Chessé, X. Faure, N. Hatat
Abstract:
Thermal power technology such as the TEG (thermoelectric generator) attracts significant attention worldwide for waste heat recovery. Despite the potential benefits of marine application due to the permanent heat sink from sea water, no significant studies on this application were to be found. In this study, a test rig has been designed and built to test the performance of the TEG at engine operating points. The TEG device is built from commercially available materials for the sake of possible economical application. Two types of commercial TEM (thermoelectric module) have been studied separately on the test rig. The engine data were extracted from a commercial Diesel engine, since it shares the same principle in terms of engine efficiency and exhaust with the marine Diesel engine. An open-circuit water cooling system is used to replicate the sea water cold source. The characterization tests showed that the silicon-germanium alloy TEM proved remarkably reliable at all engine operating points, with no significant deterioration of performance even under severe variation in the hot source conditions. The performance of the bismuth-telluride alloys was 100% better than the first type of TEM, but showed a deterioration in power generation when the air temperature exceeded 300 °C. The temperature distribution on the heat exchange surfaces revealed no useful combination of these two types of TEM with this tube length, since the surface temperature difference between both ends is no more than 10 °C. This study exposed the prospects of using TEG technology for marine engine exhaust heat recovery. Although the results suggested insufficient power generation from the low-cost commercial TEMs used, it provides valuable information about TEG device optimization, including the design of the heat exchanger and the types of thermoelectric materials.
Keywords: Internal combustion engine application, Seebeck, thermo-electricity, waste heat recovery.
123 A Strategic Sustainability Analysis of Electric Vehicles in EU Today and Towards 2050
Authors: Sven Borén, Henrik Ny
Abstract:
Ambitions within the EU for moving towards sustainable transport include major emission reductions for fossil fuel road vehicles, especially for buses, trucks, and cars. The electric driveline seems to be an attractive solution for such development. This study first applied the Framework for Strategic Sustainable Development to compare the sustainability effects of today's fossil fuel vehicles with electric vehicles that have batteries or hydrogen fuel cells. The study then addressed a scenario where electric vehicles might be in the majority in Europe by 2050. The methodology called Strategic Lifecycle Assessment was first used, where each life cycle phase was assessed for violations against sustainability principles. This indicates where further analysis could be done in order to quantify the magnitude of each violation, and later to create alternative strategies and actions that lead towards sustainability. A life cycle assessment of combustion engine cars, plug-in hybrid cars, battery electric cars, and hydrogen fuel cell cars was then conducted to compare and quantify environmental impacts. The authors found major violations of sustainability principles, like the use of fossil fuels, which contribute to the increase of emission-related impacts such as climate change, acidification, eutrophication, ozone depletion, and particulate matter. Other violations were found, such as the use of scarce materials for batteries and fuel cells, and also, for most life cycle phases for all vehicles, the use of fossil fuel vehicles for mining, production, and transport. Still, the studied current battery and hydrogen fuel cell cars have less severe violations than fossil fuel cars. The life cycle assessment revealed that fossil fuel cars have overall considerably higher environmental impacts compared to electric cars as long as the latter are powered by renewable electricity. By 2050, there will likely be even more sustainable alternatives than the studied electric vehicles, when the EU electricity mix should stem mainly from renewable sources, batteries should be recycled, fuel cells should be a mature technology for use in vehicles (containing no scarce materials), and electric drivelines should have replaced combustion engines in other sectors. An uncertainty for fuel cells in 2050 is whether the production of hydrogen will have had time to switch to renewable resources. If so, that would contribute even more to a sustainable development. Except for being adopted in the GreenCharge roadmap, the authors suggest that the results can contribute to planning in the upcoming decades for a sustainable increase of EVs in Europe, and potentially serve as an inspiration for other smaller or larger regions. Further studies could map the environmental effects in LCA further, and include other road vehicles to get a more precise perception of how much they could affect sustainable development.
Keywords: Strategic, electric vehicles, fuel cell, LCA, sustainability.
122 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas
Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi
Abstract:
In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER day- and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified the statistical thermal anomalies using land surface temperature and the residuals calculated from modeled temperatures and ASTER-derived surface temperatures. Areas that had temperatures or temperature residuals greater than 2σ and between 1σ and 2σ were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database. Also, the YNP hot springs and geysers were located within areas identified as anomalous thermal areas. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial-based insolation model, provide an effective means for identifying and locating areas of geothermal activity over large areas and rough terrain.
Keywords: Thermal remote sensing, insolation model, land surface temperature, geothermal anomalies.
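The thresholding step is straightforward to sketch: standardize the residual between ASTER-derived and insolation-modeled temperature, then flag pixels in the 1σ-2σ and >2σ classes. The residual grid below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
residual = rng.normal(0.0, 1.5, size=(500, 500))   # K, stand-in residual scene
residual[200:210, 300:315] += 12.0                 # implanted hot-spring-like patch

z = (residual - residual.mean()) / residual.std()
anomaly = np.zeros_like(z, dtype=np.uint8)
anomaly[(z > 1) & (z <= 2)] = 1                    # weak anomaly class (1σ-2σ)
anomaly[z > 2] = 2                                 # strong anomaly class (>2σ)
print("strong-anomaly pixels:", int((anomaly == 2).sum()))
```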
121 Rolling Element Bearing Diagnosis by Improved Envelope Spectrum: Optimal Frequency Band Selection
Authors: Juan David Arango, Alejandro Restrepo-Martinez
Abstract:
Vibration diagnosis of Rolling Element Bearings (REBs) is of special interest owing to the variety of REBs and the wide need for those elements in industrial applications. The presence of a localized fault in a REB gives rise to a vibrational response characterized by the modulation of a carrier signal. The frequency content of the carrier signal (spectral frequency, f) is mainly related to the resonance frequencies of the REB. This carrier signal is modulated by another signal, governed by the periodicity of the fault impact (cyclic frequency, α). In this sense, a REB fault vibration response gives rise to a second-order cyclostationary signal. Second-order cyclostationary signals can be represented in a bi-spectral map, where the Spectral Coherence (SCoh) is plotted against f and α. The Improved Envelope Spectrum (IES) is a useful approach to execute REB fault diagnosis. The IES can be obtained by integration of the SCoh over a predefined bandwidth on the f axis. Approaches to select the f-bandwidth have recently been proposed through the definition of a metric which intends to evaluate the magnitude of the IES at the fault characteristic frequencies. This metric is represented in a 1/3-binary tree as a function of the frequency bandwidth and centre, and based on this binary tree the optimal frequency band is selected. However, some advantages have been seen if the metric is changed, which in fact tends to dictate a different optimal f-bandwidth and so improve the IES representation. This paper evaluates the behaviour of the IES from a different metric optimization. This metric is based on the sample correlation coefficient, detecting high peaks at the selected frequencies while penalizing high peaks in the neighbourhoods of the selected frequencies. Prior results indicate an improvement in the signal-to-noise ratio (SNR) in around 86% of the samples analysed, which belong to the IMS database.
Keywords: Sample Correlation IESFOgram, cyclostationary analysis, improved envelope spectrum, IES, rolling element bearing diagnosis, spectral coherence.
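For orientation, the sketch below computes a plain envelope spectrum, the quantity the IES refines, by Hilbert demodulation of a synthetic fault signal; the sampling rate, fault frequency, and resonance are illustrative, and the full SCoh/IES band-selection machinery is deliberately omitted.

```python
import numpy as np
from scipy.signal import hilbert

fs = 20000.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f_fault, f_res = 87.0, 3000.0         # cyclic fault rate and carrier resonance
impacts = (np.sin(2 * np.pi * f_fault * t) > 0.999).astype(float)
ring = np.exp(-t[:400] * 800) * np.sin(2 * np.pi * f_res * t[:400])
signal = np.convolve(impacts, ring, mode="same") + 0.1 * np.random.randn(len(t))

envelope = np.abs(hilbert(signal))                     # demodulate the carrier
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
print("dominant envelope frequency: %.1f Hz" % freqs[spectrum.argmax()])
```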
120 CFD-Parametric Study in Stator Heat Transfer of an Axial Flux Permanent Magnet Machine
Authors: Alireza Rasekh, Peter Sergeant, Jan Vierendeels
Abstract:
This paper deals with the numerical simulation of convective heat transfer in the stator disk of an axial flux permanent magnet (AFPM) electrical machine. Overheating is one of the main issues in the design of AFPMs, which mainly occurs in the stator disk, so it needs to be prevented. A rotor-stator configuration with 16 magnets at the periphery of the rotor is considered. Air is allowed to flow through openings in the rotor disk and through channels formed between the magnets and in the gap region between the magnets and the stator surface. The rotating channels between the magnets act as a driving force for the air flow. The significant non-dimensional parameters are the rotational Reynolds number, the gap size ratio, the magnet thickness ratio, and the magnet angle ratio. The goal is to find correlations for the Nusselt number on the stator disk according to these non-dimensional numbers. Therefore, CFD simulations have been performed with the multiple reference frame (MRF) technique to model the rotary motion of the rotor and the flow around and inside the machine. A minimization method based on a pattern-search algorithm is introduced to find the appropriate values of the reference temperature. It is found that the correlations are fast and robust and are capable of predicting the stator heat transfer with good accuracy. The results reveal that the magnet angle ratio diminishes the stator heat transfer, whereas the rotational Reynolds number and the magnet thickness ratio improve the convective heat transfer. On the other hand, there is a certain gap size ratio at which the stator heat transfer reaches a maximum.
Keywords: Axial flux permanent magnet, CFD, magnet parameters, stator heat transfer.
119 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Most of the data in the field are well structured and available in numerical or categorical formats which can be used for experiments directly. But on the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature, found in the form of discharge summaries, clinical notes, and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: Information retrieval (IR), Unified Medical Language System (UMLS), syntax-based analysis, natural language processing (NLP), medical informatics.
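A hedged sketch of dictionary-backed concept mining in the spirit described above: a curated term index scanned longest-match-first. The mini-lexicon and its concept codes are illustrative stand-ins for curated knowledge sources such as UMLS.

```python
import re

lexicon = {                      # illustrative terms -> concept identifiers
    "type 2 diabetes mellitus": "C0011860",
    "diabetes mellitus": "C0011849",
    "hypertension": "C0020538",
    "metformin": "C0025598",
}
# longest terms first so the most specific concept wins the match
pattern = re.compile("|".join(re.escape(t)
                              for t in sorted(lexicon, key=len, reverse=True)),
                     flags=re.IGNORECASE)

note = ("Patient with type 2 diabetes mellitus and hypertension, "
        "continued on metformin.")
for m in pattern.finditer(note):
    term = m.group(0).lower()
    print(f"offset {m.start():>3}: {term!r} -> {lexicon[term]}")
```

A production system would add tokenization, normalization, and an index structure (e.g., a trie) instead of a single alternation regex, but the longest-match principle is the same.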
118 Impact of Terrorism as an Asymmetrical Threat on the State's Conventional Security Forces
Authors: Igor Pejic
Abstract:
The main focus of this research will be on analyzing the correlative links between terrorism as an asymmetrical threat and the consequences it leaves on conventional security forces. The methodology behind the research will include qualitative research methods focusing on comparative analysis of books, scientific papers, documents and other sources, in order to deduce, explore and formulate the results of the research. With the coming of the 21st century and the rise of a multi-polar world order, new threats quickly emerged. The realistic approach in international relations deems that relations among nations are in a constant state of anarchy, since there are no definitive rules and the distribution of power varies widely. International relations are further characterized by egoistic and self-oriented human nature, anarchy or the absence of a higher government, security, and lack of morality. The asymmetry of power is also reflected in countries' security capabilities and their ability to project power. With the coming of the new millennium and the rising multi-polar world order, the asymmetry of power can also be added as an important trait of the global society, which consequently brought new threats. Among various others, terrorism is probably the most well-known and widespread asymmetric threat. In today's global political arena, terrorism is used by state and non-state actors to fulfill their political agendas. Terrorism is used as an all-inclusive tool for regime change, subversion or revolution. Although the nature of terrorist groups is somewhat inconsistent, terrorism as a security and social phenomenon has one constant, which is reflected in its political dimension. The state's security apparatus, which was embodied in the form of conventional armed forces, is now becoming fragile, unable to tackle new threats and to a certain extent outdated. Conventional security forces were designed to defend or engage an exterior threat which is more or less symmetric and visible. On the other hand, terrorism as an asymmetrical threat is a part of hybrid, special or asymmetric warfare, in which specialized units, institutions or facilities represent the primary pillars of security. In today's global society, terrorism is probably the most acute problem, which can paralyze entire countries and their political systems. This problem, however, cannot be engaged on an open field of battle; rather, it requires a different approach in which conventional armed forces cannot be used traditionally and their role must be adjusted. The research will try to shed light on the phenomenon of modern-day terrorism and to prove its correlation with states' conventional armed forces. States are obliged to adjust their security apparatus to the new realism of global society and terrorism as an asymmetrical threat, which is a side-product of the unbalanced world.
Keywords: Asymmetrical warfare, conventional forces, security, terrorism.