Search results for: algorithm techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9865

8695 An Amphibious House for Flood Prone Areas in Godavari River Basin

Authors: Gangadhara Rao K.

Abstract:

In Andhra Pradesh, the flood problem has traditionally been confined to the flooding of smaller rivers. However, the drainage problem in the coastal delta zones has worsened, multiplying the destructive potential of cyclones and increasing flood hazards. As a result of floods, the people living in these areas are forced out of their traditional lands in search of higher ground. This paper discusses the suitability of amphibious-housing techniques used in Bangladesh in the context of the Godavari river basin in Andhra Pradesh. The study considers the social, physical and environmental conditions of the region. The method for achieving this objective is to study cases from both Bangladesh and Andhra Pradesh, compare them with the existing techniques, and adapt them to our requirements and context. If successful, these techniques might help the people living in riverfront areas to stay safe during floods without losing their traditional lands.

Keywords: amphibious, buoyancy, floating, architecture, flood resistant

Procedia PDF Downloads 172
8694 Analysis of Users’ Behavior on Book Loan Log Based on Association Rule Mining

Authors: Kanyarat Bussaban, Kunyanuth Kularbphettong

Abstract:

This research aims to create a model of student behavior in using library resources, based on data mining techniques, in the case of Suan Sunandha Rajabhat University. The model was built from association rules generated by the Apriori algorithm. Fourteen rules were found; tested against a testing data set, they classified data correctly 79.24 percent of the time with an MSE of 22.91. The results show that a user-behavior model based on association rule mining can be used to manage library resources.
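
For readers unfamiliar with the technique, the sketch below shows the core of frequent-itemset mining with Apriori in Python; the loan transactions, item names, and support threshold are invented for illustration and do not reproduce the paper's library data or its fourteen rules.

```python
# A minimal Apriori sketch for mining frequent itemsets from loan records.
# The transactions below are hypothetical placeholders.
from itertools import combinations

def apriori(transactions, min_support=0.5):
    n = len(transactions)
    k_sets = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    while k_sets:
        # Count support of each candidate itemset.
        counts = {s: sum(1 for t in transactions if s <= t) for s in k_sets}
        survivors = {s: c / n for s, c in counts.items() if c / n >= min_support}
        frequent.update(survivors)
        # Join step: build (k+1)-item candidates from surviving k-itemsets.
        keys = list(survivors)
        k_sets = {a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1}
    return frequent

borrows = [frozenset(t) for t in (
    {"thesis", "journal"}, {"thesis", "textbook"},
    {"thesis", "journal", "textbook"}, {"journal", "textbook"})]
for itemset, support in apriori(borrows, 0.5).items():
    print(set(itemset), round(support, 2))
```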

Keywords: behavior, data mining technique, Apriori algorithm, knowledge discovery

Procedia PDF Downloads 404
8693 Parallel 2-Opt Local Search on GPU

Authors: Wen-Bao Qiao, Jean-Charles Créput

Abstract:

To accelerate the solution of large-scale traveling salesman problems (TSP), a parallel 2-opt local search algorithm with a simple GPU-based implementation is presented and tested in this paper. The parallel scheme is based on data decomposition: K processors are dynamically assigned along the integral tour to perform 2-opt local optimization on K edges simultaneously over independent sub-tours, where K can be user-defined or a function of the input size N. We implement the algorithm with a doubly linked list on the GPU, requiring only O(N) memory. We compare this parallel 2-opt local optimization against sequential exhaustive 2-opt search along the integral tour on TSPLIB instances with more than 10,000 cities.
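
As context for the parallel scheme, here is a minimal sequential 2-opt local search in Python. It only illustrates the 2-opt move that the paper parallelizes; random city coordinates stand in for TSPLIB instances, and no attempt is made at the GPU data decomposition or the doubly-linked-list layout.

```python
# A sequential 2-opt sketch for Euclidean TSP on random placeholder cities.
import math, random

def tour_length(pts, tour):
    return sum(math.dist(pts[tour[i - 1]], pts[tour[i]]) for i in range(len(tour)))

def two_opt(pts, tour):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                # Reversing tour[i:j+1] swaps edges (i-1,i) and (j,j+1 mod n).
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(50)]
tour = list(range(len(pts)))
print(tour_length(pts, tour), "->", tour_length(pts, two_opt(pts, tour)))
```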

Keywords: parallel 2-opt, double links, large scale TSP, GPU

Procedia PDF Downloads 625
8692 Analytical Derivative: Importance on Environment and Water Analysis/Cycle

Authors: Adesoji Sodeinde

Abstract:

Analytical derivatives has recently undergone an explosive growth in areas of separation techniques, likewise in detectability of certain compound/concentrated ions. The gloomy and depressing scenario which charaterized the application of analytical derivatives in areas of water analysis, water cycle and the environment should not be allowed to continue unabated. Due to technological advancement in various chemical/biochemical analysis separation techniques is widely used in areas of medical, forensic and to measure and assesses environment and social-economic impact of alternative control strategies. This technological improvement was dully established in the area of comparison between certain separation/detection techniques to bring about vital result in forensic[as Gas liquid chromatography reveals the evidence given in court of law during prosecution of drunk drivers]. The water quality analysis,pH and water temperature analysis can be performed in the field, the concentration of dissolved free amino-acid [DFAA] can also be detected through separation techniques. Some important derivatives/ions used in separation technique. Water analysis : Total water hardness [EDTA to determine ca and mg ions]. Gas liquid chromatography : innovative gas such as helium [He] or nitrogen [N] Water cycle : Animal bone charcoal,activated carbon and ultraviolet light [U.V light].

Keywords: analytical derivative, environment, water analysis, chemical/biochemical analysis

Procedia PDF Downloads 338
8691 Solving the Pseudo-Geometric Traveling Salesman Problem with the “Union Husk” Algorithm

Authors: Boris Melnikov, Ye Zhang, Dmitrii Chaikovskii

Abstract:

This study explores the pseudo-geometric version of the extensively researched Traveling Salesman Problem (TSP), proposing a novel generalization of existing algorithms which are traditionally confined to the geometric version. By adapting the "onion husk" method and introducing auxiliary algorithms, this research fills a notable gap in the existing literature. Through computational experiments using randomly generated data, several metrics were analyzed to validate the proposed approach's efficacy. Preliminary results align with expected outcomes, indicating a promising advancement in TSP solutions.
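
The geometric "onion husk" idea that the paper generalizes can be sketched as repeated convex-hull peeling. The Python below, using SciPy's ConvexHull on random placeholder points, shows only that classical layering step, not the authors' pseudo-geometric "union husk" generalization or its auxiliary algorithms.

```python
# Peel nested convex hull layers ("husks") from a 2D point set.
import numpy as np
from scipy.spatial import ConvexHull

def onion_layers(points):
    pts = np.asarray(points, dtype=float)
    idx = np.arange(len(pts))
    layers = []
    while len(idx) >= 3:
        hull = ConvexHull(pts[idx])
        layers.append(idx[hull.vertices].tolist())  # one husk, in hull order
        idx = np.delete(idx, hull.vertices)         # peel it off and repeat
    if len(idx):
        layers.append(idx.tolist())                 # leftover 1-2 inner points
    return layers

rng = np.random.default_rng(1)
for depth, layer in enumerate(onion_layers(rng.random((30, 2)))):
    print("layer", depth, ":", layer)
```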

Keywords: optimization problems, traveling salesman problem, heuristic algorithms, “onion husk” algorithm, pseudo-geometric version

Procedia PDF Downloads 207
8690 Optimum Dewatering Network Design Using Firefly Optimization Algorithm

Authors: S. M. Javad Davoodi, Mojtaba Shourian

Abstract:

A groundwater table close to the ground surface causes major problems in construction and mining operations. One method to control groundwater in such cases is pumping wells, which remove excess water from the project site and lower the water table to a desirable level. Although the efficiency of this method is acceptable, it is expensive to apply, so even a small improvement in the design of the pumping wells can lead to substantial cost savings. To minimize the total cost of the pumping-well method, a simulation-optimization approach is applied. The proposed model integrates MODFLOW as the simulation model with the firefly algorithm as the optimizer: MODFLOW computes the drawdown due to pumping in an aquifer, and the firefly algorithm finds the optimum values of the design parameters, namely the number, pumping rates and layout of the wells. The developed Firefly-MODFLOW model is applied to minimize the cost of the dewatering project for the ancient mosque of Kerman city in Iran. Repeated runs of the Firefly-MODFLOW model indicate that drilling two wells with a total pumping rate of 5503 m3/day solves the minimization problem. Results show that implementing the proposed solution leads to at least 1.5 m of drawdown in the aquifer beneath the mosque region, while the subsidence due to groundwater depletion is less than 80 mm. Sensitivity analyses indicate that the required groundwater depletion has an enormous impact on the total cost of the project, and that in a hypothetical aquifer, decreasing the hydraulic conductivity decreases the total water extraction needed for dewatering.
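
A minimal firefly-algorithm loop is sketched below for orientation. The quadratic toy objective stands in for the MODFLOW drawdown simulation, and the bounds, swarm size, and coefficients are illustrative assumptions rather than the study's settings.

```python
# A minimal firefly algorithm sketch on a toy objective.
import numpy as np

def firefly(cost, dim, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n, dim))   # decision vectors (stand-ins for well design)
    f = np.array([cost(xi) for xi in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:        # move firefly i toward brighter firefly j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                    f[i] = cost(x[i])
        alpha *= 0.98                  # gradually damp the random walk
    best = np.argmin(f)
    return x[best], f[best]

sol, val = firefly(lambda v: float(np.sum(v ** 2)), dim=2)
print(sol, val)
```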

Keywords: groundwater dewatering, pumping wells, simulation-optimization, MODFLOW, firefly algorithm

Procedia PDF Downloads 294
8689 Image Inpainting Model with Small-Sample Size Based on Generative Adversarial Network and Genetic Algorithm

Authors: Jiawen Wang, Qijun Chen

Abstract:

The performance of most machine-learning methods for image inpainting depends on the quantity and quality of the training samples. However, it is very expensive or even impossible to obtain a large number of training samples in many scenarios. In this paper, an image inpainting model based on a generative adversarial network (GAN) is constructed for cases where the number of training samples is small. First, a feature extraction network (F-net) is incorporated into the GAN to exploit the available information in the image being inpainted; the weighted sum of the extracted feature and random noise acts as the input to the generative network (G-net). The proposed network can be trained well even when the sample size is very small. Second, in the completion phase for each damaged image, a genetic algorithm is designed to search for an optimized noise input to the G-net. Based on this optimized input, the parameters of the G-net and F-net are further learned (once the completion of a given damaged image ends, the parameters are restored to the original values obtained in the training phase) to generate an image patch that not only fills the missing part of the damaged image smoothly but also carries visual semantics.
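
The completion-phase idea, evolving a noise input so the generated patch matches the undamaged pixels, can be sketched as follows. The generator function, image size, mask, and GA settings below are all placeholder assumptions, since the paper's trained G-net and F-net are not available here.

```python
# A genetic algorithm searching for a latent/noise input whose generated
# image best matches the observed (unmasked) pixels of a damaged image.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))
mask = rng.random((8, 8)) > 0.3        # True = known pixel, False = missing

def generator(z):                      # placeholder standing in for a trained G-net
    return np.tanh(np.outer(z[:8], z[8:16])) * 0.5 + 0.5

def fitness(z):                        # error only on observed pixels
    return np.mean((generator(z)[mask] - target[mask]) ** 2)

pop = rng.normal(size=(40, 16))
for gen in range(100):
    scores = np.array([fitness(z) for z in pop])
    parents = pop[np.argsort(scores)[:10]]          # selection: keep best 10
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 16)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        child += rng.normal(scale=0.1, size=16)     # mutation
        children.append(child)
    pop = np.vstack([parents, children])

scores = np.array([fitness(z) for z in pop])
print("best masked-region error:", float(scores.min()))
```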

Keywords: image inpainting, generative adversarial nets, genetic algorithm, small-sample size

Procedia PDF Downloads 130
8688 Cost-Benefit Analysis for the Optimization of Noise Abatement Treatments at the Workplace

Authors: Paolo Lenzuni

Abstract:

The cost-effectiveness of noise abatement treatments at the workplace has not yet received adequate consideration. Furthermore, most published work focuses on productivity, despite the poor correlation of this quantity with noise levels. There is currently no tool to estimate the social benefit associated with a specific noise abatement treatment, and no comparison among different options is accordingly possible. In this paper, we present an algorithm developed to predict the cost-effectiveness of any planned noise control treatment in a workplace. The algorithm is based on the estimates of hearing threshold shifts included in ISO 1999 and on the compensation that workers are entitled to once their work-related hearing impairments have been certified. The benefits of a noise abatement treatment are estimated through the lower compensation costs paid to impaired workers. Although such benefits have no real meaning in strictly monetary terms, they allow a reliable comparison between different treatments, since actual social costs can be assumed to be proportional to compensation costs. The existing European legislation on occupational exposure to noise mandates that the noise exposure level be reduced below the upper action limit (85 dBA), so there is little or no motivation for employers to sustain the extra costs required to lower the exposure below the lower action limit (80 dBA). To make this goal more appealing to employers, the algorithm proposed in this work also includes an ad hoc element that promotes actions bringing the noise exposure below 80 dBA. The algorithm has a twofold potential: 1) it can be used as a quality index to promote cost-effective practices; 2) it can be added to the criteria used by workers' compensation authorities to evaluate the cost-effectiveness of technical actions and to support dedicated employers.
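
For orientation on the 80/85 dBA action limits the algorithm targets, the sketch below computes a daily noise exposure level by energy-averaging task levels over an 8-hour day, the commonly used ISO 9612-style formula. The task levels are invented, and the paper's ISO 1999 threshold-shift and compensation model is not reproduced.

```python
# Daily noise exposure level LEX,8h from per-task levels and durations.
import math

def lex_8h(tasks):
    """tasks: list of (level_dBA, hours). Energy-average onto an 8 h day."""
    energy = sum(h * 10 ** (l / 10) for l, h in tasks)
    return 10 * math.log10(energy / 8.0)

shifts = [(88, 2.0), (82, 4.0), (75, 2.0)]   # hypothetical work day
lex = lex_8h(shifts)
print(f"LEX,8h = {lex:.1f} dBA")
print("above upper action limit (85 dBA):", lex > 85)
print("above lower action limit (80 dBA):", lex > 80)
```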

Keywords: cost-effectiveness, noise, occupational exposure, treatment

Procedia PDF Downloads 322
8687 A Weighted Approach to Unconstrained Iris Recognition

Authors: Yao-Hong Tsai

Abstract:

This paper presents a weighted approach to unconstrained iris recognition. Commercial systems are usually characterized by strong acquisition constraints that rely on the subject's cooperation, which is not always achievable in real daily-life scenarios. Researchers have therefore focused on reducing these constraints while maintaining system performance through new techniques. To cope with large variations in the environment, the proposed iris recognition system introduces two main improvements. First, to handle extremely uneven lighting conditions, statistics-based illumination normalization is applied to the eye region to increase the accuracy of the iris features; detection of the iris image is based on the AdaBoost algorithm. Second, a weighting scheme is designed using Gaussian functions of the distance to the center of the iris, and a local binary pattern (LBP) histogram is then applied to texture classification with these weights. Experiments showed that the proposed system offers users a more flexible and feasible way to interact with the verification system through iris recognition.
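
The weighting idea can be sketched directly: compute a basic 8-neighbour LBP code per pixel and scale each pixel's histogram vote by a Gaussian of its distance from the iris centre. The image, centre, and sigma below are placeholders, and the full pipeline (illumination normalization, AdaBoost detection) is not shown.

```python
# A Gaussian-weighted LBP histogram over a placeholder "eye" image.
import numpy as np

def weighted_lbp_hist(img, center, sigma=10.0):
    h, w = img.shape
    hist = np.zeros(256)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            neigh = [img[y-1,x-1], img[y-1,x], img[y-1,x+1], img[y,x+1],
                     img[y+1,x+1], img[y+1,x], img[y+1,x-1], img[y,x-1]]
            code = sum((1 << k) for k, p in enumerate(neigh) if p >= c)
            d2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
            hist[code] += np.exp(-d2 / (2 * sigma ** 2))   # Gaussian weight
    return hist / hist.sum()

rng = np.random.default_rng(0)
eye = rng.integers(0, 256, (32, 32))
print(weighted_lbp_hist(eye, center=(16, 16)).round(3)[:16])
```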

Keywords: authentication, iris recognition, adaboost, local binary pattern

Procedia PDF Downloads 225
8686 Printed Thai Character Recognition Using Particle Swarm Optimization Algorithm

Authors: Phawin Sangsuvan, Chutimet Srinilta

Abstract:

This paper presents an application of the Particle Swarm Optimization (PSO) method to Thai optical character recognition (OCR). OCR consists of pre-processing, character recognition and post-processing; before entering the recognition process, each character must be prepared by the pre-processing step. PSO is an optimization method belonging to the swarm intelligence family, based on the imitation of the social behavior patterns of animals. The route of each particle is determined by its own data and data from neighboring particles, and this interaction with neighbors is the advantage of particle swarms in determining the best solution. PSO has therefore attracted many researchers working on difficult problems, including character recognition. This research used a projection histogram to extract features of printed digits and defined a simple fitness function for PSO. The results reveal that PSO achieves 67.73% accuracy on the testing dataset; future work could explore improving the fitness function for better performance.
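
A projection histogram is simply the row and column ink counts of a binarised glyph. The sketch below builds such a feature vector for a made-up 7x5 bitmap, leaving the PSO classifier itself aside.

```python
# Projection-histogram features: row sums and column sums of a binary glyph.
import numpy as np

glyph = np.array([[0,1,1,1,0],
                  [1,0,0,0,1],
                  [1,0,0,0,1],
                  [1,1,1,1,1],
                  [1,0,0,0,1],
                  [1,0,0,0,1],
                  [1,0,0,0,1]])          # a crude letter 'A', invented for illustration

def projection_features(bitmap):
    horizontal = bitmap.sum(axis=1)      # ink per row
    vertical = bitmap.sum(axis=0)        # ink per column
    return np.concatenate([horizontal, vertical])

print(projection_features(glyph))        # 7 row counts + 5 column counts
```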

Keywords: character recognition, histogram projection, particle swarm optimization, pattern recognition techniques

Procedia PDF Downloads 477
8685 An Autonomous Passive Acoustic System for Detection, Tracking and Classification of Motorboats in Portofino Sea

Authors: A. Casale, J. Alessi, C. N. Bianchi, G. Bozzini, M. Brunoldi, V. Cappanera, P. Corvisiero, G. Fanciulli, D. Grosso, N. Magnoli, A. Mandich, C. Melchiorre, C. Morri, P. Povero, N. Stasi, M. Taiuti, G. Viano, M. Wurtz

Abstract:

This work describes a real-time algorithm for detecting, tracking and classifying single motorboats, developed using acoustic data recorded by a hydrophone array within the framework of the EU LIFE+ project ARION (LIFE09NAT/IT/000190). The project aims to improve the conservation status of bottlenose dolphins through real-time simultaneous monitoring of their population and of surface ship traffic. A Passive Acoustic Monitoring (PAM) system is installed on two autonomous permanent marine buoys located close to the boundaries of the Marine Protected Area (MPA) of Portofino (Ligurian Sea, Italy). Detecting surface ships is also a necessity in many other sensitive areas, such as wind farms, oil platforms and harbours. A PAM system can be an effective alternative to the usual monitoring systems, such as radar or active sonar, for localizing unauthorized ship presence or illegal activities, with the advantage of not revealing its own presence. Each ARION buoy consists of a particular type of structure, named meda elastica (elastic beacon), composed of a main pole about 30 meters long, of which 7 meters emerge above the surface, anchored to a 30-ton mooring at 90 m depth by an anti-twist steel wire. Each buoy is equipped with a floating element and a tetrahedral hydrophone array, whose raw data are sent via a Wi-Fi bridge to a ground station where real-time analysis is performed. The bottlenose dolphin detection algorithm and the ship monitoring algorithm operate in parallel and in real time. Three modules were developed and commissioned for ship monitoring. The first is the detection algorithm, based on Time Difference Of Arrival (TDOA) measurements, i.e., the evaluation of the angular direction of the target with respect to each buoy and triangulation to obtain the target position. The second is the tracking algorithm, based on a Kalman filter, i.e., the estimation of the real course and speed of the target through a predictor filter. Finally, the classification algorithm is based on the DEMON method, i.e., the extraction of the acoustic signature of single vessels. The following results were obtained: the detection algorithm succeeded in evaluating the bearing angle with respect to each buoy and the position of the target, with an uncertainty of 2 degrees and a maximum range of 2.5 km; the tracking algorithm succeeded in reconstructing the real vessel courses and estimating the speed with an accuracy of 20% with respect to Automatic Identification System (AIS) signals; and the classification algorithm succeeded in isolating the acoustic signature of single vessels, demonstrating its temporal stability and the consistency of the results from both buoys. As a reference, the results were compared with the Hilbert transform of single-channel signals. The algorithm for tracking multiple targets is ready to be developed, thanks to the modularity of the single-ship algorithm: the classification module will enumerate and identify all targets present in the study area, and for each of them, the detection and tracking modules will be applied to monitor their course.
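
The geometry behind the detection module can be sketched in a few lines: a time difference of arrival across a hydrophone pair gives a bearing, and two bearings triangulate a position. The buoy positions, spacings, and delays below are invented numbers; the real system's tetrahedral arrays and signal processing are not modelled.

```python
# TDOA bearing estimation and two-buoy triangulation on invented numbers.
import math

C = 1500.0                                    # speed of sound in seawater, m/s

def bearing_from_tdoa(tau, d):
    """Angle from broadside (rad) for arrival delay tau over spacing d."""
    return math.asin(max(-1.0, min(1.0, C * tau / d)))

def intersect(p1, a1, p2, a2):
    """Intersect rays from p1, p2 with direction angles a1, a2 (from x-axis)."""
    d1x, d1y = math.cos(a1), math.sin(a1)
    d2x, d2y = math.cos(a2), math.sin(a2)
    det = d1x * (-d2y) - d1y * (-d2x)
    t = ((p2[0] - p1[0]) * (-d2y) - (p2[1] - p1[1]) * (-d2x)) / det
    return (p1[0] + t * d1x, p1[1] + t * d1y)

# Hypothetical delays measured at two buoys 2 km apart, arrays facing north.
theta_a = bearing_from_tdoa(2.0e-4, d=1.0)    # buoy A at the origin
theta_b = bearing_from_tdoa(-3.0e-4, d=1.0)   # buoy B at (2000, 0)
# Convert broadside-referenced bearings to x-axis angles, then intersect.
print(intersect((0, 0), math.pi / 2 - theta_a, (2000, 0), math.pi / 2 - theta_b))
```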

Keywords: acoustic-noise, bottlenose-dolphin, hydrophone, motorboat

Procedia PDF Downloads 173
8684 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because river discharge regimes rarely coincide with water demands, one of the best ways to use water resources is to regulate the natural flow of rivers and supply water needs by constructing dams. In the optimal utilization of reservoirs, considering multiple important goals simultaneously is of very high importance. To analyze this method, statistical data for the Bakhtiari and Roudbar dams over 46 years (1955 to 2001) are used. First, an appropriate objective function was specified, and the rule curve was developed using the DE algorithm. The rule-curve operation policy was then compared to the standard operation policy. The proposed method distributed the deficit over the whole year, inflicting the lowest damage on the system, and the standard deviation of the monthly shortfall in each year was lower with the proposed algorithm than with the other two methods. The results show that moderate values of the coefficients F and Cr provide the optimum situation and keep the DE algorithm from being trapped in local optima; the most optimal values are 0.6 and 0.5 for F and Cr, respectively. After finding the best combination of F and Cr, the effect of population size was examined: populations of 4, 25, 50, 100, 500 and 1000 members were studied for two generation counts (G = 50 and 100), and the results indicate that 200 generations are suitable for the optimization. Runtime grows almost linearly with population size, which shows the effect of the population on the runtime of the algorithm; specifying a suitable population size to obtain optimal results is therefore very important. The standard operation policy had a better reliability percentage but leaves the system severely vulnerable, whereas the proposed method achieved very good results in low-rainfall years compared to the other comparative methods.
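
For reference, a minimal DE/rand/1/bin loop with the coefficients the study settles on (F = 0.6, Cr = 0.5) looks like this. The sphere function is a stand-in for the reservoir objective, which in the paper scores rule-curve releases against demand.

```python
# A minimal differential evolution (DE/rand/1/bin) sketch.
import numpy as np

def de(cost, dim, pop_size=50, gens=200, F=0.6, Cr=0.5):
    rng = np.random.default_rng(0)
    pop = rng.uniform(-10, 10, (pop_size, dim))
    fit = np.array([cost(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [k for k in range(pop_size) if k != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = a + F * (b - c)                 # differential mutation
            cross = rng.random(dim) < Cr
            cross[rng.integers(dim)] = True          # ensure one gene crosses
            trial = np.where(cross, mutant, pop[i])  # binomial crossover
            tf = cost(trial)
            if tf < fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, tf
    return pop[np.argmin(fit)], fit.min()

print(de(lambda v: float(np.sum(v ** 2)), dim=5))
```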

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 78
8683 An Investigation of Interdisciplinary Techniques for Assessment of Water Quality in an Industrial Area

Authors: Priti Saha, Biswajit Paul

Abstract:

Rapid urbanization and industrialization have increased the demand for groundwater, but the present era has also seen enormous levels of groundwater pollution. Water quality assessment is therefore of paramount importance for evaluating suitability for drinking, irrigation and industrial use. This study evaluates the groundwater quality of an industrial city in eastern India through interdisciplinary techniques. A multi-purpose Water Quality Index (WQI) assessed the suitability of forty sampling locations: 2.5% and 15% of the locations have excellent water quality (WQI: 0-25), and 15% and 40% have good quality (WQI: 25-50), for drinking and irrigation respectively. Industrial water quality was assessed through the Ryznar Stability Index (RSI), which showed that only 2.5% of the sampling locations have neither corrosive nor scale-forming properties (RSI: 6.2-6.8). These techniques, integrated with a geographical information system (GIS) for spatial assessment, proved effective in identifying the regions where the water bodies are suitable for drinking, irrigation and industrial activities. Further, the sources of the contaminants were identified through factor analysis (FA), which revealed that both geogenic and anthropogenic sources are responsible for the groundwater pollution. This research demonstrates the effectiveness of statistical and GIS techniques for the analysis of environmental contaminants.
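
A weighted-arithmetic WQI of the kind used for such ratings can be sketched as below. The parameters, standards, ideal values, and weights are placeholder assumptions, not the study's actual parameter set or its exact index formulation.

```python
# A weighted-arithmetic water quality index on placeholder parameters.
def wqi(samples, standards, ideals, weights):
    num = den = 0.0
    for p in samples:
        # Sub-index: 0 at the ideal value, 100 at the permissible limit.
        q = 100 * (samples[p] - ideals[p]) / (standards[p] - ideals[p])
        num += weights[p] * q
        den += weights[p]
    return num / den

sample   = {"pH": 7.6, "TDS": 420.0, "hardness": 180.0}   # measured values
standard = {"pH": 8.5, "TDS": 500.0, "hardness": 300.0}   # permissible limits
ideal    = {"pH": 7.0, "TDS": 0.0,   "hardness": 0.0}     # ideal values
weight   = {"pH": 0.4, "TDS": 0.3,   "hardness": 0.3}     # relative weights

score = wqi(sample, standard, ideal, weight)
print(f"WQI = {score:.1f}")   # on the paper's scale, 0-25 rates as excellent
```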

Keywords: groundwater, water quality analysis, water quality index, WQI, factor analysis, FA, spatial assessment

Procedia PDF Downloads 194
8682 Influence of Optimization Method on Parameters Identification of Hyperelastic Models

Authors: Bale Baidi Blaise, Gilles Marckmann, Liman Kaoye, Talaka Dya, Moustapha Bachirou, Gambo Betchewe, Tibi Beda

Abstract:

This work highlights the capabilities of the particle swarm optimization (PSO) method for identifying the parameters of hyperelastic models. The study compares this method with the genetic algorithm (GA), least squares (LS), pattern search algorithm (PSA), Beda-Chevalier (BC) and Levenberg-Marquardt (LM) methods. Four classic hyperelastic models are used to test the different methods through parameter identification. The study then compares the ability of these models to reproduce the experimental Treloar data in simple tension, biaxial tension and pure shear.
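
As a concrete instance of the identification task, the sketch below fits the shear modulus of an incompressible neo-Hookean model to uniaxial nominal stress-stretch data with SciPy least squares. The synthetic points stand in for the Treloar measurements, and the paper's PSO/GA/PSA/BC/LM comparison is not reproduced.

```python
# Least-squares identification of a neo-Hookean shear modulus.
import numpy as np
from scipy.optimize import curve_fit

def neo_hookean_uniaxial(lam, mu):
    # Nominal stress of an incompressible neo-Hookean solid in simple tension.
    return mu * (lam - lam ** -2)

lam = np.array([1.1, 1.5, 2.0, 3.0, 4.0])
stress = neo_hookean_uniaxial(lam, 0.4) + np.random.default_rng(0).normal(0, 0.01, 5)

(mu_fit,), cov = curve_fit(neo_hookean_uniaxial, lam, stress, p0=[1.0])
print(f"identified mu = {mu_fit:.3f}")
```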

Keywords: particle swarm optimization, identification, hyperelastic, model

Procedia PDF Downloads 171
8681 Trajectory Optimization of Re-Entry Vehicle Using Evolutionary Algorithm

Authors: Muhammad Umar Kiani, Muhammad Shahbaz

Abstract:

The performance of any vehicle can be predicted through its design, modeling and optimization, and design optimization leads to efficient performance. Following horizontal launch, the air-launched re-entry vehicle undergoes a launch maneuver with a carefully selected angle-of-attack profile; this profile is the basic element for completing a specified mission. The flight program of the vehicle is optimized under constraints on the maximum allowed angle of attack and on lateral and axial loads, with the objective of reaching maximum altitude. The main focus of this study is the endo-atmospheric phase of the ascent trajectory. A three-degrees-of-freedom trajectory model is simulated in MATLAB. The optimization process uses an evolutionary algorithm because of its robustness and efficient capacity to explore the design space in search of the global optimum. Evolutionary-algorithm-based trajectory optimization also offers the added benefit of being a generalized method that can work with continuous, discontinuous, linear and non-linear performance indices, and it eliminates the requirement of a starting solution. Optimization is particularly beneficial for achieving maximum advantage without increasing the computational cost or affecting the output of the system; for launch vehicles, maximum performance and efficiency under different constraints are essential. In a launch vehicle, the flight program means the prescribed variation of the vehicle pitch angle during flight, which has a substantial influence on the reachable altitude, the accuracy of orbit insertion, and the aerodynamic loading. Results reveal that the angle-of-attack profile significantly affects the performance of the vehicle.

Keywords: endo-atmospheric, evolutionary algorithm, efficient performance, optimization process

Procedia PDF Downloads 405
8680 Anomaly Detection Based on System Log Data

Authors: M. Kamel, A. Hoayek, M. Batton-Hubert

Abstract:

With the increase of network virtualization and the disparity of vendors, the continuous monitoring and detection of anomalies cannot rely on static rules; an advanced analytical methodology is needed to discriminate between ordinary events and unusual anomalies. In this paper, we focus on log data (textual data), which is a crucial source of information on network performance. We introduce an algorithm used as a pipeline to pre-treat such data, group it into patterns, and dynamically label each pattern as anomalous or not. Such tools provide users and experts with continuous real-time log monitoring capability to detect anomalies and failures in the underlying system that can affect performance. An application to real-world data illustrates the algorithm.
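
The first stages of such a pipeline can be sketched as masking variable tokens to form patterns, then scoring patterns by rarity. The log lines, regexes, and threshold below are invented examples rather than the paper's method.

```python
# Group log lines into patterns by masking variable tokens, then flag rare patterns.
import re
from collections import Counter

def to_pattern(line):
    line = re.sub(r"\b\d+(\.\d+)*\b", "<NUM>", line)   # numbers, IPs, ids
    line = re.sub(r"/[\w./-]+", "<PATH>", line)        # file-system paths
    return line

logs = [
    "connection from 10.0.0.7 port 51312",
    "connection from 10.0.0.9 port 40211",
    "connection from 10.0.0.2 port 33007",
    "kernel panic at /usr/lib/mod.ko offset 4096",
]
patterns = [to_pattern(l) for l in logs]
counts = Counter(patterns)

for line, pat in zip(logs, patterns):
    rarity = counts[pat] / len(logs)    # frequency-based score
    label = "ANOMALY" if rarity < 0.5 else "ok"
    print(f"{label:7s} {line}")
```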

Keywords: logs, anomaly detection, ML, scoring, NLP

Procedia PDF Downloads 94
8679 The Study of Security Techniques on Information System for Decision Making

Authors: Tejinder Singh

Abstract:

An information system (IS) is the flow of data between different levels and in different directions for decision making and data operations. Data can be compromised in various ways, such as by manual or technical errors, tampering, or loss of integrity, and the security system of an IS, such as a firewall, is affected by such violations. The flow of data among the various levels of an information system is handled by the networking system, where data travel in the form of packets or frames. To protect these packets from unauthorized access and virus attacks, and to maintain integrity, network security is an important factor. Various security techniques are used to protect data from piracy. This paper presents these security techniques and describes different harmful attacks with the help of detailed data analysis. It should help organizations make their systems more secure, effective, and useful for future decision making.

Keywords: information systems, data integrity, TCP/IP network, vulnerability, decision, data

Procedia PDF Downloads 307
8678 Cuckoo Search Optimization for Black Scholes Option Pricing

Authors: Manas Shah

Abstract:

The Black-Scholes option pricing model is one of the most important concepts in the modern world of computational finance. However, its practical use can be challenging because one of its input parameters must be estimated: the implied volatility of the underlying security. The more precisely this value is estimated, the more accurate the corresponding theoretical option prices will be. Here, we present a novel model based on Cuckoo Search Optimization (CS) which finds more precise estimates of implied volatility than Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA).
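
To make the inversion problem concrete, the sketch below prices a call under Black-Scholes and recovers implied volatility by bisection, exploiting the price's monotonicity in sigma. The market numbers are illustrative, and the paper's cuckoo search, which replaces this kind of root search, is not implemented here.

```python
# Black-Scholes call price and implied volatility via bisection.
import math

def bs_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))   # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0):
    for _ in range(100):            # bisection: call price is monotone in sigma
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

market_price = 10.45                # illustrative quote
print(f"implied vol = {implied_vol(market_price, S=100, K=100, T=1.0, r=0.05):.4f}")
```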

Keywords: black scholes model, cuckoo search optimization, particle swarm optimization, genetic algorithm

Procedia PDF Downloads 453
8677 Hybrid Algorithm for Non-Negative Matrix Factorization Based on Symmetric Kullback-Leibler Divergence for Signal Dependent Noise: A Case Study

Authors: Ana Serafimovic, Karthik Devarajan

Abstract:

Non-negative matrix factorization approximates a high-dimensional non-negative matrix V as the product of two non-negative matrices, W and H, and allows only additive linear combinations of the data, enabling it to learn parts-based representations. It has been successfully applied in the analysis and interpretation of high-dimensional data arising in neuroscience, computational biology and natural language processing, to name a few. The objective of this paper is to assess a hybrid algorithm for non-negative matrix factorization with multiplicative updates. The method aims to minimize the symmetric version of the Kullback-Leibler divergence, known as intrinsic information, and assumes that the noise is signal-dependent and originates from an arbitrary distribution in the exponential family. It is a generalization of currently available algorithms for Gaussian, Poisson, gamma and inverse Gaussian noise. We demonstrate the potential usefulness of the new generalized algorithm by comparing its performance to baseline methods that also aim to minimize symmetric divergence measures.
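
For orientation, here are the classical multiplicative updates for NMF under the one-sided Kullback-Leibler divergence, the baseline on which the paper's symmetric-divergence generalization builds; V is random placeholder data.

```python
# Lee-Seung multiplicative updates for NMF under (one-sided) KL divergence.
import numpy as np

def nmf_kl(V, rank, iters=500, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)   # update H
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)   # update W
    return W, H

V = np.random.default_rng(1).random((20, 12))
W, H = nmf_kl(V, rank=3)
kl = np.sum(V * np.log((V + 1e-9) / (W @ H + 1e-9)) - V + W @ H)
print(f"KL divergence after fitting: {kl:.4f}")
```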

Keywords: non-negative matrix factorization, dimension reduction, clustering, intrinsic information, symmetric information divergence, signal-dependent noise, exponential family, generalized Kullback-Leibler divergence, dual divergence

Procedia PDF Downloads 246
8676 A Comparative Study between FEM and Meshless Methods

Authors: Jay N. Vyas, Sachin Daxini

Abstract:

Numerical simulation techniques are now widely used in product development and testing instead of expensive, time-consuming and sometimes dangerous laboratory experiments, and numerous numerical methods are available for simulating physical problems in different engineering fields. Grid-based methods, like the Finite Element Method, are extensively used for various kinds of static, dynamic, structural and non-structural analysis during the product development phase. Drawbacks of grid-based methods, such as discontinuous secondary field variables and difficulties in handling fracture mechanics and large-deformation problems, have led in recent years to the development of a relatively new class of numerical simulation techniques, popularly known as Meshless or Meshfree Methods. Meshless Methods are expected to be more adaptive and flexible than the Finite Element Method because domain discretization in a Meshless Method requires only nodes. This paper introduces Meshless Methods and differentiates them from the Finite Element Method in terms of the following aspects: the shape functions used, the role of the weight function, techniques to impose essential boundary conditions, integration techniques for the discrete system equations, convergence rate, accuracy of solution, and computational effort. The capabilities, benefits and limitations of Meshless Methods are discussed and conclusions drawn at the end of the paper.

Keywords: numerical simulation, grid-based methods, finite element method, meshless methods

Procedia PDF Downloads 389
8675 Optimization of Vertical Axis Wind Turbine Based on Artificial Neural Network

Authors: Mohammed Affanuddin H. Siddique, Jayesh S. Shukla, Chetan B. Meshram

Abstract:

Neural networks are one of the most powerful tools of machine learning. Since the early development of the perceptron, neural networks and their applications have grown rapidly. A neural network is a technique originally developed for pattern recognition; its structure consists of neurons connected through synapses. Here, we investigate different algorithms and cost-function reduction techniques for the optimization of vertical axis wind turbine (VAWT) rotor blades. The aerodynamic force coefficients corresponding to the airfoils are stored in a database along with the airfoil coordinates. A forward-propagation neural network is created with the aerodynamic coefficients as input and the airfoil coordinates as output. In the proposed algorithm, the hidden layer is incorporated into a cost function having linear and non-linear error terms. We observe that artificial neural networks (ANNs) can be used for VAWT optimization.
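
A toy forward-propagation network of the kind described, coefficients in and coordinates out, can be sketched with plain NumPy. The layer sizes, random data, and plain mean-squared-error cost below are placeholder choices, not the paper's network or its modified cost function.

```python
# A one-hidden-layer network trained by gradient descent on random stand-in data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 2))            # stand-ins for aerodynamic coefficients
Y = rng.random((200, 10))           # stand-ins for airfoil y-coordinates

W1, b1 = rng.normal(0, 0.5, (2, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 10)), np.zeros(10)
lr = 0.1

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)        # hidden layer
    out = h @ W2 + b2               # linear output layer
    err = out - Y                   # gradient of MSE w.r.t. the output
    # Backpropagate the error and take a gradient step.
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
print("final MSE:", float(((pred - Y) ** 2).mean()))
```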

Keywords: VAWT, ANN, optimization, inverse design

Procedia PDF Downloads 324
8674 Angular-Coordinate Driven Radial Tree Drawing

Authors: Farshad Ghassemi Toosi, Nikola S. Nikolov

Abstract:

We present a visualization technique for the radial drawing of trees consisting of two slightly different algorithms, both of which use node-link diagrams for visual encoding. The visualization creates clear drawings without edge crossings. One of the algorithms is suitable for real-time visualization of large trees, as it requires minimal recalculation of the layout when leaves are inserted or removed; the other makes better use of the drawing space. The algorithms are very similar and follow almost the same procedure but with different parameters. Both assign angular coordinates to all nodes, which are then converted into 2D Cartesian coordinates for visualization. We present both algorithms and discuss how they compare to each other.
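
The shared core of both algorithms can be sketched as follows: leaves get evenly spaced angles, parents average their children's angles, depth serves as radius, and polar coordinates convert to Cartesian. The tree and the simple averaging rule are illustrative simplifications, not the authors' exact parameterization.

```python
# Angular-coordinate radial tree layout on a small made-up tree.
import math

tree = {"root": ["a", "b"], "a": ["a1", "a2", "a3"], "b": ["b1"],
        "a1": [], "a2": [], "a3": [], "b1": []}

def radial_layout(tree, root="root"):
    leaves = [n for n in tree if not tree[n]]
    angle = {leaf: i * 2 * math.pi / len(leaves) for i, leaf in enumerate(leaves)}

    def assign_angles(node):              # parents average their children
        for c in tree[node]:
            assign_angles(c)
        if tree[node]:
            angle[node] = sum(angle[c] for c in tree[node]) / len(tree[node])
    assign_angles(root)

    pos = {}
    def place(node, depth):               # radius = tree depth
        pos[node] = (depth * math.cos(angle[node]), depth * math.sin(angle[node]))
        for c in tree[node]:
            place(c, depth + 1)
    place(root, 0)
    return pos

for node, (x, y) in radial_layout(tree).items():
    print(f"{node:4s} ({x:6.2f}, {y:6.2f})")
```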

Keywords: radial drawing, visualization, algorithm, node-link diagrams

Procedia PDF Downloads 338
8673 Deciding Graph Non-Hamiltonicity via a Closure Algorithm

Authors: E. R. Swart, S. J. Gismondi, N. R. Swart, C. E. Bell

Abstract:

We present a heuristic algorithm that decides graph non-Hamiltonicity. All graphs are directed, with each undirected edge regarded as a pair of counter-directed arcs. Each of the n! Hamilton cycles in a complete graph on n+1 vertices is mapped to an n-permutation matrix P where p(u,i)=1 if and only if the i-th arc in a cycle enters vertex u, starting and ending at vertex n+1. We first create an exclusion set E by noting all arcs (u, v) not in G, sufficient to code precisely all cycles excluded from G, i.e., cycles not in G use at least one arc not in G. Members are pairs of components of P, {p(u,i), p(v,i+1)}, i=1, ..., n-1. A doubly stochastic-like relaxed LP formulation of the Hamilton cycle decision problem is constructed. Each {p(u,i), p(v,i+1)} in E is coded as a variable q(u,i,v,i+1)=0, i.e., it shrinks the feasible region. We then apply the Weak Closure Algorithm (WCA), which tests necessary conditions of a matching, together with Boolean closure, to decide 0/1 variable assignments. Each {p(u,i), p(v,j)} not in E is tested for membership in E and, if possible, added to E (q(u,i,v,j)=0) to iteratively maximize |E|. If the WCA constructs E to be maximal, i.e., the set of all {p(u,i), p(v,j)}, then G is decided non-Hamiltonian; only non-Hamiltonian G share this maximal property. Ten non-Hamiltonian graphs (10 through 104 vertices) and 2000 randomized 31-vertex non-Hamiltonian graphs were tested and correctly decided non-Hamiltonian. For Hamiltonian G, the complement of E covers a matching, which may be useful in searching for cycles. We also present an example where the WCA fails.

Keywords: Hamilton cycle decision problem, computational complexity theory, graph theory, theoretical computer science

Procedia PDF Downloads 373
8672 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction

Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan

Abstract:

Background and purpose: Cardiovascular diseases are among the most common diseases in all societies, and the most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to build predictive models for identifying those at risk is very helpful in reducing the effects of the disease. The present study aimed to collect data related to the risk factors of myocardial infarction from patients’ medical records and to develop predictive models using data mining algorithms. Methods: This analytical study was conducted on a database containing 350 records of patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-section data collection form and analyzed using SPSS and Clementine version 12. Seven predictive algorithms and one association-rule model were applied to the data; accuracy, precision, sensitivity, specificity, and positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, namely hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network had the highest sensitivity, indicating its ability to diagnose the disease successfully. Conclusion: Risk prediction models have great potential for facilitating the management of patients with a specific disease; health interventions or lifestyle changes can be planned on the basis of these models to improve the health of individuals at risk.
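
A minimal version of such a comparison might look like the following with scikit-learn, on synthetic tabular data since the hospital records are not public, reporting sensitivity and specificity from the confusion matrix.

```python
# Compare a decision tree and a small neural network on synthetic binary data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=350, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                    ("neural net", MLPClassifier(max_iter=2000, random_state=0))]:
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    print(f"{name}: sensitivity={tp / (tp + fn):.2f}, "
          f"specificity={tn / (tn + fp):.2f}")
```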

Keywords: decision trees, neural network, myocardial infarction, data mining

Procedia PDF Downloads 429
8671 Analytical Comparison of Conventional Algorithms with Vedic Algorithm for Digital Multiplier

Authors: Akhilesh G. Naik, Dipankar Pal

Abstract:

In today’s scenario, the complexity of digital signal processing (DSP) applications and of various microcontroller architectures has increased to such an extent that traditional approaches to multiplier design in most processors are becoming outdated for being comparatively slow. Modern processing applications require suitable pipelined approaches and, therefore, algorithms that are friendlier to pipelined architectures. Traditional algorithms like the Wallace tree, Radix-4 Booth, Radix-8 Booth and Dadda architectures have proven to be comparatively slow for pipelined architectures. These architectures therefore need to be optimized, or combined with one another, to enhance their performance and make them suitable for pipelined hardware. Recently, the Vedic algorithm has mathematically proven to be efficient, being less complex and requiring fewer steps to establish its output, and has assumed renewed importance. This paper describes how the Vedic algorithm can be better suited to pipelined architectures and how it can be combined with traditional architectures and algorithms to enhance its capability even further. We also establish that for complex applications on DSP and other microcontroller architectures, the Vedic approach to multiplication proves to be the best available and most efficient option.
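
The Urdhva Tiryagbhyam ("vertically and crosswise") pattern behind Vedic multipliers is easy to state in software terms: every output column is one crosswise sum of digit products, so all columns can be formed in parallel before a single carry pass. The sketch below demonstrates the arithmetic, not a hardware design.

```python
# Urdhva Tiryagbhyam multiplication: crosswise column sums, then one carry ripple.
def vedic_multiply(a, b):
    xs = [int(d) for d in str(a)][::-1]        # least-significant digit first
    ys = [int(d) for d in str(b)][::-1]
    cols = [0] * (len(xs) + len(ys) - 1)
    for i, x in enumerate(xs):                 # crosswise partial products
        for j, y in enumerate(ys):
            cols[i + j] += x * y
    digits, carry = [], 0
    for c in cols:                             # single carry-propagation pass
        carry, d = divmod(c + carry, 10)
        digits.append(d)
    while carry:
        carry, d = divmod(carry, 10)
        digits.append(d)
    return int("".join(map(str, digits[::-1])))

print(vedic_multiply(1234, 5678), 1234 * 5678)   # both print 7006652
```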

Keywords: Wallace Tree, Radix-4 Booth, Radix-8 Booth, Dadda, Vedic, Single-Stage Karatsuba (SSK), Looped Karatsuba (LK)

Procedia PDF Downloads 169
8670 Medical Imaging Fusion: A Teaching-Learning Simulation Environment

Authors: Cristina Maria Ribeiro Martins Pereira Caridade, Ana Rita Ferreira Morais

Abstract:

The use of computational tools has become essential in interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, given its integrated applications in healthcare facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool, developed in MATLAB with a graphical user interface, for medical image fusion that explores different fusion methodologies and processes in combination with image pre-processing techniques. The application runs different algorithms and medical fusion techniques in real time, allowing users to view the original and fused images, compare processed and original images, adjust parameters, and save images. The proposed tool offers an innovative teaching-learning environment: a dynamic and motivating simulation through which biomedical engineering students acquire knowledge of medical image fusion techniques and the skills required of biomedical engineers. In conclusion, the developed simulation tool provides real-time visualization of the original and fused images and the possibility to test, evaluate and advance students’ knowledge of medical image fusion. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and create new functions, making the simulation environment adaptable to new techniques and methodologies.
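
Two elementary pixel-level fusion rules that such a simulator can demonstrate are averaging and maximum selection. In the sketch below, random arrays stand in for co-registered image slices from two modalities, and the MATLAB GUI and pre-processing stages are not shown.

```python
# Pixel-level image fusion by averaging and by maximum selection.
import numpy as np

rng = np.random.default_rng(0)
ct = rng.random((128, 128))           # placeholder for modality 1 (e.g., CT)
mri = rng.random((128, 128))          # placeholder for modality 2 (e.g., MRI)

fused_avg = (ct + mri) / 2.0          # averaging rule: smooth, lower contrast
fused_max = np.maximum(ct, mri)       # maximum rule: keeps salient pixels

for name, img in [("average", fused_avg), ("maximum", fused_max)]:
    print(f"{name:8s} mean={img.mean():.3f} std={img.std():.3f}")
```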

Keywords: image fusion, image processing, teaching-learning simulation tool, biomedical engineering education

Procedia PDF Downloads 132
8669 Monte Carlo Methods and Statistical Inference of Multitype Branching Processes

Authors: Ana Staneva, Vessela Stoimenova

Abstract:

A parametric estimation of multitype branching processes (MBPs) with a power series offspring distribution family is considered in this paper. The MLE for the parameters is obtained in the case when the observable data are incomplete and consist only of the generation sizes of the family tree of the MBP. The parameter estimates are calculated using the Monte Carlo EM algorithm, while estimates of the posterior distribution and of the offspring distribution parameters are calculated using the Bayesian approach and the Gibbs sampler. The article presents various examples of bivariate branching processes, together with computational results, simulations and an implementation in R.
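
A single-type simplification of the setting can be simulated in a few lines: generate the generation sizes of a branching process with Poisson offspring (a power series family member) and recover the offspring mean with the Harris-type MLE, which, like the paper's setting, uses only generation sizes. The multitype Monte Carlo EM and Gibbs machinery is not reproduced here.

```python
# Simulate generation sizes of a Galton-Watson process and estimate the
# offspring mean from the sizes alone (Harris estimator).
import numpy as np

rng = np.random.default_rng(0)
true_mean = 1.4

def simulate_generations(z0=5, n_gen=15):
    sizes = [z0]
    for _ in range(n_gen):
        sizes.append(int(rng.poisson(true_mean, sizes[-1]).sum()))
        if sizes[-1] == 0:            # extinction
            break
    return sizes

z = simulate_generations()
m_hat = sum(z[1:]) / sum(z[:-1])      # MLE of the offspring mean
print("generation sizes:", z)
print(f"estimated offspring mean = {m_hat:.3f} (true {true_mean})")
```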

Keywords: Bayesian, branching processes, EM algorithm, Gibbs sampler, Monte Carlo methods, statistical estimation

Procedia PDF Downloads 421
8668 Masquerade and “What Comes Behind Six Is More Than Seven”: Thoughts on Art History and Visual Culture Research Methods

Authors: Osa D Egonwa

Abstract:

In the 21st century, the disciplinary boundaries of past centuries that we often create through mainstream art-historical classification, techniques and sources may have been eroded by visual culture, which seems to provide a more inclusive umbrella for the new ways artists go about the creative process and its resultant commodities. Over the past four decades, artists in Africa have turned to new materials, techniques and themes, which has affected the way we research these artists and their art. Frontline artists such as El Anatsui, Yinka Shonibare and Erasmus Onyishi are demonstrating that any material is suitable for artistic expression. Most of the time, these materials come with their own techniques/effects and visual syntax: a combination of materials compounds techniques, formal aesthetic indexes, halo effects, and iconography. This tends to challenge the categories we lean on to view, think and talk about them, rendering our mainstream art-historical research methods inadequate and suggesting new discursive concepts, terms and theories. This paper proposes an Africanist eclectic method derived from the dual framework of masquerade theory and "What Comes Behind Six Is More Than Seven". It shares thoughts and research on art-historical methods, terminological re-alignments of classification and source data, presentational formats, and interpretation arising from the emergent trends in our subject. The outcome provides useful tools to mediate new thoughts and experiences in recent African art and visual culture.

Keywords: art historical methods, classifications, concepts, re-alignment

Procedia PDF Downloads 110
8667 Facial Pose Classification Using Hilbert Space Filling Curve and Multidimensional Scaling

Authors: Mekamı Hayet, Bounoua Nacer, Benabderrahmane Sidahmed, Taleb Ahmed

Abstract:

Pose estimation is an important task in computer vision. Though the majority of existing solutions provide good accuracy, they are often overly complex and computationally expensive. In this perspective, we propose the use of dimensionality reduction techniques to address the problem of facial pose estimation. First, a face image is converted into a one-dimensional time series using a Hilbert space-filling curve; the approach then converts these time series data into a symbolic representation. Furthermore, a distance matrix is calculated between the symbolic series of an input learning dataset of images to generate classifiers of frontal versus profile face pose. The proposed method is evaluated on three public datasets. Experimental results show that our approach achieves a correct classification rate exceeding 97% with the K-NN algorithm.
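
The first step can be sketched with the standard distance-to-coordinate conversion for the Hilbert curve: walking the curve over a 2^k x 2^k image yields a 1D series in which neighbouring pixels stay close. The 8x8 "image" below is a placeholder, and the symbolisation and K-NN stages are not reproduced.

```python
# Flatten a 2^k x 2^k image along a Hilbert curve.
import numpy as np

def d2xy(n, d):
    """Map curve index d to (x, y) on an n x n grid (n a power of two)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                   # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

img = np.arange(64).reshape(8, 8)     # placeholder 8x8 face image
series = np.array([img[y, x] for x, y in (d2xy(8, d) for d in range(64))])
print(series[:16])
```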

Keywords: machine learning, pattern recognition, facial pose classification, time series

Procedia PDF Downloads 350
8666 Nanoparticles in Diagnosis and Treatment of Cancer, and Medical Imaging Techniques Using Nano-Technology

Authors: Rao Muhammad Afzal Khan

Abstract:

Nanotechnology is emerging as a useful technology in nearly all areas of science and technology. Its role in medical imaging is attracting researchers to both existing and new imaging modalities and techniques. This presentation gives an overview of the development of work done throughout the world and outlines the scope of future use of this technology for diagnosing different diseases. A comparative analysis is also discussed, with an emphasis on detecting diseases in general and cancer in particular.

Keywords: medical imaging, cancer detection, diagnosis, nano-imaging, nanotechnology

Procedia PDF Downloads 479