Search results for: efficient crow search algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9347

9017 An Optimization Algorithm Based on Dynamic Schema with Dissimilarities and Similarities of Chromosomes

Authors: Radhwan Yousif Sedik Al-Jawadi

Abstract:

Optimization is necessary for finding appropriate solutions to a range of real-life problems. In particular, genetic (or, more generally, evolutionary) algorithms have proved very useful in solving many problems for which analytical solutions are not available. In this paper, we present an optimization algorithm called Dynamic Schema with Dissimilarity and Similarity of Chromosomes (DSDSC), which is a variant of the classical genetic algorithm. This approach constructs new chromosomes from a schema and pairs of existing ones by exploring their dissimilarities and similarities. To show the effectiveness of the algorithm, it is tested and compared with the classical GA on 15 two-dimensional optimization problems taken from the literature. We have found that, in most cases, our method is better than the classical genetic algorithm.
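
As an illustration of the schema idea described above (not the authors' exact DSDSC operators, which the abstract does not detail), a minimal sketch in Python: a schema is built from two binary-encoded chromosomes by fixing the positions where they agree and leaving the differing positions free, and a new chromosome is then sampled from that schema.

import random

def build_schema(parent_a, parent_b):
    # Positions where the parents agree are fixed; differing positions
    # become wildcards ('*'), following the classical schema notation.
    return [a if a == b else '*' for a, b in zip(parent_a, parent_b)]

def sample_from_schema(schema):
    # Fill every wildcard with a random bit to create a new chromosome.
    return [random.randint(0, 1) if gene == '*' else gene for gene in schema]

parent_a = [1, 0, 1, 1, 0, 1]
parent_b = [1, 1, 1, 0, 0, 1]
schema = build_schema(parent_a, parent_b)      # [1, '*', 1, '*', 0, 1]
child = sample_from_schema(schema)             # e.g. [1, 1, 1, 0, 0, 1]
print(schema, child)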

Keywords: chromosome injection, dynamic schema, genetic algorithm, similarity and dissimilarity

Procedia PDF Downloads 320
9016 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor

Authors: Feng Tao, Han Ye, Shaoyi Liao

Abstract:

Buildings collapse after an earthquake and people may be buried under the ruins, so search and rescue should be conducted as soon as possible to save them. Given the complicated environment, irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. RSSI-based target localization, with its low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI propagation and the influence of the environment on RSSI, experiments are conducted in five scenes, and multiple filtering algorithms are applied to the raw RSSI values in order to establish, for each scene, the signal propagation model with the minimum test error. The target location is then calculated, through an improved centroid algorithm, from the distances estimated by the signal propagation model. Results show that RSSI-based localization is suitable for large-scale node localization. Among the filtering algorithms, the mixed filter (average of the mean, median and Gaussian filters) performs better than any single filter, and, using the signal propagation model, the minimum distance error between the known nodes and the target node across the five scenes is about 3.06 m.
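
A minimal sketch of the two steps named above under the common log-distance path-loss assumption; the model parameters, sample values and the exact composition of the mixed filter are illustrative, not the calibrated values from the five test scenes.

import numpy as np

def rssi_to_distance(rssi, rssi_at_1m=-45.0, path_loss_exponent=2.5):
    # Log-distance propagation model: RSSI = RSSI(1 m) - 10*n*log10(d).
    return 10.0 ** ((rssi_at_1m - rssi) / (10.0 * path_loss_exponent))

def mixed_filter(samples):
    # Illustrative mixed filter: average of the mean and the median of the
    # raw RSSI samples (a Gaussian-weighted term could be added similarly).
    return 0.5 * (np.mean(samples) + np.median(samples))

def weighted_centroid(anchors, distances):
    # Closer anchors (smaller estimated distance) get larger weights.
    weights = 1.0 / np.asarray(distances)
    return (weights[:, None] * np.asarray(anchors)).sum(axis=0) / weights.sum()

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]                 # known node positions (m)
raw_rssi = [[-52, -50, -51], [-63, -61, -64], [-58, -60, -59]]   # samples per anchor
distances = [rssi_to_distance(mixed_filter(s)) for s in raw_rssi]
print(weighted_centroid(anchors, distances))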

Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI

Procedia PDF Downloads 270
9015 Large-Capacity Image Information Reduction Based on Single-Cue Saliency Map for Retinal Prosthesis System

Authors: Yili Chen, Xiaokun Liang, Zhicheng Zhang, Yaoqin Xie

Abstract:

In an effort to restore visual perception in retinal diseases, an electronic retinal prosthesis with thousands of electrodes has been developed. The image processing strategy of a retinal prosthesis system converts the original images from the camera into a stimulus pattern that can be interpreted by the brain. In practice, the original images have a much higher resolution (e.g., 256x256) than the stimulus pattern (e.g., 25x25), which poses the technical challenge of large-capacity image information reduction. In this paper, we focus on developing an efficient stimulus pattern extraction algorithm that uses a single-cue saliency map to extract salient objects in the image with an optimal trimming threshold. Experimental results show that the proposed stimulus pattern extraction algorithm performs well for different scenes in terms of the resulting stimulus pattern. In the algorithm performance experiment, our proposed SCSPE algorithm achieves almost five times the score of Boyle's algorithm. The experiments suggest that when there are salient objects in the scene (such as when the blind user meets or talks with people), the trimming threshold should be set to around 0.4*max; in other situations, the trimming threshold can be set between 0.2*max and 0.4*max to give a satisfactory stimulus pattern.
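
A minimal sketch of the trimming step described above, assuming a saliency map is already available (the single-cue saliency computation itself is not detailed in the abstract): the map is thresholded at a fraction of its maximum value and then block-averaged down to the electrode resolution.

import numpy as np

def stimulus_pattern(saliency, out_shape=(25, 25), trim_ratio=0.4):
    # Keep only pixels whose saliency exceeds trim_ratio * max (e.g. 0.4*max
    # when salient objects such as people are expected in the scene).
    mask = saliency >= trim_ratio * saliency.max()
    trimmed = np.where(mask, saliency, 0.0)
    # Block-average the trimmed map down to the electrode-array resolution.
    h, w = trimmed.shape
    oh, ow = out_shape
    blocks = trimmed[: h - h % oh, : w - w % ow].reshape(oh, h // oh, ow, w // ow)
    return blocks.mean(axis=(1, 3))

saliency = np.random.rand(256, 256)            # stand-in for a 256x256 saliency map
pattern = stimulus_pattern(saliency, trim_ratio=0.4)
print(pattern.shape)                           # (25, 25)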

Keywords: retinal prosthesis, image processing, region of interest, saliency map, trimming threshold selection

Procedia PDF Downloads 222
9014 Intelligent Minimal Allocation of Capacitors in Distribution Networks Using Genetic Algorithm

Authors: S. Neelima, P. S. Subramanyam

Abstract:

A distribution system is an interface between the bulk power system and the consumers. Among these systems, the radial distribution system is popular because of its low cost and simple design. In distribution systems, the bus voltages decrease with distance from the substation, and the losses are high. The reason for the voltage drop and the high losses is an insufficient amount of reactive power, which can be supplied by shunt capacitors. However, placing a capacitor of appropriate size is always a challenge. Thus, the optimal capacitor placement problem is to determine the location and size of capacitors to be placed in distribution networks in an efficient way to reduce the power losses and improve the voltage profile of the system. For this purpose, a two-stage methodology is used in this paper. In the first stage, the load flow of the pre-compensated distribution system is carried out using the Dimension Reducing Distribution Load Flow Algorithm (DRDLFA). On the basis of this load flow, the potential locations for compensation are computed. In the second stage, the Genetic Algorithm (GA) technique is used to determine the optimal location and size of the capacitors such that the combined cost of energy losses and capacitors is minimized. The method is tested on the IEEE 9-bus and 34-bus systems and compared with other methods in the literature.
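
A hedged illustration of the second-stage objective described above: a minimal sketch of a GA fitness function combining the cost of energy losses with the capacitor cost. The cost coefficients, bus numbers and loss value below are hypothetical stand-ins for quantities that would come from the DRDLFA load flow.

def capacitor_placement_cost(kvar_at_bus, power_loss_kw, energy_cost_per_kwh=0.06,
                             capacitor_cost_per_kvar=4.0, hours_per_year=8760):
    # Objective minimized by the GA: annual cost of energy losses plus capacitor cost.
    loss_cost = energy_cost_per_kwh * power_loss_kw * hours_per_year
    capacitor_cost = capacitor_cost_per_kvar * sum(kvar_at_bus.values())
    return loss_cost + capacitor_cost

# Hypothetical candidate solution: capacitor sizes (kVAr) at buses shortlisted by the
# first-stage load flow; power_loss_kw would come from re-running the load flow.
candidate = {5: 300, 9: 150}
print(capacitor_placement_cost(candidate, power_loss_kw=42.0))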

Keywords: dimension reducing distribution load flow algorithm, DRDLFA, genetic algorithm, electrical distribution network, optimal capacitors placement, voltage profile improvement, loss reduction

Procedia PDF Downloads 366
9013 Simulation Approach for a Comparison of Linked Cluster Algorithm and Clusterhead Size Algorithm in Ad Hoc Networks

Authors: Ameen Jameel Alawneh

Abstract:

A mobile ad-hoc network (MANET) is a collection of wireless mobile hosts that dynamically form a temporary network without the aid of a system administrator. It has neither a fixed infrastructure nor pre-established wireless sessions. It inherently reaches several nodes with a single transmission, and each node functions as both a host and a router. The network may be represented as a set of clusters, each managed by a clusterhead. The cluster size is not fixed, and it depends on the movement of nodes. We propose a clusterhead size algorithm (CHSize). This clustering algorithm can be used by several routing algorithms for ad hoc networks. An elected clusterhead is assigned for communication with all other clusters. Analysis and simulation of the algorithm have been carried out using the GloMoSim network simulator, MATLAB and Maple 11, and the results show that the proposed algorithm achieves its goals.

Keywords: simulation, MANET, Ad-hoc, cluster head size, linked cluster algorithm, loss and dropped packets

Procedia PDF Downloads 368
9012 FE Analysis of Blade-Disc Dovetail Joints Using Mortar-Based Frictional Contact Formulation

Authors: Abbas Moradi, Mohsen Safajoy, Reza Yazdanparast

Abstract:

Analysis of blade-disc dovetail joints is one of the biggest challenges facing designers of aero-engines. To avoid comparatively expensive experimental full-scale tests, numerical methods can be used to simulate the loaded disc-blade assembly. The mortar method provides a powerful and flexible tool for solving frictional contact problems. In this study, 2D frictional contact in the dovetail joint is analysed based on the mortar algorithm. In order to model the friction, the classical Coulomb law and the moving friction cone algorithm are applied. The solution is then obtained by solving the resulting set of non-linear equations using an efficient numerical algorithm based on the Newton-Raphson method. The numerical results show that this approach has a better convergence rate and accuracy than other proposed numerical methods.

Keywords: computational contact mechanics, dovetail joints, nonlinear FEM, mortar approach

Procedia PDF Downloads 322
9011 Hybrid Algorithm for Frequency Channel Selection in Wi-Fi Networks

Authors: Cesar Hernández, Diego Giral, Ingrid Páez

Abstract:

This article proposes a hybrid algorithm for spectrum allocation in cognitive radio networks based on the Analytic Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to improve the spectrum mobility performance of secondary users in cognitive radio networks. To assess the performance of the proposed algorithm, a comparative analysis between the proposed AHP-TOPSIS, the Grey Relational Analysis (GRA) and the Multiplicative Exponent Weighting (MEW) algorithms is performed. Four evaluation metrics are used: the cumulative average of failed handoffs, the cumulative average of handoffs performed, the cumulative average of transmission bandwidth, and the cumulative average of transmission delay. The results of the comparison show that the AHP-TOPSIS algorithm performs 2.4 times better than the GRA algorithm and 1.5 times better than the MEW algorithm.
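
A minimal sketch of the TOPSIS ranking step of the hybrid scheme, assuming the AHP stage has already produced the criteria weights; the candidate-channel matrix, the two criteria and the weights below are illustrative, not taken from the paper.

import numpy as np

def topsis(decision_matrix, weights, benefit_criteria):
    # Rows = candidate channels, columns = criteria (e.g. bandwidth, delay).
    m = np.asarray(decision_matrix, dtype=float)
    m = m / np.linalg.norm(m, axis=0)               # vector normalization per criterion
    v = m * weights                                 # apply AHP-derived weights
    ideal = np.where(benefit_criteria, v.max(axis=0), v.min(axis=0))
    anti_ideal = np.where(benefit_criteria, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti_ideal, axis=1)
    return d_worst / (d_best + d_worst)             # closeness coefficient, higher is better

channels = [[20.0, 5.0], [15.0, 2.0], [25.0, 9.0]]  # [bandwidth MHz, delay ms] per channel
weights = np.array([0.6, 0.4])                      # assumed AHP output
scores = topsis(channels, weights, benefit_criteria=[True, False])
print(scores.argmax())                              # index of the preferred channel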

Keywords: cognitive radio, decision making, hybrid algorithm, spectrum handoff, wireless networks

Procedia PDF Downloads 511
9010 M-Machine Assembly Scheduling Problem to Minimize Total Tardiness with Non-Zero Setup Times

Authors: Harun Aydilek, Asiye Aydilek, Ali Allahverdi

Abstract:

Our objective is to minimize the total tardiness in an m-machine two-stage assembly flowshop scheduling problem. The objective is an important performance measure because the fulfillment of customers' due dates has to be taken into account when making scheduling decisions. In the literature, the problem is considered with zero setup times, which may not be realistic or appropriate for some scheduling environments. Considering setup times separately from processing times increases machine utilization by decreasing the idle time and reduces total tardiness. We propose two new algorithms and adapt four existing algorithms from the literature, which are different versions of simulated annealing and genetic algorithms. Moreover, a dominance relation is developed based on the mathematical formulation of the problem and incorporated in our proposed algorithms. Computational experiments are conducted to investigate the performance of the newly proposed algorithms. We find that one of the proposed algorithms performs significantly better than the others, i.e., the error of the best algorithm is at least 50% smaller than those of the other algorithms. The newly proposed algorithm is also efficient for the case of zero setup times and performs better than the best existing algorithm in the literature.
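
A minimal sketch of the objective evaluated by all of the compared algorithms, total tardiness, given job completion times and due dates; the scheduling logic with machine assignments and setup times is problem-specific and not reproduced here, and the numbers are illustrative.

def total_tardiness(completion_times, due_dates):
    # Tardiness of a job is max(0, completion time - due date);
    # the objective is the sum over all jobs.
    return sum(max(0.0, c - d) for c, d in zip(completion_times, due_dates))

# Illustrative values: completion times produced by some schedule vs. due dates.
print(total_tardiness([12.0, 20.0, 31.0], [15.0, 18.0, 30.0]))   # 3.0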

Keywords: algorithm, assembly flowshop, scheduling, simulation, total tardiness

Procedia PDF Downloads 302
9009 Left to Right-Right Most Parsing Algorithm with Lookahead

Authors: Jamil Ahmed

Abstract:

The Left to Right-Right Most (LR) parsing algorithm is a widely used algorithm for syntax analysis. It relies on a parsing table, which is extracted from the grammar. The parsing table specifies the actions to be taken during parsing, and it is required to have no action conflicts for the same input symbol. This requirement imposes a condition on the class of grammars over which LR algorithms work. However, there are grammars whose parsing tables contain action conflicts. In such cases, the algorithm needs the capability to scan (look ahead at) input symbols beyond the current input symbol. In this paper, a 'Left to Right'-'Right Most' parsing algorithm with lookahead capability is introduced. This 'look-ahead' capability is the major contribution of this paper. The practicality of the proposed algorithm is substantiated by a parser implementation for the Context Free Grammar (CFG) of a previously proposed programming language, 'State Controlled Object Oriented Programming' (SCOOP). SCOOP's Context Free Grammar has 125 productions and 192 item sets. The algorithm parses SCOOP even though the grammar requires looking ahead at input symbols due to action conflicts in its parsing table. The proposed LR parsing algorithm with lookahead capability can be viewed as an optimization of the 'Simple Left to Right'-'Right Most' (SLR) parsing algorithm.
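
A minimal sketch of the standard table-driven LR driver that the proposed algorithm extends; the tables below encode a toy grammar (S -> a S | b), not SCOOP's grammar, and the multi-symbol lookahead for resolving action conflicts, which is the paper's contribution, is not reproduced here.

def lr_parse(tokens, action, goto):
    # Classic table-driven LR driver: shift states onto a stack, reduce by
    # grammar rules, and accept when the whole input has been consumed.
    stack, pos = [0], 0
    tokens = list(tokens) + ['$']
    while True:
        state, lookahead = stack[-1], tokens[pos]
        act = action[state].get(lookahead)
        if act is None:
            raise SyntaxError(f'unexpected {lookahead!r} in state {state}')
        if act[0] == 'shift':
            stack.append(act[1]); pos += 1
        elif act[0] == 'reduce':
            lhs, rhs_len = act[1], act[2]
            del stack[len(stack) - rhs_len:]       # pop one state per right-hand-side symbol
            stack.append(goto[stack[-1]][lhs])
        else:                                      # accept
            return True

# Toy SLR tables for the grammar S -> a S | b.
ACTION = {
    0: {'a': ('shift', 2), 'b': ('shift', 3)},
    1: {'$': ('accept',)},
    2: {'a': ('shift', 2), 'b': ('shift', 3)},
    3: {'$': ('reduce', 'S', 1)},
    4: {'$': ('reduce', 'S', 2)},
}
GOTO = {0: {'S': 1}, 2: {'S': 4}}
print(lr_parse('aab', ACTION, GOTO))   # True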

Keywords: left to right-right most parsing, syntax analysis, bottom-up parsing algorithm

Procedia PDF Downloads 96
9008 Improved Whale Algorithm Based on Information Entropy and Its Application in Truss Structure Optimization Design

Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong, Gan Lei, Xu Liqun

Abstract:

Given that the original whale optimization algorithm (WOA) tends to fall into local optima and has low convergence accuracy in truss structure optimization problems, an improved whale optimization algorithm (SWAO) based on information entropy is proposed on top of the basic whale algorithm. Information entropy is a measure of uncertainty; here it is used to control the range of the whale search during path selection. This overcomes the shortcomings of the basic whale optimization algorithm and improves the global convergence speed of the algorithm. Taking the truss structure as the optimization object, the mathematical model of truss structure optimization is established: the cross-sectional areas of the truss members are taken as the design variables and the objective function is the weight of the truss structure. The improved whale optimization algorithm (SWAO) is then used for the optimization design, which provides a new idea and means for its application in the optimization design of large and complex engineering structures.
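
A minimal sketch of the core WOA position update that the improved algorithm builds on; the scalar labelled entropy_factor below is an assumed stand-in for the entropy-based control of the search range described above, and the truss model itself is not included.

import numpy as np

def woa_step(positions, best, a, entropy_factor=1.0, rng=np.random.default_rng()):
    # One encircling/spiral update of the whale optimization algorithm.
    # 'a' decreases from 2 to 0 over the iterations; the (assumed) entropy
    # factor further narrows the search range as uncertainty drops.
    new_positions = np.empty_like(positions)
    for i, x in enumerate(positions):
        r = rng.random(x.shape)
        A = (2.0 * a * r - a) * entropy_factor      # exploration/exploitation coefficient
        C = 2.0 * rng.random(x.shape)
        if rng.random() < 0.5:                      # shrinking encircling of the best whale
            new_positions[i] = best - A * np.abs(C * best - x)
        else:                                       # logarithmic spiral around the best whale
            l = rng.uniform(-1.0, 1.0)
            new_positions[i] = np.abs(best - x) * np.exp(l) * np.cos(2 * np.pi * l) + best
    return new_positions

positions = np.random.rand(10, 4)                   # e.g. 10 whales, 4 member cross-sections
best = positions[0]
print(woa_step(positions, best, a=1.5, entropy_factor=0.8).shape)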

Keywords: information entropy, structural optimization, truss structure, whale algorithm

Procedia PDF Downloads 216
9007 Bi-Criteria Objective Network Design Model for Multi Period Multi Product Green Supply Chain

Authors: Shahul Hamid Khan, S. Santhosh, Abhinav Kumar Sharma

Abstract:

Environmental performance, along with social performance, is becoming a vital factor for industries seeking to meet global standards. With a good environmental policy, global industries differentiate themselves from their competitors. This paper concentrates on a multi-stage, multi-product and multi-period manufacturing network. Bi-objective mathematical models for the total cost and total emissions of the entire forward supply chain are considered. Five different problems, varying the number of suppliers, manufacturers and environmental levels, are considered to illustrate the proposed mathematical model. A genetic algorithm (GA) and random search are used to find the optimal solution. The input parameters of the optimal solution are used to find the trade-off between the industry's initial investment and the long-term benefit to the environment.

Keywords: closed loop supply chain, genetic algorithm, random search, green supply chain

Procedia PDF Downloads 526
9006 Hybrid Adaptive Modeling to Enhance Robustness of Real-Time Optimization

Authors: Hussain Syed Asad, Richard Kwok Kit Yuen, Gongsheng Huang

Abstract:

Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided. When model mismatches are significant, the performance of the real-time optimization will be impaired and hence the expected energy saving will be reduced. In this paper, the model mismatches of the chiller plant in real-time optimization are considered. In the real-time optimization of the chiller plant, a simplified semi-physical or grey-box chiller model is typically used, which must be identified from available operation data. To overcome the model mismatches associated with the chiller model, a hybrid Genetic Algorithms (HGAs) method is used for online real-time training of the chiller model. HGAs combine the Genetic Algorithms (GAs) method (for global search) with a traditional optimization method (faster and more efficient for local search) to avoid the conventional hit-and-trial process of GAs. The identification of the model parameters is formulated as an optimization problem whose objective function is the least-square error between the model output and the actual output of the chiller plant. A case study is used to illustrate the implementation of the proposed method. It has been shown that the proposed approach is able to provide reliability in decision making, enhance the robustness of the real-time optimization strategy and improve energy performance.

Keywords: energy performance, hybrid adaptive modeling, hybrid genetic algorithms, real-time optimization, heating, ventilation, and air-conditioning

Procedia PDF Downloads 389
9005 A Filtering Algorithm for a Nonlinear State-Space Model

Authors: Abdullah Eqal Al Mazrooei

Abstract:

The Kalman filter is a well-known algorithm used to estimate the state of linear systems, and it has numerous applications in science and technology. However, most real-life applications are described by nonlinear systems, for which the Kalman filter is not applicable since it is suited to linear systems only. In this work, a nonlinear filtering algorithm is presented which is suitable for special kinds of nonlinear systems. This filter generalizes the Kalman filter, which means that it can also be used for linear systems. Our algorithm relies on a special second-degree linearization, and the nonlinear algorithm is introduced with a bilinear state-space model. A simulation example is presented to illustrate the efficiency of the algorithm.

Keywords: Kalman filter, filtering algorithm, nonlinear systems, state-space model

Procedia PDF Downloads 351
9004 Chemical Reaction Algorithm for Expectation Maximization Clustering

Authors: Li Ni, Pen ManMan, Li KenLi

Abstract:

Clustering has been an intensive research area for some years because of its multifaceted applications in biology, information retrieval, medicine, business and so on. Expectation maximization (EM) is an algorithmic framework in clustering methods and one of the top ten algorithms of machine learning. Traditionally, optimization of an objective function has been the standard approach in EM; hence, research has investigated the utility of evolutionary computing and related techniques in this regard. Chemical Reaction Optimization (CRO) is a recently established method, and its embedded properties are used to solve optimization problems. This paper presents an algorithm framework (EM-CRO) with modified CRO operators based on EM clustering problems. The hybrid algorithm mainly addresses the sensitivity to initial values of objective-function-based clustering algorithms. Our experiments take the classic EM algorithms k-means and fuzzy k-means as examples and use the CRO algorithm to optimize their initial values, yielding the K-means-CRO and FKM-CRO algorithms. The experimental results show improved efficiency in solving objective function optimization clustering problems.

Keywords: chemical reaction optimization, expectation maximization, initialization, objective function clustering

Procedia PDF Downloads 687
9003 Indexing and Incremental Approach Using Map Reduce Bipartite Graph (MRBG) for Mining Evolving Big Data

Authors: Adarsh Shroff

Abstract:

Big data is a collection of datasets so large and complex that they become difficult to process using database management tools. Operations such as search, analysis and visualization are performed on big data by using data mining, which is the process of extracting patterns or knowledge from large data sets. Over time, the results of data mining applications become stale and obsolete. Incremental processing is a promising approach to refreshing mining results: it utilizes previously saved states to avoid the expense of re-computation from scratch. This project uses i2MapReduce, an incremental processing extension to MapReduce, the most widely used framework for mining big data. i2MapReduce performs key-value pair level incremental processing rather than task-level re-computation, supports not only one-step computation but also more sophisticated iterative computation, which is widely used in data mining applications, and incorporates a set of novel techniques to reduce the I/O overhead of accessing preserved fine-grain computation states. To evaluate the mining results, i2MapReduce is assessed using a one-step algorithm and three iterative algorithms with diverse computation characteristics for efficient mining.

Keywords: big data, map reduce, incremental processing, iterative computation

Procedia PDF Downloads 322
9002 A New Class of Conjugate Gradient Methods Based on a Modified Search Direction for Unconstrained Optimization

Authors: Belloufi Mohammed, Sellami Badreddine

Abstract:

Conjugate gradient methods have played a special role in solving large-scale optimization problems due to the simplicity of their iterations, their convergence properties and their low memory requirements. In this work, we propose a new class of conjugate gradient methods which ensures sufficient descent. Moreover, we propose a new search direction combined with the Wolfe line search technique for solving unconstrained optimization problems; a global convergence result for general functions is established provided that the line search satisfies the Wolfe conditions. Our numerical experiments indicate that the proposed methods are preferable and in general superior to the classical conjugate gradient methods in terms of efficiency and robustness.
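
A minimal sketch of a nonlinear conjugate gradient iteration with a Wolfe line search, using SciPy's line_search helper; since the abstract does not specify the authors' modified search direction, the classical Fletcher-Reeves update is used below purely as a placeholder.

import numpy as np
from scipy.optimize import line_search

def cg_fletcher_reeves(f, grad, x0, max_iter=100, tol=1e-6):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                         # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]      # step length satisfying the Wolfe conditions
        if alpha is None:                          # line search failed; restart along -g
            d, alpha = -g, 1e-3
        x = x + alpha * d
        g_new = grad(x)
        beta = g_new @ g_new / (g @ g)             # Fletcher-Reeves coefficient (placeholder)
        d = -g_new + beta * d
        g = g_new
    return x

rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                                 200 * (x[1] - x[0] ** 2)])
print(cg_fletcher_reeves(rosenbrock, rosen_grad, [-1.2, 1.0]))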

Keywords: unconstrained optimization, conjugate gradient method, sufficient descent property, numerical comparisons

Procedia PDF Downloads 378
9001 Metaheuristic Bat Algorithm in Training of Feed-Forward Neural Network for Stock Price Prediction

Authors: Marjan Golmaryami, Marzieh Behzadi

Abstract:

Recent developments in stock exchanges highlight the need for an efficient and accurate method that helps stockholders make better decisions. Since stock markets fluctuate over time and are affected by many different parameters, it is difficult to make good decisions. The purpose of this study is to employ an artificial neural network (ANN), which can deal with time series data and nonlinear relations among variables, to forecast the next-day stock price. Unlike other evolutionary algorithms that have been utilized in stock exchange prediction, we trained our proposed neural network with the metaheuristic bat algorithm, which has fast and powerful convergence, and applied it to stock price prediction for the first time. In order to prove the performance of the proposed method, this research selected a 7-year dataset of Parsian Bank stocks and, after data preprocessing, used three types of ANN (backpropagation-ANN, particle swarm optimization-ANN and bat-ANN) to predict the closing price of the stocks. Afterwards, MATLAB was used to simulate the three types of ANN, with mean absolute percentage error (MAPE) as the scoring target. The results may be adapted to other companies' stocks as well.
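
A minimal sketch of the MAPE scoring target used to compare the three trained networks; the price values below are illustrative, not Parsian Bank data.

import numpy as np

def mape(actual, predicted):
    # Mean absolute percentage error, in percent.
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

print(mape([100.0, 102.0, 98.0], [101.0, 100.0, 99.0]))   # roughly 1.33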

Keywords: artificial neural network (ANN), bat algorithm, particle swarm optimization algorithm (PSO), stock exchange

Procedia PDF Downloads 525
9000 An Improved Many Worlds Quantum Genetic Algorithm

Authors: Li Dan, Zhao Junsuo, Zhang Wenjun

Abstract:

To address the shortcomings of the Quantum Genetic Algorithm, such as easily falling into local optima on multimodal function optimization problems and vulnerability to premature convergence due to the lack of a close relationship between individuals, this paper presents an Improved Many Worlds Quantum Genetic Algorithm (IMWQGA). The algorithm uses the concept of many worlds; adopts the derivation of parallel worlds and their parallel evolution; puts forward the idea of updating the population according to the main body; and adopts transition methods such as parallel transition, backtracking and travelling forth. In addition, the paper proposes the quantum training operator and the combinatorial optimization operator as new operators of the quantum genetic algorithm.

Keywords: quantum genetic algorithm, many worlds, quantum training operator, combinatorial optimization operator

Procedia PDF Downloads 714
8999 A Location-Based Search Approach According to Users’ Application Scenario

Authors: Shih-Ting Yang, Chih-Yun Lin, Ming-Yu Li, Jhong-Ting Syue, Wei-Ming Huang

Abstract:

The Global Positioning System (GPS) has become increasingly precise in recent years, and location-based services (LBS) have developed rapidly. Take the example of finding a parking lot (e.g., with parking apps): the location-based service can offer immediate information about a nearby parking lot, including the number of remaining parking spaces. However, it cannot provide search results tailored to the users' situational requirements. For that reason, this paper develops a 'Location-based Search Approach according to Users' Application Scenario' based on location-based search and demand determination to help users obtain information consistent with their requirements. The approach consists of one mechanism and three kernel modules. First, in the Information Pre-processing Mechanism (IPM), the cosine theorem is used to categorize the locations of users. Then, in the Information Category Evaluation Module (ICEM), kNN (k-Nearest Neighbor) is employed to classify the browsing records of users. After that, in the Information Volume Level Determination Module (IVLDM), the number of user clicks on the information at different locations is compared with the average number of clicks on the information at a specific location in order to evaluate the urgency of demand; a two-dimensional space is then used to estimate the users' application situations. In the last step, in the Location-based Search Module (LBSM), all search results are compared with the average number of characters of the search results, the search results are categorized using the Manhattan distance, and the results are selected according to the users' application scenario. Additionally, a Web-based system is developed according to the methodology to demonstrate its practical application. The application scenario-based estimation and the location-based search are used to evaluate the type and abundance of the information expected by the public at a specific location, so that information demanders can obtain information consistent with their application situations at that location.
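
A minimal sketch of two of the building blocks named above: kNN classification of browsing records and the Manhattan distance; the feature encoding and labels are hypothetical, since the abstract does not specify them.

import numpy as np
from collections import Counter

def manhattan(a, b):
    return np.abs(np.asarray(a, float) - np.asarray(b, float)).sum()

def knn_classify(query, records, labels, k=3):
    # Classify a browsing record by majority vote among its k nearest
    # neighbours under the Manhattan distance.
    distances = [manhattan(query, r) for r in records]
    nearest = np.argsort(distances)[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Hypothetical browsing-record features: [result length, clicks, dwell time].
records = [[120, 3, 40], [300, 1, 10], [110, 4, 55], [280, 2, 8]]
labels = ['parking', 'restaurant', 'parking', 'restaurant']
print(knn_classify([130, 3, 35], records, labels, k=3))    # 'parking'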

Keywords: data mining, knowledge management, location-based service, user application scenario

Procedia PDF Downloads 92
8998 Discrete Swarm with Passive Congregation for Cost Minimization of the Multiple Vehicle Routing Problem

Authors: Tarek Aboueldahab, Hanan Farag

Abstract:

Cost minimization of the Multiple Vehicle Routing Problem is a critical issue in the field of transportation because it is an NP-hard optimization problem with a complex search space. Many studies use hybridized artificial intelligence (AI) models to solve this problem; however, they cannot guarantee reaching the best solution because of the difficulty of searching the whole search space. To overcome this problem, we introduce a hybrid model of Discrete Particle Swarm Optimization (DPSO) with passive congregation, which enables searching the whole search space and compromises between local and global search. The practical experiments show that our model clearly outperforms other hybrid models in cost minimization.

Keywords: cost minimization, multi-vehicle routing problem, passive congregation, discrete swarm

Procedia PDF Downloads 71
8997 A Genetic Algorithm Based Sleep-Wake up Protocol for Area Coverage in WSNs

Authors: Seyed Mahdi Jameii, Arash Nikdel, Seyed Mohsen Jameii

Abstract:

Energy efficiency is an important issue in the field of Wireless Sensor Networks (WSNs), so minimizing the energy consumption in this kind of network is an essential consideration. A sleep/wake scheduling mechanism is an efficient approach to handling this issue. In this paper, we propose a Genetic Algorithm-based Sleep-Wake up Area Coverage protocol called GA-SWAC. The proposed protocol puts the minimum number of nodes in active mode and adjusts the sensing radius of each active node to decrease the energy consumption while maintaining the network's coverage. The proposed protocol is simulated, and the results demonstrate its efficiency in terms of coverage ratio, number of active nodes and energy consumption.

Keywords: wireless sensor networks, genetic algorithm, coverage, connectivity

Procedia PDF Downloads 489
8996 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees

Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata is generated from each related image to support its identification. This study presents the use of decision trees for the optimization of the information search process for diagnostic images hosted on a cloud server. To analyze the performance of the server, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency and throughput, in five test scenarios for a total of 26 experiments during the loading and downloading of DICOM images hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times of diagnostic images on the server. The results show that by using the metadata in decision trees, the search times are substantially improved, the computational resources are optimized and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% in relation to the sequential search, given that, when downloading a diagnostic image, false positives are avoided in the management and acquisition of the information. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
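
A minimal sketch of the idea of guiding image lookups with a decision tree trained on metadata and comparing it with a sequential scan; the metadata fields, codes and labels are hypothetical, and scikit-learn is used here only for illustration.

from sklearn.tree import DecisionTreeClassifier

# Hypothetical DICOM metadata features: [modality code, body-part code, slice count].
metadata = [[0, 1, 120], [0, 2, 90], [1, 1, 40], [1, 3, 200], [0, 3, 60]]
study_group = ['CT-head', 'CT-chest', 'MR-head', 'MR-abdomen', 'CT-abdomen']

tree = DecisionTreeClassifier().fit(metadata, study_group)

def sequential_search(query, rows, labels):
    # Baseline: scan every record until the metadata matches.
    for row, label in zip(rows, labels):
        if row == query:
            return label
    return None

query = [1, 1, 40]
print(tree.predict([query])[0])                           # tree-guided lookup
print(sequential_search(query, metadata, study_group))    # linear-scan baseline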

Keywords: cloud storage, decision trees, diagnostic image, search, telemedicine

Procedia PDF Downloads 182
8995 A Rapid Code Acquisition Scheme in OOC-Based CDMA Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

We propose a code acquisition scheme called improved multiple-shift (IMS) for optical code division multiple access systems, where the optical orthogonal code is used instead of the pseudo-noise code. Although the IMS algorithm has a process similar to that of the conventional MS algorithm, it achieves better code acquisition performance than the conventional MS algorithm. We analyze the code acquisition performance of the IMS algorithm and compare the code acquisition performances of the MS and IMS algorithms in single-user and multi-user environments.

Keywords: code acquisition, optical CDMA, optical orthogonal code, serial algorithm

Procedia PDF Downloads 508
8994 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms

Authors: Bliss Singhal

Abstract:

Machine learning (ML) is a branch of Artificial Intelligence (AI) in which computers analyze data and find patterns in the data. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body and is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying tumors as benign or malignant. This tedious task contributes to metastasis being mislabeled over 60% of the time and emphasizes the importance of being aware of human error and other inefficiencies. ML is a good candidate for improving the correct identification of metastatic cancer, potentially saving thousands of lives, and can also improve the speed and efficiency of the process, thereby requiring fewer resources and less time. So far, the deep learning methodology of AI has been used in research to detect cancer. This study is a novel approach to determining the potential of combining preprocessing algorithms with classification algorithms to detect metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and the genetic algorithm, to reduce the dimensionality of the dataset, and then used three classification algorithms, logistic regression, decision tree classifier and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy of 71.14% was produced by the ML pipeline comprising PCA, the genetic algorithm and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
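
A minimal sketch of a preprocessing-plus-classification pipeline in the spirit described above, using scikit-learn's PCA and k-nearest neighbours on a stand-in tabular dataset; the GA-based feature selection step used in the study is not part of scikit-learn and is omitted here.

from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in tabular dataset; the study itself uses features from pathology scans.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = make_pipeline(StandardScaler(), PCA(n_components=10),
                         KNeighborsClassifier(n_neighbors=5))
pipeline.fit(X_train, y_train)
print(pipeline.score(X_test, y_test))            # accuracy of the PCA + kNN pipeline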

Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression

Procedia PDF Downloads 57
8993 Web Search Engine Based Naming Procedure for Independent Topic

Authors: Takahiro Nishigaki, Takashi Onoda

Abstract:

In recent years, the amount of document data has been increasing with the spread of the Internet, and many methods have been studied for extracting topics from large document collections. We have proposed Independent Topic Analysis (ITA) to extract mutually independent topics from large document data such as newspaper data. ITA extracts independent topics from the document data by using Independent Component Analysis. A topic extracted by ITA is represented by a set of words. However, such a set of words can be quite different from the topic the user imagines. For example, the top five words with high independence for one topic are as follows: Topic1 = {"scor", "game", "lead", "quarter", "rebound"}. This Topic 1 is considered to represent the topic of "SPORTS", but the topic name "SPORTS" has to be attached by the user; ITA cannot name topics. Therefore, in this research, we propose a method to obtain topic names that are easy for people to understand by using a web search engine on the sets of words produced by Independent Topic Analysis. In particular, we search for the set of topical words, and the title of the top page of the search results is taken as the topic name. We also apply the proposed method to some data and verify its effectiveness.

Keywords: independent topic analysis, topic extraction, topic naming, web search engine

Procedia PDF Downloads 100
8992 Towards a Framework for Embedded Weight Comparison Algorithm with Business Intelligence in the Plantation Domain

Authors: M. Pushparani, A. Sagaya

Abstract:

Embedded systems have emerged as important elements in various domains, with extensive applications in the automotive, commercial, consumer, healthcare and transportation markets, as there is an emphasis on intelligent devices. On the other hand, Business Intelligence (BI) has also been extensively used in a range of applications, especially in the agriculture domain, which is the area of this research. The aim of this research is to create a framework for an Embedded Weight Comparison Algorithm with Business Intelligence (EWCA-BI). The weight comparison algorithm will be embedded within the plantation management system and the weighbridge system. The algorithm will be used to estimate the weight at the site, which will then be compared with the actual weight at the plantation. The algorithm will be used to raise the necessary alerts when there is a discrepancy in the weight, thus enabling better decision making. In current practice, data are collected from various locations in various forms, and it is a challenge to consolidate the data to obtain timely and accurate information for effective decision making. Adding to this, unstable network connections make it difficult to obtain timely, accurate information. To overcome these challenges, the weight comparison algorithm is embedded on a portable device that also assists in data capture and synchronizes data at the various locations, overcoming the network shortcomings at the collection points. EWCA-BI will provide real-time information at any given point in time, thus enabling low-latency BI reports that provide crucial information for efficient operational decision making. This research has high potential for bringing embedded systems into the agriculture industry. EWCA-BI will provide BI reports with accurate information and uncompromised data using an embedded system, and will provide alerts, thereby enabling effective operational decision-making at the site.
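
A minimal sketch of the weight comparison and alerting logic described above; the field names, consignment identifier and tolerance value are assumptions, since the abstract does not specify them.

def check_weight(consignment_id, site_estimate_kg, plantation_actual_kg, tolerance=0.05):
    # Raise an alert when the site estimate and the actual weighbridge reading
    # differ by more than the allowed relative tolerance (assumed 5% here).
    discrepancy = abs(site_estimate_kg - plantation_actual_kg) / plantation_actual_kg
    return {'consignment': consignment_id,
            'alert': discrepancy > tolerance,
            'discrepancy_pct': round(100 * discrepancy, 2)}

print(check_weight('FFB-0417', site_estimate_kg=10400, plantation_actual_kg=9800))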

Keywords: embedded business intelligence, weight comparison algorithm, oil palm plantation, embedded systems

Procedia PDF Downloads 260
8991 Concept for Determining the Focus of Technology Monitoring Activities

Authors: Guenther Schuh, Christina Koenig, Nico Schoen, Markus Wellensiek

Abstract:

Identification and selection of appropriate product and manufacturing technologies are key factors for the competitiveness and market success of technology-based companies. Therefore, many companies perform technology intelligence (TI) activities to ensure that evolving technologies are identified at the right time. Technology monitoring is one of the three base activities of TI, besides scanning and scouting. As technological progress is accelerating, more and more technologies are being developed. Against the background of limited resources, it is therefore necessary to focus TI activities. In this paper, we propose a concept for defining appropriate search fields for technology monitoring. This limitation of the search space leads to more concentrated monitoring activities. The concept is introduced and demonstrated through an anonymized case study conducted within an industry project at the Fraunhofer Institute for Production Technology. The described concept provides a customized monitoring approach which is suitable for use in technology-oriented companies, especially those that have not yet defined an explicit technology strategy. It is shown in this paper that the definition of search fields and search tasks is a suitable method to define topics of interest and thus to direct monitoring activities. Current as well as planned product, production and material technologies, together with existing skills, capabilities and resources, form the basis of the described derivation of relevant search areas. To further improve the concept of technology monitoring, the proposed concept should be extended in future research, e.g., by the definition of relevant monitoring parameters.

Keywords: monitoring radar, search field, technology intelligence, technology monitoring

Procedia PDF Downloads 446
8990 Top-K Shortest Distance as a Similarity Measure

Authors: Andrey Lebedev, Ilya Dmitrenok, JooYoung Lee, Leonard Johard

Abstract:

The top-k shortest path routing problem is an extension of finding the shortest path in a given network. The shortest path is one of the most essential measures as it reveals the relations between two nodes in a network. However, in many real-world networks, whose diameters are small, top-k shortest paths are more interesting as they contain more information about the network topology. Many variations for computing top-k shortest paths have been studied. In this paper, we apply an efficient top-k shortest distance routing algorithm to the link prediction problem and test its efficacy. We compare the results with other baseline and state-of-the-art methods as well as with the shortest path. Then, we also propose a top-k distance-based graph matching algorithm.
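
A minimal sketch of computing top-k shortest distances with NetworkX's shortest_simple_paths (which enumerates simple paths in order of increasing length) and turning them into a similarity score; the inverse-distance similarity below is an illustrative choice, not necessarily the measure used in the paper.

from itertools import islice
import networkx as nx

def top_k_distances(graph, source, target, k=3, weight='weight'):
    # shortest_simple_paths yields simple paths from shortest to longest.
    paths = islice(nx.shortest_simple_paths(graph, source, target, weight=weight), k)
    return [nx.path_weight(graph, p, weight=weight) for p in paths]

def top_k_similarity(graph, source, target, k=3):
    # Illustrative similarity: node pairs joined by several short paths score higher.
    return sum(1.0 / d for d in top_k_distances(graph, source, target, k) if d > 0)

G = nx.Graph()
G.add_weighted_edges_from([('a', 'b', 1), ('b', 'c', 1), ('a', 'c', 3),
                           ('a', 'd', 1), ('d', 'c', 1)])
print(top_k_distances(G, 'a', 'c', k=3))    # [2, 2, 3]
print(top_k_similarity(G, 'a', 'c', k=3))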

Keywords: graph matching, link prediction, shortest path, similarity

Procedia PDF Downloads 336
8989 Indian Road Traffic Flow Analysis Using Blob Tracking from Video Sequences

Authors: Balaji Ganesh Rajagopal, Subramanian Appavu alias Balamurugan, Ayyalraj Midhun Kumar, Krishnan Nallaperumal

Abstract:

Intelligent Transportation Systems (ITS) is an emerging area that addresses multiple transportation problems, and several forms of input are needed in order to solve ITS problems. The Advanced Traveler Information System (ATIS) is a core and important ITS area in this modern era. It involves travel time forecasting, efficient road map analysis, cost-based path selection, detection of vehicles under dynamic conditions and traffic congestion state forecasting. This article designs and provides an algorithm for traffic data generation which can be used for the above-mentioned ATIS applications. Taking the real-world traffic situation as input in the form of video sequences, the algorithm determines the traffic density in terms of congestion and the number of vehicles on a given path, which can be fed to various ATIS applications. The algorithm deduces key frames from the video sequences and performs blob detection, identification and tracking using a connected components algorithm to determine the correlation between the vehicles moving in the real road scene.
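
A minimal sketch of the blob-detection step using connected components in OpenCV on a frame-difference foreground mask; the key-frame selection, background model and the tracking/correlation step are not reproduced here, and the thresholds are assumptions.

import cv2
import numpy as np

def count_vehicle_blobs(frame, background, min_area=400):
    # Foreground mask by differencing the frame with a background image,
    # then label connected components; each sufficiently large blob is
    # treated as a vehicle candidate.
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(background, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    num_labels, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [(tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA]))
            for i in range(1, num_labels)                   # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

frame = np.zeros((240, 320, 3), np.uint8)
background = frame.copy()
cv2.rectangle(frame, (50, 60), (90, 100), (255, 255, 255), -1)   # synthetic "vehicle"
print(count_vehicle_blobs(frame, background))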

Keywords: traffic transportation, traffic density estimation, blob identification and tracking, relative velocity of vehicles, correlation between vehicles

Procedia PDF Downloads 488
8988 Multi-Criteria Evolutionary Algorithm to Develop Efficient Schedules for Complex Maintenance Problems

Authors: Sven Tackenberg, Sönke Duckwitz, Andreas Petz, Christopher M. Schlick

Abstract:

This paper introduces an extension of the well-established Resource-Constrained Project Scheduling Problem (RCPSP) to apply it to complex maintenance problems. The problem is to assign technicians to a team which has to process several tasks with multi-level skill requirements during a work shift. Here, several alternative activities for a task allow both the temporal shifting of activities and the reallocation of technicians and tools. As a result, switches from one valid work process variant to another can be considered and may be selected by the developed evolutionary algorithm based on the present skill level of technicians or the available tools. An additional complication of the observed scheduling problem is that the locations of the construction sites are only temporarily accessible during a day. Due to intensive rail traffic, the available time slots for maintenance and repair works are extremely short and are often distributed throughout the day. To identify efficient working periods, a first concept of a Bayesian network is introduced and integrated into the extended RCPSP with pre-emptive and non-pre-emptive tasks. The Bayesian network is used to calculate the probability that a maintenance task can be processed during a specific period of the shift. Focusing on the domain of maintenance of the railway infrastructure in metropolitan areas, which is among the most unproductive processes at the construction site, the paper illustrates how the extended RCPSP can be applied for maintenance planning support. A multi-criteria evolutionary algorithm with a problem representation is introduced which is capable of revising technician-task allocations, where the duration of a task may be stochastic. The approach uses a novel activity list representation to ensure easily describable and modifiable elements which can be converted into detailed shift schedules. The main objective is to develop a shift plan which maximizes the utilization of each technician by minimizing the waiting times caused by rail traffic. The results of the already implemented core algorithm illustrate a fast convergence towards an optimal team composition for a shift, an efficient sequence of tasks and a high probability of subsequent implementation given the stochastic durations of the tasks. In the paper, the algorithm for the extended RCPSP is analyzed in an experimental evaluation using real-world example problems of various sizes, resource complexities, tightness and so forth.

Keywords: maintenance management, scheduling, resource constrained project scheduling problem, genetic algorithms

Procedia PDF Downloads 209