Search results for: K-means clustering algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4006

2656 Unsupervised Learning of Spatiotemporally Coherent Metrics

Authors: Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, Yann LeCun

Abstract:

Current state-of-the-art classification and detection algorithms rely on supervised training. In this work we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning and metric learning, and show that the trained encoder can be used to define a more temporally and semantically coherent metric.
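
The slowness and sparsity regularization described above can be illustrated with a short sketch. The code below is a minimal NumPy illustration of such a composite loss for a pair of adjacent frames; the encoder/decoder callables, the feature shapes and the weights lambda_slow and lambda_sparse are hypothetical placeholders, not the authors' architecture.

```python
import numpy as np

def coherence_loss(x_t, x_t1, encode, decode,
                   lambda_slow=0.1, lambda_sparse=0.01):
    """Reconstruction loss plus slowness and sparsity penalties.

    x_t, x_t1 : adjacent video frames (NumPy arrays of equal shape)
    encode, decode : placeholder callables for the auto-encoder halves
    """
    z_t, z_t1 = encode(x_t), encode(x_t1)

    # Reconstruction term: the auto-encoder must preserve frame content.
    recon = np.mean((decode(z_t) - x_t) ** 2) + np.mean((decode(z_t1) - x_t1) ** 2)

    # Slowness term: features of adjacent frames should stay close.
    slowness = np.mean(np.abs(z_t - z_t1))

    # Sparsity term: only a few features should be active at a time.
    sparsity = np.mean(np.abs(z_t)) + np.mean(np.abs(z_t1))

    return recon + lambda_slow * slowness + lambda_sparse * sparsity
```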

Keywords: machine learning, pattern clustering, pooling, classification

Procedia PDF Downloads 456
2655 A Novel Search Pattern for Motion Estimation in High Efficiency Video Coding

Authors: Phong Nguyen, Phap Nguyen, Thang Nguyen

Abstract:

High Efficiency Video Coding (HEVC), or the H.265 standard, fulfills the demand for high-resolution video storage and transmission since it achieves a high compression ratio. However, it requires a huge amount of calculation. Since the Motion Estimation (ME) block accounts for about 80% of the computational load of HEVC, much research has aimed at reducing the computation cost. In this paper, we propose a new algorithm that lowers the number of Motion Estimation search points. The number of computing points in the search pattern drops from 77 for the diamond pattern and 81 for the square pattern to only 31, while the Peak Signal to Noise Ratio (PSNR) and bit rate remain almost equal to those of the conventional patterns. The motion estimation time of the new algorithm is reduced by 68.23% and 65.83% compared to the recommended diamond and square search patterns, respectively.
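
For readers unfamiliar with search patterns in motion estimation, the sketch below shows a generic small-diamond block-matching search in NumPy. It is illustrative only and does not reproduce the 31-point pattern proposed in the paper; the block size, the SAD cost and the greedy stopping rule are common defaults chosen here as assumptions.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def diamond_search(ref, cur, top, left, block=16, max_steps=16):
    """Generic small-diamond search for the motion vector of one block.

    ref, cur : reference and current frames (2-D uint8 arrays)
    top, left: top-left corner of the block in the current frame
    """
    h, w = ref.shape
    target = cur[top:top + block, left:left + block]
    dy, dx = 0, 0
    best = sad(ref[top:top + block, left:left + block], target)
    for _ in range(max_steps):
        improved = False
        # Examine the four diamond neighbours of the current best centre.
        for sy, sx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            y, x = top + dy + sy, left + dx + sx
            if 0 <= y <= h - block and 0 <= x <= w - block:
                cost = sad(ref[y:y + block, x:x + block], target)
                if cost < best:
                    best, cand = cost, (dy + sy, dx + sx)
                    improved = True
        if not improved:
            break
        dy, dx = cand
    return (dy, dx), best
```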

Keywords: motion estimation, wide diamond, search pattern, H.265, test zone search, HM software

Procedia PDF Downloads 614
2654 Scheduling Algorithm Based on Load-Aware Queue Partitioning in Heterogeneous Multi-Core Systems

Authors: Hong Kai, Zhong Jun Jie, Chen Lin Qi, Wang Chen Guang

Abstract:

Current scheduling algorithms suffer from inefficient global scheduling parallelism and from local scheduling parallelism that is prone to processor starvation. To address this issue, this paper proposes a load-aware queue partitioning scheduling strategy: queues are first allocated according to the number of processor cores, a load factor is calculated to specify the capacity of each load queue, and waiting nodes are assigned to the appropriate perceptual queues based on their predecessor nodes and the communication and computation overhead. At the same time, real-time computation of the load factor effectively prevents processors from being starved for long periods. Experimental comparison with two classical algorithms shows an improvement in both performance metrics, scheduling length and task speedup ratio.

Keywords: load-aware, scheduling algorithm, perceptual queue, heterogeneous multi-core

Procedia PDF Downloads 148
2653 An Integrated Approach for Optimal Selection of Machining Parameters in Laser Micro-Machining Process

Authors: A. Gopala Krishna, M. Lakshmi Chaitanya, V. Kalyana Manohar

Abstract:

In the present analysis, laser micro-machining (LMM) of silicon carbide particle (SiCp) reinforced Aluminum 7075 metal matrix composite (Al7075/SiCp MMC) was studied. During machining, the intense heat generated forms a layer on the workpiece surface called the recast layer, which is detrimental to the surface quality of the component. The recast layer needs to be as small as possible for precision applications. Therefore, the height of the recast layer and the depth of the groove, which are conflicting in nature, were considered the significant machining criteria in LMM of the Al7075/10%SiCp composite. The present work formulates the depth of groove and the height of the recast layer in terms of the machining parameters using Response Surface Methodology (RSM), and the formulated mathematical models were then used for optimization. Since the effects of the machining parameters on the depth of groove and the height of the recast layer were contradictory, the problem was formulated as a multi-objective optimization problem. An evolutionary non-dominated sorting genetic algorithm (NSGA-II) was employed to optimize the models established by RSM and to obtain the Pareto-optimal set of solutions, which provides a detailed basis for selecting the optimal solution. Finally, experiments were conducted to confirm the results obtained from RSM and NSGA-II.
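
As background to the NSGA-II step, the sketch below shows how a non-dominated (Pareto-optimal) subset can be extracted from candidate solutions evaluated on two conflicting objectives. It is a generic illustration, not the authors' implementation; the example objective values are hypothetical, and the convention of negating a maximized objective is an assumption for this sketch.

```python
def dominates(a, b):
    """True if solution a dominates b (all objectives <= and at least one <).

    Objectives are assumed to be minimized; negate any objective that
    should be maximized before calling.
    """
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of (objective-tuple) solutions."""
    front = []
    for i, a in enumerate(solutions):
        if not any(dominates(b, a) for j, b in enumerate(solutions) if j != i):
            front.append(a)
    return front

# Hypothetical (recast-layer height, negated depth of groove) pairs from an RSM model.
candidates = [(12.0, -85.0), (9.5, -70.0), (15.0, -90.0), (9.5, -60.0)]
print(pareto_front(candidates))   # the dominated (9.5, -60.0) point is filtered out
```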

Keywords: laser micro machining (LMM), depth of groove, height of recast layer, response surface methodology (RSM), non-dominated sorting genetic algorithm

Procedia PDF Downloads 347
2652 Soil Parameters Identification around PMT Test by Inverse Analysis

Authors: I. Toumi, Y. Abed, A. Bouafia

Abstract:

This paper presents a methodology for identifying cohesive soil parameters that takes different constitutive equations into account. The procedure, applied to identify the parameters of the generalized Prager model associated with the Drucker-Prager failure criterion from a pressuremeter expansion curve, is based on an inverse analysis approach, which consists of minimizing the function representing the difference between the experimental curve and the simulated curve using a simplex algorithm. The model response along the pressuremeter path and its identification from experimental data lead to the determination of the friction angle, the cohesion and the Young modulus. The effects of some parameters on the simulated curves and on the stress paths around the pressuremeter probe are presented. Comparisons between the parameters determined with the proposed method and those obtained by other means are also presented.
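
The inverse-analysis loop described above can be sketched with SciPy's implementation of the simplex (Nelder-Mead) method. The pressuremeter simulator `simulate_curve` and the parameter vector (friction angle, cohesion, Young modulus) are placeholders standing in for the finite-element model used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def misfit(params, pressures_exp, volumes_exp, simulate_curve):
    """Sum of squared differences between experimental and simulated curves."""
    volumes_sim = simulate_curve(params, pressures_exp)
    return float(np.sum((np.asarray(volumes_sim) - np.asarray(volumes_exp)) ** 2))

def identify_parameters(pressures_exp, volumes_exp, simulate_curve, x0):
    """Minimize the curve misfit with the simplex (Nelder-Mead) algorithm.

    x0 : initial guess, e.g. [friction_angle_deg, cohesion_kPa, young_modulus_MPa]
    """
    result = minimize(misfit, x0,
                      args=(pressures_exp, volumes_exp, simulate_curve),
                      method="Nelder-Mead",
                      options={"xatol": 1e-4, "fatol": 1e-6, "maxiter": 2000})
    return result.x, result.fun
```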

Keywords: cohesive soils, cavity expansion, pressuremeter test, finite element method, optimization procedure, simplex algorithm

Procedia PDF Downloads 294
2651 Identification Algorithm of Critical Interface, Modelling Perils on Critical Infrastructure Subjects

Authors: Jiří J. Urbánek, Hana Malachová, Josef Krahulec, Jitka Johanidisová

Abstract:

The paper deals with the investigation and modelling of crisis situations within critical infrastructure organizations. Every crisis situation originates in the occurrence of an emergency event, especially in organizations of the energy critical infrastructure. These emergency events can be either expected, in which case crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities to cope with them, or unexpected (the Black Swan effect), without a pre-prepared scenario, which requires operational coping with the crisis situation as well. The forms, characteristics, behaviour and utilization of crisis scenarios vary in quality, depending on the prevention and training processes of the real critical infrastructure organization. The aim is always to obtain better organizational security and continuity. The objective of this paper is to find and investigate critical/crisis zones and functions in crisis situation models of a critical infrastructure organization. The DYVELOP (Dynamic Vector Logistics of Processes) method is able to identify problematic critical zones and functions, displaying critical interfaces among the actors of crisis situations on DYVELOP maps called Blazons. First, to realize this ability, it is necessary to derive and create an identification algorithm for critical interfaces. The locations of critical interfaces are the flags of a crisis situation in a real critical infrastructure organization. Finally, the model of the critical interface is demonstrated for a real Czech energy critical infrastructure organization in a blackout peril environment. The Blazons require a live PowerPoint presentation for better comprehension of this paper's mission.

Keywords: algorithm, crisis, DYVELOP, infrastructure

Procedia PDF Downloads 410
2650 Knowledge Discovery and Data Mining Techniques in Textile Industry

Authors: Filiz Ersoz, Taner Ersoz, Erkin Guler

Abstract:

This paper addresses issues and techniques for the textile industry using data mining. Data mining was applied to stitching data for garment products obtained from a textile company. The techniques applied to the data were the CHAID algorithm, the CART algorithm, regression analysis and artificial neural networks. Classification-based analyses were used in the data mining, and a decision model for production per person and the variables affecting production was obtained by this method. The results show that as daily working time increases, production per person decreases. In addition, total daily working time and production per person show a negative relationship, the strongest relationship observed.

Keywords: data mining, textile production, decision trees, classification

Procedia PDF Downloads 354
2649 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition

Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can

Abstract:

To effectively combat climate change, many countries around the world have committed to a decarbonisation of their electricity sector, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to effectively combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for application to realistic-sized power system models. To meet these challenges, there is an increasing need for efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of mixed-integer nature, and therefore solving them requires significant amounts of time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% for estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved with significantly less time than its integer programming counterpart, the integration of the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models.
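
The interplay between the relaxed subproblems and the binary classifier can be sketched as follows. The feature construction and the use of logistic regression are illustrative assumptions; the paper does not specify the classifier, and the optimality guarantee relies on verifying or repairing the predicted values inside the Column Generation loop, which this sketch omits.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row collects features of one candidate
# investment (e.g. its LP-relaxed value, reduced cost, line capacity), and
# the label is the value the binary variable took in the exact MIP solution.
X_train = np.array([[0.92, -12.0, 300.0],
                    [0.05,   4.5, 150.0],
                    [0.71,  -3.2, 220.0],
                    [0.10,   1.1,  90.0]])
y_train = np.array([1, 0, 1, 0])

classifier = LogisticRegression().fit(X_train, y_train)

def predict_binaries(lp_relaxed_features):
    """Guess binary investment decisions from LP-relaxation features.

    In the full algorithm these guesses would be fed back into the Column
    Generation subproblems and verified so that optimality is preserved.
    """
    return classifier.predict(np.asarray(lp_relaxed_features))

print(predict_binaries([[0.88, -7.0, 250.0]]))
```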

Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning

Procedia PDF Downloads 87
2648 Comparing Community Detection Algorithms in Bipartite Networks

Authors: Ehsan Khademi, Mahdi Jalili

Abstract:

Despite their special features, bipartite networks are common in many systems. Real-world bipartite networks may show community structure, similar to what one can find in one-mode networks. However, the interpretation of the community structure in bipartite networks is different from that in one-mode networks. In this manuscript, we compare a number of available methods that are frequently used to discover the community structure of bipartite networks. These methods fall into two broad classes. One class comprises methods that first transform the network into a one-mode network and then apply community detection algorithms. The other class comprises algorithms that have been developed specifically for bipartite networks. These algorithms are applied to a model network with prescribed community structure.
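
The first class of methods (project, then detect) can be illustrated with NetworkX; the toy bipartite graph below is hypothetical, and greedy modularity maximization stands in for whichever one-mode algorithm is applied after projection.

```python
import networkx as nx
from networkx.algorithms import bipartite, community

# Hypothetical bipartite graph: 'u' nodes on one side, 'i' nodes on the other.
B = nx.Graph()
B.add_edges_from([("u1", "i1"), ("u1", "i2"), ("u2", "i1"), ("u2", "i2"),
                  ("u3", "i3"), ("u3", "i4"), ("u4", "i3"), ("u4", "i4")])

# Class 1: project onto one node set, then run a one-mode community detector.
users = [n for n in B if n.startswith("u")]
P = bipartite.weighted_projected_graph(B, users)
communities = community.greedy_modularity_communities(P, weight="weight")
print([sorted(c) for c in communities])
```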

Keywords: community detection, bipartite networks, co-clustering, modularity, network projection, complex networks

Procedia PDF Downloads 628
2647 Integrated Intensity and Spatial Enhancement Technique for Color Images

Authors: Evan W. Krieger, Vijayan K. Asari, Saibabu Arigela

Abstract:

Video imagery captured for real-time security and surveillance applications is typically captured in complex lighting conditions. These less-than-ideal conditions can result in imagery with underexposed or overexposed regions, and the video is often too low in resolution for certain applications. The purpose of security and surveillance video is to allow accurate conclusions to be drawn from the images seen in the video. Therefore, if poor lighting and low resolution occur in the captured video, the ability to make accurate conclusions based on the received information will be reduced. We propose a solution to this problem by using image preprocessing to improve these images before use in a particular application. The proposed algorithm integrates an intensity enhancement algorithm with a super resolution technique. The intensity enhancement portion consists of a nonlinear inverse sign transformation and an adaptive contrast enhancement. The super resolution portion is a single-image super resolution technique, a Fourier phase feature based method that uses a machine learning approach with kernel regression. The proposed technique intelligently integrates these algorithms to produce a high quality output while being more efficient than the sequential use of these algorithms. This integration is accomplished by performing the proposed algorithm on the intensity image produced from the original color image. After enhancement and super resolution, a color restoration technique is employed to obtain an improved-visibility color image.
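
The overall flow, enhance the intensity channel and then restore color, can be sketched with NumPy. The simple gamma-style intensity transform and the ratio-based color restoration below are generic placeholders, not the paper's nonlinear enhancement, super-resolution and restoration stages.

```python
import numpy as np

def enhance_intensity(v, gamma=0.6):
    """Placeholder nonlinear enhancement of the intensity channel in [0, 1]."""
    return np.clip(v, 0.0, 1.0) ** gamma

def process_color_image(rgb):
    """Enhance intensity, then restore color from the original channel ratios.

    rgb : float array in [0, 1] with shape (H, W, 3)
    """
    intensity = rgb.mean(axis=2)                    # simple intensity image
    enhanced = enhance_intensity(intensity)         # (super-resolution would go here)
    ratio = enhanced / np.maximum(intensity, 1e-6)  # per-pixel gain
    restored = np.clip(rgb * ratio[..., None], 0.0, 1.0)
    return restored
```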

Keywords: dynamic range compression, multi-level Fourier features, nonlinear enhancement, super resolution

Procedia PDF Downloads 555
2646 Optical Flow Localisation and Appearance Mapping (OFLAAM) for Long-Term Navigation

Authors: Daniel Pastor, Hyo-Sang Shin

Abstract:

This paper presents a novel method that uses optical flow for long-term navigation. Unlike standard SLAM approaches for augmented reality, OFLAAM is designed for Micro Air Vehicles (MAVs). It uses an optical flow camera pointing downwards, an IMU and a monocular camera pointing forwards. This configuration avoids the expensive mapping and tracking of 3D features. It only maps these features in a vocabulary list through a localization module that tackles the loss of the navigation estimate. That module, based on the well-established DBoW2 algorithm, is also used to close the loop and allow long-term navigation in confined areas. The combination of high-speed optical flow navigation with a low-rate localization algorithm allows fully autonomous navigation for MAVs while reducing the overall computational load. The framework is implemented in ROS (Robot Operating System) and tested attached to a laptop. A representative scenario is used to analyse the performance of the system.

Keywords: vision, UAV, navigation, SLAM

Procedia PDF Downloads 608
2645 Earphone Style Wearable Device for Automatic Guidance Service with Position Sensing

Authors: Dawei Cai

Abstract:

This paper describes the design of an earphone-style wearable device that may provide an automatic guidance service for visitors. With both position information and orientation information obtained from NFC and a terrestrial magnetism sensor, a high-level automatic guide service may be realized. To realize the service, we developed an algorithm for position detection using the packets from NFC tags, and an algorithm to calculate the device orientation based on the data from the acceleration and terrestrial magnetism sensors (MEMS). If a visitor wants an explanation of the exhibit in front of him, all he has to do is move to the object and stand there for a moment. The identification program automatically recognizes the status based on the information from NFC and MEMS, and starts playing the explanation content about the exhibit. This service should be useful for improving the understanding of the exhibition items and bring a more satisfactory visiting experience with less burden.
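
A minimal sketch of the two estimates the device needs, position from the NFC tag identifier and heading from the terrestrial-magnetism (magnetometer) readings, is given below; the tag table and the flat, tilt-free heading formula are simplifying assumptions rather than the paper's algorithm.

```python
import math

# Hypothetical mapping from NFC tag IDs to exhibit positions and content.
NFC_TAGS = {
    "04:A2:19:7F": {"exhibit": "Bronze Mirror", "audio": "mirror_guide.mp3"},
    "04:B7:33:1C": {"exhibit": "Edo Scroll",    "audio": "scroll_guide.mp3"},
}

def heading_degrees(mag_x, mag_y):
    """Compass heading from horizontal magnetometer components (no tilt compensation)."""
    return (math.degrees(math.atan2(mag_y, mag_x)) + 360.0) % 360.0

def select_guidance(tag_id, mag_x, mag_y):
    """Pick the explanation to play from the detected tag and device heading."""
    tag = NFC_TAGS.get(tag_id)
    if tag is None:
        return None
    return {"exhibit": tag["exhibit"], "audio": tag["audio"],
            "heading_deg": heading_degrees(mag_x, mag_y)}

print(select_guidance("04:A2:19:7F", mag_x=12.0, mag_y=-5.0))
```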

Keywords: wearable device, MEMS sensor, ubiquitous computing, NFC

Procedia PDF Downloads 241
2644 A Practical Protection Method for Parallel Transmission-Lines Based on the Fault Travelling-Waves

Authors: Mohammad Reza Ebrahimi

Abstract:

In new, restructured power systems, swift fault detection is very important. Parallel transmission lines are widely used in this kind of power system because of the large amount of energy they transfer. In this paper, a method based on the comparison of two quantities, i.e., i) the maximum magnitude of the travelling-wave (TW) energy and ii) the instants at which the maximum energy occurs on the circuits of the parallel transmission line, is proposed. By using the fault travelling-wave for faulted-line identification, this method achieves a notable operation time. Moreover, the algorithm can classify faults as external or internal. For an internal fault, the exact location of the fault can be estimated confidently. Extensive simulations have been carried out with PSCAD/EMTDC to verify the performance of the proposed algorithm.

Keywords: travelling-wave, maximum energy, parallel transmission-line, fault location

Procedia PDF Downloads 187
2643 Avoiding Packet Drop for Improved Throughput in Multi-Hop Wireless Networks

Authors: Manish Kumar Rajak, Sanjay Gupta

Abstract:

Mobile ad hoc networks (MANETs) are infrastructure-less and intercommunicate using single-hop and multi-hop paths. Network-based congestion avoidance, which involves managing the queues in the network devices, is an integral part of any network. Quality of Service (QoS) is a set of service requirements that are met by the network while transferring a packet stream from a source to a destination. In MANETs especially, packet loss results in increased overheads. This paper presents a new algorithm to avoid congestion using one or more queues on nodes, with a corresponding flow rate decided in advance for each node. When any node's queue reaches an initial threshold value, it sends this status to its downstream nodes, which in turn use the pre-decided flow rate for packet transfer to their upstream nodes. The flow rate on each node is adjusted according to the status received from its upstream nodes. The proposed algorithm uses the existing infrastructure to inform other nodes about its current queue status.
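
The threshold-and-notify behaviour described above can be sketched as a toy node model; the queue limit, threshold fraction and rate values are hypothetical, and real MANET signalling details are omitted.

```python
class Node:
    """Toy MANET node with a queue threshold and an advertised flow rate."""

    def __init__(self, name, queue_limit=50, threshold=0.6,
                 normal_rate=10, reduced_rate=4):
        self.name = name
        self.queue = []
        self.queue_limit = queue_limit
        self.threshold = threshold
        self.normal_rate = normal_rate
        self.reduced_rate = reduced_rate

    def advertised_rate(self):
        """Rate this node asks its upstream neighbours to use."""
        if len(self.queue) >= self.threshold * self.queue_limit:
            return self.reduced_rate  # queue filling up: slow the senders down
        return self.normal_rate

    def receive(self, packets):
        """Accept packets up to the queue limit; the rest would be dropped."""
        space = self.queue_limit - len(self.queue)
        self.queue.extend(packets[:space])
        return max(0, len(packets) - space)   # number of packets dropped

# Usage: an upstream node queries the advertised rate before sending.
relay = Node("relay")
relay.receive(["pkt"] * 35)
print(relay.advertised_rate())   # already above threshold -> reduced rate
```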

Keywords: mesh networks, MANET, packet count, threshold, throughput

Procedia PDF Downloads 476
2642 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outspeed the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period without taking into account the load of the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data because they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model used for database partitioning, for keeping the data amount of each partition within a more balanced limit and for providing parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This yields a QoS (Quality of Service) improvement in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 158
2641 An Algorithm of Set-Based Particle Swarm Optimization with Status Memory for Traveling Salesman Problem

Authors: Takahiro Hino, Michiharu Maeda

Abstract:

Particle swarm optimization (PSO) is an optimization approach modelled on the social behaviour of bird flocking and fish schooling. PSO works in continuous space and can solve continuous optimization problems with high quality. Set-based particle swarm optimization (SPSO) works in discrete space by using sets. SPSO can solve combinatorial optimization problems with high quality and has been successfully applied to large-scale problems. In this paper, we present an algorithm of SPSO with status memory (SPSOSM), which decides the position based on the previous position, for solving the traveling salesman problem (TSP). To show the effectiveness of our approach, we compare SPSOSM with existing algorithms on TSP instances.

Keywords: combinatorial optimization problems, particle swarm optimization, set-based particle swarm optimization, traveling salesman problem

Procedia PDF Downloads 554
2640 Research and Development of Intelligent Cooling Channels Design System

Authors: Q. Niu, X. H. Zhou, W. Liu

Abstract:

The cooling channels of an injection mould play a crucial role in determining the productivity of the moulding process and the product quality, and it is not a simple task to design high-quality cooling channels. In this paper, an intelligent cooling channel design system covering automatic layout of cooling channels, interference checking and assembly of accessories is studied. Automatic layout of cooling channels using a genetic algorithm is analyzed. By integrating empirical criteria for designing cooling channels and considering factors such as mould temperature and interference checking, the automatic layout of cooling channels is implemented. A method of interference checking based on a distance-constraint algorithm and a function for automatic and continuous assembly of accessories are developed and integrated into the system. Case studies demonstrate the feasibility and practicality of the intelligent design system.

Keywords: injection mould, cooling channel, intelligent design, automatic layout, interference checking

Procedia PDF Downloads 441
2639 Integrating Geographic Information into Diabetes Disease Management

Authors: Tsu-Yun Chiu, Tsung-Hsueh Lu, Tain-Junn Cheng

Abstract:

Background: Traditional chronic disease management has paid little attention to the effects of geographic factors on compliance with treatment regimes, which has resulted in geographic inequality in the outcomes of chronic disease management. This study aims to examine the geographic distribution and clustering of quality indicators of diabetes care. Method: We first extracted the address, demographic information and quality-of-care indicators (number of visits, complications, prescription and laboratory records) of patients with diabetes for 2014 from the medical information system of a medical center in Tainan City, Taiwan, and the patients’ addresses were transformed into district- and village-level data. We then compared the differences in geographic distribution and clustering of quality-of-care indicators between districts and villages. In addition to the descriptive results, rate ratios and 95% confidence intervals (CI) were estimated for the indices of care in order to compare the quality of diabetes care among different areas. Results: A total of 23,588 patients with diabetes were extracted from the hospital data system, of whom 12,716 patients’ information and medical records were included in the analysis. More than half of the subjects in this study were male and between 60-79 years old. Furthermore, the quality of diabetes care did indeed vary by geographic level; at the smaller level, we could point out clustered areas more specifically. Fuguo Village (of Yongkang District) and Zhiyi Village (of Sinhua District) were found to be “hotspots” for nephropathy and cerebrovascular disease, while Wangliau Village and Erwang Village (of Yongkang District) were “coldspots” with the lowest proportion of ≥80% compliance with blood lipid examination. On the other hand, Yuping Village (in Anping District) was the area with the lowest proportion of ≥80% compliance with all laboratory examinations. Conclusion: In addition to examining the geographic distribution, calculating rate ratios and their 95% CIs is a useful and consistent method to test the association. This information is useful for health planners, diabetes case managers and other affiliated practitioners to direct care resources to the areas most in need.
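
A minimal sketch of the rate ratio and its 95% confidence interval (log-transformation method for two independent counts) used for comparing areas is shown below; the example counts are hypothetical.

```python
import math

def rate_ratio_ci(cases_a, population_a, cases_b, population_b, z=1.96):
    """Rate ratio of area A vs. area B with a 95% CI on the log scale."""
    rate_a = cases_a / population_a
    rate_b = cases_b / population_b
    rr = rate_a / rate_b
    # Standard error of log(RR) for two independent Poisson counts.
    se = math.sqrt(1.0 / cases_a + 1.0 / cases_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, (lower, upper)

# Hypothetical example: nephropathy cases among diabetic patients in two villages.
print(rate_ratio_ci(cases_a=30, population_a=400, cases_b=15, population_b=500))
```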

Keywords: catchment area of healthcare, chronic disease management, geographic information system, quality of diabetes care

Procedia PDF Downloads 285
2638 Design and Implementation of Low-Code Model-Building Methods

Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu

Abstract:

This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency but also enhances the flexibility of algorithm integration and simplifies the deployment process of models. The core strength of this method lies in its ease of use and efficiency. Users do not need a deep programming background and can complete the design and implementation of complex models with a simple drag-and-drop operation. This feature greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the models. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method makes more efficient use of computing resources and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability, so it can adapt to the needs of different application scenarios.

Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment

Procedia PDF Downloads 31
2637 Optimal Power Exchange of Multi-Microgrids with Hierarchical Coordination

Authors: Beom-Ryeol Choi, Won-Poong Lee, Jin-Young Choi, Young-Hak Shin, Dong-Jun Won

Abstract:

A Microgrid (MG) has a major role in the power system. There are numerous benefits, such as the ability to reduce environmental impact and enhance the reliability of a power system. Hence, the Multi-MG (MMG), consisting of multiple MGs, is being studied intensively. This paper proposes optimal power exchange for an MMG with hierarchical coordination. The whole system architecture consists of two layers: 1) an upper layer including the MG of MG Center (MoMC), which is in charge of overall management and coordination, and 2) a lower layer comprising several Microgrid Energy Management Systems (MG-EMSs), which make decisions about their own schedules. In order to accomplish the optimal power exchange, the proposed coordination algorithm is applied to the MMG system. The objective of this process is to achieve optimal operation, improving economics under grid-connected operation. The simulation results show how the output of each MG can be changed through the coordination algorithm.

Keywords: microgrids, multi-microgrids, power exchange, hierarchical coordination

Procedia PDF Downloads 373
2636 Speedup Breadth-First Search by Graph Ordering

Authors: Qiuyi Lyu, Bin Gong

Abstract:

Breadth-First Search (BFS) is a core graph algorithm that is widely used for graph analysis. As it is frequently used in many graph applications, improving BFS performance is essential. In this paper, we present a graph ordering method that reorders the graph nodes to achieve better data locality, thus improving BFS performance. Our method is based on the observation that sibling relationships dominate the cache access pattern during BFS traversal. Therefore, we propose a frequency-based model to construct the graph order. First, we optimize the graph order according to the nodes' visit frequency: nodes with high visit frequency are processed with priority. Second, we try to maximize the overlap of child nodes layer by layer. As this problem is proved to be NP-hard, we propose a heuristic method that greatly reduces the preprocessing overhead. We conduct extensive experiments on 16 real-world datasets. The results show that our method achieves comparable performance to the state-of-the-art methods while the graph ordering overhead is only about 1/15.
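
To make the idea of frequency-based ordering concrete, the sketch below relabels nodes so that frequently visited nodes get contiguous low IDs before running BFS on the reordered adjacency lists. Degree is used here as a stand-in for the visit-frequency model in the paper, and the child-overlap maximization step is omitted.

```python
from collections import deque

def reorder_by_degree(adj):
    """Map original node IDs to new IDs, highest degree (visit-frequency proxy) first."""
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    remap = {old: new for new, old in enumerate(order)}
    new_adj = {remap[u]: sorted(remap[v] for v in nbrs) for u, nbrs in adj.items()}
    return new_adj, remap

def bfs(adj, source):
    """Plain BFS returning the visit order."""
    seen, order, queue = {source}, [], deque([source])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

# Hypothetical graph as adjacency lists.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3, 4], 3: [1, 2], 4: [2]}
reordered, remap = reorder_by_degree(graph)
print(bfs(reordered, remap[0]))
```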

Keywords: breadth-first search, BFS, graph ordering, graph algorithm

Procedia PDF Downloads 139
2635 Fast Tumor Extraction Method Based on NL-Means Filter and Expectation Maximization

Authors: Sandabad Sara, Sayd Tahri Yassine, Hammouch Ahmed

Abstract:

The development of science has allowed computer scientists to reach into medicine and bring aid to radiologists, as we present in this article. Our work focuses on the detection and localization of tumor areas in the human brain, performed completely automatically without any human intervention. Faced with the huge volume of MRI scans to be processed per day, the radiologist can spend hours and hours making a tremendous effort; this burden becomes lighter with the automation of this step. In this article we present an automatic and effective tumor detection method consisting of two steps: the first is filtering the image using the NL-means filter, and the second is applying the expectation-maximization (EM) algorithm to retrieve the tumor mask from the brain MRI; the tumor area is then extracted using the mask obtained in the second step. To prove the effectiveness of this method, multiple evaluation criteria are used, so that we can compare our method with the extraction methods frequently used in the literature.
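
A compact sketch of the two-stage pipeline (NL-means denoising, then EM via a Gaussian mixture to obtain a mask) is shown below using scikit-image and scikit-learn; the number of mixture components and the rule of taking the brightest component as the tumor mask are simplifying assumptions, not the paper's exact procedure.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma
from sklearn.mixture import GaussianMixture

def extract_tumor_mask(slice_2d, n_classes=3):
    """Denoise an MRI slice with NL-means, then segment intensities with EM.

    slice_2d : 2-D float array in [0, 1]
    Returns a boolean mask of the brightest intensity class (assumed tumor).
    """
    sigma = float(np.mean(estimate_sigma(slice_2d)))
    denoised = denoise_nl_means(slice_2d, h=1.15 * sigma, fast_mode=True,
                                patch_size=5, patch_distance=6)

    # EM clustering of pixel intensities with a Gaussian mixture model.
    gmm = GaussianMixture(n_components=n_classes, random_state=0)
    labels = gmm.fit_predict(denoised.reshape(-1, 1)).reshape(denoised.shape)

    brightest = int(np.argmax(gmm.means_.ravel()))
    return labels == brightest
```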

Keywords: MRI, EM algorithm, brain, tumor, NL-means

Procedia PDF Downloads 338
2634 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

In order to address memorization overfitting in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution in the initial dataset. Because generating mutex tasks for all the data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that only extracts part of the data to generate the mutex tasks. The experiments show that the method of generating mutually exclusive tasks can effectively solve memorization overfitting in the meta-learning MAML algorithm.

Keywords: data augmentation, mutex task generation, meta-learning, text classification

Procedia PDF Downloads 95
2633 Relay Node Selection Algorithm for Cooperative Communications in Wireless Networks

Authors: Sunmyeng Kim

Abstract:

IEEE 802.11a/b/g standards support multiple transmission rates. Even though the use of multiple transmission rates increases the WLAN capacity, this feature leads to the performance anomaly problem. Cooperative communication was introduced to relieve the performance anomaly problem: data packets are delivered to the destination much faster through a relay node at a high rate than through direct transmission to the destination at a low rate. In the legacy cooperative protocols, a source node chooses a relay node based only on the transmission rate. Therefore, they are not very feasible in multi-flow environments since they do not consider the effect of other flows. To alleviate this effect, we propose a new relay node selection algorithm based on both the transmission rate and the channel contention level. Performance evaluation is conducted using simulation and shows that the proposed protocol significantly outperforms the previous protocol in terms of throughput and delay.

Keywords: cooperative communications, MAC protocol, relay node, WLAN

Procedia PDF Downloads 334
2632 Study of the Use of Artificial Neural Networks in Islamic Finance

Authors: Kaoutar Abbahaddou, Mohammed Salah Chiadmi

Abstract:

The need to find a relevant way to predict the next-day price of a stock index is a real concern for many financial stakeholders and researchers. Over the years, we have seen the proliferation of several methods. Nevertheless, among all these methods, the most controversial one is a machine learning algorithm that claims to be reliable, namely neural networks. Thus, the purpose of this article is to study the prediction power of neural networks in the particular case of Islamic finance, as it is an overlooked area. In this article, we first briefly present a review of the literature regarding neural networks and Islamic finance. Next, we present the architecture and principles of the artificial neural networks most commonly used in finance. Then, we show their empirical application on two Islamic stock indexes. The accuracy rate is used to measure the performance of the algorithm in predicting the right price the next day. As a result, we can conclude that artificial neural networks are a reliable method to predict the next-day price for Islamic indices, just as is claimed for conventional ones.

Keywords: Islamic finance, stock price prediction, artificial neural networks, machine learning

Procedia PDF Downloads 240
2631 Sentiment Analysis on the East Timor Accession Process to the ASEAN

Authors: Marcelino Caetano Noronha, Vosco Pereira, Jose Soares Pinto, Ferdinando Da C. Saores

Abstract:

One particularly popular social media platform is YouTube. It is a video-sharing platform where users can submit videos, and other users can like, dislike or comment on them. In this study, we conduct a binary classification task on YouTube video comments and reviews from users regarding the accession process of Timor Leste to become the eleventh member of the Association of Southeast Asian Nations (ASEAN). We scrape the data directly from the public YouTube video and apply several pre-processing and weighting techniques. Before conducting the classification, we categorize the data into two classes, namely positive and negative. In the classification part, we apply the Support Vector Machine (SVM) algorithm. Compared with the Naïve Bayes algorithm, the experiment showed that SVM achieved 84.1% accuracy, 94.5% precision, and 73.8% recall.
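
A minimal sketch of such an SVM-based comment classifier with scikit-learn is shown below; the toy comments and the TF-IDF plus LinearSVC pipeline are illustrative assumptions, not the exact preprocessing and weighting used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labelled comments: 1 = positive, 0 = negative.
comments = [
    "Welcome Timor Leste, proud to see ASEAN grow",
    "Great news for the region, full support",
    "This accession will only bring problems",
    "Bad decision, the economy is not ready",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),
                      LinearSVC())
model.fit(comments, labels)

print(model.predict(["congratulations, happy to support the accession"]))
```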

Keywords: classification, YouTube, sentiment analysis, support vector machine

Procedia PDF Downloads 111
2630 Utilizing Grid Computing to Enhance Power Systems Performance

Authors: Rafid A. Al-Khannak, Fawzi M. Al-Naima

Abstract:

Power load is one of the most important controlling factors, as it determines power demand and illustrates power usage, shaping the power market. Hence, power load forecasting is the parameter that facilitates understanding and analyzing all these aspects. In this paper, power load forecasting is solved in the MATLAB environment by constructing a neural network for the power load to find an accurate simulated solution with minimum error. The aim of this paper is a developed algorithm that achieves the load forecasting application with a faster technique. The algorithm is used to enable the MATLAB power application to be implemented by multiple machines in a grid computing system, and to accomplish it in much less time, at lower cost, and with high accuracy and quality. Grid computing, the modern distributed computing technology, has been used to enhance the performance of power applications by utilizing idle and desired grid contributor(s) sharing computational power resources.

Keywords: DeskGrid, Grid Server, idle contributor(s), grid computing, load forecasting

Procedia PDF Downloads 477
2629 A Review of Intelligent Fire Management Systems to Reduce Wildfires

Authors: Nomfundo Ngombane, Topside E. Mathonsi

Abstract:

Remote sensing and satellite imaging have been widely used to detect wildfires; nevertheless, these technologies present some limitations in terms of early wildfire detection, as they are greatly influenced by weather conditions and can miss small fires. Fires need to have spread a few kilometers for the technologies to provide accurate detection. The South African Advanced Fire Information System uses MODIS (Moderate Resolution Imaging Spectroradiometer) satellite imaging. MODIS has limitations, as it can miss small fires and can fall short in validating fire vulnerability. Thus, in future work, a machine learning algorithm will be designed and implemented for the early detection of wildfires. A simulator will be used to evaluate the effectiveness of the proposed solution, and the results of the simulation will be presented.

Keywords: moderate resolution imaging spectroradiometer, advanced fire information system, machine learning algorithm, detection of wildfires

Procedia PDF Downloads 81
2628 X-Corner Detection for Camera Calibration Using Saddle Points

Authors: Abdulrahman S. Alturki, John S. Loomis

Abstract:

This paper discusses a corner detection algorithm for camera calibration. Calibration is a necessary step in many computer vision and image processing applications. Robust corner detection for an image of a checkerboard is required to determine intrinsic and extrinsic parameters. In this paper, an algorithm for fully automatic and robust X-corner detection is presented. Checkerboard corner points are automatically found in each image without user interaction or any prior information regarding the number of rows or columns. The approach represents each X-corner with a quadratic fitting function. Using the fact that the X-corners are saddle points, the coefficients in the fitting function are used to identify each corner location. The automation of this process greatly simplifies calibration. Our method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of our method using actual images acquired at different camera locations and orientations.
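
The saddle-point test behind the X-corner detector can be sketched as follows: fit a quadratic surface to a small window of image intensities and check the sign of the Hessian determinant. The window size and the plain least-squares fit are generic choices for illustration, not the authors' exact implementation.

```python
import numpy as np

def quadratic_coeffs(window):
    """Least-squares fit of z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f."""
    n = window.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    x, y, z = xs.ravel(), ys.ravel(), window.ravel().astype(float)
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs                      # a, b, c, d, e, f

def is_saddle(window):
    """An X-corner behaves like a saddle: Hessian determinant 4ac - b^2 < 0."""
    a, b, c, *_ = quadratic_coeffs(window)
    return 4.0 * a * c - b * b < 0.0

# Synthetic X-corner patch: two bright and two dark quadrants.
patch = np.zeros((7, 7))
patch[:3, :3] = patch[4:, 4:] = 1.0
print(is_saddle(patch))   # True: the checkerboard corner is a saddle point
```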

Keywords: camera calibration, corner detector, edge detector, saddle points

Procedia PDF Downloads 411
2627 Integrated Model for Enhancing Data Security Processing Time in Cloud Computing

Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali

Abstract:

Cloud computing has been an important and promising field in the recent decade. Cloud computing allows the sharing of resources, services and information among people across the whole world. Although the advantages of using clouds are great, there are many risks in a cloud. Data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed for ensuring a secure communication system, hiding information from other users and saving the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a simple username and password are used, and the password is protected with SHA-2 as a one-way hash. The proposed system shows an improvement in the processing time of uploading and downloading files to and from the cloud in secure form.
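
A minimal sketch of the SHA-2 pieces of such a model (password hashing for authentication and a file digest for integrity checking) is given below with Python's hashlib; the Blowfish encryption of the payload itself would be handled by a separate cryptography library and is omitted here.

```python
import hashlib

def hash_password(password: str) -> str:
    """One-way SHA-256 hash of the password (as described, no salt is added)."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def verify_password(password: str, stored_hash: str) -> bool:
    """Authenticate by comparing hashes rather than plaintext passwords."""
    return hash_password(password) == stored_hash

def file_digest(data: bytes) -> str:
    """SHA-256 digest used to check integrity of an uploaded/downloaded file."""
    return hashlib.sha256(data).hexdigest()

# Usage: register, then authenticate and verify a file after download.
stored = hash_password("s3cret-pass")
print(verify_password("s3cret-pass", stored))          # True

digest_before_upload = file_digest(b"report contents")
digest_after_download = file_digest(b"report contents")
print(digest_before_upload == digest_after_download)   # integrity holds
```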

Keywords: cloud computing, data security, SaaS, PaaS, IaaS, Blowfish

Procedia PDF Downloads 360