Search results for: small baseline subset algorithm

9074 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach

Authors: Mohammad H. Almomani

Abstract:

In this paper, we examine the effect of the initial sample size and of the increment in simulation samples on the performance of a sequential approach used to select the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that overlaps with the set of the actual best k% designs with high probability. In the second stage, optimal computing budget allocation is used to select the top m designs from the selected subset. We apply the selection approach to a generic example under several parameter settings, with different choices of the initial sample size and the increment in simulation samples, to explore their impact on the performance of the approach. The results show that the choice of initial sample size and increment in simulation samples does affect the performance of the selection approach.
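
As a concrete illustration, the sketch below walks through both stages in Python. The toy performance model, the subset size, and the greedy budget rule standing in for optimal computing budget allocation (OCBA) are all illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(design, n):
    """Noisy performance samples of a design (toy model, smaller is better)."""
    return design["mean"] + rng.normal(0.0, design["std"], size=n)

designs = [{"mean": rng.uniform(0, 10), "std": 2.0} for _ in range(1000)]

n0 = 10           # initial sample size: the first factor studied above
delta = 20        # increment in simulation samples: the second factor
subset_size = 50  # size of the ordinal-optimization subset
m = 5             # number of top designs to select

# Stage 1: ordinal optimization using crude estimates from n0 replications.
means = np.array([simulate(d, n0).mean() for d in designs])
subset = np.argsort(means)[:subset_size]

# Stage 2: repeatedly grant `delta` extra replications to the design whose
# estimate is most uncertain (a greedy stand-in for OCBA).
samples = {i: list(simulate(designs[i], n0)) for i in subset}
for _ in range(30):
    stderr = {i: np.std(s) / np.sqrt(len(s)) for i, s in samples.items()}
    target = max(stderr, key=stderr.get)
    samples[target].extend(simulate(designs[target], delta))

top_m = sorted(samples, key=lambda i: np.mean(samples[i]))[:m]
print("selected designs:", top_m)
```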

Keywords: large-scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization

Procedia PDF Downloads 354
9073 Selecting the Best RBF Neural Network Using PSO Algorithm for ECG Signal Prediction

Authors: Najmeh Mohsenifar, Narjes Mohsenifar, Abbas Kargar

Abstract:

This paper presents a stable method for predicting ECG signals with RBF neural networks selected by the PSO algorithm. Although the ECG signal of a healthy person is quasi-periodic, the electrocardiographic data of a patient contain distortions; therefore, there is no precise mathematical model for prediction. Here, we exploit neural networks, which are capable of complicated nonlinear mapping. Although the architecture and spread of RBF networks are usually selected through trial and error, the PSO algorithm is used here to choose the best neural network. In this way, 2 seconds of a recorded ECG signal are used to predict the following 20 seconds. Our simulations show that the PSO algorithm can find the RBF neural network with minimum MSE, and the accuracy of the predicted ECG signal is 97%.
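
A minimal sketch of this selection loop, assuming a synthetic signal in place of real ECG data and standard global-best PSO constants (neither is from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 800)
signal = np.sin(t) + 0.1 * rng.normal(size=t.size)   # stand-in for ECG

def make_xy(sig, lag=10):
    # lagged samples as inputs, next sample as the prediction target
    X = np.array([sig[i:i + lag] for i in range(len(sig) - lag)])
    return X, sig[lag:]

X, y = make_xy(signal)
Xtr, ytr, Xva, yva = X[:600], y[:600], X[600:], y[600:]

def rbf_mse(params):
    """Validation MSE of an RBF net with given (log spread, n_centers)."""
    sigma, n_centers = np.exp(params[0]), int(np.clip(params[1], 5, 60))
    centers = Xtr[rng.choice(len(Xtr), n_centers, replace=False)]
    def design(A):
        d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(design(Xtr), ytr, rcond=None)
    return np.mean((design(Xva) @ w - yva) ** 2)

# Global-best PSO over (log sigma, number of centers).
pos = rng.uniform([-1, 5], [2, 60], size=(15, 2))
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([rbf_mse(p) for p in pos])
for _ in range(20):
    g = pbest[pcost.argmin()]
    vel = 0.7 * vel + 1.5 * rng.random((15, 2)) * (pbest - pos) \
                    + 1.5 * rng.random((15, 2)) * (g - pos)
    pos += vel
    cost = np.array([rbf_mse(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
print("best validation MSE:", pcost.min())
```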

Keywords: electrocardiogram, RBF artificial neural network, PSO algorithm, predict, accuracy

Procedia PDF Downloads 624
9072 FlexPoints: Efficient Algorithm for Detection of Electrocardiogram Characteristic Points

Authors: Daniel Bulanda, Janusz A. Starzyk, Adrian Horzyk

Abstract:

The electrocardiogram (ECG) is one of the most commonly used medical tests, essential for correct diagnosis and treatment of the patient. While ECG devices generate a huge amount of data, only a small part of it carries valuable medical information. To deal with this problem, many compression algorithms and filters have been developed over the past years. However, the rapid development of new machine learning techniques poses new challenges. To address this class of problems, we created the FlexPoints algorithm, which searches for characteristic points in the ECG signal and ignores all other points that do not carry relevant medical information. The conducted experiments proved that the presented algorithm can significantly reduce the number of data points representing an ECG signal without losing valuable medical information. These sparse but essential characteristic points (flex points) can be a perfect input for modern machine learning models, which work much better with flex points as input than with raw data or data compressed by many popular algorithms.
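
The abstract does not give the exact FlexPoints criterion, so the following is only a hedged illustration of the general idea: keep samples where the signal bends and drop everything in between. The sign-change rule and the amplitude threshold are assumptions.

```python
import numpy as np

def flex_points(sig, min_amplitude=0.05):
    """Keep indices of local extrema whose amplitude change is significant."""
    d = np.diff(sig)
    idx = [0]
    for i in range(1, len(d)):
        if np.sign(d[i]) != np.sign(d[i - 1]):          # slope sign change
            if abs(sig[i] - sig[idx[-1]]) >= min_amplitude:
                idx.append(i)
    idx.append(len(sig) - 1)
    return np.array(idx)

t = np.linspace(0, 2 * np.pi, 500)
ecg_like = np.sin(5 * t) * np.exp(-t)                   # toy waveform
keep = flex_points(ecg_like)
print(f"kept {keep.size} of {ecg_like.size} samples")
```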

Keywords: characteristic points, electrocardiogram, ECG, machine learning, signal compression

Procedia PDF Downloads 160
9071 Small Traditional Retailers in Emerging Markets

Authors: Y. Boulaksil, J. C. Fransoo, E.E. Blanco, S. Koubida

Abstract:

In this paper, we study the small traditional retailers located in the neighborhoods of big cities in emerging markets. Although modern retailing has grown in these markets over the last two decades, the number of small retailers is still increasing, and they serve a substantial part of the daily demand for many basic products, such as bread, milk, and cooking oil. We conduct an empirical study to understand the business environment of these small traditional retailers by collecting data from 333 small retailers spread over 8 large cities in Morocco. We analyze the data and describe their business environment, with a focus on the informal credit they offer to their customers. We find that the smallest retailers, funded from personal savings and managed by the owners themselves, offer relatively the most credit. Our study also provides interesting insights about these small retailers that will help FMCG manufacturers that are (or plan to be) active in Morocco and other emerging markets. We also discuss a number of opportunities to improve the efficiency of the supply chains that serve them.

Keywords: small retailers, big cities, emerging markets, empirical study, supply chain management, Morocco

Procedia PDF Downloads 578
9070 Schema Therapy as Treatment for Adults with Autism Spectrum Disorder and Comorbid Personality Disorder: A Multiple Baseline Case Series Study Testing Cognitive-Behavioral and Experiential Interventions

Authors: Richard Vuijk, Arnoud Arntz

Abstract:

Rationale: To our knowledge, treatment of personality disorder comorbidity in adults with autism spectrum disorder (ASD) is understudied and still in its infancy: we do not know whether treatments for personality disorders are applicable to adults with ASD. In particular, it is unknown whether patients with ASD benefit from the experiential techniques that are part of schema therapy developed for the treatment of personality disorders. Objective: The aim of the study is to investigate the efficacy of a schema mode focused treatment in adult clients with ASD and comorbid personality pathology (i.e., at least one personality disorder). Specifically, we investigate whether they can benefit from both cognitive-behavioral and experiential interventions. Study design: A multiple baseline case series study. Study population: Adult individuals (age > 21 years) with ASD and at least one personality disorder. Participants will be recruited from the Sarr expertise center for autism in Rotterdam. The study requires 12 participants. Intervention: The treatment protocol consists of 35 weekly sessions, followed by 10 monthly booster sessions. A multiple baseline design will be used, with baseline varying from 5 to 10 weeks with weekly supportive sessions. After baseline, a 5-week exploration phase follows, with weekly sessions during which current and past functioning, psychological symptoms, and schema modes are explored and information about the treatment is given. Then 15 weekly sessions with cognitive-behavioral interventions and 15 weekly sessions with experiential interventions will be given. Finally, there will be a 10-month follow-up phase with monthly booster sessions. Participants are randomly assigned to baseline length, rate the belief strength of negative core beliefs (on a VAS) weekly during treatment and monthly at follow-up, and fill out the SMI, SCL-90, and SRS-A 7 times: during the screening procedure (i.e., before baseline), after baseline, after exploration, after the cognitive-behavioral interventions, after the experiential interventions, and after 5 and 10 months of follow-up. The SCID-II will be administered during the screening procedure (i.e., before baseline) and at 5- and 10-month follow-up. Main study parameters: The primary study parameter is negative core beliefs. Secondary study parameters include schema modes, personality disorder manifestations, psychological symptoms, and social interaction and communication. Discussion: To the best of the authors' knowledge, no study has yet been published on the application of schema mode focused interventions in adult patients with ASD and comorbid PD(s). This study offers the first systematic test of the application of schema therapy for adults with ASD. The results will provide initial evidence for the effectiveness of schema therapy in treating adults with both ASD and PD(s). The study intends to provide valuable information for the future development and implementation of therapeutic interventions for adults with both ASD and PD(s).

Keywords: adults, autism spectrum disorder, personality disorder, schema therapy

Procedia PDF Downloads 238
9069 Optimal Emergency Shipment Policy for a Single-Echelon Periodic Review Inventory System

Authors: Saeed Poormoaied, Zumbul Atan

Abstract:

Emergency shipments provide a powerful mechanism to alleviate the risk of imminent stock-outs and can result in substantial benefits in an inventory system. Customer satisfaction and a high service level are immediate consequences of utilizing emergency shipments. In this paper, we consider a single-echelon periodic review inventory system consisting of a single local warehouse replenished from a central warehouse with ample capacity, in an infinite horizon setting. Since the structure of the optimal policy appears to be complicated, we analyze this problem under an order-up-to-S inventory control framework, the (S, T) policy, with emergency shipments. In each period of the periodic review policy, there is a single opportunity, at any point in time, for an emergency shipment, which is requested in case of a stock-out. The goal is to determine the timing and amount of the emergency shipment during a period (emergency shipment policy) as well as the base-stock periodic review policy parameters (replenishment policy). We show how taking advantage of an emergency shipment during a period improves the performance of the classical (S, T) policy, especially when the fixed and unit emergency shipment costs are small. Investigating the structure of the objective function, we develop an exact algorithm for finding the optimal solution. We also provide a heuristic and an approximation algorithm for the problem. The experimental analyses indicate that the heuristic algorithm is computationally more efficient than the approximation algorithm, but in terms of solution quality, the approximation algorithm performs very well. We achieve up to 13% cost savings over the (S, T) policy by applying the proposed emergency shipment policy. Moreover, our computational results reveal that the approximate solution is often within 0.21% of the globally optimal solution.
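
A rough simulation sketch of the comparison being made, with illustrative cost parameters (holding cost h, stock-out penalty b, fixed and unit emergency costs K_e and c_e); the real model also optimizes the timing and size of the shipment, which this toy version does not:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_cost(S, T, allow_emergency, periods=10_000, demand_rate=5.0,
                  h=1.0, b=20.0, K_e=10.0, c_e=2.0):
    """Average per-period cost of an order-up-to-S policy with period T."""
    cost = 0.0
    for _ in range(periods):
        inv = S                                  # replenished up to S
        demand = rng.poisson(demand_rate * T)
        shortfall = max(demand - inv, 0)
        if shortfall and allow_emergency:
            cost += K_e + c_e * shortfall        # emergency covers shortfall
        elif shortfall:
            cost += b * shortfall                # otherwise stock-out penalty
        cost += h * max(inv - demand, 0)         # end-of-period holding
    return cost / periods

print("classical (S,T):   ", simulate_cost(8, 1.0, allow_emergency=False))
print("with emergencies:  ", simulate_cost(8, 1.0, allow_emergency=True))
```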

Keywords: emergency shipment, inventory, periodic review policy, approximation algorithm

Procedia PDF Downloads 140
9068 Application of a New Efficient Normal Parameter Reduction Algorithm of Soft Sets in Online Shopping

Authors: Xiuqin Ma, Hongwu Qin

Abstract:

A new efficient normal parameter reduction algorithm for soft sets in decision making was recently proposed. However, up to the present, few papers have focused on real-life applications of this algorithm. Accordingly, we apply the new efficient normal parameter reduction algorithm to real-life online shopping datasets, such as a Blackberry mobile phone dataset. Experimental results show that this algorithm is both suitable and feasible for dealing with online shopping data.

Keywords: soft sets, parameter reduction, normal parameter reduction, online shopping

Procedia PDF Downloads 508
9067 Effects of Oral L-Carnitine on Liver Functions after Transarterial Chemoembolization in Hepatocellular Carcinoma Patients

Authors: Ali Kassem, Aly Taha, Abeer Hassan, Kazuhide Higuchi

Abstract:

Introduction: Transarterial chemoembolization (TACE) for hepatocellular carcinoma (HCC) is usually followed by hepatic dysfunction that limits its efficacy. L-carnitine has recently been studied as a hepatoprotective agent. Our aim is to evaluate the effect of L-carnitine against the deterioration of liver functions after TACE. Method: 53 patients with intermediate-stage HCC were assigned to two groups: an L-carnitine group (26 patients) who received L-carnitine 300 mg tablets twice daily from 2 weeks before to 12 weeks after TACE, and a control group (27 patients) without L-carnitine therapy. 28 of the studied patients received branched-chain amino acid (BCAA) granules. Results: There were significant differences between the L-carnitine and control groups in mean serum albumin change from baseline to 1 week and 4 weeks after TACE (p < 0.05). L-carnitine maintained the Child-Pugh score at 1 week after TACE and exhibited improvement at 4 weeks after TACE (p < 0.01 vs. 1 week after TACE). The control group showed significant Child-Pugh score deterioration from baseline to 1 week after TACE (p < 0.05) and 12 weeks after TACE (p < 0.05). There were significant differences between the L-carnitine and control groups in mean Child-Pugh score change from baseline to 4 weeks (p < 0.05) and 12 weeks after TACE (p < 0.05). L-carnitine produced an improvement in prothrombin time (PT) from baseline to 1 week, 4 weeks (p < 0.05), and 12 weeks after TACE, whereas PT in the control group remained below baseline at all follow-up intervals. Total bilirubin in the L-carnitine group decreased at 1 week post-TACE, while in the control group it significantly increased at 1 week (p = 0.01). ALT and C-reactive protein elevations were suppressed at 1 week after TACE in the L-carnitine group. The hepatoprotective effects of L-carnitine were enhanced by concomitant use of branched-chain amino acids. Conclusion: L-carnitine and BCAA combination therapy offers a novel supportive strategy after TACE in HCC patients.

Keywords: hepatocellular carcinoma, L-carnitine, liver functions, transarterial chemoembolization

Procedia PDF Downloads 153
9066 Dietary Modification and Its Effects in Overweight or Obese Saudi Women with or without Type 2 Diabetes Mellitus

Authors: Nasiruddin Khan, Nasser M. Al-Daghri, Dara A. Al-Disi, Asim Al-Fadda, Mohamed Al-Seif, Gyanendra Tripathi, A. L. Harte, Philip G. Mcternan

Abstract:

For the last few decades, the prevalence of type 2 diabetes mellitus (T2DM) in the Kingdom of Saudi Arabia (KSA) has been rising alarmingly and is unprecedented at 31.6%. Preventive measures should be taken to curb the increasing incidence. In this prospective, 3-month study, we aimed to determine whether a dietary modification program would confer favorable effects on overweight and obese adult Saudi women with or without T2DM. A total of 92 Saudi women [18 healthy controls, 24 overweight subjects, and 50 overweight or obese patients with early-onset T2DM] were included in this prospective study. Baseline anthropometrics and fasting blood samples were taken at baseline and after 3 months. Fasting blood sugar and lipid profile were measured routinely. An energy-deficit diet of 500 kcal below the daily recommended dietary allowance was prescribed to all participants. After 3 months of follow-up, significant improvements were observed in both the overweight and T2DM groups as compared to baseline, with decreased mean BMI [overweight group 28.54±1.49 versus 27.95±2.25, p<0.05; T2DM group 35.24±7.67 versus 35.04±8.07, p<0.05] and hip circumference [overweight group 109.67±5.01 versus 108.07±4.07, p<0.05; T2DM group 112.3±13.43 versus 109.21±12.71, p<0.01]. Moreover, in the overweight group, baseline HDL-cholesterol was significantly associated with protein intake and inversely associated with carbohydrate intake in controls. In the T2DM group, carbohydrate intake at baseline was significantly associated with BMI. A 3-month 500 kcal/day deficit dietary modification alone is probably effective among adult overweight or obese Saudi females with or without T2DM. Longer prospective studies are needed to determine whether dietary intervention alone can reduce the progression of T2DM among high-risk adult Arabs.

Keywords: diet, lipid, obesity, T2DM

Procedia PDF Downloads 474
9065 Discretization of Cuckoo Optimization Algorithm for Solving Quadratic Assignment Problems

Authors: Elham Kazemi

Abstract:

The Quadratic Assignment Problem (QAP) is one of the widely studied combinatorial optimization problems, concerning the allocation of a set of facilities to a set of locations. Of particular importance in this process are the costs of the allocation, which the problem attempts to minimize. Since the QAP is NP-hard, large instances cannot be solved by exact solution methods. The Cuckoo Optimization Algorithm is a meta-heuristic method with a high capability to find globally optimal points, but it was originally designed to search a continuous space. Because the QAP is defined over a discrete space, the standard arithmetic operators of the Cuckoo Optimization Algorithm need to be redefined on the discrete space in order to apply the algorithm to this discrete search space. This paper presents a way of discretizing the Cuckoo Optimization Algorithm for solving the quadratic assignment problem.
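
One way to make the discretization concrete: represent a solution as a permutation, evaluate the QAP cost directly, and replace the continuous "step toward a better solution" with a swap that increases similarity to that solution. The operator below is an illustrative stand-in, not the paper's exact redefinition.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
F = rng.integers(0, 10, (n, n))      # flow between facilities
D = rng.integers(1, 10, (n, n))      # distance between locations

def qap_cost(perm):
    """Total cost: sum over i, j of flow(i, j) * distance(perm[i], perm[j])."""
    return int((F * D[np.ix_(perm, perm)]).sum())

def step_toward(perm, target):
    """Discrete analogue of moving toward a solution: one swap that fixes
    a position where perm disagrees with target."""
    perm = perm.copy()
    diff = np.nonzero(perm != target)[0]
    if diff.size:
        i = diff[0]
        j = int(np.where(perm == target[i])[0][0])
        perm[i], perm[j] = perm[j], perm[i]
    return perm

best = rng.permutation(n)
for _ in range(200):                 # crude search loop for illustration
    cand = step_toward(rng.permutation(n), best)
    if qap_cost(cand) < qap_cost(best):
        best = cand
print("best cost:", qap_cost(best), "assignment:", best)
```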

Keywords: Quadratic Assignment Problem (QAP), Discrete Cuckoo Optimization Algorithm (DCOA), meta-heuristic algorithms, optimization algorithms

Procedia PDF Downloads 515
9064 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one solution to this problem, and removing the redundancy in rough sets can be achieved with a reduct. Many algorithms for generating reducts have been developed, but most of them are software implementations only and therefore have many limitations: a microprocessor uses a fixed word length and consumes considerable time fetching and processing instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects; for a given decision table, there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes of the core. In this paper, the hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as input, and the output of the algorithm is a superreduct, i.e., a reduct possibly containing some additional, removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table. The algorithm described above has two disadvantages: (i) it generates a superreduct instead of a reduct, and (ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block, and thus adds comparatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Counting the occurrences of each attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For comparison, the algorithm was also implemented in C and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
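
The paper's contribution is the FPGA design, but a short software reference of the two-stage greedy idea makes the stages concrete (toy decision table; the singleton rule mirrors the hardware "singleton detector"):

```python
from collections import Counter
from itertools import combinations

# Each row: (condition attribute values, decision).
table = [
    ((0, 1, 0), 0),
    ((1, 1, 0), 1),
    ((0, 0, 1), 0),
    ((1, 0, 1), 1),
]
n_attrs = 3

# Discernibility: for each pair of objects with different decisions,
# the set of attributes on which they differ.
disc = []
for (x, dx), (y, dy) in combinations(table, 2):
    if dx != dy:
        disc.append({a for a in range(n_attrs) if x[a] != y[a]})

# Stage 1: the core = attributes that appear as singleton sets.
core = {next(iter(s)) for s in disc if len(s) == 1}

# Stage 2: enrich the core with the most frequent attributes until every
# discernibility set is covered; the result is a superreduct.
superreduct = set(core)
uncovered = [s for s in disc if not (s & superreduct)]
while uncovered:
    freq = Counter(a for s in uncovered for a in s)
    superreduct.add(freq.most_common(1)[0][0])
    uncovered = [s for s in uncovered if not (s & superreduct)]
print("core:", core, "superreduct:", superreduct)
```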

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 218
9063 Novel Adaptive Radial Basis Function Neural Networks Based Approach for Short-Term Load Forecasting of Jordanian Power Grid

Authors: Eyad Almaita

Abstract:

In this paper, a novel adaptive Radial Basis Function Neural Network (RBFNN) algorithm is used to forecast the hour-by-hour electrical load demand in Jordan. A small and effective RBFNN model forecasts the hourly total load demand based on a small number of features: the load in the previous day, the load in the same hour of the previous week, the temperature in the same hour, the hour number, the day number, and the day type. The proposed adaptive RBFNN model can enhance the reliability of the conventional RBFNN after embedding the network in the system. This is achieved by introducing an adaptive algorithm that allows the weights of the RBFNN to change after the training process is completed, which eliminates the need to retrain the model. The data used in this paper are real data measured by the National Electric Power Company (Jordan). Data for the period Jan. 2012-April 2013 are used to train the RBFNN models, and data for the period May 2013-Sep. 2013 are used to validate the models' effectiveness.
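
A hedged sketch of the adaptive idea, with an assumed feature layout and step size: after offline training, only the linear output weights are updated with an LMS-style rule as new hourly observations arrive, so the network is never retrained.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf_features(x, centers, sigma=1.0):
    """Gaussian RBF activations of input x against fixed centers."""
    d2 = ((x - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

# x = [load yesterday, load same hour last week, temperature,
#      hour number, day number, day type]  (scaled features, assumed layout)
centers = rng.normal(size=(30, 6))        # RBF centers, fixed after training
w = rng.normal(size=30) * 0.1             # output weights from offline training
true_w = rng.normal(size=6)               # stand-in for the real load process

mu = 0.01                                 # adaptation step size
for _ in range(1000):                     # stream of new hourly observations
    x = rng.normal(size=6)
    actual_load = x @ true_w + 0.05 * rng.normal()
    phi = rbf_features(x, centers)
    error = actual_load - w @ phi
    w += mu * error * phi                 # LMS-style update: no retraining
print("final weight norm:", float(np.linalg.norm(w)))
```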

Keywords: load forecasting, adaptive neural network, radial basis function, short-term, electricity consumption

Procedia PDF Downloads 343
9062 Efficient Reconstruction of DNA Distance Matrices Using an Inverse Problem Approach

Authors: Boris Melnikov, Ye Zhang, Dmitrii Chaikovskii

Abstract:

We continue to consider one of the cybernetic methods in computational biology related to the study of DNA chains, namely the problem of reconstructing a partially filled distance matrix of DNA chains. In practice, it turns out that, on a modern computer of average capability, creating even a small distance matrix for mitochondrial DNA sequences is quite time-consuming with standard algorithms. As the size of the matrix grows, the required computational effort increases significantly, potentially spanning weeks to months of non-stop processing. Hence, calculating the distance matrix on conventional computers is hardly feasible, and supercomputers are usually not available. We have therefore published our variants of algorithms for calculating the distance between two DNA chains, followed by algorithms for restoring partially filled matrices, i.e., the inverse problem of matrix processing. In this paper, we propose an algorithm for restoring the distance matrix for DNA chains, with the primary focus on enhancing the algorithms that shape the greedy function within the branch-and-bound framework.
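
As a hedged illustration of the inverse problem, the sketch below estimates a missing entry from triangle-inequality bounds through every intermediate chain and takes the midpoint of the tightest bounds, a simple stand-in for the paper's greedy and branch-and-bound machinery:

```python
import numpy as np

def complete(D):
    """D: symmetric distance matrix with np.nan for unknown entries."""
    D = D.copy()
    n = D.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if np.isnan(D[i, j]):
                lows, highs = [], []
                for k in range(n):
                    if not (np.isnan(D[i, k]) or np.isnan(D[k, j])):
                        lows.append(abs(D[i, k] - D[k, j]))   # lower bound
                        highs.append(D[i, k] + D[k, j])       # upper bound
                if lows:
                    D[i, j] = D[j, i] = (max(lows) + min(highs)) / 2
    return D

D = np.array([[0, 2, np.nan, 5],
              [2, 0, 3, np.nan],
              [np.nan, 3, 0, 4],
              [5, np.nan, 4, 0]], dtype=float)
print(complete(D))
```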

Keywords: DNA chains, distance matrix, optimization problem, restoring algorithm, greedy algorithm, heuristics

Procedia PDF Downloads 116
9061 Constant Factor Approximation Algorithm for p-Median Network Design Problem with Multiple Cable Types

Authors: Chaghoub Soraya, Zhang Xiaoyan

Abstract:

This research presents the first constant-factor approximation algorithm for the p-median network design problem with multiple cable types. The problem has previously been addressed with a single cable type, for which a bifactor approximation algorithm exists. To the best of our knowledge, the algorithm proposed in this paper is the first constant-factor approximation algorithm for the p-median network design problem with multiple cable types. The addressed problem is a combination of two well-studied problems: the p-median problem and the network design problem. The introduced algorithm is a random sampling approximation algorithm of constant factor, conceived by using random sampling techniques from the literature. It is based on a redistribution lemma from the literature and on the Steiner tree problem as a subproblem. The algorithm is simple and relies on the notions of random sampling and probability. The proposed approach gives an approximate solution with a single constant ratio without violating any of the constraints, in contrast to the one proposed in the literature. This paper provides a (21 + 2)-approximation algorithm for the p-median network design problem with multiple cable types using random sampling techniques.

Keywords: approximation algorithms, buy-at-bulk, combinatorial optimization, network design, p-median

Procedia PDF Downloads 201
9060 Application of the Discrete-Event Simulation When Optimizing of Business Processes in Trading Companies

Authors: Maxat Bokambayev, Bella Tussupova, Aisha Mamyrova, Erlan Izbasarov

Abstract:

Optimization of business processes in trading companies is reviewed in this report. We present the 'Wholesale Customer Order Handling Process' business process model, applicable to small and medium businesses. We propose an algorithm for automating customer order processing that can significantly reduce labor costs and time expenditure and increase the profitability of companies. The optimized business process is an element of an information system for accounting the activity of a spare-parts trading network. The considered algorithm may find application elsewhere in the trading industry as well.
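
A minimal discrete-event simulation sketch (illustrative, not the report's model): orders arrive at random, a single clerk processes them in event order, and the simulation estimates average order turnaround, the kind of quantity such a model would be used to optimize.

```python
import heapq
import random

random.seed(5)
events = []                                  # (time, kind, order_id)
t = 0.0
for oid in range(200):
    t += random.expovariate(1 / 5.0)         # mean 5 minutes between orders
    heapq.heappush(events, (t, "arrive", oid))

clerk_free_at = 0.0
arrival_time, turnaround = {}, []
while events:
    t, kind, oid = heapq.heappop(events)
    if kind == "arrive":
        arrival_time[oid] = t
        start = max(t, clerk_free_at)        # wait if the clerk is busy
        service = random.uniform(2.0, 8.0)   # minutes to handle one order
        clerk_free_at = start + service
        heapq.heappush(events, (clerk_free_at, "done", oid))
    else:
        turnaround.append(t - arrival_time[oid])

print(f"average turnaround: {sum(turnaround) / len(turnaround):.1f} min")
```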

Keywords: business processes, discrete-event simulation, management, trading industry

Procedia PDF Downloads 342
9059 Probabilistic Gathering of Agents with Simple Sensors: Distributed Algorithm for Aggregation of Robots Equipped with Binary On-Board Detectors

Authors: Ariel Barel, Rotem Manor, Alfred M. Bruckstein

Abstract:

We present a probabilistic gathering algorithm for agents that can only detect the presence of other agents in front of or behind them. The agents act in the plane and are identical and indistinguishable, oblivious, and lack any means of direct communication. They do not have a common frame of reference in the plane and choose their orientation (direction of possible motion) at random. The analysis of the gathering process assumes that the agents act synchronously in selecting random orientations that remain fixed during each unit time-interval. Two algorithms are discussed. The first one assumes discrete jumps based on the sensing results given the randomly selected motion direction, and in this case, extensive experimental results exhibit probabilistic clustering into a circular region with radius equal to the step-size in time proportional to the number of agents. The second algorithm assumes agents with continuous sensing and motion, and in this case, we can prove gathering into a very small circular region in finite expected time.

Keywords: control, decentralized, gathering, multi-agent, simple sensors

Procedia PDF Downloads 162
9058 A Minimum Spanning Tree-Based Method for Initializing the K-Means Clustering Algorithm

Authors: J. Yang, Y. Ma, X. Zhang, S. Li, Y. Zhang

Abstract:

The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the algorithm often converges to local minima because it is sensitive to the initial cluster centers. In this paper, an algorithm for selecting initial cluster centers on the basis of the minimum spanning tree (MST) is presented. Vertices of the MST with the same degree are regarded as a group, which is used to find the skeleton data points. Furthermore, a distance measure between the skeleton data points that considers both degree and Euclidean distance is presented. Finally, the MST-based initialization method for the k-means algorithm is presented, and the corresponding time complexity is analyzed. The presented algorithm is tested on five data sets from the UCI Machine Learning Repository. The experimental results illustrate the effectiveness of the presented algorithm compared to three existing initialization methods.
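
A sketch of MST-based seeding under stated assumptions: the paper's degree-aware distance measure is not reproduced here; we simply pick a high-degree MST vertex first and then greedily spread the remaining centers.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2))
               for c in ((0, 0), (3, 3), (0, 3))])
k = 3

D = squareform(pdist(X))
mst = minimum_spanning_tree(D).toarray()
adj = (mst + mst.T) > 0
degree = adj.sum(axis=1)                 # MST vertex degrees

# Highest-degree vertex first, then greedily add the point farthest from
# the chosen centers (a k-means++-like spread rule).
centers = [int(degree.argmax())]
while len(centers) < k:
    d_to_centers = D[:, centers].min(axis=1)
    centers.append(int(d_to_centers.argmax()))
print("initial centers:\n", X[centers])
```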

Keywords: degree, initial cluster center, k-means, minimum spanning tree

Procedia PDF Downloads 409
9057 An Optimized Association Rule Mining Algorithm

Authors: Archana Singh, Jyoti Agarwal, Ajay Rana

Abstract:

Data mining is an efficient technology for discovering patterns in large databases. Association rule mining techniques are used to find the correlations between the various itemsets in a database, and these correlations are used in decision making and pattern analysis. In recent years, the problem of finding association rules from large datasets has been studied by many researchers. Various research papers on association rule mining (ARM) were first studied and analyzed to understand the existing algorithms. The Apriori algorithm is the basic ARM algorithm, but it requires many database scans. The DIC algorithm needs fewer database scans but uses a complex lattice data structure. The main focus of this paper is to propose a new optimized algorithm (the Friendly Algorithm) and compare its performance with the existing algorithms. A data set is used to find frequent itemsets and association rules with the help of the existing and proposed algorithms, and it has been observed that the proposed algorithm finds all the frequent itemsets and essential association rules in fewer database scans than the existing algorithms. The proposed algorithm uses an optimized data structure, namely a graph with its adjacency matrix.

Keywords: association rules, data mining, dynamic item set counting, FP-growth, friendly algorithm, graph

Procedia PDF Downloads 419
9056 Improved K-Means Clustering Algorithm Using RHadoop with Combiner

Authors: Ji Eun Shin, Dong Hoon Lim

Abstract:

Data clustering is a common technique used in data analysis and in many applications, such as artificial intelligence, pattern recognition, economics, ecology, psychiatry, and marketing. K-means is a well-known clustering algorithm that aims to cluster a set of data points into a predefined number of clusters. In this paper, we implement the K-means algorithm on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function of the map output to decrease the amount of data that must be processed by the reducers. The experimental results demonstrate that the K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also show that our K-means algorithm using RHadoop with a combiner is faster than the regular algorithm without a combiner as the size of the data set increases.
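
The combiner idea in plain Python (a stand-in, not RHadoop code): each mapper's combiner pre-aggregates its points into one (sum, count) pair per cluster, so the reducers receive a handful of partial sums instead of every point.

```python
from collections import defaultdict

def mapper(points, centers):
    """Emit (nearest cluster, point) for each point in this split."""
    for p in points:
        c = min(range(len(centers)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
        yield c, p

def combiner(mapped):
    """Local pre-aggregation: one (sum vector, count) per cluster."""
    partial = defaultdict(lambda: (None, 0))
    for c, p in mapped:
        s, n = partial[c]
        partial[c] = ([a + b for a, b in zip(s, p)] if s else list(p), n + 1)
    return list(partial.items())

def reducer(all_partials, k, dim):
    """Merge partial sums from all splits into new cluster centers."""
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for c, (s, n) in all_partials:
        sums[c] = [a + b for a, b in zip(sums[c], s)]
        counts[c] += n
    return [[v / max(counts[c], 1) for v in sums[c]] for c in range(k)]

centers = [(0.0, 0.0), (5.0, 5.0)]
split1 = [(0.1, 0.2), (0.3, -0.1), (5.2, 4.9)]
split2 = [(4.8, 5.1), (-0.2, 0.0)]
partials = combiner(mapper(split1, centers)) + combiner(mapper(split2, centers))
print("new centers:", reducer(partials, k=2, dim=2))
```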

Keywords: big data, combiner, K-means clustering, RHadoop

Procedia PDF Downloads 437
9055 A Mini Radar System for Low Altitude Targets Detection

Authors: Kangkang Wu, Kaizhi Wang, Zhijun Yuan

Abstract:

This paper deals with a mini radar system aimed at detecting small targets at low altitude. The radar operates at Ku-band in the frequency modulated continuous wave (FMCW) mode with two receiving channels, and has the characteristics of compactness, mobility, and low power consumption. This paper focuses on the implementation of the radar system; the block least mean square (Block LMS) algorithm is applied to minimize unwanted interference. A series of experiments validates that the track of an unmanned aerial vehicle (UAV) can be easily distinguished with the radar system.
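
A minimal Block LMS sketch with illustrative parameters: the weights are updated once per block from the gradient accumulated over that block, which is cheaper than sample-by-sample LMS at radar sample rates.

```python
import numpy as np

def block_lms(x, d, n_taps=8, block=32, mu=0.01):
    """x: reference input, d: desired signal; returns the error signal."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for start in range(0, len(x) - block, block):
        grad = np.zeros(n_taps)
        for i in range(start, start + block):
            if i < n_taps - 1:
                continue
            u = x[i - n_taps + 1:i + 1][::-1]     # most recent samples first
            e[i] = d[i] - w @ u
            grad += e[i] * u
        w += mu * grad / block                    # one update per block
    return e

rng = np.random.default_rng(7)
x = rng.normal(size=4096)                          # interference reference
d = np.convolve(x, [0.5, -0.3, 0.1])[:len(x)] + 0.01 * rng.normal(size=4096)
e = block_lms(x, d)
print("residual power:", float(np.mean(e[2048:] ** 2)))
```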

Keywords: unmanned aerial vehicle (UAV), interference, block least mean square (Block LMS) algorithm, frequency modulated continuous wave (FMCW)

Procedia PDF Downloads 319
9054 Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database

Authors: Yosep Chong, Yejin Kim, Jingyun Choi, Hwanjo Yu, Eun Jung Lee, Chang Suk Kang

Abstract:

For the past decades, immunohistochemistry (IHC) has played an important role in the diagnosis of human neoplasms by helping pathologists make clearer decisions on differential diagnosis, subtyping, personalized treatment planning, and, finally, prognosis prediction. However, the IHC performed on various tumors in daily practice often shows conflicting results that are very challenging to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic, and immunohistochemical findings can be helpless in some twisted cases. Another important issue is that IHC data are increasing exponentially, and more and more information has to be taken into account. For this reason, we set out to develop an expert supporting system to help pathologists make better decisions in diagnosing human neoplasms with IHC results. We devised a probabilistic decision tree algorithm and tested it with real case data of lymphoid neoplasms, in which the IHC profile is more important for making a proper diagnosis than in other human neoplasms. We designed the probabilistic decision tree based on Bayes' theorem, programmed the computational process using MATLAB (The MathWorks, Inc., USA), and prepared an IHC profile database (about 104 disease categories and 88 IHC antibodies) based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set with the epidemiologic data of lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 of the 131 cases were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for discordant cases was the similarity of the IHC profiles among two or three different neoplasms. The expert supporting system algorithm presented in this study is in its elementary stage and needs more optimization using more advanced technology, such as deep learning with real case data, especially for differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision making in the future. A further application to determine the IHC antibodies for a certain subset of differential diagnoses might be possible in the near future.
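
The Bayesian core of such a system can be sketched in a few lines; the priors and likelihoods below are toy values, not the study's 104-category database. Each antibody result multiplies every candidate diagnosis' prior by the probability of that result under the diagnosis, and the highest posteriors become the presumptive diagnoses.

```python
priors = {"DLBCL": 0.30, "Follicular": 0.20, "Mantle cell": 0.10}
# P(antibody positive | diagnosis): illustrative values only
likelihood = {
    "CD10":     {"DLBCL": 0.40, "Follicular": 0.90, "Mantle cell": 0.05},
    "CyclinD1": {"DLBCL": 0.02, "Follicular": 0.02, "Mantle cell": 0.95},
}

def posterior(results, priors, likelihood):
    """Bayes update of diagnosis probabilities given IHC results."""
    post = dict(priors)
    for antibody, positive in results.items():
        for dx in post:
            p = likelihood[antibody][dx]
            post[dx] *= p if positive else (1 - p)
    z = sum(post.values())
    return {dx: v / z for dx, v in post.items()}

case = {"CD10": True, "CyclinD1": False}
ranked = sorted(posterior(case, priors, likelihood).items(),
                key=lambda kv: -kv[1])
print("presumptive diagnoses:", ranked)
```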

Keywords: database, expert supporting system, immunohistochemistry, probabilistic decision tree

Procedia PDF Downloads 223
9053 A Genetic Algorithm to Schedule the Flow Shop Problem under Preventive Maintenance Activities

Authors: J. Kaabi, Y. Harrath

Abstract:

This paper studies the flow shop scheduling problem under machine availability constraints, where the machines are subject to flexible preventive maintenance activities. The nonresumable scenario for the jobs is considered: when a job is interrupted by an unavailability period of a machine, it must be restarted from the beginning. The objective is to minimize the total tardiness of the jobs and the earliness/tardiness of the maintenance activities. To solve the problem, a genetic algorithm was developed and successfully tested and validated on many problem instances. The computational results show that the new genetic algorithm outperforms an earlier proposed algorithm.

Keywords: flow shop scheduling, genetic algorithm, maintenance, priority rules

Procedia PDF Downloads 470
9052 Efficacy of Heart Failure Reversal Treatment Followed by 90 Days Follow up in Chronic Heart Failure Patients with Low Ejection Fraction

Authors: Rohit Sane, Snehal Dongre, Pravin Ghadigaonkar, Rahul Mandole

Abstract:

The present study was designed to evaluate the efficacy of heart failure reversal therapy (HFRT), which uses a herbal procedure (panchakarma) and allied therapies, in chronic heart failure (CHF) patients with low ejection fraction. Methods: This efficacy study was conducted in CHF patients (aged 25-65 years, ejection fraction (EF) < 30%) wherein HFRT (60-75 minutes), consisting of snehana (external oleation), swedana (passive heat therapy), hrudaydhara (concoction dripping treatment), and basti (enema), was administered twice daily for 7 days. During this therapy and for the next 30 days, patients followed the study dinacharya (daily regimen) and were prescribed ARJ kadha in addition to their conventional treatment. The primary endpoint of this study was the evaluation of maximum aerobic capacity (MAC), as assessed by 6-minute walk distance (6MWD) using Cahalin's equation, from baseline to the end of the 7-day treatment and at follow-ups after 30 and 90 days. EF was assessed by 2D echocardiography at baseline and after 30 days of follow-up. Results: CHF patients with EF < 30% (N=52, mean [SD] age: 58.8 [10.8], 85% men) were enrolled in the study. There was 100% compliance with the study therapy. A significant improvement was observed in MAC levels (7.11%, p=0.029) at the end of the 7-day therapy as compared to baseline, and this improvement was maintained at the two follow-up visits. Moreover, the ejection fraction was observed to increase by 6.38% (p=0.012) as compared to baseline at day 7 of the therapy. Conclusions: This 90-day follow-up study highlights the benefit of HFRT as part of maintenance treatment for CHF patients with reduced ejection fraction.

Keywords: chronic heart failure, functional capacity, heart failure reversal therapy, oxygen uptake, panchakarma

Procedia PDF Downloads 231
9051 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high additive white Gaussian noise (AWGN) environments using principles of computational auditory scene analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in 'weight space', where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal, and the arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set, selected randomly from a single district. Each speaker has 10 sentences: two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB signal-to-noise ratio (SNR), and testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
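
A minimal matching pursuit sketch with a random dictionary (the paper learns Gabor-like atoms with a sparse autoencoder instead): repeatedly pick the atom most correlated with the residual; the chosen indices and weights form the sparse representation used for classification.

```python
import numpy as np

rng = np.random.default_rng(8)
N, n_atoms = 256, 512
dictionary = rng.normal(size=(n_atoms, N))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

# synthetic signal built from two atoms plus noise
signal = 3.0 * dictionary[7] - 2.0 * dictionary[99] + 0.05 * rng.normal(size=N)

residual = signal.copy()
indices, weights = [], []
for _ in range(5):                          # 5-sparse decomposition
    corr = dictionary @ residual
    k = int(np.abs(corr).argmax())          # best-matching atom
    indices.append(k)
    weights.append(corr[k])
    residual = residual - corr[k] * dictionary[k]

print("atoms:", indices)
print("residual energy:", float(residual @ residual))
```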

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 288
9050 Memetic Algorithm for Solving the One-To-One Shortest Path Problem

Authors: Omar Dib, Alexandre Caminada, Marie-Ange Manier

Abstract:

The purpose of this study is to introduce a novel approach to solving the one-to-one shortest path problem. A directed connected graph is assumed in which all edge weights are positive. Our method is based on a memetic algorithm that combines a genetic algorithm (GA) and a variable neighborhood search (VNS) method. We compare our approximate method with two exact algorithms, Dijkstra's algorithm and integer programming (IP), using randomly generated, complete, and real graph instances. In most case studies, the numerical results show that our method runs much faster than the exact methods while keeping a 5% average gap to optimality: on average, our algorithm is 20 times faster than Dijkstra's algorithm and more than 1000 times faster than IP. The details of the experimental results are discussed and presented in the paper.

Keywords: shortest path problem, Dijkstra’s algorithm, integer programming, memetic algorithm

Procedia PDF Downloads 464
9049 Real-Time Web Map Service Based on Solar-Powered Unmanned Aerial Vehicle

Authors: Sunghun Jung

Abstract:

The existing web map service providers contract with satellite operators to update their maps, paying an astronomical amount of money, but the cost could be minimized by operating a cheap and small UAV. In contrast to satellites, UAVs only require aged battery packs to be replaced from time to time. Utilizing both a regular camera and an infrared camera mounted on a small, solar-powered, long-endurance, and hoverable UAV, daytime ground surface photographs and nighttime infrared photographs will be continuously and repeatedly uploaded to the web map server and overlapped with the existing ground surface photographs in real time. The real-time web map service using such a UAV can also be applied to surveillance missions, in particular to detect border area intruders. An improved real-time image stitching algorithm is developed for overlapping the graphic map data, and a small home server will be developed to manage the huge volume of incoming map data. Map photographs taken at tens or hundreds of kilometers by a UAV would improve the map graphic resolution compared to map photographs taken at thousands of kilometers by satellites, since satellite photographs are also limited by weather conditions.

Keywords: long-endurance, real-time web map service (RWMS), solar-powered, unmanned aerial vehicle (UAV)

Procedia PDF Downloads 272
9048 Rosuvastatin Improves Endothelial Progenitor Cells in Rheumatoid Arthritis

Authors: Ashit Syngle, Nidhi Garg, Pawan Krishan

Abstract:

Background: Endothelial progenitor cells (EPCs) are depleted in rheumatoid arthritis (RA) and contribute to increased cardiovascular (CV) risk. Statins exert a protective effect in CAD partly by promoting EPC mobilization, but this vasculoprotective effect of statins has not yet been investigated in RA. We aimed to investigate the effect of rosuvastatin on EPCs in RA. Methods: 50 RA patients were randomized to receive 6 months of treatment with rosuvastatin (10 mg/day, n=25) or placebo (n=25) as an adjunct to existing stable antirheumatic drugs. EPCs (CD34+/CD133+) were quantified by flow cytometry. Inflammatory measures, including DAS28, CRP, and ESR, were measured at baseline and after treatment, as were lipids and pro-inflammatory cytokines (TNF-α, IL-6, and IL-1). Results: At baseline, inflammatory measures and pro-inflammatory cytokines were elevated and EPCs depleted in both groups, and EPCs correlated inversely with DAS28 and TNF-α. EPCs increased significantly (p < 0.01) after treatment with rosuvastatin but did not change significantly with placebo. Rosuvastatin exerted a positive effect on the lipid spectrum: it lowered total cholesterol, LDL, and non-HDL cholesterol and raised HDL as compared with placebo. At 6 months, DAS28, ESR, CRP, TNF-α, and IL-6 improved significantly in the rosuvastatin group, and a significant negative correlation was observed between EPCs and DAS28, CRP, TNF-α, and IL-6 after treatment with rosuvastatin. Conclusion: This is the first study to show that rosuvastatin improves inflammation and EPC biology in RA, possibly through its anti-inflammatory and lipid-lowering effects. This beneficial effect of rosuvastatin may provide a novel strategy to prevent cardiovascular events in RA.

Keywords: RA, Endothelial Progenitor Cells, rosuvastatin, cytokines

Procedia PDF Downloads 256
9047 Constructing the Density of States from the Parallel Wang Landau Algorithm Overlapping Data

Authors: Arman S. Kussainov, Altynbek K. Beisekov

Abstract:

This work focuses on building an efficient universal procedure to construct a single density of states from the multiple pieces of data provided by a parallel implementation of the Wang-Landau Monte Carlo algorithm. The Ising and Potts models were used as examples of two-dimensional spin lattices for which the densities of states were constructed. The sampled energy space was distributed among the individual walkers with certain overlaps; this was done to accommodate the latest development of the algorithm, the replica-exchange density of states technique. Several factors of immediate importance for the seamless stitching process have been considered. These include, but are not limited to, the speed and universality of the initial parallel algorithm implementation as well as the data post-processing required to produce the expected smooth density of states.
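
The stitching step itself can be sketched compactly: each walker returns log g(E) on its window only up to an additive constant, so aligning each piece to the previous one by the mean difference over their overlap yields a single continuous curve. The toy data below are illustrative.

```python
import numpy as np

def stitch(pieces):
    """pieces: list of (energies, log_g) arrays with overlapping ranges."""
    E, logg = map(np.asarray, pieces[0])
    for E2, g2 in pieces[1:]:
        E2, g2 = np.asarray(E2), np.asarray(g2)
        common = np.intersect1d(E, E2)
        # additive shift that matches the two pieces on the overlap
        shift = np.mean(logg[np.isin(E, common)] - g2[np.isin(E2, common)])
        keep = ~np.isin(E2, common)
        E = np.concatenate([E, E2[keep]])
        logg = np.concatenate([logg, g2[keep] + shift])
    order = np.argsort(E)
    return E[order], logg[order]

# toy example: one quadratic log-DOS split into two shifted windows
E_full = np.arange(0, 100)
g_full = -0.01 * (E_full - 50.0) ** 2
p1 = (E_full[:60], g_full[:60] + 3.0)        # arbitrary additive constants
p2 = (E_full[40:], g_full[40:] - 7.0)
E, logg = stitch([p1, p2])
err = np.max(np.abs((logg - logg[0]) - (g_full - g_full[0])))
print("max stitching error:", float(err))
```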

Keywords: density of states, Monte Carlo, parallel algorithm, Wang Landau algorithm

Procedia PDF Downloads 410
9046 A Preliminary Study for Design of Automatic Block Reallocation Algorithm with Genetic Algorithm Method in the Land Consolidation Projects

Authors: Tayfun Çay, Yasar İnceyol, Abdurrahman Özbeyaz

Abstract:

Land reallocation is one of the most important steps in land consolidation projects. Many different models for land reallocation have been proposed in the literature, such as fuzzy logic, block-priority-based land reallocation, and spatial decision support systems. A model comprising four parts is considered for automatic block reallocation with the genetic algorithm method in land consolidation projects. These stages are, respectively: preparing the data tables for the project land, determining the conditions and constraints of land reallocation, designing the command steps and logical flowchart of the reallocation algorithm, and writing the program code of the genetic algorithm. In this study, we designed the first three steps of this four-step model.

Keywords: land consolidation, landholding, land reallocation, optimization, genetic algorithm

Procedia PDF Downloads 430
9045 Flywheel Energy Storage Control Using SVPWM for Small Satellites Application

Authors: Noha El-Gohary, Thanaa El-Shater, A. A. Mahfouz, M. M. Sakr

Abstract:

High power-conversion efficiency and long lifetime are important goals when designing a power supply subsystem for satellite applications. To fulfill these goals, this paper presents a power supply subsystem for small satellites in which a flywheel energy storage system is used as a secondary power source instead of a chemical battery. The model of the flywheel energy storage system is introduced, together with a DC-bus regulation control algorithm for charging and discharging the flywheel based on the space vector pulse width modulation (SVPWM) technique and motor current control. Simulation results show the operation of the flywheel in charging and discharging modes during the illumination and shadow periods, and confirm the advantages of the proposed system.
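
A sketch of the SVPWM computation at the heart of such a control loop, with illustrative constants (not the paper's design values): given a reference voltage vector in the alpha-beta plane, compute the sector and the dwell times of the two adjacent active vectors and the zero vector.

```python
import numpy as np

def svpwm_dwell_times(v_alpha, v_beta, v_dc, t_s):
    """Return (sector, t1, t2, t0) for one switching period t_s."""
    v_ref = np.hypot(v_alpha, v_beta)
    theta = np.arctan2(v_beta, v_alpha) % (2 * np.pi)
    sector = int(theta // (np.pi / 3))           # sectors 0..5
    theta_rel = theta - sector * np.pi / 3       # angle inside the sector
    m = np.sqrt(3) * v_ref / v_dc                # modulation index
    t1 = t_s * m * np.sin(np.pi / 3 - theta_rel) # first active vector
    t2 = t_s * m * np.sin(theta_rel)             # second active vector
    t0 = t_s - t1 - t2                           # zero-vector time
    return sector, t1, t2, t0

# example: 100 V reference at 50 degrees on a 270 V DC bus, 100 us period
sector, t1, t2, t0 = svpwm_dwell_times(100 * np.cos(np.radians(50)),
                                       100 * np.sin(np.radians(50)),
                                       v_dc=270.0, t_s=100e-6)
print(sector, t1, t2, t0)
```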

Keywords: small-satellites, flywheel energy storage system, space vector pulse width modulation, power conversion

Procedia PDF Downloads 398