Search results for: small baseline subset algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9084

9024 Challenge of Baseline Hydrology Estimation at Large-Scale Watersheds

Authors: Can Liu, Graham Markowitz, John Balay, Ben Pratt

Abstract:

Baseline or natural hydrology is commonly employed for hydrologic modeling and for quantifying hydrologic alteration due to manmade activities. It can inform planning and policy efforts by state and federal water resource agencies seeking to restore natural streamflow regimes. A common challenge faced by hydrologists is how to replicate unaltered streamflow conditions, particularly in large watershed settings prone to development and regulation. Three different methods were employed to estimate baseline streamflow conditions for six major subbasins of the Susquehanna River Basin: 1) adding consumptive water use and reservoir operations back into regulated gaged records; 2) using a map correlation method and flow duration (exceedance probability) regression equations; 3) extending pre-regulation streamflow records based on the relationship between concurrent streamflows at unregulated and regulated gage locations. Parallel analyses were performed among the three methods, and the limitations associated with each are presented. Results from these analyses indicate that generating baseline streamflow records for large-scale watersheds remains challenging, even when long-term continuous stream gage records are available.
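
As an illustration of the third method, record extension from an unregulated index gage, a minimal sketch might look like the following; this is not the authors' code, and the log-linear regression form is an assumption (flows are assumed positive):

```python
import numpy as np

def extend_baseline_record(index_flow, target_flow, index_flow_full):
    """Estimate pre-regulation ('baseline') flows at a regulated site by
    regressing concurrent log-flows at an unregulated index gage against
    the target gage, then applying the fit to the full index record.

    index_flow, target_flow: concurrent pre-regulation daily flows
    index_flow_full: index-gage flows for the period to be reconstructed
    """
    # Fit log10(target) = a * log10(index) + b on the concurrent period
    a, b = np.polyfit(np.log10(index_flow), np.log10(target_flow), deg=1)
    # Transform the full index record into estimated baseline flows
    return 10 ** (a * np.log10(index_flow_full) + b)
```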

Keywords: baseline hydrology, streamflow gage, subbasin, regression

Procedia PDF Downloads 306
9023 A Clustering Algorithm for Massive Texts

Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen

Abstract:

Internet users face massive amounts of textual data every day. Organizing texts into categories can help users dig useful information out of large-scale text collections, and clustering is one of the most promising tools for categorizing texts because it is unsupervised. Unfortunately, most traditional clustering algorithms lose their effectiveness on large-scale text collections, mainly because of the high-dimensional vectors generated from texts. To cluster large-scale text collections effectively and efficiently, this paper proposes a vector-reconstruction-based clustering algorithm in which only the features that can represent a cluster are preserved in the cluster's representative vector. The algorithm alternates between two sub-processes until it converges. One is the partial tuning sub-process, in which feature weights are fine-tuned iteratively; to accelerate clustering, an intersection-based similarity measurement and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, in which features are reallocated among different clusters and features useless for representing a cluster are removed from its representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that the algorithm obtains high quality on both small-scale and large-scale text collections.
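
A minimal sketch of two of the ideas named above, an intersection-based similarity and the pruning of a representative vector; the sparse-dict representation and the weight threshold are assumptions for illustration, not the paper's exact formulation:

```python
def intersection_similarity(doc, rep):
    """Similarity counted only over features shared by the document
    vector and the cluster's representative vector (both sparse dicts
    mapping feature -> weight)."""
    shared = doc.keys() & rep.keys()
    return sum(min(doc[f], rep[f]) for f in shared)

def prune_representative(rep, threshold=0.01):
    """Overall-tuning step: drop features whose relative weight is too
    small to represent the cluster, shrinking the representative vector."""
    total = sum(rep.values())
    return {f: w for f, w in rep.items() if w / total >= threshold}
```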

Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process

Procedia PDF Downloads 411
9022 Quantum Decision Making with Small Sample for Network Monitoring and Control

Authors: Tatsuya Otoshi, Masayuki Murata

Abstract:

With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also of the time required to grasp changes in network conditions, and the tradeoff between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and those decisions seem to resolve tradeoffs between time and accuracy: when making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called "quantum decision-making," has recently attracted much attention. However, decision-making from small samples has not been examined much so far. In this paper, we extend the quantum decision-making model to decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.

Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm

Procedia PDF Downloads 58
9021 A Coordinate-Based Heuristic Route Search Algorithm for Delivery Truck Routing Problem

Authors: Ahmed Tarek, Ahmed Alveed

Abstract:

The vehicle routing problem is a well-known research avenue in computing. Modern vehicle routing is increasingly focused on GPS-based coordinate systems, as state-of-the-art vehicle and trucking systems are equipped with digital navigation. In this paper, a new two-dimensional coordinate-based algorithm for the vehicle routing problem in a supply chain network is proposed and explored, and the algorithm is compared with other available and recently devised heuristics. For each algorithm discussed, including the proposed coordinate-based search heuristic, the associated advantages and disadvantages are explored. The proposed algorithm is studied from the standpoint of a small supermarket chain delivery network that supplies its stores in four different states along the East Coast and is trying to optimize its trucking delivery cost. Minimizing the delivery cost of the supply network is important to ensure the chain's business success.
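
For contrast with the paper's heuristic, whose details the abstract does not give, a baseline coordinate-based greedy route of the kind such algorithms are typically compared against can be sketched as follows; the depot and store coordinates are made up:

```python
import math

def nearest_neighbor_route(depot, stores):
    """Greedy coordinate-based route: from the depot, repeatedly drive to
    the closest unvisited store, then return to the depot."""
    route, current, remaining = [depot], depot, list(stores)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # close the Hamiltonian circuit
    return route

# Example: a depot plus four store coordinates (x, y)
print(nearest_neighbor_route((0, 0), [(2, 3), (5, 1), (1, 7), (6, 4)]))
```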

Keywords: coordinate-based optimal routing, Hamiltonian Circuit, heuristic algorithm, traveling salesman problem, vehicle routing problem

Procedia PDF Downloads 131
9020 Small Entrepreneurship Supporting Economic Policy in Georgia

Authors: G. Erkomaishvili

Abstract:

This paper discusses a small entrepreneurship development strategy for Georgia and the tools and regulations that would encourage the development of small entrepreneurship. The current situation in the small entrepreneurship sector, the factors affecting growth and decline in the sector, and the priorities of state support are studied and analyzed. The objective of this research is to assess the current situation of the sector, highlight opportunities, and reveal the gaps. State support of small entrepreneurship should become a key priority in the country's economic policy, as development of the sector will ensure social, economic and political stability. Based on the research, a small entrepreneurship development strategy is presented; corresponding conclusions are made and recommendations are developed.

Keywords: economic policy for small entrepreneurship development, small entrepreneurship, regulations, small entrepreneurship development strategy

Procedia PDF Downloads 459
9019 Improving the Performance of Back-Propagation Training Algorithm by Using ANN

Authors: Vishnu Pratap Singh Kirar

Abstract:

An Artificial Neural Network (ANN) can be trained using back-propagation (BP), the most widely used algorithm for supervised learning with multi-layered feed-forward networks. Efficient learning by the BP algorithm is required for many practical applications. The BP algorithm calculates the weight changes of the network, and a common approach is a two-term update consisting of a learning rate (LR) and a momentum factor (MF). The major drawbacks of the two-term BP learning algorithm are local minima and slow convergence, which limit its scope for real-time applications. Recently, the addition of an extra term, called a proportional factor (PF), to the two-term BP algorithm was proposed; the third term increases the speed of the BP algorithm. However, the PF term also complicates the convergence behavior of the algorithm, so criteria for evaluating convergence are required to facilitate the application of the three-term BP algorithm. Although these two issues seem to be closely related, as described later, we summarize various improvements to overcome the drawbacks and compare the different convergence criteria of the new three-term BP algorithm.
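
A sketch of one common three-term update rule, with the proportional term added to the usual learning-rate and momentum terms; the exact placement and sign conventions of the PF term vary across papers, so treat this as an assumption rather than the authors' formulation:

```python
def three_term_update(w, grad, prev_dw, error, lr=0.1, mf=0.9, pf=0.005):
    """One three-term BP weight update:
    dw(t) = -LR * grad + MF * dw(t-1) + PF * error,
    where 'error' is the propagated output error for this weight.
    Works element-wise on scalars or NumPy arrays."""
    dw = -lr * grad + mf * prev_dw + pf * error
    return w + dw, dw  # new weights and the step to feed back as prev_dw
```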

Keywords: neural network, backpropagation, local minima, fast convergence rate

Procedia PDF Downloads 479
9018 Integral Domains and Their Algebras: Topological Aspects

Authors: Shai Sarussi

Abstract:

Let S be an integral domain with field of fractions F, and let A be an F-algebra. An S-subalgebra R of A is called S-nice if R∩F = S and the localization of R with respect to S \{0} is A. Denoting by W the set of all S-nice subalgebras of A, and defining a notion of open sets on W, one can view W as a T0-Alexandroff space; thus, the algebraic structure of W can be studied from a topological point of view. It is shown that every nonempty open subset of W has a maximal element, which is also a maximal element of W. Moreover, a supremum of an irreducible subset of W always exists. As a notable connection with valuation theory, the case is considered in which S is a valuation domain and A is an algebraic field extension of F; if S is indecomposed in A, then W is an irreducible topological space and contains a greatest element.
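
As background (our gloss, not text from the abstract): an Alexandroff topology is one in which arbitrary intersections of open sets are open, so every point has a smallest open neighborhood, and a standard way to obtain a T0 example from a partially ordered set is to declare the up-sets open. Assuming the order on W is inclusion of S-nice subalgebras, that convention reads:

```latex
U \subseteq W \ \text{is open} \iff
\forall R \in U,\ \forall R' \in W :\ R \subseteq R' \Rightarrow R' \in U .
```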

Keywords: integral domains, Alexandroff topology, prime spectrum of a ring, valuation domains

Procedia PDF Downloads 109
9017 Rumination Time and Reticuloruminal Temperature around Calving in Eutocic and Dystocic Dairy Cows

Authors: Levente Kovács, Fruzsina Luca Kézér, Ottó Szenci

Abstract:

Prediction of the onset of calving and recognition of difficulties at calving are of great importance in decreasing neonatal losses and reducing the risk of health problems in the early postpartum period. In this study, changes in rumination time, reticuloruminal pH and temperature were investigated in eutocic (EUT, n = 10) and dystocic (DYS, n = 8) dairy cows around parturition. Rumination time was continuously recorded using an acoustic biotelemetry system, whereas reticuloruminal pH and temperature were recorded using an indwelling, wireless data-transmitting system. The recording period lasted from 3 d before calving until 7 days in milk. For the comparison of rumination time and reticuloruminal characteristics between groups, the time to return to baseline (the interval required to return to baseline after delivery of the calf) and the area under the curve (AUC, for both the prepartum and postpartum periods) were calculated for each parameter. Rumination time decreased from baseline 28 h before calving in both EUT and DYS cows (P = 0.023 and P = 0.017, respectively). From 20 h before calving it decreased further, reaching 32.4 ± 2.3 and 13.2 ± 2.0 min/4 h between 8 and 4 h before delivery in EUT and DYS cows, respectively, and falling below 10 and 5 min during the last 4 h before calving (P = 0.003 and P = 0.008, respectively). By 12 h after delivery, rumination time had reached 42.6 ± 2.7 and 51.0 ± 3.1 min/4 h in DYS and EUT dams, respectively; however, the AUC and time to return to baseline indicated lower rumination activity in DYS cows than in EUT dams over the 168-h postpartum observational period (P = 0.012 and P = 0.002, respectively). Reticuloruminal pH decreased from baseline 56 h before calving in both EUT and DYS cows (P = 0.012 and P = 0.016, respectively) but did not differ between groups before delivery. In DYS cows, reticuloruminal temperature decreased from baseline 32 h before calving by 0.23 ± 0.02 °C (P = 0.012), whereas in EUT cows such a decrease was found only 20 h before delivery (0.48 ± 0.05 °C, P < 0.01). The AUC of reticuloruminal temperature for the prepartum period was greater in EUT cows than in DYS cows (P = 0.042). During the first 4 h after calving, temperature decreased from 39.7 ± 0.1 to 39.0 ± 0.1 °C and from 39.8 ± 0.1 to 38.8 ± 0.1 °C in EUT and DYS cows, respectively (P < 0.01 for both groups), and reached baseline levels 35.4 ± 3.4 and 37.8 ± 4.2 h after calving in EUT and DYS cows, respectively. Based on our results, continuous monitoring of changes in rumination time and reticuloruminal temperature seems promising for the early detection of cows at higher risk of dystocia. The depressed postpartum rumination time of DYS cows highlights the importance of monitoring cows that experience difficulties at calving.

Keywords: reticuloruminal pH, reticuloruminal temperature, rumination time, dairy cows, dystocia

Procedia PDF Downloads 301
9016 A Review of Intelligent Fire Management Systems to Reduce Wildfires

Authors: Nomfundo Ngombane, Topside E. Mathonsi

Abstract:

Remote sensing and satellite imaging have been widely used to detect wildfires; nevertheless, these technologies have limitations for early wildfire detection, as they are greatly influenced by weather conditions and can miss small fires; fires need to have spread a few kilometers before the technologies provide accurate detection. The South African Advanced Fire Information System uses MODIS (Moderate Resolution Imaging Spectroradiometer) satellite imaging. MODIS has limitations, as it can exclude small fires and fall short in validating fire vulnerability. In future work, a machine learning algorithm will therefore be designed and implemented for the early detection of wildfires. A simulator will be used to evaluate the effectiveness of the proposed solution, and the results of the simulation will be presented.

Keywords: moderate resolution imaging spectroradiometer, advanced fire information system, machine learning algorithm, detection of wildfires

Procedia PDF Downloads 68
9015 Fast and Scale-Adaptive Target Tracking via PCA-SIFT

Authors: Yawen Wang, Hongchang Chen, Shaomei Li, Chao Gao, Jiangpeng Zhang

Abstract:

As the main challenges in target tracking are accounting for target scale change and running in real time, we combine the Mean-Shift and PCA-SIFT algorithms to address them. We introduce a similarity comparison method to determine how the target scale changes and adopt different strategies for different situations. Because a growing target scale causes location error, we employ backward tracking to reduce that error. The Mean-Shift algorithm performs poorly when tracking a scale-changing target because of the fixed bandwidth of its kernel function; to overcome this, we introduce PCA-SIFT matching, so that keypoint matching between target and template allows the scale of the tracking window to be adjusted adaptively. Because this scheme is sensitive to wrong matches, we introduce RANSAC to reduce mismatches as far as possible, and target relocation is triggered when the number of matches is too small. In addition, taking target deformation and error accumulation into account, we put forward a new template update method. Experiments on five image sequences and comparison with six other algorithms demonstrate the favorable performance of the proposed tracking algorithm.
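
A minimal sketch of how matched keypoints can drive the adaptive window scale, using a median of pairwise-distance ratios as a cheap stand-in for the RANSAC filtering the paper uses; the robust estimator and the tolerance are assumptions:

```python
import numpy as np

def estimate_scale(pts_template, pts_target):
    """Estimate target scale change from matched keypoint coordinates.

    The ratio of pairwise distances between matched points in the target
    frame and in the template approximates the scale factor; taking the
    median makes the estimate robust to residual mismatches."""
    pts_template = np.asarray(pts_template, dtype=float)
    pts_target = np.asarray(pts_target, dtype=float)
    i, j = np.triu_indices(len(pts_template), k=1)     # all point pairs
    d_template = np.linalg.norm(pts_template[i] - pts_template[j], axis=1)
    d_target = np.linalg.norm(pts_target[i] - pts_target[j], axis=1)
    valid = d_template > 1e-6                          # avoid division by ~0
    return np.median(d_target[valid] / d_template[valid])
```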

Keywords: target tracking, PCA-SIFT, mean-shift, scale-adaptive

Procedia PDF Downloads 419
9014 Space Time Adaptive Algorithm in Bi-Static Passive Radar Systems for Clutter Mitigation

Authors: D. Venu, N. V. Koteswara Rao

Abstract:

Space-time adaptive processing (STAP) is an effective tool for detecting a moving target in spaceborne or airborne radar systems. Airborne passive radar systems exploit broadcast, navigation and communication signals to perform various surveillance tasks and have attracted significant interest in the recent past, since they offer cost-effective systems compared to conventional active radars, and the abundance of illuminators makes them attractive for passive surveillance. Effective clutter suppression in bi-static passive radar, however, must work with a small number of secondary samples. This paper presents a framework for incorporating knowledge sources directly into the space-time beamformer of airborne adaptive radars: prior data banks are amalgamated with the existing radar data sets, yielding a STAP algorithm for clutter mitigation in passive bi-static radar with a quantified reduction in required sample size. We also propose a novel method to estimate the clutter covariance matrix and perform STAP for efficient clutter suppression based on a small sample size. The effectiveness of the proposed algorithm is verified using MATLAB simulations. In conclusion, this study highlights the value, for various applications, of augmenting traditional active radars with such cost-effective measures.
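
A sketch of the classic sample-matrix-inversion STAP weight computation that small-sample methods build on; the diagonal loading value is an assumption, and this is not the paper's knowledge-aided estimator:

```python
import numpy as np

def stap_weights(snapshots, steering):
    """Sample-matrix-inversion STAP beamformer.

    snapshots: (N, K) complex secondary data (N = space-time taps,
               K = sample support; small K is the regime the paper targets)
    steering:  (N,) space-time steering vector of the target
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    R += 1e-6 * np.eye(R.shape[0])                           # diagonal loading
    w = np.linalg.solve(R, steering)                         # R^{-1} v
    return w / (steering.conj() @ w)                         # MVDR normalization
```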

Keywords: bistatic radar, clutter, covariance matrix, passive radar, STAP

Procedia PDF Downloads 282
9013 Accuracy of Small Field of View CBCT in Determining Endodontic Working Length

Authors: N. L. S. Ahmad, Y. L. Thong, P. Nambiar

Abstract:

An in vitro study was carried out to evaluate the feasibility of small field of view (FOV) cone beam computed tomography (CBCT) in determining endodontic working length. The objectives were to determine the accuracy of CBCT in measuring estimated preoperative working lengths (EPWL), endodontic working lengths (EWL) and file lengths. Access cavities were prepared in 27 molars. For each root canal, the baseline electronic working length was determined using an electronic apex locator (EAL; Raypex 5). The teeth were then divided into overextended, non-modified and underextended groups, and the lengths were adjusted accordingly. Imaging and measurements were made using the respective software of the RVG (Kodak RVG 6100) and CBCT (Kodak 9000 3D) units. Root apices were then shaved, and the apical constrictions were viewed under magnification to measure the control working lengths. The paired t-test showed a statistically significant difference between the CBCT EPWL and the control length, but the difference was too small to be clinically significant. In the Bland-Altman analysis, the CBCT method had the widest range of 95% limits of agreement, reflecting its greater potential for error; in measuring file lengths, RVG had a wider window of 95% limits of agreement than CBCT. Conclusions: (1) The clinically insignificant underestimation of the preoperative working length using small FOV CBCT shows that it is acceptable for estimating the preoperative working length. (2) Small FOV CBCT may be used in working length determination, but it is not as accurate as the currently practiced method of using the EAL. (3) It is, however, more accurate than RVG in measuring file lengths.
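
The Bland-Altman limits of agreement referred to above reduce to a short computation; a minimal sketch, assuming paired measurements and the usual 1.96-sigma band:

```python
import numpy as np

def bland_altman_limits(method_a, method_b):
    """95% limits of agreement between two measurement methods
    (e.g., CBCT working lengths vs. the electronic apex locator)."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = diff.mean()                    # mean difference between methods
    spread = 1.96 * diff.std(ddof=1)      # 1.96 sample standard deviations
    return bias - spread, bias + spread
```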

Keywords: accuracy, CBCT, endodontics, measurement

Procedia PDF Downloads 291
9012 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm

Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn

Abstract:

Feature selection and attribute reduction are crucial problems and widely used techniques in machine learning, data mining and pattern recognition to overcome the well-known Curse of Dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. As the evaluation measure, we employ the fuzzy-rough dependency degree (FRDD) of the lower-approximation-based fuzzy-rough feature selection (L-FRFS), due to its effectiveness in feature selection. As the search strategy, a modified binary shuffled frog leaping algorithm (B-SFLA) is proposed. The feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers were employed to compare the proposed approach with several existing methods over twenty-two datasets from the UCI repository, including nine high-dimensional and large ones. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and classification accuracy.
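
A sketch of one plausible binary SFLA move, where a sigmoid converts the real-valued frog leap into bit-flip probabilities; the sigmoid transfer function is a common choice in binary metaheuristics and an assumption here, not necessarily the paper's B-SFLA operator:

```python
import numpy as np

def bsfla_step(worst, best, rng):
    """One binary shuffled-frog-leaping move: the worst frog in a memeplex
    jumps toward the best one; a sigmoid maps the real-valued step to a
    probability of each feature bit being set to 1."""
    step = rng.random(worst.shape) * (best - worst)   # real-valued leap
    prob = 1.0 / (1.0 + np.exp(-step))                # sigmoid transfer
    return (rng.random(worst.shape) < prob).astype(int)

rng = np.random.default_rng(0)
worst = np.array([0, 1, 0, 0, 1])  # candidate feature subset (bitmask)
best = np.array([1, 1, 0, 1, 0])
print(bsfla_step(worst, best, rng))
```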

Keywords: binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct

Procedia PDF Downloads 198
9011 The Effect of Acute Rejection and Delayed Graft Function on Renal Transplant Fibrosis in Live Donor Renal Transplantation

Authors: Wisam Ismail, Sarah Hosgood, Michael Nicholson

Abstract:

The research hypothesis was that early post-transplant allograft fibrosis is linked to donor factors and that acute rejection and/or delayed graft function in the recipient are independent risk factors for the development of fibrosis. The study explores whether acute rejection or delayed graft function affects renal transplant fibrosis within the first year after live donor kidney transplantation performed between 1998 and 2009. Methods: The study was designed around five time points for renal transplant biopsies [0 (pre-transplant), 1, 3, 6 and 12 months] in 300 live donor renal transplant patients over the 12-year period March 1997 - August 2009. Paraffin-fixed slides were collected from Leicester General Hospital and Leicester Royal Infirmary and routinely sectioned at a thickness of 4 µm for standardization. Conclusions: Fibrosis at 1 month after transplant was significantly associated with baseline fibrosis (p<0.001) and hypertension (HTN) in the transplant recipient (p<0.001); dialysis after transplant showed a weak association with fibrosis at 1 month (p=0.07). The negative coefficient for HTN (-0.05) suggests a reduction in fibrosis in the absence of HTN. Fibrosis at 1 month was significantly associated with fibrosis at baseline (p=0.01, 95% CI 0.11 to 0.67), whereas fibrosis at 3, 6 or 12 months was not (p=0.70, 0.65 and 0.50, respectively). The amount of fibrosis at 1 month was significantly associated with graft survival (p=0.01, 95% CI 0.02 to 0.14), and remained so (p=0.02) after adjusting for baseline fibrosis (p=0.01); both baseline fibrosis and graft survival were significant predictive factors. Rejection and rejection severity were not associated with fibrosis at 1 month: after adjusting for baseline fibrosis, p=0.64 (baseline p=0.01) and p=0.29 (baseline p=0.04), respectively. Fibrosis at baseline and HTN in the recipient were predictive factors of fibrosis at 1 month (p=0.02 and p<0.001, respectively), whereas donor age, relation to the patient, pre-operative creatinine, artery, kidney weight and warm time were not. In a more complex model, baseline fibrosis, HTN in the recipient and cold time were predictive factors of fibrosis at 1 month (p=0.01, <0.001 and 0.03, respectively). The analysis was repeated at 3, 6 and 12 months; no associations were detected between fibrosis and any of the explanatory variables, with the exception of donor age, which was a predictive factor of fibrosis at 6 months.

Keywords: fibrosis, transplant, renal, rejection

Procedia PDF Downloads 212
9010 Hybrid Bee Ant Colony Algorithm for Effective Load Balancing and Job Scheduling in Cloud Computing

Authors: Thomas Yeboah

Abstract:

Cloud computing is a new paradigm that promises the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet). Along with its many merits, it has crucial issues that need to be resolved to improve the reliability of the cloud environment; these issues relate to load balancing, fault tolerance and security. The main concern of this paper is to develop an effective load balancing algorithm that gives satisfactory performance to both cloud users and providers. The proposed hybrid Bee Ant Colony algorithm combines two dynamic algorithms: Ant Colony Optimization, used here to solve the load balancing issues, and the Bees Life algorithm, used to optimize job scheduling in the cloud environment. Evaluated on the CloudSim simulator, the hybrid Bee Ant Colony algorithm outperforms both the Ant Colony algorithm and the Bees Life algorithm in terms of waiting time and response time.

Keywords: ant colony optimization algorithm, bees life algorithm, scheduling algorithm, performance, cloud computing, load balancing

Procedia PDF Downloads 609
9009 Evolution of Multimodulus Algorithm Blind Equalization Based on Recursive Least Square Algorithm

Authors: Sardar Ameer Akram Khan, Shahzad Amin Sheikh

Abstract:

Blind equalization is an important branch of the equalization family. Multimodulus blind equalization algorithms remove the undesirable effects of ISI and also resolve phase issues, saving the cost of a rotator at the receiver end. In this paper, a new algorithm named RLSMMA, combining the recursive least squares and multimodulus algorithms, is proposed; under a few assumptions, fast convergence and a minimum mean square error (MSE) are achieved. The merit of this technique is shown in simulations presenting MSE plots and the resulting filter outputs.
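
A sketch of how an RLS gain can drive a multimodulus error term: the error definition below follows the standard MMA split into real and imaginary parts, while the pairing with RLS (signs, conjugation, scheduling) is an illustrative assumption rather than the exact RLSMMA recursion:

```python
import numpy as np

def rls_mma_update(w, P, x, lam=0.99, R2=1.0):
    """One RLS-driven multimodulus update (illustrative sketch).

    w: (N,) complex equalizer taps; P: (N, N) inverse-correlation matrix;
    x: (N,) complex regressor of received samples; R2: modulus constant.
    Treating real and imaginary parts separately is what removes the
    need for a carrier-phase rotator at the receiver."""
    z = w.conj() @ x                                   # equalizer output
    e = (R2 - z.real**2) * z.real + 1j * (R2 - z.imag**2) * z.imag
    k = P @ x / (lam + x.conj() @ P @ x)               # RLS gain vector
    w = w + k * e.conj()                               # tap update
    P = (P - np.outer(k, x.conj() @ P)) / lam          # inverse-correlation update
    return w, P
```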

Keywords: blind equalizations, constant modulus algorithm, multi-modulus algorithm, recursive least square algorithm, quadrature amplitude modulation (QAM)

Procedia PDF Downloads 627
9008 A Comparative Study of GTC and PSP Algorithms for Mining Sequential Patterns Embedded in Database with Time Constraints

Authors: Safa Adi

Abstract:

This paper considers the problem of mining sequential patterns embedded in a database while handling the time constraints defined in the GSP algorithm (a level-wise algorithm). We compare two previous approaches, GTC and PSP, which retain the general principles of GSP, and we also discuss the PG-hybrid algorithm, which combines PSP and GTC. The results show that PSP and GTC are more efficient than GSP, and that GTC performs better than PSP. The PG-hybrid algorithm uses the PSP algorithm for the first two passes over the database and the GTC approach for the following scans. Experiments show that the hybrid approach is very efficient for short, frequent sequences.

Keywords: database, GTC algorithm, PSP algorithm, sequential patterns, time constraints

Procedia PDF Downloads 367
9007 A Genetic Based Algorithm to Generate Random Simple Polygons Using a New Polygon Merge Algorithm

Authors: Ali Nourollah, Mohsen Movahedinejad

Abstract:

In this paper, a new algorithm to generate random simple polygons from a given set of points in a two-dimensional plane is designed. The proposed algorithm uses a genetic algorithm to generate polygons with few vertices. A new merge algorithm is presented that converts any two polygons into a simple polygon: it first merges the two polygons into a polygonal chain and then converts the chain into a simple polygon by removing intersecting edges. The merge algorithm has time complexity O((r+s)*l), where r and s are the sizes of the merging polygons and l is the number of intersecting edges removed from the polygonal chain; it is shown that 1 < l < r+s. The experimental results show that the proposed algorithm can generate a great number of different simple polygons and performs better than celebrated algorithms such as space partitioning and steady growth.
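
The edge-removal step relies on detecting intersecting edges; a standard orientation-based segment intersection test is sketched below (proper crossings only; collinear touching is deliberately ignored in this sketch):

```python
def segments_intersect(p1, p2, q1, q2):
    """Proper-intersection test for segments p1-p2 and q1-q2, of the kind
    used when scanning a polygonal chain for edges to remove."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o): sign gives orientation
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    # Each segment's endpoints must lie strictly on opposite sides of the other
    return (d1 * d2 < 0) and (d3 * d4 < 0)
```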

Keywords: divide and conquer, genetic algorithm, merge polygons, random simple polygon generation

Procedia PDF Downloads 516
9006 Study of Natural Patterns on Digital Image Correlation Using Simulation Method

Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish

Abstract:

Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in experimental mechanics. Compared with physical measuring devices such as strain gauges, which provide very restricted coverage and are expensive to deploy widely, DIC provides full-field coverage with relatively high accuracy using an inexpensive and simple experimental setup. Studying the effect of natural patterns on DIC is important because preparing artificial patterns is a time-consuming and laborious process. The objective of this research is to study the effect of images with natural patterns on the performance of DIC. A systematic simulation method is used to build the deformed images used in DIC. The subset size parameter affects the processing and accuracy of DIC and can even cause it to fail; regarding the picture parameters, a high correlation coefficient (similarity) between two subsets can likewise lead the DIC process to fail and make the result less accurate. Pictures of good and bad quality for DIC are presented and, more importantly, the approach provides a systematic way to evaluate the quality of a picture with natural patterns before the measurement devices are installed.
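
The correlation coefficient mentioned above is typically the zero-normalized cross-correlation between subsets; a minimal sketch:

```python
import numpy as np

def zncc(ref_subset, def_subset):
    """Zero-normalized cross-correlation between a reference subset and a
    candidate deformed subset (equal-shape 2D arrays). Values near 1 mean
    a confident match; a pattern whose subsets all score alike is a poor
    DIC pattern."""
    f = ref_subset - ref_subset.mean()
    g = def_subset - def_subset.mean()
    return (f * g).sum() / np.sqrt((f**2).sum() * (g**2).sum())
```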

Keywords: Digital Image Correlation (DIC), deformation simulation, natural pattern, subset size

Procedia PDF Downloads 402
9005 Multi-Criteria Test Case Selection Using Ant Colony Optimization

Authors: Niranjana Devi N.

Abstract:

Test case selection chooses a subset containing only the fit test cases and removes the unfit, ambiguous, redundant and unnecessary ones, which in turn improves the quality and reduces the cost of software testing. Test case optimization is the problem of finding the best subset of test cases from a pool of candidates to be audited, meeting all the objectives of testing concurrently. Most research, however, has evaluated the fitness of test cases on a single parameter, fault-detecting capability, and optimized the test cases against a single objective. In the proposed approach, nine parameters are considered for test case selection, and the best subset of parameters is obtained using an Interval Type-2 Fuzzy Rough Set. Test case selection is done in two stages: the first is a fuzzy entropy-based filtration technique, used to estimate and reduce the ambiguity in test case fitness evaluation and selection; the second is an ant colony optimization-based wrapper technique with a forward search strategy, employed to select test cases from the reduced test suite of the first stage. The results are evaluated using coverage parameters, precision, recall, F-measure, APSC, APDC, and SSR. The experimental evaluation demonstrates that this approach avoids considerable computational effort.
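
A sketch of the fuzzy entropy the first-stage filter could compute per test case, using the classical De Luca-Termini form; the paper's exact membership construction is not given in the abstract, so this form is an assumption:

```python
import numpy as np

def fuzzy_entropy(memberships):
    """De Luca-Termini fuzzy entropy of one test case's membership grades
    across fitness criteria; high entropy means an ambiguous fitness
    evaluation, so the case would be filtered out before the ACO stage."""
    mu = np.clip(np.asarray(memberships, dtype=float), 1e-12, 1 - 1e-12)
    return float(-(mu * np.log2(mu) + (1 - mu) * np.log2(1 - mu)).sum())

print(fuzzy_entropy([0.95, 0.9, 0.85]))  # crisp case: low entropy
print(fuzzy_entropy([0.5, 0.55, 0.45]))  # ambiguous case: high entropy
```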

Keywords: ant colony optimization, fuzzy entropy, interval type-2 fuzzy rough set, test case selection

Procedia PDF Downloads 646
9004 Orthogonal Basis Extreme Learning Algorithm and Function Approximation

Authors: Ying Li, Yan Li

Abstract:

A new algorithm for single hidden layer feedforward neural networks (SLFN), the Orthogonal Basis Extreme Learning (OBEL) algorithm, is proposed, and its derivation is given in the paper. The algorithm determines both the network parameters and the number of hidden neurons during training while providing extremely fast learning, offering a practical way to develop neural networks. Simulation results on function approximation show that the algorithm is effective and feasible, with good accuracy and adaptability.
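
OBEL's derivation is not given in the abstract, but the extreme-learning family it belongs to trains an SLFN in one least-squares solve; a generic (non-orthogonal-basis) sketch for contrast, with the hidden size and activation as assumptions:

```python
import numpy as np

def train_elm(X, T, hidden=50, seed=0):
    """Extreme-learning-style SLFN training: random hidden-layer weights,
    then a single least-squares solve for the output weights; no
    iterative back-propagation is involved."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))  # random input weights
    b = rng.standard_normal(hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                         # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                   # Moore-Penrose solution
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```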

Keywords: neural network, orthogonal basis extreme learning, function approximation

Procedia PDF Downloads 518
9003 An IM-COH Algorithm Neural Network Optimization with Cuckoo Search Algorithm for Time Series Samples

Authors: Wullapa Wongsinlatam

Abstract:

The back-propagation algorithm (BP) is a widely used technique for training artificial neural networks and has been applied to time series problems; open issues include decreasing training time, avoiding local minima, and reducing sensitivity to the initial weights and bias. This paper proposes an improvement of BP called the IM-COH algorithm. By combining IM-COH with the cuckoo search algorithm (CS), the result is the cuckoo search improved control output hidden layer algorithm (CS-IM-COH). This new algorithm is less sensitive to the initial weights and bias than the original BP algorithm. In this research, CS-IM-COH is compared with the original BP, IM-COH, and the original BP with CS (CS-BP) on four selected benchmark time series samples, which are shown for illustration. The research shows that the CS-IM-COH algorithm gives the best forecasting results on the selected samples.
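
A sketch of the Lévy-flight step that cuckoo search uses to propose new candidate solutions, via Mantegna's method; beta=1.5 is a conventional choice, not a value taken from the paper:

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, seed=None):
    """Levy-flight step (Mantegna's method), the long-tailed random move
    cuckoo search uses to place new nests far from current solutions."""
    rng = np.random.default_rng(seed)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# A new candidate around a current solution x: x + 0.01 * levy_step(len(x))
```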

Keywords: artificial neural networks, back propagation algorithm, time series, local minima problem, metaheuristic optimization

Procedia PDF Downloads 132
9002 De-Novo Structural Elucidation from Mass/NMR Spectra

Authors: Ismael Zamora, Elisabeth Ortega, Tatiana Radchenko, Guillem Plasencia

Abstract:

Structure elucidation from mass spectrometry (MS) data of unknown substances is an unresolved problem that affects many different fields of application. A recent overview of software available for structure elucidation of small molecules has shown the demand for an efficient computational tool able to elucidate unknown small molecules and peptides. We developed an algorithm for de novo fragment analysis based on MS data that proposes a set of scored and ranked structures compatible with the MS and MS/MS spectra. Several algorithm variants were developed, differing in the initial set of fragments and the structure-building process, and in all cases several scores for the final molecule ranking were computed. They were validated with small and medium-sized databases on an eleven-compound test set; similar results were obtained from any database that contained the fragments of the expected compound. In summary, we present an algorithm for de novo fragment analysis based on MS data alone that proposes a set of scored and ranked structures; it was validated on different types of databases and showed good results as a proof of concept. Moreover, the structures proposed from the MS data were submitted to NMR spectrum prediction in order to determine which of them was compatible with the collected NMR spectra.

Keywords: De Novo, structure elucidation, mass spectrometry, NMR

Procedia PDF Downloads 269
9001 Rituximab Therapy for Musculoskeletal Involvement in Systemic Sclerosis

Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva

Abstract:

Objectives: There are very few data on changes in musculoskeletal manifestations (arthritis, arthralgia, muscle weakness, etc.) in systemic sclerosis (SSc) under rituximab (RTX) therapy. The aim of our study was to assess the severity of musculoskeletal involvement in SSc patients (pts) and its changes during RTX therapy. Methods: Our study included 103 pts with SSc. The mean follow-up period was 12.6±10.7 months. The mean age was 47±12.9 years; 87 pts (84%) were female, and 55 pts (53%) had the diffuse cutaneous subset of the disease. The mean disease duration was 6.2±5.5 years. All pts had interstitial lung disease (ILD) and were positive for ANA; 67% were positive for antitopoisomerase-1. All pts received prednisolone at a dose of 11.3±4.5 mg/day, and 47% received immunosuppressants at inclusion. Pts received RTX because of the ineffectiveness of previous therapy for ILD. The cumulative mean dose of RTX was 1.7±0.6 grams. Arthritis was observed in 22 pts (21%), arthralgias in 47 pts (46%), muscle weakness in 17 pts (17%), and tendon friction rubs in 7 pts (7%). Results at baseline and at the end of follow-up are presented as mean values. Results: There was an improvement in all outcome parameters and musculoskeletal manifestations on RTX therapy: the number of pts with arthritis decreased from 22 (21%) to 10 (9%), with arthralgias from 47 (46%) to 31 (30%), with muscle weakness from 17 (17%) to 7 (7%), and with tendon friction rubs from 7 (7%) to 3 (3%). Creatine phosphokinase decreased from 365.5±186 to 70.8±50.4 (p=0.00006), and C-reactive protein (CRP) from 23.2±31.3 to 8.62±7.4 (p=0.001). The dose of prednisolone was reduced from 11.3±4.5 to 9.8±3.5 mg/day (p=0.0004). Conclusion: In our study, musculoskeletal involvement was detected in almost half of the patients with SSc-ILD. Musculoskeletal manifestations improved despite a small cumulative dose of RTX, and the dose of glucocorticosteroids could be reduced. The improvement was accompanied by a decrease in the laboratory parameters creatine phosphokinase and CRP. RTX is an effective option for treating musculoskeletal manifestations in SSc.

Keywords: arthritis, musculoskeletal involvement, systemic sclerosis, rituximab

Procedia PDF Downloads 66
9000 An Optimized RDP Algorithm for Curve Approximation

Authors: Jean-Pierre Lomaliza, Kwang-Seok Moon, Hanhoon Park

Abstract:

It is well known that the Ramer-Douglas-Peucker (RDP) algorithm depends greatly on the choice of starting points. This paper therefore focuses on finding starting points that optimize the results of the RDP algorithm. Specifically, it proposes a curve approximation algorithm that finds flat points, called essential points, of an input curve, divides the curve into corner-like sub-curves using the essential points, and applies the RDP algorithm to each sub-curve. The number of essential points plays a role in optimizing the approximation by balancing the degree of shape information loss against the amount of data reduction. Through experiments with curves of various types and shape complexities, we compared the performance of the proposed algorithm with three other methods, namely the RDP algorithm itself and its variants. The proposed algorithm outperformed the others in terms of maintaining the original shape of the input curve, which is important in applications such as pattern recognition.
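
For reference, the classic recursive RDP step whose starting-point sensitivity the paper addresses; this is a standard textbook formulation, not the proposed essential-point variant:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification: keep the point farthest from
    the chord between the endpoints; recurse only if it exceeds epsilon."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1) or 1e-12  # guard zero-length chord
    dmax, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        # Perpendicular distance from (x, y) to the chord
        d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / chord
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:
        # Split at the farthest point and simplify each half
        return rdp(points[:idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)
    return [points[0], points[-1]]
```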

Keywords: curve approximation, essential point, RDP algorithm

Procedia PDF Downloads 514
8999 A New Dual Forward Affine Projection Adaptive Algorithm for Speech Enhancement in Airplane Cockpits

Authors: Djendi Mohmaed

Abstract:

In this paper, we propose a dual adaptive algorithm based on the combination of the forward blind source separation (FBSS) structure and the affine projection algorithm (APA). The proposed algorithm combines the source separation properties of the FBSS structure with the fast convergence of the APA. It needs two noisy observations to produce an enhanced speech signal, and the process is blind, requiring no a priori information about the source signals. The proposed dual forward blind source separation affine projection algorithm, denoted DFAPA, is used for the first time in an airplane cockpit context to enhance communication to and from the airplane. Intensive experiments were carried out to evaluate the performance of the proposed DFAPA algorithm.
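
A sketch of a single affine-projection update of the kind the DFAPA structure relies on; the projection order, step size and regularization shown are assumptions:

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-4):
    """One affine-projection update over the last P input vectors.

    X: (N, P) matrix whose columns are the P most recent input vectors
    d: (P,) desired samples; projecting onto several past input
       directions at once is what gives APA its fast convergence on
       correlated signals such as speech."""
    e = d - X.T @ w                                  # a-priori errors
    G = X.T @ X + delta * np.eye(X.shape[1])         # regularized Gram matrix
    return w + mu * X @ np.linalg.solve(G, e)
```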

Keywords: adaptive algorithm, speech enhancement, system mismatch, SNR

Procedia PDF Downloads 119
8998 Solving a Micromouse Maze Using an Ant-Inspired Algorithm

Authors: Rolando Barradas, Salviano Soares, António Valente, José Alberto Lencastre, Paulo Oliveira

Abstract:

This article reviews Ant Colony Optimization, a nature-inspired algorithm, and its implementation in the Scratch/m-Block programming environment. Ant Colony Optimization belongs to the Swarm Intelligence family of algorithms, a subset of biologically inspired algorithms. The starting problem is a maze in which one must find a path to the center and return to the starting position, much as an ant looks for a path to a food source and returns to its nest. Starting from the implementation of a simple wall-follower simulator, the proposed solution uses a dynamic graphical interface that allows young students to observe the ants' movement while the algorithm optimizes the routes to the maze's center. Interface usability, data structures, and the conversion of algorithmic language to Scratch syntax were some of the details addressed during this implementation. This gives young students an easier way to understand the computational concepts of sequences, loops, parallelism, data, events, and conditionals, as they are used throughout the implemented algorithms. Future work includes simulations with real contest mazes and two different pheromone update methods, comparison with the optimized results of the winners of each edition of the contest, and the creation of a Digital Twin relating the virtual simulator to a real micromouse in a full-size maze. The first test results show that the algorithm found the same optimized solutions as the winners of each edition of the Micromouse contest, making this a good solution for maze pathfinding.

Keywords: nature inspired algorithms, scratch, micromouse, problem-solving, computational thinking

Procedia PDF Downloads 104
8997 A High-Level Co-Evolutionary Hybrid Algorithm for the Multi-Objective Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a hybrid distributed algorithm is suggested for the multi-objective job shop scheduling problem, with several new approaches used in its design. The co-evolutionary structure of the algorithm, and the competition between different communicating hybrid algorithms executed simultaneously, lead to an efficient search, while distributing the algorithms over several machines, at both the iteration and solution levels, increases computational speed. The proposed algorithm is able to find the Pareto solutions of large problems in a shorter time than other algorithms in the literature. The Apache Spark and Hadoop platforms were used to distribute the algorithm. The suggested algorithm and its implementations were compared with the results of successful algorithms in the literature, and the results confirm the efficiency and high speed of the algorithm.

Keywords: distributed algorithms, Apache Spark, Hadoop, job shop scheduling, multi-objective optimization

Procedia PDF Downloads 350
8996 A Transform Domain Function Controlled VSSLMS Algorithm for Sparse System Identification

Authors: Cemil Turan, Mohammad Shukri Salman

Abstract:

The convergence rate of the least-mean-square (LMS) algorithm deteriorates if the input signal to the filter is correlated. In a system identification problem, the convergence rate can be improved if the signal is white and/or the system is sparse. We recently proposed a sparse transform-domain LMS-type algorithm that uses a variable step size for sparse system identification and provides high performance even when the input signal is highly correlated. In this work, we investigate the performance of the proposed TD-LMS algorithm for a large number of filter taps, which is also a critical issue for the standard LMS algorithm. Additionally, the optimum value of the most important parameter is calculated for all experiments, and a convergence analysis of the proposed algorithm is provided. The performance of the proposed algorithm is compared with that of other algorithms in sparse system identification settings with different sparsity levels and different numbers of filter taps. Simulations show that the proposed algorithm performs prominently compared to the other algorithms.
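
The paper's step size is controlled by a transform-domain function not detailed in the abstract; as a generic illustration only, an error-driven variable-step-size LMS rule in the style of Kwong and Johnston looks like this, with all constants assumed:

```python
import numpy as np

def vss_lms_step(w, x, d, mu, alpha=0.97, gamma=1e-3,
                 mu_min=1e-4, mu_max=0.1):
    """One LMS update with a simple error-driven variable step size:
    a large error inflates mu for fast convergence; a small error
    shrinks it for low steady-state misadjustment."""
    e = d - w @ x                                     # a-priori error
    mu = np.clip(alpha * mu + gamma * e**2, mu_min, mu_max)
    return w + mu * e * x, mu                         # updated taps and step
```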

Keywords: adaptive filtering, sparse system identification, TD-LMS algorithm, VSSLMS algorithm

Procedia PDF Downloads 337
8995 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform

Authors: Omaima N. Ahmad AL-Allaf

Abstract:

Over communication networks, images can easily be copied and distributed illegally, so copyright protection for authors and owners is necessary, and digital watermarking techniques play an important role as a valid solution to this problem. Digital image watermarking techniques hide watermarks in images to achieve copyright protection and prevent illegal copying; watermarks need to be robust to attacks while maintaining data quality. We therefore discuss two approaches to image watermarking, one based on Particle Swarm Optimization (PSO) and the other on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with each approach separately in the embedding process for the cover image transformation. Both PSO and GA use the correlation coefficient to detect high-energy coefficients in the original image in which to hide the watermark bits. Many experiments were conducted for the two approaches with different PSO and GA parameter values. The PSO approach obtained better results, with a PSNR of 53 and MSE of 0.0039, whereas the GA approach reached a PSNR of 50.5 and MSE of 0.0048 when using a population size of 100, 150 iterations and 3×3 blocks. The results indicate that a small block size can affect the quality of PSO/GA-based image watermarking, because a small block size increases the search area over the watermarked image. The best PSO results were obtained with a swarm size of 100.
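
A sketch of DWT-domain embedding into high-energy coefficients, the operation that the PSO/GA search optimizes in the paper; the Haar wavelet, the cH subband, the strength alpha, and the availability of the pywt package are all assumptions:

```python
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, alpha=8.0):
    """Additively embed watermark bits into the largest-magnitude detail
    coefficients of a one-level Haar DWT; a fixed largest-coefficient
    choice stands in for the PSO/GA-optimized placement."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    flat = cH.reshape(-1)                                 # view into cH
    idx = np.argsort(np.abs(flat))[-len(watermark_bits):]  # high-energy slots
    flat[idx] += alpha * (2 * np.asarray(watermark_bits) - 1)  # bit -> +/- alpha
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')
```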

Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform

Procedia PDF Downloads 210