Search results for: small baseline subset algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9001

9001 InSAR Times-Series Phase Unwrapping for Urban Areas

Authors: Hui Luo, Zhenhong Li, Zhen Dong

Abstract:

The analysis of multi-temporal InSAR (MTInSAR) techniques such as persistent scatterer (PS) and small baseline subset (SBAS) usually relies on temporal/spatial phase unwrapping (PU). Unfortunately, PU often fails for two reasons: 1) spatial phase jumps between adjacent pixels larger than π, as in layover and highly discontinuous terrain; 2) temporal phase discontinuities such as time-varying atmospheric delay. To overcome these limitations, a least-squares-based PU method is introduced in this paper, which incorporates baseline-combination interferograms and an adjacent phase gradient network. Firstly, persistent scatterers (PS) are selected for study. Starting with the linear baseline-combination method, we obtain equivalent 'small baseline interferograms' to limit the spatial phase difference. Then, phase differencing is conducted between connected PSs (connected by a specific networking rule) to suppress spatially correlated phase errors such as atmospheric artifacts. After that, the phase differences along arcs are computed by the least-squares method, followed by an outlier detector to remove arcs with phase ambiguities. Then, the unwrapped phase is obtained by spatial integration (sketched below). The proposed method is tested on real TerraSAR-X data, and the results are compared with those obtained by StaMPS (a software package with 3D PU capabilities). The comparison shows that the proposed method can successfully unwrap interferograms in urban areas even where high discontinuities exist, while StaMPS fails. Finally, precise DEM errors can be derived from the unwrapped interferograms.
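
To illustrate the spatial-integration step, the following minimal numpy sketch solves for per-point phases from unwrapped phase differences along arcs by least squares. The toy network, arc list, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def integrate_arcs(n_points, arcs, arc_phase, ref=0):
    """Least-squares spatial integration: recover per-point phases from
    unwrapped phase differences (phi_j - phi_i) along arcs (i, j)."""
    A = np.zeros((len(arcs), n_points))
    for k, (i, j) in enumerate(arcs):
        A[k, i], A[k, j] = -1.0, 1.0
    keep = [c for c in range(n_points) if c != ref]  # fix reference phase to 0
    sol, *_ = np.linalg.lstsq(A[:, keep], np.asarray(arc_phase), rcond=None)
    phase = np.zeros(n_points)
    phase[keep] = sol
    return phase

# toy network: 4 PS points, 5 arcs, true phases [0, 1, 2, 4]
true = np.array([0.0, 1.0, 2.0, 4.0])
arcs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
grad = [true[j] - true[i] for i, j in arcs]
print(integrate_arcs(4, arcs, grad))  # ~[0. 1. 2. 4.]
```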

Keywords: phase unwrapping, time series, InSAR, urban areas

Procedia PDF Downloads 117
9000 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise from processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit (see the sketch below). With this strategy, the RAM requirement is reduced to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain keeps temporarily coherent pixels which are present only in certain units rather than in the whole observation period. The chain supports real-time processing of the continuous data, and the delay in creating displacement maps is shortened because there is no need to wait for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images in a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. The temporal-averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the level of a few sub-millimetres are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring in a wide range of scientific and practical applications.
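
As a rough illustration of the unit-by-unit strategy, the sketch below buffers a continuous image stream into fixed-size windows so that only one unit is ever held in RAM. The unit size, overlap, and the `process_unit` placeholder are assumptions for illustration, not the package's actual interface.

```python
import numpy as np

def process_in_units(image_stream, unit_size=50, overlap=5):
    """Consume a (possibly endless) stream of GBSAR images unit by unit;
    only `unit_size` images are ever held in memory at once."""
    buffer = []
    for image in image_stream:
        buffer.append(image)
        if len(buffer) == unit_size:
            yield process_unit(np.stack(buffer))
            buffer = buffer[-overlap:]  # keep a few images to link adjacent units

def process_unit(stack):
    # placeholder for per-unit small-baseline processing: interferogram
    # formation, coherent-pixel selection, and displacement inversion
    return stack.mean(axis=0)

# example: 200 synthetic 64x64 images consumed as a stream
stream = (np.random.default_rng(i).normal(size=(64, 64)) for i in range(200))
for result in process_in_units(stream):
    pass  # each `result` would be a per-unit displacement product
```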

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 126
8999 A Comparative Analysis on QRS Peak Detection Using BIOPAC and MATLAB Software

Authors: Chandra Mukherjee

Abstract:

This paper presents work in the field of ECG signal analysis on the MATLAB 7.1 platform. An accurate and simple ECG feature extraction algorithm is presented, and the developed algorithm is validated using BIOPAC software. To detect the QRS peak, the ECG signal is processed through the following stages: first derivative, second derivative, and squaring of the second derivative. The efficiency of the developed algorithm is tested on ECG samples from different databases and on real-time ECG signals acquired with the BIOPAC system. First, a lead-wise threshold value is specified; the samples above that value are marked, and the points in the original signal where these marked samples undergo a change of slope are identified as R-peaks (a sketch of these stages follows). On the left and right sides of the R-peak, the nearest changes of slope are identified as the Q and S peaks, respectively. The built-in detection algorithm of the BIOPAC software is then run on the same samples and the two outputs are compared. ECG baseline modulation correction is performed after the characteristic points have been detected. The efficiency of the algorithm is assessed using validation parameters such as sensitivity and positive predictivity, for which satisfactory values were obtained.
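
The following Python sketch illustrates the derivative, squaring, thresholding, and slope-reversal stages described above. The threshold ratio and refractory window are illustrative assumptions, not values from the paper (whose implementation is in MATLAB).

```python
import numpy as np

def detect_r_peaks(ecg, fs, thresh_ratio=0.6):
    """Derivative-based R-peak detection: first derivative, second
    derivative, squaring, thresholding, then slope reversal in the
    original signal."""
    d1 = np.gradient(ecg)                 # first derivative
    d2 = np.gradient(d1)                  # second derivative
    feature = d2 ** 2                     # squaring emphasises sharp QRS slopes
    marked = feature > thresh_ratio * feature.max()
    peaks = []
    for i in np.flatnonzero(marked[1:-1]) + 1:
        if ecg[i - 1] < ecg[i] >= ecg[i + 1]:          # slope reversal (local max)
            if not peaks or i - peaks[-1] > 0.2 * fs:  # simple refractory period
                peaks.append(i)
    return np.array(peaks)

# synthetic test: one spike per second on a noisy baseline, sampled at 250 Hz
fs = 250
t = np.arange(0, 5, 1 / fs)
ecg = 0.05 * np.random.default_rng(0).normal(size=t.size)
ecg[::fs] += 1.0
print(detect_r_peaks(ecg, fs))            # [250 500 750 1000]
```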

Keywords: first derivative, variable threshold, slope reversal, baseline modulation correction

Procedia PDF Downloads 375
8998 Accelerated Structural Reliability Analysis under Earthquake-Induced Tsunamis by Advanced Stochastic Simulation

Authors: Sai Hung Cheung, Zhe Shao

Abstract:

Recent earthquake-induced tsunamis in Padang, 2004 and Tohoku, 2011 brought huge losses of lives and properties. Maintaining vertical evacuation systems is the most crucial strategy to effectively reduce casualties during a tsunami event. Thus, it is of great interest to quantify the risk to structural dynamic systems due to earthquake-induced tsunamis. Despite continuous advancement in computational simulation of tsunamis and wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability (or its complement, the failure probability) of a structural dynamic system when uncertainties related to the system and its modeling are taken into account. Failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding specified thresholds during the time when the structure is subjected to dynamic wave impact due to earthquake-induced tsunamis. In this paper, an approach based on a novel integration of the Subset Simulation algorithm (sketched below) and a recently proposed moving least squares response surface approach for stochastic sampling is proposed. The effectiveness of the proposed approach is discussed by comparing its results with those obtained from the Subset Simulation algorithm without the response surface approach.
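
Subset Simulation itself is a standard rare-event estimator; the sketch below is a plain textbook version (without the paper's response-surface acceleration) that estimates a small failure probability for a toy limit state in standard normal space.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, seed=0):
    """Estimate the failure probability P[g(X) > 0] for standard normal X
    using Subset Simulation with a simple Metropolis resampler."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    y = np.array([g(xi) for xi in x])
    p_f, n_seed = 1.0, int(p0 * n)
    for _ in range(20):                            # at most 20 levels
        order = np.argsort(y)[::-1]                # sort by g, descending
        b = y[order[n_seed - 1]]                   # intermediate threshold
        if b > 0:                                  # failure level reached
            return p_f * np.mean(y > 0)
        p_f *= p0
        seeds, seed_y = x[order[:n_seed]], y[order[:n_seed]]
        new_x, new_y = [], []
        for s, sy in zip(seeds, seed_y):           # grow one chain per seed
            cur, cur_y = s.copy(), sy
            for _ in range(n // n_seed):
                cand = cur + 0.5 * rng.standard_normal(dim)
                if rng.random() < np.exp(0.5 * (cur @ cur - cand @ cand)):
                    cand_y = g(cand)
                    if cand_y >= b:                # stay inside current subset
                        cur, cur_y = cand, cand_y
                new_x.append(cur.copy())
                new_y.append(cur_y)
        x, y = np.array(new_x), np.array(new_y)
    return p_f * np.mean(y > 0)

# toy limit state: failure when the sum of 10 standard normals exceeds 9;
# the exact answer is 1 - Phi(9 / sqrt(10)) ~ 2.2e-3
print(subset_simulation(lambda v: v.sum() - 9.0, dim=10))
```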

Keywords: response surface model, subset simulation, structural reliability, tsunami risk

Procedia PDF Downloads 337
8997 Efficacy and Safety by Baseline A1c with Once-Weekly Dulaglutide in the AWARD Program

Authors: Alaa Mostafa, Samuel Dagogo-Jack, Vivian Thieu, Maria Yu, Nan Zhang, Dara Schuster, Luis-Emilio Garcia-Perez

Abstract:

Dulaglutide (DU), a once-weekly glucagon-like peptide-1 receptor agonist, was studied in the AWARD clinical trial program in adult patients with type 2 diabetes (T2D) and demonstrated significant hemoglobin A1c (A1c) reduction and potential for weight loss. To evaluate the efficacy and safety of DU 1.5 mg and DU 0.75 mg in patients with T2D by baseline A1c <8.5% or ≥8.5%, a post-hoc analysis of AWARD-1 to -6 and -8 at 6 months was conducted. Across the 7 studies, 55% to 82% of DU-treated patients had a baseline A1c <8.5%, and 18% to 45% had a baseline A1c ≥8.5%. The ranges of A1c reductions with baseline A1c <8.5% and ≥8.5%, respectively, were: DU 1.5 mg: -0.67% to -1.25% and -1.22% to -2.37%; DU 0.75 mg: -0.53% to -1.07% and -1.37% to -2.19%. In the pooled analysis, the A1c reduction was greater in patients with baseline A1c ≥8.5% than in patients with baseline A1c <8.5%: DU 1.5 mg: -1.86% and -1.02%; DU 0.75 mg: -1.75% and -0.83%. DU treatment was well tolerated in both baseline A1c subgroups. Across the AWARD program, DU 1.5 mg and DU 0.75 mg demonstrated significant A1c reduction in both subgroups with an acceptable safety profile. Compared to patients with baseline A1c <8.5%, patients with baseline A1c ≥8.5% had a greater A1c reduction. Disclosures: This study was supported and conducted by Eli Lilly and Company, Indianapolis, IN, USA.

Keywords: A1c reduction, dulaglutide, type 2 diabetes, weight loss

Procedia PDF Downloads 369
8996 A Discrete Logit Survival Model with a Smooth Baseline Hazard for Age at First Alcohol Intake among Students at Tertiary Institutions in Thohoyandou, South Africa

Authors: A. Bere, H. G. Sithuba, K. Kyei, C. Sigauke

Abstract:

We employ a discrete logit survival model to investigate the risk factors for early alcohol intake among students at two tertiary institutions in Thohoyandou, South Africa. Data were collected from a sample of 744 students using a self-administered questionnaire. Significant covariates were identified through a regularization algorithm implemented using the glmmLasso package. The tuning parameter was determined using a five-fold cross-validation algorithm. The baseline hazard was modelled as a smooth function of time through the use of spline functions. The results show that the hazard of initial alcohol intake peaks at the age of about 16 years and that, at any given time, being male, prior use of other drugs, having drinking peers, and having experienced negative life events or physical abuse are associated with a higher risk of alcohol intake debut.
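
The study fits its model with the glmmLasso package in R; as a rough illustration of the underlying discrete-time logit hazard idea, the sketch below expands synthetic data into person-period form and fits an ordinary logistic regression in Python. All names and numbers here are invented for illustration, and censoring is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 744
male = rng.integers(0, 2, n)
# synthetic age at first drink: earlier, on average, for males
onset = np.clip(rng.normal(16 + 2 * (1 - male), 2, n).round(), 10, 25)

# person-period expansion: one row per person per year at risk
rows = [{"age": a, "male": male[i], "event": int(a == onset[i])}
        for i in range(n) for a in range(10, int(onset[i]) + 1)]
pp = pd.DataFrame(rows)

# age enters via polynomial terms as a crude stand-in for the paper's splines
X = sm.add_constant(pp[["male"]].assign(age=pp.age, age2=pp.age ** 2))
fit = sm.Logit(pp["event"], X).fit(disp=0)
print(fit.params)  # positive 'male' coefficient -> higher hazard for males
```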

Keywords: cross-validation, discrete hazard model, LASSO, smooth baseline hazard

Procedia PDF Downloads 156
8995 The Different Ways to Describe Regular Languages by Using Finite Automata and the Changing Algorithm Implementation

Authors: Abdulmajid Mukhtar Afat

Abstract:

This paper introduces finite automata theory and the different ways to describe regular languages, and presents a program that implements the subset construction algorithm to convert a nondeterministic finite automaton (NFA) to a deterministic finite automaton (DFA). The program is written in the C++ programming language. It reads an FA 5-tuple from a text file and classifies it as either a DFA or an NFA. For a DFA, the program reads a string w and decides whether it is accepted or not; if accepted, the program saves and displays the tracking path. When the automaton is an NFA, the program converts it to a DFA so that the string is easy to track and the program can decide whether w belongs to the regular language or not (a Python sketch of the construction follows).
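
For reference, here is the classic subset construction in compact form. The paper's program is written in C++; this Python sketch (epsilon moves omitted) only illustrates the algorithm itself.

```python
from collections import deque

def nfa_to_dfa(alphabet, delta, start, accept):
    """Subset construction: DFA states are frozensets of NFA states.
    delta maps (state, symbol) to a set of NFA states."""
    start_set = frozenset([start])
    dfa_delta, accepting = {}, set()
    queue, seen = deque([start_set]), {start_set}
    while queue:
        S = queue.popleft()
        if S & accept:
            accepting.add(S)
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return dfa_delta, start_set, accepting

# NFA accepting strings over {0,1} that end in '01'
delta = {("q0", "0"): {"q0", "q1"}, ("q0", "1"): {"q0"}, ("q1", "1"): {"q2"}}
d, s0, acc = nfa_to_dfa("01", delta, "q0", {"q2"})

def accepts(w):
    s = s0
    for c in w:
        s = d[(s, c)]            # deterministic tracking, as in the paper
    return s in acc

print(accepts("1101"), accepts("110"))  # True False
```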

Keywords: finite automata, subset construction, DFA, NFA

Procedia PDF Downloads 402
8994 Investigation of the Speckle Pattern Effect for Displacement Assessments by Digital Image Correlation

Authors: Salim Çalışkan, Hakan Akyüz

Abstract:

Digital image correlation (DIC) has become established as a versatile and efficient method for measuring displacements on specimen surfaces by comparing reference subsets in the undeformed image with target subsets in the deformed image. Theoretical models show that the accuracy of DIC displacement data can be predicted from the variance of the image noise and the sum of squares of the subset intensity gradients. The DIC procedure locates each subset of the original image in the deformed image; the software then determines the displacements of the subset centres, providing the complete displacement field. In this paper, the effect of the speckle distribution on measured out-of-plane displacement data was investigated as a function of subset size (a minimal matching sketch follows). Nine groups of speckle patterns were used in this study: samples were sprayed randomly through pre-manufactured patterns of three different hole diameters, each with three coverage ratios, produced on a computer numerical control punch press. The resulting displacement values, referenced at the centre of the subset, are evaluated based on the average of the displacements of the pixels interior to the subset.
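
A minimal integer-pixel DIC matcher is sketched below to make the subset-matching step concrete. It uses zero-normalised cross-correlation over a small search window; real DIC software adds sub-pixel interpolation and subset shape functions, which are omitted here.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalised cross-correlation of two equally sized subsets."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def match_subset(ref, dfm, cx, cy, half, search):
    """Find where the reference subset centred at (cy, cx) moved to in the
    deformed image, to integer-pixel accuracy."""
    tpl = ref[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best, uv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = dfm[cy + dy - half:cy + dy + half + 1,
                      cx + dx - half:cx + dx + half + 1]
            c = zncc(tpl, win)
            if c > best:
                best, uv = c, (dx, dy)
    return uv, best

rng = np.random.default_rng(0)
ref = rng.random((100, 100))                      # synthetic speckle image
dfm = np.roll(ref, shift=(2, 3), axis=(0, 1))     # known shift: v=2, u=3
print(match_subset(ref, dfm, cx=50, cy=50, half=10, search=5))
# ((3, 2), 1.0): the subset is found 3 px right and 2 px down
```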

Keywords: digital image correlation, speckle pattern, experimental mechanics, tensile test, aluminum alloy

Procedia PDF Downloads 38
8993 Spatial Analysis in the Impact of Aquifer Capacity Reduction on Land Subsidence Rate in Semarang City between 2014-2017

Authors: Yudo Prasetyo, Hana Sugiastu Firdaus, Diyanah Diyanah

Abstract:

The lack of clean water supply in several big cities in Indonesia is a major problem for the development of urban areas, and in the city of Semarang the population density and the growth of physical development are very high. Continuous and large-scale extraction of underground water (from the aquifer) can cause the aquifer supply to decline drastically year by year, especially given the intensity of aquifer use for household needs and industrial activities. This is worsened by the land subsidence phenomenon in some areas of Semarang. Therefore, dedicated research is needed to establish the spatial correlation between decreasing aquifer capacity and land subsidence, and to confirm that land subsidence can be caused by the loss of pressure balance below the land surface. One way to observe the correlation between the two phenomena is the application of remote sensing technology based on radar and optical satellites. Applying the Differential Interferometric Synthetic Aperture Radar (DInSAR) or Small Baseline Area Subset (SBAS) method to SENTINEL-1A images acquired in the 2014-2017 period yields the pattern of land subsidence. These results are spatially correlated with the aquifer-decline pattern over the same period, using surveys of 8 monitoring wells deeper than 100 m to observe the multi-temporal pattern of aquifer capacity change. In addition, the aquifer capacity pattern is validated against 2 groundwater-cavity maps from the ministry of energy and mineral resources (ESDM) for Semarang city. The spatial correlation between the land subsidence pattern and aquifer capacity is analysed using overlay and statistical methods. The result shows how strongly the decrease in groundwater capacity influences the distribution and intensity of land subsidence in Semarang city. The results are also analysed with respect to geological aspects such as hydrogeological parameters, soil types, aquifer types, and geological structures. The outcome of this study is a correlation map of aquifer capacity decline and land subsidence in Semarang city for the period 2014-2017, which should help the authorities in the future spatial planning of the city.

Keywords: aquifer, differential interferometric synthetic aperture radar (DINSAR), land subsidence, small baseline area subset (SBAS)

Procedia PDF Downloads 147
8992 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation, where a person proceeds sequentially through a list of images, labeling a sufficiently high total number of examples. Instead, the method presented here involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion (see the loop sketched below). The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find the images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data curated by this algorithm resulted in a model with better performance than a model produced by sequentially labeling the same amount of data, and with performance similar to a model trained on an exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
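
The select-label-retrain loop itself is simple; the sketch below shows its shape with dummy stand-ins for the detector and for the U-shaped novelty scorer described above (all functions here are placeholders, not the authors' code).

```python
import random

def train(labeled):                     # stand-in for detector training
    return {"n_seen": len(labeled)}

def novelty_score(model, img):          # stand-in for the U-net novelty map
    return random.random()

def human_label(batch):                 # stand-in for manual annotation
    return list(batch)

pool = list(range(100))                 # image ids awaiting labels
labeled, unlabeled, k = human_label(pool[:10]), pool[10:], 10
for _ in range(3):                      # three select-label-retrain rounds
    model = train(labeled)
    ranked = sorted(unlabeled, key=lambda im: novelty_score(model, im),
                    reverse=True)       # most novel images first
    batch = ranked[:k]
    labeled += human_label(batch)
    unlabeled = [im for im in unlabeled if im not in batch]
print(len(labeled))                     # 40 labeled images after 3 rounds
```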

Keywords: computer vision, deep learning, object detection, semiconductor

Procedia PDF Downloads 103
8991 K-Means Clustering-Based Infinite Feature Selection Method

Authors: Seyyedeh Faezeh Hassani Ziabari, Sadegh Eskandari, Maziar Salahi

Abstract:

The Infinite Feature Selection (IFS) algorithm is an efficient feature selection algorithm that ranks and selects features by considering feature subsets of all sizes (including infinity). In this paper, we present an improved version of it, called clustering IFS (CIFS), obtained by clustering the dataset in advance: first we apply the K-means algorithm to cluster the dataset, then we apply IFS (one plausible reading of this pipeline is sketched below). In the CIFS method, the space and time complexities are reduced compared to the IFS method. Experimental results on 6 datasets show the superiority of CIFS over IFS in terms of accuracy, running time, and memory consumption.
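
The abstract does not spell out how the clusters and IFS interact, so the sketch below shows one plausible reading: partition the samples with K-means, then run feature selection on each smaller partition. Mutual information stands in for the actual graph-based IFS ranking, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=600, n_features=30, random_state=0)
parts = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

selected = set()
for c in range(3):
    mask = parts == c
    # per-cluster feature scoring on a much smaller partition
    scores = mutual_info_classif(X[mask], y[mask], random_state=0)
    selected |= set(np.argsort(scores)[-5:])   # keep top-5 features per cluster
print(sorted(selected))
```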

Keywords: feature selection, infinite feature selection, clustering, graph

Procedia PDF Downloads 88
8990 Accelerated Evaluation of Structural Reliability under Tsunami Loading

Authors: Sai Hung Cheung, Zhe Shao

Abstract:

It is of great interest to quantify the risk to structural dynamic systems due to earthquake-induced tsunamis in view of the recent earthquake-induced tsunamis in Padang, 2004 and Tohoku, 2011, which brought huge losses of lives and properties. Despite continuous advancement in computational simulation of tsunamis and wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability of a structural dynamic system when uncertainties related to the system and its modeling are taken into account. Failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding specified thresholds during the time when the structure is subjected to dynamic wave impact due to earthquake-induced tsunamis. In this paper, an approach based on a novel integration of a recently proposed moving least squares response surface approach for stochastic sampling and the Subset Simulation algorithm (see the Subset Simulation sketch under entry 8998 above) is proposed. The effectiveness of the proposed approach is discussed by comparing its results with those obtained from the Subset Simulation algorithm without the response surface approach.

Keywords: response surface, stochastic simulation, structural reliability, tsunami risk

Procedia PDF Downloads 644
8989 Tabu Random Algorithm for Guiding Mobile Robots

Authors: Kevin Worrall, Euan McGookin

Abstract:

The use of optimization algorithms is common across a large number of diverse fields. This work presents a hybrid optimization algorithm applied to a mobile robot tasked with searching an unknown environment. The algorithm is then applied to the multiple-robot case, which results in a reduction in the time taken to carry out the search. The hybrid algorithm is a random search algorithm fused with a Tabu mechanism (a sketch is given below). The work shows that the algorithm locates the desired points more quickly than a brute-force search. The Tabu Random algorithm is shown to work within a simulated environment using a validated mathematical model. The simulation was run using three different environments with varying numbers of targets. As an algorithm, the Tabu Random is small and clear and can be implemented with minimal resources. The power of the algorithm lies in the speed at which it locates points of interest and in its robustness to the number of robots involved: the number of robots can vary with no changes to the algorithm, making it flexible.
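
A compact rendering of the hybrid is given below: pure random sampling of the search space, plus a Tabu list that forbids re-sampling near recently visited points. The parameter values and the fitness function are illustrative assumptions.

```python
import random

def tabu_random_search(fitness, bounds, iters=500, tabu_len=50, radius=1.0):
    """Random search fused with a Tabu mechanism: candidates falling inside
    the neighbourhood of a recently visited point are skipped."""
    tabu, best, best_f = [], None, float("inf")
    for _ in range(iters):
        cand = [random.uniform(lo, hi) for lo, hi in bounds]
        if any(sum((c - t) ** 2 for c, t in zip(cand, past)) < radius ** 2
               for past in tabu):
            continue                       # tabu: too close to a recent sample
        tabu = (tabu + [cand])[-tabu_len:]
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

# locate a point of interest at (7, 3) in a 20 m x 20 m field
fit = lambda p: (p[0] - 7.0) ** 2 + (p[1] - 3.0) ** 2
print(tabu_random_search(fit, [(0, 20), (0, 20)]))
```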

Keywords: algorithms, control, multi-agent, search and rescue

Procedia PDF Downloads 210
8988 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of the original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection focuses on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound to and combined with the search, or as the search's alternative and replacement. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm (sketched below). First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, using a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses clustering bound to and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
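
The three steps can be sketched as follows, with an F-score standing in for the paper's class-correlation and separability measures and a fixed-size agglomerative clustering standing in for its automatic feature clustering; the key point is step 3, where picking a feature discards its cluster-mates.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif

X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                           random_state=0)

# 1) remove clearly irrelevant features with a class-correlation filter
F, _ = f_classif(X, y)
relevant = np.flatnonzero(F > np.median(F))

# 2) cluster the remaining features by correlation distance
corr = np.corrcoef(X[:, relevant].T)
labels = AgglomerativeClustering(
    n_clusters=8, metric="precomputed", linkage="average"
).fit_predict(1 - np.abs(corr))

# 3) sequential selection: take the best feature, drop its whole cluster
remaining, subset = list(range(len(relevant))), []
while remaining:
    best = max(remaining, key=lambda i: F[relevant[i]])
    subset.append(int(relevant[best]))
    remaining = [i for i in remaining if labels[i] != labels[best]]
print(sorted(subset))   # one representative feature per cluster
```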

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix

Procedia PDF Downloads 96
8987 Random Subspace Ensemble of CMAC Classifiers

Authors: Somaiyeh Dehghan, Mohammad Reza Kheirkhahan Haghighi

Abstract:

The rapid growth of domains whose data have a large number of features but a limited number of samples has made it difficult to construct strong classifiers, and reducing the dimensionality of the feature space has become an essential step in classification tasks. The random subspace method (or attribute bagging) is an ensemble classifier consisting of several classifiers, in which each base learner is trained on a subset of the features. In the present paper, we introduce the Random Subspace Ensemble of CMAC neural networks (RSE-CMAC), in which each CMAC is trained on a subset of the features, and use this model for the classification task (a generic random-subspace sketch follows). To evaluate the performance of our model, we compare it with the bagging algorithm on 36 UCI datasets. The results reveal that the new model performs better.
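
The random subspace idea itself can be reproduced in a few lines with scikit-learn; since CMAC has no standard implementation there, an MLP is used as a stand-in base learner in this sketch.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
base = make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000,
                                                     random_state=0))
rse = BaggingClassifier(
    estimator=base,
    n_estimators=10,
    max_features=0.5,    # each learner sees a random half of the features
    bootstrap=False,     # all samples: pure random subspace, not bagging
    random_state=0)
print(cross_val_score(rse, X, y, cv=5).mean())
```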

Keywords: classification, random subspace, ensemble, CMAC neural network

Procedia PDF Downloads 296
8986 An Algorithm for Herding Cows by a Swarm of Quadcopters

Authors: Jeryes Danial, Yosi Ben Asher

Abstract:

Algorithms for controlling a swarm of robots are an active research field, and cattle herding is one of the most complex problems in it. In this paper, we derive an independent herding algorithm specifically designed for a swarm of quadcopters. The algorithm works by devising flight trajectories that cause the cows to run away in the desired direction and hence herds cows distributed in a given field towards a common gathering point. Unlike previously proposed swarm herding algorithms, this algorithm does not use a flocking model but rather steers each cow separately. The effectiveness of the algorithm is verified experimentally using a simulator. We use a special set of experiments to demonstrate that the herding time of this algorithm is proportional to the field diameter times a small constant, regardless of the number of cows in the field. This is an optimal result, indicating that the algorithm gathers the cows into intermediate groups and herds them as one, forming ever bigger groups as they close in.

Keywords: swarm, independent, distributed, algorithm

Procedia PDF Downloads 142
8985 Opportunities of Clean Development Mechanism through Hydropower in Nepal

Authors: Usha Khatiwada

Abstract:

Nepal’s overall energy baseline: it has been proposed that hydropower projects for domestic consumption could earn CDM revenue in Nepal if a new methodology were established that takes into account not only the consumption of grid electricity in Nepal but also other fuels, such as kerosene, diesel, and firewood, used by the vast majority of the population for lighting and other needs. However, this would mean combining grid electricity supply and consumers not supplied from the grid into one methodology, and such a sweeping baseline may have only a very small chance of success with the CDM Executive Board.

Keywords: environment, clean development mechanism, hydropower, Nepal

Procedia PDF Downloads 370
8984 Aerodynamic Investigation of Baseline-IV Bird-Inspired BWB Aircraft Design: Improvements over Baseline-III BWB

Authors: C. M. Nur Syazwani, M. K. Ahmad Imran, Rizal E. M. Nasir

Abstract:

The study of BWB UAVs at UiTM began in 2005, and three designs have been studied and published. The most recent, Baseline-III, is inspired by birds, having the features and aerodynamic behaviour of a cruising bird without flapping capability; its planform and configuration are similar to the bird's. Baseline-III has major flaws, particularly its low lift-to-drag ratio, its stability, and issues regarding limited controllability. The new design, known as Baseline-IV, replaces the straight, swept wing with a delta wing and has a broader tail than Baseline-III's. The objective of this study is to investigate the aerodynamics of the Baseline-IV bird-inspired BWB aircraft through theoretical calculation and wind tunnel experiments. The results show that the theoretical and wind tunnel curves of CL and CD versus alpha for Baseline-IV are quite similar to each other in the pattern of the graph slopes and in their values. Baseline-IV has higher lift coefficients over a wide range of angles of attack compared to Baseline-III, as well as a higher maximum lift coefficient, a higher maximum lift-to-drag ratio, and lower parasite drag. It has a stable pitch-moment-versus-lift slope but a negative moment at zero lift for a zero angle-of-attack tail setting. At high angles of attack, Baseline-IV does not exhibit the stability reversal seen in Baseline-III. Baseline-IV is thus shown to improve on Baseline-III in terms of lift, lift-to-drag ratio, and pitch-moment stability at high angles of attack.

Keywords: blended wing-body, bird-inspired blended wing-body, aerodynamic, stability

Procedia PDF Downloads 478
8983 A Hybrid ICA-GA Algorithm for Solving Multiobjective Optimization of Production Planning Problems

Authors: Omar Ramzi Jasim, Jalal Sultan Ashour

Abstract:

Production planning, or the Master Production Schedule (MPS), is a key interface between marketing and manufacturing, since it links customer service directly to the efficient use of production resources. Mismanagement of the MPS is considered one of the fundamental problems in operations, and it can potentially lead to poor customer satisfaction. In this paper, a hybrid evolutionary algorithm (ICA-GA) is presented, which integrates the merits of both the imperialist competitive algorithm (ICA) and the genetic algorithm (GA) for solving multi-objective MPS problems. In the presented algorithm, the colonies in each empire are represented as a small population and communicate with each other using genetic operators. Tests on 5 production scenarios show numerically the efficiency and capability of the hybrid ICA-GA algorithm in finding optimum solutions. The ICA-GA solutions yield lower inventory levels, keep customer satisfaction high, and require less overtime, compared with the results of GA and SA in all production scenarios.

Keywords: master production scheduling, genetic algorithm, imperialist competitive algorithm, hybrid algorithm

Procedia PDF Downloads 439
8982 Secure Message Transmission Using Meaningful Shares

Authors: Ajish Sreedharan

Abstract:

Visual cryptography encodes a secret image into shares of random binary patterns. If the shares are printed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of the transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the shares, however, have no visual meaning, which hinders the objectives of visual cryptography. In the proposed secret message transmission through meaningful shares, the secret message to be transmitted is first converted to a grayscale image, and (2,2) visual cryptographic shares are generated from this image (the share construction is sketched below). The shares are encrypted using a chaos-based image encryption algorithm using the wavelet transform. Two separate color images of the same size as the shares are taken as cover images, and the respective shares are hidden in them, so that a potential eavesdropper will not know there is a message to be read. The meaningful shares are transmitted through two different transmission media. During decoding, the shares are extracted from the received meaningful images, decrypted using the same chaos-based image encryption algorithm, and combined to regenerate the grayscale image from which the secret message is obtained.
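
For concreteness, here is a standard (2,2) share construction, where each secret pixel expands to a 2x2 block: white pixels yield identical blocks in both shares, black pixels complementary ones, so stacking the transparencies reveals the secret. This sketches only the share-generation step, not the paper's chaos-based encryption or steganographic covering.

```python
import numpy as np

def make_shares(secret, rng=np.random.default_rng(0)):
    """(2,2) visual cryptography shares; secret is binary (1 = black)."""
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    patterns = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]          # random half-black block
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # white pixel: same block in both shares; black: complement
            s2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
    return s1, s2

secret = np.array([[0, 1], [1, 0]], dtype=np.uint8)
a, b = make_shares(secret)
print(a | b)   # stacking: black pixels become all-black 2x2 blocks
```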

Keywords: visual cryptography, wavelet transform, meaningful shares, grey scale image

Procedia PDF Downloads 417
8981 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information: for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data are used. Usually, however, the proportion of complete data is rather small, which leads to most of the information being neglected; the complete cases may also be strongly biased. In addition, the reason that data are missing might itself contain information, which is ignored by that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (the baseline). It is also interesting to see how different algorithms affect the result. The imputation of the missing data is done using unsupervised learning; out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data in the model, the distortion of the first training set (the complete data) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data (a small imputation sketch follows). After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on imputed data sets do not differ significantly from each other, whereas the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimate. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
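
The performance-measurement step (mask known values, impute, compare) can be sketched as below; the two scikit-learn imputers stand in for the clustering, PCA, and neural-network approaches named above, and the two-column data set is a synthetic placeholder for the subscription fields.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, KNNImputer

rng = np.random.default_rng(0)
rooms = rng.integers(1, 6, 500).astype(float)
price = 800 * rooms + rng.normal(0, 300, 500)   # synthetic stand-in data
X = np.column_stack([rooms, price])

mask = rng.random(X.shape) < 0.3                # hide 30% of the entries
X_miss = np.where(mask, np.nan, X)

for imp in (KNNImputer(n_neighbors=5), IterativeImputer(random_state=0)):
    X_hat = imp.fit_transform(X_miss)
    rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
    print(type(imp).__name__, round(rmse, 1))   # lower RMSE = better imputer
```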

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 258
8980 A Blind Three-Dimensional Mesh Watermarking Using the Interquartile Range

Authors: Emad E. Abdallah, Alaa E. Abdallah, Bajes Y. Alskarnah

Abstract:

We introduce a robust three-dimensional watermarking algorithm for copyright protection and indexing. The basic idea behind our technique is to measure the interquartile range, i.e. the spread, of the 3D model's vertices. The algorithm starts by converting all vertices to spherical coordinates and partitioning them into small groups. It then slightly alters the interquartile range distribution of the small groups according to a predefined watermark (an illustrative sketch follows). Experimental results on several 3D meshes demonstrate perceptual invisibility and the robustness of the proposed technique against the most common attacks, including compression, noise, smoothing, scaling, and rotation, as well as combinations of these attacks.
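
A toy version of the embedding idea is sketched below: radial coordinates are grouped, and each group's spread about its median is widened or shrunk by one watermark bit. The embedding rule and strength are invented for illustration; the paper's actual alteration of the interquartile range distribution is not reproduced.

```python
import numpy as np

def embed_bit(radii, bit, strength=0.02):
    """Widen (bit=1) or shrink (bit=0) a group's spread about its median."""
    med = np.median(radii)
    factor = 1 + strength if bit else 1 - strength
    return med + (radii - med) * factor

# Cartesian vertices -> radial (spherical r) coordinates -> sorted groups
verts = np.random.default_rng(0).normal(size=(300, 3))
r = np.sort(np.linalg.norm(verts, axis=1))
groups = np.array_split(r, 10)                   # 10 groups, 1 bit each
watermark = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
marked = [embed_bit(g, b) for g, b in zip(groups, watermark)]

# sanity check: each bit moves its group's IQR in the intended direction
iqr = lambda a: np.subtract(*np.percentile(a, [75, 25]))
for g, m, b in zip(groups, marked, watermark):
    assert (iqr(m) > iqr(g)) == bool(b)
print("all bits embedded as intended")
```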

Keywords: watermarking, three-dimensional models, perceptual invisibility, interquartile range, 3D attacks

Procedia PDF Downloads 437
8979 Theorem on Inconsistency of The Classical Logic

Authors: T. J. Stepien, L. T. Stepien

Abstract:

This abstract concerns an extremely fundamental issue: the fundamental problem of science is the issue of consistency. We present a theorem saying that the classical calculus of quantifiers is inconsistent in the traditional sense. We first introduce notation and then recall the definition of consistency in the traditional sense. S1 is the set of all well-formed formulas of the calculus of quantifiers. RS1 denotes the set of all rules over the set S1. Cn(R, X) is the set of all formulas standardly provable from X by the rules R, where R is a subset of RS1 and X is a subset of S1. The pair <R, X> is called a system whenever R is a subset of RS1 and X is a subset of S1. Definition: the system <R, X> is consistent in the traditional sense if there does not exist any formula in S1 such that both this formula and its negation are provable from X using the rules in R. Finally, <R0+, L2> denotes the classical calculus of quantifiers, where R0+ consists of Modus Ponens and the generalization rule, and L2 is the set of all formulas valid in the classical calculus of quantifiers. The main result: the system <R0+, L2> is inconsistent in the traditional sense (see the formal statement below).
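
In symbols, using the notation of the abstract (with Cons abbreviating "consistent in the traditional sense"), the definition and the main result read:

```latex
\[
  \mathrm{Cons}(\langle R, X\rangle) \iff
  \neg\,\exists\,\varphi \in S_1 \;
  \bigl(\varphi \in \mathrm{Cn}(R,X) \wedge \neg\varphi \in \mathrm{Cn}(R,X)\bigr)
\]
\[
  \textbf{Theorem: } \neg\,\mathrm{Cons}\bigl(\langle R_0^{+}, L_2\rangle\bigr)
\]
```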

Keywords: classical calculus of quantifiers, classical predicate calculus, generalization rule, consistency in the traditional sense, Modus Ponens

Procedia PDF Downloads 173
8978 Image Inpainting Model with Small-Sample Size Based on Generative Adversarial Network and Genetic Algorithm

Authors: Jiawen Wang, Qijun Chen

Abstract:

The performance of most machine-learning methods for image inpainting depends on the quantity and quality of the training samples. However, it is very expensive or even impossible to obtain a large number of training samples in many scenarios. In this paper, an image inpainting model based on a generative adversarial network (GAN) is constructed for cases where the number of training samples is small. Firstly, a feature extraction network (F-net) is incorporated into the GAN to utilize the available information in the image to be inpainted; the weighted sum of the extracted feature and random noise acts as the input to the generative network (G-net). The proposed network can be trained well even when the sample size is very small. Secondly, in the completion phase for each damaged image, a genetic algorithm is designed to search for an optimized noise input for G-net; based on this optimized input, the parameters of G-net and F-net are further tuned (once the completion of a given damaged image ends, the parameters are restored to the original values obtained in the training phase) so as to generate an image patch that not only fills the missing part of the damaged image smoothly but also has visual semantics.

Keywords: image inpainting, generative adversarial nets, genetic algorithm, small sample size

Procedia PDF Downloads 96
8977 A 5G Architecture Based on Dynamic Vehicular Clustering Enhancing VoD Services over Vehicular Ad Hoc Networks

Authors: Lamaa Sellami, Bechir Alaya

Abstract:

Nowadays, video-on-demand (VoD) applications are becoming one of the main trends driving vehicular network usage. In this paper, considering the unpredictable vehicle density, the unexpected acceleration or deceleration of the different cars in the vehicular traffic load, and the limited radio range of the employed communication scheme, we introduce the Dynamic Vehicular Clustering (DVC) algorithm as a new scheme for video streaming systems over VANETs. The proposed algorithm takes advantage of the concept of small cells and of the introduction of wireless backhauls, inspired by the features and performance of the Long Term Evolution (LTE)-Advanced network. The proposed clustering algorithm considers multiple characteristics, such as vehicle position and acceleration, to reduce latency and packet loss. Each cluster is thus treated as a small cell containing vehicular nodes and an access point that is elected according to particular specifications.

Keywords: video-on-demand, vehicular ad-hoc network, mobility, vehicular traffic load, small cell, wireless backhaul, LTE-advanced, latency, packet loss

Procedia PDF Downloads 107
8976 Assessment of Memetic and Genetic Algorithm for a Flexible Integrated Logistics Network

Authors: E. Behmanesh, J. Pannek

Abstract:

The distribution-allocation problem is known as one of the most comprehensive strategic decisions. In real-world cases, it is impossible to solve a distribution-allocation problem by traditional methods in acceptable time, so researchers have developed efficient non-traditional techniques for the long-term operation of the whole supply chain. These techniques provide near-optimal solutions, particularly for large-scale test problems. This paper presents an integrated supply chain model that is flexible in the delivery path. As the solution methodology, we apply a memetic algorithm with a novel population representation. To illustrate the performance of the proposed memetic algorithm, the LINGO optimization software serves as a comparison basis for small-size problems; for the large-size cases encountered in the real world, the genetic algorithm is considered as a second metaheuristic to compare results and show the efficiency of the memetic algorithm.

Keywords: integrated logistics network, flexible path, memetic algorithm, genetic algorithm

Procedia PDF Downloads 338
8975 A Natural Killer T Cell Subset That Protects against Airway Hyperreactivity

Authors: Ya-Ting Chuang, Krystle Leung, Ya-Jen Chang, Rosemarie H. DeKruyff, Paul B. Savage, Richard Cruse, Christophe Benoit, Dirk Elewaut, Nicole Baumgarth, Dale T. Umetsu

Abstract:

We examined characteristics of a Natural Killer T (NKT) cell subpopulation that developed during influenza infection in neonatal mice, and that suppressed the subsequent development of allergic asthma in a mouse model. This NKT cell subset expressed CD38 but not CD4, produced IFN-γ, but not IL-17, IL-4 or IL-13, and inhibited the development of airway hyperreactivity (AHR) through contact-dependent suppressive activity against helper CD4 T cells. The NKT subset expanded in the lungs of neonatal mice after infection with influenza, but also after treatment of neonatal mice with a Th1-biasing α-GalCer glycolipid analogue, Nu-α-GalCer. These results suggest that early/neonatal exposure to infection or to antigenic challenge can affect subsequent lung immunity by altering the profile of cells residing in the lung and that some subsets of NKT cells can have direct inhibitory activity against CD4+ T cells in allergic asthma. Importantly, our results also suggest a potential therapy for young children that might provide protection against the development of asthma.

Keywords: NKT subset, asthma, airway hyperreactivity, hygiene hypothesis, influenza

Procedia PDF Downloads 203
8974 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech

Authors: Monica Gonzalez Machorro

Abstract:

Dementia is hard to diagnose because of the lack of early physical symptoms, and early dementia recognition is key to improving the living conditions of patients. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech, and BERT-like classifiers have reported the most promising performance. One constraint, nonetheless, is that these studies are based either on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer's disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized; the subset is balanced for class, age, and gender, and data processing involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a Random Forest on 20 acoustic features extracted with the librosa library in Python: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D audio representations into 2D representations, and a linear layer is added (a minimal sketch of this head follows). The pre-trained model used is 'hubert-large-ls960-ft'. Empirically, the number of epochs selected is 5, and the batch size is 1. Experiments show that our proposed method reaches a 69% balanced accuracy, suggesting that the linguistic and speech information encoded in the self-supervised ASR-based model is able to learn acoustic cues of AD and MCI.
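
A minimal version of the classification head described above can be sketched with the transformers library as follows. Loading the CTC-fine-tuned checkpoint into the bare encoder class drops its CTC head, and the training loop, batching, and Pitt-corpus preprocessing are omitted; this only illustrates the average-pooling-plus-linear-layer design.

```python
import torch
import torch.nn as nn
from transformers import HubertModel

class HubertClassifier(nn.Module):
    """HuBERT encoder + mean pooling over time + linear head."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = HubertModel.from_pretrained(
            "facebook/hubert-large-ls960-ft")
        self.head = nn.Linear(self.encoder.config.hidden_size, n_classes)

    def forward(self, waveform):            # (batch, samples) at 16 kHz
        hidden = self.encoder(waveform).last_hidden_state  # (B, T, 1024)
        pooled = hidden.mean(dim=1)         # average pooling: 3D -> 2D
        return self.head(pooled)

model = HubertClassifier()
logits = model(torch.randn(1, 160000))      # one 10-second segment
print(logits.shape)                         # torch.Size([1, 2])
```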

Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment

Procedia PDF Downloads 90
8973 Evaluating the Baseline Characteristics of Static Balance in Young Adults

Authors: K. Abuzayan, H. Alabed

Abstract:

The objectives of this study (baseline study, n = 20) were to implement Matlab procedures for quantifying selected static balance variables, to establish baseline data for selected variables which characterize static balance activities in a population of healthy young adult males, and to examine any trial effects on these variables. The results indicated that the implementation of Matlab procedures for quantifying selected static balance variables was practical and enabled baseline data to be established for the selected variables. There was no significant trial effect. Recommendations were made for suitable tests to be used in later studies. Specifically, it was found that the one-foot-tiptoes test of static balance is too challenging for most participants in normal circumstances, whereas the one-foot-flat, eyes-open test was considered representative and suitably challenging for static balance.

Keywords: static balance, base of support, baseline data, young adults

Procedia PDF Downloads 487
8972 Optical Variability of Faint Quasars

Authors: Kassa Endalamaw Rewnu

Abstract:

The variability properties of a quasar sample, spectroscopically complete to magnitude J = 22.0, are investigated on a time baseline of 2 years using three different photometric bands (U, J and F). The original sample was obtained using a combination of different selection criteria: colors, slitless spectroscopy, and variability, based on a time baseline of 1 yr. The main goals of this work are two-fold: first, to derive the percentage of variable quasars on a relatively short time baseline; second, to search for new quasar candidates missed by the other selection criteria and thus to estimate the completeness of the spectroscopic sample. In order to achieve these goals, we extracted all the candidate variable objects from a sample of about 1800 stellar or quasi-stellar objects with limiting magnitude J = 22.50 over an area of about 0.50 deg². We find that > 65% of all the objects selected as possible variables are either confirmed quasars or quasar candidates on the basis of their colors. This percentage increases even further if we exclude from our lists of variable candidates a number of objects equal to that expected on the basis of 'contamination' induced by our photometric errors. The percentage of variable quasars in the spectroscopic sample is also high, reaching about 50%. On the basis of these results, we estimate that the incompleteness of the original spectroscopic sample is < 12%. We conclude that variability analysis of data with small photometric errors can be successfully used as an efficient and independent (or at least auxiliary) selection method in quasar surveys, even when the time baseline is relatively short. Finally, when corrected for the different intrinsic time lags corresponding to a fixed observed time baseline, our data do not show a statistically significant correlation between variability and either absolute luminosity or redshift.

Keywords: nuclear activity, galaxies, active quasars, variability

Procedia PDF Downloads 45