Search results for: subset size
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5774

5774 Investigation of the Speckle Pattern Effect for Displacement Assessments by Digital Image Correlation

Authors: Salim Çalışkan, Hakan Akyüz

Abstract:

Digital image correlation has become established as a versatile and efficient method for measuring displacements on specimen surfaces by comparing reference subsets in the undeformed image with the corresponding target subsets in the deformed image. Theoretical models indicate that the accuracy of digital image correlation displacement data can be predicted from the variance of the image noise and the sum of the squares of the subset intensity gradients. The digital image correlation procedure locates each subset of the original image in the deformed image. The software then determines the displacements of the subset centers, yielding the full-field displacement measurements. In this paper, the effect of the speckle distribution on measured out-of-plane displacement data was investigated as a function of subset size. Nine groups of speckle patterns were used in this study: samples were sprayed randomly through pre-manufactured patterns with three different hole diameters, each at three coverage ratios, produced on a computer numerical control punch press. The resulting displacement value, referenced at the center of the subset, is evaluated as the average of the displacements of the pixels interior to the subset.
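
The subset-matching step described here can be sketched in a few lines. The following Python illustration (not the authors' software) locates a reference subset in the deformed image by maximizing the zero-normalized cross-correlation over integer-pixel shifts:

```python
import numpy as np

def zncc(a, b):
    # zero-normalized cross-correlation of two equally sized subsets
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_subset(ref_img, def_img, center, half, search=10):
    """Integer-pixel displacement (u, v) of the subset centred at `center`."""
    cy, cx = center
    ref = ref_img[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best, best_uv = -1.0, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            y, x = cy + v, cx + u
            cand = def_img[y - half:y + half + 1, x - half:x + half + 1]
            if cand.shape != ref.shape:
                continue  # candidate window falls outside the image
            c = zncc(ref, cand)
            if c > best:
                best, best_uv = c, (u, v)
    return best_uv, best

ref = np.random.rand(200, 200)
deformed = np.roll(ref, shift=(3, -2), axis=(0, 1))  # known rigid shift
print(match_subset(ref, deformed, center=(100, 100), half=10))  # ((-2, 3), ~1.0)
```

In practice, DIC refines this integer estimate to sub-pixel accuracy and repeats it for a grid of subsets, which is how subset size enters the accuracy trade-off studied here.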

Keywords: digital image correlation, speckle pattern, experimental mechanics, tensile test, aluminum alloy

Procedia PDF Downloads 36
5773 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach

Authors: Mohammad H. Almomani

Abstract:

In this paper, we investigate the effect of the initial sample size and the increment in simulation samples on the performance of a sequential approach used to select the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that overlaps with the set of actual best k% designs with high probability. In the second stage, the optimal computing budget is used to select the top m designs from the selected subset. We apply the selection approach to a generic example under several parameter settings, with different choices of initial sample size and increment in simulation samples, to explore the impact on the performance of this approach. The results show that the choice of initial sample size and increment in simulation samples does affect the performance of the selection approach.
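
The two-stage procedure can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: the stand-in simulator, the stage-1 retention fraction `keep`, and the stage-2 rule of granting each increment to the least-settled design are all assumptions (the paper's second stage uses optimal computing budget allocation), and smaller means are taken to be better.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = rng.normal(size=1000)          # generic example, unknown to the algorithm

def simulate(design, n):
    # stand-in stochastic simulator with unit noise
    return rng.normal(loc=true_means[design], scale=1.0, size=n)

def select_top_m(n_designs, m, n0=10, delta=5, budget=20000, keep=0.1):
    samples = {d: list(simulate(d, n0)) for d in range(n_designs)}
    spent = n0 * n_designs
    # Stage 1: ordinal optimization -- keep the top k% by crude sample mean
    k = max(m, int(keep * n_designs))
    subset = sorted(samples, key=lambda d: np.mean(samples[d]))[:k]
    # Stage 2: grant increments of size delta until the budget is exhausted
    while spent + delta <= budget:
        d = max(subset, key=lambda d: np.std(samples[d]) / np.sqrt(len(samples[d])))
        samples[d].extend(simulate(d, delta))
        spent += delta
    return sorted(subset, key=lambda d: np.mean(samples[d]))[:m]

print(select_top_m(n_designs=1000, m=5))    # rerun with other n0, delta to compare
```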

Keywords: large scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization

Procedia PDF Downloads 320
5772 Study of Natural Patterns on Digital Image Correlation Using Simulation Method

Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish

Abstract:

Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in the field of experimental mechanics. Compared with physical measuring devices such as strain gauges, which only provide very restricted coverage and are expensive to deploy widely, the DIC technique provides results with full-field coverage and relatively high accuracy using an inexpensive and simple experimental setup. It is important to study the effect of natural patterns on the DIC technique because the preparation of artificial patterns is a time-consuming and laborious process. The objective of this research is to study the effect of using images having natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. The subset size parameter used in DIC can affect the processing and accuracy of DIC and can even cause DIC to fail. Regarding the image parameters (correlation coefficient), high similarity between two subsets can lead the DIC process to fail and make the results inaccurate. Pictures of good and bad quality for DIC methods are presented and, more importantly, a systematic way is provided to evaluate the quality of pictures with natural patterns before the measurement devices are installed.
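
Applying a known displacement field to a natural-pattern image gives the ground truth against which DIC output can be checked. The following Python sketch (an illustration under assumed names, not the authors' simulation code) warps an image by a prescribed displacement field using cubic interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform_image(img, u, v):
    """Warp `img` by a known displacement field (u, v), in pixels.

    The deformed image at (y, x) samples the original at (y - v, x - u),
    so DIC run on (img, deformed) should recover (u, v) as ground truth."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(img, [yy - v, xx - u], order=3, mode='reflect')

img = np.random.rand(256, 256)              # stand-in natural-pattern image
yy, xx = np.mgrid[0:256, 0:256].astype(float)
u = 0.5 + 1e-3 * xx                         # sub-pixel shift plus linear strain
v = np.zeros_like(u)
deformed = deform_image(img, u, v)
```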

Keywords: Digital Image Correlation (DIC), deformation simulation, natural pattern, subset size

Procedia PDF Downloads 377
5771 Theorem on Inconsistency of the Classical Logic

Authors: T. J. Stepien, L. T. Stepien

Abstract:

This abstract concerns an extremely fundamental issue: the problem of consistency. We present a theorem stating that the classical calculus of quantifiers is inconsistent in the traditional sense. We first introduce notation and then recall the definition of consistency in the traditional sense. S1 is the set of all well-formed formulas in the calculus of quantifiers. RS1 denotes the set of all rules over the set S1. Cn(R, X) is the set of all formulas standardly provable from X by rules R, where R is a subset of RS1 and X is a subset of S1. The pair ⟨R, X⟩ is called a system whenever R is a subset of RS1 and X is a subset of S1. Definition: the system ⟨R, X⟩ is consistent in the traditional sense if there does not exist any formula in S1 such that both this formula and its negation are provable from X by using rules from R. Finally, ⟨R0+, L2⟩ denotes the classical calculus of quantifiers, where R0+ consists of Modus Ponens and the generalization rule, and L2 is the set of all formulas valid in the classical calculus of quantifiers. Main result: the system ⟨R0+, L2⟩ is inconsistent in the traditional sense.
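
For readability, the definition and the main result can be transcribed into symbols (a direct restatement of the prose above):

```latex
\[
\langle R, X\rangle \text{ is consistent} \;\Longleftrightarrow\;
\neg\,\exists\, \varphi \in S_1 :\;
\varphi \in \mathrm{Cn}(R, X) \ \text{and}\ \neg\varphi \in \mathrm{Cn}(R, X),
\]
\[
\text{Main result:}\quad
\exists\, \varphi \in S_1 :\;
\varphi \in \mathrm{Cn}(R_0^{+}, L_2) \ \text{and}\ \neg\varphi \in \mathrm{Cn}(R_0^{+}, L_2).
\]
```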

Keywords: classical calculus of quantifiers, classical predicate calculus, generalization rule, consistency in the traditional sense, Modus Ponens

Procedia PDF Downloads 170
5770 A Natural Killer T Cell Subset That Protects against Airway Hyperreactivity

Authors: Ya-Ting Chuang, Krystle Leung, Ya-Jen Chang, Rosemarie H. DeKruyff, Paul B. Savage, Richard Cruse, Christophe Benoit, Dirk Elewaut, Nicole Baumgarth, Dale T. Umetsu

Abstract:

We examined characteristics of a Natural Killer T (NKT) cell subpopulation that developed during influenza infection in neonatal mice, and that suppressed the subsequent development of allergic asthma in a mouse model. This NKT cell subset expressed CD38 but not CD4, produced IFN-γ, but not IL-17, IL-4 or IL-13, and inhibited the development of airway hyperreactivity (AHR) through contact-dependent suppressive activity against helper CD4 T cells. The NKT subset expanded in the lungs of neonatal mice after infection with influenza, but also after treatment of neonatal mice with a Th1-biasing α-GalCer glycolipid analogue, Nu-α-GalCer. These results suggest that early/neonatal exposure to infection or to antigenic challenge can affect subsequent lung immunity by altering the profile of cells residing in the lung and that some subsets of NKT cells can have direct inhibitory activity against CD4+ T cells in allergic asthma. Importantly, our results also suggest a potential therapy for young children that might provide protection against the development of asthma.

Keywords: NKT subset, asthma, airway hyperreactivity, hygiene hypothesis, influenza

Procedia PDF Downloads 197
5769 The Importance of Including All Data in a Linear Model for the Analysis of RNAseq Data

Authors: Roxane A. Legaie, Kjiana E. Schwab, Caroline E. Gargett

Abstract:

Studies looking at changes in gene expression from RNAseq data often make use of linear models. It is also common practice to focus on a subset of the data for a comparison of interest, leaving aside the samples not involved in that particular comparison. This work shows the importance of including all observations in the modeling process to better estimate variance parameters, even when the samples included are not directly used in the comparison under test. The human endometrium is a dynamic tissue, which undergoes cycles of growth and regression with each menstrual cycle. The mesenchymal stem cells (MSCs) present in the endometrium are likely responsible for this remarkable regenerative capacity. However, recent studies suggest that MSCs also play a role in the pathogenesis of endometriosis, one of the most common medical conditions affecting the lower abdomen in women, in which endometrial tissue grows outside the womb. In this study, we used RNAseq to compare gene expression profiles between MSCs and non-stem cell counterparts (‘non-MSC’) obtained from women with (‘E’) or without (‘noE’) endometriosis. Raw read counts were used for differential expression analysis using a linear model with the limma-voom R package, including either all samples in the study or only the samples belonging to the subset of interest (e.g., for the comparison ‘E vs noE in MSC cells’, including only MSC samples from E and noE patients but not the non-MSC ones). Using the full dataset, we identified about 100 differentially expressed (DE) genes between E and noE samples in MSC samples (adj. p-val < 0.05 and |logFC| > 1), while only 9 DE genes were identified when using only the subset of data (MSC samples only). Important genes known to be involved in endometriosis, such as KLF9 and RND3, were missed in the latter case. For the MSC vs non-MSC comparison, the linear model including all samples identified 260 genes for noE samples (including the stem cell marker SUSD2), while the subset analysis did not identify any DE genes. For E samples, 12 genes were identified with the first approach and only 1 with the subset approach. Although the stem cell marker RGS5 was found in both cases, the subset test missed important genes involved in stem cell differentiation, such as NOTCH3, and other potentially related genes to be used for further investigation and pathway analysis.
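
The statistical point, that samples outside the contrast of interest still sharpen the variance estimate, can be illustrated with ordinary least squares in a toy Python sketch (this is not the limma-voom pipeline; the group names and effect sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# four groups sharing one error variance, 3 replicates each (invented values)
groups = ['MSC_E', 'MSC_noE', 'nonMSC_E', 'nonMSC_noE']
y = {g: rng.normal(loc=mu, scale=1.0, size=3)
     for g, mu in zip(groups, [5.0, 4.0, 2.0, 2.5])}

def pooled_var(samples):
    # residual variance of a one-way linear model (group means fitted)
    ss = sum(((s - s.mean()) ** 2).sum() for s in samples)
    df = sum(len(s) - 1 for s in samples)
    return ss / df, df

v_sub, df_sub = pooled_var([y['MSC_E'], y['MSC_noE']])   # subset model: 4 df
v_full, df_full = pooled_var(list(y.values()))           # full model: 8 df
print(df_sub, df_full)  # more residual df => steadier variance, more power
```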

Keywords: differential expression, endometriosis, linear model, RNAseq

Procedia PDF Downloads 400
5768 Secure Message Transmission Using Meaningful Shares

Authors: Ajish Sreedharan

Abstract:

Visual cryptography encodes a secret image into shares of random binary patterns. If the shares are printed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the shares, however, have no visual meaning, which hinders the objectives of visual cryptography. In secure message transmission using meaningful shares, the secret message to be transmitted is first converted to a grayscale image. Then (2,2) visual cryptographic shares are generated from this grayscale image. The shares are encrypted using A Chaos-Based Image Encryption Algorithm Using Wavelet Transform. Two separate color images of the same size as the shares are taken as cover images to hide the respective shares inside them. The encrypted shares are thus covered by meaningful images, so that a potential eavesdropper won't know there is a message to be read. The meaningful shares are transmitted through two different transmission media. During decoding, the shares are extracted from the received meaningful images and decrypted using the same chaos-based image encryption algorithm. The shares are then combined to regenerate the grayscale image, from which the secret message is obtained.
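
The (2,2) share generation itself can be sketched compactly. The following Python toy implements the classic (2,2) scheme on a binary image; the paper's additional steps (grayscale conversion, chaos-based encryption, embedding in cover images) are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(7)

# 2x2 sub-pixel patterns: complementary pairs reveal black when stacked
PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]

def make_shares(secret):
    """(2,2) visual cryptography on a binary image (1 = black)."""
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # white pixel: identical patterns; black pixel: complementary
            s2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
    return s1, s2

secret = rng.integers(0, 2, size=(32, 32))
s1, s2 = make_shares(secret)
stacked = np.maximum(s1, s2)   # superimposing transparencies = pixelwise OR
# black secret pixels stack to fully black 2x2 blocks; white stay half black
```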

Keywords: visual cryptography, wavelet transform, meaningful shares, grey scale image

Procedia PDF Downloads 416
5767 The Different Ways to Describe Regular Languages by Using Finite Automata and the Changing Algorithm Implementation

Authors: Abdulmajid Mukhtar Afat

Abstract:

This paper introduces finite automata theory and the different ways to describe regular languages, and presents a program that implements the subset construction algorithm to convert a nondeterministic finite automaton (NFA) to a deterministic finite automaton (DFA). The program is written in the C++ programming language. It reads the 5-tuple of a finite automaton from a text file and classifies the automaton as either a DFA or an NFA. For a DFA, the program reads a string w and decides whether it is acceptable or not; if accepted, the program saves the tracking path and points it out. On the other hand, when the automaton is an NFA, the program converts it to a DFA so that it is easy to track and can decide whether w belongs to the regular language or not.
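
The paper's program is written in C++; for reference, a compact Python sketch of the same subset construction (without ε-transitions) is given below:

```python
from collections import deque

def nfa_to_dfa(states, alphabet, delta, start, accept):
    """Subset construction. `delta` maps (state, symbol) -> set of states."""
    start_set = frozenset([start])
    dfa_trans, dfa_accept = {}, set()
    queue, seen = deque([start_set]), {start_set}
    while queue:
        S = queue.popleft()
        if S & accept:
            dfa_accept.add(S)
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), set()))
            dfa_trans[(S, a)] = T              # T may be the empty dead state
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return seen, dfa_trans, start_set, dfa_accept

# NFA accepting strings over {0,1} that end in "01"
delta = {('q0', '0'): {'q0', 'q1'}, ('q0', '1'): {'q0'}, ('q1', '1'): {'q2'}}
dfa = nfa_to_dfa({'q0', 'q1', 'q2'}, {'0', '1'}, delta, 'q0', {'q2'})
```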

Keywords: finite automata, subset construction, DFA, NFA

Procedia PDF Downloads 401
5766 Rim Size Optimization Using Mathematical Modelling

Authors: M. Tan, N. N. Wan, N. Ramli, N. H. Hassan

Abstract:

Car drivers often want custom wheels on their cars for two reasons: to improve the car's aesthetic appeal and to improve its handling. As the size of the rims or wheels plays an important role in how a car handles around turns, this paper presents the optimal rim size that drivers should know when changing their rims. There are three factors that drivers should consider when changing a rim: rim size, rim weight, and the material of which it is made. Using mathematical analysis, this paper focuses on only one factor, rim size. The factors considered in calculating the optimum rim size are the vehicle rim radius, tire height and weight, and aspect ratio. This paper finds that there are limits on the percentage change in rim size from the original tire size. Failure to have the right offset size may cause problems in maneuvering the vehicle.
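
The underlying tire-size arithmetic is standard: the overall wheel diameter equals the rim diameter plus two sidewall heights, where sidewall height is the section width times the aspect ratio. A small Python sketch of the percentage-change calculation (the tire specifications are illustrative, and the acceptable-change threshold is left open):

```python
def overall_diameter_mm(width_mm, aspect_ratio, rim_in):
    """Overall wheel diameter: rim diameter plus two sidewall heights."""
    sidewall = width_mm * aspect_ratio / 100.0   # aspect ratio is a percentage
    return rim_in * 25.4 + 2 * sidewall

def percent_change(old_spec, new_spec):
    d_old = overall_diameter_mm(*old_spec)
    d_new = overall_diameter_mm(*new_spec)
    return 100.0 * (d_new - d_old) / d_old

# upsizing a 205/55R16 to a 225/45R17: the lower profile compensates for the rim
print(round(percent_change((205, 55, 16), (225, 45, 17)), 2))   # about 0.4%
```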

Keywords: mathematical analysis, optimum wheel size, percentage change, custom wheel

Procedia PDF Downloads 460
5765 Integral Domains and Their Algebras: Topological Aspects

Authors: Shai Sarussi

Abstract:

Let S be an integral domain with field of fractions F and let A be an F-algebra. An S-subalgebra R of A is called S-nice if R∩F = S and the localization of R with respect to S \{0} is A. Denoting by W the set of all S-nice subalgebras of A, and defining a notion of open sets on W, one can view W as a T0-Alexandroff space. Thus, the algebraic structure of W can be viewed from the point of view of topology. It is shown that every nonempty open subset of W has a maximal element in it, which is also a maximal element of W. Moreover, a supremum of an irreducible subset of W always exists. As a notable connection with valuation theory, one considers the case in which S is a valuation domain and A is an algebraic field extension of F; if S is indecomposed in A, then W is an irreducible topological space, and W contains a greatest element.

Keywords: integral domains, Alexandroff topology, prime spectrum of a ring, valuation domains

Procedia PDF Downloads 89
5764 Multi-Criteria Test Case Selection Using Ant Colony Optimization

Authors: Niranjana Devi N.

Abstract:

Test case selection aims to select a subset of only the fit test cases and remove the unfit, ambiguous, redundant, and unnecessary test cases, which in turn improves the quality and reduces the cost of software testing. Test case optimization is the problem of finding the best subset of test cases from a pool, one that meets all the objectives of testing concurrently. However, most research has evaluated the fitness of test cases on only a single parameter, fault-detecting capability, and optimized the test cases using a single objective. In the proposed approach, nine parameters are considered for test case selection, and the best subset of parameters for test case selection is obtained using an Interval Type-2 Fuzzy Rough Set. Test case selection is done in two stages. The first stage is a fuzzy entropy-based filtration technique, used for estimating and reducing the ambiguity in test case fitness evaluation and selection. The second stage is an ant colony optimization-based wrapper technique with a forward search strategy, employed to select test cases from the reduced test suite of the first stage. The results are evaluated using coverage parameters, precision, recall, F-measure, APSC, APDC, and SSR. The experimental evaluation demonstrates that considerable computational effort can be avoided by this approach.
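
A toy sketch of the second-stage wrapper is shown below. It is a heavily simplified illustration, not the paper's algorithm: the fitness function, the subset sampling scheme, and the pheromone update are placeholder assumptions, and the first-stage fuzzy-entropy filtration is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def aco_select(fitness, n_cases, ants=20, iters=50, rho=0.1):
    """Toy ant-colony wrapper for test case subset selection.

    `fitness(subset)` scores a candidate subset (e.g. a weighted combination
    of coverage, fault-detection and cost criteria). Pheromone on each test
    case biases its inclusion probability in the next generation."""
    tau = np.ones(n_cases)                     # pheromone per test case
    best_subset, best_score = set(), -np.inf
    for _ in range(iters):
        for _ in range(ants):
            p = tau / tau.sum()
            size = int(rng.integers(1, n_cases + 1))
            subset = set(rng.choice(n_cases, size=size, replace=False, p=p))
            score = fitness(subset)
            if score > best_score:
                best_subset, best_score = subset, score
        tau *= (1 - rho)                       # evaporation
        for c in best_subset:                  # reinforce the best-so-far subset
            tau[c] += rho * max(best_score, 0.0)
    return best_subset, best_score

cover = rng.integers(0, 2, size=(40, 15))      # test case x requirement matrix
fit = lambda s: cover[sorted(s)].max(axis=0).sum() - 0.1 * len(s)
print(aco_select(fit, n_cases=40))
```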

Keywords: ant colony optimization, fuzzy entropy, interval type-2 fuzzy rough set, test case selection

Procedia PDF Downloads 619
5763 Accelerated Structural Reliability Analysis under Earthquake-Induced Tsunamis by Advanced Stochastic Simulation

Authors: Sai Hung Cheung, Zhe Shao

Abstract:

Recent earthquake-induced tsunamis in Padang, 2004 and Tohoku, 2011 brought huge losses of lives and properties. Maintaining vertical evacuation systems is the most crucial strategy for effectively reducing casualties during a tsunami event. Thus, it is of great interest to quantify the risk to structural dynamic systems due to earthquake-induced tsunamis. Despite continuous advancement in computational simulation of tsunamis and wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability (or its complement, the failure probability) of a structural dynamic system when uncertainties related to the system and its modeling are taken into account. Failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding specified thresholds during the time when the structure is subjected to dynamic wave impact due to earthquake-induced tsunamis. In this paper, an approach based on a novel integration of the Subset Simulation algorithm and a recently proposed moving least squares response surface approach for stochastic sampling is proposed. The effectiveness of the proposed approach is discussed by comparing its results with those obtained from the Subset Simulation algorithm without the response surface approach.
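
For reference, a minimal Python sketch of the plain Subset Simulation estimator is given below (Au-Beck style, with a toy limit-state function instead of the tsunami-wave-structure model, and without the moving least squares response surface):

```python
import numpy as np

rng = np.random.default_rng(5)

def subset_simulation(g, dim, n=1000, p0=0.1, threshold=0.0, max_levels=10):
    """Estimate P[g(X) >= threshold], X ~ N(0, I), by Subset Simulation.

    Failure is split into nested intermediate events of conditional
    probability about p0; conditional samples come from a Metropolis
    random walk seeded by the survivors of each level."""
    x = rng.standard_normal((n, dim))
    y = np.array([g(xi) for xi in x])
    prob = 1.0
    for _ in range(max_levels):
        b = np.quantile(y, 1 - p0)                 # intermediate threshold
        if b >= threshold:
            return prob * np.mean(y >= threshold)  # final, direct estimate
        prob *= p0
        seeds = x[y >= b]
        steps = int(np.ceil(n / len(seeds)))
        xs, ys = [], []
        for s in seeds:
            cur, gcur = s, g(s)
            for _ in range(steps):
                cand = cur + 0.5 * rng.standard_normal(dim)
                # accept w.r.t. the standard normal prior, stay inside level
                if rng.random() < np.exp(0.5 * (cur @ cur - cand @ cand)):
                    gc = g(cand)
                    if gc >= b:
                        cur, gcur = cand, gc
                xs.append(cur.copy()); ys.append(gcur)
        x, y = np.array(xs)[:n], np.array(ys)[:n]
    return prob

# toy limit state: failure when the sum of 10 standardized loads exceeds 12
print(subset_simulation(lambda x: x.sum(), dim=10, threshold=12.0))
```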

Keywords: response surface model, subset simulation, structural reliability, tsunami risk

Procedia PDF Downloads 336
5762 Random Subspace Ensemble of CMAC Classifiers

Authors: Somaiyeh Dehghan, Mohammad Reza Kheirkhahan Haghighi

Abstract:

The rapid growth of domains with high-dimensional data but limited numbers of samples has made it difficult to construct strong classifiers. Reducing the dimensionality of the feature space therefore becomes an essential step in the classification task. The random subspace method (or attribute bagging) is an ensemble technique in which each base learner is trained on a random subset of the features. In the present paper, we introduce a Random Subspace Ensemble of CMAC neural networks (RSE-CMAC), each of which is trained on a subset of the features, and use this model for the classification task. To evaluate the performance of our model, we compare it with the bagging algorithm on 36 UCI datasets. The results reveal that the new model has better performance.
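
A minimal Python sketch of the random subspace idea is given below; a decision tree stands in for the CMAC base learner, which is an assumption made purely for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

class RandomSubspaceEnsemble:
    """Random subspace ensemble; a decision tree stands in for CMAC here."""
    def __init__(self, n_learners=25, subspace=0.5):
        self.n_learners, self.subspace = n_learners, subspace

    def fit(self, X, y):
        d = X.shape[1]
        k = max(1, int(self.subspace * d))
        self.members = []
        for _ in range(self.n_learners):
            feats = rng.choice(d, size=k, replace=False)  # random feature subset
            clf = DecisionTreeClassifier().fit(X[:, feats], y)
            self.members.append((feats, clf))
        return self

    def predict(self, X):
        votes = np.array([clf.predict(X[:, f]) for f, clf in self.members])
        # majority vote across base learners (labels assumed integer-coded)
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

X = rng.normal(size=(200, 40))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
model = RandomSubspaceEnsemble().fit(X[:150], y[:150])
print((model.predict(X[150:]) == y[150:]).mean())     # holdout accuracy
```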

Keywords: classification, random subspace, ensemble, CMAC neural network

Procedia PDF Downloads 296
5761 Regularity and Maximal Congruence in Transformation Semigroups with Fixed Sets

Authors: Chollawat Pookpienlert, Jintana Sanwong

Abstract:

An element a of a semigroup S is called left (right) regular if there exists x in S such that a = xa² (a = a²x), and is said to be intra-regular if there exist u, v in S such that a = ua²v. Let T(X) be the semigroup of all full transformations on a set X under composition of maps. For a fixed nonempty subset Y of X, let Fix(X,Y) = {α ∈ T(X) : yα = y for all y ∈ Y}, where yα is the image of y under α. Then Fix(X,Y) is a semigroup of full transformations on X which fix all elements of Y. Here, we characterize the left regular, right regular and intra-regular elements of Fix(X,Y). The characterizations are as follows: for α ∈ Fix(X,Y), (i) α is left regular if and only if Xα\Y = Xα²\Y, (ii) α is right regular if and only if πα = πα², (iii) α is intra-regular if and only if |Xα\Y| = |Xα²\Y|, where Xα = {xα : x ∈ X} and πα = {xα⁻¹ : x ∈ Xα}, in which xα⁻¹ = {a ∈ X : aα = x}. Moreover, these regularities are equivalent when Xα\Y is a finite set. In addition, we count the number of such elements of Fix(X,Y) when X is a finite set. Finally, we determine the maximal congruence ρ on Fix(X,Y) when X is finite and Y is a nonempty proper subset of X. If we let |X\Y| = n, then we obtain ρ = (Fixₙ × Fixₙ) ∪ (Hε × Hε), where Fixₙ = {α ∈ Fix(X,Y) : |Xα\Y| < n} and Hε is the group of units of Fix(X,Y). Furthermore, we show that the maximal congruence is unique.
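
The stated characterizations are easy to check mechanically on finite examples. A small Python sketch (transformations as dictionaries; the example α is invented for illustration):

```python
def image_set(alpha, X, Y):
    """Computes the set X·alpha \\ Y for a transformation alpha: X -> X."""
    return {alpha[x] for x in X} - Y

def square(alpha):
    """alpha², i.e. x -> (x alpha) alpha."""
    return {x: alpha[alpha[x]] for x in alpha}

def kernel(alpha, X):
    """The partition pi_alpha of X into preimage classes x alpha^(-1)."""
    return frozenset(frozenset(a for a in X if alpha[a] == v)
                     for v in {alpha[x] for x in X})

X, Y = {1, 2, 3, 4, 5}, {1, 2}
alpha = {1: 1, 2: 2, 3: 4, 4: 4, 5: 3}     # fixes every element of Y
a2 = square(alpha)
print(image_set(alpha, X, Y) == image_set(a2, X, Y))            # left regular?
print(kernel(alpha, X) == kernel(a2, X))                        # right regular?
print(len(image_set(alpha, X, Y)) == len(image_set(a2, X, Y)))  # intra-regular?
```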

Keywords: intra-regular, left regular, maximal congruence, right regular, transformation semigroup

Procedia PDF Downloads 193
5760 A Comparison Study: Infant and Children’s Clothing Size Charts in South Korea and UK

Authors: Hye-Won Lim, Tom Cassidy, Tracy Cassidy

Abstract:

Infants' and children's body shapes change constantly as they grow into adults, and are also physically distinctive between countries. For this reason, optimum size charts that can represent the body sizes and shapes of infants and children are required. In this study, current size charts in South Korea and the UK (n=50 each) were investigated to understand the sizing perspectives of clothing manufacturers. The size charts of the two countries were collected randomly from online shopping websites, and the average measurements of those size charts were compared with the respective national sizing surveys (SizeKorea and Shape GB). The size charts were also classified by age, gender, clothing type, fitting, and other factors. In addition, the key measurement body parts in the size charts of each country were determined; these will be suggested for the development of new size charts and sizing systems.

Keywords: infant clothing, children’s clothing, body shapes, size charts

Procedia PDF Downloads 283
5759 Effect of Aggregate Size on Mechanical Behavior of Passively Confined Concrete Subjected to 3D Loading

Authors: Ibrahim Ajani Tijani, C. W. Lim

Abstract:

Few studies have examined the effect of size on the mechanical behavior of confined concrete subjected to 3-dimensional (3D) tests. Using a novel 3D testing system to produce passive confinement, concrete cubes were tested to examine the effect of size on the stress-strain behavior of the specimens. The effect of size on the 3D stress-strain relationship was scrutinized and compared with the stress-strain relationships available in the literature. It was observed that the ultimate stress and the corresponding strain were related to the confining rigidity and the size. Size shows a significant effect on the intersection stress, and a new model is proposed for the intersection stress based on the conceptual design of the confining plates.

Keywords: concrete, aggregate size, size effect, 3D compression, passive confinement

Procedia PDF Downloads 173
5758 Size Reduction of Images Using Constraint Optimization Approach for Machine Communications

Authors: Chee Sun Won

Abstract:

This paper presents size reduction of images for machine-to-machine communications. Here, the salient image regions to be preserved include the image patches around key-points such as corners and blobs. Based on a saliency map built from the key-points and their image patches, an axis-aligned grid-size optimization is proposed for reducing the image size. To increase the size-reduction efficiency, the aspect ratio constraint is relaxed in the constraint optimization framework. The proposed method yields higher matching accuracy after size reduction than conventional content-aware image size-reduction methods.
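
A much-simplified sketch of axis-aligned reduction driven by a saliency map is given below; it keeps the highest-saliency rows and columns and is only a stand-in for the paper's constrained grid-size optimization (the saliency map here is fabricated for illustration):

```python
import numpy as np

def reduce_size(img, saliency, out_h, out_w):
    """Axis-aligned size reduction: keep the rows and columns with the
    largest summed saliency, so key-point patches survive the reduction."""
    rows = np.sort(np.argsort(saliency.sum(axis=1))[-out_h:])
    cols = np.sort(np.argsort(saliency.sum(axis=0))[-out_w:])
    return img[np.ix_(rows, cols)]

img = np.random.rand(240, 320)
saliency = np.zeros_like(img)
saliency[100:140, 150:200] = 1.0               # pretend key-point patches live here
small = reduce_size(img, saliency, 180, 240)   # note: aspect ratio not preserved
```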

Keywords: image compression, image matching, key-point detection and description, machine-to-machine communication

Procedia PDF Downloads 377
5757 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are major challenges for all types of media, especially social media. There is a great deal of false information, fake likes, views and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain better detection of false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection. The aim of this technique is to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information for our work. The detection performance improved in two respects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensionality.
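
The four steps can be sketched as follows (a plausible reading of the abstract, not the authors' code; the cluster count and the choice of the centroid-nearest feature as cluster representative are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def kmeans_svm_feature_selection(X, y, n_clusters=20):
    """Mirror of the four steps: feature similarity (k-means operates on the
    feature vectors), clustering, per-cluster representative, SVM classifier."""
    feats = X.T                                   # one row per feature
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(feats)
    selected = []
    for c in range(n_clusters):
        members = np.flatnonzero(km.labels_ == c)
        # representative = the member feature closest to the cluster centroid
        d = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(d)])
    selected = np.array(selected)
    score = cross_val_score(SVC(), X[:, selected], y, cv=5).mean()
    return selected, score

X = np.random.rand(200, 100)                      # stand-in news feature matrix
y = (X[:, 0] + X[:, 1] > 1).astype(int)           # stand-in fake/real labels
print(kmeans_svm_feature_selection(X, y))
```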

Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine

Procedia PDF Downloads 138
5756 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection focuses on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step before the search, bound and combined with the search, or as an alternative to and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set by means of a class-correlation measure. Then, using a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses the clustering algorithm bound and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex), demonstrate the efficiency of our method compared with seven state-of-the-art methods in terms of accuracy and computation time.
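
The third step, sequential selection with cluster elimination, can be sketched as follows (an illustrative reading: the separability measure is replaced here by cross-validated accuracy, a wrapper-style stand-in for the paper's filter criterion):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def cluster_sequential_select(n_features, labels, separability, k):
    """Greedy sequential search: pick the feature that most improves the
    criterion, then discard its entire cluster so redundant features are
    never examined again (the source of the reported speed-up)."""
    remaining, subset = set(range(n_features)), []
    while remaining and len(subset) < k:
        best = max(remaining, key=lambda f: separability(subset + [f]))
        subset.append(best)
        remaining -= {f for f in remaining if labels[f] == labels[best]}
    return subset

X = np.random.rand(100, 30)                       # stand-in texture features
y = np.random.randint(0, 2, 100)
labels = KMeans(n_clusters=10, n_init=10).fit(X.T).labels_
sep = lambda s: cross_val_score(LinearSVC(max_iter=5000), X[:, s], y, cv=3).mean()
print(cluster_sequential_select(X.shape[1], labels, sep, k=5))
```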

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic cooccurrence matrix

Procedia PDF Downloads 94
5755 Bayesian Network and Feature Selection for Rank Deficient Inverse Problem

Authors: Kyugneun Lee, Ikjin Lee

Abstract:

Parameter estimation in inverse problems often suffers from unfavorable conditions in the real world. Useless data and many input parameters make the problem complicated or insoluble. Data refinement and reformulation of the problem can resolve such difficulties. In this research, a method to solve rank deficient inverse problems is suggested. A multi-physics system with rank deficiency caused by response correlation is treated. Impeditive information is removed, and the problem is reformulated into sequential estimations using a Bayesian network (BN) and subset groups. First, subset grouping of the responses is performed, using feature selection with singular value decomposition (SVD). Next, BN inference is used for sequential conditional estimation according to the group hierarchy. A directed acyclic graph (DAG) structure is organized to maximize the estimation ability. The variance ratio of response to noise is used to pair the estimable parameters with each response.
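
The SVD-based grouping step can be illustrated on a toy sensitivity matrix (the rank tolerance and the correlation threshold for grouping are assumptions):

```python
import numpy as np

def group_responses(J, tol=1e-8, corr_cut=0.95):
    """Detect rank deficiency of a sensitivity matrix J (responses x
    parameters) via SVD, then group mutually correlated responses --
    the grouping step that precedes the sequential BN estimation."""
    s = np.linalg.svd(J, compute_uv=False)
    rank = int((s > tol * s[0]).sum())            # numerical rank
    corr = np.corrcoef(J)                         # row (response) correlations
    groups, assigned = [], set()
    for i in range(J.shape[0]):
        if i in assigned:
            continue
        g = [j for j in range(J.shape[0])
             if j not in assigned and abs(corr[i, j]) > corr_cut]
        assigned |= set(g)
        groups.append(g)
    return rank, groups

J = np.array([[1.0, 0.0, 2.0],
              [2.0, 0.0, 4.0],                    # duplicate of response 0
              [0.0, 1.0, 0.0]])
print(group_responses(J))                         # rank 2, responses [0, 1] grouped
```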

Keywords: Bayesian network, feature selection, rank deficiency, statistical inverse analysis

Procedia PDF Downloads 277
5754 Non-Local Simultaneous Sparse Unmixing for Hyperspectral Data

Authors: Fanqiang Kong, Chending Bian

Abstract:

Sparse unmixing is a promising semisupervised approach that assumes the observed pixels of a hyperspectral image can be expressed as linear combinations of only a few pure spectral signatures (endmembers) from an available spectral library. However, the sparse unmixing problem remains a great challenge: finding the optimal subset of endmembers for the observed data in a large standard spectral library, without considering the spatial information. Under such circumstances, a sparse unmixing algorithm termed non-local simultaneous sparse unmixing (NLSSU) is presented. In NLSSU, a non-local simultaneous sparse representation method for endmember selection is used to find the optimal subset of endmembers for each set of similar image patches in the hyperspectral image. Then, the non-local means method, as a regularizer for abundance estimation, is used to exploit the non-local self-similarity of the abundance images. Experimental results on both simulated and real data demonstrate that NLSSU outperforms the other algorithms, with better spectral unmixing accuracy.
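
As a point of reference, plain pixel-wise sparse unmixing, the baseline that NLSSU extends with simultaneous representation and non-local regularization, can be sketched with a nonnegative lasso (the library and abundances below are synthetic):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_unmix(library, pixels, lam=1e-3):
    """Baseline pixel-wise sparse unmixing (no spatial or non-local terms):
    each spectrum is a sparse nonnegative combination of library endmembers."""
    model = Lasso(alpha=lam, positive=True, max_iter=10000)
    return np.array([model.fit(library, p).coef_.copy() for p in pixels])

bands, n_end, n_pix = 50, 12, 100
rng = np.random.default_rng(6)
A = np.abs(rng.normal(size=(bands, n_end)))       # synthetic spectral library
x_true = np.zeros((n_pix, n_end))
x_true[:, [2, 7]] = rng.random((n_pix, 2))        # two active endmembers
Y = x_true @ A.T + 0.01 * rng.normal(size=(n_pix, bands))
abundances = sparse_unmix(A, Y)                   # sparse vector per pixel
```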

Keywords: hyperspectral unmixing, simultaneous sparse representation, sparse regression, non-local means

Procedia PDF Downloads 203
5753 Investigation of Droplet Size Produced in Two-Phase Gravity Separators

Authors: Kul Pun, F. A. Hamad, T. Ahmed, J. O. Ugwu, J. Eyers, G. Lawson, P. A. Russell

Abstract:

Determining droplet size and distribution is essential when determining the separation efficiency of a two/three-phase separator. This paper investigates the effect of liquid flow rates and oil pad thickness on droplet size at the lab scale. The findings show that increasing the inlet flow rates of the oil and water reduces the size of the droplets, while increasing the thickness of the oil pad increases the size of the droplets. The data were fitted with a simple Gaussian model, and the parameters of mean, standard deviation, and amplitude were determined. Trends were obtained for the fitted parameters as a function of the Reynolds number, which suggest a way forward to better predict the starting parameters for population models when simulating separation using CFD packages. The key parameter to predict, to fix the position of the Gaussian distribution, was found to be the mean droplet size.
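
Fitting the three-parameter Gaussian named in the abstract is a one-call operation with scipy; the histogram below is fabricated purely to show the mechanics:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(d, amplitude, mean, std):
    return amplitude * np.exp(-0.5 * ((d - mean) / std) ** 2)

# fabricated droplet-size histogram: bin centres (microns) and counts
sizes = np.linspace(10, 300, 30)
counts = gaussian(sizes, 120.0, 140.0, 35.0) \
         + np.random.default_rng(4).normal(0.0, 3.0, 30)

p0 = [counts.max(), sizes[np.argmax(counts)], 30.0]   # starting guesses
(amp, mean, std), cov = curve_fit(gaussian, sizes, counts, p0=p0)
# `mean` fixes the position of the distribution; trends in (amp, mean, std)
# versus Reynolds number are what the study extracts
```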

Keywords: two-phase separator, average bubble droplet, bubble size distribution, liquid-liquid phase

Procedia PDF Downloads 143
5752 Location-Domination on Join of Two Graphs and Their Complements

Authors: Analen Malnegro, Gina Malacas

Abstract:

Dominating sets and related topics have been studied extensively in the past few decades. A dominating set of a graph G is a subset D of V such that every vertex not in D is adjacent to at least one member of D. The domination number γ(G) is the number of vertices in a smallest dominating set of G. Some problems involving detection devices can be modeled with graphs. Finding the minimum number of devices needed, according to the type of devices and the necessity of locating the object, gives rise to locating-dominating sets. A subset S of vertices of a graph G is called a locating-dominating set, LD-set for short, if it is a dominating set and every vertex v not in S is uniquely determined by the set of neighbors of v belonging to S. The location-domination number λ(G) is the minimum cardinality of an LD-set of G. The complement of a graph G is a graph Ḡ on the same vertices such that two distinct vertices of Ḡ are adjacent if and only if they are not adjacent in G. An LD-set of a graph G is global if it is an LD-set of both G and its complement Ḡ. The global location-domination number λg(G) is defined as the minimum cardinality of a global LD-set of G. In this paper, global LD-sets on the join of two graphs are characterized, and the global location-domination numbers of these graphs are determined.
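
The defining condition, domination plus distinct nonempty signatures N(v) ∩ S, can be checked by brute force on small graphs, as in this Python sketch:

```python
from itertools import combinations

def is_ld_set(adj, vertices, S):
    """S is locating-dominating: every v outside S has a nonempty and
    distinct signature N(v) ∩ S."""
    seen = set()
    for v in vertices - S:
        sig = frozenset(adj[v] & S)
        if not sig or sig in seen:
            return False
        seen.add(sig)
    return True

def location_domination_number(adj):
    vertices = set(adj)
    for k in range(1, len(vertices) + 1):
        for S in map(set, combinations(vertices, k)):
            if is_ld_set(adj, vertices, S):
                return k, S

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}      # the path P4
print(location_domination_number(adj))            # λ(P4) = 2
```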

Keywords: dominating set, global locating-dominating set, global location-domination number, locating-dominating set, location-domination number

Procedia PDF Downloads 146
5751 Influence of Processing Parameters on the Reliability of Sieving as a Particle Size Distribution Measurements

Authors: Eseldin Keleb

Abstract:

In the pharmaceutical industry, particle size distribution is an important parameter for the characterization of pharmaceutical powders. Powder flowability, reactivity and compatibility, which have a decisive impact on the final product, are determined by particle size and size distribution. Therefore, the aim of this study was to evaluate the influence of processing parameters on particle size distribution measurements. Different size fractions of α-lactose monohydrate with 5% polyvinylpyrrolidone were prepared by wet granulation and used for the preparation of samples. The influence of sieve load (50, 100, 150, 200, 250, 300, and 350 g), processing time (5, 10, and 15 min), sample size ratios (high percentages of small and large particles), type of disturbance (vibration and shaking) and process reproducibility were investigated. The results showed that a sieve load of 50 g produces the best separation; a further increase in sample weight resulted in incomplete separation even after extending the processing time to 15 min. Sieving using vibration was faster and more efficient than shaking. Between-day reproducibility tests showed that particle size distribution measurements are reproducible. However, for samples containing 70% fines or 70% large particles, processed at the optimized parameters, incomplete separation was always observed. These results indicate that sieving reliability is highly influenced by the particle size distribution of the sample, and care must be taken with samples having skewed particle size distributions.

Keywords: sieving, reliability, particle size distribution, processing parameters

Procedia PDF Downloads 574
5750 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech

Authors: Monica Gonzalez Machorro

Abstract:

Dementia is hard to diagnose because of the lack of early physical symptoms. Early dementia recognition is key to improving the living conditions of patients. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech, and BERT-like classifiers have reported the most promising performance. One constraint, nonetheless, is that these studies are based either on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer's disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI detection tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized; the subset is balanced for class, age, and gender. Data processing also involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a Random Forest on 20 acoustic features extracted with the librosa library in Python: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations of the audio into 2D representations, and a linear layer is added. The pre-trained model used is ‘hubert-large-ls960-ft’. Empirically, the number of epochs selected is 5, and the batch size is 1. Experiments show that the proposed method reaches 69% balanced accuracy. This suggests that the linguistic and speech information encoded in the self-supervised ASR-based model is able to learn acoustic cues of AD and MCI.
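
The described architecture, a pre-trained encoder, average pooling from 3D to 2D, and an added linear layer, can be sketched with the transformers library (a sketch of the stated strategy, not the author's released code; the Hub identifier facebook/hubert-large-ls960-ft is assumed to be the named checkpoint):

```python
import torch
import torch.nn as nn
from transformers import HubertModel

class HubertForDementiaDetection(nn.Module):
    """Pre-trained HuBERT encoder + average pooling + linear classifier."""
    def __init__(self, checkpoint="facebook/hubert-large-ls960-ft", n_classes=2):
        super().__init__()
        self.encoder = HubertModel.from_pretrained(checkpoint)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, n_classes)

    def forward(self, input_values):
        # (batch, time) raw 16 kHz audio -> (batch, frames, hidden_size)
        hidden = self.encoder(input_values).last_hidden_state
        pooled = hidden.mean(dim=1)            # average pooling: 3D -> 2D
        return self.classifier(pooled)

model = HubertForDementiaDetection()
logits = model(torch.randn(1, 160000))         # one 10-second segment at 16 kHz
```

The fine-tuning loop itself (5 epochs, batch size 1, per the abstract) is omitted here.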

Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment

Procedia PDF Downloads 89
5749 Accelerated Evaluation of Structural Reliability under Tsunami Loading

Authors: Sai Hung Cheung, Zhe Shao

Abstract:

It is of great interest to quantify the risk to structural dynamic systems due to earthquake-induced tsunamis, in view of the recent earthquake-induced tsunamis in Padang, 2004 and Tohoku, 2011, which brought huge losses of lives and properties. Despite continuous advancement in computational simulation of tsunamis and wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability of a structural dynamic system when uncertainties related to the system and its modeling are taken into account. Failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding specified thresholds during the time when the structure is subjected to dynamic wave impact due to earthquake-induced tsunamis. In this paper, an approach based on a novel integration of a recently proposed moving least squares response surface approach for stochastic sampling and the Subset Simulation algorithm is proposed. The effectiveness of the proposed approach is discussed by comparing its results with those obtained from the Subset Simulation algorithm without the response surface approach.

Keywords: response surface, stochastic simulation, structural reliability, tsunami risk

Procedia PDF Downloads 639
5748 Independence and Path Independence on Cayley Digraphs of Left Groups and Right Groups

Authors: Nuttawoot Nupo, Sayan Panma

Abstract:

A semigroup S is said to be a left (right) zero semigroup if S satisfies the equation xy=x (xy=y) for all x, y in S. In addition, the semigroup S is called a left (right) group if S is isomorphic to the direct product of a group and a left (right) zero semigroup. The Cayley digraph Cay(S,A) of a semigroup S with a connection set A is defined to be a digraph with vertex set S and arc set E(Cay(S,A)) = {(x, xa) | x∈S, a∈A}, where A is any subset of S. All sets in this research are assumed to be finite. Let D be a digraph with a vertex set V and an arc set E. Let u and v be two different vertices in V and I a nonempty subset of V. The vertices u and v are said to be independent if (u,v)∉E and (v,u)∉E. The set I is called an independent set of D if any two different vertices in I are independent. The independence number of D is the maximum cardinality of an independent set of D. Moreover, the vertices u and v are said to be path independent if there is no dipath from u to v and no dipath from v to u. The set I is called a path independent set of D if any two different vertices in I are path independent. The path independence number of D is the maximum cardinality of a path independent set of D. In this research, we describe a lower bound and an upper bound for the independence number of Cayley digraphs of left groups and right groups. Some examples corresponding to those bounds are illustrated. Furthermore, the exact values of the path independence number of Cayley digraphs of left groups and right groups are also presented.
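
The definitions can be explored computationally for a small left group; the following brute-force Python sketch computes the independence number of Cay(Z₃ × L₂, A) for an example connection set:

```python
from itertools import combinations, product

# left group S = Z3 x L2 with multiplication (g, i)(h, j) = (g + h mod 3, i)
S = list(product(range(3), range(2)))
mult = lambda x, a: ((x[0] + a[0]) % 3, x[1])

A = [(1, 0)]                                    # connection set, a subset of S
arcs = {(x, mult(x, a)) for x in S for a in A}  # E(Cay(S, A)) = {(x, xa)}

def independence_number(vertices, arcs):
    """Maximum size of a vertex set with no arcs between its members."""
    for k in range(len(vertices), 0, -1):
        for I in combinations(vertices, k):
            if all((u, v) not in arcs and (v, u) not in arcs
                   for u, v in combinations(I, 2)):
                return k
    return 0

print(independence_number(S, arcs))             # two disjoint 3-cycles -> 2
```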

Keywords: Cayley digraphs, independence number, left groups, path independence number, right groups

Procedia PDF Downloads 202
5746 Some Factors Affecting Farm Size of Duck Farming

Authors: Veronica Sri Lestari, Ahmad Ramadhan Siregar

Abstract:

The purpose of this research was to identify factors affecting the farm size of duck farming (a case study in Pinrang district, South Sulawesi). The research was conducted in 2013. The total sample was 45 duck farmers, selected from 6 regions in Mattiro Sompe sub-district, Pinrang district, South Sulawesi province, through stratified random sampling. Data were collected through interviews using questionnaires and through observation. A multiple regression equation was used to analyze the data, with duck population as the dependent variable and age of respondents, farming experience, land size, education, and income level as independent variables. This research revealed an R² of 0.920. Taken simultaneously, age of respondents, farming experience, land size, education, and income level significantly influenced the farm size of duck farming (P < 1%); taken individually, only income influenced farm size (P < 1%).

Keywords: duck, dry system, factors, farm-size

Procedia PDF Downloads 460
5746 Synthesis and Functionalization of Gold Nanostars for ROS Production

Authors: H. D. Duong, J. I. Rhee

Abstract:

In this work, gold nanoparticles with a star shape (gold nanostars, GNS) were synthesized and coated with N-(3-aminopropyl) methacrylamide hydrochloride (PA) and mercaptopropionic acid (MPA) to functionalize their surface with amine and carboxyl groups, and then investigated for ROS production. Large GNS with multiple tips appear to be superior in singlet oxygen production compared with small GNS with fewer tips. However, the functionalized GNS of small size could also roughly double the efficiency of singlet oxygen production compared with that of the intact GNS. In combination with methylene blue (MB+), the functionalized GNS could enhance the singlet oxygen production of MB+ after 1 h of LED750 irradiation, and no difference between small and large sizes was observed in this reaction. In combination with 5-aminolevulinic acid (ALA), only PA-coated GNS could enhance the singlet oxygen production of ALA, and small PA-coated GNS had a slightly greater effect than larger ones. However, small MPA-coated GNS had a strong effect on the hydroxyl radical production of ALA.

Keywords: 5-aminolevulinic acid, gold nanostars, methylene blue, ROS production

Procedia PDF Downloads 314
5745 Self-Assembled Tin Particles Made by Plasma-Induced Dewetting

Authors: Han Joo Choe, Soon-Ho Kwon, Jung-Joong Lee

Abstract:

Tin particles of various sizes and distributions were self-assembled by plasma treating tin films deposited on silicon oxide substrates. Plasma treatment was conducted using an inductively coupled plasma (ICP) source. A range of ICP powers and topographically templated substrates were evaluated to observe changes in particle size and distribution. Scanning electron microscopy images of the particles were analyzed using computer software. The evolution of tin film dewetting into particles initiated from hole nucleation at grain boundaries. Increasing the ICP power during plasma treatment produced a larger number of particles per unit area, a smaller particle size, and a narrower particle-size distribution. Topographic templates were also effective in positioning and controlling the size of the particles. By combining the effects of ICP power and topographic templates, particles of similar size and well-ordered distribution were obtained.

Keywords: dewetting, particles, plasma, tin

Procedia PDF Downloads 221