Search results for: discrete latent variable
Paper Count: 3138

3138 A Two-Week and Six-Month Stability of Cancer Health Literacy Classification Using the CHLT-6

Authors: Levent Dumenci, Laura A. Siminoff

Abstract:

Health literacy has been shown to predict a variety of health outcomes. Reliable identification of persons with limited cancer health literacy (LCHL) has proved questionable with existing instruments, which rely on an arbitrary cut point along a continuum. The CHLT-6, however, uses a latent mixture modeling approach to identify persons with LCHL. The purpose of this study was to estimate the two-week and six-month stability of identifying persons with LCHL using the CHLT-6, with a discrete latent variable approach as the underlying measurement structure. Using a test-retest design, the CHLT-6 was administered to cancer patients at two-week (N=98) and six-month (N=51) intervals. The two-week and six-month latent test-retest agreements were 89% and 88%, respectively. The chance-corrected latent agreements estimated from Dumenci's latent kappa were 0.62 (95% CI: 0.41–0.82) and 0.47 (95% CI: 0.14–0.80) for the two-week and six-month intervals, respectively. High levels of latent test-retest agreement between the limited and adequate categories of the cancer health literacy construct, coupled with moderate to good levels of chance-corrected latent agreement, indicate that the CHLT-6 classification of limited versus adequate cancer health literacy is relatively stable over time. In conclusion, the measurement structure underlying the instrument allows classification errors to be estimated, circumventing the limitations of the arbitrary cut-point approaches adopted by other instruments. The CHLT-6 can be used to identify persons with LCHL in oncology clinics and in intervention studies to accurately estimate treatment effectiveness.
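
As a point of reference for the chance-corrected agreement reported above, the sketch below computes ordinary Cohen's kappa from a toy 2x2 test-retest table of limited/adequate classifications. It is only an observed-data analog of Dumenci's latent kappa (which corrects agreement at the level of the latent classes), and the counts are illustrative, not the study data.

```python
import numpy as np

# Toy 2x2 test-retest classification table (rows: time 1, cols: time 2);
# counts are illustrative, not the study data.
table = np.array([[30, 6],
                  [5, 57]], dtype=float)

n = table.sum()
p_obs = np.trace(table) / n                                    # observed agreement
p_chance = (table.sum(axis=1) / n) @ (table.sum(axis=0) / n)   # agreement expected by chance
kappa = (p_obs - p_chance) / (1 - p_chance)                    # chance-corrected agreement
print(f"observed agreement = {p_obs:.2f}, kappa = {kappa:.2f}")
```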

Keywords: limited cancer health literacy, the CHLT-6, discrete latent variable modeling, latent agreement

Procedia PDF Downloads 178
3137 Deep Learning with Noisy Labels: Learning True Labels as a Discrete Latent Variable

Authors: Azeddine El-Hassouny, Chandrashekhar Meshram, Geraldin Nanfack

Abstract:

In recent years, learning from data with noisy labels (label noise) has been a major concern in supervised learning. This problem has become even more worrying in deep learning, where generalization capabilities have lately been questioned. Indeed, deep learning requires large amounts of data that are generally collected by search engines, which frequently return data with unreliable labels. In this paper, we investigate label noise in deep learning using variational inference. Our contributions are: (1) exploiting the label noise concept, where the true labels are learnt using reparameterized variational inference while the observed labels are learnt discriminatively; (2) learning the noise transition matrix during training without any particular process, neither heuristics nor preliminary phases. The theoretical results show how the true label distribution can be learned by variational inference in any discriminative neural network, and the effectiveness of our approach is demonstrated on several target datasets, such as MNIST and CIFAR32.
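
The central idea, treating the true label as a discrete latent variable linked to the observed label through a noise transition matrix, can be sketched as follows. This is a minimal forward-correction-style sketch, not the authors' reparameterized variational scheme; `NoisyLabelModel`, the toy data and the training loop are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLabelModel(nn.Module):
    """Classifier over latent true labels plus a learnable noise transition matrix."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                      nn.Linear(128, n_classes))
        # unconstrained parameters; softmax of each row gives p(observed label | true label)
        self.transition_logits = nn.Parameter(torch.eye(n_classes) * 4.0)

    def forward(self, x):
        p_true = F.softmax(self.backbone(x), dim=1)       # p(y_true | x), the latent variable
        T = F.softmax(self.transition_logits, dim=1)      # noise transition matrix
        p_obs = p_true @ T                                # p(y_observed | x)
        return p_true, p_obs

# toy training loop on random data (10 classes, 32-dim features)
model = NoisyLabelModel(32, 10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 32)
y_noisy = torch.randint(0, 10, (256,))
for _ in range(100):
    _, p_obs = model(x)
    loss = F.nll_loss(torch.log(p_obs + 1e-8), y_noisy)   # fit the *observed* labels
    opt.zero_grad(); loss.backward(); opt.step()
```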

Keywords: label noise, deep learning, discrete latent variable, variational inference, MNIST, CIFAR32

Procedia PDF Downloads 127
3136 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling

Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari

Abstract:

A new definition of a latent variable associated with a dataset makes it possible to propose variants of the PLS2 regression and the multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively, because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t=Zw with ‖w‖=1. Denoting by Z' the transpose of Z, we define herein a latent variable by t=ZZ'q with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w=Z'q is constrained to be a linear combination of the rows of Z. More importantly, t can be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear to the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u=YY'q to which we associate the variable t=XX'YY'q. Rd-PLS consists in seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists in setting q to the eigenvector of YY'XX'YY' associated with the largest eigenvalue. For the determination of higher order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u=YY'q, we consider its 'projection' on the space generated by the variables of each block Xk (k=1, ..., K), namely, tk=XkXk'YY'q. Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with tk (k=1, ..., K). The solution to this problem is given by q, the eigenvector of YY'XX'YY' associated with the largest eigenvalue, where X is the dataset obtained by horizontally merging the datasets Xk (k=1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t=XX'YY'q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. The methods are illustrated on the basis of case studies, and the performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.
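
The first Rd-PLS component described above can be computed directly from its eigenvector characterization. The NumPy sketch below does this on random centered data; the data and dimensions are placeholders, and the deflation step for higher-order components is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m = 50, 10, 4                        # individuals, X-variables, Y-variables
X = rng.standard_normal((n, p)); X -= X.mean(axis=0)   # centred datasets
Y = rng.standard_normal((n, m)); Y -= Y.mean(axis=0)

# q = eigenvector of YY'XX'YY' associated with the largest eigenvalue
M = Y @ Y.T @ X @ X.T @ Y @ Y.T            # symmetric PSD matrix of size n x n
eigvals, eigvecs = np.linalg.eigh(M)
q = eigvecs[:, -1]                         # auxiliary variable, ||q|| = 1

u = Y @ Y.T @ q                            # latent variable on the Y side, u = YY'q
t = X @ X.T @ u                            # latent variable on the X side, t = XX'YY'q
w = X.T @ u                                # loading vector, so that t = X @ w

print("cov(t, u) =", (t @ u) / (n - 1))
```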

Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis

Procedia PDF Downloads 147
3135 Discrete-Time Bulk Queue with Service Capacity Depending on Previous Service Time

Authors: Yutae Lee

Abstract:

This paper considers a discrete-time bulk-arrival, bulk-service queueing system in which the service capacity varies depending on the previous service time. Using the generating function technique and the supplementary variable method, we compute the distributions of the queue length at an arbitrary slot boundary and at a departure time.
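
The analytic generating-function/supplementary-variable treatment is not reproduced here, but the system itself is easy to simulate. The sketch below estimates the slot-boundary queue-length distribution under assumed arrival, service-time and capacity rules (Poisson batch arrivals, service times of 1-4 slots, and a capacity that shrinks after long services); these rules are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
slots = 200_000
queue = 0
remaining = 0            # slots left in the current service
prev_service = 1         # duration of the previous service
counts = np.zeros(50)

for _ in range(slots):
    queue += rng.poisson(0.8)                  # bulk arrivals per slot (assumed Poisson batches)
    if remaining == 0 and queue > 0:
        # capacity depends on the previous service time (assumed rule)
        capacity = 4 if prev_service <= 2 else 2
        queue -= min(queue, capacity)          # a batch enters service
        prev_service = rng.integers(1, 5)      # new service time, 1..4 slots (assumed)
        remaining = prev_service
    if remaining > 0:
        remaining -= 1
    counts[min(queue, 49)] += 1                # record queue length at the slot boundary

print("P(queue length = k), k = 0..5:", np.round(counts[:6] / slots, 4))
```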

Keywords: discrete-time queue, bulk queue, variable service capacity, queue length distribution

Procedia PDF Downloads 476
3134 A Contribution to the Polynomial Eigen Problem

Authors: Malika Yaici, Kamel Hariche, Tim Clarke

Abstract:

The relationship between eigenstructure (eigenvalues and eigenvectors) and latent structure (latent roots and latent vectors) is established. In control theory eigenstructure is associated with the state space description of a dynamic multi-variable system and a latent structure is associated with its matrix fraction description. Beginning with block controller and block observer state space forms and moving on to any general state space form, we develop the identities that relate eigenvectors and latent vectors in either direction. Numerical examples illustrate this result. A brief discussion of the potential of these identities in linear control system design follows. Additionally, we present a consequent result: a quick and easy method to solve the polynomial eigenvalue problem for regular matrix polynomials.
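
One standard way to solve the polynomial eigenvalue problem mentioned at the end of the abstract is block-companion linearization, which turns a matrix polynomial into an ordinary generalized eigenproblem. Whether this coincides with the authors' quick method is not stated here, so the sketch below should be read as the textbook approach applied to an illustrative quadratic matrix polynomial.

```python
import numpy as np
from scipy.linalg import eig

# Quadratic matrix polynomial P(s) = A2 s^2 + A1 s + A0 (illustrative matrices)
A0 = np.array([[2.0, 0.0], [0.0, 3.0]])
A1 = np.array([[0.5, 0.1], [0.0, 0.7]])
A2 = np.eye(2)

n = A0.shape[0]
# Block-companion pencil: latent roots of P(s) are the eigenvalues of (C, B)
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [-A0,              -A1]])
B = np.block([[np.eye(n),        np.zeros((n, n))],
              [np.zeros((n, n)), A2]])

latent_roots, V = eig(C, B)
latent_vectors = V[:n, :]        # the top block of each eigenvector is a latent vector of P(s)
print("latent roots:", np.round(latent_roots, 4))
```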

Keywords: eigenvalues/eigenvectors, latent values/vectors, matrix fraction description, state space description

Procedia PDF Downloads 470
3133 Prevalence of Workplace Bullying in Hong Kong: A Latent Class Analysis

Authors: Catalina Sau Man Ng

Abstract:

Workplace bullying is generally defined as a form of direct and indirect maltreatment at work, including harassing, offending, socially isolating someone, or negatively affecting someone's work tasks. Workplace bullying is unfortunately commonplace around the world, which makes it a social phenomenon worth researching. However, the measurements and estimation methods of workplace bullying differ across studies, leading to dubious results. Hence, this paper attempts to examine the prevalence of workplace bullying in Hong Kong using the latent class analysis approach. It is often argued that the traditional classification of workplace bullying into the dichotomous 'victims' and 'non-victims' may not fully represent the complex phenomenon of bullying. By treating workplace bullying as one latent variable and examining the potential categorical distribution within the latent variable, a more thorough understanding of workplace bullying in real-life situations may be provided. As a result, this study adopts a latent class analysis method, which has previously been shown to have higher construct and predictive validity. In the present study, a representative sample of 2814 employees (male: 54.7%, female: 45.3%) in Hong Kong was recruited. The participants were asked to fill in a self-reported questionnaire which included measurements such as the Chinese Workplace Bullying Scale (CWBS) and the Chinese version of the Depression Anxiety Stress Scale (DASS). It is estimated that four latent classes will emerge: 'non-victims', 'seldom bullied', 'sometimes bullied', and 'victims'. The results for each latent class and the implications of the study are also discussed in this working paper.
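
Latent class analysis with binary bullying items can be fitted with a simple EM algorithm for a mixture of independent Bernoulli variables. The sketch below is a minimal illustration on simulated data with an assumed four-class solution; it is not the software or model specification used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 9))    # 500 respondents x 9 binary bullying items (toy data)
K = 4                                    # assumed number of latent classes

# random initialisation of class weights and item-endorsement probabilities
pi = np.full(K, 1.0 / K)
theta = rng.uniform(0.25, 0.75, size=(K, X.shape[1]))

for _ in range(200):                     # EM iterations
    # E-step: posterior class-membership probabilities (responsibilities)
    log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
    log_lik -= log_lik.max(axis=1, keepdims=True)
    resp = np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update class weights and conditional item probabilities
    Nk = resp.sum(axis=0)
    pi = Nk / X.shape[0]
    theta = np.clip(resp.T @ X / Nk[:, None], 1e-6, 1 - 1e-6)

print("estimated class sizes:", np.round(pi, 3))
```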

Keywords: latent class analysis, prevalence, survey, workplace bullying

Procedia PDF Downloads 330
3132 The Latent Model of Linguistic Features in Korean College Students’ L2 Argumentative Writings: Syntactic Complexity, Lexical Complexity, and Fluency

Authors: Jiyoung Bae, Gyoomi Kim

Abstract:

This study explores a range of linguistic features used in Korean college students' argumentative writings for the purpose of developing a model that identifies variables which predict writing proficiency. The study investigated the latent variable structure of L2 linguistic features, including syntactic complexity, lexical complexity, and fluency. One hundred forty-six university students in Korea participated in this study. The results of the study's confirmatory factor analysis (CFA) showed that the indicators of linguistic features from this study provided a foundation for re-categorizing indicators found in extant research on L2 Korean writers according to each latent variable of linguistic features. The CFA models indicated one measurement model of L2 syntactic complexity and L2 learners' writing proficiency; these two latent factors were correlated with each other. Based on the overall findings of the study, the integrated linguistic features of L2 writings suggest some pedagogical implications for L2 writing instruction.
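
For readers unfamiliar with CFA, the measurement model underlying analyses like this one is usually written as below, with the observed indicators (here, syntactic complexity, lexical complexity, and fluency indices) loading on a small number of latent factors. This is the generic CFA formulation, not the study's exact specification.

```latex
\begin{aligned}
\mathbf{x} &= \boldsymbol{\Lambda}\,\boldsymbol{\xi} + \boldsymbol{\delta},\\
\boldsymbol{\Sigma} &= \boldsymbol{\Lambda}\,\boldsymbol{\Phi}\,\boldsymbol{\Lambda}^{\top} + \boldsymbol{\Theta}_{\delta},
\end{aligned}
```

where x collects the observed indicators, ξ the latent factors (e.g., syntactic complexity and writing proficiency), Λ the factor loadings, Φ the factor covariance matrix, and Θ_δ the unique variances.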

Keywords: linguistic features, syntactic complexity, lexical complexity, fluency

Procedia PDF Downloads 170
3131 Exact Solutions of Discrete Sine-Gordon Equation

Authors: Chao-Qing Dai

Abstract:

Two families of exact travelling solutions for the discrete sine-Gordon equation are constructed based on the variable-coefficient Jacobian elliptic function method and different transformations. When the modulus of the Jacobian elliptic function solutions tends to 1, soliton solutions can be obtained. Some soliton solutions degenerate into solutions already known in the literature. Moreover, the dynamical properties of the exact solutions are investigated. Our analysis and results may have potential value for certain applications in modern nonlinear science and textile engineering.
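
For orientation, one common form of the spatially discrete sine-Gordon equation and the soliton limit of the elliptic-function solutions are shown below; the exact lattice form and transformations used in the paper may differ.

```latex
\frac{d^{2} u_{n}}{dt^{2}} \;=\; \frac{1}{h^{2}}\bigl(u_{n+1} - 2u_{n} + u_{n-1}\bigr) \;-\; \sin u_{n},
\qquad n \in \mathbb{Z},
```

with solutions sought as combinations of the Jacobi elliptic functions sn, cn, dn of ξ_n = k n h + ω t; as the modulus m → 1, sn(ξ, m) → tanh ξ and cn(ξ, m) → sech ξ, which is how the elliptic-function families degenerate into soliton solutions.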

Keywords: exact solutions, variable-coefficient Jacobian elliptic function method, discrete sine-Gordon equation, dynamical behaviors

Procedia PDF Downloads 420
3130 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder

Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen

Abstract:

Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; this task is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative Encoder-Decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence, we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained with the guide of explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark explanation-Stanford Natural Language Inference (e-SNLI) demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.

Keywords: natural language inference, explanation generation, variational auto-encoder, generative model

Procedia PDF Downloads 151
3129 The Application of the Variable Coefficient Jacobian Elliptic Function Method to Differential-Difference Equations

Authors: Chao-Qing Dai

Abstract:

In modern nonlinear science and textile engineering, nonlinear differential-difference equations are often used to describe nonlinear phenomena. In this paper, we extend the variable coefficient Jacobian elliptic function method, which was used to find new exact travelling wave solutions of nonlinear partial differential equations, to nonlinear differential-difference equations. As an illustration, we derive two series of Jacobian elliptic function solutions of the discrete sine-Gordon equation.

Keywords: discrete sine-Gordon equation, variable coefficient Jacobian elliptic function method, exact solutions, equation

Procedia PDF Downloads 668
3128 Speech Intelligibility Improvement Using Variable Level Decomposition DWT

Authors: Samba Raju Chiluveru, Manoj Tripathy

Abstract:

Intelligibility is an essential characteristic of a speech signal; it quantifies how well the information in the signal can be understood. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform method that improves the intelligibility of speech. The proposed algorithm does not require an explicit estimate of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and it reduces the computational burden. The proposed algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results revealed that the proposed scheme outperformed the competing methods.
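
A per-frame wavelet decomposition with soft thresholding, the general machinery the abstract builds on, can be sketched with PyWavelets as follows. The frame, the level-selection rule and the threshold are placeholders for the paper's variance-subtracted, signal/noise-dominance criteria.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 220 * t) * np.hanning(fs)         # toy "speech" frame
noisy = clean + 0.3 * rng.standard_normal(fs)

# choose a decomposition level per frame (placeholder for the signal/noise-dominance rule)
level = 5 if np.var(noisy) > 0.5 else 3
coeffs = pywt.wavedec(noisy, 'db8', level=level)

# variance-based soft thresholding of the detail coefficients
sigma = np.median(np.abs(coeffs[-1])) / 0.6745               # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(len(noisy)))
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
enhanced = pywt.waverec(coeffs, 'db8')[:len(noisy)]
print("enhanced frame ready, length", len(enhanced))
```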

Keywords: discrete wavelet transform, speech intelligibility, STOI, standard deviation

Procedia PDF Downloads 148
3127 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of the latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study on the Alzheimer's Disease Neuroimaging Initiative is presented.
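
The three parts of the model can be written schematically as below; this is a generic rendering of a dynamic-factor measurement model, a linear random-coefficient trajectory, and a proportional hazards submodel with the latent trajectory as a covariate, not the authors' exact parameterization.

```latex
\begin{aligned}
\mathbf{y}_{ij} &= \boldsymbol{\mu} + \boldsymbol{\Lambda}\,\boldsymbol{\xi}_i(t_{ij}) + \boldsymbol{\epsilon}_{ij}, && \text{(dynamic factor / measurement model)}\\
\boldsymbol{\xi}_i(t) &= \mathbf{b}_{0i} + \mathbf{b}_{1i}\,t, && \text{(random-coefficient trajectory)}\\
h_i(t) &= h_0(t)\exp\!\bigl\{\boldsymbol{\gamma}^{\top}\mathbf{z}_i + \boldsymbol{\alpha}^{\top}\boldsymbol{\xi}_i(t)\bigr\}, && \text{(proportional hazards)}
\end{aligned}
```

where y_ij are the observed indicators for subject i at time t_ij, ξ_i(t) the latent risk factors, z_i the time-invariant predictors, and h_0 the baseline hazard.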

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 143
3126 Modeling Route Selection Using Real-Time Information and GPS Data

Authors: William Albeiro Alvarez, Gloria Patricia Jaramillo, Ivan Reinaldo Sarmiento

Abstract:

Understanding the behavior of individuals and the human factors that influence their choices within a complex system such as transportation is one of the most complicated aspects of route choice modeling, because various behaviors and driving modes directly or indirectly affect the choice. During the last two decades, with the development of information and communications technologies, new data collection techniques have emerged, such as GPS, geolocation with mobile phones, apps for choosing the route between origin and destination, and individual service transport applications, among others; these developments, together with the psychological factors that affect decision making, have generated interest in improving discrete choice models. This paper proposes and estimates a hybrid model that integrates route choice models and latent variables, based on observation of the routes of a sample of public taxi drivers from the city of Medellín, Colombia, in relation to their behavior, personality, socioeconomic characteristics, and driving mode. The set of choice options includes the routes generated by the individual service transport applications versus the driver's own choice. The hybrid model consists of measurement equations that relate latent variables with measurement indicators and utilities with choice indicators, along with structural equations that link the observable characteristics of drivers with latent variables and explanatory variables with utilities.
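
A generic integrated choice and latent variable (hybrid) formulation of the kind described is shown below for orientation; the symbols and functional forms are illustrative, not the authors' exact specification.

```latex
\begin{aligned}
U_{in} &= \beta^{\top} X_{in} + \lambda^{\top}\mathrm{LV}_{n} + \varepsilon_{in}, && \text{(utility of route } i \text{ for driver } n\text{)}\\
I_{kn} &= \alpha_{k} + \zeta_{k}^{\top}\mathrm{LV}_{n} + \nu_{kn}, && \text{(measurement equations for indicators)}\\
\mathrm{LV}_{n} &= \Gamma\, s_{n} + \omega_{n}, && \text{(structural equations: driver characteristics)}
\end{aligned}
```

with the chosen route given by the alternative of maximum utility.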

Keywords: behavior choice model, human factors, hybrid model, real time data

Procedia PDF Downloads 152
3125 Herbal Based Fingerprint Powder Formulation for Latent Fingermark Visualization: Catechu (Kattha)

Authors: Pallavi Thakur, Rakesh K. Garg

Abstract:

Latent fingerprints are commonly encountered evidence at the scene of a crime. It is very important to decipher these fingerprints in order to establish identity, and a great deal of research has been carried out on the visualization of latent fingermarks on various substrates. During the past few years, a large number of powder formulations have been developed for the visualization of latent fingermarks on different surfaces. This paper reports a new and simple fingerprint powder which is non-toxic and has been employed successfully on different substrates for the development and visualization of latent fingermarks up to twelve days old under varying temperature conditions. In this study, an inexpensive, simple and easily available catechu (kattha) powder was used to decipher latent fingermarks on different substrates, namely glass, plastic, metal, aluminium foil, white paper, wall tile and wooden sheet. It was observed that the powder gives very clear results on all the mentioned substrates and can be successfully used for the development and visualization of twelve-day-old latent fingermarks under varying temperature conditions on wall tiles.

Keywords: fingermarks, catechu, visualization, aged fingermarks

Procedia PDF Downloads 188
3124 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are extracted from a matrix of world indices by principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets, in our paper a pool of international stock indices, and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies the principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the MN-GARCHX benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
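
The PCA step described above, extracting a few uncorrelated common factors from a panel of index returns, can be sketched with scikit-learn as follows; the return matrix is a random placeholder, and the GARCHX and HTSN estimation steps are not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
# toy daily log-returns of 8 world indices over 1000 days (placeholder data)
returns = rng.standard_normal((1000, 8)) * 0.01

pca = PCA(n_components=3)
factors = pca.fit_transform(returns)        # uncorrelated common factors, sorted by importance
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))

# the three factor series would then enter the GARCHX variance equation as exogenous terms
print("factor correlation matrix (~identity):\n", np.round(np.corrcoef(factors.T), 2))
```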

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 297
3123 Optimization of Fourth Order Discrete-Approximation Inclusions

Authors: Elimhan N. Mahmudov

Abstract:

The paper concerns the necessary and sufficient conditions of optimality for the Cauchy problem of fourth-order discrete (PD) and discrete-approximate (PDA) inclusions. The main problem is the formulation of the fourth-order adjoint discrete and discrete-approximate inclusions and transversality conditions, which are peculiar to problems involving fourth-order derivatives and approximate derivatives. The necessary and sufficient conditions of optimality are thus obtained by incorporating the Euler-Lagrange and Hamiltonian forms of the inclusions. The derivation of the optimality conditions is based on the apparatus of the locally adjoint mapping (LAM). Moreover, as an application of these results, we consider fourth-order linear discrete and discrete-approximate inclusions.

Keywords: difference, optimization, fourth, approximation, transversality

Procedia PDF Downloads 374
3122 Topic Modelling Using Latent Dirichlet Allocation and Latent Semantic Indexing on SA Telco Twitter Data

Authors: Phumelele Kubheka, Pius Owolawi, Gbolahan Aiyetoro

Abstract:

Twitter is one of the most popular social media platforms where users can share their opinions on different subjects. As of 2010, the Twitter platform generates more than 12 terabytes of data daily, about 4.3 petabytes in a single year. For this reason, Twitter is a great source for big data mining. Many industries, such as telecommunication companies, can leverage the availability of Twitter data to better understand their markets and make appropriate business decisions. This study performs topic modeling on Twitter data using Latent Dirichlet Allocation (LDA). The obtained results are benchmarked against another topic modeling technique, Latent Semantic Indexing (LSI). The study aims to retrieve topics from a Twitter dataset containing user tweets on South African telcos. Results from this study show that LSI is much faster than LDA. However, LDA yields better results, with topic coherence higher by 8% for the best-performing model represented in Table 1. A higher topic coherence score indicates better performance of the model.
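
Fitting LDA and LSI on a bag-of-words corpus and scoring topic coherence can be done with gensim roughly as below; the tokenised tweets, the number of topics and the u_mass coherence choice are illustrative stand-ins for the study's dataset and settings.

```python
from gensim import corpora, models

# toy corpus of tokenised tweets (placeholder for the SA telco dataset)
texts = [["network", "slow", "data", "bundle"],
         ["great", "customer", "service", "upgrade"],
         ["signal", "drop", "network", "complaint"],
         ["data", "price", "bundle", "promotion"]]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
lsi = models.LsiModel(corpus, num_topics=2, id2word=dictionary)

coherence = models.CoherenceModel(model=lda, corpus=corpus,
                                  dictionary=dictionary, coherence='u_mass')
print(lda.print_topics())
print("LDA u_mass coherence:", round(coherence.get_coherence(), 3))
```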

Keywords: big data, latent Dirichlet allocation, latent semantic indexing, telco, topic modeling, twitter

Procedia PDF Downloads 150
3121 Web 2.0 Enabling Knowledge-Sharing Practices among Students of IIUM: An Exploration of the Determinants

Authors: Shuaibu Hassan Usman, Ishaq Oyebisi Oyefolahan

Abstract:

This study aimed to explore the latent factors in the Web 2.0-enabled knowledge-sharing practices instrument. Seven latent factors were identified through a factor analysis with orthogonal rotation and interpreted based on simple structure convergence, item loadings, and analytical statistics. The number of factors retained was based on the Kaiser normalization criterion and the scree plot. The reliability tests revealed satisfactory reliability scores for each of the seven latent factors of the Web 2.0-enabled knowledge-sharing practices. The limitations, conclusions, and future work of this study are also discussed.
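
The retention-and-rotation procedure described (Kaiser criterion, scree inspection, orthogonal rotation) can be sketched with scikit-learn as follows; the survey matrix is simulated, and the varimax option assumes a scikit-learn version that supports rotation="varimax" in FactorAnalysis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
# toy survey data: 300 respondents x 21 Likert-type items (placeholder for the IIUM survey)
X = rng.normal(size=(300, 21))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Kaiser criterion: retain factors whose correlation-matrix eigenvalues exceed 1
eigvals = np.linalg.eigvalsh(np.corrcoef(Xs, rowvar=False))[::-1]
n_factors = int((eigvals > 1).sum())
print("eigenvalues (scree):", np.round(eigvals[:8], 2), "-> retain", n_factors)

# orthogonal (varimax) rotation of the retained factors
fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(Xs)
loadings = fa.components_.T          # items x factors loading matrix
print("loadings shape:", loadings.shape)
```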

Keywords: factor analysis, latent factors, knowledge sharing practices, students, web 2.0 enabled

Procedia PDF Downloads 434
3120 Fingerprints on Ballistics after Shooting

Authors: Narong Kulnides

Abstract:

This research involved fingerprints on ballistics after shooting. The two objectives of the research were as follows: (1) to study the duration of the existence of latent fingerprints on .38, .45, 9 mm and .223 cartridge cases after shooting, and (2) to compare the effectiveness of the detection of latent fingerprints by Black Powder, Super Glue, Perma Blue and Gun Bluing. The appearance of latent fingerprints was studied on .38, .45, 9 mm and .223 cartridge cases before and after shooting with Black Powder, Super Glue, Perma Blue and Gun Bluing. The detection times were 3 minutes and 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, 78 and 84 hours, respectively. As a result of the study, it can be concluded that: (1) before shooting, latent fingerprints on .38, .45, 9 mm and .223 cartridge cases could be detected with Black Powder, Super Glue, Perma Blue and Gun Bluing at all detection times; (2) after shooting, latent fingerprints on .38, .45, 9 mm and .223 cartridge cases could not be detected with Black Powder or Super Glue. Latent fingerprints on .38, .45 and 9 mm cartridge cases were detected with Perma Blue and Gun Bluing 100% of the time, and latent fingerprints on .223 cartridge cases were detected with Perma Blue and Gun Bluing 40% and 46.67% of the time, respectively.

Keywords: ballistic, fingerprint, shooting, detection times

Procedia PDF Downloads 418
3119 Discrete Group Search Optimizer for the Travelling Salesman Problem

Authors: Raed Alnajjar, Mohd Zakree, Ahmad Nazri

Abstract:

In this study, we apply the Discrete Group Search Optimizer (DGSO) to the Traveling Salesman Problem (TSP). The DGSO is a nature-inspired optimization algorithm that imitates animal behavior, especially animal searching behavior. The proposed DGSO uses a vector representation and several discrete operators, such as destruction, construction, differential evolution, swap and insert. The TSP is a well-known hard combinatorial optimization problem, which seeks the shortest tour through a given set of cities. The performance of the proposed DGSO is evaluated and tested on benchmark instances listed in the TSPLIB dataset. The experimental results show that the performance of the proposed DGSO is comparable with other state-of-the-art methods on some instances. The results show that DGSO outperforms the Ant Colony System (ACS) on some instances while outperforming other metaheuristics on most instances. In addition, DGSO obtained a number of optimal solutions and matched some best-known results. DGSO was able to obtain feasible, good-quality solutions across all datasets.
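
Two of the discrete operators named above, swap and insert, together with tour-length evaluation, can be sketched as follows; the distance matrix, the acceptance rule and the simple loop are illustrative and do not reproduce the full DGSO (producer/scrounger/ranger) scheme.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swap(tour, i, j):
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def insert(tour, i, j):
    t = tour[:]
    t.insert(j, t.pop(i))
    return t

# toy symmetric distance matrix for 6 cities
random.seed(0)
n = 6
dist = [[0 if i == j else random.randint(5, 30) for j in range(n)] for i in range(n)]
dist = [[dist[min(i, j)][max(i, j)] for j in range(n)] for i in range(n)]   # symmetrise

tour = list(range(n))
best = tour_length(tour, dist)
for _ in range(500):                        # crude local search using the two operators
    i, j = random.sample(range(n), 2)
    cand = swap(tour, i, j) if random.random() < 0.5 else insert(tour, i, j)
    if (c := tour_length(cand, dist)) < best:
        tour, best = cand, c
print("best tour:", tour, "length:", best)
```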

Keywords: discrete group search optimizer (DGSO), travelling salesman problem (TSP), variable neighborhood search (VNS)

Procedia PDF Downloads 324
3118 Visualization of Latent Sweat Fingerprints Deposited on Paper by Infrared Radiation and Blue Light

Authors: Xiaochun Huang, Xuejun Zhao, Yun Zou, Feiyu Yang, Wenbin Liu, Nan Deng, Ming Zhang, Nengbin Cai

Abstract:

A simple device based on infrared radiation (IR) was developed for the rapid visualization of sweat fingerprints deposited on paper under blue light (450 nm, 11 W). In this approach, IR serves as a pretreatment device before the sweat fingerprints are illuminated by blue light. An annular blue light source was adopted for visualizing the latent sweat fingerprints. Sample fingerprints were examined under various conditions after deposition, and the experimental results indicate that the recovery rate of latent sweat fingerprints is in the range of 50%-100% without chemical treatments. A mechanism for the observed visibility is proposed based on the transportation and re-impregnation of fluorescer in the paper at the region of water. Further exploratory experimental results fully support the proposed visibility mechanism. Therefore, such an IR-pretreatment method for detecting latent fingerprints may be preferable for examinations in which biological information from the samples is needed for subsequent testing.

Keywords: forensic science, visualization, infrared radiation, blue light, latent sweat fingerprints, detection

Procedia PDF Downloads 497
3117 Development of Zinc Oxide Coated Carbon Nanoparticles from Pineapple Leaves Using the Sol-Gel Method for Optimal Adsorption of Copper Ions and Reuse in Latent Fingerprint Detection

Authors: Bienvenu Gael Fouda Mbanga, Zikhona Tywabi-Ngeva, Kriveshini Pillay

Abstract:

This work highlights a new sol-gel method for preparing a nanocomposite of nitrogen carbon nanoparticles fused on zinc oxide nanoparticles (N-CNPs/ZnONPsNC) to remove copper ions (Cu²+) from wastewater, and applies the metal-loaded adsorbent to latent fingerprint detection. The N-CNPs/ZnONPsNC proved to be an effective sorbent, with optimum Cu²+ sorption at pH 8 and a 0.05 g dose. The Langmuir isotherm was found to fit the process best, with a maximum adsorption capacity of 285.71 mg/g, which is higher than most values reported in other research for Cu²+ removal. Adsorption was spontaneous and endothermic at 25 °C. In addition, the Cu²+-loaded N-CNPs/ZnONPsNC was found to be sensitive and selective for latent fingerprint (LFP) recognition on a range of porous surfaces. As a result, it is an effective distinguishing chemical for latent fingerprint detection in forensic research.
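
For reference, the Langmuir isotherm mentioned above has the standard nonlinear and linearized forms

```latex
q_e \;=\; \frac{q_{\max}\, K_L\, C_e}{1 + K_L\, C_e},
\qquad
\frac{C_e}{q_e} \;=\; \frac{1}{q_{\max} K_L} + \frac{C_e}{q_{\max}},
```

where q_e is the equilibrium adsorption capacity (mg/g), C_e the equilibrium Cu²+ concentration (mg/L), q_max the maximum capacity (285.71 mg/g reported here), and K_L the Langmuir constant.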

Keywords: latent fingerprint, nanocomposite, adsorption, copper ions, metal loaded adsorption, adsorbent

Procedia PDF Downloads 83
3116 Role of Discrete Event Simulation in the Assessment and Selection of the Potential Reconfigurable Manufacturing Solutions

Authors: Mohsin Raza, Arne Bilberg, Thomas Ditlev Brunø, Ann-Louise Andersen, Filip Skärin

Abstract:

Shifting from a dedicated or flexible manufacturing system to a reconfigurable manufacturing system (RMS) requires a significant amount of time, money, and effort. Therefore, it is vital to verify beforehand that the potential reconfigurable solution will be able to achieve the organizational objectives. Discrete event simulation offers the opportunity of assessing several reconfigurable alternatives against the set objectives. This study signifies the importance of using discrete-event simulation as a tool to verify several reconfiguration options. Two different industrial cases have been presented in the study to elaborate on the role of discrete event simulation in the implementation methodology of RMSs. The study concluded that discrete event simulation is one of the important tools to consider in the RMS implementation methodology.
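
A minimal discrete-event simulation of the kind used to compare reconfiguration options can be written with the SimPy library as below; the two-station line, cycle times and arrival pattern are invented for illustration and are unrelated to the industrial cases or the Tecnomatix models in the study.

```python
import simpy

def station(env, name, queue_in, queue_out, cycle_time):
    """A machine that repeatedly takes a part from its input buffer and processes it."""
    while True:
        part = yield queue_in.get()
        yield env.timeout(cycle_time)              # processing time
        if queue_out is not None:
            yield queue_out.put(part)

def source(env, queue, interarrival, n_parts):
    for i in range(n_parts):
        yield env.timeout(interarrival)
        yield queue.put(f"part-{i}")

env = simpy.Environment()
buf1 = simpy.Store(env)
buf2 = simpy.Store(env)
env.process(source(env, buf1, interarrival=4, n_parts=100))
env.process(station(env, "M1", buf1, buf2, cycle_time=3))   # reconfiguration option A
env.process(station(env, "M2", buf2, None, cycle_time=5))   # candidate bottleneck
env.run(until=600)
print("parts waiting before M2 at t=600:", len(buf2.items))
```

Swapping in different cycle times, buffer sizes, or station counts lets each reconfigurable alternative be scored against the same throughput or work-in-progress targets before committing to a physical change.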

Keywords: reconfigurable manufacturing system, discrete event simulation, Tecnomatix plant simulation, RMS

Procedia PDF Downloads 124
3115 Evaluation of the Diagnostic Potential of IL-2 as Biomarker for the Discrimination of Active and Latent Tuberculosis

Authors: Shima Mahmoudi, Setareh Mamishi, Babak Pourakbari, Majid Marjani

Abstract:

In recent years, the potential role of distinct T-cell subsets as biomarkers of active tuberculosis (TB) and/or latent tuberculosis infection (LTBI) has been studied. The aim of this study was to investigate the potential role of interleukin-2 (IL-2) in whole blood stimulated with M. tuberculosis-specific antigens from the QuantiFERON-TB Gold In-Tube (QFT-G-IT) assay for discriminating active from latent tuberculosis. After 72 h of stimulation with antigens from the QFT-G-IT assay, IL-2 secretion was quantitated in supernatants using ELISA (Mabtech AB, Sweden). Observing the level of IL-2 released after 72 h of incubation, we found that IL-2 levels were significantly higher in the LTBI group than in patients with active TB infection or the control group (P value=0.019, Kruskal-Wallis test). The discrimination performance (assessed by the area under the ROC curve) between LTBI and patients with active TB was 0.816 (95%CI: 0.72-0.97). Maximum discrimination was reached at a cut-off of 13.9 pg/mL for IL-2 following stimulation, with 82% sensitivity and 86% specificity. In conclusion, although cytokine analysis has greatly contributed to the understanding of TB pathogenesis, data on cytokine profiles that might distinguish progression from latency of TB infection are scarce and even controversial. Our data indicate that the concomitant evaluation of IFN-γ and IL-2 could be instrumental in discriminating active from latent TB infection.
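
The ROC analysis and cut-off selection reported above follow a standard recipe, sketched below with scikit-learn on simulated IL-2 values; the data are illustrative, and the cut-off is chosen by Youden's J, one common (assumed) criterion for "maximum discrimination".

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(5)
# toy IL-2 concentrations (pg/mL): 1 = LTBI, 0 = active TB (values are illustrative only)
il2 = np.concatenate([rng.lognormal(3.0, 0.6, 40), rng.lognormal(2.0, 0.6, 40)])
group = np.concatenate([np.ones(40), np.zeros(40)])

fpr, tpr, thresholds = roc_curve(group, il2)
print("AUC:", round(auc(fpr, tpr), 3))

# optimal cut-off by Youden's J = sensitivity + specificity - 1
j = tpr - fpr
best = np.argmax(j)
print(f"cut-off = {thresholds[best]:.1f} pg/mL, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```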

Keywords: interleukin-2, discrimination, active TB, latent TB

Procedia PDF Downloads 408
3114 Multidimensional Integral and Discrete Opial–Type Inequalities

Authors: Maja Andrić, Josip Pečarić

Abstract:

Over the last five decades, an enormous amount of work has been done on Opial's integral inequality, dealing with new proofs, various generalizations, extensions and discrete analogs. The Opial inequality is recognized as a fundamental result in the analysis of qualitative properties of solutions of differential equations. We use submultiplicative convex functions, appropriate representations of functions and inequalities involving means to obtain generalizations and extensions of certain known multidimensional integral and discrete Opial-type inequalities.
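
For context, the classical Opial inequality that these generalizations extend reads as follows: for an absolutely continuous function u on [0, h] with u(0) = u(h) = 0,

```latex
\int_{0}^{h} \lvert u(t)\,u'(t)\rvert \, dt \;\le\; \frac{h}{4}\int_{0}^{h} \lvert u'(t)\rvert^{2}\, dt ,
```

and the constant h/4 is best possible.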

Keywords: Opial's inequality, Jensen's inequality, integral inequality, discrete inequality

Procedia PDF Downloads 439
3113 Numerical Modelling of Dry Stone Masonry Structures Based on Finite-Discrete Element Method

Authors: Ž. Nikolić, H. Smoljanović, N. Živaljić

Abstract:

This paper presents a numerical model, based on the finite-discrete element method, for analyzing the structural response of dry stone masonry structures under static and dynamic loads. More precisely, each discrete stone block is discretized by finite elements. Material non-linearity, including fracture and fragmentation of the discrete elements, as well as cyclic behavior during dynamic loading, is considered through contact elements which are implemented within the finite element mesh. The model was applied to several examples of these structures. The performed analyses show high accuracy of the numerical results in comparison with the experimental ones and demonstrate the potential of the finite-discrete element method for modelling the response of dry stone masonry structures.

Keywords: dry stone masonry structures, dynamic load, finite-discrete element method, static load

Procedia PDF Downloads 414
3112 Fundamental Solutions for Discrete Dynamical Systems Involving the Fractional Laplacian

Authors: Jorge Gonzalez Camus, Valentin Keyantuo, Mahamadi Warma

Abstract:

In this work, we obtain representation results for solutions of a time-fractional differential equation involving the discrete fractional Laplace operator in terms of generalized Wright functions. Such equations arise in the modeling of many physical systems, for example, chain processes in chemistry and radioactivity. The focus is on the linear problem of the simplified Moore-Gibson-Thompson equation, where the discrete fractional Laplacian and the Caputo fractional derivative of order in (0,2] are involved. As a particular case, we obtain the explicit solution for the discrete heat equation and the discrete wave equation. Furthermore, we show the explicit solution for the equation involving the Laplacian perturbed by the identity operator. The main tools for obtaining the explicit solutions are the Laplace and discrete Fourier transforms and Stirling's formula. The methodology is mainly to apply both transforms to the equation, to find the inverse of each transform, and to prove that the resulting solution is well defined, using Stirling's formula.
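
In the one-dimensional case, the discrete Laplacian and the Fourier symbol of its fractional power, which is what the discrete Fourier transform diagonalizes in this approach, take the standard form

```latex
(-\Delta_d u)(n) = 2u(n) - u(n+1) - u(n-1), \qquad
\widehat{(-\Delta_d)^{s}u}\,(\theta) = \bigl(4\sin^{2}(\theta/2)\bigr)^{s}\,\widehat{u}(\theta),
\quad \theta \in (-\pi, \pi],
```

so the fundamental solutions are obtained by inverting this multiplier together with the Laplace transform in time.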

Keywords: discrete fractional Laplacian, explicit representation of solutions, fractional heat and wave equations, fundamental solutions

Procedia PDF Downloads 209
3111 A Hybrid Watermarking Scheme Using Discrete and Discrete Stationary Wavelet Transformation for Color Images

Authors: Bülent Kantar, Numan Ünaldı

Abstract:

This paper presents a new method for robust and invisible digital watermarking of color images, in which color images are also used as the watermark. Watermarking is performed in the frequency domain using the discrete wavelet transform (DWT) and the discrete stationary wavelet transform (DSWT). Low, medium and high frequency coefficients are obtained by applying a two-level DWT to the original image. A one-level DSWT is then applied separately to each frequency subband of the two-level DWT, and a watermark is added to every resulting low-frequency coefficient; in total, four watermarks are added to the original image. To recover the watermark, the two-level DWT and the one-level DSWT are applied to both the original and the watermarked images, and the watermark is obtained from the difference of the DSWT low-frequency coefficients. Four watermarks are thus recovered, one from each subband of the two-level DWT; the recovered watermarks are compared with the real watermark, a similarity score is computed, and the watermark with the highest similarity is selected. The proposed watermarking method was tested against geometric and image-processing attacks, and the results show that it is robust and invisible. Combining the features of all frequency subbands of the two-level DWT makes it possible to recover the watermark from the watermarked image, and embedding the watermarks after conversion to a binary image provides better results when recovering the watermark from an image subjected to geometric and image-processing attacks.

Keywords: watermarking, DWT, DSWT, copy right protection, RGB

Procedia PDF Downloads 535
3110 Superconvergence of the Iterated Discrete Legendre Galerkin Method for Fredholm-Hammerstein Equations

Authors: Payel Das, Gnaneshwar Nelakanti

Abstract:

In this paper we analyse the iterated discrete Legendre Galerkin method for Fredholm-Hammerstein integral equations with smooth kernels. Using a sufficiently accurate numerical quadrature rule, we obtain superconvergence rates for the iterated discrete Legendre Galerkin solutions in both the infinity and L² norms. Numerical examples are given to illustrate the theoretical results.
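
The Fredholm-Hammerstein integral equation referred to above is conventionally written as

```latex
u(x) = f(x) + \int_{a}^{b} k(x,t)\,\psi\bigl(t, u(t)\bigr)\,dt, \qquad x \in [a,b],
```

where k is the (smooth) kernel and ψ is a nonlinear function of the unknown u; the Galerkin and discrete Galerkin methods project this equation onto a space of Legendre polynomials, with the quadrature rule supplying the "discrete" part of the scheme.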

Keywords: Hammerstein integral equations, spectral method, discrete Galerkin, numerical quadrature, superconvergence

Procedia PDF Downloads 468
3109 Latent Factors of Severity in Truck-Involved and Non-Truck-Involved Crashes on Freeways

Authors: Shin-Hyung Cho, Dong-Kyu Kim, Seung-Young Kho

Abstract:

Truck-involved crashes have higher crash severity than non-truck-involved crashes. There have been many studies on crash frequency and the development of severity models, but those studies only analyzed the relationships between observed variables. To identify why more people are injured or killed when trucks are involved in a crash, we must quantify the complex causal relationship between crash severity and risk factors by adopting latent factors of crashes. The aim of this study was to develop a structural equation model based on truck-involved and non-truck-involved crashes, including five latent variables: a crash factor, an environmental factor, a road factor, a driver factor, and a severity factor. To clarify the unique characteristics of truck-involved crashes compared with non-truck-involved crashes, a confirmatory analysis method was used. To develop the model, we extracted crash data from 10,083 crashes on Korean freeways from 2008 through 2014. The results showed that the most significant variable affecting the severity of a crash is the crash factor, which can be expressed by the location, cause, and type of the crash. For non-truck-involved crashes, the crash and environmental factors increase the severity of the crash; conversely, the road and driver factors tend to reduce it. For truck-involved crashes, the driver factor has a significant effect on crash severity, although its effect is slightly smaller than that of the crash factor. A multiple group analysis was employed to analyze the differences between the heterogeneous groups of drivers.

Keywords: crash severity, structural equation modeling (SEM), truck-involved crashes, multiple group analysis, crash on freeway

Procedia PDF Downloads 383