Search results for: discrete latent variable

2777 Mining User-Generated Contents to Detect Service Failures with Topic Model

Authors: Kyung Bae Park, Sung Ho Ha

Abstract:

Online user-generated contents (UGC) significantly change the way customers behave (e.g., shop, travel), and handling the overwhelming volume and variety of UGC is one of the paramount issues for management. However, current approaches (e.g., sentiment analysis) are often ineffective at leveraging textual information to detect the problems or issues that a given business suffers from. In this paper, we apply Latent Dirichlet Allocation (LDA) text mining to a popular online review site dedicated to user complaints. We find that LDA efficiently detects customer complaints and that further inspection with visualization techniques is effective for categorizing the problems or issues. As such, management can identify the issues at stake and prioritize them in a timely manner given a limited amount of resources. The findings provide managerial insights into how analytics on social media can help maintain and improve reputation management. Our interdisciplinary approach also highlights several insights from applying machine learning techniques in the marketing research domain. On a broader technical note, this paper illustrates the details of how to implement LDA in the R language from beginning (data collection) to end (LDA analysis), since such instruction is still largely undocumented. In this regard, it will help lower the barrier for interdisciplinary researchers to conduct related research.
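
As an illustration of the topic-modeling step, here is a minimal Python sketch using scikit-learn (the paper itself works in R; the complaint texts and the topic count below are hypothetical placeholders):

```python
# Minimal LDA sketch with scikit-learn (the paper uses R); the complaint
# texts and the number of topics are hypothetical placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "waited two hours for delivery and nobody answered the phone",
    "the room was dirty and the staff were rude at checkin",
    "billing error charged me twice and the refund is still pending",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(reviews)            # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Each row of components_ is a topic's word-weight vector; print top words.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```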

Keywords: latent Dirichlet allocation, R program, text mining, topic model, user-generated contents, visualization

Procedia PDF Downloads 165
2776 Finite Element and Split Bregman Methods for Solving a Family of Optimal Control Problems with Partial Differential Equation Constraint

Authors: Mahmoud Lotfi

Abstract:

In this article, we discuss the solution of an elliptic optimal control problem. First, by using the finite element method, we obtain the discrete form of the problem. The resulting discrete problem is a large-scale constrained optimization problem, and solving it with traditional methods is difficult, requiring a lot of CPU time and memory. The split Bregman method converts the constrained problem to an unconstrained one, and hence saves time and memory. We then use the split Bregman method to solve the problem, and examples show the speed and accuracy of split Bregman methods for these types of problems. We also solve the examples with the SQP method and compare it with the split Bregman method.
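
For concreteness, below is a minimal split Bregman sketch for the generic L1-regularized least-squares problem min_u 0.5||Au - b||^2 + lam*||u||_1, a stand-in for the discretized control problem (the finite element discretization would supply the operator A; all data here are synthetic):

```python
# Minimal split Bregman sketch for min_u 0.5*||A u - b||^2 + lam*||u||_1,
# a generic stand-in for the discretized PDE-constrained problem.
import numpy as np

def shrink(x, t):
    """Soft-thresholding: closed-form minimizer of t*||d||_1 + 0.5*||d - x||^2."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman(A, b, lam, mu=1.0, iters=200):
    n = A.shape[1]
    u = np.zeros(n); d = np.zeros(n); c = np.zeros(n)
    M = A.T @ A + mu * np.eye(n)        # fixed system for the u-subproblem
    for _ in range(iters):
        u = np.linalg.solve(M, A.T @ b + mu * (d - c))  # quadratic subproblem
        d = shrink(u + c, lam / mu)                     # L1 subproblem
        c = c + u - d                                   # Bregman update
    return u

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20); x_true[[3, 7]] = [2.0, -1.5]     # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.round(split_bregman(A, b, lam=0.5), 2))
```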

Keywords: split Bregman method, optimal control with elliptic partial differential equation constraint, finite element method

Procedia PDF Downloads 130
2775 Characterizing the Geometry of Envy Human Behaviour Using Game Theory Model with Two Types of Homogeneous Players

Authors: A. S. Mousa, R. I. Rajab, A. A. Pinto

Abstract:

An envy behavioral game-theoretical model with two types of homogeneous players is considered in this paper. The strategy space of each type of player is a discrete set with only two alternatives, and the preferences of each type of player are given by a discrete utility function. All envy strategies that form Nash equilibria and the corresponding envy Nash domains for each type of player have been characterized. We use geometry to construct two-dimensional envy tilings, where the horizontal axis reflects the preference of players of type one and the vertical axis reflects the preference of players of type two. The influence of the envy behavior parameters on the Cartesian position of the equilibria has been studied, and in each envy tiling we determine the envy Nash equilibria. We observe that there are 1024 combinatorial classes of envy tilings generated from envy chromosomes: 256 of them are structurally stable, while 768 exhibit bifurcation. Finally, some conditions for the disparate envy Nash equilibria are stated.
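
A toy sketch of the underlying computation (not the authors' tiling construction): enumerating the pure Nash equilibria of a two-player game with two alternatives per player, where each utility is penalized by an assumed envy parameter; the payoff numbers are made up:

```python
# Toy sketch: pure Nash equilibria of a 2x2 game with an envy penalty.
# Base payoffs and the envy weight alpha are illustrative placeholders.
import itertools

base = {  # (player1 choice, player2 choice) -> (u1, u2)
    (0, 0): (3, 3), (0, 1): (1, 4), (1, 0): (4, 1), (1, 1): (2, 2),
}
alpha = 0.5  # envy weight: disutility from the other player's higher payoff

def utility(s, player):
    u_self, u_other = (base[s][0], base[s][1]) if player == 0 else (base[s][1], base[s][0])
    return u_self - alpha * max(u_other - u_self, 0.0)

def is_nash(s):
    for p in (0, 1):
        dev = list(s); dev[p] = 1 - s[p]          # unilateral deviation
        if utility(tuple(dev), p) > utility(s, p):
            return False
    return True

equilibria = [s for s in itertools.product((0, 1), repeat=2) if is_nash(s)]
print("pure Nash equilibria:", equilibria)
```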

Keywords: game theory, Nash equilibrium, envy Nash behavior, geometric tilings, bifurcation thresholds

Procedia PDF Downloads 194
2774 Comparative Study of Vertical and Horizontal Triplex Tube Latent Heat Storage Units

Authors: Hamid El Qarnia

Abstract:

This study investigates the impact of the eccentricity of the central tube on the thermal and fluid characteristics of a triplex tube used in latent heat energy storage technologies. Two triplex tube orientations are considered in the proposed study: vertical and horizontal. The energy storage material, which is a phase change material (PCM), is placed in the space between the inside and outside tubes. During the thermal energy storage period, a heat transfer fluid (HTF) flows inside the two tubes, transmitting heat to the PCM through two heat exchange surfaces instead of one, as is the case for double-tube heat storage systems. A CFD model is developed and validated against experimental data available in the literature. A mesh-independence study is carried out to select the appropriate mesh, and different time steps are examined to determine a time step that ensures accurate numerical results while reducing computational time. The numerical model is then used to conduct numerical investigations of the thermal behavior and thermal performance of the storage unit. The effects of the eccentricity of the central tube and the HTF mass flow rate on thermal characteristics and performance indicators are examined for two flow arrangements: co-current and counter-current flow. The results are given in terms of isotherm plots, streamlines, melting time and thermal energy storage efficiency.
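
To make the phase-change bookkeeping concrete, below is a minimal one-dimensional enthalpy-method melting sketch; it is a toy stand-in for the paper's full CFD model (which also resolves convection in the melt), and the material properties are generic placeholders rather than the paper's PCM data:

```python
# 1-D enthalpy-method melting sketch: a slab of PCM heated from one wall.
# All property values are generic placeholders, not the paper's PCM.
import numpy as np

nx, L_slab, t_end = 50, 0.05, 2000.0           # nodes, thickness [m], time [s]
dx = L_slab / (nx - 1)
k, rho, cp, Lf = 0.2, 800.0, 2000.0, 180e3     # conductivity, density, heat cap., latent heat
Tm, T_hot, T0 = 30.0, 60.0, 30.0               # melt temp, wall temp, initial temp [C]
dt = 0.4 * rho * cp * dx**2 / k                # explicit stability limit

H = np.full(nx, rho * cp * T0)                 # volumetric enthalpy [J/m^3]

def temperature(H):
    T = H / (rho * cp)                                     # below melting
    mushy = (H >= rho*cp*Tm) & (H <= rho*cp*Tm + rho*Lf)
    T[mushy] = Tm                                          # isothermal phase change
    above = H > rho*cp*Tm + rho*Lf
    T[above] = Tm + (H[above] - rho*cp*Tm - rho*Lf) / (rho * cp)
    return T

t = 0.0
while t < t_end:
    T = temperature(H)
    T[0] = T_hot                               # heated wall (boundary node only)
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    lap[-1] = (T[-2] - T[-1]) / dx**2          # adiabatic far wall
    H += dt * k * lap                          # dH/dt = k * laplacian(T)
    t += dt

frac = np.clip((H - rho*cp*Tm) / (rho*Lf), 0, 1)   # local liquid fraction
print(f"melt fraction after {t_end:.0f}s: {frac.mean():.2f}")
```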

Keywords: energy storage, heat transfer, melting, solidification

Procedia PDF Downloads 38
2773 Public Preferences for Lung Cancer Screening in China: A Discrete Choice Experiment

Authors: Zixuan Zhao, Lingbin Du, Le Wang, Youqing Wang, Yi Yang, Jingjun Chen, Hengjin Dong

Abstract:

Objectives: Few results on public attitudes toward lung cancer screening are available, either in China or abroad. This study aimed to identify preferred lung cancer screening modalities in a Chinese population and to predict the uptake rates of different modalities. Materials and Methods: A discrete choice experiment questionnaire was administered to 392 Chinese individuals aged 50–74 years who were at high risk for lung cancer. Each choice set had two lung screening options and an option to opt out, and respondents were asked to choose the one they most preferred. Both mixed logit analysis and stepwise logistic analysis were conducted to explore whether preferences were related to respondent characteristics and to identify which kinds of respondents were more likely to opt out of any screening. Results: On mixed logit analysis, attributes that were predictive of choice at the 1% level of statistical significance included the screening interval, screening venue, and out-of-pocket costs. The preferred screening modality was screening by low-dose computed tomography (LDCT) + blood test once a year in a general hospital at a cost of RMB 50; this could increase the uptake rate by 0.40 compared to the baseline setting. On stepwise logistic regression, those with no endowment insurance were more likely to opt out; those who were older, housewives/househusbands, those with a health check habit, and those with commercial endowment insurance were less likely to opt out of a screening programme. Conclusions: There was considerable variance between the real risk and the self-perceived risk of lung cancer among respondents, and further research is required in this area. Lung cancer screening uptake can be increased by offering various screening modalities, which can help policymakers further design the screening programme.
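
Predicted uptake rates follow directly from the fitted utilities. A minimal sketch with hypothetical part-worth coefficients (not the paper's estimates) for two screening alternatives plus opt-out:

```python
# Logit uptake sketch: choice probabilities for two screening alternatives
# plus opt-out. The part-worth coefficients below are hypothetical.
import numpy as np

beta = {"interval_yearly": 0.8, "general_hospital": 0.4, "cost": -0.01, "ldct_blood": 0.6}

def utility(alt):
    return sum(beta[k] * v for k, v in alt.items())

alts = {
    "LDCT+blood, yearly, general hospital, RMB 50":
        {"interval_yearly": 1, "general_hospital": 1, "cost": 50, "ldct_blood": 1},
    "LDCT only, every 3 years, specialist hospital, RMB 200":
        {"interval_yearly": 0, "general_hospital": 0, "cost": 200, "ldct_blood": 0},
}
v = np.array([utility(a) for a in alts.values()] + [0.0])  # opt-out utility = 0
p = np.exp(v) / np.exp(v).sum()                            # logit probabilities
for name, prob in zip(list(alts) + ["opt out"], p):
    print(f"{name}: {prob:.2f}")
```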

Keywords: lung cancer, screening, China, discrete choice experiment

Procedia PDF Downloads 226
2772 Unsteadiness Effects on Variable Thrust Nozzle Performance

Authors: A. M. Tahsini, S. Tadayon Mousavi

Abstract:

The purpose of this paper is to elucidate the unsteady flow behavior for a moving plug in a convergent-divergent variable thrust nozzle. The compressible axisymmetric Navier-Stokes equations are used to study this physical phenomenon. Different velocities are set for the plug to investigate the effect of plug movement on flow unsteadiness. The variations of mass flow rate and thrust are compared under two conditions: first, the plug is placed at different positions and the flow is simulated until it reaches steady state (quasi-steady simulation); second, the plug is moved at an assigned velocity and the flow simulation is coupled with the plug movement (unsteady simulation). If the plug speed is high enough that its movement time scale is of the same order as the flow time scale, the variations of mass flow rate and thrust level versus plug position show a significant discrepancy between the quasi-steady and unsteady conditions. This phenomenon should be considered, especially from a response-time viewpoint, in thruster design.

Keywords: nozzle, numerical study, unsteady, variable thrust

Procedia PDF Downloads 328
2771 An Approach to Low Velocity Impact Damage Modelling of Variable Stiffness Curved Composite Plates

Authors: Buddhi Arachchige, Hessam Ghasemnejad

Abstract:

In this study, the post-impact behavior of curved composite plates subjected to low-velocity impact was studied analytically and numerically. Approaches to damage modelling are proposed through the degradation of stiffness in the damaged region, implemented as a reduction of thickness in that region. Spring-mass models were used to model the impact response of the plate and impactor. The study involved designing two damage models and comparing them to determine which best fits the numerical results. The theoretical force-time responses were compared with numerical results obtained through a detailed study carried out in LS-DYNA. The modified damage model gave a good prediction of the analytical force-time response for different layups and geometries. This study provides a gateway for selecting the most effective layups for variable stiffness curved composite panels able to withstand higher impact damage.
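
A minimal two-degree-of-freedom spring-mass sketch of the class of models used here (masses and stiffnesses are illustrative placeholders, not the identified values; stiffness degradation in the damaged region could be emulated by reducing kp):

```python
# Two-DOF spring-mass sketch of low-velocity impact: impactor (m1) contacts
# the plate's effective mass (m2) through a unilateral contact spring kc;
# kp is the plate bending stiffness. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

m1, m2 = 1.0, 0.4        # impactor mass, effective plate mass [kg]
kc, kp = 8e5, 2e5        # contact stiffness, plate stiffness [N/m]
v0 = 3.0                 # impact velocity [m/s]

def rhs(t, y):
    x1, v1, x2, v2 = y
    Fc = max(kc * (x1 - x2), 0.0)      # unilateral contact: no tension
    return [v1, -Fc / m1, v2, (Fc - kp * x2) / m2]

sol = solve_ivp(rhs, (0, 0.01), [0, v0, 0, 0], max_step=1e-6)
force = np.maximum(kc * (sol.y[0] - sol.y[2]), 0.0)   # force-time response
print(f"peak contact force: {force.max():.0f} N at t = {sol.t[force.argmax()]*1e3:.2f} ms")
```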

Keywords: analytical modelling, composite damage, impact, variable stiffness

Procedia PDF Downloads 257
2770 A Similar Image Retrieval System for Auroral All-Sky Images Based on Local Features and Color Filtering

Authors: Takanori Tanaka, Daisuke Kitao, Daisuke Ikeda

Abstract:

The aurora is an attractive phenomenon, but it is difficult to understand its whole mechanism. A data-intensive science approach might be effective for elucidating such a difficult phenomenon. To do that, we need labeled data showing when auroras have appeared and of what types. In this paper, we propose an image retrieval system for auroral all-sky images, some of which include discrete and diffuse auroras while the others contain no aurora. The proposed system retrieves images similar to the query image by using a popular image recognition method. Using 300 all-sky images obtained at Tromsø, Norway, we evaluate two image recognition methods, with and without our original color filtering method. The best performance is achieved when SIFT with the color filtering is used; its accuracy is 81.7% for discrete auroras and 86.7% for diffuse auroras.
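
The retrieval core can be sketched as follows (assuming opencv-python with SIFT available, i.e., OpenCV 4.4+; the paper's color pre-filtering would be applied to the images before this step, and the file paths are placeholders):

```python
# SIFT-based similarity scoring sketch: count good keypoint matches between
# a query image and a database candidate. File names are placeholders.
import cv2

def sift_similarity(query_path, candidate_path):
    sift = cv2.SIFT_create()
    img_q = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img_c = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)
    _, des_q = sift.detectAndCompute(img_q, None)
    _, des_c = sift.detectAndCompute(img_c, None)
    matches = cv2.BFMatcher().knnMatch(des_q, des_c, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    return len(good)

# Rank database images by descending similarity to the query:
# ranked = sorted(db_paths, key=lambda p: sift_similarity("query.png", p), reverse=True)
```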

Keywords: data-intensive science, image classification, content-based image retrieval, aurora

Procedia PDF Downloads 427
2769 Weighing the Economic Cost of Illness Due to Dysentery and Cholera Triggered by Poor Sanitation in Rural Faisalabad, Pakistan

Authors: Syed Asif Ali Naqvi, Muhammad Azeem Tufail

Abstract:

Inadequate sanitation causes direct costs of treating illnesses and loss of income through reduced productivity. This study estimated the economic cost of health (ECH) due to poor sanitation and the factors determining the lack of access to latrines for the rural, backward hamlets and slums of district Faisalabad, Pakistan. Cross-sectional data were collected and analyzed for the study. Because the population under study was homogeneous, a simple random sampling technique was used for the collection of data, and data from 440 households in 4 tehsils were gathered. The ordinary least squares (OLS) model was used for the health cost analysis, and the probit regression model was employed to determine the factors responsible for lack of access to toilets. The results of the first model showed that the condition of toilets, the state of the sewerage system, access to adequate sanitation, cholera, diarrhea and dysentery, Water and Sanitation Agency (WASA) maintenance, and the source of medical treatment plausibly have a significant connection with the dependent variable. Outcomes of the second model showed that education, family system, and type of dwelling have a positive and significant association with the dependent variable, while age showed an insignificant association with access to toilets, and monetary expenses negatively influence the dependent variable. The findings revealed that health risks are often exacerbated by inadequate sanitation, and ultimately the cost of health also surges. Public and community toilets for youths and social campaigning are suggested for public policy.
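
A sketch of the two-model strategy with statsmodels is shown below; the variable names and generated data are hypothetical stand-ins for the survey variables described above:

```python
# Sketch of the two-model strategy: OLS for the health cost and a probit
# for latrine access. All variables and data are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 440
df = pd.DataFrame({
    "education": rng.integers(0, 16, n),
    "age": rng.integers(18, 80, n),
    "adequate_sanitation": rng.integers(0, 2, n),
})
df["health_cost"] = 500 - 200 * df["adequate_sanitation"] + rng.normal(0, 50, n)
df["has_latrine"] = (0.1 * df["education"] + rng.normal(0, 1, n) > 0.5).astype(int)

# Model 1: economic cost of health (OLS).
ols = sm.OLS(df["health_cost"],
             sm.add_constant(df[["adequate_sanitation", "age"]])).fit()
# Model 2: access to a latrine (probit).
probit = sm.Probit(df["has_latrine"],
                   sm.add_constant(df[["education", "age"]])).fit()
print(ols.params, probit.params, sep="\n")
```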

Keywords: sanitation, toilet, economic cost of health, water, Punjab

Procedia PDF Downloads 102
2768 Image Compression on Region of Interest Based on SPIHT Algorithm

Authors: Sudeepti Dayal, Neelesh Gupta

Abstract:

Image compression is utilized to reduce the size of a file without degrading the quality of the image to an objectionable level. The reduction in file size permits more images to be stored in a given amount of space and also reduces the time necessary for images to be transferred. The storage of medical images is a much-researched area in the current scenario. To store a medical image, the image is divided into two parts: regions of interest and non-regions of interest. The best way to store an image is to compress it in such a way that no important information is lost. Compression can be done in two ways, namely lossy and lossless compression, and under each, several compression algorithms can be applied. In this paper, two algorithms are used: the discrete cosine transform, applied to the non-regions of interest (lossy), and the discrete wavelet transform, applied to the regions of interest (lossless). The paper introduces the SPIHT (set partitioning in hierarchical trees) algorithm, which is applied to the wavelet coefficients to obtain a good compression ratio with which an image can be stored efficiently.
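
A sketch of the two-transform split (assuming an 8-bit grayscale image and a rectangular ROI; the SPIHT coding of the wavelet coefficients is not shown):

```python
# Two-transform split sketch: lossy DCT path for the background, DWT path
# for the ROI (the DWT coefficients are what a SPIHT coder would encode).
import numpy as np
import pywt
from scipy.fft import dctn, idctn

img = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(float)
roi = (slice(32, 96), slice(32, 96))              # region of interest

# Lossy path for the background: keep only low-frequency DCT coefficients.
coef = dctn(img, norm="ortho")
mask = np.zeros_like(coef); mask[:32, :32] = 1
background = idctn(coef * mask, norm="ortho")

# Lossless-style path for the ROI: DWT coefficients kept intact.
roi_coeffs = pywt.wavedec2(img[roi], "bior4.4", level=3)
roi_rec = pywt.waverec2(roi_coeffs, "bior4.4")

out = background.copy()
out[roi] = roi_rec
print("max ROI error:", np.abs(out[roi] - img[roi]).max())  # ~ float precision
```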

Keywords: compression ratio, DWT, SPIHT, DCT

Procedia PDF Downloads 329
2767 Wavelet Based Signal Processing for Fault Location in Airplane Cable

Authors: Reza Rezaeipour Honarmandzad

Abstract:

Wavelet analysis is an exciting method for solving difficult problems in mathematics, physics, and engineering, with modern applications as diverse as wave propagation, data compression, signal processing, image processing, and pattern recognition. Wavelets allow complex information such as signals, images and patterns to be decomposed into elementary forms at different positions and scales and subsequently reconstructed with high precision. In this paper, a wavelet-based signal processing algorithm for airplane cable fault location is proposed. An orthogonal discrete wavelet decomposition and reconstruction algorithm is used to eliminate the noise in the aircraft cable fault signal. The experimental results show that the characteristics of the emission pulse and the reflected pulse used to locate the cable fault point are preserved, while the high-frequency noise is eliminated by the proposed algorithm.
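
A sketch of the denoising idea (not the authors' exact filter): decompose a noisy pulse-echo trace with an orthogonal DWT, soft-threshold the detail coefficients, reconstruct, and estimate the fault distance from the delay between the emission and reflected pulses; the synthetic trace and cable parameters are made up:

```python
# Orthogonal-DWT denoising sketch for a pulse-echo cable trace.
# The trace, sampling rate and propagation speed are illustrative.
import numpy as np
import pywt

fs, v = 1e9, 2e8                      # sampling rate [Hz], propagation speed [m/s]
t = np.arange(4000) / fs
pulse = np.exp(-((t - 50e-9) / 5e-9) ** 2)          # emission pulse
echo = 0.5 * np.exp(-((t - 450e-9) / 5e-9) ** 2)    # reflection from the fault
signal = pulse + echo + 0.1 * np.random.default_rng(2).standard_normal(t.size)

coeffs = pywt.wavedec(signal, "db8", level=6)        # orthogonal decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate, finest level
thr = sigma * np.sqrt(2 * np.log(signal.size))       # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
clean = pywt.waverec(coeffs, "db8")[:signal.size]

i1 = np.argmax(clean[:250])                          # emission peak
i2 = 250 + np.argmax(clean[250:])                    # echo peak
delay = (i2 - i1) / fs                               # two-way travel time
print(f"estimated fault distance: {v * delay / 2:.1f} m")
```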

Keywords: wavelet analysis, signal processing, orthogonal discrete wavelet, noise, aircraft cable fault signal

Procedia PDF Downloads 495
2766 Parameter Fitting of the Discrete Element Method When Modeling the DISAMATIC Process

Authors: E. Hovad, J. H. Walther, P. Larsen, J. Thorborg, J. H. Hattel

Abstract:

In sand casting of metal parts for the automotive industry, such as brake disks and engine blocks, the molten metal is poured into a sand mold to get its final shape. The DISAMATIC molding process is a way to construct these sand molds for casting of steel parts, and in the present work numerical simulations of this process are presented. During the process, green sand is blown into a chamber and subsequently squeezed to finally obtain the sand mould. The sand flow is modelled with the discrete element method (DEM), and obtaining the correct material parameters for the simulation is the main goal. Different tests are used to find or calibrate the needed DEM parameters: Poisson's ratio, Young's modulus, the rolling friction coefficient, the sliding friction coefficient and the coefficient of restitution (COR). Young's modulus and Poisson's ratio are found from compression tests of the bulk material and subsequently used in the DEM model according to the Hertz-Mindlin model. The main focus is on calibrating the rolling resistance and sliding friction in the DEM model with respect to the behavior of "real" sand piles. More specifically, the surface profile of the "real" sand pile is compared to the sand pile predicted with the DEM for different values of the rolling and sliding friction coefficients. When the DEM parameters are found for the particle-particle (sand-sand) interaction, the particle-wall interaction parameter values are also found. Here the sliding coefficient is found from experiments, and the rolling resistance is investigated by comparing observations of how the green sand interacts with the chamber wall during experiments with the DEM simulations, which are calibrated accordingly. The coefficient of restitution is tested with different values in the DEM simulations and compared to video footage of the DISAMATIC process. Energy dissipation is investigated in these simulations for different particle sizes and coefficients of restitution, where scaling laws are considered to relate the energy dissipation to these parameters. Finally, the found parameter values are used in the overall discrete element model and compared to the video footage of the DISAMATIC process.
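
For reference, the Hertz normal-contact law at the heart of the Hertz-Mindlin model, where the calibrated Young's modulus and Poisson's ratio enter, can be sketched as follows (property values are generic, not the green-sand values from this work):

```python
# Hertz normal-contact force sketch for two spheres of the same material:
# F = (4/3) * E_eff * sqrt(R_eff) * delta^(3/2). Values are generic.
import numpy as np

def hertz_normal_force(delta, R1, R2, E, nu):
    """Normal force for overlap delta between two identical-material spheres."""
    if delta <= 0:
        return 0.0
    R_eff = R1 * R2 / (R1 + R2)               # effective radius
    E_eff = E / (2 * (1 - nu**2))             # effective modulus, same material
    return (4.0 / 3.0) * E_eff * np.sqrt(R_eff) * delta**1.5

# Example: two 1 mm grains with 1 micron overlap.
print(f"{hertz_normal_force(1e-6, 5e-4, 5e-4, 30e6, 0.3):.2e} N")
```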

Keywords: discrete element method, physical properties of materials, calibration, granular flow

Procedia PDF Downloads 462
2765 Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces

Authors: Saltanat Zhilkibayeva, Muratulla Utenov, Nurzhan Utenov

Abstract:

Distributed inertia forces of a complex nature appear in the links of rod mechanisms during motion. Such loads raise a number of problems, such as destruction caused by large inertia forces; elastic deformation of the mechanism can also be considerable and can put the mechanism out of action. In this work, a new analytical approach is proposed for the definition of internal forces in the links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account distributed inertial and concentrated forces. The relations between the intensity of the distributed inertia forces and the link weight, on the one hand, and the geometrical, physical and kinematic characteristics, on the other, are determined in this work. The distribution laws of the inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of internal forces along the axis of the link, so that the loads are found at any point of the link. The approximation matrices of the forces of an element under the action of distributed inertia loads with trapezoidal intensity are defined. These approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vectors in the calculated cross-sections, and allow the physical characteristics of the element, i.e., the compliance matrices of the discrete elements, to be defined. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along the axis of the element are determined. The internal loads of each continual link are unambiguously determined by a set of internal loads in its separate cross-sections together with the approximation matrices. Therefore, the task is reduced to the calculation of internal forces in a finite number of cross-sections of the elements, which leads to a discrete model of the elastic calculation of the links of rod mechanisms. The discrete model of the elements of mechanisms and robotic systems, and their discrete model as a whole, are constructed. The dynamic equilibrium equations for the discrete model of the elements are also derived in this work, as well as the equilibrium equations of the pin and rigid joints expressed through the required parameters of the internal forces. The obtained systems of dynamic equilibrium equations are sufficient for the definition of internal forces in the links of mechanisms whose structure is statically determinate. For statically indeterminate mechanisms, it is necessary to build a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. As a result, by means of the developed technique, programs were written in the MAPLE18 system, and animations were obtained of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structures, with the intensity of the transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces plotted on the links as functions of the kinematic characteristics of the links.

Keywords: distributed inertial forces, internal forces, statically determinate mechanisms, statically indeterminate mechanisms

Procedia PDF Downloads 202
2764 Probabilistic Analysis of Bearing Capacity of Isolated Footing using Monte Carlo Simulation

Authors: Sameer Jung Karki, Gokhan Saygili

Abstract:

The allowable bearing capacity of a foundation system is determined by applying a factor of safety to the ultimate bearing capacity. Conventional ultimate bearing capacity calculation routines are based on deterministic input parameters, where the nonuniformity and inhomogeneity of soil and site properties are not accounted for; hence, probability calculus and statistical analysis are not directly applied in conventional foundation engineering. It is assumed that the factor of safety, typically as high as 3.0, incorporates the uncertainty of the input parameters, but this factor of safety is estimated based on subjective judgement rather than objective facts and is an ambiguous term. Hence, a probabilistic analysis of the bearing capacity of an isolated footing on a clayey soil is carried out using the Monte Carlo simulation method, and the simulated model is compared with the traditional deterministic model. It was found that the bearing capacity of the soil was higher for the simulated model than for the deterministic model, which was verified by a sensitivity analysis: as the number of simulations was increased, there was a significant percentage increase in the bearing capacity compared with the deterministic bearing capacity. The bearing capacity values obtained by simulation were found to follow a normal distribution. Using the traditional factor of safety of 3, the allowable bearing capacity had a lower probability (0.03717) of occurring in the field than when using the simulation-derived factor of safety of 1.5 (probability 0.15866). This means the traditional factor of safety gives a bearing capacity that is less likely to be available in the field, which shows the subjective nature of the factor of safety; hence, probabilistic methods are suggested to address the variability of the input parameters in bearing capacity equations.
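
A minimal sketch of the simulation (assuming an undrained clay with q_ult = c·Nc + γ·Df and cohesion as the random variable; all numbers are illustrative, not the paper's inputs):

```python
# Monte Carlo bearing-capacity sketch: random undrained cohesion propagated
# through q_ult = c*Nc + gamma*Df. All input values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000
Nc, gamma, Df = 5.14, 18.0, 1.5          # bearing factor, unit weight [kN/m3], depth [m]
c = rng.normal(50.0, 10.0, n_sim)        # undrained cohesion [kPa], mean +/- sd
q_ult = c * Nc + gamma * Df              # ultimate bearing capacity [kPa]

applied = 120.0                          # hypothetical applied pressure [kPa]
for fs in (3.0, 1.5):
    q_allow = q_ult / fs
    print(f"FS={fs}: mean q_allow={q_allow.mean():.0f} kPa, "
          f"P(q_allow < {applied} kPa) = {(q_allow < applied).mean():.4f}")
```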

Keywords: bearing capacity, factor of safety, isolated footing, Monte Carlo simulation

Procedia PDF Downloads 164
2763 Practice and Understanding of Fracturing Renovation for Risk Exploration Wells in Xujiahe Formation Tight Sandstone Gas Reservoir

Authors: Fengxia Li, Lufeng Zhang, Haibo Wang

Abstract:

The tight sandstone gas reservoir in the Xujiahe Formation of the Sichuan Basin has huge reserves, but its utilization rate is low, and fracturing and stimulation are indispensable technologies to unlock its potential and achieve commercial exploitation. Slickwater is the most widely used fracturing fluid system in the fracturing and stimulation of tight reservoirs; however, its viscosity is low, its sand-carrying performance is poor, and the risk of sand blockage is high. Increasing the sand-carrying capacity by increasing the displacement rate increases the frictional resistance of the pipe string, degrading the drag reduction performance. Variable-viscosity slickwater can switch between different viscosities in real time, effectively overcoming these sand-carrying and drag reduction problems. Based on a self-developed laboratory loop friction testing system, a visualization device for proppant transport, and a HAAKE MARS III rheometer, a comprehensive evaluation was conducted of the drag reduction, rheology, and sand-carrying performance of variable-viscosity slickwater. The laboratory results show that: (1) by changing the concentration of drag-reducing agents, the viscosity of the slickwater can be varied between 2 and 30 mPa·s; (2) the drag reduction rate of the variable-viscosity slickwater is above 80%, and shear does not reduce the drag reduction rate of the liquid; and (3) under laboratory conditions, variable-viscosity slickwater at 15 mPa·s can achieve effective carrying and uniform placement of proppant. The staged fracturing of the JiangX well in the tight sandstone of the Xujiahe Formation shows that the drag reduction rate of the variable-viscosity slickwater is 80.42%, and single-layer daily production after fracturing exceeds 50,000 cubic meters. This study provides theoretical support and field experience for promoting the application of variable-viscosity slickwater in tight sandstone gas reservoirs.

Keywords: slickwater, hydraulic fracturing, dynamic sand laying, drag reduction rate, rheological properties

Procedia PDF Downloads 56
2762 Variable Mapping: From Bibliometrics to Implications

Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek

Abstract:

Literature review is indispensable in research, and one of the key techniques used in it is bibliometric analysis, in which one method is science mapping. The classic approach that dominates this area today consists of mapping areas, keywords, terms, authors, or citations, and it is also used for literature reviews in the field of marketing. Researchers and practitioners now use commercially available software for this purpose; the use of science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) in recent publications supports literature reviews and is useful in areas with a relatively high number of publications. Although this well-grounded science mapping approach has been applied in literature reviews, performing them is a painstaking task, especially if authors would like to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify to what extent a new approach to science mapping, variable mapping, improves on the classic science mapping approach in terms of research problem formulation and content/thematic analysis for literature reviews. To perform the analysis, a set of 5 articles on customer ideation was chosen. The keyword mapping produced by the VOSviewer science mapping software was then compared with a variable map prepared manually from the same articles. Seven independent expert judges (management scientists at different levels of expertise) assessed the usability of both approaches for formulating the research problem and for content/thematic analysis. The results show the advantage of variable mapping in research problem formulation and thematic/content analysis. First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between the variables, not only keywords. Second, the analysis of relationships between variables enables the creation of a story, with an indication of the directions of the relationships between variables. Demonstrating the advantage of the new approach over the classic one may be a significant step towards developing a new approach to the synthesis of literature and its reviews. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, the development of software enabling the automation of the variable mapping process on large data sets could be a breakthrough in the field of literature research.

Keywords: bibliometrics, literature review, science mapping, variable mapping

Procedia PDF Downloads 93
2761 Creation and Annihilation of Spacetime Elements

Authors: Dnyanesh P. Mathur, Gregory L. Slater

Abstract:

Gravitation and the expansion of the universe at large scale are generally regarded as two completely distinct phenomena, yet in general relativity theory they both manifest as 'curvature' of spacetime. We propose a hypothesis that treats these two 'curvature-producing' phenomena as aspects of an underlying process. This process treats spacetime itself as composed of discrete units (Plancktons) and is 'dynamic' in the sense that these elements of spacetime are continually being both created and annihilated. It is these two complementary processes of Planckton creation and Planckton annihilation that manifest themselves as 'cosmic expansion' on the one hand and as 'gravitational attraction' on the other. The Planckton hypothesis treats spacetime as a perfect fluid, in the same manner as the co-moving frame of reference of the Friedmann equations and the Gullstrand-Painlevé metric; i.e., the Planckton hypothesis replaces 'curvature' of spacetime with the 'flow' of Plancktons (spacetime). Here we discuss how this perspective may allow a unified description of both cosmological and gravitational acceleration, as well as provide a mechanism for inducing an irreducible action at every point, associated with the creation and annihilation of Plancktons, which could be identified as the zero point energy.

Keywords: discrete spacetime, spacetime flow, zero point energy, plancktons

Procedia PDF Downloads 88
2760 High-Capacity Image Steganography using Wavelet-based Fusion on Deep Convolutional Neural Networks

Authors: Amal Khalifa, Nicolas Vana Santos

Abstract:

Steganography has been known for centuries as an efficient approach to covert communication. Due to its popularity and ease of access, image steganography has attracted researchers seeking secure techniques for hiding information within an innocent-looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNN) to hide a secret image within a cover image of the same size. Two CNNs are trained back-to-back to merge the discrete wavelet transforms (DWT) of both color images and, eventually, to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process and maximize the quality of the retrieved secret image while maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion, which outperformed similar deep-learning-based methods.
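
For intuition about the wavelet mechanics only, here is a classical (non-learned) DWT-fusion sketch of the embedding step, with a fixed weighted blend of subbands standing in for the learned CNN merge, and a non-blind extraction, unlike the paper's blind CNN extractor:

```python
# Classical DWT-fusion sketch: blend the subbands of cover and secret with a
# fixed weight alpha (a stand-in for the learned CNN fusion). Images are random.
import numpy as np
import pywt

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, (256, 256)).astype(float)
secret = rng.integers(0, 256, (256, 256)).astype(float)

cA_c, (cH_c, cV_c, cD_c) = pywt.dwt2(cover, "haar")
cA_s, (cH_s, cV_s, cD_s) = pywt.dwt2(secret, "haar")

alpha = 0.05   # embedding strength: imperceptibility vs. recoverability trade-off
stego = pywt.idwt2((cA_c + alpha * cA_s,
                    (cH_c + alpha * cH_s,
                     cV_c + alpha * cV_s,
                     cD_c + alpha * cD_s)), "haar")

# Extraction with knowledge of the cover (non-blind, unlike the paper's CNN):
cA_x, (cH_x, cV_x, cD_x) = pywt.dwt2(stego, "haar")
recovered = pywt.idwt2(((cA_x - cA_c) / alpha,
                        ((cH_x - cH_c) / alpha,
                         (cV_x - cV_c) / alpha,
                         (cD_x - cD_c) / alpha)), "haar")
print("max recovery error:", np.abs(recovered - secret).max())
```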

Keywords: deep learning, steganography, image, discrete wavelet transform, fusion

Procedia PDF Downloads 54
2759 Observer-Based Control Design for Double Integrators Systems with Long Sampling Periods and Actuator Uncertainty

Authors: Tomas Menard

Abstract:

The design of control laws for engineering systems has been investigated for many decades. While many results concern continuous systems with continuous output, nowadays many controlled systems have to transmit their output measurements through a network, making the measurements discrete-time. It is well known that sampling a system whose control law is based on the continuous output may render the system unstable, especially when the sampling period is long compared to the system dynamics; the control design then has to be adapted to cope with this issue. In this paper, we consider systems that can be modeled as double integrators with uncertainty on the input, since many mechanical systems can be put in this form. We present a control scheme based on an observer that uses only discrete-time measurements and provides a continuous-time estimate of the state, combined with a continuous control law, which stabilizes a system with second-order dynamics even in the presence of uncertainty. It is further shown that arbitrarily long sampling periods can be dealt with by properly setting the control scheme parameters.
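
A minimal simulation sketch of the setting (not the paper's observer; the gains are hand-tuned and the correction step is a simple output-error jump): a double integrator with a constant input uncertainty, controlled from a continuous-time estimate that is corrected only at sparse sampling instants:

```python
# Double integrator with input uncertainty, continuous control from an
# observer estimate, position measurements only every Ts seconds.
# Gains and parameters are hand-tuned placeholders.
import numpy as np

dt, T, Ts = 1e-3, 10.0, 0.5          # integration step, horizon, long sampling period
k1, k2 = 2.0, 2.0                    # state-feedback gains
l1, l2 = 1.5, 1.0                    # observer correction gains
x = np.array([1.0, 0.0])             # true state: position, velocity
xh = np.array([0.0, 0.0])            # observer state
w = 0.2                              # constant input (actuator) uncertainty

t, next_sample = 0.0, 0.0
while t < T:
    if t >= next_sample:             # discrete-time position measurement arrives
        e = x[0] - xh[0]
        xh += np.array([l1 * e, l2 * e])
        next_sample += Ts
    u = -k1 * xh[0] - k2 * xh[1]     # continuous control from the estimate
    x += dt * np.array([x[1], u + w])     # true plant (with uncertainty)
    xh += dt * np.array([xh[1], u])       # observer prediction between samples
    t += dt
# Without integral action the constant w leaves the expected offset w/k1.
print(f"final position {x[0]:.3f}, estimate {xh[0]:.3f}")
```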

Keywords: dynamical system, control law design, sampled output, observer design

Procedia PDF Downloads 163
2758 Diagnostic Value of Different Noninvasive Criteria of Latent Myocarditis in Comparison with Myocardial Biopsy

Authors: Olga Blagova, Yuliya Osipova, Evgeniya Kogan, Alexander Nedostup

Abstract:

Purpose: To quantify the value of various clinical, laboratory and instrumental signs in the diagnosis of myocarditis, in comparison with morphological studies of the myocardium. Methods: In 100 patients (65 men, 44.7±12.5 years) with 'idiopathic' arrhythmias (n = 20) and dilated cardiomyopathy (DCM, n = 80), 71 endomyocardial biopsies (EMB), 13 intraoperative biopsies, 5 studies of explanted hearts, and 11 autopsies were performed, with viral investigation (real-time PCR) of the blood and myocardium. Anti-heart antibodies (AHA) were also measured, as well as cardiac CT (n = 45), MRI (n = 25), and coronary angiography (n = 47). The comparison group included 50 patients (25 men, 53.7±11.7 years) with non-inflammatory heart diseases who underwent open heart surgery. Results: Active/borderline myocarditis was diagnosed in 76.0% of the study group and in 21.6% of the comparison group (p < 0.001). The myocardial viral genome was observed more frequently in the comparison group than in the study group (65.0% vs. 40.2%; p < 0.01). The diagnostic value of the noninvasive markers of myocarditis was evaluated: the panel of anti-heart antibodies had the greatest importance in identifying myocarditis, with a sensitivity of 81.5% and positive and negative predictive values of 75.0% and 60.5%, respectively. The diagnostic value of the noninvasive markers was thus defined, and a diagnostic algorithm providing an individual assessment of the likelihood of myocarditis was developed. Conclusion: AHA have the greatest significance in the diagnosis of latent myocarditis in patients with 'idiopathic' arrhythmias and DCM. The use of a set of noninvasive criteria allows the probability of myocarditis to be estimated and the indications for EMB to be determined.

Keywords: myocarditis, "idiopathic" arrhythmias, dilated cardiomyopathy, endomyocardial biopsy, viral genome, anti-heart antibodies

Procedia PDF Downloads 152
2757 A General Variable Neighborhood Search Algorithm to Minimize Makespan of the Distributed Permutation Flowshop Scheduling Problem

Authors: G. M. Komaki, S. Mobin, E. Teymourian, S. Sheikh

Abstract:

This paper addresses minimizing the makespan of the distributed permutation flowshop scheduling problem. In this problem, there are several parallel identical factories, or flowshops, each with a series of identical machines. Each job must be allocated to one of the factories, and all of the operations of a job must be performed in the allocated factory. This problem has recently gained attention, and due to its NP-hard nature, metaheuristic algorithms have been proposed to tackle it; the majority of the proposed algorithms require long computational times, which is their main drawback. In this study, a general variable neighborhood search (GVNS) algorithm is proposed into which several time-saving schemes have been incorporated. The GVNS also uses a sophisticated method to change the shaking (perturbation) procedure depending on the progress of the incumbent solution, to prevent stagnation of the search. The performance of the proposed algorithm is compared to state-of-the-art algorithms on standard benchmark instances.
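
A compact GVNS-style skeleton for this problem is sketched below; it is a generic illustration that omits the local-search (VND) phase and the paper's time-saving schemes, with random placeholder processing times:

```python
# GVNS-style skeleton for the distributed permutation flowshop: jobs are
# assigned to factories, each factory runs a permutation flowshop, and the
# objective is the maximum factory makespan. A generic sketch only.
import random

def flowshop_makespan(seq, p):           # p[job][machine] processing times
    if not seq:
        return 0.0
    m = len(p[0])
    c = [0.0] * m                        # completion times per machine
    for j in seq:
        for k in range(m):
            c[k] = max(c[k], c[k - 1] if k else 0) + p[j][k]
    return c[-1]

def makespan(assign, p, n_fac):
    return max(flowshop_makespan([j for j, f in enumerate(assign) if f == fac], p)
               for fac in range(n_fac))

def shake(assign, strength, n_fac):
    a = assign[:]
    for _ in range(strength):            # reassign `strength` random jobs
        a[random.randrange(len(a))] = random.randrange(n_fac)
    return a

def gvns(p, n_fac, max_iter=500):
    assign = [random.randrange(n_fac) for _ in p]
    best = makespan(assign, p, n_fac)
    k = 1
    for _ in range(max_iter):
        cand = shake(assign, k, n_fac)   # perturbation grows with k
        c = makespan(cand, p, n_fac)
        if c < best:
            assign, best, k = cand, c, 1 # improvement: reset neighborhood
        else:
            k = min(k + 1, 4)            # escalate shaking on stagnation
    return assign, best

random.seed(0)
p = [[random.randint(1, 9) for _ in range(3)] for _ in range(12)]  # 12 jobs, 3 machines
print(gvns(p, n_fac=2)[1])
```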

Keywords: distributed permutation flow shop, scheduling, makespan, general variable neighborhood search algorithm

Procedia PDF Downloads 336
2756 Content Based Face Sketch Images Retrieval in WHT, DCT, and DWT Transform Domain

Authors: W. S. Besbas, M. A. Artemi, R. M. Salman

Abstract:

Content-based face sketch retrieval can be used to find images of criminals from their sketches for crime prevention. This paper investigates the problem of content-based image retrieval (CBIR) of face sketch images in the transform domain. Face sketch images that are similar to the query image are retrieved from the face sketch database. Features of the face sketch image are extracted in the spectral domain of selected transforms: the discrete cosine transform (DCT), the discrete wavelet transform (DWT), and the Walsh Hadamard transform (WHT). For the performance analysis of the feature selection methods, three face image databases are used: the Sheffield face database, the Olivetti Research Laboratory (ORL) face database, and the Indian face database. The city block distance measure is used to evaluate the performance of the retrieval process. The investigation concludes that the retrieval rate is database dependent, but in general the DCT performs best, while the WHT is best with respect to retrieval speed.
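
The transform-domain pipeline can be sketched as follows (illustrative choices: a low-frequency 2-D DCT corner as the feature vector, ranked by city block distance on synthetic images):

```python
# Transform-domain retrieval sketch: low-frequency 2-D DCT features ranked
# by city block distance. Database images here are synthetic placeholders.
import numpy as np
from scipy.fft import dctn
from scipy.spatial.distance import cityblock

def dct_features(img, n=16):
    """Low-frequency n x n corner of the 2-D DCT as the feature vector."""
    return dctn(img.astype(float), norm="ortho")[:n, :n].ravel()

rng = np.random.default_rng(4)
database = [rng.integers(0, 256, (64, 64)) for _ in range(10)]
query = database[3] + rng.normal(0, 5, (64, 64))      # noisy copy of entry 3

feats = [dct_features(im) for im in database]
q = dct_features(query)
ranked = sorted(range(len(database)), key=lambda i: cityblock(q, feats[i]))
print("best match:", ranked[0])                        # expected: 3
```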

Keywords: content-based image retrieval (CBIR), face sketch image retrieval, feature selection for CBIR, image retrieval in transform domain

Procedia PDF Downloads 465
2755 Tenants Use Less Input on Rented Plots: Evidence from Northern Ethiopia

Authors: Desta Brhanu Gebrehiwot

Abstract:

The study investigates the impact of land tenure arrangements on fertilizer use per hectare in Northern Ethiopia, using household- and plot-level data. Land tenure contracts such as sharecropping and fixed-rent arrangements are endogenous: different unobservable characteristics may affect renting-out decisions. Thus, the appropriate methods of analysis were instrumental variable estimation techniques: two-stage least squares (2SLS), the generalized method of moments (GMM), limited information maximum likelihood (LIML), and instrumental variable Tobit (IV-Tobit). In addition, a two-step method for handling a binary endogenous variable was applied: in the first step, a probit model includes the instruments, and the second step uses maximum likelihood estimation (the 'etregress' command in Stata 14). Fertilizer use per hectare was lower on sharecropped and fixed-rented plots relative to owner-operated plots, a result that supports the Marshallian inefficiency principle in sharecropping. The difference in fertilizer use per hectare could be explained by the lack of incentivized, detailed contract forms, such as giving a larger proportion of the output to the tenant under sharecropping contracts, which would motivate greater fertilizer use on rented plots to maximize production; most sharecropping arrangements share output equally between tenants and landlords.
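
A sketch of two-stage least squares by hand (each stage fitted with statsmodels) on simulated data, in which the tenure dummy is endogenous and a single hypothetical instrument is available; this is not the paper's data or instrument set:

```python
# Manual 2SLS sketch: the tenure dummy is endogenous (correlated with the
# unobservable u), and z is a hypothetical instrument. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
z = rng.integers(0, 2, n).astype(float)        # instrument
u = rng.normal(0, 1, n)                        # unobservable
rented = (0.8 * z + u + rng.normal(0, 1, n) > 0.5).astype(float)  # endogenous tenure
fert = 100 - 20 * rented + 10 * u + rng.normal(0, 5, n)           # fertilizer, kg/ha

# Stage 1: regress the endogenous regressor on the instrument.
stage1 = sm.OLS(rented, sm.add_constant(z)).fit()
rented_hat = stage1.fittedvalues
# Stage 2: replace the regressor by its fitted values.
stage2 = sm.OLS(fert, sm.add_constant(rented_hat)).fit()
print("naive OLS: ", sm.OLS(fert, sm.add_constant(rented)).fit().params[1])
print("2SLS:      ", stage2.params[1])          # close to the true -20
```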

Keywords: tenure-contracts, endogeneity, plot-level data, Ethiopia, fertilizer

Procedia PDF Downloads 64
2754 Applying the Quad Model to Estimate the Implicit Self-Esteem of Patients with Depressive Disorders: Comparing the Psychometric Properties with the Implicit Association Test Effect

Authors: Yi-Tung Lin

Abstract:

Researchers commonly assess implicit self-esteem with the Implicit Association Test (IAT). The IAT's measure, often referred to as the IAT effect, indicates the strength of automatic preferences for the self relative to others and is often considered an index of implicit self-esteem. However, based on dual-process theory, the IAT does not rely entirely on the automatic process; it is also influenced by a controlled process. The present study therefore analyzed IAT data with the Quad model, separating four processes underlying IAT performance: the likelihood that an automatic association is activated by the stimulus in the trial (AC); that a correct response is discriminated in the trial (D); that the automatic bias is overcome in favor of a deliberate response (OB); and that, when the association is not activated and the individual fails to discriminate a correct answer, a guessing or response bias drives the response (G). The AC and G processes are automatic, while the D and OB processes are controlled. The AC parameter is considered the strength of the association activated by the stimulus, which is what implicit measures of social cognition aim to assess: the stronger the automatic association between the self and positive valence, the more likely it is to be activated by a relevant stimulus. Therefore, the AC parameter was used as the index of implicit self-esteem in the present study. Meanwhile, the relationship between implicit self-esteem and depression has not been fully investigated. The cognitive theory of depression assumes that the negative self-schema is crucial in depression; from this point of view, implicit self-esteem should be negatively associated with depression, but the results of empirical studies are inconsistent. The aims of the present study were to examine the psychometric properties of the AC (i.e., test-retest reliability and its correlations with explicit self-esteem and depression) and to compare them with those of the IAT effect. In the present study, 105 patients with depressive disorders completed the Rosenberg Self-Esteem Scale, the Beck Depression Inventory-II, and the IAT on the pretest; after at least 3 weeks, the participants completed the second IAT. The data were analyzed with a latent-trait multinomial processing tree model (latent-trait MPT) using the TreeBUGS package in R. The result showed that the latent-trait MPT had a satisfactory model fit. The test-retest reliabilities of the AC and the IAT effect were medium (r = .43, p < .0001) and small (r = .29, p < .01), respectively. Only the AC showed a significant correlation with explicit self-esteem (r = .19, p < .05); neither of the two indexes was correlated with depression. Collectively, the AC parameter was a satisfactory index of implicit self-esteem compared with the IAT effect, and the present study supports earlier findings that implicit self-esteem is not correlated with depression.
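
For reference, the Quad model's standard trial equations can be written down directly (a sketch of the basic parameterization; the paper fits a hierarchical latent-trait MPT version in TreeBUGS rather than these raw equations):

```python
# Quad model sketch: probability of a correct response on trials where the
# automatic bias agrees (compatible) or conflicts (incompatible) with the
# correct response, in the standard parameterization.
def p_correct(ac, d, ob, g):
    """ac: association activation, d: discriminability,
    ob: overcoming bias, g: guessing bias toward the correct key."""
    p_compatible = ac + (1 - ac) * d + (1 - ac) * (1 - d) * g
    p_incompatible = ac * d * ob + (1 - ac) * d + (1 - ac) * (1 - d) * g
    return p_compatible, p_incompatible

# Strong activation with imperfect control lowers accuracy only on
# incompatible trials (here: 0.94 vs. 0.73).
print(p_correct(ac=0.4, d=0.8, ob=0.6, g=0.5))
```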

Keywords: cognitive modeling, implicit association test, implicit self-esteem, quad model

Procedia PDF Downloads 105
2753 Finite Sample Inferences for Weak Instrument Models

Authors: Gubhinder Kundhi, Paul Rilstone

Abstract:

It is well established that instrumental variable (IV) estimators in the presence of weak instruments can be poorly behaved; in particular, they can be quite biased in finite samples. Finite sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte Carlo experiment, the performance of these expansions is compared to the first-order approximation and to other methods commonly used in finite samples, such as the bootstrap.

Keywords: bootstrap, instrumental variables, Edgeworth expansions, saddlepoint expansions

Procedia PDF Downloads 290
2752 Direct Approach in Modeling Particle Breakage Using Discrete Element Method

Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang

Abstract:

The current study aims to develop an in-house discrete element method (DEM) code and link it with a direct breakage event, so that particle breakage, and the resulting fragment size distribution, can be determined simultaneously with the DEM simulation. The particle breakage is applied directly inside the DEM computation algorithm: if any breakage happens, the original particle is replaced with daughter fragments. In this way, the calculation proceeds on an updated particle list, which closely resembles the real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment for testing the model: a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)–powder bed impacts in a ball mill and during particle bed impact tests. Mono, binary and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate for coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations: fine particles in a particle bed increase mechanical energy loss and reduce and distribute interparticle forces, thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. This phenomenon, known as acceleration, was shown to be less significant, but it should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.
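
A toy sketch of the replacement step at the heart of the direct approach: when a contact delivers more than a threshold energy, the parent particle is removed and mass-conserving daughters are appended to the particle list (the two-equal-halves fragment distribution and the threshold are placeholders):

```python
# Toy breakage-replacement sketch: swap a parent particle for two daughters
# of equal volume when the impact energy exceeds a threshold. The fragment
# distribution and threshold are illustrative placeholders.
import numpy as np

def maybe_break(particles, idx, impact_energy, e_crit=1e-3):
    """particles: list of dicts with 'pos' (m) and 'radius' (m)."""
    if impact_energy < e_crit:
        return particles
    parent = particles.pop(idx)
    r_child = parent["radius"] / 2 ** (1 / 3)        # two halves, equal total volume
    offset = np.array([r_child, 0.0, 0.0])
    for sign in (+1, -1):
        particles.append({"pos": parent["pos"] + sign * offset,
                          "radius": r_child})
    return particles

ps = [{"pos": np.zeros(3), "radius": 1e-3}]
ps = maybe_break(ps, 0, impact_energy=5e-3)
print(len(ps), [round(p["radius"], 6) for p in ps])
```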

Keywords: particle bed, breakage models, breakage kinetics, discrete element method

Procedia PDF Downloads 176
2751 Fault Detection of Pipeline in Water Distribution Network System

Authors: Shin Je Lee, Go Bong Choi, Jeong Cheol Seo, Jong Min Lee, Gibaek Lee

Abstract:

Water pipe networks are installed underground, and once in place, it is difficult to recognize the state of the pipes when a leak or burst happens; accordingly, remedial action is often delayed after a fault occurs. A systematic fault management system for the water pipe network is therefore required to prevent accidents and minimize losses. In this work, we develop an online fault detection system for a water pipe network using pipe data such as flow rate and pressure. A transient model describing water flow in pipelines is presented and simulated using MATLAB. Fault situations such as leaks or bursts can also be simulated, and the flow rate and pressure data at the time of the fault are collected. Faults are detected using two statistical methods, the fast Fourier transform and the discrete wavelet transform, which are compared to find which shows the better fault detection performance.
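
The spectral comparison idea can be sketched as follows, with synthetic signals standing in for the MATLAB transient-model output:

```python
# FFT-based fault-signature sketch: compare pressure signals under normal
# and leak conditions. The signals below are synthetic placeholders.
import numpy as np
from numpy.fft import rfft, rfftfreq

fs, n = 100.0, 2048                       # sampling rate [Hz], samples
t = np.arange(n) / fs
normal = 2.0 + 0.05 * np.sin(2 * np.pi * 0.5 * t)   # baseline pressure [bar]
leak = normal.copy()
leak[1000:] -= 0.3 * (1 - np.exp(-(t[1000:] - t[1000]) / 2.0))  # pressure drop

f = rfftfreq(n, 1 / fs)
residual = np.abs(rfft(leak - normal))    # spectrum of the fault signature
print(f"dominant residual frequency: {f[residual[1:].argmax() + 1]:.3f} Hz")
```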

Keywords: fault detection, water pipeline model, fast Fourier transform, discrete wavelet transform

Procedia PDF Downloads 489
2750 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption

Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko

Abstract:

Mafate is a natural circus in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being tested in this area, composed of photovoltaic production combined with electrochemical batteries, in order to meet the local population's electricity demand through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. The energy management problem leads to a large linear programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants' planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with optimal use of the energy in the batteries, while satisfying the users' planning requirements as far as possible. In the discrete model, the parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach, the data parameters and the linear programming solutions become random variables, whose distributions can be imposed or estimated from samples of real observations or from samples of optimal discrete equilibrium solutions.
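
A small sketch of the deterministic 24-hour dispatch as a linear program with scipy (variables per hour: served load s, charge c, discharge d, curtailment w, stored energy e; the PV and demand profiles, battery size and efficiency are illustrative, not the Mafate data):

```python
# 24-hour PV + battery dispatch as a linear program (scipy.optimize.linprog).
# Maximize served load subject to power balance and storage dynamics.
# All profiles and battery parameters are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

T, cap, p_max, eta = 24, 10.0, 3.0, 0.9
pv = np.maximum(0, 5 * np.sin(np.pi * (np.arange(T) - 6) / 12))   # kWh per hour
demand = 2.0 + 1.5 * (np.arange(T) >= 18)                          # evening peak

nv = 5 * T                      # variable order: s | c | d | w | e
idx = lambda block, t: block * T + t
c_obj = np.zeros(nv); c_obj[:T] = -1.0          # maximize served load

A_eq, b_eq = [], []
for t in range(T):
    # Power balance: pv + d = s + c + w
    row = np.zeros(nv)
    row[idx(0, t)] = 1; row[idx(1, t)] = 1; row[idx(3, t)] = 1; row[idx(2, t)] = -1
    A_eq.append(row); b_eq.append(pv[t])
    # Storage recursion: e_t - e_{t-1} - eta*c_t + d_t = 0 (with e_{-1} = 0)
    row = np.zeros(nv)
    row[idx(4, t)] = 1; row[idx(1, t)] = -eta; row[idx(2, t)] = 1
    if t > 0:
        row[idx(4, t - 1)] = -1
    A_eq.append(row); b_eq.append(0.0)

bounds = ([(0, demand[t]) for t in range(T)] +       # served load s
          [(0, p_max)] * T + [(0, p_max)] * T +      # charge c, discharge d
          [(0, None)] * T + [(0, cap)] * T)          # curtailment w, storage e

res = linprog(c_obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
print(f"served {-res.fun:.1f} kWh of {demand.sum():.1f} kWh demanded")
```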

Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimation of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity

Procedia PDF Downloads 89
2749 Mental Health Challenges, Internalizing and Externalizing Behavior Problems, and Academic Challenges among Adolescents from Broken Families

Authors: Fadzai Munyuki

Abstract:

Parental divorce is one of the most stressful life events for youth and is associated with long-lasting emotional and behavioral problems. Over the last few decades, research has consistently found strong associations between divorce and adverse health effects in adolescents. Parental divorce has been hypothesized to lead to psychosocial development problems, mental health challenges, internalizing and externalizing behavior problems, and low academic performance among adolescents. This is supported by positive youth development theory, which states that the family setup has a major role to play in adolescent development and well-being. The focus of this research is therefore to test this hypothesized process model among adolescents in five provinces of Zimbabwe. A cross-sectional study will be conducted, and 1,840 adolescents (n = 1840) aged 14 to 17 will be recruited for this study. A stress questionnaire scale, a child behavior checklist scale, and an academic concept scale will be used. Data analysis will be done using structural equation modeling. Prior research in this area has many limitations, including the lack of 'real-time' studies, few cross-sectional studies, the lack of thorough and validated population measures, and a tendency to focus on a single variable in relation to parental divorce. This study therefore seeks to bridge the gap between past research and the current literature by using a validated population measure, conducting a 'real-time' study, and combining three latent variables.

Keywords: mental health, internalizing and externalizing behavior, divorce, academic achievements

Procedia PDF Downloads 52
2748 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System

Authors: Benjamin Chijioke Agwah, Paulinus Chinaenye Eze

Abstract:

The antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety by keeping vehicles steerable and stable during emergency braking. This paper presents a wheel-slip-based intelligent controller with variable zero lag compensation for ABS. The controller is required to achieve very fast, accurate wheel slip tracking during hard braking, to eliminate chattering with improved transient and steady-state performance, and to shorten the stopping distance using an effective braking torque less than the maximum allowable torque to bring a braking vehicle to a stop. The dynamics of a vehicle braking from a velocity of 30 m/s in a straight line were determined and modelled in the MATLAB/Simulink environment to represent a conventional ABS system without a controller. Simulation results indicated that the system without a controller was not able to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent control scheme based on a fuzzy logic controller (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the performance of the FLC by eliminating steady-state error and providing improved bandwidth, which suppresses the effects of high-frequency noise such as chattering during braking. The simulation results showed that the FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m) and 50% (69.13 m) on dry, wet, cobblestone and snow road surface conditions, respectively. Overall, the proposed system used an effective braking torque less than the maximum allowable braking torque to achieve efficient wheel slip tracking and robust control performance on different road surfaces.
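
A simplified quarter-car wheel-slip simulation is sketched below, with a proportional slip-tracking brake-torque law standing in for the paper's FLC-VZLC controller (illustrative parameters; the real controller shapes the torque through fuzzy rules and the compensator):

```python
# Quarter-car wheel-slip simulation with a proportional slip-tracking brake
# torque (a stand-in for the FLC-VZLC). All parameters are illustrative.
import numpy as np

m, J, r, g = 342.0, 1.13, 0.33, 9.81     # quarter mass, wheel inertia, radius
def mu(s):                                # simple friction-slip curve (dry-like)
    return 1.1 * (1 - np.exp(-20 * s)) - 0.4 * s

v, w = 30.0, 30.0 / 0.33                 # initial speeds: vehicle, wheel
s_ref, kp = 0.2, 4000.0                  # target slip near the mu peak
dt, t, dist = 1e-4, 0.0, 0.0

while v > 0.5:
    s = max((v - w * r) / v, 0.0)        # longitudinal slip
    Fx = mu(s) * m * g                   # tire braking force
    Tb = max(1200.0 + kp * (s_ref - s), 0.0)   # slip-tracking brake torque
    v += dt * (-Fx / m)                  # vehicle deceleration
    w = max(w + dt * (Fx * r - Tb) / J, 0.0)   # wheel dynamics
    dist += v * dt
    t += dt
print(f"stopped in {dist:.1f} m after {t:.1f} s")
```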

Keywords: ABS, fuzzy logic controller, variable zero lag compensator, wheel slip tracking

Procedia PDF Downloads 128