Search results for: generalized Douglas-Weyl (GDW) metric

874 On Confidence Intervals for the Difference between Inverse of Normal Means with Known Coefficients of Variation

Authors: Arunee Wongkhao, Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

In this paper, we propose two new confidence intervals for the difference between the inverses of normal means with known coefficients of variation. The first is constructed from the generalized confidence interval approach, and the second from the closed-form method of variance estimation. We examine the performance of these confidence intervals in terms of coverage probabilities and expected lengths via Monte Carlo simulation.
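
A minimal Monte Carlo harness in Python for estimating coverage probability and expected length might look like the sketch below. The interval constructor ci_difference_inverse_means is a hypothetical placeholder (a naive delta-method interval) standing in for either of the proposed intervals, and all parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def ci_difference_inverse_means(x, y, cv_x, cv_y):
    # Hypothetical placeholder: a naive delta-method interval for
    # 1/mu_x - 1/mu_y, standing in for the paper's two proposed intervals.
    mx, my = x.mean(), y.mean()
    # Known CVs give sd = cv * mu; variance of 1/mean via the delta method.
    var = (cv_x * mx) ** 2 / (len(x) * mx ** 4) + (cv_y * my) ** 2 / (len(y) * my ** 4)
    z = 1.959963984540054  # 97.5% standard normal quantile
    center = 1.0 / mx - 1.0 / my
    half = z * np.sqrt(var)
    return center - half, center + half

def coverage_study(mu_x=2.0, mu_y=3.0, cv_x=0.1, cv_y=0.1, n=30, reps=10_000):
    true_diff = 1.0 / mu_x - 1.0 / mu_y
    hits, lengths = 0, 0.0
    for _ in range(reps):
        x = rng.normal(mu_x, cv_x * mu_x, size=n)
        y = rng.normal(mu_y, cv_y * mu_y, size=n)
        lo, hi = ci_difference_inverse_means(x, y, cv_x, cv_y)
        hits += lo <= true_diff <= hi
        lengths += hi - lo
    return hits / reps, lengths / reps  # coverage probability, expected length

print(coverage_study())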

Keywords: coverage probability, expected length, inverse of normal mean, coefficient of variation, generalized confidence interval, closed form method of variance estimation

Procedia PDF Downloads 278
873 Active Space Debris Removal by Extreme Ultraviolet Radiation

Authors: A. Anandha Selvan, B. Malarvizhi

Abstract:

In recent years, the problem of space debris has become very serious. The mass of artificial objects in orbit has increased quite steadily, at a rate of about 145 metric tons annually, leading to a total of approximately 7000 metric tons. About 97% of space debris objects now orbit in the LEO region, where catastrophic collisions are most likely to occur and where such collisions generate new debris. We therefore propose a concept for cleaning space debris in the thermosphere by directing Extreme Ultraviolet (EUV) radiation from a re-orbiter to the region in front of a debris object. In our concept, the EUV radiation causes thermospheric expansion by reacting with atmospheric gas particles, producing drag in front of the debris object. This drag force is high enough to slow the object's relative velocity, so that it gradually loses altitude and finally enters the Earth's atmosphere. After the first target is removed, the re-orbiter can move on to the next target. This method removes debris without capturing it, so it can be applied to a wide range of objects regardless of their shape or rotation. This paper discusses the operation of the re-orbiter for removing space debris in the thermosphere region.

Keywords: active space debris removal, space debris, LEO, extreme ultraviolet, re-orbiter, thermosphere

Procedia PDF Downloads 431
872 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements to an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failures or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage of communications or physical nodes). This increases the risk of selecting a poor candidate architecture, due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts is undertaken that processes, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, such that as they are impacted or degraded, the resulting functional availability of the system can be determined. A reinforcement-learning agent structurally impacts the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric that rates their resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and the resulting architectural recommendation, against the baseline requirements and against existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
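
A toy sketch of the core simulation loop is given below, assuming a hypothetical adjacency-list architecture and random link loss in place of the paper's reinforcement-learning agent; every name and number here is illustrative.

```python
import random
from itertools import combinations

# Toy architecture: nodes host functions; links carry traffic between them.
links = {("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("b", "d")}
functions = {"sense": {"a", "b"}, "decide": {"b", "c"}, "act": {"c", "d"}}

def reachable(src, dst, live_links):
    # Simple graph search over the undirected set of surviving links.
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for u, v in live_links:
            for nxt in ((v,) if u == node else (u,) if v == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

def functional_availability(live_links):
    # A function is "up" if every pair of its hosting nodes can still communicate.
    up = sum(all(reachable(s, d, live_links) for s, d in combinations(nodes, 2))
             for nodes in functions.values())
    return up / len(functions)

def degrade_and_score(attack_intensity, trials=1000, rng=random.Random(0)):
    # Availability metric: mean functional availability under random link loss,
    # a stand-in for the paper's adversarial (RL-driven) degradation.
    total = 0.0
    for _ in range(trials):
        live = {l for l in links if rng.random() > attack_intensity}
        total += functional_availability(live)
    return total / trials

for intensity in (0.0, 0.2, 0.5, 0.8):
    print(f"attack intensity {intensity:.1f}: availability {degrade_and_score(intensity):.3f}")
```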

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 70
871 When Sex Matters: A Comparative Generalized Structural Equation Model (GSEM) for the Determinants of Stunting Amongst Under-fives in Uganda

Authors: Vallence Ngabo M., Leonard Atuhaire, Peter Clever Rutayisire

Abstract:

The main aim of this study was to establish the differences in both the determinants of stunting and the causal mechanisms through which the identified determinants influence stunting among male and female under-fives in Uganda. The literature shows that male children below the age of five years are at a higher risk of being stunted than their female counterparts. Specifically, studies in Uganda indicate that being a male child is positively associated with stunting, while being female is negatively associated with stunting. Data for 904 male and 829 female under-fives were extracted from the UDHS-2016 survey dataset. Key variables for this study were identified and used in generating the relevant models and paths. Structural equation modeling techniques were used in their generalized form (GSEM). The generalized nature necessitated specifying both the family and the link function for each response variable in the system of the model. The sex of the child (b4) was used as a grouping factor, and the height-for-age (HAZ) scores were used to construct the stunting status of the under-fives. The estimated models and paths clearly indicated that the sets of underlying factors that influence male and female under-fives differ, as do the paths through which they influence stunting. However, some of the determinants that influenced stunting among male under-fives also influenced stunting among female under-fives. To reduce stunting to the desired level, it is important to consider the multifaceted and complex nature of the risk factors that influence stunting among under-fives and, more importantly, to consider the different sex-specific factors and the causal mechanisms or paths through which they influence stunting.

Keywords: stunting, under-fives, sex of the child, GSEM, causal mechanism

Procedia PDF Downloads 104
870 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), handling equi-, over-, and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify, which restricts likelihood-based estimation. The joint generalized quasi-likelihood approach (GQL-I) was considered instead, but it is computationally intensive and may even fail to estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments; it is shown to yield estimates as efficient as GQL-I while being far more computationally stable.
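
A generic GQL Newton-type update can be sketched in a few lines of numpy. The log-link Poisson-style mean and AR(1) working correlation below are stand-ins assumed for illustration, not the paper's CMP specification or its single-score GQL-III construction.

```python
import numpy as np

def gql_fit(X, y, rho=0.3, tol=1e-8, max_iter=50):
    """Generic generalized quasi-likelihood (GQL) iteration:
    beta <- beta + (D' V^-1 D)^-1 D' V^-1 (y - mu),
    with log-link mean mu = exp(X beta) and an AR(1) working correlation."""
    n, p = X.shape
    idx = np.arange(n)
    R = rho ** np.abs(idx[:, None] - idx[None, :])  # AR(1): R[i,j] = rho^|i-j|
    beta = np.zeros(p)
    for _ in range(max_iter):
        mu = np.exp(X @ beta)
        A = np.diag(np.sqrt(mu))        # Poisson-style variance function
        V = A @ R @ A                   # working covariance
        D = mu[:, None] * X             # d mu / d beta for the log link
        Vinv_r = np.linalg.solve(V, y - mu)
        Vinv_D = np.linalg.solve(V, D)
        step = np.linalg.solve(D.T @ Vinv_D, D.T @ Vinv_r)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Illustrative use on simulated counts.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.8])))
print(gql_fit(X, y))
```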

Keywords: longitudinal, Com-Poisson, ill-conditioned, INAR(1), GLMs, GQL

Procedia PDF Downloads 332
869 Generalized Model Estimating Strength of Bauxite Residue-Lime Mix

Authors: Sujeet Kumar, Arun Prasad

Abstract:

The present work investigates the effect of multiple parameters on the unconfined compressive strength of the bauxite residue-lime mix. A number of unconfined compressive strength tests considering various curing times, lime contents, dry densities, and moisture contents were carried out. The results show that an empirical correlation may be successfully developed relating volumetric lime content, porosity, moisture content, and curing time to unconfined compressive strength for the range of bauxite residue-lime mixes studied. The proposed empirical correlations efficiently predict the strength of the bauxite residue-lime mix and can be used as generalized empirical equations to estimate unconfined compressive strength.
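
The sketch below shows how such a correlation could be fitted with scipy's curve_fit, assuming (purely for illustration) a power-law form in the porosity/volumetric-lime ratio and curing time, a shape common in soil-binder strength correlations; the functional form and the synthetic data are assumptions, not the paper's equation.

```python
import numpy as np
from scipy.optimize import curve_fit

def ucs_model(X, a, b, c):
    # Assumed form: UCS = a * (porosity / volumetric lime content)^(-b) * t^c
    # -- illustrative only, not the authors' fitted equation.
    eta_over_liv, t = X
    return a * eta_over_liv ** (-b) * t ** c

# Synthetic stand-in data: (porosity/lime ratio, curing time in days) -> UCS.
rng = np.random.default_rng(7)
eta_over_liv = rng.uniform(5.0, 30.0, size=60)
t = rng.choice([7.0, 14.0, 28.0], size=60)
ucs = 5000.0 * eta_over_liv ** (-1.2) * t ** 0.4 * rng.normal(1.0, 0.05, size=60)

params, _ = curve_fit(ucs_model, (eta_over_liv, t), ucs, p0=(1000.0, 1.0, 0.5))
print("fitted a, b, c:", params)
```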

Keywords: bauxite residue, curing time, porosity/volumetric lime ratio, unconfined compressive strength

Procedia PDF Downloads 204
868 Applying Serious Game Design Frameworks to Existing Games for Integration of Custom Learning Objectives

Authors: Jonathan D. Moore, Mark G. Reith, David S. Long

Abstract:

Serious games (SGs) have been shown to be an effective teaching tool in many contexts. Because of the success of SGs, several design frameworks have been created to expedite the process of making original serious games that teach specific learning objectives (LOs). Even with these frameworks, the time required to create a custom SG from conception to implementation can range from months to years. Furthermore, it is even more difficult to design a game framework that allows an instructor to create customized game variants supporting multiple LOs within the same field. This paper proposes a refactoring methodology to apply the theoretical principles from well-established design frameworks to a pre-existing serious game. The expected result is a generalized game that can be quickly customized to teach LOs not originally targeted by the game. This methodology begins by describing the general components in a game, then uses a combination of two SG design frameworks to extract the teaching elements present in the game. The identified teaching elements are then used as the theoretical basis to determine the range of LOs that can be taught by the game. This paper evaluates the proposed methodology by presenting a case study of refactoring the serious game Battlespace Next (BSN) to teach joint military capabilities. The range of LOs that can be taught by the generalized BSN is identified, and examples of creating custom LOs are given. Survey results from users of the generalized game are also provided. Lastly, the expected impact of this work is discussed, and a road map for future work and evaluation is presented.

Keywords: serious games, learning objectives, game design, learning theory, game framework

Procedia PDF Downloads 76
867 Marginalized Two-Part Joint Models for Generalized Gamma Family of Distributions

Authors: Mohadeseh Shojaei Shahrokhabadi, Ding-Geng (Din) Chen

Abstract:

Positive continuous outcomes with a substantial number of zero values and incomplete longitudinal follow-up are quite common in medical cost data. To jointly model semi-continuous longitudinal cost data and survival data and to provide marginalized covariate effect estimates, a marginalized two-part joint model (MTJM) has been developed for outcome variables with lognormal distributions. In this paper, we propose MTJM models for outcome variables from the generalized gamma (GG) family of distributions. The GG distribution constitutes a general family that includes nearly all of the most frequently used distributions, such as the gamma, exponential, Weibull, and lognormal. In the proposed MTJM-GG model, the conditional mean from a conventional two-part model with a three-parameter GG distribution is parameterized to provide a marginal interpretation for the regression coefficients. In addition, MTJM-gamma and MTJM-Weibull are developed as special cases of MTJM-GG. To illustrate the applicability of the MTJM-GG, we applied the model to a set of real electronic health record data recently collected in Iran, and we provide SAS code for the application. The simulation results showed that when the outcome distribution is unknown or misspecified, which is usually the case in real data sets, the MTJM-GG consistently outperforms the other models. The GG family of distributions facilitates estimating a model with improved fit over the MTJM-gamma, standard Weibull, or lognormal distributions.
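
For readers working in Python rather than SAS, scipy ships the generalized gamma family; the sketch below checks the gamma and Weibull special cases of scipy's gengamma parameterization and fits the three-parameter GG to synthetic right-skewed costs. It illustrates the distribution family only, not the MTJM-GG estimation procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# scipy's generalized gamma: pdf(x; a, c) ∝ x**(c*a - 1) * exp(-x**c).
# Special cases of the family (illustrative checks):
#   c = 1  -> gamma with shape a
#   a = 1  -> Weibull with shape c
x = np.linspace(0.1, 5.0, 5)
print(np.allclose(stats.gengamma.pdf(x, a=2.0, c=1.0), stats.gamma.pdf(x, a=2.0)))
print(np.allclose(stats.gengamma.pdf(x, a=1.0, c=1.5), stats.weibull_min.pdf(x, c=1.5)))

# Fitting the three-parameter GG (location fixed at zero) to right-skewed,
# semi-continuous-style positive costs -- synthetic stand-in data only.
costs = rng.lognormal(mean=2.0, sigma=0.8, size=2000)
a_hat, c_hat, loc, scale = stats.gengamma.fit(costs, floc=0)
print("GG fit:", a_hat, c_hat, scale)
```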

Keywords: marginalized two-part model, zero-inflated, right-skewed, semi-continuous, generalized gamma

Procedia PDF Downloads 148
866 Homeostatic Analysis of the Integrated Insulin and Glucagon Signaling Network: Demonstration of Bistable Response in Catabolic and Anabolic States

Authors: Pramod Somvanshi, Manu Tomar, K. V. Venkatesh

Abstract:

Insulin and glucagon are responsible for the homeostasis of key plasma metabolites such as glucose, amino acids, and fatty acids in the blood plasma. These hormones act antagonistically to each other during the secretion and signaling stages. In the present work, we analyze the effect of macronutrients on the response of the integrated insulin and glucagon signaling pathways. The insulin and glucagon pathways are connected by DAG (a calcium signaling component that is part of the glucagon signaling module), which activates PKC and inhibits IRS (an insulin signaling component), constituting one crosstalk. AKT (an insulin signaling component) inhibits cAMP (a glucagon signaling component) through PDE3, forming the other crosstalk between the two signaling pathways. The physiological level of anabolism and catabolism is captured through a metric quantified by the activity levels of AKT and PKA in their phosphorylated states, which represent the insulin and glucagon signaling endpoints, respectively. Under resting and starving conditions, the phosphorylation metric represents homeostasis, indicating a balance between the anabolic and catabolic activities in the tissues. The steady state analysis of the integrated network demonstrates the presence of a bistable response in the phosphorylation metric with respect to input plasma glucose levels. This indicates that two steady state conditions (one in the homeostatic zone and the other in the anabolic zone) are possible for a given glucose concentration, depending on whether it is reached along the ON or the OFF path. When glucose levels rise above normal, during post-meal conditions, the bistability is observed in the anabolic space, denoting the dominance of glycogenesis in the liver. For glucose concentrations lower than physiological levels, as while exercising, the metabolic response lies in the catabolic space, denoting the prevalence of glycogenolysis in the liver. The nonlinear positive feedback of AKT on IRS in the insulin signaling module of the network is the main cause of the bistable response. The span of bistability in the phosphorylation metric increases as plasma fatty acid and amino acid levels rise, and eventually the response turns monostable and catabolic, representing diabetic conditions. In the case of a high-fat or high-protein diet, fatty acids and amino acids have an inhibitory effect on the insulin signaling pathway by increasing the serine phosphorylation of the IRS protein via the activation of PKC and S6K, respectively. A similar analysis was also performed with respect to input amino acid and fatty acid levels. This emergent property of bistability in the integrated network helps us understand why it becomes extremely difficult to treat obesity and diabetes when the blood glucose level rises beyond a certain value.
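
The hysteresis logic behind such a bistable response can be illustrated with a one-variable caricature: a phosphorylation level with Hill-type positive feedback (echoing the AKT-IRS loop) driven by an input standing in for glucose. The model below is a deliberately minimal assumption, not the paper's signaling network; sweeping the drive up and then down exposes the ON/OFF-path dependence.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dxdt(t, x, s):
    # Toy phosphorylation variable x with Hill-type positive feedback
    # (a caricature of the AKT -> IRS loop), input drive s, and linear decay.
    return s + 4.0 * x**4 / (1.0 + x**4) - 1.5 * x

def steady_state(s, x0):
    sol = solve_ivp(dxdt, (0.0, 200.0), [x0], args=(s,), rtol=1e-8)
    return sol.y[0, -1]

drives = np.linspace(0.0, 1.5, 31)
x, up = 0.0, []
for s in drives:                 # ON path: sweep the glucose proxy upward
    x = steady_state(s, x)
    up.append(x)
down = []
for s in drives[::-1]:           # OFF path: sweep downward from the high state
    x = steady_state(s, x)
    down.append(x)
down = down[::-1]

for s, a, b in zip(drives, up, down):
    flag = "bistable" if abs(a - b) > 1e-3 else ""
    print(f"s={s:.2f}  up={a:.3f}  down={b:.3f}  {flag}")
```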

Keywords: bistability, diabetes, feedback and crosstalk, obesity

Procedia PDF Downloads 242
865 Particle and Photon Trajectories near the Black Hole Immersed in the Nonstatic Cosmological Background

Authors: Elena M. Kopteva, Pavlina Jaluvkova, Zdenek Stuchlik

Abstract:

The question of constructing a consistent model of the cosmological black hole remains unsolved and still attracts the interest of cosmologists, since it is important in a wide set of research problems, including the dynamics of the black hole horizon, the interplay between cosmological expansion and local gravity, and structure formation in the early universe. In this work, a model of the cosmological black hole is built on the basis of the exact solution of the Einstein equations for a spherically symmetric inhomogeneous dust distribution, using the mass-function approach. Possible trajectories for massive particles and photons near the black hole immersed in the nonstatic dust cosmological background are investigated within the framework of the obtained model. The reference system of a distant galaxy comoving with the cosmological expansion, combined with curvature coordinates, is used, so that the resulting metric becomes nondiagonal and involves both the proper 'cosmological' time and curvature spatial coordinates. For this metric, the geodesic equations are analyzed for test particles and photons, and the respective trajectories are constructed.
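
For reference, the trajectories are obtained by integrating the standard geodesic equation (a textbook formula, not specific to the metric derived in the paper):

```latex
\frac{d^{2}x^{\mu}}{d\lambda^{2}}
  + \Gamma^{\mu}_{\alpha\beta}\,\frac{dx^{\alpha}}{d\lambda}\frac{dx^{\beta}}{d\lambda} = 0,
\qquad
\Gamma^{\mu}_{\alpha\beta}
  = \tfrac{1}{2}\, g^{\mu\nu}\left(\partial_{\alpha}g_{\nu\beta}
      + \partial_{\beta}g_{\nu\alpha} - \partial_{\nu}g_{\alpha\beta}\right),
```

with affine parameter λ; the photon trajectories are the null solutions and the massive-particle trajectories the timelike ones.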

Keywords: exact solutions for Einstein equations, Lemaitre-Tolman-Bondi solution, cosmological black holes, particle and photon trajectories

Procedia PDF Downloads 313
864 Bivariate Generalization of q-α-Bernstein Polynomials

Authors: Tarul Garg, P. N. Agrawal

Abstract:

We propose to define the q-analogue of the α-Bernstein-Kantorovich operators and then introduce the q-bivariate generalization of these operators to study the approximation of functions of two variables. We obtain the rate of convergence of these bivariate operators by means of the total modulus of continuity, the partial modulus of continuity, and Peetre's K-functional for continuous functions. Further, in order to study the approximation of functions of two variables in a space bigger than the space of continuous functions, i.e., the Bögel space, the GBS (Generalized Boolean Sum) of the q-bivariate operators is considered, and the degree of approximation is discussed for Bögel continuous and Bögel differentiable functions with the aid of the Lipschitz class and the mixed modulus of smoothness.
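
For background, the classical operator that this family generalizes is the Bernstein polynomial of f on [0, 1],

```latex
B_{n}(f;x) \;=\; \sum_{k=0}^{n} f\!\left(\tfrac{k}{n}\right)\binom{n}{k}\, x^{k}(1-x)^{\,n-k};
```

broadly, the α-Bernstein, Kantorovich, and q-analogue modifications respectively alter the polynomial basis, replace point evaluations by local integral means, and replace integers by q-integers.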

Keywords: Bögel continuous, Bögel differentiable, generalized Boolean sum, K-functional, mixed modulus of smoothness

Procedia PDF Downloads 354
863 Impact Factor Analysis for Spatially Varying Aerosol Optical Depth in Wuhan Agglomeration

Authors: Wenting Zhang, Shishi Liu, Peihong Fu

Abstract:

As an indicator of air quality directly related to the concentration of ground-level PM2.5, the spatial-temporal variation of Aerosol Optical Depth (AOD) and the analysis of its impact factors have become a focus in air pollution research. This paper addresses the non-stationarity and the autocorrelation (with a Moran's I index of 0.75) of the AOD in the Wuhan agglomeration (WHA) in central China, and uses geographically weighted regression (GWR) to identify the spatial relationship between AOD and its impact factors. The 3 km AOD product of the Moderate Resolution Imaging Spectroradiometer (MODIS) is used in this study. Beyond economic-social factors, land use density factors, vegetation cover, and elevation, a landscape metric is also considered as a factor. The results suggest that the GWR model is capable of dealing with spatially varying relationships, with R-squared, corrected Akaike Information Criterion (AICc), and standardized residuals better than those of the ordinary least squares (OLS) model. The GWR results suggest that urban development, forest, the landscape metric, and elevation are the major driving factors of AOD. Generally, higher AOD tends to be located in places with more urban development, less forest, and flatter terrain.
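
The core of GWR is a locally weighted least-squares fit at each observation site; the following numpy sketch (with synthetic data and an illustrative Gaussian kernel bandwidth, not the paper's calibration) shows the idea.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Geographically weighted regression: at each location i, solve a
    weighted least-squares fit with Gaussian kernel weights that decay
    with distance, yielding spatially varying coefficients."""
    n, p = X.shape
    betas = np.empty((n, p))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

# Synthetic stand-in for AOD vs. one driver (an urban-density proxy), with a
# coefficient that drifts across space -- illustrative only.
rng = np.random.default_rng(11)
coords = rng.uniform(0, 100, size=(300, 2))
urban = rng.normal(size=300)
local_slope = 0.5 + 0.01 * coords[:, 0]           # effect varies west to east
aod = 0.3 + local_slope * urban + rng.normal(0, 0.05, size=300)

X = np.column_stack([np.ones(300), urban])
betas = gwr_coefficients(coords, X, aod, bandwidth=20.0)
print("local slopes range:", betas[:, 1].min(), betas[:, 1].max())
```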

Keywords: aerosol optical depth, geographically weighted regression, land use change, Wuhan agglomeration

Procedia PDF Downloads 331
862 The Influence of Audio on Perceived Quality of Segmentation

Authors: Silvio Ricardo Rodrigues Sanches, Bianca Cogo Barbosa, Beatriz Regina Brum, Cléber Gimenez Corrêa

Abstract:

To evaluate the quality of a segmentation algorithm, researchers use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm; they require subjective experiments only during their development. Subjective experiments typically display to users videos (generated from frames with segmentation errors) that simulate the environment of an application domain, and this user feedback is crucial information for metric definition. In the subjective experiments used to develop some state-of-the-art metrics for testing segmentation algorithms, the videos displayed during the experiments did not contain audio. However, audio is an essential component in applications such as videoconferencing and augmented reality. If audio influences the user's perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated using data from these experiments. This work aims to identify whether audio influences the user's perception of segmentation quality in background substitution applications with audio. The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by the user.

Keywords: background substitution, influence of audio, segmentation evaluation, segmentation quality

Procedia PDF Downloads 91
861 Closed-Form Sharma-Mittal Entropy Rate for Gaussian Processes

Authors: Septimia Sarbu

Abstract:

The entropy rate of a stochastic process is a fundamental concept in information theory. It provides a limit on the amount of information that can be transmitted reliably over a communication channel, as stated by Shannon's coding theorems. Recently, researchers have focused on developing new measures of information that generalize Shannon's classical theory, with the aim of designing more efficient information encoding and transmission schemes. This paper continues the study of generalized entropy rates by deriving a closed-form solution for the Sharma-Mittal entropy rate of Gaussian processes. Using the squeeze theorem, we solve the limit in the definition of the entropy rate for different values of alpha and beta, the parameters of the Sharma-Mittal entropy. In the end, we compare it with the Shannon and Rényi entropy rates of Gaussian processes.
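
For reference, the discrete Sharma-Mittal entropy underlying the rate is commonly written as

```latex
H_{\alpha,\beta}(P) \;=\; \frac{1}{1-\beta}\left[\left(\sum_{i} p_{i}^{\alpha}\right)^{\frac{1-\beta}{1-\alpha}} - 1\right],
\qquad \alpha > 0,\ \alpha \neq 1,\ \beta \neq 1,
```

which recovers the Rényi entropy as β → 1, the Tsallis entropy as β → α, and the Shannon entropy as both parameters tend to 1; the entropy rate is then the n → ∞ limit of the joint entropy of n consecutive samples divided by n.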

Keywords: generalized entropies, Sharma-Mittal entropy rate, Gaussian processes, eigenvalues of the covariance matrix, squeeze theorem

Procedia PDF Downloads 476
860 The Association between Masculinity and Anxiety in Canadian Men

Authors: Nikk Leavitt, Peter Kellett, Cheryl Currie, Richard Larouche

Abstract:

Background: Masculinity has been associated with poor mental health outcomes in adult men and is colloquially referred to as toxic. Masculinity is traditionally measured using the Male Role Norms Inventory, which examines behaviors that may be common in men but that are themselves associated with poor mental health regardless of gender (e.g., aggressiveness). The purpose of this study was to examine whether masculinity is associated with generalized anxiety among men using this inventory vs. a man's personal definition of it. Method: An online survey collected data from 1,200 men aged 18-65 across Canada in July 2022. Masculinity was measured using 1) the Male Role Norms Inventory Short Form and 2) men's self-definitions of what being masculine means; men were then asked to rate the extent to which they perceived themselves to be masculine on a scale of 1 to 10 based on their own definition of the construct. Generalized anxiety disorder was measured using the GAD-7. Multiple linear regression was used to examine associations between each masculinity score and the anxiety score, adjusting for confounders. Results: The masculinity score measured using the inventory was positively associated with increased anxiety scores among men (β = 0.02, p < 0.01). The masculinity subscales most strongly correlated with higher anxiety were restrictive emotionality (β = 0.29, p < 0.01) and dominance (β = 0.30, p < 0.01). When traditional masculinity was replaced by a man's self-rated masculinity score in the model, the reverse association was found, with increasing masculinity resulting in a significantly reduced anxiety score (β = -0.13, p = 0.04). Discussion: These findings highlight the need to revisit the ways in which masculinity is defined and operationalized in research to better understand its impact on men's mental health. The findings also highlight the importance of allowing participants to self-define gender-based constructs, given that they are fluid and socially constructed.

Keywords: masculinity, generalized anxiety disorder, race, intersectionality

Procedia PDF Downloads 41
859 A Problem on Homogeneous Isotropic Microstretch Thermoelastic Half Space with Mass Diffusion Medium under Different Theories

Authors: Devinder Singh, Rajneesh Kumar, Arvind Kumar

Abstract:

The present investigation deals with a generalized model of the equations for a homogeneous isotropic microstretch thermoelastic half-space with a mass diffusion medium. The Lord-Shulman (LS), Green-Lindsay (GL), and coupled theory (CT) formulations of generalized thermoelasticity are applied to investigate the problem. The stresses in the considered medium have been studied under normal and tangential forces. The normal mode analysis technique is used to calculate the normal stress, shear stress, couple stresses, and microstress. A numerical computation has been performed on the resulting quantities. The computed numerical results are shown graphically.

Keywords: microstretch, thermoelastic, normal mode analysis, normal and tangential force, microstress force

Procedia PDF Downloads 508
858 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions

Authors: Timothy Kayode Samson, Adedoyin Isola Lawal

Abstract:

The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the extreme volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not been given much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated under three skewed error innovation distributions (skewed normal, skewed Student-t, and skewed generalized error distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. Findings reveal that the Binance coin reported higher mean returns compared to the other digital currencies, while the skewness indicates that the Binance coin, Tether, and USD Coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study, the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions with fatter tails than the normal, Student-t, and generalized error innovation distributions. For the Binance coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD Coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t and skewed generalized error distributions over the skewed normal distribution.
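
This kind of specification comparison can be sketched with Python's arch package (the paper does not state its software, so this is only an assumed workflow). The snippet fits GJR-GARCH, sGARCH, and EGARCH variants with skewed Student-t innovations to synthetic returns; in practice the returns would be computed from the Yahoo Finance closing prices, and the skewed-GED case would need a library that supplies that distribution.

```python
import numpy as np
from arch import arch_model

# Synthetic stand-in for daily log-returns (percent); in practice these would
# be computed from closing prices, e.g. 100 * np.diff(np.log(prices)).
rng = np.random.default_rng(5)
returns = rng.standard_t(df=5, size=1500) * 0.8

specs = {
    "GJR-GARCH-skewt": dict(vol="GARCH", p=1, o=1, q=1, dist="skewt"),
    "EGARCH-skewt": dict(vol="EGARCH", p=1, o=1, q=1, dist="skewt"),
    "sGARCH-skewt": dict(vol="GARCH", p=1, q=1, dist="skewt"),
}
for name, spec in specs.items():
    res = arch_model(returns, mean="Constant", **spec).fit(disp="off")
    print(f"{name:18s}  AIC={res.aic:10.2f}  BIC={res.bic:10.2f}")
```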

Keywords: skewed generalized error distribution, skewed normal distribution, skewed Student-t distribution, APARCH, EGARCH, sGARCH, GJR-GARCH

Procedia PDF Downloads 66
857 Conformal Invariance and F(R,T) Gravity

Authors: P. Y. Tsyba, O. V. Razina, E. Güdekli, R. Myrzakulov

Abstract:

In this paper, we examine the equations of motion of F(R,T) gravity with respect to conformal invariance. It is shown that, in the general case, such a theory is not conformally invariant. Special cases of the functions v and u for which this property can appear were studied.
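
For orientation, conformal invariance here refers to the behavior of the theory under the Weyl rescaling of the metric,

```latex
\tilde{g}_{\mu\nu}(x) \;=\; \Omega^{2}(x)\, g_{\mu\nu}(x),
```

where Ω(x) is a smooth positive function; a conformally invariant theory yields field equations of the same form for the rescaled metric as for the original one.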

Keywords: conformal invariance, gravity, space-time, metric

Procedia PDF Downloads 628
856 Estimation of Rare and Clustered Population Mean Using Two Auxiliary Variables in Adaptive Cluster Sampling

Authors: Muhammad Nouman Qureshi, Muhammad Hanif

Abstract:

Adaptive cluster sampling (ACS) was developed specifically for the estimation of highly clumped populations and has been applied to a wide range of situations, such as rare and endangered animal species, unevenly distributed minerals, HIV patients, and drug users. In this paper, we propose a generalized semi-exponential estimator with two auxiliary variables under the framework of the ACS design. Expressions for the approximate bias and mean square error (MSE) of the proposed estimator are derived. Theoretical comparisons of the proposed estimator with existing estimators have been made. A numerical study is conducted on real and artificial populations to demonstrate and compare the efficiency of the proposed estimator. The results indicate that the proposed generalized semi-exponential estimator performs considerably better than all the adaptive and non-adaptive estimators considered in this paper.

Keywords: auxiliary information, adaptive cluster sampling, clustered populations, Hansen-Hurwitz estimation

Procedia PDF Downloads 201
855 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on the obtained new term. The graph is then created using these weights between its nodes. Finally, the segmentation is completed with the minimum cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
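
The boundary term can be sketched concretely: compare whole patches around neighboring pixels instead of single intensities, and turn the patch distance into an edge capacity. The snippet below is an illustrative rendering of that idea (window size, kernel, and normalization are assumptions, not the authors' exact formulation), leaving the min-cut itself to an external max-flow solver.

```python
import numpy as np

def patch_weights(image, radius=2, sigma=10.0):
    """Boundary weights for a 4-connected graph-cut grid, comparing whole
    patches (L2 distance) instead of single pixel intensities -- a sketch of
    the patch-based similarity idea, not the authors' exact formulation."""
    padded = np.pad(image.astype(float), radius, mode="reflect")
    h, w = image.shape
    k = 2 * radius + 1
    # Stack every pixel's (k x k) neighbourhood into a feature vector.
    patches = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    patches = patches.reshape(h, w, k * k)

    def weight(p, q):
        # Similar patches -> large weight (expensive to cut between them).
        return np.exp(-np.sum((p - q) ** 2, axis=-1) / (2.0 * sigma**2 * k * k))

    right = weight(patches[:, :-1], patches[:, 1:])   # horizontal edges
    down = weight(patches[:-1, :], patches[1:, :])    # vertical edges
    return right, down

# The returned edge capacities would then feed a min-cut/max-flow solver
# (e.g., the PyMaxflow library) together with the user's seed terminals.
img = np.random.default_rng(0).integers(0, 255, size=(64, 64))
right, down = patch_weights(img)
print(right.shape, down.shape)  # (64, 63) and (63, 64)
```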

Keywords: graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric

Procedia PDF Downloads 141
854 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas, such as astronomy, medical imaging, geophysics, and nondestructive evaluation, many problems related to calibration, fitting, or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data, and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness, and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10^9 to 10^18. This condition number acts as an amplifier of uncertainties in the data during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas, such as numerical optimization, where using interior-point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, semi-definite positive matrices and then generalized to any complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix that satisfies the convergence condition required in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solving schemes, such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case, consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the known classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residual, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters (such as the extreme values, the mean, the variance, …) of the solution of a linear system prior to its resolution. Such an approach, if it were to be efficient, would be a source of information on the solution of a system of linear equations.
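
As a point of reference for the solvers compared above, the classical Kaczmarz iteration (a standard version, not the authors' stochastic preconditioner) can be written in a few lines and exercised on a small Hilbert matrix, the pathological test case mentioned:

```python
import numpy as np

def kaczmarz(A, b, sweeps=200, x0=None):
    """Classical Kaczmarz iteration: cyclically project the iterate onto the
    hyperplane of each equation a_i . x = b_i."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float)
    row_norms = np.einsum("ij,ij->i", A, A)   # squared norm of each row
    for _ in range(sweeps):
        for i in range(m):
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Mildly ill-conditioned consistent test system built from a Hilbert matrix.
n = 6
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true
print("condition number: %.2e" % np.linalg.cond(A))
print("Kaczmarz error:   %.2e" % np.linalg.norm(kaczmarz(A, b, sweeps=5000) - x_true))
```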

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 109
853 Evaluating Traffic Congestion Using the Bayesian Dirichlet Process Mixture of Generalized Linear Models

Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig

Abstract:

This study applied traffic speed and occupancy data to develop clustering models that identify different traffic conditions. In particular, these models are based on the Dirichlet process mixture of generalized linear regressions (DML) and change-point regression (CR). The model frameworks were implemented using 2015 historical traffic data aggregated at 15-minute intervals from the Interstate 295 freeway in Jacksonville, Florida. Using the deviance information criterion (DIC) to identify the appropriate number of mixture components, three traffic states were identified: free-flow, transitional, and congested conditions. Results of the DML revealed that traffic occupancy is statistically significant in influencing the reduction of traffic speed in each of the identified states. The influence on the free-flow and congested states was estimated to be higher than on the transitional flow condition in both the evening and morning peak periods. Estimation of the critical speed threshold using CR revealed that 47 mph and 48 mph are the speed thresholds for the congested and transitional traffic conditions during the morning and evening peak hours, respectively. Free-flow speed thresholds for the morning and evening peak hours were estimated at 64 mph and 66 mph, respectively. The proposed approaches will facilitate accurate detection and prediction of traffic congestion for developing effective countermeasures.
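
A simplified stand-in for this clustering step can be sketched with scikit-learn's truncated Dirichlet-process mixture, here a DP mixture of Gaussians over (speed, occupancy) rather than the paper's DP mixture of GLMs, with synthetic data shaped to mimic three regimes:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic speed/occupancy pairs mimicking free-flow, transitional and
# congested regimes -- illustrative stand-in data only.
rng = np.random.default_rng(8)
free = np.column_stack([rng.normal(66, 3, 400), rng.normal(8, 2, 400)])
trans = np.column_stack([rng.normal(50, 4, 200), rng.normal(15, 3, 200)])
cong = np.column_stack([rng.normal(30, 6, 200), rng.normal(30, 5, 200)])
X = np.vstack([free, trans, cong])  # columns: speed (mph), occupancy (%)

# A truncated Dirichlet-process prior lets the data choose how many of the
# 10 candidate components are actually used.
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(X)

active = dpm.weights_ > 0.01
print("effective number of traffic states:", active.sum())
print("state means (speed, occupancy):\n", dpm.means_[active].round(1))
```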

Keywords: traffic congestion, multistate speed distribution, traffic occupancy, Dirichlet process mixtures of generalized linear model, Bayesian change-point detection

Procedia PDF Downloads 264
852 A Boundary Backstepping Control Design for 2-D, 3-D and N-D Heat Equation

Authors: Aziz Sezgin

Abstract:

We consider the problem of stabilization of an unstable heat equation in a 2-D, 3-D and, generally, n-D domain by deriving a generalized backstepping boundary control design methodology. To stabilize the systems, we design boundary backstepping controllers inspired by the 1-D unstable heat equation stabilization procedure. We assume that one side of the boundary is hinged and the other side is controlled for each direction of the domain. Thus, controllers act on two boundaries for a 2-D domain, three boundaries for a 3-D domain, and "n" boundaries for an n-D domain. The main idea of the design is to derive "n" controllers, one for each of the dimensions, by using "n" kernel functions. Thus, we obtain "n" controllers for the n-dimensional case. We use a transformation to change the system into an exponentially stable n-dimensional heat equation. The transformation used in this paper is of a generalized Volterra/Fredholm type with "n" kernel functions for the n-D domain, instead of the single kernel function of the 1-D design.
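
For orientation, the 1-D design being generalized rests on an invertible Volterra integral transformation of the state,

```latex
w(x,t) \;=\; u(x,t) - \int_{0}^{x} k(x,y)\, u(y,t)\, dy,
```

with the kernel k chosen so that w obeys an exponentially stable target heat equation; the n-D construction here replaces this with a generalized Volterra/Fredholm transformation carrying n kernel functions, one per controlled boundary.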

Keywords: backstepping, boundary control, 2-D, 3-D, n-D heat equation, distributed parameter systems

Procedia PDF Downloads 375
851 Optimization of Spatial Light Modulator to Generate Aberration Free Optical Traps

Authors: Deepak K. Gupta, T. R. Ravindran

Abstract:

Holographic Optical Tweezers (HOTs) generally use iterative algorithms such as weighted Gerchberg-Saxton (WGS) to generate multiple traps, which theoretically produce traps with 99% uniformity. But in experiments, it is the phase response of the spatial light modulator (SLM) that ultimately determines the efficiency, uniformity, and quality of the trap spots. In general, SLMs show a nonlinear phase response, and they may even have asymmetric phase modulation depths before and after π. This affects the resolution with which the gray levels are addressed before and after π, leading to degraded trap performance. We present a method to optimize the SLM for a linear phase response along with a symmetric phase modulation depth around π. Further, we optimize the SLM for its varying phase response over different spatial regions by optimizing the brightness/contrast and gamma of the hologram in different subsections. We show the effect of the optimization on an array of trap spots, resulting in improved efficiency and uniformity. We also calculate the spot sharpness metric and the trap performance metric and show a tightly focused spot with reduced aberration. The trap performance is compared by calculating the trap stiffness of a particle in a given trap spot before and after aberration correction. The trap stiffness is found to improve by 200% after the optimization.
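
A common building block for this kind of calibration is a gray-level lookup table that inverts the measured phase response; the sketch below uses a synthetic nonlinear response curve (an assumption for illustration, and simpler than the per-region brightness/contrast/gamma optimization the abstract describes).

```python
import numpy as np

# Sketch of linearizing an SLM's phase response with a lookup table (LUT).
# Assumes a measured calibration curve: phase shift (rad) vs. 8-bit gray level,
# here synthesized with a nonlinearity and excess depth beyond 2*pi.
gray = np.arange(256)
measured_phase = 2.1 * np.pi * (gray / 255.0) ** 1.3   # illustrative response

# Build the inverse map: for each desired (linear) phase, find the gray level
# that actually produces it on this device.
desired = np.linspace(0.0, 2.0 * np.pi, 256)
lut = np.interp(desired, measured_phase, gray).round().astype(np.uint8)

def linearized(gray_in):
    """Apply the LUT so commanded gray levels map linearly to phase."""
    return lut[gray_in]

hologram = np.random.default_rng(2).integers(0, 256, size=(512, 512))
print(linearized(hologram).dtype, linearized(hologram).shape)
```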

Keywords: spatial light modulator, optical trapping, aberration, phase modulation

Procedia PDF Downloads 143
850 On the Performance of Improvised Generalized M-Estimator in the Presence of High Leverage Collinearity Enhancing Observations

Authors: Habshah Midi, Mohammed A. Mohammed, Sohel Rana

Abstract:

Multicollinearity occurs when two or more independent variables in a multiple linear regression model are highly correlated. Ridge regression is the method commonly used to rectify this problem. However, ridge regression cannot handle multicollinearity that is caused by high leverage collinearity-enhancing observations (HLCEOs). Since high leverage points (HLPs) are responsible for inducing multicollinearity, their effect needs to be reduced by using a generalized M (GM) estimator. The existing GM6 estimator is based on the minimum volume ellipsoid (MVE), which tends to swamp some low leverage points. Hence, an improvised GM (MGM) estimator is presented to improve the precision of the GM6 estimator. A numerical example and a simulation study are presented to show how HLPs can cause multicollinearity. The numerical results show that our MGM estimator is the most efficient method compared with some existing methods.

Keywords: identification, high leverage points, multicollinearity, GM-estimator, DRGP, DFFITS

Procedia PDF Downloads 220
849 A Generalized Family of Estimators for Estimation of Unknown Population Variance in Simple Random Sampling

Authors: Saba Riaz, Syed A. Hussain

Abstract:

This paper addresses the estimation of the unknown population variance of the variable of interest. A new generalized class of estimators of the finite population variance is suggested using auxiliary information. To improve the precision of the proposed class, the known population variance of the auxiliary variable is used. Mathematical expressions for the biases and asymptotic variances of the suggested class are derived under a large sample approximation. Theoretical and numerical comparisons are made to investigate the performance of the proposed class of estimators. The empirical study reveals that the suggested class of estimators performs better than the usual estimator, the classical ratio estimator, the classical product estimator, and the classical linear regression estimator. It has also been found that the suggested class of estimators is more efficient than some recently published estimators.

Keywords: study variable, auxiliary variable, finite population variance, bias, asymptotic variance, percent relative efficiency

Procedia PDF Downloads 192
848 Generalized Synchronization in Systems with a Complex Topology of Attractor

Authors: Olga I. Moskalenko, Vladislav A. Khanadeev, Anastasya D. Koloskova, Alexey A. Koronovskii, Anatoly A. Pivovarov

Abstract:

Generalized synchronization is one of the most intricate phenomena in nonlinear science. It can be observed in systems with both unidirectional and mutual types of coupling, including complex networks. Such a phenomenon has a number of practical applications, for example, secure information transmission through a communication channel with a high level of noise. Known methods for secure information transmission require increased privacy of data transmission, which raises the question of observing this phenomenon in systems with a complex topology of the chaotic attractor, possessing two or more positive Lyapunov exponents. The present report is devoted to the study of this phenomenon in two unidirectionally and mutually coupled dynamical systems in chaotic (one positive Lyapunov exponent) and hyperchaotic (two or more positive Lyapunov exponents) regimes, respectively. As the systems under study, we have used two mutually coupled modified Lorenz oscillators and two unidirectionally coupled time-delayed generators. We have shown that in both cases the generalized synchronization regime can be detected by means of the calculation of the Lyapunov exponents and the phase tube approach, whereas due to the complex topology of the attractor, the nearest neighbor method is misleading. Moreover, the auxiliary system approach, the standard method for detecting the synchronous regime, yields incorrect results for the mutual type of coupling. To calculate the Lyapunov exponents in time-delayed systems, we have proposed an approach based on a modification of the Gram-Schmidt orthogonalization procedure in the context of the time-delayed system. We have studied in detail the mechanisms resulting in the onset of the generalized synchronization regime, paying particular attention to the region where one positive Lyapunov exponent has already become negative whereas the second one is still positive. We have found intermittency here and studied its characteristics. To detect the laminar phase lengths, a method based on the calculation of local Lyapunov exponents has been proposed. The efficiency of the method has been verified using the example of two unidirectionally coupled Rössler systems in the band chaos regime. We have revealed the main characteristics of intermittency, i.e., the distribution of the laminar phase lengths and the dependence of the mean laminar phase length on the criticality parameter, for all systems studied in the report. This work has been supported by the Russian President's Council grant for the state support of young Russian scientists (project MK-531.2018.2).
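
The auxiliary system approach mentioned above is easy to sketch for the unidirectional case: drive two identical copies of the response system, started from different initial conditions, with the same drive signal, and test whether they converge. The Rössler parameters and coupling strength below are illustrative assumptions, and whether the tail distance vanishes depends on the chosen coupling strength.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled(t, state, eps=0.2):
    """Drive Rossler x -> response y and identical auxiliary copy z.
    Auxiliary system approach: if y and z (started from different initial
    conditions) converge, the drive-response pair is in generalized
    synchronization. Parameters and coupling are illustrative."""
    x, y, z = state[:3], state[3:6], state[6:]
    def rossler(u, drive=None):
        du = np.array([-u[1] - u[2], u[0] + 0.2 * u[1], 0.2 + u[2] * (u[0] - 5.7)])
        if drive is not None:  # diffusive coupling into the first component
            du[0] += eps * (drive[0] - u[0])
        return du
    return np.concatenate([rossler(x), rossler(y, x), rossler(z, x)])

y0 = np.array([1.0, 0.0, 0.0, 0.5, 0.5, 0.0, -0.5, 0.2, 0.1])
sol = solve_ivp(coupled, (0.0, 500.0), y0, rtol=1e-9, atol=1e-9, dense_output=True)
tail = sol.sol(np.linspace(400, 500, 1000))
dist = np.linalg.norm(tail[3:6] - tail[6:], axis=0)
print("max |response - auxiliary| on the tail:", dist.max())
# A value near zero indicates generalized synchronization of the pair.
```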

Keywords: complex topology of attractor, generalized synchronization, hyperchaos, Lyapunov exponents

Procedia PDF Downloads 241
847 On Generalized Cumulative Past Inaccuracy Measure for Marginal and Conditional Lifetimes

Authors: Amit Ghosh, Chanchal Kundu

Abstract:

Recently, the notion of the cumulative past inaccuracy (CPI) measure has been proposed in the literature as a generalization of cumulative past entropy (CPE) in both the univariate and bivariate setups. In this paper, we introduce the notion of CPI of order α and study the proposed measure for conditionally specified models of two components that failed at different time instants, called the generalized conditional CPI (GCCPI). We provide some bounds using the usual stochastic order and investigate several properties of the GCCPI. The effect of monotone transformations on this proposed measure has also been examined. Furthermore, we characterize some bivariate distributions under the assumption of the conditional proportional reversed hazard rate model. Moreover, the role of the GCCPI in reliability modeling has also been investigated for a real-life problem.

Keywords: cumulative past inaccuracy, marginal and conditional past lifetimes, conditional proportional reversed hazard rate model, usual stochastic order

Procedia PDF Downloads 222
846 Generalized Rough Sets Applied to Graphs Related to Urban Problems

Authors: Mihai Rebenciuc, Simona Mihaela Bibic

Abstract:

As a branch of modern mathematics, graphs provide instruments for optimization and for solving practical applications in various fields, such as economic networks, engineering, network optimization, the geometry of social action and, more generally, complex systems, including contemporary urban problems (path or transport efficiency, biourbanism, etc.). In this paper, the interconnection of urban networks is studied, which can lead to the problem of simulating a digraph by another digraph. The simulation can be univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the digraph being simulated, including an alternative evaluation of k-connectivity. The rough set approach to (bi)digraphs, which is proposed for the first time in this paper, contributes significantly to improving the evaluation of k-connectivity. This rough set approach is based on generalized rough sets, whose basic facts are presented in this paper.

Keywords: (bi)digraphs, rough set theory, systems of interacting agents, complex systems

Procedia PDF Downloads 210
845 Statistical Modeling of Mobile Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad S. Daba, J. P. Dubois

Abstract:

Understanding the statistics of non-isotropic scattering multipath channels that fade randomly with respect to time, frequency, and space in a mobile environment is crucial for the accurate detection of received signals in wireless and cellular communication systems. In this paper, we derive stochastic models for the probability density function (PDF) of the shift in the carrier frequency caused by the Doppler effect on the received illuminating signal in the presence of a dominant line of sight. Our derivation is based on a generalized Clarke's model and a two-wave partially developed scattering model, where the statistical distribution of the frequency shift is shown to be consistent with the power spectral density of the Doppler-shifted signal.
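
For comparison, the classical isotropic Clarke model that is generalized here leads to the familiar Jakes Doppler power spectral density,

```latex
S(f) \;=\; \frac{1}{\pi f_{d}\sqrt{1-(f/f_{d})^{2}}}, \qquad |f| < f_{d},
```

where f_d is the maximum Doppler shift; non-isotropic scattering and a dominant line-of-sight component reshape this density, with the envelope statistics following a Rician distribution.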

Keywords: Doppler shift, filtered Poisson process, generalized Clarke's model, non-isotropic scattering, partially developed scattering, Rician distribution

Procedia PDF Downloads 345