Search results for: linearly constrained minimum variance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3532

3502 Dual Solutions in Mixed Convection Boundary Layer Flow: A Stability Analysis

Authors: Anuar Ishak

Abstract:

The mixed convection stagnation point flow toward a vertical plate is investigated. The external flow impinges normal to the heated plate, and the surface temperature is assumed to vary linearly with the distance from the stagnation point. The governing partial differential equations are transformed into a set of ordinary differential equations, which are then solved numerically using the MATLAB boundary value problem solver bvp4c. Numerical results show that dual solutions are possible for a certain range of the mixed convection parameter. A stability analysis is performed to determine which solution is linearly stable and physically realizable.
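
Since the abstract does not reproduce the authors' coupled similarity equations, the sketch below solves the classical forced-convection (Hiemenz) stagnation-point equation with SciPy's solve_bvp as a stand-in for bvp4c; the domain truncation and initial guess are illustrative assumptions. Dual solution branches are typically located in exactly this way, by supplying different initial guesses to the solver.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Hiemenz stagnation-point flow: f''' + f f'' - f'^2 + 1 = 0
# with f(0) = 0, f'(0) = 0, f'(eta -> inf) = 1.
def rhs(eta, F):
    f, fp, fpp = F
    return np.vstack([fp, fpp, fp**2 - f * fpp - 1.0])

def bc(F0, Finf):
    return np.array([F0[0], F0[1], Finf[1] - 1.0])

eta = np.linspace(0.0, 10.0, 200)      # truncate "infinity" at eta = 10
F_init = np.zeros((3, eta.size))
F_init[1] = 1.0 - np.exp(-eta)         # smooth initial guess for f'
sol = solve_bvp(rhs, bc, eta, F_init)
print("f''(0) =", sol.sol(0.0)[2])     # wall shear parameter, ~1.2326
```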

Keywords: dual solutions, heat transfer, mixed convection, stability analysis

Procedia PDF Downloads 352
3501 Challenging the Constitutionality of Mandatory Sentences: A South African Perspective

Authors: Alphonso Goliath

Abstract:

With mandatory minimum sentences, even with the qualification of “substantial and compelling circumstances”, sentence severity for violent crimes has increased substantially in an effort to combat crime. Considering the upsurge in violent crime, the paper argues that minimum sentences have failed to prevent or curb violent crime. These sentences deprive offenders of their freedom more than is reasonably necessary to curb the offense and punish the offender. Minimum sentences therefore amount to cruel, inhuman, and degrading punishment that is unjustified and vulnerable to constitutional challenge.

Keywords: constitutionality, deterrence, incapacitation, minimum sentencing legislation, prison overcrowding, rehabilitation, recidivism, retribution, violent crime

Procedia PDF Downloads 60
3500 Welding Process Selection for Storage Tank by Integrated Data Envelopment Analysis and Fuzzy Credibility Constrained Programming Approach

Authors: Rahmad Wisnu Wardana, Eakachai Warinsiriruk, Sutep Joy-A-Ka

Abstract:

Selecting the most suitable welding process usually depends on experience or on common practice in similar companies. However, this approach generally ignores many criteria that can affect the selection of a suitable welding process. Therefore, knowledge automation through knowledge-based systems can significantly improve the decision-making process. This research proposes an integrated data envelopment analysis (DEA) and fuzzy credibility constrained programming approach for identifying the best welding process for a stainless steel storage tank in the food and beverage industry. The proposed approach uses fuzzy concepts and a credibility measure to deal with uncertain data from experts' judgments. Furthermore, 12 parameters are used to determine the most appropriate welding process among six competing welding processes.
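
As a rough illustration of the DEA building block only (the fuzzy credibility layer is omitted), the sketch below scores hypothetical welding-process alternatives with a crisp input-oriented CCR model solved by SciPy's linprog; the input/output matrices and their dimensions are invented for the example, not the paper's data.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """Input-oriented CCR DEA efficiencies.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.concatenate([-Y[o], np.zeros(m)])         # maximise u.y_o
        A_ub = np.hstack([Y, -X])                        # u.y_j - v.x_j <= 0
        A_eq = np.concatenate([np.zeros(s), X[o]])[None] # v.x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                      A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
        scores.append(-res.fun)
    return np.array(scores)

# six candidate welding processes scored on cost-type inputs and
# quality-type outputs (illustrative numbers only)
X = np.array([[4, 2], [6, 3], [5, 5], [7, 2], [3, 4], [8, 6]], float)
Y = np.array([[8, 5], [9, 4], [7, 7], [10, 3], [6, 6], [9, 8]], float)
print(ccr_efficiency(X, Y).round(3))   # 1.0 marks an efficient process
```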

Keywords: welding process selection, data envelopment analysis, fuzzy credibility constrained programming, storage tank

Procedia PDF Downloads 136
3499 The Study of the Mutual Effect of Genotype in Environment by Percent of Oil Criterion in Sunflower

Authors: Seyed Mohammad Nasir Mousavi, Pasha Hejazi, Maryam Ebrahimian Dehkordi

Abstract:

In order to study the genotype × environment interaction for the oil percentage index in sunflower cultivars, an experiment was conducted as a randomized complete block design with four replications at four different research stations: Esfahan, Birjand, Sari, and Karaj. Combined analysis of variance showed significant diversity among the cultivars under investigation. With respect to the coefficient of variation, the cultivars Azargol and Vidoc showed the lowest coefficients of variation. According to the results of Shukla's stability variance, the cultivars Brocar, Allison, and Fabiola were among the most stable genotypes for oil percentage. In the GGE biplot, the locations under investigation divided into two mega-environments: the first comprised Esfahan, Karaj, and Birjand, and the second comprised Sari. From this point of view, Fabiola was among the best cultivars in the first mega-environment and Almanzor in the second.

Keywords: sunflower, stability, GGE biplot, mega-environment

Procedia PDF Downloads 512
3498 Loading Methodology for a Capacity Constrained Job-Shop

Authors: Viraj Tyagi, Ajai Jain, P. K. Jain, Aarushi Jain

Abstract:

This paper presents a genetic algorithm based loading methodology for a capacity constrained job-shop with the consideration of alternative process plans for each part to be produced. Performance analysis of the proposed methodology is carried out for two case studies by considering two different manufacturing scenarios. Results obtained indicate that the methodology is quite effective in improving the shop load balance, and hence, it can be included in the frameworks of manufacturing planning systems of job-shop oriented industries.
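
To make the encoding concrete, here is a minimal, hypothetical sketch of a GA for this kind of loading decision: each gene picks one of a part's alternative process plans, and the fitness penalizes load imbalance across machines. The data, operators, and objective are illustrative assumptions, not the paper's implementation.

```python
import random

# parts: for each part, a list of alternative process plans;
# each plan is a list of (machine, processing_time) pairs.
PARTS = [
    [[(0, 4), (1, 3)], [(2, 6)]],        # part 0: two alternative plans
    [[(1, 5)], [(0, 2), (2, 4)]],        # part 1
    [[(2, 3), (0, 3)], [(1, 7)]],        # part 2
]
N_MACHINES = 3

def imbalance(chromosome):
    load = [0.0] * N_MACHINES
    for part, plan_idx in zip(PARTS, chromosome):
        for machine, t in part[plan_idx]:
            load[machine] += t
    return max(load) - min(load)         # load-balance objective

def evolve(pop_size=30, generations=100, p_mut=0.2):
    pop = [[random.randrange(len(p)) for p in PARTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=imbalance)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(PARTS))      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:                # plan-swap mutation
                i = random.randrange(len(PARTS))
                child[i] = random.randrange(len(PARTS[i]))
            children.append(child)
        pop = survivors + children
    return min(pop, key=imbalance)

best = evolve()
print("chosen plans:", best, "imbalance:", imbalance(best))
```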

Keywords: manufacturing planning, loading, genetic algorithm, job shop

Procedia PDF Downloads 270
3497 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The correct allocation of improvement programs has attracted growing interest in recent years. Due to their limited resources, companies must ensure that their financial resources are directed to the workstations where they are most effective in order to survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is the research gap studied in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair, with lead time reduction as the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed at the two capacity constrained resources, and (x) time between failures improvement directed at the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed, and hybrid. Several comparisons among the effects of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When a large investment in the capacity constrained resources is not possible, companies should use hybrid approaches. An important contribution to the academy is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strong capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as are the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different allocation strategies and the different positions of the capacity constrained resources. Finally, both hybrid time to repair improvement and hybrid time between failures improvement delivered better results than the respective distributed strategies. The main limitations of this study concern the flow shop analyzed; future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
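
The mechanism connecting MTTR/MTBF to lead time runs through availability and hence the utilization of the capacity constrained resources: queueing delay, and with it lead time, grows without bound as utilization approaches one. The sketch below, with invented rates and improvement factors rather than the paper's calibrated System Dynamics-Factory Physics model, shows how one might compare allocation strategies by their effect on the binding CCR's utilization.

```python
# Availability and effective rate at a workstation (Factory Physics):
#   A = MTBF / (MTBF + MTTR),  effective_rate = base_rate * A
# Hypothetical comparison of spending an improvement budget on MTTR
# vs. MTBF at two capacity-constrained resources (CCRs).

def availability(mtbf, mttr):
    return mtbf / (mtbf + mttr)

def utilization(demand_rate, base_rate, mtbf, mttr):
    return demand_rate / (base_rate * availability(mtbf, mttr))

# illustrative numbers: two CCRs with high utilization
ccrs = {"CCR1": dict(base_rate=10.0, mtbf=200.0, mttr=12.0),
        "CCR2": dict(base_rate=11.0, mtbf=150.0, mttr=10.0)}
demand = 8.9

strategies = {  # multiplicative improvement factors per CCR
    "focused MTTR (all on CCR1)": {"CCR1": dict(mttr=0.7)},
    "distributed MTTR":           {"CCR1": dict(mttr=0.85), "CCR2": dict(mttr=0.85)},
    "focused MTBF (all on CCR1)": {"CCR1": dict(mtbf=1.3)},
}

for name, changes in strategies.items():
    us = []
    for ccr, p in ccrs.items():
        f = changes.get(ccr, {})
        us.append(utilization(demand, p["base_rate"],
                              p["mtbf"] * f.get("mtbf", 1.0),
                              p["mttr"] * f.get("mttr", 1.0)))
    # the binding CCR's utilization is the figure of merit for lead time
    print(f"{name}: max CCR utilization = {max(us):.3f}")
```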

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 91
3496 On the Basis Number and the Minimum Cycle Bases of the Wreath Product of Paths with Wheels

Authors: M. M. M. Jaradat

Abstract:

For a given graph G, the set Ԑ of all subsets of E(G) forms an |E(G)|-dimensional vector space over Z₂ with vector addition X ⊕ Y = (X\Y) ∪ (Y\X) and scalar multiplication 1·X = X and 0·X = Ø for all X, Y ∈ Ԑ. The cycle space, C(G), of a graph G is the vector subspace of (Ԑ, ⊕, ·) spanned by the cycles of G. Traditionally there have been two notions of minimality among bases of C(G). First, a basis B of C(G) is called d-fold if each edge of G occurs in at most d cycles of the basis B. The basis number, b(G), of G is the least non-negative integer d such that C(G) has a d-fold basis; a required basis of C(G) is a basis for which each edge of G belongs to at most b(G) elements of B. Second, a basis B is called a minimum cycle basis (MCB) if its total length Σ_{B∈B} |B| is minimum among all bases of C(G). The wreath product GρH has the vertex set V(GρH) = V(G) × V(H) and the edge set E(GρH) = {(u₁, v₁)(u₂, v₂) | u₁ = u₂ and v₁v₂ ∈ E(H); or u₁u₂ ∈ E(G) and there is α ∈ Aut(H) such that α(v₁) = v₂}. In this work, a construction of a minimum cycle basis for the wreath product of wheels with paths is presented. Also, the length of the longest cycle of a minimum cycle basis is determined. Moreover, the basis number of the same product is investigated.
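
For orientation only (a standard fact, not a result of the paper), the dimension of the cycle space of a connected graph is its cyclomatic number:

```latex
% For a connected graph G,
\dim \mathcal{C}(G) \;=\; |E(G)| - |V(G)| + 1 .
% Example: the triangle K_3 gives 3 - 3 + 1 = 1; its cycle space is
% spanned by the single 3-cycle, each edge lies in exactly one basis
% cycle, and hence b(K_3) = 1.
```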

Keywords: cycle space, minimum cycle basis, basis number, wreath product

Procedia PDF Downloads 232
3495 The Light-Effect in Cylindrical Quantum Wire with an Infinite Potential for the Case of Electrons: Optical Phonon Scattering

Authors: Hoang Van Ngoc, Nguyen Vu Nhan, Nguyen Quang Bau

Abstract:

The light-effect in a cylindrical quantum wire with an infinite potential is studied for the case of electron-optical phonon scattering, based on the quantum kinetic equation. The density of the direct current in a cylindrical quantum wire induced by a linearly polarized electromagnetic wave, a DC electric field, and an intense laser field is calculated. Analytic expressions for the density of the direct current are studied as functions of the frequency of the laser radiation field, the frequency of the linearly polarized electromagnetic wave, the temperature of the system, and the size of the quantum wire. The density of the direct current depends nonlinearly on the frequency of the linearly polarized electromagnetic wave. The analytic expressions are numerically evaluated and plotted for a specific quantum wire, GaAs/GaAsAl.

Keywords: light-effect, cylindrical quantum wire with an infinite potential, density of the direct current, electron-optical phonon scattering

Procedia PDF Downloads 307
3494 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and information on the auxiliary variable only is collected. During the second phase, a smaller sample is selected, either from the first-phase sample or from the entire population, using a suitable sampling design, and information on both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easy and cheap to collect compared with the study variable, and if the relationship between the study and auxiliary variables is strong. If the sample is selected in more than two phases, the resulting design is called multi-phase sampling. In this article we consider how one can use data collected in the first phase for estimation of the parameter, stratification, selection of the second-phase sample, and their combinations, in a unified setup applicable to any sampling design and to wide classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for assessing the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative; a negative variance estimate cannot be used for constructing confidence intervals, testing hypotheses, or measuring sampling error. The non-negativity properties of the variance estimators are therefore also studied in detail.
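
To make the design concrete, here is a small Monte Carlo sketch (entirely synthetic data and sample sizes, assumed for illustration) of two-phase sampling with a regression estimator; its empirical variance shows the payoff from the cheap first-phase auxiliary information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: auxiliary x is cheap to measure,
# study variable y is expensive and correlated with x.
N = 100_000
x = rng.gamma(4.0, 2.0, N)
y = 3.0 + 1.5 * x + rng.normal(0.0, 2.0, N)

n1, n2, reps = 2000, 200, 5000   # first/second phase sizes
estimates = []
for _ in range(reps):
    s1 = rng.choice(N, n1, replace=False)    # phase 1: observe x only
    s2 = rng.choice(s1, n2, replace=False)   # phase 2: subsample, observe y
    b = np.cov(x[s2], y[s2])[0, 1] / np.var(x[s2], ddof=1)
    # regression estimator: adjust the phase-2 mean of y using the
    # cheaper, more precise phase-1 mean of x
    estimates.append(y[s2].mean() + b * (x[s1].mean() - x[s2].mean()))

print("true mean      :", y.mean().round(3))
print("MC mean of est.:", np.mean(estimates).round(3))
print("MC variance    :", np.var(estimates).round(5))
```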

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 562
3493 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means the digital elevation model (DEM) has to be resampled to the scale of the landform features of interest; any higher resolution is lost in this resampling. When the topographic features are instead computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point; the number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of the regression parameters and the variance: any doubling of window size in each direction takes only a single pass over the data, corresponding to logarithmic scaling of the resulting algorithm as a function of window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected so as to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope; the relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated on 1-meter DEM data and on artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost compared with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally; the result is of higher resolution and less affected by noise than with existing techniques.
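
A minimal sketch of the additive-statistics idea follows, with two simplifications relative to the paper: windows are aggregated non-overlapping (the paper keeps one window center per raster point), and grid spacing is taken as 1. Each 2x2 aggregation simply sums the sufficient statistics of a least-squares plane fit, and the slope at the minimum-variance level is kept per cell; names and test data are illustrative.

```python
import numpy as np

def multiscale_slope(dem, levels=4):
    n = 2 ** levels
    dem = dem[: dem.shape[0] // n * n, : dem.shape[1] // n * n]
    rows, cols = np.indices(dem.shape, dtype=float)
    # sufficient statistics of a plane fit; all are sums, hence
    # additive under window aggregation
    stats = {"n": np.ones_like(dem), "z": dem, "zz": dem * dem,
             "x": cols, "xx": cols * cols, "xz": cols * dem,
             "y": rows, "yy": rows * rows, "yz": rows * dem}
    best_slope = best_var = None
    for level in range(1, levels + 1):
        for k in stats:  # sum each statistic over 2x2 blocks
            s = stats[k]
            stats[k] = s.reshape(s.shape[0] // 2, 2,
                                 s.shape[1] // 2, 2).sum(axis=(1, 3))
        m = stats["n"]
        sxx = stats["xx"] - stats["x"] ** 2 / m
        syy = stats["yy"] - stats["y"] ** 2 / m
        sxz = stats["xz"] - stats["x"] * stats["z"] / m
        syz = stats["yz"] - stats["y"] * stats["z"] / m
        szz = stats["zz"] - stats["z"] ** 2 / m
        bx, by = sxz / sxx, syz / syy             # plane-fit gradients
        resid_var = (szz - bx * sxz - by * syz) / m
        scale = 2 ** level                        # broadcast back to full grid
        slope = np.kron(np.hypot(bx, by), np.ones((scale, scale)))
        var = np.kron(resid_var, np.ones((scale, scale)))
        if best_var is None:
            best_slope, best_var = slope, var
        else:                                     # keep slope at minimal variance
            take = var < best_var
            best_slope = np.where(take, slope, best_slope)
            best_var = np.where(take, var, best_var)
    return best_slope, best_var

rng = np.random.default_rng(1)
dem = 0.1 * np.arange(64)[None, :] + rng.normal(0.0, 0.05, (64, 64))
slope, var = multiscale_slope(dem)
print(slope.mean())   # recovers the true gradient of ~0.1 despite the noise
```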

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 98
3492 South African Mandatory Minimum Sentencing: Causes and Consequences

Authors: Alphonso Augustine Goliath

Abstract:

In 1997, South Africa adopted legislation introducing severe mandatory minimum sentences as a political response to the escalating violent crime the country experienced during its transition to democracy. Despite minimum sentences being fully operational for more than two decades, violent crimes like murder and rape have not abated. This paper provides a critique of the efficacy of minimum sentences, with a primary focus on the legislation's main aim of preventing or curbing crime, its relationship with prison overcrowding, and its continued constitutionality.

Keywords: constitutionality, deterrence, incapacitation, minimum sentencing legislation, prison overcrowding, rehabilitation, recidivism, retribution, violent crime

Procedia PDF Downloads 58
3491 The Evaluation of the Performance of Different Filtering Approaches in Tracking Problem and the Effect of Noise Variance

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

The performance of different filtering approaches depends on the modeling of the dynamical system and on the algorithm structure. For modeling and smoothing the data, the evaluation of the posterior distribution in each filtering approach should be chosen carefully. In this paper, different filtering approaches, the Kalman filter, EKF, UKF, and EKS, and the RTS smoother, are simulated on several trajectory-tracking problems, and the accuracy and limitations of these approaches are explained. The model probability under the different filters is then compared, and finally the effect of the noise variance on the estimation is described with simulation results.
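
As a point of reference for the simplest member of that family, here is a minimal linear Kalman filter on a 1-D constant-velocity tracking problem; the dynamics, noise levels, and run lengths are invented for illustration, and the loop over assumed measurement variances mirrors the paper's interest in how the noise variance drives the estimate.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=0.01, r=1.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    H = np.array([[1.0, 0.0]])                # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], # process noise covariance
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                       # measurement noise variance
    x, P = np.zeros(2), np.eye(2) * 10.0
    out = []
    for z in zs:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + (K @ (z - H @ x)).ravel()     # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
t = np.arange(100.0)
truth = 0.5 * t
zs = truth + rng.normal(0.0, 1.0, t.size)     # noisy position measurements
for r in (0.1, 1.0, 10.0):                    # mis-/well-specified R
    est = kalman_track(zs, r=r)
    print(f"assumed R={r:>4}: RMSE = {np.sqrt(np.mean((est - truth)**2)):.3f}")
```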

Keywords: Gaussian approximation, Kalman smoother, parameter estimation, noise variance

Procedia PDF Downloads 403
3490 A Mean–Variance–Skewness Portfolio Optimization Model

Authors: Kostas Metaxiotis

Abstract:

Portfolio optimization is one of the most important topics in finance. This paper proposes a mean–variance–skewness (MVS) portfolio optimization model. Traditionally, the portfolio optimization problem is solved using the mean–variance (MV) framework. In this study, we formulate the proposed model as a three-objective optimization problem, in which the portfolio's expected return and skewness are maximized while the portfolio risk is minimized. To solve the proposed three-objective portfolio optimization model, we apply an adapted version of the non-dominated sorting genetic algorithm II (NSGA-II). Finally, we use a real dataset from the FTSE-100 to validate the proposed model.
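
In the usual formulation (stated here for orientation; the paper's exact constraint set is not given in the abstract), the three objectives over portfolio weights w, with asset returns R, mean vector E[R], and covariance Σ, read:

```latex
\begin{align*}
\max_{w}\;& \mu_p(w) = w^{\top}\mathbb{E}[R] \\
\min_{w}\;& \sigma_p^2(w) = w^{\top}\Sigma\, w \\
\max_{w}\;& s_p(w) = \mathbb{E}\!\left[\big(w^{\top}R - \mu_p(w)\big)^{3}\right] \\
\text{s.t.}\;& \sum_{i=1}^{n} w_i = 1, \qquad w_i \ge 0 .
\end{align*}
```

NSGA-II then searches for the Pareto front of this tri-objective problem rather than for a single scalarized optimum.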

Keywords: evolutionary algorithms, portfolio optimization, skewness, stock selection

Procedia PDF Downloads 152
3489 An Approach to Noise Variance Estimation in Very Low Signal-to-Noise Ratio Stochastic Signals

Authors: Miljan B. Petrović, Dušan B. Petrović, Goran S. Nikolić

Abstract:

This paper describes a method for estimating the variance of AWGN (additive white Gaussian noise) in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the original signal structure. MATLAB simulations and analysis of the method applied to speech signals showed higher accuracy than the standard AR (autoregressive) modeling noise estimation technique. In addition, good performance was observed at very low signal-to-noise ratios, which in general represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of all the observed features of the proposed algorithm, it was concluded that the method is worth exploring and that, with some further adjustments and improvements, it can be enviably powerful.

Keywords: noise, signal-to-noise ratio, stochastic signals, variance estimation

Procedia PDF Downloads 357
3488 Solutions to Probabilistic Constrained Optimal Control Problems Using Concentration Inequalities

Authors: Tomoaki Hashimoto

Abstract:

Recently, optimal control problems subject to probabilistic constraints have attracted much attention in many research fields. Although probabilistic constraints are generally intractable in optimization problems, several methods have been proposed to deal with them. In most methods, probabilistic constraints are transformed into deterministic constraints that are tractable in optimization problems. This paper examines a method for transforming probabilistic constraints into deterministic constraints for a class of probabilistic constrained optimal control problems.
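
One standard concentration-inequality transformation of this kind (given here as an illustration; the paper's precise construction is not reproduced in the abstract) uses Cantelli's inequality for a constraint g(x) + w ≤ 0 with zero-mean noise w of variance σ²:

```latex
% Cantelli: P(w \ge t) \le \sigma^2 / (\sigma^2 + t^2) for t > 0, hence
P\big(g(x) + w \le 0\big) \ge 1 - \varepsilon
\quad \Longleftarrow \quad
g(x) + \sigma \sqrt{\frac{1 - \varepsilon}{\varepsilon}} \;\le\; 0 .
```

The probabilistic constraint is thus replaced by a deterministic, distribution-free sufficient condition that tightens as the admissible violation probability ε shrinks.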

Keywords: optimal control, stochastic systems, discrete-time systems, probabilistic constraints

Procedia PDF Downloads 250
3487 Minimum Data of a Speech Signal as Special Indicators of Identification in Phonoscopy

Authors: Nazaket Gazieva

Abstract:

Voice biometric data, associated with physiological, psychological, and other factors, are widely used in forensic phonoscopy. There are various methods for identifying and verifying a person by voice. This article explores minimum speech signal data as individual parameters of a speech signal. Monozygotic twins are believed to be genetically identical. Using the minimum data of the speech signal, we came to the conclusion that the voiceprint of monozygotic twins is individual. Based on the results of the experiment, we conclude that the minimum indicators of the speech signal are more stable and reliable for phonoscopic examinations.

Keywords: phonogram, speech signal, temporal characteristics, fundamental frequency, biometric fingerprints

Procedia PDF Downloads 111
3486 Correlation of the Biometric Parameters of Eggs

Authors: S. Zenia, A. Menasseria, A. E. Kheidous, F. Lariouna, A. Smai, H. Saadi, F. Haddadj, A. Milla, F. Marniche

Abstract:

The objective of this study was to estimate the correlations between different external egg quality traits in pheasants. A total of 938 eggs were collected. Egg weight (g), egg length (mm), egg width (mm), volume (cm³), egg shape index, surface area, and water loss were measured. The overall mean values obtained for these variables are, respectively, 29.2 ± 2.24 g, 43.01 ± 1.84 mm, 34.05 ± 1.44 mm, 25.63 ± 2.88 cm³, 79.00 ± 3%, 68%, and 13%. Among the regressions studied, only the most important ones, those showing significant links between the parameters studied, were considered. The ANOVA procedure was applied to estimate correlations for the examined traits. Egg weights observed before incubation and before hatching are linearly correlated, with a positive correlation coefficient of 0.75. Egg length and weight before incubation had a good positive correlation, with a coefficient of r = 0.6. However, density had a high negative correlation with egg height (r = -0.78). The shape index had a good negative linear correlation with water loss (r = -0.71).

Keywords: correlation, egg, morphometry of eggs, analysis of variance

Procedia PDF Downloads 420
3485 Portfolio Optimization under a Hybrid Stochastic Volatility and Constant Elasticity of Variance Model

Authors: Jai Heui Kim, Sotheara Veng

Abstract:

This paper studies the portfolio optimization problem for a pension fund under a hybrid model of stochastic volatility and constant elasticity of variance (CEV), using the asymptotic analysis method. When the volatility component is fast mean-reverting, asymptotic approximations for the value function and the optimal strategy can be derived for general utility functions. Explicit solutions are given for the exponential and hyperbolic absolute risk aversion (HARA) utility functions. The study also shows that using the leading-order optimal strategy reproduces the value function not only to leading order but also up to the first-order correction term. A practical strategy that does not depend on the unobservable volatility level is suggested. The result extends Merton's solution to the case where stochastic volatility and elasticity of variance are considered simultaneously.

Keywords: asymptotic analysis, constant elasticity of variance, portfolio optimization, stochastic optimal control, stochastic volatility

Procedia PDF Downloads 265
3484 DNA and DNA-Complexes Modified with Electromagnetic Radiation

Authors: Ewelina Nowak, Anna Wisla-Swider, Krzysztof Danel

Abstract:

Aqueous suspensions of DNA were illuminated with linearly polarized visible light and ultraviolet for 5, 15, 20, and 40 h. To check the nature of the modification, DNA interactions were characterized by FTIR spectroscopy. For each illuminated sample, the weight-average molecular weight and hydrodynamic radius were measured by high-pressure size exclusion chromatography. The resulting optical changes in the illuminated DNA were investigated using UV-Vis and photoluminescence spectra. The optical properties show potential application in sensors based on modified DNA. Selected DNA-surfactant complexes were then illuminated with electromagnetic radiation for 5 h, and the molecular structure and optical characteristics of the obtained complexes were examined. Illumination led to changes in the complexes' physicochemical properties compared with native DNA, induced by rearrangement of the molecular structure of the DNA chains.

Keywords: biopolymers, deoxyribonucleic acid, ionic liquids, linearly polarized visible light, ultraviolet

Procedia PDF Downloads 189
3483 The Effect of "Trait" Variance of Personality on Depression: Application of the Trait-State-Occasion Modeling

Authors: Pei-Chen Wu

Abstract:

Both preexisting cross-sectional and longitudinal studies of the personality-depression relationship have suffered from one main limitation: they ignored that the stability of the constructs of interest (e.g., personality and depression) can be expected to influence the estimate of the association between personality and depression. To address this limitation, Trait-State-Occasion (TSO) modeling was adopted to analyze the sources of variance of the focal constructs. TSO modeling operates by partitioning state variance into time-invariant (trait) and time-variant (occasion) components. Within a TSO framework, it is possible to predict change in the part of a construct that really changes (i.e., the time-variant variance) while controlling for the trait variance. 750 high school students were followed for four waves over six-month intervals. The baseline data (T1) were collected in senior high schools (ages 14 to 15 years). Participants were given the Beck Depression Inventory and the Big Five Inventory at each assessment. TSO modeling revealed that 70-78% of the variance in personality (five constructs) was stable over the follow-up period, whereas 57-61% of the variance in depression was stable. For the personality constructs, 7.6% to 8.4% of the total variance came from the autoregressive occasion factors; for the depression construct, 15.2% to 18.1% of the total variance came from the autoregressive occasion factors. Additionally, results showed that when controlling for initial symptom severity, the time-invariant components of all five dimensions of personality were predictive of change in depression (Extraversion: B = .32, Openness: B = -.21, Agreeableness: B = -.27, Conscientiousness: B = -.36, Neuroticism: B = .39). Because the five dimensions of personality share some variance, models in which all five dimensions simultaneously predicted change in depression were also investigated. The time-invariant components of the five dimensions remained significant predictors of change in depression (Extraversion: B = .30, Openness: B = -.24, Agreeableness: B = -.28, Conscientiousness: B = -.35, Neuroticism: B = .42). In sum, the majority of the variability in personality was stable over two years. Individuals with a greater tendency toward Extraversion and Neuroticism have higher degrees of depression; individuals with a greater tendency toward Openness, Agreeableness, and Conscientiousness have lower degrees of depression.

Keywords: assessment, depression, personality, trait-state-occasion model

Procedia PDF Downloads 150
3482 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction

Authors: Bastien Batardière, Joon Kwon

Abstract:

For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance and have contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
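
Based only on the combination the abstract names (this is not the authors' reference code), a minimal sketch of an L-SVRG estimator feeding coordinate-wise AdaGrad steps, on a least-squares finite sum, could look like:

```python
import numpy as np

# f(w) = (1/n) sum_i (a_i . w - b_i)^2 / 2, a synthetic finite sum
rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
b = A @ w_star + 0.01 * rng.normal(size=n)

grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]   # single-component gradient
full_grad = lambda w: A.T @ (A @ w - b) / n

w = np.zeros(d)
w_ref = w.copy()                # L-SVRG reference point
mu = full_grad(w_ref)           # full gradient at the reference point
G = np.zeros(d)                 # AdaGrad accumulator
eta, p, eps = 0.5, 1.0 / n, 1e-8

for t in range(20_000):
    i = rng.integers(n)
    # loopless SVRG estimator: unbiased, variance shrinks near the optimum
    g = grad_i(w, i) - grad_i(w_ref, i) + mu
    G += g * g                                   # coordinate-wise AdaGrad
    w -= eta * g / np.sqrt(G + eps)
    if rng.random() < p:                         # loopless: no outer loop;
        w_ref = w.copy()                         # refresh reference at random
        mu = full_grad(w_ref)

print("distance to solution:", np.linalg.norm(w - w_star))
```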

Keywords: convex optimization, variance reduction, adaptive algorithms, loopless

Procedia PDF Downloads 24
3481 Brain Tumor Segmentation Based on Minimum Spanning Tree

Authors: Simeon Mayala, Ida Herdlevær, Jonas Bull Haugsøen, Shamundeeswari Anandan, Sonia Gavasso, Morten Brun

Abstract:

In this paper, we propose a minimum spanning tree-based method for segmenting brain tumors. The proposed method performs interactive segmentation based on the minimum spanning tree without tuning parameters. The steps involve preprocessing, building a graph, constructing a minimum spanning tree, and a newly implemented way of interactively segmenting the region of interest. In the preprocessing step, a Gaussian filter is applied to the 2D images to remove noise. Then, the pixel-neighbor graph is weighted by intensity differences, and the corresponding minimum spanning tree is constructed. The image is loaded in an interactive window for segmenting the tumor. The region of interest and the background are selected by clicking to split the minimum spanning tree into two trees: one representing the region of interest and the other representing the background. Finally, the segmentation given by the two trees is visualized. The proposed method was tested by segmenting two different 2D brain T1-weighted magnetic resonance image data sets. The comparison between our results and the gold standard segmentation confirmed the validity of the minimum spanning tree approach. The proposed method is simple to implement, and the results indicate that it is accurate and efficient.
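
A compact sketch of this pipeline follows, with assumed details: a 4-neighborhood graph, the split made by cutting the heaviest MST edge on the path between the two clicked seeds, and the interactive window replaced by seed coordinates.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import (minimum_spanning_tree, dijkstra,
                                  connected_components)

def mst_segment(image, seed_fg, seed_bg, sigma=1.0):
    """Two-label segmentation by cutting one edge of the pixel MST.
    seed_fg / seed_bg: (row, col) clicks inside object and background."""
    img = gaussian_filter(image.astype(float), sigma)   # denoise first
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    # 4-neighbour graph weighted by absolute intensity difference
    # (tiny epsilon keeps zero-difference edges in the sparse matrix)
    rows = np.concatenate([idx[:-1, :].ravel(), idx[:, :-1].ravel()])
    cols = np.concatenate([idx[1:, :].ravel(), idx[:, 1:].ravel()])
    wts = np.concatenate([np.abs(img[:-1, :] - img[1:, :]).ravel(),
                          np.abs(img[:, :-1] - img[:, 1:]).ravel()]) + 1e-9
    graph = coo_matrix((wts, (rows, cols)), shape=(h * w, h * w))
    sym = (minimum_spanning_tree(graph) +
           minimum_spanning_tree(graph).T).tocsr()

    # unique MST path between the two seeds, via predecessors
    s, t = idx[seed_fg], idx[seed_bg]
    _, pred = dijkstra(sym, indices=s, return_predecessors=True)
    path = [t]
    while path[-1] != s:
        path.append(pred[path[-1]])
    # cut the heaviest edge on that path -> two trees (object/background)
    u, v = max(zip(path[:-1], path[1:]), key=lambda e: sym[e[0], e[1]])
    lil = sym.tolil()
    lil[u, v] = lil[v, u] = 0.0
    cut = lil.tocsr()
    cut.eliminate_zeros()
    _, labels = connected_components(cut, directed=False)
    labels = labels.reshape(h, w)
    return labels == labels[seed_fg]     # boolean mask of the object tree
```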

Keywords: brain tumor, brain tumor segmentation, minimum spanning tree, segmentation, image processing

Procedia PDF Downloads 96
3480 Statistic Regression and Open Data Approach for Identifying Economic Indicators That Influence e-Commerce

Authors: Apollinaire Barme, Simon Tamayo, Arthur Gaudron

Abstract:

This paper presents a statistical approach to identifying explanatory variables linearly related to e-commerce sales. The proposed methodology specifies a regression model in order to quantify the relationship between openly available data (economic and demographic) and national e-commerce sales. It consists of collecting data, preselecting input variables, performing regressions to choose variables and models, and testing and validating. The usefulness of the proposed approach is twofold: on the one hand, it identifies the variables that influence e-commerce sales with an accessible approach; on the other hand, it can be used to model future sales from the input variables. Results show that e-commerce is linearly dependent on 11 economic and demographic indicators.
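
A minimal sketch of such a variable-selection regression, with invented indicator columns standing in for the open economic and demographic series, could use ordinary least squares with backward elimination on p-values:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical indicator data; in the paper's setting these columns would
# be openly available economic/demographic series.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(120, 6)),
                 columns=[f"indicator_{k}" for k in range(6)])
sales = 2.0 * X["indicator_0"] - 1.5 * X["indicator_3"] \
        + rng.normal(0, 0.5, 120)

# Backward elimination: repeatedly drop the least significant indicator
# until every remaining one is linearly related to sales at 5%.
cols = list(X.columns)
while True:
    model = sm.OLS(sales, sm.add_constant(X[cols])).fit()
    pvals = model.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] < 0.05:
        break
    cols.remove(worst)

print("retained indicators:", cols)
print(model.summary().tables[1])
```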

Keywords: e-commerce, statistical modeling, regression, empirical research

Procedia PDF Downloads 194
3479 Asymmetric Linkages Between Global Sustainable Index (Green Bond) and Cryptocurrency Markets with Portfolio Implications

Authors: Faheem Ur Rehman, Muhammad Khalil Khan, Miao Qing

Abstract:

This study investigated the asymmetric links and portfolio strategies between green bonds and three different cryptocurrency markets, i.e., green, Islamic, and conventional, using data from January 1, 2018, to April 8, 2022, and employing an asymmetric TVP-VAR model to quantify risk spillovers in the network analysis. In addition, we use the minimum variance, minimum correlation, and minimum connectedness methodologies to assess the portfolio implications. The results of the asymmetric dynamic total connectedness index (TCI) model show that risk spillovers are reduced when cryptocurrencies are adopted for digital finance. The findings on net directional connectedness demonstrate that, during the study period, green bonds consistently receive return spillovers from all other variables in the network. Positive return spillovers are larger in magnitude than negative ones. These results imply that the influence of the green bond market on the cryptocurrency markets is decreasing. Positive return spillovers generate higher connectedness values for the HG, BNB, and TRX coins, which are persistent net recipients in the network. On the other hand, Cardano (ADA) is a persistent net transmitter in the system. The roles of XLM and MIOTA shift over time, and there is evidence of asymmetry when both positive and negative returns are considered. According to the pairwise portfolio weights, BNB vs. BTC has the largest portfolio weights in the system, followed by BNB vs. Ethereum, suggesting the best investment strategies in the network.

Keywords: asymmetric TVP-VAR, global sustainable index, cryptocurrency, portfolios

Procedia PDF Downloads 43
3478 Methane versus Carbon Dioxide Mitigation Prospects

Authors: Alexander J. Severinsky, Allen L. Sessoms

Abstract:

Atmospheric carbon dioxide (CO₂) has dominated the discussion about the causes of climate change. This is a reflection of the 100-year time horizon that the IPCC has adopted as the norm for planning. Recently, it has become clear that a 100-year time horizon is much too long, and yet almost all mitigation efforts, including those in the near-term horizon of 30 years, are geared toward it. In this paper, we show that, for a 30-year time horizon, methane (CH₄) is the greenhouse gas whose radiative forcing exceeds that of CO₂. In our analysis, we used the radiative forcing of greenhouse gases in the atmosphere, since it directly affects the temperature rise on Earth. In 2019, the radiative forcing of methane was ~2.5 W/m² and that of carbon dioxide ~2.1 W/m². Under a business-as-usual (BAU) scenario until 2050, these forcings would be ~2.8 W/m² and ~3.1 W/m², respectively. There is a substantial spread in the data on anthropogenic and natural methane emissions, as well as on CH₄ leakages from production to consumption. We estimated the minimum and maximum effects of reducing these leakages. Such action may reduce the annual radiative forcing of all CH₄ emissions by between ~15% and ~30%. This translates into a reduction of the RF by 2050 from ~2.8 W/m² to ~2.5 W/m² in the case of the minimum effect, and to ~2.15 W/m² in the case of the maximum. Under BAU, we found that the RF of CO₂ would increase from ~2.1 W/m² today to ~3.1 W/m² by 2050. Assuming a 50% reduction of anthropogenic emissions phased in linearly over the next 30 years, radiative forcing would fall from ~3.1 W/m² to ~2.9 W/m². In the case of 'net zero', the remaining 50% of anthropogenic emissions would have to be removed, either at the sources of emission or directly from the atmosphere; the total reduction would then be from ~3.1 to ~2.7 W/m², or ~0.4 W/m². To achieve the same radiative forcing as in the scenario of maximum reduction of methane leakages, ~2.15 W/m², an additional reduction in the radiative forcing of CO₂ of approximately 2.7 − 2.15 = 0.55 W/m² would be needed, a much larger value than is expected from 'net zero'. In total, ~660 GT of CO₂ would need to be removed from the atmosphere to match the maximum reduction of current methane leakages, and ~270 GT to achieve 'net zero'. This amounts to over 900 GT in total.

Keywords: methane leakages, methane radiative forcing, methane mitigation, methane net zero

Procedia PDF Downloads 117
3477 Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance

Authors: Nada Jasim Habeeb, Rana Saad Mohammed, Muntaha Khudair Abbass

Abstract:

For more efficient and faster video summarization, this paper presents a surveillance video summarization method that improves on existing summarization techniques. The method relies on temporal differencing to extract the most important data from a large video stream, using histogram differencing and the sum conditional variance, which is robust to illumination variations, to extract moving objects. The experimental results showed that the presented method gives better output than temporal-differencing-based summarization techniques.
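
The histogram-differencing stage can be sketched as follows (the sum conditional variance step is omitted, and the video path and threshold value are placeholders, not the paper's settings):

```python
import cv2

def keyframes_by_histogram(path, thresh=0.25):
    """Select candidate summary frames where the grayscale histogram
    changes sharply relative to the previously kept frame."""
    cap = cv2.VideoCapture(path)
    kept, prev_hist, frame_no = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is None or \
           cv2.compareHist(prev_hist, hist,
                           cv2.HISTCMP_BHATTACHARYYA) > thresh:
            kept.append(frame_no)      # large histogram change -> keep frame
            prev_hist = hist
        frame_no += 1
    cap.release()
    return kept

# e.g. print(keyframes_by_histogram("surveillance.avi"))
```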

Keywords: temporal differencing, video summarization, histogram differencing, sum conditional variance

Procedia PDF Downloads 320
3476 A Multi-Criteria Model for Scheduling of Stochastic Single Machine Problem with Outsourcing and Solving It through Application of Chance Constrained

Authors: Homa Ghave, Parmis Shahmaleki

Abstract:

This paper presents a new multi-criteria stochastic mathematical model for single machine scheduling with outsourcing allowed. Multiple jobs are processed in batches, and for each batch, all or part of the jobs can be outsourced. Jobs have stochastic processing times and lead times, have deterministic due dates, and arrive randomly. Because of the stochastic nature of the processing and lead times, we use chance constrained programming to model the problem. The problem is first formulated as a stochastic program and then recast as a deterministic mixed integer linear program. The objectives considered in the model are to minimize the maximum tardiness and the outsourcing cost simultaneously. Several procedures have been developed to deal with the multi-criteria problem; in this paper, we utilize the concept of satisfaction functions to incorporate the manager's preferences. The proposed approach is tested on instances where the random variables are normally distributed.

Keywords: single machine scheduling, multi-criteria mathematical model, outsourcing strategy, uncertain lead times and processing times, chance constrained programming, satisfaction function

Procedia PDF Downloads 237
3475 Resource-Constrained Assembly Line Balancing Problems with Multi-Manned Workstations

Authors: Yin-Yann Chen, Jia-Ying Li

Abstract:

Assembly line balancing problems can be categorized as one-sided, two-sided, or multi-manned according to the number of operators deployed at each workstation. This study explores the balancing problem of a resource-constrained assembly line with multi-manned workstations. Resources include machines or tools used on assembly lines, such as jigs, fixtures, and hand tools. A mathematical programming model was developed to support decision-making and planning by minimizing the numbers of workstations, resources, and operators for optimal production efficiency. To improve solution-finding efficiency, a genetic algorithm (GA) and a simulated annealing algorithm (SA) were designed and applied to a practical case in car manufacturing. The results of the GA/SA and of the mathematical programming model were compared to verify their validity. Finally, the target values, production efficiency, and deployment combinations provided by the algorithms were analyzed and compared, so that the results of this study can serve as a reference for production deployment decisions.

Keywords: heuristic algorithms, line balancing, multi-manned workstation, resource-constrained

Procedia PDF Downloads 173
3474 GIS-Based Topographical Network for Minimum “Exertion” Routing

Authors: Katherine Carl Payne, Moshe Dror

Abstract:

The problem of minimum cost routing has been extensively explored in a variety of contexts. While routing applications based on least distance, time, and related attributes are prevalent, exertion-based routing has remained relatively unexplored. In particular, the network structures traditionally used to construct minimum cost paths are not suited to representing exertion or to finding paths of least exertion based on road gradient. In this paper, we introduce a topographical network, or "topograph", that enables minimum cost routing based on the exertion metric on each arc of a given road network, as related to changes in road gradient. We describe an algorithm for topograph construction and present an implementation of the topograph on a road network of the state of California with ~22 million nodes.
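
On an ordinary weighted-graph view of the network, least-exertion routing is Dijkstra's algorithm with an exertion arc cost; the sketch below uses an invented uphill-penalty cost model, not the paper's calibrated RPE metric, to show why gradient-aware arc costs change which route wins:

```python
import heapq
import math

def exertion(length_m, rise_m):
    grade = rise_m / length_m
    return length_m * (1.0 + 8.0 * max(grade, 0.0))   # uphill penalized

def least_exertion_path(arcs, source, target):
    """arcs: dict node -> list of (neighbor, length_m, rise_m)."""
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, math.inf):
            continue                          # stale queue entry
        for v, length_m, rise_m in arcs.get(u, []):
            nd = d + exertion(length_m, rise_m)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1], dist[target]

# Tiny example: the flat detour (A-C-B, 1800 m of flat walking) beats
# the steep direct climb (A-B, 1000 m at a 12% grade).
arcs = {"A": [("B", 1000.0, 120.0), ("C", 900.0, 0.0)],
        "C": [("B", 900.0, 0.0)]}
print(least_exertion_path(arcs, "A", "B"))
```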

Keywords: topograph, RPE, routing, GIS

Procedia PDF Downloads 517
3473 Taylor’s Law and Relationship between Life Expectancy at Birth and Variance in Age at Death in Period Life Table

Authors: David A. Swanson, Lucky M. Tedrow

Abstract:

Taylor’s Law is a widely observed empirical pattern that relates variances to means in sets of non-negative measurements via an approximate power function and has found application to human mortality. This study adds to this research by showing that Taylor’s Law leads to a model that reasonably describes the relationship between life expectancy at birth (e0, which is also equal to the mean age at death in a life table) and the variance in age at death in seven World Bank regional life tables measured at two points in time, 1970 and 2000. Using as a benchmark a non-random sample of four Japanese female life tables covering the period from 1950 to 2004, the study finds that the simple linear model provides reasonably accurate estimates of the variance in age at death in a life table from e0, where the latter ranges from 60.9 to 85.59 years. Employing 2017 life tables from the Human Mortality Database, the simple linear model is used to estimate the variance in age at death for six countries, three with high e0 values and three with lower e0 values. The paper provides a substantive interpretation of Taylor’s Law relative to e0 and concludes by arguing that reasonably accurate estimates of the variance in age at death in a period life table can be calculated using this approach, which can also be used where e0 itself is estimated rather than generated through the construction of a life table, a useful feature of the model.
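
For orientation, Taylor’s Law and the linear form it induces under a log transform are (standard statement; the paper's fitted coefficients are not reproduced here):

```latex
V = a\, m^{b}
\quad\Longrightarrow\quad
\log V = \log a + b \log m ,
```

with m = e0 (the life-table mean age at death) and V the variance in age at death; the coefficients can be fit by ordinary least squares on (log e0, log V) pairs from a set of life tables.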

Keywords: empirical pattern, mean age at death in a life table, mean age of a stationary population, stationary population

Procedia PDF Downloads 295