Search results for: likelihood parameters
9162 Number of Necessary Parameters for the Parametrization of Stabilizing Controllers for 2 × 2 RHinf Systems
Authors: Kazuyoshi Mori
Abstract:
In this paper, we consider the number of parameters needed to parametrize the stabilizing controllers of RHinf systems of size 2 × 2. Any plant of this class admits a doubly coprime factorization, so the Youla parameterization can be used to parametrize its stabilizing controllers. However, the Youla parameterization does not by itself give the minimal number of parameters. This paper shows that the minimal number of parameters is four. As a result, we show that the Youla parametrization naturally gives a parameterization of the stabilizing controllers with the minimal number of parameters.
Keywords: RHinf, parameterization, number of parameters, multi-input, multi-output systems
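For orientation, the Youla parameterization referred to above is commonly written as follows; sign conventions differ across references, so this is a hedged sketch of one standard form rather than the paper's exact statement.

```latex
C(Q) = \bigl(Y - M Q\bigr)\bigl(X - N Q\bigr)^{-1},
\qquad Q \in RH_\infty^{\,2 \times 2},
```

where \(P = N M^{-1} = \tilde{M}^{-1}\tilde{N}\) is a doubly coprime factorization and \(X, Y\) satisfy the associated Bezout identity. Since the free parameter \(Q\) is a 2 × 2 matrix over RHinf, it has exactly four scalar entries, which is consistent with the minimal count of four parameters established in the paper.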
Procedia PDF Downloads 407
9161 Intelligent Production Machine
Authors: A. Şahinoğlu, R. Gürbüz, A. Güllü, M. Karhan
Abstract:
This study aims at production machines that automatically perceive cutting data and adjust cutting parameters. The two most important parameters that have to be checked in the machine control unit are the feed rate and the spindle speed. These parameters are to be controlled through the sound of the machine. The features of the optimum sound are introduced to the computer. During the process, real-time data are received and converted into numerical values by Matlab software. Based on these values, the feed rate and speed are decreased or increased at a certain rate until the optimum sound is obtained, so that cutting proceeds at the optimum cutting parameters. During chip removal, the features of the cutting tools, the kind of material cut, the cutting parameters, and the machine used all affect various process parameters. Instead of measuring quantities such as temperature, vibration, and tool wear that emerge during cutting, a detailed analysis of the sound emitted during cutting provides a much easier and more economical way to detect the data involved in the cutting process. The relation between the cutting parameters and the sound is identified.
Keywords: cutting process, sound processing, intelligent lathe, sound analysis
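As a rough illustration of the closed loop described above, the sketch below nudges the feed rate toward the value whose machining sound matches a stored optimum. The function names, units, and thresholds are illustrative assumptions, not values from the study.

```python
# Hedged sketch of a sound-driven feed-rate controller; thresholds are assumed.

def adjust_feed(feed, sound_level, optimum_level, step=0.02, tol=0.5):
    """Lower the feed rate when the cut sounds harsher than the stored optimum,
    raise it when the cut sounds under-loaded, and hold it inside the band."""
    if sound_level > optimum_level + tol:
        return feed * (1 - step)          # cut too aggressive: back off
    if sound_level < optimum_level - tol:
        return feed * (1 + step)          # tool under-loaded: speed up
    return feed                           # within the optimum sound band

feed = 100.0                              # assumed starting feed, mm/min
for level in (78.0, 77.0, 75.2):          # simulated readings; optimum = 75 dB
    feed = adjust_feed(feed, level, optimum_level=75.0)
```

In a real controller the sound feature would come from the Matlab signal chain the abstract mentions; here it is just a simulated list.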
Procedia PDF Downloads 334
9160 Optimal Risk and Financial Stability
Authors: Rahmoune Abdelhaq
Abstract:
Systemic risk is a key concern for central banks charged with safeguarding overall financial stability. In this work, we investigate how systemic risk is affected by the structure of the financial system. We construct banking systems that are composed of a number of banks connected by interbank linkages. We then vary the key parameters that define the structure of the financial system — including its level of capitalization, the degree to which banks are connected, the size of interbank exposures and the degree of concentration of the system — and analyse the influence of these parameters on the likelihood of contagious (knock-on) defaults. First, we find that the better capitalized banks are, the more resilient the banking system is against contagious defaults, and this effect is non-linear. Second, the effect of the degree of connectivity is non-monotonic: initially, a small increase in connectivity increases the contagion effect, but after a certain threshold value, connectivity improves the ability of the banking system to absorb shocks. Third, the size of interbank liabilities tends to increase the risk of knock-on default, even if banks hold capital against such exposures. Fourth, more concentrated banking systems are shown to be prone to larger systemic risk, all else equal. In an extension to the main analysis, we study how liquidity effects interact with banking structure to produce a greater chance of systemic breakdown. We finally consider how the risk of contagion might depend on the degree of asymmetry (tiering) inherent in the structure of the banking system. A number of our results have important implications for public policy, which this paper also draws out. The paper also discusses why bank risk management is needed to attain the optimal risk level.
Keywords: financial stability, contagion, liquidity risk, optimal risk
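A minimal sketch of the knock-on default mechanism studied above is given below: a bank fails when its losses on claims against already-failed counterparties exceed its capital, and failures are iterated to a fixed point. The network and the numbers are toy assumptions, not the paper's calibration.

```python
# Toy knock-on default cascade on a stylized interbank network.

def cascade(capital, exposure, initially_failed):
    """exposure[i][j] is the amount bank i is owed by bank j. A surviving bank
    fails once its losses on claims against failed banks exceed its capital;
    iterate until no further bank fails (the contagion fixed point)."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for i in range(len(capital)):
            if i in failed:
                continue
            loss = sum(exposure[i][j] for j in failed)
            if loss > capital[i]:
                failed.add(i)
                changed = True
    return failed

capital = [4.0, 2.0, 2.0, 10.0]
exposure = [[0, 5, 0, 0],     # bank 0 is owed 5 by bank 1
            [0, 0, 3, 0],
            [0, 0, 0, 1],
            [1, 0, 0, 0]]
print(sorted(cascade(capital, exposure, {1})))   # bank 1's failure also topples bank 0
```

Raising bank 0's capital above its exposure to bank 1 stops the cascade, the capitalization effect the abstract reports.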
Procedia PDF Downloads 400
9159 Hidden Markov Movement Modelling with Irregular Data
Authors: Victoria Goodall, Paul Fatti, Norman Owen-Smith
Abstract:
Hidden Markov models have become popular for the analysis of animal tracking data. These models are being used to model the movements of a variety of species in many areas around the world. A common assumption of the model is that the observations have regular time steps. In many ecological studies, this will not be the case. The objective of the research is to modify the movement model to allow for irregularly spaced locations and to investigate the effect on the inferences which can be made about the latent states. A modification of the likelihood function to allow for these irregularly spaced locations is investigated, without using interpolation or averaging the movement rate. The suitability of the modification is investigated using GPS tracking data for lion (Panthera leo) in South Africa, with many observations obtained during the night and few during the day. Many nocturnal predator tracking studies are set up in this way, to obtain many locations at night when the animal is most active and difficult to observe, and few during the day, when the animal is expected to rest and is potentially easier to observe. Modifying the likelihood function allows the popular hidden Markov model framework to be used to model these irregularly spaced locations, making use of all the observed data.
Keywords: hidden Markov models, irregular observations, animal movement modelling, nocturnal predator
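One hedged way to accommodate irregular gaps without interpolating is to propagate the one-step transition matrix over the number of elapsed base time steps inside the forward recursion, as sketched below. This is an illustrative reading of such a likelihood modification; the paper's actual construction may differ, and all numbers are made up.

```python
# Forward-algorithm likelihood with gap-dependent transition matrices P^k.

def mat_mult(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(p, k):
    """k-step transition matrix P^k (identity when k = 0)."""
    n = len(p)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(k):
        out = mat_mult(out, p)
    return out

def likelihood(emis, gaps, p, init):
    """Forward algorithm: emis[t][s] = P(obs_t | state s); gaps[t] = number of
    base time steps elapsed since observation t-1 (irregular spacing)."""
    n = len(init)
    alpha = [init[s] * emis[0][s] for s in range(n)]
    for t in range(1, len(emis)):
        pt = mat_pow(p, gaps[t])                     # transition over the gap
        alpha = [sum(alpha[i] * pt[i][j] for i in range(n)) * emis[t][j]
                 for j in range(n)]
    return sum(alpha)
```

With uninformative emissions the total probability is 1 for any gap pattern, a quick sanity check that the gap handling conserves probability.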
Procedia PDF Downloads 244
9158 Bidirectional Long Short-Term Memory-Based Signal Detection for Orthogonal Frequency Division Multiplexing With All Index Modulation
Authors: Mahmut Yildirim
Abstract:
This paper proposes bidirectional long short-term memory (Bi-LSTM) network-aided deep learning (DL)-based signal detection for orthogonal frequency division multiplexing with all index modulation (OFDM-AIM), namely Bi-DeepAIM. OFDM-AIM was developed to increase the spectral efficiency of OFDM with index modulation (OFDM-IM), a promising multi-carrier technique for communication systems beyond 5G. In this paper, owing to its strong classification ability, Bi-LSTM is considered as an alternative to the maximum likelihood (ML) algorithm, which is used for signal detection in the classical OFDM-AIM scheme. The performance of Bi-DeepAIM is compared with LSTM network-aided DL-based OFDM-AIM (DeepAIM) and classical OFDM-AIM with ML-based signal detection in terms of bit error rate (BER) performance and computational time. Simulation results show that Bi-DeepAIM obtains better BER performance than DeepAIM and lower computation time in signal detection than ML-AIM.
Keywords: bidirectional long short-term memory, deep learning, maximum likelihood, OFDM with all index modulation, signal detection
Procedia PDF Downloads 72
9157 Maximum-Likelihood Inference of Multi-Finger Movements Using Neural Activities
Authors: Kyung-Jin You, Kiwon Rhee, Marc H. Schieber, Nitish V. Thakor, Hyun-Chool Shin
Abstract:
It remains unknown whether M1 neurons encode multi-finger movements independently or as a neural network of single-finger movements, although multi-finger movements are physically combinations of single-finger movements. We present evidence of correlation between single- and multi-finger movements and also attempt the challenging task of semi-blind decoding of neural data with minimal training of the neural decoder. Data were collected from 115 task-related neurons in M1 of a trained rhesus monkey performing flexion and extension of each finger and the wrist (12 single and 6 two-finger movements). By exploiting the correlation of temporal firing patterns between movements, we found that the correlation coefficient for physically related movement pairs is greater than for others; neurons tuned to single-finger movements increased their firing rate when multi-finger commands were instructed. Based on this knowledge, neural semi-blind decoding is done by choosing the greatest and the second-greatest likelihood among canonical candidates. We achieved a decoding accuracy of about 60% for multi-finger movements without a corresponding training data set. These results suggest that neural activity recorded during single-finger movements alone can be exploited to control dexterous multi-fingered neuroprosthetics.
Keywords: finger movement, neural activity, blind decoding, M1
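A maximum-likelihood decode of this kind can be sketched with a Poisson firing model: score each candidate movement by the likelihood of the observed spike counts under that movement's mean rates, and keep the top two candidates, echoing the greatest/second-greatest choice described above. The tuning table is hypothetical and the study's decoder details may differ.

```python
import math

def log_likelihood(counts, rates):
    """Poisson log-likelihood of observed spike counts under per-movement rates."""
    return sum(c * math.log(r) - r - math.lgamma(c + 1)
               for c, r in zip(counts, rates))

def decode(counts, tuning, top=2):
    """Return the `top` movements with the greatest likelihood."""
    ranked = sorted(tuning, key=lambda m: log_likelihood(counts, tuning[m]),
                    reverse=True)
    return ranked[:top]

# Hypothetical tuning table: mean firing rates of 3 neurons per movement.
tuning = {"index-flex": [8.0, 2.0, 1.0],
          "thumb-flex": [1.0, 9.0, 2.0],
          "wrist-ext":  [2.0, 2.0, 7.0]}
print(decode([9, 1, 2], tuning))   # → ['index-flex', 'wrist-ext']
```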
Procedia PDF Downloads 320
9156 Copula Markov Switching Multifractal Models for Forecasting Value-at-Risk
Authors: Giriraj Achari, Malay Bhattacharyya
Abstract:
In this paper, the effectiveness of Copula Markov Switching Multifractal (MSM) models at forecasting the Value-at-Risk of a two-stock portfolio is studied. The innovations are allowed to be drawn from distributions that can capture skewness and leptokurtosis, which are well-documented empirical characteristics of financial returns. The candidate distributions considered for this purpose are the Johnson-SU, Pearson Type-IV and α-stable distributions. The two univariate marginal distributions are combined using the Student-t copula. The estimation of all parameters is performed by maximum likelihood estimation. Finally, the models are compared in terms of accurate Value-at-Risk (VaR) forecasts using tests of unconditional coverage and independence. It is found that Copula-MSM models with leptokurtic innovation distributions perform slightly better than the Copula-MSM model with normal innovations. Copula-MSM models, in general, produce better VaR forecasts than traditional methods like the historical simulation method, the variance-covariance approach and Copula-Generalized Autoregressive Conditional Heteroscedasticity (Copula-GARCH) models.
Keywords: copula, Markov switching, multifractal, value-at-risk
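The unconditional coverage comparison mentioned above is commonly done with Kupiec's proportion-of-failures likelihood-ratio test; a minimal sketch is below. This is the standard test, not necessarily the exact variant used in the paper.

```python
import math

def kupiec_pof(violations, n, alpha=0.05):
    """Kupiec proportion-of-failures LR statistic for unconditional coverage,
    LR = -2 [ln L(alpha) - ln L(x/n)], asymptotically chi-square(1) under
    correct VaR coverage (x = number of VaR violations in n days)."""
    x = violations
    if x == 0:
        return -2 * n * math.log(1 - alpha)
    if x == n:
        return -2 * n * math.log(alpha)
    p = x / n
    ll0 = (n - x) * math.log(1 - alpha) + x * math.log(alpha)
    ll1 = (n - x) * math.log(1 - p) + x * math.log(p)
    return -2 * (ll0 - ll1)

# 20 violations in 100 days is far too many for a 5% VaR:
print(kupiec_pof(20, 100) > 3.84)   # exceeds the 5% chi-square(1) critical value
```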
Procedia PDF Downloads 164
9155 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics.
Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory
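In the absence of the mvtnorm machinery, an upper quantile of the minimum of a correlated Gaussian vector can be approximated by plain Monte Carlo, as in the hedged sketch below. It is a stand-in for the multivariate Gaussian integrals the paper evaluates; the mean vector and Cholesky factor are illustrative assumptions.

```python
import random

random.seed(1)

def min_gic_quantile(mean, chol, q=0.95, draws=20000):
    """Draw X = mean + L z with z standard normal (L a lower-triangular
    Cholesky factor of the covariance) and return the empirical q-quantile
    of min(X) over the candidate models."""
    k = len(mean)
    mins = []
    for _ in range(draws):
        z = [random.gauss(0.0, 1.0) for _ in range(k)]
        x = [mean[i] + sum(chol[i][j] * z[j] for j in range(i + 1))
             for i in range(k)]
        mins.append(min(x))
    mins.sort()
    return mins[min(int(q * draws), draws - 1)]

# Two correlated candidate-model statistics (correlation 0.5, unit variances):
chol = [[1.0, 0.0], [0.5, 0.8660254037844386]]
upper = min_gic_quantile([0.0, 0.2], chol, q=0.95)
```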
Procedia PDF Downloads 89
9154 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data
Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin
Abstract:
The main objective in a change detection problem is to develop algorithms for efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean value of a time series based on a likelihood ratio test procedure. The design, implementation and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart designed for abrupt shift detection using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detection of a gradual change in mean. The algorithm is then applied and tested on randomly generated time series data with a gradual linear trend in mean to demonstrate its usefulness.
Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test
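For orientation, the standard one-sided CUSUM recursion that the modified algorithm is benchmarked against can be sketched in a few lines. The allowance k, threshold h, and data are illustrative; the paper's MCUSUM adds drift-specific modifications not shown here.

```python
def cusum(xs, mu0, k, h):
    """One-sided CUSUM for an upward mean shift:
    g_t = max(0, g_{t-1} + x_t - mu0 - k); signal at the first t with g_t > h
    (mu0 = in-control mean, k = allowance, h = decision threshold)."""
    g = 0.0
    for t, x in enumerate(xs):
        g = max(0.0, g + x - mu0 - k)
        if g > h:
            return t          # index of the alarm observation
    return None               # no change detected

data = [0.1, -0.2, 0.0, 0.1, 1.2, 1.0, 1.3, 0.9]   # mean shifts upward at t = 4
print(cusum(data, mu0=0.0, k=0.25, h=2.0))          # → 6
```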
Procedia PDF Downloads 298
9153 A Review on Parametric Optimization of Casting Processes Using Optimization Techniques
Authors: Bhrugesh Radadiya, Jaydeep Shah
Abstract:
In the Indian foundry industry, there is a need for defect-free castings with minimum production cost in a short lead time. Casting defects are a major issue in the foundry shop, increasing the rejection rate of castings and the wastage of material. Various parameters influence the casting process, such as mold-machine-related parameters, green-sand-related parameters, cast-metal-related parameters, mold-related parameters and shake-out-related parameters. The mold-related parameters have the greatest influence on casting defects in the sand casting process. This paper reviews castings produced by foundries in which shrinkage and blow holes were the major defects; the analyses identified that mold-related parameters such as mold temperature, pouring temperature and runner size were not properly set in the sand casting process. These parameters were optimized using different optimization techniques such as the Taguchi method, response surface methodology, genetic algorithms and the teaching-learning-based optimization algorithm. Finally, it is concluded that the teaching-learning-based optimization algorithm gives better results than the other optimization techniques.
Keywords: casting defects, genetic algorithm, parametric optimization, Taguchi method, TLBO algorithm
Procedia PDF Downloads 728
9152 The Role of Teacher-Student Relationship on Teachers’ Attitudes towards School Bullying
Authors: Ghada Shahrour, Nusiebeh Ananbh, Heyam Dalky, Mohammad Rababa, Fatmeh Alzoubi
Abstract:
A positive teacher-student relationship has been found to affect students’ attitudes towards bullying and, in turn, their engagement in bullying behavior. However, no investigation has been conducted to explore whether the teacher-student relationship affects teachers’ attitudes towards bullying. The aim of this study was to examine the role of the teacher-student relationship in teachers’ attitudes towards bullying in terms of bullying seriousness, empathic responding, and likelihood to intervene in bullying situations. A cross-sectional, descriptive design was employed with a convenience sample of 173 school teachers (50.9% female) of 12- to 17-year-old students. The teachers were recruited from secondary public schools of three governorates in the Northern district of Jordan. Each group of students has multiple teachers for different subjects. Results showed that the teacher-student relationship is partially related to teachers’ attitudes towards bullying. More specifically, having a close teacher-student relationship significantly increased teachers’ perception of bullying seriousness and empathy but not their likelihood to intervene. Research is needed to examine teachers’ obstacles to providing bullying interventions, as these barriers may be culturally contextualized. Meanwhile, interventions that promote a quality teacher-student relationship are necessary to increase teachers’ perception of bullying seriousness and empathy. Students have been found to adopt the values of their teachers, and this may deter them from engaging in bullying behavior.
Keywords: school bullying, teachers’ attitudes, teacher-student relationship, adolescent students
Procedia PDF Downloads 100
9151 Role of Business Incubators and Social Capital on Innovation and Growth of Firms: Evidence from Ethiopia
Authors: Hailemariam Gebremichael Gebretsadik, Abrham Hagos Tesfaslasea
Abstract:
To satisfy the high need for ICT entrepreneurship and rectify the weak entrepreneurial culture in Ethiopia, the country has established ICT business incubation centers with the intention of preventing business failures, promoting innovation, and accelerating the growth and success of firms. This study investigates the role of business incubators and social capital in the innovation and growth of firms in Ethiopia. In this research, innovation and growth of firms were considered as dependent variables, whereas business incubation and social capital were treated as independent variables. The researcher employed an e-mail survey among 137 tenant firms (firms that joined and/or graduated from the business incubation centers available in Ethiopia) and obtained 113 responses that were appropriate for this research. The results of this study reveal that the dimensions of business incubation (physical resources, business support, and networking) have a significant effect on the innovation of firms, but these dimensions do not show a significant effect on the growth of firms. On the other hand, the dimensions of social capital (structural, cognitive, and relational) show a significant positive impact on the likelihood of firms' growth but not on their innovation. Moreover, the results indicate that the dimensions of business incubation and social capital together have a significant effect on the likelihood of tenant firms innovating and growing.
Keywords: business incubation, innovation, social capital, tenant firms
Procedia PDF Downloads 83
9150 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination
Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan
Abstract:
The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute the likelihood ratio (LR) to quantify the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, wherein a set of assumptions and methods for a given data set will be made. It is therefore important to know how repeatable and reproducible our estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR were presented so as not to get an incorrect estimate that would be used to deliver a wrong judgment in the court of law. The estimate of the LR is fundamentally a Bayesian concept, and we used two LR estimators, namely logistic regression (LoR) and the kernel density estimator (KDE), for this paper. The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Though the LoR performed better than the KDE when tested using the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used in computing the strength of evidence in handwriting forensics.
Keywords: confidence interval, handwriting, kernel density estimator (KDE), logistic regression (LoR), repeatability, reproducibility
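A KDE-based likelihood ratio of the kind discussed above can be sketched as the ratio of two kernel density estimates, one fitted to same-writer similarity scores and one to different-writer scores. The scores and bandwidth below are made-up illustrations, not the paper's data.

```python
import math

def gaussian_kde(points, h):
    """Return a one-dimensional density estimate f(x) built from a Gaussian
    kernel of bandwidth h centered on each training point."""
    norm = len(points) * h * math.sqrt(2 * math.pi)
    def f(x):
        return sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points) / norm
    return f

# Hypothetical similarity scores for same-writer and different-writer pairs:
same = [0.80, 0.85, 0.90, 0.88, 0.83]
diff = [0.30, 0.40, 0.35, 0.45, 0.50]

# LR for a questioned comparison scoring 0.86: density under the prosecution
# hypothesis (same writer) over density under the defense hypothesis.
lr = gaussian_kde(same, 0.05)(0.86) / gaussian_kde(diff, 0.05)(0.86)
```

A large LR supports the same-writer hypothesis; repeating the fit on resampled scores is one way to see the confidence-interval variation the abstract reports.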
Procedia PDF Downloads 124
9149 Recommender Systems Using Ensemble Techniques
Authors: Yeonjeong Lee, Kyoung-jae Kim, Youngtae Kim
Abstract:
This study proposes a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by precisely reflecting the user’s preferences. The proposed model consists of two steps. In the first step, this study uses logistic regression, decision trees, and artificial neural networks to predict customers who have a high likelihood of purchasing products in each product group. Then, this study combines the results of each predictor using multi-model ensemble techniques such as bagging and bumping. In the second step, this study uses market basket analysis to extract association rules for co-purchased products. Finally, through the above two steps, the system selects customers who have a high likelihood of purchasing products in each product group and recommends proper products from the same or different product groups to them. We test the usability of the proposed system using a prototype and real-world transaction and profile data. In addition, we survey user satisfaction with the recommended product list from the proposed system and with randomly selected product lists. The results show that the proposed system may be useful in real-world online shopping stores.
Keywords: product recommender system, ensemble technique, association rules, decision tree, artificial neural networks
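The market-basket step can be illustrated with a minimal pairwise rule miner that keeps rules A → B meeting support and confidence thresholds. The baskets and thresholds are illustrative assumptions; the study's actual rule-mining details may differ.

```python
from itertools import combinations
from collections import Counter

def association_rules(baskets, min_support=0.3, min_conf=0.6):
    """Mine pairwise rules (lhs, rhs, support, confidence) from transactions:
    support = P(lhs and rhs co-purchased), confidence = P(rhs | lhs)."""
    n = len(baskets)
    item_count = Counter(i for b in baskets for i in set(b))
    pair_count = Counter(p for b in baskets
                         for p in combinations(sorted(set(b)), 2))
    rules = []
    for (a, b), c in pair_count.items():
        if c / n < min_support:
            continue                        # pair too rare to recommend on
        for lhs, rhs in ((a, b), (b, a)):
            conf = c / item_count[lhs]
            if conf >= min_conf:
                rules.append((lhs, rhs, c / n, conf))
    return rules

baskets = [["bread", "milk"], ["bread", "milk", "eggs"],
           ["milk"], ["bread", "milk"]]
rules = association_rules(baskets)          # bread → milk with confidence 1.0
```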
Procedia PDF Downloads 294
9148 Optimizing of Machining Parameters of Plastic Material Using Taguchi Method
Authors: Jumazulhisham Abdul Shukor, Mohd. Sazali Said, Roshanizah Harun, Shuib Husin, Ahmad Razlee Ab Kadir
Abstract:
This paper applies the Taguchi optimization method to determine the best machining parameters for a pocket milling process on polypropylene (PP) using a CNC milling machine, where surface roughness is considered and carbide insert cutting tools are used. Three machining parameters, namely speed, feed rate and depth of cut, are investigated at three levels each (low, medium and high) using Taguchi orthogonal arrays. The settings of the machining parameters were determined by the Taguchi method, and the signal-to-noise (S/N) ratios were assessed to define the optimal levels and to predict the effect of the assigned parameters on surface roughness based on an L9 array. The final experimental outcomes are presented to verify that the optimization parameters recommended by the manufacturer are accurate.
Keywords: inserts, milling process, signal-to-noise (S/N) ratio, surface roughness, Taguchi optimization method
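Since surface roughness is a smaller-the-better response, the S/N ratio used in such Taguchi analyses is typically computed as below; the replicate roughness values are hypothetical.

```python
import math

def sn_smaller_the_better(ys):
    """Taguchi smaller-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10( (1/n) * sum(y_i^2) ). Higher S/N is better,
    i.e. lower and more consistent roughness."""
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

# Hypothetical Ra (micrometre) replicates for one L9 trial:
print(round(sn_smaller_the_better([1.2, 1.3, 1.1]), 3))   # → -1.604
```

The trial (and hence parameter level) maximizing this ratio is the one the method selects.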
Procedia PDF Downloads 637
9147 A Comparative Study of Dividend Policy and Share Price across the South Asian Countries
Authors: Anwar Hussain, Ahmed Imran, Farida Faisal, Fatima Sultana
Abstract:
The present research provides a comparative assessment of dividend policy and share prices across South Asian countries, including Pakistan, India and Sri Lanka, over the period 2010 to 2014. Academic writers have found that the dividend policy and share price relationship is not the same across South Asian markets, for different reasons. Panel models were used for the evaluation in the current study, and the redundant fixed effects likelihood ratio test and the Hausman test were used to choose among the common, fixed and random effects models. The results show that in the Indian market dividend policy plays a fundamental role and has a significant impact on market share prices, whereas, in contrast to previous studies, dividend policy has no impact on share prices in Sri Lanka and Pakistan.
Keywords: dividend policy, share price, South Asian countries, panel data analysis, theories and parameters of dividend
Procedia PDF Downloads 323
9146 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
In biomedical research and randomized clinical trials, the outcomes of most common interest are time-to-event, so-called survival, data. The importance of robust models in this context is to compare the effects of randomly controlled experimental groups in a way that has a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for the estimation of causal effects in modeling left-truncated and right-censored survival data. Despite its wide applications and popularity in estimating unknown parameters, the maximum likelihood estimation technique is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease the complexity, we propose modified estimating equations. After the intuitive estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the proposed model is illustrated via simulation studies and the Stanford heart transplant real data example. To sum up the study, the bias of the covariates is adjusted by estimating the density function of the truncation variable, which is also incorporated in the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function.
In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experiments after adjusting for the bias raised in the model due to the truncation variable.
Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate
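For concreteness, a common form of the semiparametric transformation model, together with a cumulative-hazard ratio of the kind used here as the causal contrast, can be written as follows; the notation is a hedged reconstruction, not copied from the paper.

```latex
H(T) = -\beta^{\top} Z(t) + \varepsilon,
\qquad
\theta(t) = \frac{\Lambda_{1}(t)}{\Lambda_{0}(t)},
```

where \(H\) is an unspecified increasing transformation, \(\varepsilon\) has a known distribution (choices of which recover proportional hazards or proportional odds models), and \(\Lambda_{1}, \Lambda_{0}\) are the cumulative hazard functions of the active (treated) and passive (control) groups.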
Procedia PDF Downloads 125
9145 Model Averaging in a Multiplicative Heteroscedastic Model
Authors: Alan Wan
Abstract:
In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.
Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk
Procedia PDF Downloads 384
9144 Genetics of Birth and Weaning Weight of Holstein, Friesians in Sudan
Authors: Safa A. Mohammed Ali, Ammar S. Ahamed, Mohammed Khair Abdalla
Abstract:
The objectives of this study were to estimate the means and genetic parameters of the birth and weaning weights of calves of pure Holstein-Friesian cows raised in Sudan. The traits studied were weight at birth and weight at weaning. The study also included some of the important factors that affected these traits. The data were analyzed using Harvey’s least squares and maximum likelihood programme. The results obtained showed that the overall mean weight at birth of the calves under study was 34.36±0.94 kg. Male calves were found to be heavier than females; the difference between the sexes was highly significant (P<0.001). The mean weight at birth of male calves was 34.27±1.17 kg, while that of females was 32.51±1.14 kg. The effects of sex of calf, sire and parity of dam were highly significant (P<0.001). The overall mean weight at weaning was 67.10±5.05 kg; weight at weaning was significantly (P<0.001) affected by sex of calf and sire, and year and season of birth had a highly significant (P<0.001) effect on either trait. The estimated heritability of birth weight (0.033±0.015) was lower than that of weaning weight (0.224±0.039); the genetic correlation was 0.563, the phenotypic correlation 0.281, and the environmental correlation 0.268.
Keywords: birth, weaning, weight, Friesian
Procedia PDF Downloads 665
9143 Effect of Design Parameters on a Two Stage Launch Vehicle Performance
Authors: Assem Sallam, Aly Elzahaby, Ahmed Makled, Mohamed Khalil
Abstract:
Changes in the design parameters of a launch vehicle affect its overall flight path trajectory. In this paper, several design parameters are introduced to study their effects. The selected parameters are the launch vehicle mass, which is represented by the payload mass; the maximum allowable angle of attack the launch vehicle can withstand; the flight path angle that is predefined for the launch vehicle second stage; the required inclination and its effect on the launch azimuth; and finally the launch pad coordinates. The selected design parameters are studied for their effect on the variation of altitude, ground range, absolute velocity and flight path angle. The study gives a general means of adjusting the design parameters to reach the required launch vehicle performance.
Keywords: launch vehicle azimuth, launch vehicle trajectory, launch vehicle payload, launch pad location
Procedia PDF Downloads 312
9142 A Study of the Understated Violence within Social Contexts against Adolescent Girls
Authors: Niranjana Soperna, Shivangi Nigam
Abstract:
Violence against women is linked to their disadvantageous position in society. It is rooted in the unequal power relationship between men and women and is a global problem that is not limited to any specific group of women. An adolescent girl’s life is often shadowed by the likelihood of violence, and acts of violence exert additional power over girls because the stigma of violence often attaches more to the girl than to her perpetrator. The experience of violence is distressing at the individual emotional and physical level. Research and programs for adolescent girls have traditionally focused on sexuality, reproductive health and behavior, neglecting the broader social issues that underpin adolescent girls’ human rights, overall development, health and well-being. This paper is an endeavor to address the understated, or disguised, forms of violence that adolescent girls experience within social contexts. The parameters examined in this research have largely been ignored in studies of violence in the social domain. Hence, the researchers attempted to explore this camouflaged form of violence and identified some specific parameters: diminished self-worth and esteem, verbal abuse, menstruation taboo and social rigidity, neglect of medical and health facilities, and complexion as a prime parameter for judging beauty. The study was conducted in the districts of Haryana, where personal interviews were conducted with both urban and rural adolescent girls (aged 13 to 19 years) based on a structured interview schedule. The results revealed that adolescent girls in urban as well as rural areas were considerably affected by the above-mentioned issues.
In urban areas the magnitude was comparatively smaller, owing to the higher literacy rate and the more informed thinking it fosters, although the urban-rural difference remained negligible.
Keywords: adolescent girls, education, social contexts, understated violence
Procedia PDF Downloads 317
9141 Regression for Doubly Inflated Multivariate Poisson Distributions
Authors: Ishapathik Das, Sumen Sen, N. Rao Chaganty, Pooja Sengupta
Abstract:
Dependent multivariate count data occur in several research studies. Such data can be modeled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, when the number of observations in some cells is much larger than in other cells, the copula-based multivariate Poisson (or negative binomial) distribution may not fit well and is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are higher than those of the other cells, and we develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for doubly inflated multivariate count data. To illustrate the proposed methodologies, we present real data containing bivariate count observations with inflation in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation of the unknown parameters of the models.
Keywords: copula, Gaussian copula, multivariate distributions, inflated distributions
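As a hedged illustration of the double-inflation idea (a simplified univariate analogue, not the paper's copula-based multivariate construction), a doubly inflated Poisson places extra point masses p1 and p2 on two cells c1 and c2 and weight 1 − p1 − p2 on an ordinary Poisson(λ); the function name and parameter values below are invented for the sketch.

```python
import math

def di_poisson_pmf(y, lam, p1, p2, c1=0, c2=1):
    # Doubly inflated Poisson: extra mass p1 at cell c1 and p2 at
    # cell c2, with the remaining (1 - p1 - p2) weight coming from
    # a Poisson(lam) distribution.
    base = math.exp(-lam) * lam**y / math.factorial(y)
    pmf = (1.0 - p1 - p2) * base
    if y == c1:
        pmf += p1
    if y == c2:
        pmf += p2
    return pmf

total = sum(di_poisson_pmf(y, 2.0, 0.1, 0.15) for y in range(100))
```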
Procedia PDF Downloads 156
9140 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). 
Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted by maximum likelihood. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths before fitting. One advantage of this approach is that maximum likelihood methods enable a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse, or a cluster of pulses, to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
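The conditioning-on-X(t) idea can be sketched in a discrete-time simplification: a hidden Markov chain modulates the Poisson rate of the counts in each interval, and the likelihood is accumulated by the standard forward recursion. This is a hedged illustration only; the transition matrix, rates and counts below are invented, and the paper works with the continuous-time process.

```python
import numpy as np
from math import exp, factorial

def mmpp_loglik(counts, P, rates, pi0):
    # Forward algorithm: hidden Markov chain (transition matrix P,
    # initial distribution pi0) selects which Poisson rate generates
    # each interval's count. Returns the log-likelihood obtained by
    # summing over all hidden state paths.
    alpha = np.asarray(pi0, dtype=float)
    loglik = 0.0
    for y in counts:
        emit = np.array([exp(-r) * r**y / factorial(y) for r in rates])
        alpha = alpha * emit           # weight states by emission prob.
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha = (alpha / scale) @ P    # propagate one step of the chain
    return loglik

# Two hidden states: a 'dry' low-rate state and a 'wet' high-rate state.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
ll = mmpp_loglik([0, 0, 3, 5, 1], P, rates=[0.1, 4.0], pi0=[0.5, 0.5])
```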
Procedia PDF Downloads 278
9139 Symbolic Analysis of Power Spectrum of CMOS Cross Couple Oscillator
Authors: Kittipong Tripetch
Abstract:
This paper proposes, for the first time, symbolic formulas for the power spectrum of the cross-coupled oscillator and a modified version of the circuit. Several network descriptions found in microwave textbooks, such as impedance (Z), admittance (Y), ABCD and hybrid (H) parameters, can be used to derive the power spectrum. Comparing the resulting power-spectrum graphs reveals which methodology is best from the point of view of a practical measurement setup; for example, the impedance-parameter definition relies on superposition of currents and requires the current injected at the other port of the circuit to be zero, which is impossible to realize in practice. Four graphs of the impedance parameters of the cross-coupled oscillator are presented, followed by four graphs of its scattering parameters.
Keywords: optimization, power spectrum, impedance parameters, scattering parameter
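The abstract compares impedance and scattering descriptions of the same two-port; the standard conversion between them, for a real reference impedance (a textbook relation, not taken from the paper; a 50 Ω reference is assumed here), can be sketched as:

```python
import numpy as np

def z_to_s(Z, z0=50.0):
    # Convert an n-port impedance matrix Z to scattering parameters S
    # for a real reference impedance z0:  S = (Z - z0 I)(Z + z0 I)^-1.
    I = np.eye(Z.shape[0])
    return (Z - z0 * I) @ np.linalg.inv(Z + z0 * I)

# Sanity check: a matched two-port (Z = z0 I) reflects nothing,
# so S is the zero matrix.
S = z_to_s(50.0 * np.eye(2))
```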
Procedia PDF Downloads 466
9138 Optimization of End Milling Process Parameters for Minimization of Surface Roughness of AISI D2 Steel
Authors: Pankaj Chandna, Dinesh Kumar
Abstract:
The present work analyses different end milling parameters to minimize the surface roughness of AISI D2 steel. D2 steel is generally used for stamping or forming dies, punches, forming rolls, knives, slitters, shear blades, tools, scrap choppers, tyre shredders, etc. Surface roughness is one of the main indices determining the quality of machined products and is influenced by various cutting parameters. In machining operations, achieving the desired surface quality by optimizing the machining parameters is a challenging job. For mating components, surface roughness becomes even more critical, since these quality characteristics are highly correlated and are expected to be influenced, directly or indirectly, by the process parameters and their interaction effects on the process environment. In this work, the effects of the selected process parameters on surface roughness, and the subsequent setting of those parameters and their levels, have been established using Taguchi’s parameter design approach. The experiments were performed with the combinations of process parameter levels suggested by an L9 orthogonal array. End milling of AISI D2 steel with a carbide tool was investigated experimentally by varying the feed, speed and depth of cut, and the surface roughness was measured using a surface roughness tester. Analyses of variance were performed on the mean and the signal-to-noise ratio to estimate the contribution of the different process parameters to the process.
Keywords: D2 steel, orthogonal array, optimization, surface roughness, Taguchi methodology
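Since surface roughness is to be minimized, the signal-to-noise ratio used in a Taguchi analysis of this kind is normally the smaller-the-better form, S/N = −10·log10(mean of y²). The sketch below computes it for one run; the roughness readings are illustrative values, not the paper's measurements.

```python
import math

def sn_smaller_the_better(values):
    # Taguchi smaller-the-better signal-to-noise ratio:
    # S/N = -10 * log10( (1/n) * sum(y_i^2) ).  A larger S/N means
    # a smaller (better) response with less variation.
    n = len(values)
    return -10.0 * math.log10(sum(y * y for y in values) / n)

# Hypothetical Ra readings (micrometres) from replicates of one
# L9 run:
sn = sn_smaller_the_better([0.8, 1.0, 0.9])
```

In a full analysis, the S/N ratio is computed for each of the nine runs, and the parameter level with the highest mean S/N is selected for each factor.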
Procedia PDF Downloads 544
9137 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning
Authors: Jiahao Tian, Michael D. Porter
Abstract:
Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. An accurate temporal estimate of the crime rate would be valuable in achieving this goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression and use an EM algorithm to estimate the parameters. Two special penalties are added to provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover days of the week that share the same intensity pattern. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data, that is, data for which the event time is known only to lie within an interval rather than being observed exactly. This type of data is prevalent in criminology because, for certain types of crime, the victim is absent when the offence occurs and can only bound the event time. Despite its importance, research on the temporal analysis of crime has lagged behind its spatial counterpart. Inspired by the success of statistical approaches to crime-related problems, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression with special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction.
In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, the proposed approach provides not only an accurate estimate of the intensity for each time unit considered but also the time-varying pattern of crime incidence. Both will be helpful in allocating limited resources, either by improving the existing patrol plan in light of the discovered day-of-week clusters or by deciding where extra resources are needed.
Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation
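A minimal sketch of the censoring-aware EM idea follows, in a simplified form without the smoothness penalties and with hypothetical bin indices: each event known only to lie in an interval of time bins is fractionally allocated across those bins in proportion to the current intensity estimate (E-step), and the expected counts per bin become the new estimate (M-step).

```python
import numpy as np

def em_intensity(intervals, n_bins, n_iter=100):
    # EM for a piecewise-constant intensity with interval-censored
    # event times. Each event is known only to lie in bins [lo, hi).
    # E-step: spread each event over its interval proportionally to
    # the current intensity. M-step: with unit exposure per bin, the
    # expected count per bin is the new intensity estimate.
    lam = np.ones(n_bins)
    for _ in range(n_iter):
        expected = np.zeros(n_bins)
        for lo, hi in intervals:
            w = lam[lo:hi] / lam[lo:hi].sum()
            expected[lo:hi] += w
        lam = expected
    return lam

# Three events: two pinned exactly to bin 1 (interval of width one),
# and one known only to lie somewhere in bins 0-3.
lam = em_intensity([(1, 2), (1, 2), (0, 4)], n_bins=4)
```

The uncertain event is progressively pulled toward bin 1, where the pinned events indicate a high rate; the total expected count always equals the number of events.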
Procedia PDF Downloads 66
9136 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment
Authors: Neda Orak, Mostafa Zarei
Abstract:
Environmental assessment is an important component of environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields, such as geology, cartography, geography, agriculture, forestry, land use planning and environmental science. It can reveal cyclical changes in earth-surface objects, and it can delineate the limits of earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assesses mangrove forests by RS techniques. Its aim was a quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries, carried out with Landsat satellite images from 1975 to 2013 matched to ground control points. This part of the mangroves is their last distribution in the northern hemisphere, so the work can provide a good basis for better management of this important ecosystem. Landsat has provided researchers with valuable images for detecting earth changes. This research used the MSS, TM, ETM+ and OLI sensors from 1975, 1990, 2000 and 2003-2013. After essential corrections such as error fixing, band combination and georeferencing, with the 2012 image as the base, changes were studied by maximum likelihood supervised classification and the IPVI index. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used to validate the changes obtained from the satellite images. The results showed that in 2012 the mangrove area in Bidkhoon was 1119072 m2 by GPS, 1231200 m2 by maximum likelihood supervised classification and 1317600 m2 by IPVI. The corresponding areas in Basatin were 466644 m2, 88200 m2 and 63000 m2. The final results show a natural decline of the forests, compounded in Basatin by human activities. The loss was offset by planting over many years, although the trend has been declining again in recent years. Hence, satellite images have a high ability to estimate all such environmental processes.
This research showed a high correlation between indexes such as IPVI and NDVI derived from the images and the ground control points.
Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park
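The two vegetation indexes named above are simple band ratios; the sketch below uses their standard definitions (the reflectance arrays are illustrative values, not the study's data).

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - R) / (NIR + R),
    # bounded in [-1, 1]; dense vegetation pushes it toward 1.
    return (nir - red) / (nir + red)

def ipvi(nir, red):
    # Infrared Percentage Vegetation Index: NIR / (NIR + R).
    # Algebraically equal to (NDVI + 1) / 2, so it is bounded in [0, 1]
    # and never negative, which is convenient for classification.
    return nir / (nir + red)

# Illustrative per-pixel reflectances: mangrove, sparse cover, water.
nir = np.array([0.45, 0.30, 0.05])
red = np.array([0.08, 0.10, 0.04])
```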
Procedia PDF Downloads 293
9135 Thyroid Malignancy Concurrent with Hyperthyroidism: Variations with Thyroid Status and Age
Authors: N. J. Nawarathna, N. R. Kmarasinghe, D. Chandrasekara, B. M. R. S. Balasooriya, R. A. A. Shaminda, R. J. K. Senevirathne
Abstract:
Introduction: Thyroid malignancy associated with hyperthyroidism is considered rare; retrospective studies have reported a low incidence (0.7-8.5%) of thyroid malignancy in hyperthyroid patients. To assess the clinical relevance of this association, thyroid status was analyzed in a cohort of patients with thyroid malignancy. Method: Thyroid malignancies diagnosed histologically in 56 patients over an 18-month period beginning in April 2013, in a single surgical unit at Teaching Hospital Kandy, were included. Preoperative patient details and the progression of thyroid status were assessed with thyroid stimulating hormone, free thyroxine and free triiodothyronine levels. Results: Among the 56 patients, papillary carcinoma was diagnosed in 44 (78.6%), follicular carcinoma in 7 (12.5%), and medullary or anaplastic carcinoma in 5 (8.9%). Twelve (21.4%) patients were male and 44 (78.6%) female. Twenty (35.7%) were less than 40 years old, 29 (51.8%) were between 40 and 59 years, and 7 (12.5%) were above 59 years. Cross-tabulation of type of carcinoma with gender gave a likelihood ratio of 6.908 (p = 0.032). Biochemically, 12 (21.4%) patients were hyperthyroid; of these, 5 (41.7%) had primary and 7 (58.3%) secondary hyperthyroidism. The mean age of euthyroid patients was 43.77 years (SD 10.574) and that of hyperthyroid patients 53.25 years (SD 16.057); the independent samples t-statistic was -2.446 (two-tailed p = 0.018). Cross-tabulating thyroid status with age group gave a likelihood ratio of 9.640 (p = 0.008). Conclusion: Papillary carcinoma is seen more among females. Among the patients with thyroid carcinoma, those with biochemically proven hyperthyroidism belonged to an older age group than those who were euthyroid. Hence, careful evaluation of elderly hyperthyroid patients to select the most suitable therapeutic approach is justified.
Keywords: age, hyperthyroidism, thyroid malignancy, thyroid status
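The likelihood-ratio statistics quoted above come from contingency-table analysis (the G-test). A minimal sketch with an invented 2×3 table, not the study's counts, using SciPy's power-divergence option:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are thyroid status (euthyroid,
# hyperthyroid); columns are age groups (<40, 40-59, >59).
table = np.array([[18, 24, 2],
                  [2, 5, 5]])

# lambda_="log-likelihood" requests the likelihood-ratio (G)
# statistic instead of the default Pearson chi-square.
g, p, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
```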
Procedia PDF Downloads 403
9134 Dissolved Oxygen Prediction Using Support Vector Machine
Authors: Sorayya Malek, Mogeeb Mosleh, Sharifah M. Syed
Abstract:
In this study, the support vector machine (SVM) technique was applied to predict the discretized value of dissolved oxygen (DO) in two freshwater lakes, Chini and Bera Lake (Malaysia). The data sample contained 11 water quality parameters from 2005 until 2009. All parameters were used to predict the dissolved oxygen concentration, which was discretized into three levels (high, medium and low). The input parameters were ranked, and a forward selection method was applied to determine the optimum parameters yielding the lowest errors and highest accuracy. Initial results showed that pH, water temperature and conductivity are the most important parameters, significantly affecting the prediction of DO. An SVM model with the ANOVA kernel using those parameters then yielded a 74% accuracy rate. We conclude that using SVM models to predict DO is feasible and that using the discretized value of DO yields higher prediction accuracy than using the precise DO value.
Keywords: dissolved oxygen, water quality, prediction of DO, support vector machine
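The classify-into-levels workflow can be sketched with scikit-learn on synthetic data (the features, class thresholds and seed below are all invented, standing in for the study's measurements; scikit-learn does not ship an ANOVA kernel, so an RBF kernel is used as a stand-in):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the three most informative parameters
# (pH, water temperature, conductivity).
X = rng.normal(size=(300, 3))
# Synthetic DO signal driven mostly by the first feature, then cut
# into three levels, mimicking the discretized target.
do = X[:, 0] + 0.3 * rng.normal(size=300)
y = np.digitize(do, [-0.5, 0.5])  # 0 = Low, 1 = Medium, 2 = High

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```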
Procedia PDF Downloads 290
9133 Minimizing the Impact of Covariate Detection Limit in Logistic Regression
Authors: Shahadut Hossain, Jacek Wesolowski, Zahirul Hoque
Abstract:
In many epidemiological and environmental studies, covariate measurements are subject to a detection limit. In most applications, covariate measurements are truncated from below, known as left truncation, because the measuring device fails to detect values falling below a certain threshold. In regression analyses, this causes inflated bias and inaccurate mean squared error (MSE) in the estimators. This paper suggests a response-based regression calibration method to correct the deleterious impact of the covariate detection limit on the estimators of the parameters of a simple logistic regression model. Compared to the maximum likelihood method, the proposed method is computationally simpler and hence easier to implement, and it is robust to violation of the distributional assumption about the covariate of interest. The performance of the proposed method in producing correct inference, compared to the other competing methods, has been investigated through extensive simulations. A real-life application of the method is also shown using data from a population-based case-control study of non-Hodgkin lymphoma.
Keywords: environmental exposure, detection limit, left truncation, bias, ad-hoc substitution
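The ad-hoc substitution named in the keywords replaces non-detects with a fixed fraction of the limit, typically LOD/2 or LOD/√2. A hedged simulation sketch (the lognormal exposure model and the limit value are invented for illustration) shows how such substitution distorts even the covariate mean, which is the kind of distortion that calibration-type corrections aim to remove:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)  # true exposure
lod = 0.5                              # hypothetical detection limit
below = x < lod                        # non-detects (left truncation)

# Two common ad-hoc substitutions for values below the limit:
sub_half = np.where(below, lod / 2.0, x)
sub_sqrt = np.where(below, lod / np.sqrt(2.0), x)

# Bias in the covariate mean induced by each substitution rule:
bias_half = sub_half.mean() - x.mean()
bias_sqrt = sub_sqrt.mean() - x.mean()
```

For this exposure distribution, LOD/2 sits below the conditional mean of the censored values, so it biases the mean downward, while LOD/√2 sits above it and biases the mean upward; neither rule is unbiased in general.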
Procedia PDF Downloads 236