Search results for: standard normal variance
8457 Appropriate Depth of Needle Insertion during Rhomboid Major Trigger Point Block
Authors: Seongho Jang
Abstract:
Objective: To investigate an appropriate depth of needle insertion during trigger point injection into the rhomboid major muscle. Methods: Sixty-two patients who visited our department with shoulder or upper back pain participated in this study. The distance between the skin and the rhomboid major muscle (SM) and the distance between the skin and the rib (SB) were measured using ultrasonography. The subjects were divided into three groups according to BMI: BMI less than 23 kg/m² (underweight or normal group); 23 kg/m² or more to less than 25 kg/m² (overweight group); and 25 kg/m² or more (obese group). The mean ± standard deviation (SD) of SM and SB were calculated for each group. The range between the mean + 1 SD of SM and the mean − 1 SD of SB was defined as the safe margin. Results: The underweight or normal group’s SM, SB, and safe margin were 1.2±0.2, 2.1±0.4, and 1.4 to 1.7 cm, respectively. The overweight group’s SM and SB were 1.4±0.2 and 2.4±0.9 cm, respectively; a safe margin could not be calculated for this group. The obese group’s SM, SB, and safe margin were 1.8±0.3, 2.7±0.5, and 2.1 to 2.2 cm, respectively. Conclusion: These results help establish a standard depth for safe needle insertion into the rhomboid major muscle without causing complications.
Keywords: pneumothorax, rhomboid major muscle, trigger point injection, ultrasound
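The safe-margin rule above (lower bound: mean + 1 SD of SM; upper bound: mean − 1 SD of SB) can be checked numerically. The sketch below is illustrative only, not the authors' code; the `safe_margin` helper is a hypothetical name, and the hard-coded group statistics are the figures reported in this abstract.

```python
def safe_margin(sm_mean, sm_sd, sb_mean, sb_sd):
    """Return (lower, upper) needle-depth margin in cm, or None if empty.

    Lower bound: mean + 1 SD of the skin-to-muscle distance (SM).
    Upper bound: mean - 1 SD of the skin-to-rib distance (SB).
    """
    lower = sm_mean + sm_sd
    upper = sb_mean - sb_sd
    return (round(lower, 1), round(upper, 1)) if lower < upper else None

# Group statistics (cm) as reported in the abstract: (SM mean, SM SD, SB mean, SB SD)
groups = {
    "underweight/normal": (1.2, 0.2, 2.1, 0.4),
    "overweight":         (1.4, 0.2, 2.4, 0.9),
    "obese":              (1.8, 0.3, 2.7, 0.5),
}
margins = {name: safe_margin(*stats) for name, stats in groups.items()}
```

Reproducing the abstract: the overweight group yields no valid interval (lower bound exceeds upper bound), matching the report that its safe margin could not be calculated.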
Procedia PDF Downloads 290
8456 The Modality of Multivariate Skew Normal Mixture
Authors: Bader Alruwaili, Surajit Ray
Abstract:
Finite mixtures are a flexible and powerful tool for modeling univariate and multivariate distributions, and a wide range of research has been conducted on the multivariate normal mixture and the multivariate t-mixture. Determining the number of modes is an important task that, in turn, allows one to determine the number of homogeneous groups in a population. Our current work studies the modality of the skew normal distribution in the univariate and multivariate cases. For the skew normal distribution, the aims are to characterize its modality and to provide the ridgeline, the ridgeline elevation function, the $\Pi$ function, and the curvature function; these tools support exploring the number and location of modes when mixing two skew normal components. A subsequent objective is to apply these results to real-world data sets, such as flow cytometry data.
Keywords: mode, modality, multivariate skew normal, finite mixture, number of modes
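A quick way to see how mixing two skew normal components changes modality is to count local maxima of the mixture density on a fine grid. This is a hypothetical illustration, not the authors' ridgeline machinery: the density 2φ(z)Φ(αz) is implemented from the textbook definition with only the standard library, and the component locations and shapes are made up for the example.

```python
import math

def skew_normal_pdf(x, loc=0.0, scale=1.0, shape=0.0):
    """Univariate skew normal density 2/w * phi(z) * Phi(shape*z), z=(x-loc)/scale."""
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    big_phi = 0.5 * (1.0 + math.erf(shape * z / math.sqrt(2.0)))
    return 2.0 / scale * phi * big_phi

def count_modes(density, lo, hi, n=4001):
    """Count strict local maxima of `density` on an n-point grid over [lo, hi]."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [density(x) for x in xs]
    return sum(1 for i in range(1, n - 1) if ys[i - 1] < ys[i] > ys[i + 1])

# Equal-weight mixture of two well-separated skew normal components (illustrative)
def mixture(x):
    return 0.5 * skew_normal_pdf(x, loc=-3.0, shape=4.0) + \
           0.5 * skew_normal_pdf(x, loc=3.0, shape=-4.0)
```

With components this far apart the mixture is bimodal, while a single skew normal component is always unimodal; shrinking the separation lets one watch the two modes merge.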
Procedia PDF Downloads 488
8455 Secondary Traumatic Stress and Related Factors in Australian Social Workers and Psychologists
Authors: Cindy Davis, Samantha Rayner
Abstract:
Secondary traumatic stress (STS) is an indirect form of trauma affecting the psychological well-being of mental health workers, and it is a prevalent risk in mental health occupations. Various factors in the literature impact the development of STS, including the level of trauma individuals are exposed to and their level of empathy. Research on STS in mental health workers in Australia is limited; therefore, this study examined STS and the related factors of empathetic behavior and trauma caseload among mental health workers. The research used an online survey with a quantitative design and a purposive sample of 190 mental health workers (176 females) recruited via professional websites and unofficial social media groups. Participants completed an online questionnaire comprising demographics, the Secondary Traumatic Stress Scale, and the Empathy Scale for Social Workers. A standard hierarchical regression analysis was conducted to examine the significance of covariates, number of traumatized clients, traumatic stress within workload, and empathy in predicting STS. In this sample, 29.5% of participants met the criteria for a diagnosis of STS. Among the covariates, age and past trauma were significantly associated with STS. The number of traumatized clients significantly predicted 4.7% of the variance in STS, traumatic stress within workload significantly predicted 4.8%, and empathy significantly predicted 4.9%. These three independent variables and the covariates together accounted for 18.5% of the variance in STS. Practical implications include a focus on developing risk strategies and treatment methods that can diminish the impact of STS.
Keywords: mental health, PTSD, social work, trauma
Procedia PDF Downloads 332
8454 Applying Multivariate and Univariate Analysis of Variance on Socioeconomic, Health, and Security Variables in Jordan
Authors: Faisal G. Khamis, Ghaleb A. El-Refae
Abstract:
Many researchers have studied socioeconomic, health, and security variables in developed countries; however, very few studies have used multivariate analysis in developing countries. The current study contributes to the scarce literature on the determinants of variance in socioeconomic, health, and security factors. The questions raised were whether the independent variables (IVs) of governorate and year impact the socioeconomic, health, and security dependent variables (DVs) in Jordan; whether the marginal mean of each DV in each governorate and each year is significant; which governorates have similar means for each DV; and whether these DVs vary. The main objectives were to determine the sources of variance in the DVs, collectively and separately, and to test which governorates are similar and which diverge for each DV. The research design combined time series and cross-sectional analysis. The main hypotheses are that the IVs affect the DVs collectively and separately. Multivariate and univariate analyses of variance were carried out to test these hypotheses. The study covers all 12 governorates of Jordan, with 15 years of available data (2000–2015) compiled from several Jordanian statistical yearbooks. We investigated the effect of the two factors, governorate and year, on four DVs: divorce rate, mortality rate, unemployment percentage, and crime rate. All DVs were transformed to multivariate normality, and descriptive statistics were calculated for each DV. Based on the multivariate analysis of variance, we found a significant effect of the IVs on the DVs with p < .001. Based on the univariate analyses, we found a significant effect of the IVs on each DV with p < .001, except that the effect of the year factor on unemployment was not significant (p = .642). The grand and marginal means of each DV in each governorate and each year were significant based on a 95% confidence interval. Most governorates are not similar in the DVs (p < .001).
We concluded that the two factors produce significant effects on the DVs, collectively and separately. Based on these findings, the government can distribute its financial and physical resources to governorates more efficiently. By identifying the sources of variance that contribute to variation in the DVs, these insights can inform targeted interventions.
Keywords: ANOVA, crime, divorce, governorate, hypothesis test, Jordan, MANOVA, means, mortality, unemployment, year
Procedia PDF Downloads 275
8453 The Comparison of Parental Childrearing Styles and Anxiety in Children with Stuttering and Normal Population
Authors: Pegah Farokhzad
Abstract:
Family has a crucial role in maintaining the physical, social, and mental health of children. Many of children’s mental and anxiety problems reflect complex interpersonal situations among family members, especially parents. In other words, children’s anxiety problems are correlated with deficient relationships among family members and improper child rearing styles. Parental child rearing styles lead to positive and negative consequences that affect children’s mental health. Therefore, the present research compared parental child rearing styles and anxiety between children who stutter and the normal population, and also examined the relationship between parental child rearing styles and children’s anxiety. The sample included 54 boys with stuttering and 54 normal boys, selected from boys in Tehran, Iran, aged 5 to 8 years, in 2013. Data were collected with the Baumrind Child Rearing Styles Inventory and the Spence Parental Anxiety Inventory. Descriptive statistics, multivariate analysis of variance, and t-tests for independent groups were used to test the study hypotheses. The analyses showed a significant difference between stuttering boys and normal boys in anxiety (t = 7.601, p < 0.01), but no significant difference in parental child rearing styles (F = 0.129). No significant relationship was found between parental child rearing styles and children’s anxiety (F = 0.135, p > 0.05). It can be concluded that children are influenced by parents, school, teachers, peers, and media; parental child rearing styles are not the only factor in children’s anxiety, and other factors, including genetics, environment, and the child’s experiences, contribute as well.
Details are discussed.
Keywords: child rearing styles, anxiety, stuttering, Iran
Procedia PDF Downloads 501
8452 CFD Study for Normal and Rifled Tube with a Convergence Check
Authors: Sharfi Dirar, Shihab Elhaj, Ahmed El Fatih
Abstract:
Computational fluid dynamics was used to simulate and study a heated water boiler tube, for both a normal and a rifled tube, with mesh refinement to check convergence. The operating conditions were taken from the GARRI power station and used to set the boundary conditions accordingly. The results indicate that the rifled tube has higher heat transfer efficiency than the normal tube.
Keywords: boiler tube, convergence check, normal tube, rifled tube
Procedia PDF Downloads 334
8451 A Comparative Study of Cognitive Functions in Relapsing-Remitting Multiple Sclerosis Patients, Secondary-Progressive Multiple Sclerosis Patients and Normal People
Authors: Alireza Pirkhaefi
Abstract:
Background: Multiple sclerosis (MS) is one of the most common diseases of the central nervous system (brain and spinal cord). Given the importance of cognitive disorders in patients with multiple sclerosis, the present study compared cognitive functions (working memory, attention and concentration, and visual-spatial perception) in patients with relapsing-remitting multiple sclerosis (RRMS) and secondary progressive multiple sclerosis (SPMS). Method: This retrospective study used an ex post facto design. The sample consisted of 60 patients with multiple sclerosis (30 relapsing-remitting and 30 secondary progressive), recruited as a convenience sample from a community of supported MS patients in Tehran, and 30 normal persons selected as a comparison group. The Montreal Cognitive Assessment (MoCA) was used to assess cognitive functions, and data were analyzed using multivariate analysis of variance. Results: There were significant differences in cognitive functioning among patients with RRMS, patients with SPMS, and normal individuals. There were no significant differences in working memory between the RRMS and SPMS groups, while both patient groups differed significantly from normal individuals. Significant differences in attention and concentration and in visual-spatial perception were found among all three groups. Conclusions: Cognitive functions differ between RRMS and SPMS patients, with RRMS patients performing better than SPMS patients. These results can inform efforts to improve cognitive function, reduce disability due to cognitive impairment, and promote the overall health of society.
Keywords: multiple sclerosis, cognitive function, secondary-progressive, normal subjects
Procedia PDF Downloads 239
8450 Methods of Variance Estimation in Two-Phase Sampling
Authors: Raghunath Arnab
Abstract:
Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and only information on the auxiliary variable is collected. In the second phase, a sample is selected, either from the first-phase sample or from the entire population, by a suitable sampling design, and information on both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is easier and cheaper to collect than the study variable, and if the relationship between the auxiliary and study variables is strong. If the sample is selected in more than two phases, the resulting design is called multi-phase sampling. In this article, we consider how data collected in the first phase can be used in the second phase for parameter estimation, stratification, sample selection, and their combinations, in a unified setup applicable to any sampling design and to wide classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative; a negative variance estimate cannot be used for confidence intervals, hypothesis testing, or as a measure of sampling error. The non-negativity properties of the variance estimators are therefore also studied in detail.
Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators
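The use of first-phase auxiliary data in the second phase can be illustrated with the textbook double-sampling ratio estimator and one common large-sample variance estimate, not the unified setup of this article; `double_sampling_ratio` and `ratio_variance_estimate` are illustrative names, and the variance formula is the standard Cochran-style approximation under simple random sampling.

```python
import statistics

def double_sampling_ratio(x_phase1, pairs):
    """Double-sampling ratio estimate of the population mean of y.

    x_phase1 : auxiliary x-values from the large first-phase sample
    pairs    : (x, y) pairs from the second-phase subsample
    """
    xbar1 = statistics.fmean(x_phase1)
    xbar2 = statistics.fmean(x for x, _ in pairs)
    ybar2 = statistics.fmean(y for _, y in pairs)
    return ybar2 / xbar2 * xbar1          # ratio R-hat scaled by first-phase mean

def ratio_variance_estimate(x_phase1, pairs):
    """Large-sample variance estimate S_y^2/n' + S_d^2 (1/n - 1/n'), d = y - R-hat x."""
    n1, n = len(x_phase1), len(pairs)
    xbar2 = statistics.fmean(x for x, _ in pairs)
    ybar2 = statistics.fmean(y for _, y in pairs)
    r = ybar2 / xbar2
    s_y2 = statistics.variance([y for _, y in pairs])
    s_d2 = statistics.variance([y - r * x for x, y in pairs])   # residual variance
    return s_y2 / n1 + s_d2 * (1.0 / n - 1.0 / n1)
```

When y is exactly proportional to x, the residuals vanish and the variance reduces to the first-phase term alone, which is the intuition for why a strong x–y relationship makes double sampling pay off.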
Procedia PDF Downloads 588
8449 Health-Related QOL of Motorists with Spinal Cord Injury in Japan
Authors: Hiroaki Hirose, Hiroshi Ikeda, Isao Takeda
Abstract:
The Japanese version of the SF-36 has been employed to assess individuals’ health-related QOL (HRQOL). This study aimed to clarify the HRQOL of motorists with a spinal cord injury and to compare these individuals’ SF-36 scores with national standard values. A total of 100 motorists with a spinal cord injury participated in this study. Participants’ HRQOL was evaluated using the Japanese version of the SF-36 (second edition), and the score for each subscale was standardized based on data for the Japanese population. The average scores for NPF, NRP, NBP, NGH, NVT, NSF, NRE, and NMH were 10.9, 41.8, 45.9, 47.1, 46.1, 46.7, 46.0, and 47.4 points, respectively. Subjects showed significantly lower scores for NPF and NRP compared with national standard values (both ≤ 45.0 points), but relatively normal scores for the other items: NBP, NGH, NVT, NSF, NRE, and NMH (> 45.0 points). The average scores for PCS, MCS, and RCS were 21.9, 56.0, and 50.0 points, respectively. Subjects showed a significantly lower PCS score (≤ 20.0 points); however, the MCS score was higher (> 55.0 points), along with a relatively normal RCS score (= 50.0 points).
Keywords: health-related QOL, HRQOL, SF-36, motorist, spinal cord injury, Japan
Procedia PDF Downloads 334
8448 The Evaluation of the Performance of Different Filtering Approaches in Tracking Problem and the Effect of Noise Variance
Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri
Abstract:
The performance of different filtering approaches depends on the modeling of the dynamical system and on the algorithm structure. For modeling and smoothing the data, the evaluation of the posterior distribution in each filtering approach should be chosen carefully. In this paper, different filtering approaches, such as the Kalman filter, the EKF, the UKF, the EKS, and the RTS smoother, are simulated on trajectory-tracking problems, and the accuracy and limitations of these approaches are explained. The model probability under the different filters is then compared, and finally the effect of the noise variance on estimation is described with simulation results.
Keywords: Gaussian approximation, Kalman smoother, parameter estimation, noise variance
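A minimal one-dimensional Kalman filter makes the role of the noise variances concrete: the ratio of process variance q to measurement variance r sets the gain, and a larger r slows the response. This is a generic random-walk sketch, not the paper's simulation setup; the model and parameter values are assumptions for illustration.

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: x_t = x_{t-1} + w_t, z_t = x_t + v_t.

    q : process noise variance (Var of w_t)
    r : measurement noise variance (Var of v_t)
    Returns the sequence of filtered state estimates.
    """
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q                 # predict: prior variance grows by q
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the innovation z - x
        p = (1.0 - k) * p         # posterior variance
        out.append(x)
    return out
```

Feeding a constant measurement shows the variance trade-off directly: with r = 1 the estimate locks onto the signal quickly, while with r = 10 the same filter trusts its model more and converges more slowly.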
Procedia PDF Downloads 439
8447 A Mean–Variance–Skewness Portfolio Optimization Model
Authors: Kostas Metaxiotis
Abstract:
Portfolio optimization is one of the most important topics in finance. This paper proposes a mean–variance–skewness (MVS) portfolio optimization model. Traditionally, the portfolio optimization problem is solved within the mean–variance (MV) framework. In this study, we formulate the proposed model as a three-objective optimization problem in which the portfolio’s expected return and skewness are maximized while the portfolio risk is minimized. To solve the proposed three-objective portfolio optimization model, we apply an adapted version of the non-dominated sorting genetic algorithm (NSGA-II). Finally, we use a real dataset from the FTSE 100 to validate the proposed model.
Keywords: evolutionary algorithms, portfolio optimization, skewness, stock selection
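Before any multi-objective search, the three objectives must be evaluated for candidate weights. The sketch below computes portfolio mean, variance, and skewness over equally likely return scenarios; the scenario representation and the `portfolio_stats` helper are illustrative assumptions, and the NSGA-II search itself is omitted.

```python
import statistics

def portfolio_stats(weights, scenarios):
    """Mean, variance, and skewness of portfolio returns over equally likely
    scenarios. `scenarios` is a list of per-asset return vectors."""
    rets = [sum(w * r for w, r in zip(weights, s)) for s in scenarios]
    mu = statistics.fmean(rets)
    var = statistics.pvariance(rets, mu)          # population variance
    sd = var ** 0.5
    # Population skewness: mean of cubed standardized returns
    skew = statistics.fmean([((r - mu) / sd) ** 3 for r in rets]) if sd else 0.0
    return mu, var, skew
```

An MVS optimizer would maximize `mu` and `skew` while minimizing `var`; a symmetric return distribution gives zero skewness, and a long right tail gives positive skewness, which the model prefers at equal mean and variance.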
Procedia PDF Downloads 198
8446 Rational Probabilistic Method for Calculating Thermal Cracking Risk of Mass Concrete Structures
Authors: Naoyuki Sugihashi, Toshiharu Kishi
Abstract:
In Japan, the probability of occurrence of thermal cracks in mass concrete is evaluated with a cracking probability diagram that represents the relationship between the thermal cracking index and the probability of occurrence of cracks in the actual structure. In this paper, we propose a method to calculate the cracking probability directly, following probabilistic theory, by modeling the variance of tensile stress and tensile strength. In this method, the relationships among the variances of tensile stress and tensile strength, the thermal cracking index, and the cracking probability are formulated and presented. In addition, the standard deviations of tensile stress and tensile strength were identified, and a method for calculating the cracking probability under generally controlled construction conditions was demonstrated.
Keywords: thermal crack control, mass concrete, thermal cracking probability, durability of concrete, calculating method of cracking probability
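Under the common stress–strength interference assumption (independent, normally distributed tensile stress and tensile strength), the cracking probability has a closed form: P(stress > strength) = 1 − Φ((μ_strength − μ_stress)/√(σ_stress² + σ_strength²)). The sketch below uses this textbook formulation, which is an assumption here, not necessarily the authors' exact model.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cracking_probability(mu_stress, sd_stress, mu_strength, sd_strength):
    """P(stress > strength) for independent normal stress and strength (MPa)."""
    z = (mu_strength - mu_stress) / math.hypot(sd_stress, sd_strength)
    return 1.0 - normal_cdf(z)
```

For example, with equal mean stress and strength the cracking probability is 50% regardless of the variances, and raising the mean strength to 1.5 times the mean stress (a thermal cracking index of 1.5, in the usual strength/stress sense) drops it to a few percent for the spreads chosen below.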
Procedia PDF Downloads 346
8445 An Approach to Noise Variance Estimation in Very Low Signal-to-Noise Ratio Stochastic Signals
Authors: Miljan B. Petrović, Dušan B. Petrović, Goran S. Nikolić
Abstract:
This paper describes a method for AWGN (additive white Gaussian noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the original signal structure. MATLAB simulations and analysis of results for the method applied to speech signals showed higher accuracy than the standard AR (autoregressive) modeling noise estimation technique. In addition, strong performance was observed at very low signal-to-noise ratios, which in general represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of the observed features of the proposed algorithm, it was concluded that the method is worth exploring and that, with some further adjustments and improvements, it can be notably powerful.
Keywords: noise, signal-to-noise ratio, stochastic signals, variance estimation
Procedia PDF Downloads 386
8444 Existence of Rational Primitive Normal Pairs with Prescribed Norm and Trace
Authors: Soniya Takshak, R. K. Sharma
Abstract:
Let q be a prime power and n a positive integer; F_q denotes the finite field of q elements, and F_{q^n} denotes the extension of F_q of degree n. Also, F_q* represents the multiplicative group of non-zero elements of F_q, and the generators of F_q* are called primitive elements. A normal element α of a finite field F_{q^n} is one such that {α, α^q, ..., α^(q^(n−1))} forms a basis for F_{q^n} over F_q. Primitive normal elements have several applications in coding theory and cryptography, so establishing the existence of primitive normal elements under certain conditions is both theoretically important and a natural problem. In this article, we provide a sufficient condition for the existence of a primitive normal element α in F_{q^n} with prescribed primitive norm and non-zero trace over F_q such that f(α) is also primitive, where f(x) ∈ F_{q^n}(x) is a rational function of degree sum m. In particular, we investigated rational functions of degree sum 4 over F_{q^n}, where q = 11^k, and demonstrated that there are only 3 exceptional pairs (q, n), n ≥ 7, for which such primitive normal elements may not exist. In general, we show that such elements always exist except for finitely many choices of (q, n). To arrive at our conclusion, we used additive and multiplicative character sums.
Keywords: finite field, primitive element, normal element, norm, trace, character
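The notion of a primitive element is easy to exercise in a small prime field: a generator of F_p* has multiplicative order p − 1. The brute-force sketch below covers only prime fields (no extension fields, normal bases, or character sums) and is purely illustrative.

```python
def primitive_elements(p):
    """Generators of the multiplicative group F_p* for a prime p > 2.

    An element g is primitive iff its multiplicative order equals p - 1.
    """
    def order(g):
        x, k = g, 1
        while x != 1:
            x = x * g % p
            k += 1
        return k
    return [g for g in range(2, p) if order(g) == p - 1]
```

For p = 11 (the base field of the abstract's q = 11^k case), this enumerates the φ(10) = 4 primitive elements; a character-sum argument replaces such enumeration when the field is too large to search.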
Procedia PDF Downloads 104
8443 EWMA and MEWMA Control Charts for Monitoring Mean and Variance in Industrial Processes
Authors: L. A. Toro, N. Prieto, J. J. Vargas
Abstract:
There are many control charts for monitoring mean and variance. Among these, the X̄-R, X̄-S, S², Hotelling, and Shewhart control charts, to mention some, are widely used for monitoring mean and variance in industrial processes. In particular, Shewhart charts are based only on the information about the process contained in the current observation and ignore any information given by the entire sequence of points; that is, the Shewhart chart is a control chart without memory. Consequently, Shewhart control charts are less sensitive in detecting smaller shifts, particularly shifts smaller than 1.5 standard deviations, and such small shifts are important in many industrial applications. In this study, an effective alternative to the Shewhart control chart was implemented: for univariate processes, an exponentially weighted moving average (EWMA) control chart, and for multivariate processes, a multivariate exponentially weighted moving average (MEWMA) control chart. Both of these charts have memory and perform better than the Shewhart chart in detecting smaller shifts. In these charts, information from past samples is accumulated up to the current sample, and only then is the decision about process control taken. This characteristic of the EWMA and MEWMA charts is of paramount importance when controlling industrial processes, because it makes it possible to correct or predict problems in the processes before they reach a dangerous limit.
Keywords: control charts, multivariate exponentially weighted moving average (MEWMA), exponentially weighted moving average (EWMA), industrial control process
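The univariate EWMA recursion and its time-varying control limits can be sketched in a few lines; λ = 0.2 and L = 3 are conventional illustrative choices, not values from this abstract.

```python
def ewma(xs, lam=0.2, target=0.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}, starting at z_0 = target."""
    z, out = target, []
    for x in xs:
        z = lam * x + (1.0 - lam) * z
        out.append(z)
    return out

def ewma_limits(t, sigma, lam=0.2, L=3.0, target=0.0):
    """(LCL, UCL) at sample t = 1, 2, ... for in-control standard deviation sigma."""
    var = sigma * sigma * lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t))
    half = L * var ** 0.5
    return target - half, target + half
```

This is where the memory shows: a sustained 2σ shift that individual Shewhart points might miss pushes the accumulated EWMA statistic outside its limits within a few samples.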
Procedia PDF Downloads 354
8442 Detection of Internal Mold Infection of Intact Tomatoes by Non-Destructive, Transmittance VIS-NIR Spectroscopy
Authors: K. Petcharaporn
Abstract:
External characteristics of tomatoes, such as freshness, color, and size, are typically used in quality control processes for tomato sorting. However, internal mold infection of an intact tomato cannot be detected by visual assessment, and destructive methods are unsuitable for sorting. In this study, a non-destructive technique was used to predict internal mold infection of intact tomatoes using transmittance visible and near-infrared (VIS-NIR) spectroscopy. Spectra for 200 samples (100 normal tomatoes and 100 mold-infected tomatoes) were acquired in the wavelength range 665–955 nm. These data were used with the partial least squares-discriminant analysis (PLS-DA) method to build a classification model separating normal from internally mold-infected tomato samples. For this task, the data were split into a training set of 140 samples and a test set of 60 samples. The spectra of normal and internally mold-infected tomatoes showed different features in the visible wavelength range. Combined spectral pretreatments of standard normal variate transformation (SNV) and smoothing (Savitzky-Golay) gave the optimal calibration model on the training set, 85.0% (63 out of 71 for the normal samples and 56 out of 69 for the internal mold samples). The classification accuracy of the best model on the test set was 91.7% (29 out of 29 for the normal samples and 26 out of 31 for the internal mold samples). These results show that transmittance VIS-NIR spectroscopy can be used as a non-destructive technique to predict internal mold infection of intact tomatoes.
Keywords: tomato, mold, quality, prediction, transmittance
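The SNV pretreatment mentioned above is simple to state: each spectrum is centered to zero mean and scaled to unit standard deviation, removing per-sample baseline and scatter effects. A minimal sketch, assuming the sample (n − 1) standard deviation; the Savitzky-Golay smoothing and the PLS-DA model are not reproduced here.

```python
import statistics

def snv(spectrum):
    """Standard normal variate: center a single spectrum to mean 0, scale to SD 1."""
    mu = statistics.fmean(spectrum)
    sd = statistics.stdev(spectrum)        # sample standard deviation (n - 1)
    return [(v - mu) / sd for v in spectrum]
```

After SNV, every spectrum is on the same scale, so the downstream classifier responds to band shapes rather than overall intensity differences between fruits.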
Procedia PDF Downloads 362
8441 Detection of Internal Mold Infection of Intact Tomatoes by Non-Destructive, Transmittance VIS-NIR Spectroscopy
Authors: K. Petcharaporn, N. Prathengjit
Abstract:
External characteristics of tomatoes, such as freshness, color, and size, are typically used in quality control processes for tomato sorting. However, internal mold infection of an intact tomato cannot be detected by visual assessment, and destructive methods are unsuitable for sorting. In this study, a non-destructive technique was used to predict internal mold infection of intact tomatoes using transmittance visible and near-infrared (VIS-NIR) spectroscopy. Spectra for 200 samples (100 normal tomatoes and 100 mold-infected tomatoes) were acquired in the wavelength range 665–955 nm. These data were used with the partial least squares-discriminant analysis (PLS-DA) method to build a classification model separating normal from internally mold-infected tomato samples. For this task, the data were split into a training set of 140 samples and a test set of 60 samples. The spectra of normal and internally mold-infected tomatoes showed different features in the visible wavelength range. Combined spectral pretreatments of standard normal variate transformation (SNV) and smoothing (Savitzky-Golay) gave the optimal calibration model on the training set, 85.0% (63 out of 71 for the normal samples and 56 out of 69 for the internal mold samples). The classification accuracy of the best model on the test set was 91.7% (29 out of 29 for the normal samples and 26 out of 31 for the internal mold samples). These results show that transmittance VIS-NIR spectroscopy can be used as a non-destructive technique to predict internal mold infection of intact tomatoes.
Keywords: tomato, mold, quality, prediction, transmittance
Procedia PDF Downloads 519
8440 Children’s Concept of Forgiveness
Authors: Lida Landicho, Analiza R. Adarlo, Janine Mae V. Corpuz, Joan C. Villanueva
Abstract:
Testing the idea that the process of forgiveness is intrinsically different across diverse relationships, this study examined whether forgiveness can already be facilitated in children ages 4–6. Two intervention sessions were completed with 40 children; half heard stories about unfair blame and half heard stories about a double standard (a between-subjects variable). The investigators performed experimental analyses to examine the role of forgiveness in social and familial contexts. Results indicated that forgiveness can already be facilitated in children. Children saw double-standard scenarios as more unfair than the comparison scenarios (Scenario 2, double standard, M = 7.54, vs. Scenario 1, unfair blame, M = 4.50; Scenario 4, double standard, M = 7., vs. Scenario 3, getting blamed for something the friend did, M = 6.80), p < .05. The findings confirmed that children were generally willing to grant forgiveness to a mother even when she was unfair, but less so to a friend. Correlations among sex, age, and forgiveness were analyzed. A significant relationship was found between scenario scores and caring-task scores (r_xy = −.314). The tendency to forgive was related to both dispositional and situational factors.
Keywords: forgiveness, situational and dispositional factors, familial context, social context
Procedia PDF Downloads 425
8439 Portfolio Optimization under a Hybrid Stochastic Volatility and Constant Elasticity of Variance Model
Authors: Jai Heui Kim, Sotheara Veng
Abstract:
This paper studies the portfolio optimization problem for a pension fund under a hybrid model of stochastic volatility and constant elasticity of variance (CEV), using an asymptotic analysis method. When the volatility component is fast mean-reverting, asymptotic approximations can be derived for the value function and the optimal strategy for general utility functions. Explicit solutions are given for the exponential and hyperbolic absolute risk aversion (HARA) utility functions. The study also shows that using the leading-order optimal strategy reproduces the value function not only to leading order but also up to the first-order correction term. A practical strategy that does not depend on the unobservable volatility level is suggested. The result is an extension of Merton's solution to the case where stochastic volatility and elasticity of variance are considered simultaneously.
Keywords: asymptotic analysis, constant elasticity of variance, portfolio optimization, stochastic optimal control, stochastic volatility
Procedia PDF Downloads 299
8438 The Effect of "Trait" Variance of Personality on Depression: Application of the Trait-State-Occasion Modeling
Authors: Pei-Chen Wu
Abstract:
Both existing cross-sectional and longitudinal studies of the personality-depression relationship have suffered from one main limitation: they ignore that the stability of the constructs of interest (e.g., personality and depression) can be expected to influence the estimate of the association between personality and depression. To address this limitation, Trait-State-Occasion (TSO) modeling was adopted to analyze the sources of variance of the focal constructs. TSO modeling partitions a state variance into time-invariant (trait) and time-variant (occasion) components. Within a TSO framework, it is possible to predict change in the part of a construct that really changes (i.e., the time-variant variance) while controlling the trait variance. A total of 750 high school students were followed for four waves at six-month intervals. The baseline data (T1) were collected in senior high schools (ages 14 to 15 years). Participants completed the Beck Depression Inventory and the Big Five Inventory at each assessment. TSO modeling revealed that 70–78% of the variance in personality (five constructs) was stable over the follow-up period, whereas 57–61% of the variance in depression was stable. For the personality constructs, 7.6% to 8.4% of the total variance came from the autoregressive occasion factors; for the depression construct, 15.2% to 18.1% did. Additionally, when controlling for initial symptom severity, the time-invariant components of all five dimensions of personality predicted change in depression (Extraversion: B = .32, Openness: B = −.21, Agreeableness: B = −.27, Conscientiousness: B = −.36, Neuroticism: B = .39). Because the five dimensions of personality share some variance, models in which all five dimensions simultaneously predicted change in depression were also investigated.
The time-invariant components of the five dimensions remained significant predictors of change in depression (Extraversion: B = .30, Openness: B = −.24, Agreeableness: B = −.28, Conscientiousness: B = −.35, Neuroticism: B = .42). In sum, the majority of the variability in personality was stable over two years. Individuals with a greater tendency toward Extraversion and Neuroticism had higher levels of depression; individuals with a greater tendency toward Openness, Agreeableness, and Conscientiousness had lower levels.
Keywords: assessment, depression, personality, trait-state-occasion model
Procedia PDF Downloads 175
8437 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction
Authors: Bastien Batardière, Joon Kwon
Abstract:
For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate based on past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance that has contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
Keywords: convex optimization, variance reduction, adaptive algorithms, loopless
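As a concrete illustration of the loopless variance-reduced estimators this abstract builds on, below is a minimal L-SVRG sketch for a least-squares finite sum. The estimator g_i(w) − g_i(w_ref) + μ reuses a full gradient μ computed at a reference point that is refreshed by a coin flip (the "loopless" part). The constant step size, refresh probability, and plain SGD-style update are illustrative assumptions; this is not the AdaLVR algorithm itself, which additionally applies AdaGrad-type step sizes.

```python
import numpy as np

def l_svrg(A, b, step=0.1, p=0.1, iters=2000, seed=0):
    """Loopless SVRG sketch for the finite sum f_i(w) = 0.5*(a_i.w - b_i)**2.
    Hyperparameters are illustrative, not tuned."""
    n, d = A.shape
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    w_ref = w.copy()
    mu = A.T @ (A @ w_ref - b) / n          # full gradient at reference point
    for _ in range(iters):
        i = rng.integers(n)
        g = A[i] * (A[i] @ w - b[i])        # stochastic gradient at w
        g_ref = A[i] * (A[i] @ w_ref - b[i])  # same sample at the reference
        w = w - step * (g - g_ref + mu)     # variance-reduced update
        if rng.random() < p:                # loopless: coin-flip refresh
            w_ref = w.copy()
            mu = A.T @ (A @ w_ref - b) / n
    return w
```

Because the correction term g − g_ref + μ is unbiased and its variance vanishes as w and w_ref approach the optimum, a constant step size yields linear convergence on strongly convex problems, unlike plain SGD.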
Procedia PDF Downloads 70
8436 Numerical Solution of Manning's Equation in Rectangular Channels
Authors: Abdulrahman Abdulrahman
Abstract:
When the Manning equation is used, a unique value of normal depth in uniform flow exists for a given channel geometry, discharge, roughness, and slope. Depending on the value of the normal depth relative to the critical depth, the flow type (supercritical or subcritical) for a given set of channel conditions is determined, whether or not the flow is uniform. There is no general closed-form solution of Manning's equation for the flow depth at a given flow rate, because the cross-sectional area and the hydraulic radius are complicated functions of depth. The familiar solutions for normal depth in a rectangular channel involve 1) a trial-and-error solution; 2) constructing a non-dimensional graph; or 3) preparing tables of non-dimensional parameters. In this paper, the author derives a semi-analytical solution to Manning's equation for the flow depth at a given flow rate in a rectangular open channel. The solution was derived by expressing Manning's equation in non-dimensional form and then expanding this form using Maclaurin's series. To simplify the solution, only terms containing powers up to 4 were retained. The resulting equation is a quartic in standard form, whose solution was obtained by resolving it into two quadratic factors. The proposed solution is valid over a large range of parameters, and its maximum error is within -1.586%.
Keywords: channel design, civil engineering, hydraulic engineering, open channel flow, Manning's equation, normal depth, uniform flow
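The trial-and-error route mentioned in the abstract is easy to automate. Below is a hedged sketch that brackets and bisects the Manning relation Q = (1/n)·A·R^(2/3)·√S for a rectangular section, with A = b·y and R = b·y/(b + 2y), in SI units. This is the classic iterative approach the paper seeks to replace, not the author's quartic series solution.

```python
import math

def normal_depth(Q, b, n, S):
    """Normal depth y (m) in a rectangular channel from Manning's equation,
    Q = (1/n) * A * R**(2/3) * sqrt(S), solved by bisection.
    Q: discharge (m^3/s), b: bed width (m), n: Manning roughness, S: slope."""
    def discharge(y):
        A = b * y                     # flow area
        R = A / (b + 2 * y)           # hydraulic radius (wetted perimeter b+2y)
        return A * R ** (2 / 3) * math.sqrt(S) / n

    lo, hi = 1e-9, 1.0
    while discharge(hi) < Q:          # grow the bracket until Q is exceeded
        hi *= 2.0
    for _ in range(100):              # bisect; discharge increases with depth
        mid = 0.5 * (lo + hi)
        if discharge(mid) < Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is slower than the paper's closed-form quartic but is unconditionally convergent, since the conveyance is monotone in depth for a rectangular section.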
Procedia PDF Downloads 221
8435 Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance
Authors: Nada Jasim Habeeb, Rana Saad Mohammed, Muntaha Khudair Abbass
Abstract:
For more efficient and fast video summarization, this paper presents a surveillance video summarization method that improves on existing techniques. The method depends on temporal differencing to extract the most important data from a large video stream. It uses histogram differencing and the sum of conditional variance, which is robust to illumination variations, in order to extract moving objects. The experimental results showed that the presented method gives better output than temporal-differencing-based summarization techniques.
Keywords: temporal differencing, video summarization, histogram differencing, sum conditional variance
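The histogram-differencing step can be sketched as follows: compute a grayscale histogram per frame, score consecutive (or last-kept vs. current) frames by the sum of absolute bin-wise differences, and keep frames whose score exceeds a threshold. The bin count and threshold here are assumptions for illustration, not the paper's settings, and the sum-of-conditional-variance stage is omitted.

```python
import numpy as np

def histogram_difference(frame_a, frame_b, bins=64):
    """Sum of absolute bin-wise differences between grayscale histograms
    of two frames (arrays of intensities in [0, 256))."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    return np.abs(ha - hb).sum()

def select_keyframes(frames, threshold):
    """Keep the first frame, then any frame whose histogram differs
    from the previously kept frame by more than the threshold."""
    keep = [0]
    for i in range(1, len(frames)):
        if histogram_difference(frames[keep[-1]], frames[i]) > threshold:
            keep.append(i)
    return keep
```

Comparing against the last kept frame (rather than the immediately preceding one) prevents slow gradual changes from being missed entirely, at the cost of keeping more frames during drifts.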
Procedia PDF Downloads 348
8434 Comparing Abused and Normal Male Students in Tehran Guidance Schools: Emphasizing the Co-Dependency of Their Mothers
Authors: Mohamad Saleh Sangin Ostadi, Esmail Safari, Somayeh Akbari, Kaveh Qaderi Bagajan
Abstract:
The aim of this study is to compare abused and normal male students in Tehran guidance schools, with emphasis on the co-dependency of their mothers. The study uses a survey-based, ex post facto comparison design with multi-stage cluster sampling. Accordingly, we sampled 12 guidance schools in Tehran, covering the first, second, and third levels; the schools represent high, medium, and low socioeconomic conditions. Three classes from every school and 20 students from each class were then randomly selected. Using the CTQ, abused and normal students were separated: 670 children were classified as normal and 50 as abused. Then, 50 children were randomly selected from the normal group and compared with the abused group. Using the Spann-Fischer Co-dependency Scale, we compared the mothers of the abused and normal students. The results showed that the mothers of the abused children have higher mean co-dependency scores than the mothers of the normal children.
Keywords: co-dependency, child abuse, abused children, parental psychological health
Procedia PDF Downloads 338
8433 An Automated Procedure for Estimating the Glomerular Filtration Rate and Determining the Normality or Abnormality of the Kidney Stages Using an Artificial Neural Network
Authors: Hossain A., Chowdhury S. I.
Abstract:
Introduction: The use of a gamma camera is a standard procedure in nuclear medicine facilities or hospitals to diagnose chronic kidney disease (CKD), but the gamma camera does not precisely stage the disease. The authors sought to determine whether an artificial neural network (ANN) could classify CKD as normal or abnormal based on glomerular filtration rate (GFR) values. Method: 250 kidney patients (training 188, testing 62) who underwent renal ultrasonography in our nuclear medicine center were scanned using a gamma camera. Before the scanning procedure, the patients received an injection of ⁹⁹ᵐTc-DTPA. The gamma camera computes the pre- and post-syringe radioactive counts after the injection has been pushed into the patient's vein. The artificial neural network uses the softmax function with cross-entropy loss to determine whether CKD is normal or abnormal based on the GFR value in the output layer. Results: The proposed ANN model had 99.20% accuracy according to K-fold cross-validation. The sensitivity and specificity were 99.10% and 99.20%, respectively. The AUC was 0.994. Conclusion: The proposed model can distinguish between normal and abnormal stages of CKD using an artificial neural network. The gamma camera could be upgraded to diagnose normal or abnormal stages of CKD with an appropriate GFR value following clinical application of the proposed model.
Keywords: artificial neural network, glomerular filtration rate, stages of the kidney, gamma camera
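The softmax-with-cross-entropy output stage the abstract describes can be sketched with a one-layer classifier trained by gradient descent. The network architecture, learning rate, and synthetic GFR-like inputs below are assumptions for illustration; the paper's actual network and data are not reproduced.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with a max shift for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax_classifier(x, y, lr=0.5, epochs=2000, seed=0):
    """One-layer softmax classifier trained with full-batch gradient descent.
    x: (n, 1) features, y: (n,) integer labels in {0, 1}.
    (p - onehot)/n is the gradient of the mean cross-entropy loss
    with respect to the logits."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(1, 2))
    b = np.zeros(2)
    onehot = np.eye(2)[y]
    n = len(y)
    for _ in range(epochs):
        p = softmax(x @ W + b)
        grad = (p - onehot) / n
        W -= lr * (x.T @ grad)
        b -= lr * grad.sum(axis=0)
    return W, b
```

For a two-class output this is equivalent to logistic regression; the softmax form generalizes directly if more CKD stages were added to the output layer.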
Procedia PDF Downloads 103
8432 Based on MR Spectroscopy, Metabolite Ratio Analysis of MRI Images for Metastatic Lesion
Authors: Hossain A, Hossain S.
Abstract:
Introduction: In a small cohort, we sought to assess the ability of magnetic resonance spectroscopy (MRS) to predict the presence of metastatic lesions. Method: Popular Diagnostic Centre Limited enrolled patients with neuroepithelial tumors. 1H CSI MRS of the brain allows us to detect changes in the concentration of specific metabolites caused by metastatic lesions, among them N-acetyl-aspartate (NAA), creatine (Cr), and choline (Cho). For Cho, NAA, Cr, and Cr₂, the metabolite ratios were calculated using the division method. Results: The NAA values were 0.63 and 5.65 for tumor cells, 1.86 and 5.66 for normal cells 1, and 1.86 and 5.66 for normal cells 2. NAA values were 1.84 and 10.6 for normal cells 1 and 1.86 for normal cells 2. Cho levels were 0.8 and 10.53 in the tumor cells, compared with 1.12 and 2.7 in normal cells 1 and 1.24 and 6.36 in normal cells 2. Cho/Cr₂ barely distinguished itself from the other ratios in terms of significance. For tumor cells, the ratios Cho/NAA, Cho/Cr₂, NAA/Cho, and NAA/Cr₂ were significant; for normal cells 1, the ratios Cho/NAA, Cho/Cr, NAA/Cho, and NAA/Cr were significant. Conclusion: The clinical result can be improved by using 1H-MRSI to guide the extent of resection for metastatic lesions. MRS is non-invasive, presents no difficulties during the procedure, and has been shown to predict the detection of metastatic lesions.
Keywords: metabolite ratio, MRI images, metastatic lesion, MR spectroscopy, N-acetyl-aspartate
Procedia PDF Downloads 94
8431 Effect of Ginger (Zingiber officinale) Root Extract on Blood Glucose Level and Lipid Profile in Normal and Alloxan-Diabetic Rabbits
Authors: Khalil Abdullah Ahmed Khalil, Elsadig Mohamed Ahmed
Abstract:
Ginger is one of the most important medicinal plants and is widely used in folk medicine. This study was designed to go a step further and evaluate the hypoglycemic and hypolipidaemic effects of an aqueous ginger root extract in normal and alloxan-diabetic rabbits. Results revealed that the aqueous ginger extract has a significant hypoglycemic effect (P<0.05) in diabetic rabbits but a non-significant hypoglycemic effect (P>0.05) in normal rabbits. There were also significant decreases (P<0.05) in the concentrations of serum cholesterol, triglycerides, and LDL-cholesterol in both normal and diabetic rabbits. Although there was an elevation in serum HDL-cholesterol in both normal and diabetic rabbits, these elevations were non-significant (P>0.05). Our data suggest that the aqueous ginger extract has a hypoglycemic effect in diabetic rabbits and lipid-lowering properties in both normal and diabetic rabbits.
Keywords: aqueous extract of ginger root (AEGR), hypoglycemic, cholesterol, triglyceride
Procedia PDF Downloads 292
8430 Beyond Classic Program Evaluation and Review Technique: A Generalized Model for Subjective Distributions with Flexible Variance
Authors: Byung Cheol Kim
Abstract:
The Program Evaluation and Review Technique (PERT) is widely used for project management, but it struggles with subjective distributions, particularly because of its assumptions of constant variance and light tails. To overcome these limitations, we propose the Generalized PERT (G-PERT) model, which enhances PERT by incorporating variability in three-point subjective estimates. Our methodology extends the original PERT model to cover the full range of unimodal beta distributions, enabling the model to handle thick-tailed distributions, and offers formulas for computing the mean and variance. This maintains the simplicity of PERT while providing a more accurate depiction of uncertainty. Our empirical analysis demonstrates that the G-PERT model significantly improves performance, particularly when dealing with heavy-tailed subjective distributions. In comparative assessments against alternative models such as triangular and lognormal distributions, G-PERT shows superior accuracy and flexibility. These results suggest that G-PERT offers a more robust solution for project estimation while retaining the user-friendliness of the classic PERT approach.
Keywords: PERT, subjective distribution, project management, flexible variance
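For reference, the classic PERT estimates that G-PERT generalizes are computed from the three-point estimate (optimistic a, most likely m, pessimistic b) as mean = (a + 4m + b)/6 and variance = ((b − a)/6)². The sketch below shows this baseline only; note that the variance depends solely on the range b − a, which is exactly the constant-variance restriction the abstract criticizes. The G-PERT formulas themselves are not reproduced here.

```python
def pert_estimates(a, m, b):
    """Classic PERT three-point estimates for an activity duration.
    a: optimistic, m: most likely, b: pessimistic.
    Returns (mean, variance) under the standard beta approximation."""
    mean = (a + 4 * m + b) / 6          # mode-weighted average
    variance = ((b - a) / 6) ** 2       # depends only on the range b - a
    return mean, variance
```

Note that shifting m while holding a and b fixed changes the mean but leaves the classic variance untouched, which is why skewed or heavy-tailed subjective beliefs are poorly captured.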
Procedia PDF Downloads 18
8429 Taylor's Law and Relationship between Life Expectancy at Birth and Variance in Age at Death in Period Life Table
Authors: David A. Swanson, Lucky M. Tedrow
Abstract:
Taylor’s Law is a widely observed empirical pattern that relates variances to means in sets of non-negative measurements via an approximate power function, and it has found application to human mortality. This study adds to that research by showing that Taylor’s Law leads to a model that reasonably describes the relationship between life expectancy at birth (e0, which also equals the mean age at death in a life table) and the variance in age at death in seven World Bank regional life tables measured at two points in time, 1970 and 2000. Using as a benchmark a non-random sample of four Japanese female life tables covering the period from 1950 to 2004, the study finds that a simple linear model provides reasonably accurate estimates of the variance in age at death in a life table from e0, where the latter ranges from 60.9 to 85.59 years. Employing 2017 life tables from the Human Mortality Database, the simple linear model is used to estimate the variance in age at death for six countries, three with high e0 values and three with lower e0 values. The paper provides a substantive interpretation of Taylor’s Law relative to e0 and concludes by arguing that reasonably accurate estimates of the variance in age at death in a period life table can be calculated using this approach, which can also be used where e0 itself is estimated rather than generated through the construction of a life table, a useful feature of the model.
Keywords: empirical pattern, mean age at death in a life table, mean age of a stationary population, stationary population
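Taylor's Law states that variance ≈ a·mean^b, which becomes a simple linear model after taking logs: log(var) = log(a) + b·log(mean). A minimal sketch of fitting that log-log form by ordinary least squares is shown below on synthetic data; the paper's life-table series are not reproduced here, and its final model regresses variance on e0 directly rather than on generic means.

```python
import numpy as np

def fit_taylors_law(means, variances):
    """Fit Taylor's Law, variance = a * mean**b, by least squares on
    log(var) = log(a) + b * log(mean). Returns (a, b)."""
    slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(intercept), slope
```

Because the fit is linear in log space, it recovers the power-law exponent exactly when the data follow the law exactly, and gives the least-squares compromise otherwise.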
Procedia PDF Downloads 330
8428 Effect of Threshold Configuration on Accuracy in Upper Airway Analysis Using Cone Beam Computed Tomography
Authors: Saba Fahham, Supak Ngamsom, Suchaya Damrongsri
Abstract:
Objective: To determine the optimal threshold of the Romexis software for airway volume and minimum cross-sectional area (MCA) analysis, using ImageJ as the gold standard. Materials and Methods: A total of ten cone-beam computed tomography (CBCT) images were collected. The airway volume and MCA of each patient were analyzed using the automatic airway segmentation function in the CBCT DICOM viewer (Romexis). Airway volume and MCA measurements were conducted on each CBCT sagittal view with fifteen different threshold values in the Romexis software, ranging from 300 to 1000. Duplicate DICOM files, in axial view, were imported into ImageJ for concurrent airway volume and MCA analysis as the gold standard. The airway volume and MCA measured by Romexis and ImageJ were compared using a t-test with Bonferroni correction, and statistical significance was set at p<0.003. Results: Concerning airway volume, thresholds of 600 to 850, as well as 1000, exhibited results that were not significantly different from those obtained with ImageJ. Regarding MCA, thresholds from 400 to 850 in Romexis Viewer showed no significant difference from ImageJ. Notably, within the threshold range of 600 to 850, no statistically significant differences were observed in either the airway volume or the MCA analysis in comparison with ImageJ. Conclusion: This study demonstrated that using Planmeca Romexis Viewer 6.4.3.3 within the threshold range of 600 to 850 yields airway volume and MCA measurements with no statistically significant difference from measurements obtained with ImageJ. This outcome holds implications for diagnosing upper airway obstructions and for post-orthodontic surgical monitoring.
Keywords: airway analysis, airway segmentation, cone beam computed tomography, threshold
Procedia PDF Downloads 44