Search results for: likelihood ratio test
13615 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. Therefore, the method of modified maximum likelihood is preferred, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but also substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis, it is assumed that the error terms are normally distributed and, hence, the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
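The abstract's claim that least squares loses efficiency under fat-tailed errors can be illustrated with a small Monte Carlo sketch. This is not the paper's modified maximum likelihood estimator; the Theil-Sen slope stands in as a simple robust alternative, and all sample sizes and parameter values are illustrative:

```python
import random
import statistics

def ols_slope(x, y):
    """Ordinary least squares slope estimate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def theil_sen_slope(x, y):
    """Median of all pairwise slopes: a simple robust estimator."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    return statistics.median(slopes)

def t3_error(rng):
    """Heavy-tailed Student-t(3) error: normal / sqrt(chi-square(3) / 3)."""
    z = rng.gauss(0, 1)
    chi2 = sum(rng.gauss(0, 1) ** 2 for _ in range(3))
    return z / ((chi2 / 3) ** 0.5)

rng = random.Random(0)
true_slope = 2.0
ols_est, ts_est = [], []
for _ in range(300):
    x = [i / 10 for i in range(30)]
    y = [true_slope * xi + t3_error(rng) for xi in x]
    ols_est.append(ols_slope(x, y))
    ts_est.append(theil_sen_slope(x, y))

sd_ols = statistics.stdev(ols_est)
sd_ts = statistics.stdev(ts_est)
print(f"SD of OLS slope: {sd_ols:.3f}, SD of robust slope: {sd_ts:.3f}")
```

Under these fat-tailed errors the robust slope estimates cluster more tightly around the true value than the least square estimates, which is the qualitative point the abstract makes about efficiency.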
Procedia PDF Downloads 397
13614 Assessing the Resilience of the Insurance Industry under Solvency II
Authors: Vincenzo Russo, Rosella Giacometti
Abstract:
The paper aims to assess the insurance industry's resilience under Solvency II against adverse scenarios. Starting from the economic balance sheet available under Solvency II for insurance and reinsurance undertakings, we assume that assets and liabilities follow a bivariate geometric Brownian motion (GBM). Then, using the results available under Margrabe's formula, we establish an analytical solution to calibrate the volatility of the asset-liability ratio. In such a way, we can estimate the probability of default and the probability of breaching the undertaking's Solvency Capital Requirement (SCR). Furthermore, since estimating the volatility of the Solvency Ratio has become crucial for insurers in light of the financial crises of recent decades, we introduce a novel measure that we call the Resiliency Ratio. The Resiliency Ratio can be used, in addition to the Solvency Ratio, to evaluate the insurance industry's resilience under adverse scenarios. Finally, we introduce a simplified stress test tool to evaluate the economic balance sheet under stressed conditions. The model we propose features analytical tractability and a fast calibration procedure in which only the data disclosed under Solvency II public reporting are needed. Using the data published regularly by the European Insurance and Occupational Pensions Authority (EIOPA) in aggregated form by country, an empirical analysis has been performed to calibrate the model and provide the related results at the country level.
Keywords: Solvency II, solvency ratio, volatility of the asset-liability ratio, probability of default, probability of breaching the SCR, resilience ratio, stress test
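Under the bivariate GBM assumption, the probability that assets drift below liabilities is a lognormal tail probability, with the ratio's volatility aggregated as in an exchange-option (Margrabe) setting. A hedged sketch with purely illustrative inputs (the paper's EIOPA calibration is not reproduced here):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def default_probability(r0, mu, sigma_a, sigma_l, rho, horizon):
    """P(A_T / L_T < 1) when the asset-liability ratio follows a GBM.

    The ratio volatility is aggregated Margrabe-style:
    sigma^2 = sigma_a^2 + sigma_l^2 - 2 * rho * sigma_a * sigma_l.
    """
    sigma = math.sqrt(sigma_a**2 + sigma_l**2 - 2 * rho * sigma_a * sigma_l)
    # ln(R_T) ~ Normal(ln r0 + (mu - sigma^2/2) * T, sigma^2 * T)
    num = math.log(1.0 / r0) - (mu - 0.5 * sigma**2) * horizon
    return norm_cdf(num / (sigma * math.sqrt(horizon)))

# Illustrative values: asset-liability ratio 1.8, zero drift, 1-year horizon.
p = default_probability(r0=1.8, mu=0.0, sigma_a=0.08, sigma_l=0.05,
                        rho=0.3, horizon=1.0)
print(f"one-year probability of assets falling below liabilities: {p:.4e}")
```

Raising the asset volatility raises this probability, which is the direction a stress-test tool of this kind would exploit.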
Procedia PDF Downloads 81
13613 The Ratio of Second to Fourth Digit Length Correlates with Cardiorespiratory Fitness in Male College Students but Not in Females
Authors: Cheng-Chen Hsu
Abstract:
Background: The ratio of the length of the second finger (index finger, 2D) to the fourth finger (ring finger, 4D) (2D:4D) is a putative marker of prenatal hormones. A low 2D:4D ratio is related to high prenatal testosterone (PT) levels. Physiological research has suggested that a low 2D:4D ratio is correlated with high sports ability. Aim: To examine the association between cardiorespiratory fitness and 2D:4D. Methods: Assessment of 2D:4D: Images of hands were collected from participants using a computer scanner, with hands placed lightly on the surface of the plate. Image analysis was performed using Image-Pro Plus 5.0 software. Feature points were marked at the tip of the finger and at the center of the proximal crease on the second and fourth digits. Measurement was carried out automatically, and 2D:4D was calculated by dividing the second digit length by the fourth digit length. YMCA 3-min Step Test: The test involves stepping up and down at a rate of 24 steps/min for 3 min; a tape recording of the correct cadence (96 beats/min) is played to help the participant keep the correct pace. Following the step test, the participant immediately sits down and, within 5 s, the tester starts counting the pulse for 1 min. The score for the test, the total 1-min post-exercise heart rate, reflects the heart's ability to recover quickly. Statistical Analysis: Pearson's correlation (r) was used to assess the relationships between age, physical measurements, one-minute heart rate after the YMCA 3-minute step test (HR), and 2D:4D. An independent-sample t-test was used to determine possible differences in HR between subjects with low and high values of 2D:4D. All statistical analyses were carried out with SPSS 18 for Windows. All P-values were two-tailed at P = 0.05, if not reported otherwise. Results: A median split by 2D:4D was applied, resulting in a high and a low group.
One-minute heart rate after the YMCA 3-minute step test was significantly different between groups for male right-hand 2D:4D (p = 0.024). However, there was no difference in left-hand 2D:4D values between groups in males, and no digit ratio difference between groups in females. Conclusion: The results showed that cardiorespiratory fitness is related to right-hand 2D:4D, but only in men. We argue that prenatal testosterone may have an effect on cardiorespiratory fitness in males but not in females.
Keywords: college students, digit ratio, finger, step test, fitness
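The median-split comparison described above boils down to a two-sample t-test on post-exercise heart rates. A minimal Welch-style sketch; the heart-rate values and group sizes below are made up for illustration and are not the study's data:

```python
import math

def welch_t(a, b):
    """Welch two-sample t statistic (unequal variances allowed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical 1-min post-exercise heart rates: low vs high 2D:4D groups.
low_2d4d = [118, 122, 115, 120, 117, 121, 119, 116]
high_2d4d = [128, 131, 125, 129, 127, 132, 126, 130]
t = welch_t(low_2d4d, high_2d4d)
print(f"Welch t = {t:.2f}")
```

A large negative t here indicates a lower (better-recovering) heart rate in the low 2D:4D group; the p-value would come from the t distribution with Welch-Satterthwaite degrees of freedom.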
Procedia PDF Downloads 275
13612 Automating Test Activities: Test Cases Creation, Test Execution, and Test Reporting with Multiple Test Automation Tools
Authors: Loke Mun Sei
Abstract:
Software testing has become a mandatory process in assuring software product quality. Hence, test management is needed in order to manage the test activities conducted in the software test life cycle. This paper discusses the challenges faced in the software test life cycle, and how the test processes and test activities, mainly test cases creation, test execution, and test reporting, are managed and automated using several test automation tools, i.e., Jira, Robot Framework, and Jenkins.
Keywords: test automation tools, test case, test execution, test reporting
Procedia PDF Downloads 583
13611 Motion-Based Detection and Tracking of Multiple Pedestrians
Authors: A. Harras, A. Tsuji, K. Terada
Abstract:
Tracking of moving people has gained great importance due to rapid technological advancements in the field of computer vision. The objective of this study is to design a motion-based method for detecting and tracking multiple pedestrians walking randomly in different directions. In our proposed method, a Gaussian mixture model (GMM) is used to determine moving persons in image sequences. It reacts to changes that take place in the scene, such as varying illumination and moving objects that start and stop often. Background noise in the scene is eliminated by applying morphological operations, and the motions of tracked people are determined using the Kalman filter. The Kalman filter is applied to predict the tracked location in each frame and to determine the likelihood of each detection. We used a benchmark data set, recorded by a stationary side-wall camera, for the evaluation. The scenes from the data set are taken on a street with up to eight people in front of the camera, in two different scenes with durations of 53 and 35 seconds, respectively. In the case of walking pedestrians in close proximity, the proposed method achieved a detection ratio of 87% and a tracking ratio of 77%. When the pedestrians are farther apart from each other, the detection ratio increases to 90% and the tracking ratio to 79%.
Keywords: automatic detection, tracking, pedestrians, counting
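The Kalman predict/update cycle used for tracking above can be sketched in one dimension. This is a constant-position model with assumed noise levels, not the authors' tracker; a real pedestrian tracker would use at least a 2-D constant-velocity state:

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter: predict, then correct with each measurement.

    q: process-noise variance (assumed), r: measurement-noise variance (assumed).
    """
    x, p = 0.0, 1.0           # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q             # predict: state assumed constant, uncertainty grows
        k = p / (p + r)       # Kalman gain: weight given to the new measurement
        x = x + k * (z - x)   # update the state toward the measurement
        p = (1.0 - k) * p     # shrink the uncertainty after the update
        estimates.append(x)
    return estimates

# Noisy observations of a target sitting at position 5.0.
zs = [5.3, 4.6, 5.1, 4.9, 5.4, 4.8, 5.0, 5.2, 4.7, 5.1]
est = kalman_1d(zs)
print(f"final estimate: {est[-1]:.2f}")
```

In the paper's setting the predicted state gives the expected pedestrian location in the next frame, and the gain balances that prediction against each new GMM detection.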
Procedia PDF Downloads 257
13610 A Study of Mode Choice Model Improvement Considering Age Grouping
Authors: Young-Hyun Seo, Hyunwoo Park, Dong-Kyu Kim, Seung-Young Kho
Abstract:
The purpose of this study is to provide an improved mode choice model considering parameters that include age grouping of the prime-aged and the elderly. In this study, 2010 Household Travel Survey data were used, and improper samples were removed through the analysis. Chosen alternative, date of birth, mode, origin code, destination code, departure time, and arrival time were taken from the Household Travel Survey. By preprocessing the data, travel time, travel cost, mode, and the ratios of people aged 45 to 55 years, 55 to 65 years, and over 65 years were calculated. After this manipulation, the mode choice model was constructed using LIMDEP by maximum likelihood estimation. A significance test was conducted for nine parameters: three age groups for each of three modes. Then the test was conducted again for the mode choice model with the significant parameters, the travel cost variable, and the travel time variable. As a result of the model estimation, as age increases, the preference for the car decreases and the preference for the bus increases. This study is meaningful in that individual and household characteristics are applied to the aggregate model.
Keywords: age grouping, aging, mode choice model, multinomial logit model
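The multinomial logit probabilities behind such a mode choice model are a softmax over linear utilities. A hedged sketch with invented coefficients (the signs of the age shifts simply mirror the reported finding that older travelers lean toward the bus; none of the numbers come from the study):

```python
import math

def mode_probabilities(utilities):
    """Multinomial logit: P(mode i) = exp(V_i) / sum_j exp(V_j)."""
    exp_v = {mode: math.exp(v) for mode, v in utilities.items()}
    total = sum(exp_v.values())
    return {mode: ev / total for mode, ev in exp_v.items()}

def utility(mode, time_min, cost, age_over_65, coef):
    """Linear utility with an age-group shift (all coefficients illustrative)."""
    v = coef["asc"][mode] + coef["time"] * time_min + coef["cost"] * cost
    if age_over_65:
        v += coef["age65"][mode]
    return v

coef = {
    "asc": {"car": 0.0, "bus": -0.5, "rail": -0.3},
    "time": -0.05, "cost": -0.002,
    # positive bus shift / negative car shift for the over-65 group
    "age65": {"car": -0.6, "bus": 0.5, "rail": 0.1},
}
# Hypothetical (travel time in minutes, travel cost) per mode for one trip.
trip = {"car": (25, 3000), "bus": (40, 1200), "rail": (35, 1500)}

for old in (False, True):
    utils = {m: utility(m, t, c, old, coef) for m, (t, c) in trip.items()}
    probs = mode_probabilities(utils)
    print(old, {m: round(p, 3) for m, p in probs.items()})
```

With these coefficients the predicted bus share rises and the car share falls for the over-65 group, which is the pattern the estimated model reports.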
Procedia PDF Downloads 322
13609 The Use of Spirulina during Aerobic Exercise on the Performance of Immune and Consumption Indicators (A Case Study: Young Men After Physical Training)
Authors: Vahab Behmanesh
Abstract:
One of the topics that has always attracted the attention of sports medicine and sports science experts is the positive or negative effect of sports activities on the functioning of the body's immune system. In the present research, a course of aerobic running combined with spirulina consumption was studied for its effects on maximum oxygen consumption and the performance of some indicators of the immune system in trained men after one session of physical activity. In this research, 50 trained students were randomly assigned to four groups: spirulina-aerobic, spirulina, placebo-aerobic, and control. In order to test the research hypotheses, one-way analysis of variance (ANOVA) was used, considering a significance level of α = 0.005, together with the post hoc (LSD) test. A blood sample was taken from the participants in the first-stage test, in a fasting and resting state, immediately after Bruce's maximal treadmill test carried to exhaustion, and their Vo2max value was determined through the aforementioned test. The subjects of the spirulina-aerobic and placebo-aerobic groups took three 500 mg spirulina or 500 mg placebo pills a day for six weeks and ran three times a week for 30 minutes at the aerobic stimulation threshold. The spirulina and placebo groups also consumed spirulina and placebo tablets in the above manner for six weeks. Then they repeated the first-stage test as the second-stage test. Blood samples were taken to measure the numbers of CD4+, CD8+, and NK cells and the CD4+ to CD8+ ratio on four occasions, before and after the first- and second-stage tests. The analysis of the findings showed that aerobic running and the spirulina supplement each increase Vo2max on their own.
Aerobic running combined with spirulina consumption increased Vo2max more than in the other groups (P<0.05). CD4+ and hemoglobin in the spirulina-aerobic running group were significantly different from the other groups (P=0.002), while CD4+ did not differ significantly among the remaining groups. NK cells increased in all groups. The CD4+ to CD8+ ratio differed significantly between the groups (P=0.002), and the ratio in the spirulina-aerobic group was lower than in the spirulina and placebo groups. All in all, it can be concluded that spirulina supplementation and aerobic exercise may increase Vo2max and improve immune indicators.
Keywords: spirulina, hemoglobin, aerobic exercise, residual activity, CD4+ to CD8+ ratio
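The one-way ANOVA used to compare the four groups reduces to a between-group versus within-group variance ratio. A minimal sketch on made-up Vo2max values; the four small groups below are illustrative, not the study's data:

```python
def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical Vo2max (ml/kg/min) per group.
spirulina_aerobic = [52, 54, 53, 55, 51]
spirulina = [48, 49, 47, 50, 48]
placebo_aerobic = [49, 50, 48, 51, 49]
control = [45, 46, 44, 46, 45]
f = one_way_anova_f([spirulina_aerobic, spirulina, placebo_aerobic, control])
print(f"F = {f:.2f}")
```

A large F against the F(3, 16) reference distribution would justify the post hoc (LSD) pairwise comparisons the authors report.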
Procedia PDF Downloads 122
13608 Evaluation of Liquefaction Potential of Fine Grained Soil: Kerman Case Study
Authors: Reza Ziaie Moayed, Maedeh Akhavan Tavakkoli
Abstract:
This research aims to investigate and evaluate the liquefaction potential of fine-grained soils for a project in Kerman city, based on different methods. Examination of the damage caused by recent earthquakes shows that fine-grained soils play an essential role in the level of damage caused by soil liquefaction. However, previous liquefaction investigations have paid limited attention to evaluating the cyclic resistance ratio of fine-grained soils, especially with the SPT method. Although using the standard penetration test (SPT) to find the liquefaction potential of fine-grained soil is not common, it can be a helpful method given its rapidity, serviceability, and availability. In the present study, the liquefaction potential was first determined from the soil's physical properties obtained from laboratory tests. Then, using the SPT test and its available criteria for evaluating the cyclic resistance ratio and the safety factor against liquefaction, the correction for fine-grained soils was applied, and the results were compared. The results show that using the SPT test for liquefaction assessment is more accurate than using laboratory tests in most cases, due to the contribution of different physical parameters of the soil, which leads to an increase in the ultimate N₁(60,cs).
Keywords: liquefaction, cyclic resistance ratio, SPT test, clay soil, cohesive soils
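The safety factor against liquefaction compares the soil's cyclic resistance ratio (CRR) with the earthquake-induced cyclic stress ratio (CSR). A sketch using the standard simplified Seed-Idriss CSR expression; all input values are illustrative, and the paper's Kerman site data are not reproduced here:

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, rd):
    """Simplified Seed-Idriss CSR: 0.65 * (a_max/g) * (sigma_v / sigma_v') * rd."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def liquefaction_safety_factor(crr, csr):
    """FS = CRR / CSR; FS < 1 indicates likely liquefaction."""
    return crr / csr

# Illustrative values: peak ground acceleration 0.3 g, total/effective
# vertical stress (kPa) at shallow depth, stress-reduction coefficient
# rd ~ 0.95, and an SPT-based CRR of 0.22.
csr = cyclic_stress_ratio(a_max_g=0.30, sigma_v=110.0, sigma_v_eff=70.0, rd=0.95)
fs = liquefaction_safety_factor(crr=0.22, csr=csr)
print(f"CSR = {csr:.3f}, FS = {fs:.2f}")
```

The fines correction the abstract mentions raises the equivalent clean-sand blow count N₁(60,cs), which raises CRR and hence the safety factor.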
Procedia PDF Downloads 101
13607 An Elaboration Likelihood Model to Evaluate Consumer Behavior on Facebook Marketplace: Trust on Seller as a Moderator
Authors: Sharmistha Chowdhury, Shuva Chowdhury
Abstract:
Buying and selling new as well as second-hand goods such as tools, furniture, household items, electronics, clothing, baby items, vehicles, and hobby gear through the Facebook Marketplace has become a new paradigm for c2c sellers. This phenomenon encourages and empowers decentralised home-based sellers. This study adopts the Elaboration Likelihood Model (ELM) to explain consumer behaviour on the Facebook Marketplace (FM). ELM suggests that consumers process information through the central and peripheral routes, which eventually shape their attitudes towards posts. The central route focuses on information quality, and the peripheral route focuses on cues. Sellers' FM posts usually include product features, price, condition, pictures, and pick-up location. This study uses information relevance and accuracy as central route factors. A post's attractiveness represents cues and creates positive or negative associations with the product; a post with remarkable pictures is more attractive. Post aesthetics is therefore used as a peripheral route factor. People influenced via the central or peripheral route form an attitude that involves multiple processes: response and purchase intention. People respond to FM posts through saving, sharing, and chatting. A positive attitude towards the product is reflected in higher purchase intention. This study proposes trust on sellers as a moderator, to test the strength of its influence on consumer attitudes and behaviour. Trust on sellers is assessed by whether sellers have badges or not. A questionnaire will be developed and distributed among a random group of FM sellers who are selling vehicles on this platform. The chosen product of this study is the vehicle, a high-value purchase item. A high-value purchase requires consumers to form their attitude seriously, without any sign of impulsiveness. Hence, vehicles are a suitable choice for testing the strength of consumer attitudes and behaviour.
The findings of the study add to the elaboration likelihood model and online second-hand marketplace literature.
Keywords: consumer behaviour, elaboration likelihood model, Facebook Marketplace, c2c marketing
Procedia PDF Downloads 138
13606 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring
Authors: Daniel Fundi Murithi
Abstract:
Data from economic, social, clinical, and industrial studies are often in some way incomplete or incorrect due to censoring. Such data may have adverse effects if used in estimation problems. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that, in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence interval widths, and smaller RMSE than those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all cases under the progressive type-II censoring scheme.
Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring
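For the much simpler complete-sample, one-parameter Rayleigh case, the Newton-Raphson iteration can be sketched directly, since the closed-form MLE σ̂² = Σx² / (2n) is available to check convergence against. The progressively type-II censored two-parameter likelihood the paper works with is more involved and is not reproduced here; the data values below are illustrative:

```python
import math

def rayleigh_score(sigma, data):
    """First derivative of the Rayleigh log-likelihood with respect to sigma."""
    n, s2 = len(data), sum(x * x for x in data)
    return -2.0 * n / sigma + s2 / sigma**3

def rayleigh_hessian(sigma, data):
    """Second derivative of the Rayleigh log-likelihood with respect to sigma."""
    n, s2 = len(data), sum(x * x for x in data)
    return 2.0 * n / sigma**2 - 3.0 * s2 / sigma**4

def newton_raphson_mle(data, sigma0=1.0, tol=1e-10, max_iter=100):
    """Iterate sigma <- sigma - score / hessian until the step is tiny."""
    sigma = sigma0
    for _ in range(max_iter):
        step = rayleigh_score(sigma, data) / rayleigh_hessian(sigma, data)
        sigma -= step
        if abs(step) < tol:
            break
    return sigma

data = [0.8, 1.3, 2.1, 0.9, 1.7, 1.1, 2.4, 1.5]
sigma_nr = newton_raphson_mle(data, sigma0=1.0)
sigma_closed = math.sqrt(sum(x * x for x in data) / (2 * len(data)))
print(f"NR estimate: {sigma_nr:.6f}, closed form: {sigma_closed:.6f}")
```

The EM alternative studied in the paper replaces the censored observations by their conditional expectations at each step instead of differentiating the observed-data likelihood directly.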
Procedia PDF Downloads 163
13605 Diagnostic Performance of Mean Platelet Volume in the Diagnosis of Acute Myocardial Infarction: A Meta-Analysis
Authors: Kathrina Aseanne Acapulco-Gomez, Shayne Julieane Morales, Tzar Francis Verame
Abstract:
Mean platelet volume (MPV) is the most accurate measure of platelet size and is routinely measured by most automated hematology analyzers. Several studies have shown associations between MPV and cardiovascular risks and outcomes. Although its measurement may provide useful data, MPV remains a diagnostic tool yet to be included in routine clinical decision making. The aim of this systematic review and meta-analysis is to determine summary estimates of the diagnostic accuracy of mean platelet volume for the diagnosis of myocardial infarction among adult patients with angina and/or its equivalents, in terms of sensitivity, specificity, diagnostic odds ratio, and likelihood ratios, and to determine the difference in mean MPV values between those with MI and non-MI controls. The primary search was conducted in the electronic databases PubMed, Cochrane Review CENTRAL, HERDIN (Health Research and Development Information Network), Google Scholar, the Philippine Journal of Pathology, and the Philippine College of Physicians Philippine Journal of Internal Medicine. The reference lists of original reports were also searched. Cross-sectional, cohort, and case-control articles studying the diagnostic performance of mean platelet volume in the diagnosis of acute myocardial infarction in adult patients were included. Studies were included if: (1) CBC was taken upon presentation to the ER or upon admission (within 24 hours of symptom onset); (2) myocardial infarction was diagnosed with serum markers, ECG, or according to guidelines accepted by the cardiology societies (American Heart Association (AHA), American College of Cardiology (ACC), European Society of Cardiology (ESC)); and (3) outcomes were measured as a significant difference and/or sensitivity and specificity. The authors independently screened all potentially eligible studies identified by the search.
Eligible studies were appraised using well-defined criteria. Any disagreement between the reviewers was resolved through discussion and consensus. The overall mean MPV of those with MI (9.702 fl; 95% CI 9.07 – 10.33) was higher than that of the non-MI control group (8.85 fl; 95% CI 8.23 – 9.46). Interpretation of the calculated t-value of 2.0827 showed a significant difference in the mean MPV values between those with MI and the non-MI controls. The summary sensitivity (Se) and specificity (Sp) for MPV were 0.66 (95% CI 0.59 – 0.73) and 0.60 (95% CI 0.43 – 0.75), respectively. The pooled diagnostic odds ratio (DOR) was 2.92 (95% CI 1.90 – 4.50). The positive likelihood ratio of MPV in the diagnosis of myocardial infarction was 1.65 (95% CI 1.20 – 22.27), and the negative likelihood ratio was 0.56 (95% CI 0.50 – 0.64). The intended role for MPV in the diagnostic pathway of myocardial infarction would perhaps be best as a triage tool. With a DOR of 2.92, MPV values can discriminate between those who have MI and those without. For a patient with angina presenting with elevated MPV values, it is 1.65 times more likely that he has MI. Thus, the decision to treat a patient with angina or its equivalents as a case of MI could be supported by an elevated MPV value.
Keywords: mean platelet volume, MPV, myocardial infarction, angina, chest pain
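The summary likelihood ratios and diagnostic odds ratio follow directly from the pooled sensitivity and specificity; a quick sketch that approximately reproduces the abstract's point estimates (rounding in the abstract accounts for small differences):

```python
def diagnostic_ratios(sensitivity, specificity):
    """LR+ = Se / (1 - Sp), LR- = (1 - Se) / Sp, DOR = LR+ / LR-."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg, lr_pos / lr_neg

# Pooled estimates reported in the meta-analysis: Se = 0.66, Sp = 0.60.
lr_pos, lr_neg, dor = diagnostic_ratios(0.66, 0.60)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}, DOR = {dor:.2f}")
```

LR+ of 1.65 matches the reported positive likelihood ratio exactly, and the DOR of about 2.91 sits next to the reported 2.92, confirming the internal consistency of the summary estimates.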
Procedia PDF Downloads 87
13604 The Value of Serum Procalcitonin in Patients with Acute Musculoskeletal Infections
Authors: Mustafa Al-Yaseen, Haider Mohammed Mahdi, Haider Ali Al–Zahid, Nazar S. Haddad
Abstract:
Background: Early diagnosis of musculoskeletal infections is of vital importance to avoid devastating complications. There is no single laboratory marker that is both sensitive and specific in diagnosing these infections accurately. White blood cell count, erythrocyte sedimentation rate, and C-reactive protein are not specific, as they can also be elevated in conditions other than bacterial infections. Culture and sensitivity is not a true gold standard due to its varied positivity rates. Serum procalcitonin is one of the newer laboratory markers for pyogenic infections. The objective of this study is to assess the value of PCT in the diagnosis of soft tissue, bone, and joint infections. Patients and Methods: Patients of all age groups (seventy-four patients) with a diagnosis of musculoskeletal infection were prospectively included in this study. All patients underwent white blood cell count, erythrocyte sedimentation rate, C-reactive protein, and serum procalcitonin measurements. A healthy, non-infected outpatient group (twenty-two patients) was taken as a control group and underwent the same evaluation steps as the study group. Results: The study group showed a mean procalcitonin level of 1.3 ng/ml. Procalcitonin, at 0.5 ng/ml, was 42.6% sensitive and 95.5% specific in diagnosing musculoskeletal infections, with a positive predictive value of 87.5%, a negative predictive value of 48.3%, a positive likelihood ratio of 9.3, and a negative likelihood ratio of 0.6. Conclusion: Serum procalcitonin, at a cut-off of 0.5 ng/ml, is a specific but not sensitive marker in the diagnosis of musculoskeletal infections, and it can be used effectively to rule in the diagnosis of infection but not to rule it out.
Keywords: procalcitonin, infection, laboratory markers, musculoskeletal
Procedia PDF Downloads 163
13603 Relationship between Readability of Paper-Based Braille and Character Spacing
Authors: T. Nishimura, K. Doi, H. Fujimoto, T. Wada
Abstract:
The number of people with acquired visual impairments has increased in recent years. In specialized courses at schools for the blind and in Braille lessons offered by social welfare organizations, many people with acquired visual impairments cannot learn to read Braille adequately. One of the reasons is that the standard Braille patterns, designed for readers with visual impairments who already have mature Braille reading skills, are difficult for beginners to read. In addition, Braille book manufacturers have scant knowledge of which Braille patterns are easy for beginners to read. Therefore, it is necessary to investigate Braille patterns that beginners can read easily. In order to obtain such knowledge, this study aimed to elucidate the relationship between the readability of paper-based Braille and its patterns. The study focused on character spacing, which readily affects Braille reading ability, to determine a character spacing ratio (the ratio of character spacing to dot spacing) suitable for beginners. Specifically, considering beginners with acquired visual impairments who are unfamiliar with reading Braille, we quantitatively evaluated the effect of the character spacing ratio on Braille readability through an experiment using sighted subjects with no experience of reading Braille. In this experiment, ten blindfolded sighted adults were asked to read a test piece (three Braille characters). The Braille used in the test pieces was composed of five dots. The subjects were asked to touch the Braille by sliding their forefinger over the test piece immediately after the examiner gave the signal to start, and to release their forefinger from the test piece when they perceived the Braille characters.
Seven conditions varied the character spacing ratio (i.e., 1.2, 1.4, 1.5, 1.6, 1.8, 2.0, 2.2), and four others varied the dot spacing (i.e., 2.0, 2.5, 3.0, 3.5 mm). Ten trials were conducted for each condition. The test pieces were created using NISE Graphic, which can print Braille with arbitrary character spacing and dot spacing at high accuracy. We adopted correct rate, reading time, and subjective readability as evaluation indices to investigate how the character spacing ratio affects Braille readability. The results showed that Braille reading beginners could read Braille accurately and quickly when the character spacing ratio is more than 1.8 and the dot spacing is more than 3.0 mm. Conversely, it is difficult for beginners to read Braille accurately and quickly when both the character spacing and the dot spacing are small. This study thus reveals a character spacing ratio that makes reading easy for Braille beginners.
Keywords: Braille, character spacing, people with visual impairments, readability
Procedia PDF Downloads 285
13602 Mechanical Properties of a Soil Stabilized with Portland Cement
Authors: Ahmed Emad Ahmed, Mostafa El Abd, Ahmed Wakeb, Moahmmed Eissa
Abstract:
Soil modification and reinforcement aim to increase soil shear strength and stiffness. In this report, different amounts of cement were added to the soil to explore their effect on shear strength and penetration resistance using three tests. The first was the Proctor compaction test, conducted to determine the optimal moisture content and maximum dry density. The second was the direct shear test, conducted to measure the shear strength of the soil. The third was the California bearing ratio test, performed to measure penetration into the soil. Each test was repeated several times with different amounts of cement. The results of every test show that cement improves the soil's shear strength properties and stiffness.
Keywords: soil stabilization, soil, mechanical properties of soil, soil stabilized with Portland cement
Procedia PDF Downloads 134
13601 Effectiveness of the Lacey Assessment of Preterm Infants to Predict Neuromotor Outcomes of Premature Babies at 12 Months Corrected Age
Authors: Thanooja Naushad, Meena Natarajan, Tushar Vasant Kulkarni
Abstract:
Background: The Lacey Assessment of Preterm Infants (LAPI) is used in clinical practice to identify premature babies at risk of neuromotor impairments, especially cerebral palsy. This study attempted to determine the validity of the Lacey assessment in predicting the neuromotor outcomes of premature babies at 12 months corrected age and to compare its predictive ability with that of brain ultrasound. Methods: This prospective cohort study included 89 preterm infants (45 females and 44 males) born below 35 weeks gestation who were admitted to the neonatal intensive care unit of a government hospital in Dubai. The initial assessment was done using the Lacey assessment after the babies reached 33 weeks postmenstrual age. Follow-up assessment of neuromotor outcomes was done at 12 months (± 1 week) corrected age using two standardized outcome measures, i.e., the Infant Neurological International Battery and the Alberta Infant Motor Scale. Brain ultrasound data were collected retrospectively. Data were statistically analyzed, and the diagnostic accuracy of the Lacey Assessment of Preterm Infants (LAPI) was calculated, both when used alone and in combination with brain ultrasound. Results: In comparison with brain ultrasound, the Lacey assessment showed superior specificity (96% vs. 77%), a higher positive predictive value (57% vs. 22%), and a higher positive likelihood ratio (18 vs. 3) for predicting neuromotor outcomes at one year of age. The sensitivity of the Lacey assessment was lower than that of brain ultrasound (66% vs. 83%), whereas specificity was similar (97% vs. 98%). A combination of the Lacey assessment and brain ultrasound results showed higher sensitivity (80%), higher positive (66%) and negative (98%) predictive values, a higher positive likelihood ratio (24), and higher test accuracy (95%) than the Lacey assessment alone in predicting neurological outcomes. The negative predictive value of the Lacey assessment was similar to that of its combination with brain ultrasound (96%).
Conclusion: Results of this study suggest that the Lacey assessment of preterm infants can be used as a supplementary assessment tool for premature babies in the neonatal intensive care unit. Due to its high specificity, the Lacey assessment can be used to identify those babies at low risk of abnormal neuromotor outcomes at a later age. When used along with the findings of the brain ultrasound, the Lacey assessment has better sensitivity to identify preterm babies at particular risk. These findings have applications in identifying premature babies who may benefit from early intervention services.
Keywords: brain ultrasound, Lacey assessment of preterm infants, neuromotor outcomes, preterm
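The diagnostic metrics reported above all follow from a single 2×2 contingency table. A minimal sketch with hypothetical counts (tp, fp, fn, tn are illustrative values, not the study's raw data):

```python
# Hypothetical 2x2 counts for a screening test against the 12-month outcome;
# tp, fp, fn, tn are invented for illustration, not the study's data.
tp, fp, fn, tn = 12, 9, 6, 62

sensitivity = tp / (tp + fn)              # proportion of affected infants flagged
specificity = tn / (tn + fp)              # proportion of unaffected infants cleared
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
accuracy = (tp + tn) / (tp + fp + fn + tn)
```

Combining two tests ("Lacey plus ultrasound positive") simply redefines which cell each infant falls into before the same formulas are applied.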
Procedia PDF Downloads 138
13600 Estimates of (Co)Variance Components and Genetic Parameters for Body Weights and Growth Efficiency Traits in the New Zealand White Rabbits
Authors: M. Sakthivel, A. Devaki, D. Balasubramanyam, P. Kumarasamy, A. Raja, R. Anilkumar, H. Gopi
Abstract:
The genetic parameters of growth traits in New Zealand White rabbits maintained at the Sheep Breeding and Research Station, Sandynallah, The Nilgiris, India were estimated by partitioning the variance and covariance components. The (co)variance components of body weights at weaning (W42), post-weaning (W70) and marketing (W135) age and of the growth efficiency traits, viz., average daily gain (ADG), relative growth rate (RGR) and Kleiber ratio (KR), computed on a daily basis over different age intervals (1 = 42 to 70 days; 2 = 70 to 135 days; 3 = 42 to 135 days) from weaning to marketing, were estimated by restricted maximum likelihood, fitting six animal models with various combinations of direct and maternal effects. Data were collected over a period of 15 years (1998 to 2012). A log-likelihood ratio test was used to select the most appropriate univariate model for each trait, which was subsequently used in bivariate analysis. Heritability estimates for W42, W70 and W135 were 0.42 ± 0.07, 0.40 ± 0.08 and 0.27 ± 0.07, respectively. Heritability estimates of the growth efficiency traits were moderate to high (0.18 to 0.42). Of the total phenotypic variation, the maternal genetic effect contributed 14 to 32% for the early body weight traits (W42 and W70) and ADG1. The contribution of the maternal permanent environmental effect varied from 6 to 18% for W42 and for all the growth efficiency traits except KR2. The maternal permanent environmental effect on most of the growth efficiency traits was a carryover effect of maternal care during weaning. Direct-maternal genetic correlations, for the traits in which the maternal genetic effect was significant, were moderate to high in magnitude and negative in direction. The maternal effect declined as the age of the animal increased. The estimates of total heritability and maternal across-year repeatability for growth traits were moderate, and an optimum rate of genetic progress seems possible in the herd by mass selection.
The estimates of genetic and phenotypic correlations among body weight traits were moderate to high and positive; those among growth efficiency traits were low to high with varying directions; those between body weights and growth efficiency traits were very low to high in magnitude and mostly negative in direction. Moderate to high heritability and the high genetic correlations among body weight traits promise good scope for genetic improvement, provided measures are taken to keep inbreeding at the lowest level.
Keywords: genetic parameters, growth traits, maternal effects, rabbit genetics
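The log-likelihood ratio test used above to choose among the six animal models compares twice the difference in maximized log-likelihoods of nested models against a chi-squared distribution. A minimal sketch with hypothetical log-likelihood values (not the study's):

```python
from scipy.stats import chi2

# Hypothetical maximized log-likelihoods of two nested animal models;
# the fuller model adds one maternal (co)variance component.
ll_direct_only = -1523.4
ll_with_maternal = -1518.9

lr_stat = 2.0 * (ll_with_maternal - ll_direct_only)  # likelihood ratio statistic
df = 1                                               # one extra parameter
p_value = chi2.sf(lr_stat, df)
keep_maternal_effect = p_value < 0.05
```

With these illustrative values the statistic is 9.0 on 1 degree of freedom, so the maternal component would be retained.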
Procedia PDF Downloads 447
13599 Financial Fraud Prediction for Russian Non-Public Firms Using Relational Data
Authors: Natalia Feruleva
Abstract:
The goal of this paper is to develop a fraud risk assessment model based on both relational and financial data and to test the impact of relationships between Russian non-public companies on the likelihood of financial fraud. Relationships here mean various linkages between companies, such as parent-subsidiary relationships and person-related relationships; these linkages may provide additional opportunities for committing fraud. Person-related relationships appear when firms share a director, or the director owns another firm. To measure the relationships, the number of companies owned by the CEO, the number of companies managed by the CEO, and the number of subsidiaries were calculated. Moreover, a dummy variable describing the existence of a parent company was included in the model. Control variables such as financial leverage and return on assets were also implemented because they describe the motivating factors of fraud. To check the hypotheses about the influence of the chosen parameters on the likelihood of financial fraud, information about person-related relationships between companies, the existence of a parent company and subsidiaries, profitability and the level of debt was collected. The resulting sample consists of 160 Russian non-public firms: 80 fraudsters and 80 non-fraudsters operating in 2006-2017. The dependent variable is dichotomous, taking the value 1 if the firm is engaged in financial crime and 0 otherwise. Employing a probit model, it was revealed that the number of companies owned or managed by the CEO of the firm has a significant impact on the likelihood of financial fraud. The results obtained indicate that the more companies are affiliated with the CEO, the higher the likelihood that the company will be involved in financial crime. The forecast accuracy of the model is about 80%.
Thus, a model based on both relational and financial data gives a high level of forecast accuracy.
Keywords: financial fraud, fraud prediction, non-public companies, regression analysis, relational data
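A probit specification of this kind can be sketched as follows. The data are simulated, and the predictors (CEO-affiliated firm count, parent-company dummy) are illustrative stand-ins for the study's variables, not its dataset:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(42)
n = 160  # same sample size as the study; the data themselves are simulated

# Illustrative predictors: CEO-affiliated firm count and a parent-company dummy
n_ceo_firms = rng.poisson(2.0, n).astype(float)
has_parent = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), n_ceo_firms, has_parent])

# Simulate fraud labels with a positive CEO-affiliation effect
beta_true = np.array([-1.5, 0.6, 0.3])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def neg_log_lik(beta):
    # Probit log-likelihood: P(y = 1 | x) = Phi(x' beta)
    p = np.clip(stats.norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

res = optimize.minimize(neg_log_lik, np.zeros(3), method="BFGS")
beta_hat = res.x
accuracy = ((stats.norm.cdf(X @ beta_hat) > 0.5) == y).mean()
```

A positive estimated coefficient on the CEO-affiliation count mirrors the paper's finding that more CEO-affiliated companies raise the fraud likelihood.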
Procedia PDF Downloads 119
13598 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups
Authors: Naushad Mamode Khan
Abstract:
The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), handling equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify and thus restricts likelihood-based estimation. The joint generalized quasi-likelihood approach (GQL-I) was considered instead, but it is rather computationally intensive and may even fail to estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments; it is shown to yield estimates as efficient as GQL-I while being far more computationally stable.
Keywords: longitudinal, Com-Poisson, ill-conditioned, INAR(1), GLMs, GQL
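The dispersion behaviour of the CMP model can be illustrated numerically: its pmf is proportional to lambda^y / (y!)^nu, with nu = 1 recovering the Poisson, nu < 1 giving over-dispersion and nu > 1 under-dispersion. A minimal sketch (the parameter values are arbitrary):

```python
import math

def cmp_pmf(lam, nu, max_y=60):
    # Unnormalised log-weights y*log(lam) - nu*log(y!), normalised numerically;
    # lgamma(y + 1) = log(y!) avoids overflow for fractional nu
    logw = [y * math.log(lam) - nu * math.lgamma(y + 1) for y in range(max_y)]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    z = sum(w)
    return [wi / z for wi in w]

def mean_var(pmf):
    mean = sum(y * p for y, p in enumerate(pmf))
    var = sum((y - mean) ** 2 * p for y, p in enumerate(pmf))
    return mean, var

m_eq, v_eq = mean_var(cmp_pmf(3.0, 1.0))      # nu = 1: Poisson, var == mean
m_over, v_over = mean_var(cmp_pmf(3.0, 0.5))  # nu < 1: over-dispersed, var > mean
m_under, v_under = mean_var(cmp_pmf(3.0, 2.0))  # nu > 1: under-dispersed, var < mean
```

The truncation at max_y is a numerical convenience; the infinite normalising series converges for nu > 0.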
Procedia PDF Downloads 354
13597 Optimal Design of Step-Stress Partially Life Test Using Multiply Censored Exponential Data with Random Removals
Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam
Abstract:
The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit and the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e., ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT), in which tested units are subjected to both normal and accelerated conditions, is more suitable. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The lifetimes of the units under test are assumed to follow an exponential distribution, and the removals from the test are assumed to have binomial distributions. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated using a simulation algorithm.
Keywords: binomial distribution, D-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study
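The exponential likelihood under censoring has a convenient closed form worth recalling: with type-I censoring, the MLE of the mean life is the total time on test divided by the number of observed failures. A simulation sketch at a single stress level (the step-stress, tampering and random-removal machinery of the paper is omitted, and the parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

theta_true = 100.0   # hypothetical mean lifetime under test conditions
n, c = 200, 150.0    # sample size and fixed censoring time

t = rng.exponential(theta_true, n)
time_on_test = np.minimum(t, c)   # censored units contribute c, failed units t
n_failures = (t <= c).sum()

# MLE of the exponential mean under type-I censoring:
# total time on test / number of observed failures
theta_hat = time_on_test.sum() / n_failures
```

The same total-time-on-test structure underlies the step-stress likelihood, with each stress level contributing its own exposure term.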
Procedia PDF Downloads 319
13596 The Effects of Microsilica, Superplasticizer and Air Entrainer in Lightweight Expanded Perlite Concrete
Authors: Yousef Zandi, Hoseyn Leka, Mahin Ganadi
Abstract:
This paper presents the results of a laboratory study on the effect of the simultaneous use of microsilica, superplasticizer and air-entraining additives on the compressive strength of lightweight perlite concrete. In this study, 63 test specimens with different percentages and mixtures including microsilica, superplasticizer and air entrainer were used; a further 63 test specimens with mixtures including only microsilica and air entrainer were prepared for comparison purposes. The mixtures used lightweight perlite aggregate, microsilica, superplasticizer, air entrainer, type I cement, sand and water. Laboratory test results showed that the use of superplasticizer increased the workability of lightweight perlite concrete and raised its compressive strength without any change in the water/cement ratio. Since the compressive strength of concrete depends on the water/cement ratio, it was expected that the use of air entrainer and superplasticizer would lower the water/cement ratio and raise strength considerably. It was concluded that the simultaneous use of air entrainer and superplasticizer was not economical, and that using air entrainer and microsilica is better than using air entrainer, superplasticizer and microsilica together. The best results were obtained using 10% microsilica and 0.5% air entrainer.
Keywords: perlite, microsilica, air entrainer, superplasticizer
Procedia PDF Downloads 384
13595 Clinical and Radiological Features of Adenomyosis and Its Histopathological Correlation
Authors: Surabhi Agrawal Kohli, Sunita Gupta, Esha Khanuja, Parul Garg, P. Gupta
Abstract:
Background: Adenomyosis is a common gynaecological condition that affects menstruating women. Uterine enlargement, dysmenorrhoea and menorrhagia are regarded as the cardinal clinical symptoms of adenomyosis. Classically, it was thought that, compared with ultrasonography, MRI enables more accurate diagnosis when adenomyosis is suspected. Materials and Methods: 172 subjects with complaints of heavy menstrual bleeding (HMB), dyspareunia, dysmenorrhea and chronic pelvic pain were enrolled after informed consent. A detailed history of the enrolled subjects was taken, followed by a clinical examination. These patients were then subjected to transvaginal sonography (TVS), in which myometrial echo texture, the presence of myometrial cysts and blurring of the endomyometrial junction were noted. MRI followed, noting junctional zone thickness and myometrial cysts. After hysterectomy, a histopathological diagnosis was obtained. Results: 78 participants were analysed. The mean age was 44.2 years, and 43.5% had a parity of 4 or more. HMB was present in 97.8% and dysmenorrhea in 93.48% of histopathology-positive patients. TVS and MRI had a sensitivity of 89.13% and 80.43%, a specificity of 90.62% and 84.37%, a positive likelihood ratio of 9.51 and 5.15, a negative likelihood ratio of 0.12 and 0.23, a positive predictive value of 93.18% and 88.1%, a negative predictive value of 85.29% and 75%, and a diagnostic accuracy of 89.74% and 82.5%, respectively. Comparison of sensitivity (p=0.289) and specificity (p=0.625) showed no statistically significant difference between TVS and MRI. Conclusion: The prevalence of adenomyosis was 30.23%. HMB with dysmenorrhoea and chronic pelvic pain helps in diagnosis. TVS (endomyometrial junction blurring) is both sensitive and specific in diagnosing adenomyosis, without the need for an additional diagnostic tool. TVS and MRI are equally efficient; however, because of certain additional advantages of TVS over MRI, it may be used as the first choice of imaging.
MRI may be used additionally in difficult cases as well as in patients with existing co-pathologies.
Keywords: adenomyosis, heavy menstrual bleeding, MRI, TVS
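The reported positive likelihood ratios translate directly into post-test probabilities via Bayes' rule on the odds scale. A short sketch using the prevalence and positive LRs quoted above:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    # Odds form of Bayes' rule: post-test odds = pre-test odds * LR
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prevalence = 0.3023  # study prevalence of adenomyosis used as pre-test probability
p_after_tvs = post_test_probability(prevalence, 9.51)  # TVS positive LR
p_after_mri = post_test_probability(prevalence, 5.15)  # MRI positive LR
```

At this prevalence, a positive TVS raises the probability of adenomyosis to roughly 0.80 and a positive MRI to roughly 0.69, which is one way to read the TVS advantage in the likelihood ratios.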
Procedia PDF Downloads 498
13594 Inference for Compound Truncated Poisson Lognormal Model with Application to Maximum Precipitation Data
Authors: M. Z. Raqab, Debasis Kundu, M. A. Meraou
Abstract:
In this paper, we analyze maximum precipitation data recorded during a particular period of time at different stations of the Global Historical Climatology Network of the USA. One important point to mention is that some stations are shut down on certain days for one reason or another, and the maximum values are recorded by excluding those readings. It is assumed that the number of stations in operation follows a zero-truncated Poisson random variable and that daily precipitation follows a lognormal random variable; we call this model a compound truncated Poisson lognormal model. The proposed model has three unknown parameters, and it can take a variety of shapes. The maximum likelihood estimators can be obtained quite conveniently using the Expectation-Maximization (EM) algorithm, and approximate maximum likelihood estimators are also derived. The associated confidence intervals can be obtained from the observed Fisher information matrix. Simulations have been performed to check the performance of the EM algorithm, and it is observed that the EM algorithm works quite well in this case. When we analyze the precipitation data set using the proposed model, it provides a better fit than some of the existing models.
Keywords: compound Poisson lognormal distribution, EM algorithm, maximum likelihood estimation, approximate maximum likelihood estimation, Fisher information, skew distribution
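Under the stated assumptions, the maximum of N lognormal readings with N zero-truncated Poisson(lambda) has density f(x) = lambda * phi(z) * exp(lambda * Phi(z)) / (x * sigma * (exp(lambda) - 1)), where z = (ln x - mu)/sigma, so the likelihood can also be maximised directly by numerical optimisation. The sketch below uses this direct MLE as a stand-in for the paper's EM algorithm, with arbitrary parameter values:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

def simulate(lam, mu, sigma, size):
    # Daily maximum over N operating stations, N ~ zero-truncated Poisson(lam)
    out = np.empty(size)
    for i in range(size):
        n = 0
        while n == 0:          # rejection sampling for the zero-truncated count
            n = rng.poisson(lam)
        out[i] = np.exp(mu + sigma * rng.standard_normal(n)).max()
    return out

def neg_log_lik(theta, x):
    lam, mu, sigma = theta
    if lam <= 0 or sigma <= 0:
        return np.inf
    z = (np.log(x) - mu) / sigma
    logf = (np.log(lam) + stats.norm.logpdf(z) - np.log(x * sigma)
            + lam * stats.norm.cdf(z) - np.log(np.expm1(lam)))
    return -logf.sum()

x = simulate(3.0, 1.0, 0.5, 2000)
res = optimize.minimize(neg_log_lik, x0=[1.0, 0.5, 1.0], args=(x,),
                        method="Nelder-Mead", options={"maxiter": 3000})
lam_hat, mu_hat, sigma_hat = res.x
```

With a few thousand observations the three parameters are recovered close to their simulated values; the paper's EM algorithm targets the same likelihood but exploits the latent station count.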
Procedia PDF Downloads 108
13593 Effect of Taper Pin Ratio on Microstructure and Mechanical Property of Friction Stir Welded AZ31 Magnesium Alloy
Authors: N. H. Othman, N. Udin, M. Ishak, L. H. Shah
Abstract:
This study focuses on the effect of the taper pin ratio on friction stir welding of magnesium alloy AZ31. Two pieces of AZ31 alloy with a thickness of 6 mm were friction stir welded using a conventional milling machine. The shoulder diameter used in this experiment was fixed at 18 mm. The taper pin ratios were varied at 6:6, 6:5, 6:4, 6:3, 6:2 and 6:1. The rotational speeds used were 500 rpm, 1000 rpm and 1500 rpm, and the welding speeds were 150 mm/min, 200 mm/min and 250 mm/min. The microstructure of the welded area was observed using an optical microscope. Equiaxed grains were observed in the TMAZ and stir zone, indicating fully plastic deformation. A taper pin ratio of 6:1 causes low heat input to the material because of the small contact surface between the tool and the stirred material compared to the other taper pin ratios. The grain size of the stir zone increased with an increasing ratio of rotational speed to traverse speed due to higher heat input, and wormholes were produced when excessive heat input was applied. Tensile tests were used to evaluate the mechanical properties. Specimens welded with a taper pin ratio of 6:1 showed higher tensile strength than the other taper pin ratios, up to 204 MPa; moreover, the 6:1 ratio gave the best tensile strength at a rotational speed of 500 rpm and a welding speed of 150 mm/min.
Keywords: friction stir welding, magnesium AZ31, cylindrical taper tool, taper pin ratio
Procedia PDF Downloads 286
13592 Designing a Low Speed Wind Tunnel for Investigating Effects of Blockage Ratio on Heat Transfer of a Non-Circular Tube
Authors: Arash Mirabdolah Lavasani, Taher Maarefdoost
Abstract:
The effect of blockage ratio on heat transfer from a non-circular tube was studied experimentally. For this experiment, a suction-type low-speed wind tunnel with test section dimensions of 14×14×40 and a velocity in the range of 7-20 m/s was designed. The blockage ratio varied between 1.5 and 7, and the Reynolds number based on the equivalent diameter varied in the range of 7.5×10³ to 17.5×10³. The results show that increasing the blockage ratio from 1.5 to 7 decreased the drag coefficient of the cam-shaped tube by about 55 percent. With increasing Reynolds number, the Nusselt number of the cam-shaped tube increases by about 40 to 48 percent across all blockage ratios.
Keywords: wind tunnel, non-circular tube, blockage ratio, experimental heat transfer, cross-flow
Procedia PDF Downloads 348
13591 Application of Golden Ratio in Contemporary Textile Industry and Its Effect on Consumer Preferences
Authors: Rafia Asghar, Abdul Hafeez
Abstract:
This research aims to determine the influence of Fibonacci numbers and the golden ratio in textile designs. The study was carried out by collecting a variety of designs from different textile industries. Top textile designers were also interviewed regarding the golden ratio and its application in their designs and their design execution process. The study revealed that most of the designs fulfilled the golden ratio, and the designs that conformed to the golden ratio were preferred by consumers.
Keywords: golden ratio, Fibonacci numbers, textile design, designs
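Whether a design's proportions "fulfil the golden ratio" can be checked numerically: the ratio of consecutive Fibonacci numbers converges to phi, roughly 1.618, which is the target proportion. A minimal sketch (the rectangle check and its tolerance are illustrative):

```python
def fibonacci_ratios(count):
    # Ratios of consecutive Fibonacci numbers: 2/1, 3/2, 5/3, 8/5, ...
    a, b = 1, 1
    ratios = []
    for _ in range(count):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

phi = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.6180339887

def is_golden(width, height, tolerance=0.05):
    # Does a rectangle's long/short side ratio approximate phi?
    long_side, short_side = max(width, height), min(width, height)
    return abs(long_side / short_side - phi) <= tolerance

ratios = fibonacci_ratios(20)      # converges rapidly toward phi
panel_check = is_golden(89, 55)    # 89 and 55 are consecutive Fibonacci numbers
```

A repeat unit or motif laid out on consecutive Fibonacci dimensions therefore satisfies the golden proportion automatically.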
Procedia PDF Downloads 718
13590 Development of Standard Thai Appetizer in Rattanakosin Era's Standard: Case Study of Thai Steamed Dumpling
Authors: Nunyong Fuengkajornfung, Pattama Hirunyophat, Tidarat Sanphom
Abstract:
The objectives of this research were: to establish a standard recipe for Thai steamed dumpling, to study the ratio of modified starch in Thai steamed dumpling, and to analyze its chemical composition and test for Escherichia coli. The experiment was designed in two stages: first, the standard recipe for Thai steamed dumpling was studied; second, the ratio of rice flour to modified starch was varied over three levels, 90:10, 70:30 and 50:50. Sensory evaluation of color, smell, taste, texture and overall liking used the 9-point hedonic scale, in an experiment laid out as a randomized complete block design (RCBD). The statistics used in data analysis were means, standard deviation, one-way ANOVA and Duncan's new multiple range test, with regression equations, at a statistical significance level of .05. The standard recipe was selected from three candidate recipes by sensory evaluation of color, odor, taste, spiciness, texture and total acceptance; the second recipe proved the most suitable for development. Among the three rice flour to modified starch ratios, the 50:50 condition received the best scores (like moderately to like very much on the 9-point hedonic scale). Chemical analysis showed moisture 58.63%, fat 5.45%, protein 4.35%, carbohydrate 30.45% and ash 1.12%. Escherichia coli was not detected in laboratory testing.
Keywords: Thai snack in Rattanakosin era, Thai steamed dumpling, modified starch, recipe standard
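The comparison of the three flour ratios rests on a one-way analysis of variance of the hedonic scores. A sketch with simulated 9-point scores (the panel data below are invented, not the study's):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)

# Simulated hedonic scores (1-9) for three rice flour : modified starch ratios;
# the group means are invented, with the 50:50 ratio set highest for illustration
scores_90_10 = np.clip(rng.normal(6.3, 1.0, 30), 1, 9)
scores_70_30 = np.clip(rng.normal(6.9, 1.0, 30), 1, 9)
scores_50_50 = np.clip(rng.normal(7.8, 1.0, 30), 1, 9)

f_stat, p_value = f_oneway(scores_90_10, scores_70_30, scores_50_50)
significant = p_value < 0.05
best = max([("90:10", scores_90_10.mean()),
            ("70:30", scores_70_30.mean()),
            ("50:50", scores_50_50.mean())], key=lambda kv: kv[1])[0]
```

A significant F test would then be followed by a multiple-comparison procedure such as Duncan's test, as in the study, to decide which ratios differ.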
Procedia PDF Downloads 324
13589 Effect of Soaking Period of Clay on Its California Bearing Ratio Value
Authors: Robert G. Nini
Abstract:
The quality of road pavement is affected mostly by the type of sub-grade acting as the road foundation. Road degradation is related to many factors, especially the climatic conditions and the quality and thickness of the base materials. The thickness of this layer depends on its California Bearing Ratio (CBR) test value, which in turn is highly affected by the quantity of water infiltrated under the road after heavy rain. The capacity of the base material to drain out its water is a predominant factor, because any change in moisture content causes a change in sub-grade strength. This paper studies the effect of the soaking period of soil, especially clay, on its CBR value. For this reason, many clayey samples were collected in order to study the effect of the soaking period on the CBR value. On each soil, two groups of experiments were performed: main tests consisting of the Proctor and CBR tests, and identification tests such as the Atterberg limits. Each soil sample was first subjected to the Proctor test in order to find its optimum moisture content, which was then used to perform the CBR test. Four CBR tests were performed on each soil with different soaking periods: the first without soaking, the second after two days of soaking, the third after four days, and the last after eight days. By comparing the results of CBR tests performed with different soaking times, a more detailed understanding was gained of the role of water in reducing the CBR of soil. In fact, on extending the soaking period, the CBR was found to drop quickly over the first two days and more slowly thereafter. A precise reduction factor relating the CBR to the soaking period is given at the end of this paper.
Keywords: California Bearing Ratio, clay, Proctor test, soaking period, sub-grade
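The fast-then-slow drop described above is the signature of an exponential decay toward a residual soaked value, which is one way such a reduction factor can be expressed. A sketch on invented CBR figures (the numbers are illustrative only, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented CBR values (%) at the four soaking periods used in the paper
days = np.array([0.0, 2.0, 4.0, 8.0])
cbr = np.array([12.0, 6.5, 5.2, 4.6])

def soaked_cbr(t, cbr_residual, cbr_unsoaked, k):
    # Exponential decay from the unsoaked value toward a residual plateau
    return cbr_residual + (cbr_unsoaked - cbr_residual) * np.exp(-k * t)

params, _ = curve_fit(soaked_cbr, days, cbr, p0=[4.0, 12.0, 0.5])
cbr_residual, cbr_unsoaked, k = params

# Reduction factor after two days of soaking, relative to the unsoaked CBR
reduction_2_days = 1.0 - soaked_cbr(2.0, *params) / soaked_cbr(0.0, *params)
```

On these invented figures, roughly 45% of the unsoaked CBR is lost in the first two days, matching the qualitative trend the paper reports.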
Procedia PDF Downloads 132
13588 Strength of Soft Clay Reinforced with Polypropylene Column
Authors: Muzamir Hasan, Anas Bazirgan
Abstract:
The granular column is a technique that improves bearing capacity, accelerates the dissipation of excess pore water pressure and reduces settlement in weak soft soils. This research investigates the role of polypropylene (PP) columns in improving the shear strength and compressibility of soft reconstituted kaolin clay by determining the effects of the area replacement ratio, height penetration ratio and volume replacement ratio of a single polypropylene column on the strength characteristics. Reinforced kaolin samples, 50 mm in diameter and 100 mm in height, were subjected to unconfined compression tests (UCT) and unconsolidated undrained (UU) triaxial tests. With PP column reinforcement at height penetration ratios of 0.8, 0.5 and 0.3, shear strength increased by approximately 5.27%, 26.22% and 64.28%, and by 37.14%, 42.33% and 51.17%, for area replacement ratios of 25% and 10.24%, respectively. Meanwhile, UU testing showed increases in shear strength of 24.01%, 23.17% and 23.49%, and of 28.79%, 27.29% and 30.81%, for the same ratios. Based on the UCT results, the undrained shear strength generally increased with decreasing height penetration ratio. Based on the Mohr-Coulomb failure criteria from the UU test results, the installation of polypropylene columns did not produce any significant difference in the effective friction angle; however, there was an increase in the apparent cohesion and undrained shear strength of the kaolin clay. In conclusion, polypropylene columns greatly improved the shear strength and could therefore be implemented to reduce the cost of soil improvement as a replacement for non-renewable materials.
Keywords: polypropylene, UCT, UU test, kaolin S300, ground improvement
Procedia PDF Downloads 329
13587 An Experimental Study on the Effects of Aspect Ratio of a Rectangular Microchannel on the Two-Phase Frictional Pressure Drop
Authors: J. A. Louw Coetzee, Josua P. Meyer
Abstract:
The thermodynamic properties of different refrigerants, in combination with the geometrical properties (hydraulic diameter, aspect ratio and inclination angle) of a rectangular microchannel, determine the two-phase frictional pressure gradient. The effect of aspect ratio on frictional pressure drop has not been sufficiently investigated for adiabatic two-phase flow and condensation in rectangular microchannels. This experimental study measured the frictional pressure gradient in a rectangular microchannel with a hydraulic diameter of 900 μm. The aspect ratio of this microchannel was varied over a range from 0.3 to 3 in order to capture the effect of aspect ratio variation. A commonly used refrigerant, R134a, was used in tests that spanned a mass flux range of 100 to 1000 kg m⁻² s⁻¹ and the whole vapour quality range. This study formed part of a refrigerant condensation experiment and was therefore conducted at a saturation temperature of 40 °C. The study found little influence of the aspect ratio on the frictional pressure drop at the test conditions. The data were compared to some of the well-known micro- and macro-channel two-phase pressure drop correlations. Most of the separated flow correlations predicted the pressure drop data well at mass fluxes larger than 400 kg m⁻² s⁻¹ and vapour qualities above 0.2.
Keywords: aspect ratio, microchannel, two-phase, pressure gradient
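Among the macro-channel separated flow correlations referred to above, the Lockhart-Martinelli method in its Chisholm form is the classic example. A sketch with hypothetical single-phase gradients (the value C = 20 assumes turbulent liquid and turbulent gas; the input gradients are invented):

```python
import math

def two_phase_gradient(dpdz_liquid, dpdz_gas, C=20.0):
    # Martinelli parameter X from the liquid-only and gas-only gradients,
    # then the Chisholm two-phase multiplier phi_l^2 = 1 + C/X + 1/X^2
    X = math.sqrt(dpdz_liquid / dpdz_gas)
    phi_l_sq = 1.0 + C / X + 1.0 / X ** 2
    return phi_l_sq * dpdz_liquid

# Hypothetical single-phase frictional gradients in Pa/m
dp_two_phase = two_phase_gradient(dpdz_liquid=500.0, dpdz_gas=2000.0)
```

Microchannel adaptations of this correlation mainly replace the constant C with functions of mass flux and channel size, which is why macro-channel values can drift at low mass flux.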
Procedia PDF Downloads 366
13586 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a feature vector of some fixed dimension from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling, an operation that overlooks the details of greatest concern to forensic experts. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the test images. The results supported the view that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were examined to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, whereas forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level; at the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.
Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
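Score-level fusion of the kind described can be sketched as summing log-likelihood ratios computed from fits to the genuine and impostor score distributions. All score data below are simulated Gaussians, not the experiment's, and the Gaussian fit is one simple modelling choice:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated similarity scores for two channels: automated system and examiner
genuine_sys = rng.normal(0.80, 0.10, 500)
impostor_sys = rng.normal(0.40, 0.10, 500)
genuine_exp = rng.normal(0.70, 0.15, 500)
impostor_exp = rng.normal(0.45, 0.15, 500)

def log_lr(score, genuine, impostor):
    # Log-likelihood ratio under Gaussian fits to each score population
    return (norm.logpdf(score, genuine.mean(), genuine.std())
            - norm.logpdf(score, impostor.mean(), impostor.std()))

# Fuse at the score level by summing the channel LLRs; decide at threshold 0
fused_genuine = (log_lr(genuine_sys, genuine_sys, impostor_sys)
                 + log_lr(genuine_exp, genuine_exp, impostor_exp))
fused_impostor = (log_lr(impostor_sys, genuine_sys, impostor_sys)
                  + log_lr(impostor_exp, genuine_exp, impostor_exp))

false_reject_rate = (fused_genuine < 0).mean()
false_accept_rate = (fused_impostor >= 0).mean()
```

Because the two channels carry partly complementary information, the fused LLR separates genuine from impostor comparisons better than either channel alone, mirroring the lower false reject rate reported at a fixed false accept rate.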
Procedia PDF Downloads 130