Search results for: cumulative
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 364

334 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation

Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro

Abstract:

This study aimed to evaluate the implications of block size and testing order on the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments; precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model was assumed with random tester and treatment effects and a fixed test-order effect. Analysis with a cumulative random-effects probit link model was very similar, leading to essentially the same conclusions, so for simplicity we present the results under the Gaussian assumption. The R (CRAN) library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, while the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and the cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating acceptance. However, providing a large number of samples can help to improve sample discrimination.
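
As a concrete illustration of the modelling pipeline described above, the following R sketch fits the Gaussian mixed model with lmer and the corresponding cumulative link probit model with clmm. The grades are simulated stand-ins for the study's data, and all variable names are hypothetical.

```r
library(lme4)     # lmer: Gaussian linear mixed models
library(ordinal)  # clmm: cumulative link mixed models

set.seed(1)
# Simulated stand-in for the design: 112 testers each grading 16 treatments
# in a randomized order (order = position of the sample in the session)
d <- expand.grid(order = 1:16, tester = factor(1:112))
d$treatment <- factor(ave(d$order, d$tester, FUN = sample))
d$grade <- pmin(9, pmax(1, round(5 + 0.3 * as.numeric(d$treatment) / 4 -
                                   0.05 * d$order + rnorm(nrow(d)))))

# Gaussian assumption: random tester and treatment effects, fixed order effect
m_gauss <- lmer(grade ~ order + (1 | tester) + (1 | treatment), data = d)

# Cumulative link probit model on the nine-point hedonic scale
d$grade_f <- factor(d$grade, ordered = TRUE)
m_probit <- clmm(grade_f ~ order + (1 | tester) + (1 | treatment),
                 link = "probit", data = d)
summary(m_gauss)
```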

Keywords: acceptance, block size, mixed linear model, testing order

Procedia PDF Downloads 293
333 Stochastic Prioritization of Dependent Actuarial Risks: Preferences among Prospects

Authors: Ezgi Nevruz, Kasirga Yildirak, Ashis SenGupta

Abstract:

Comparing or ranking risks is the main motivating factor behind the human trait of making choices. Cumulative prospect theory (CPT) is a preference theory approach that evaluates perception and bias in decision making under risk and uncertainty. We aim to investigate the aggregate claims of different risk classes in terms of their comparability and amenability to ordering when the impact of risk perception is considered. To this end, we prioritize the aggregate claims, taken as actuarial risks, by using various stochastic ordering relations, such as stochastic dominance and stop-loss dominance, proposed within the framework of partial order theory. We take into account the dependency of the individual claims exposed to similar environmental risks. First, we modify the zero-utility premium principle in order to obtain a solution for the stop-loss premium under CPT. Then, we propose a stochastic stop-loss dominance of the aggregate claims and find a relation between the stop-loss dominance and the first-order stochastic dominance under the dependence assumption, using properties of familiar as well as some emerging multivariate claim distributions.
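
To make the role of risk perception concrete, the following toy R sketch (our illustration, not the authors' derivation) distorts a stop-loss premium with a Tversky-Kahneman-style probability weighting function, computing a Choquet-type integral of the weighted survival function. The lognormal claim distribution, the retention level, and gamma = 0.61 are illustrative assumptions.

```r
# Tversky-Kahneman probability weighting (gamma = 0.61 is an illustrative value)
w <- function(p, gamma = 0.61) p^gamma / (p^gamma + (1 - p)^gamma)^(1 / gamma)

# Assumed survival function of the aggregate claim S (lognormal, illustrative)
surv <- function(x) plnorm(x, meanlog = log(100), sdlog = 0.5, lower.tail = FALSE)

# Classical stop-loss premium: E[(S - d)+] = integral of the survival above d
stop_loss <- function(d) integrate(surv, d, Inf)$value

# Perception-adjusted premium: integrate the weighted survival instead
stop_loss_cpt <- function(d) integrate(function(x) w(surv(x)), d, Inf)$value

c(classical = stop_loss(120), weighted = stop_loss_cpt(120))
```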

Keywords: cumulative prospect theory, partial order theory, risk perception, stochastic dominance, stop-loss dominance

Procedia PDF Downloads 293
332 Energy Loss Reduction in Oil Refineries through Flare Gas Recovery Approaches

Authors: Majid Amidpour, Parisa Karimi, Marzieh Joda

Abstract:

For the last few years, the release of burned undesirable by-products has become a challenging issue in the oil industry. Flaring, as one of the main sources of air contamination, has detrimental and long-lasting effects on human health and is considered a substantial cause of energy losses worldwide. This research studies the implications of two main flare gas recovery methods at three oil refineries, all in Iran, designated case I, case II, and case III in order of increasing production capacity. In the proposed methods, flare gases are converted into more valuable products before combustion in the flare networks. The first approach involves collecting, compressing and converting the flare gas into smokeless fuel that can be used in the fuel gas system of the refineries. The other scenario utilizes the flare gas as a feed to the liquefied petroleum gas (LPG) production unit already established in the refineries. The processes of these scenarios are simulated, and the capital investment is calculated for each procedure. The cumulative profits of the scenarios are evaluated using the Net Present Value method. Furthermore, a sensitivity analysis based on the total propane and butane mole fraction is carried out to make a rational comparison for the LPG production approach, and the results are illustrated for different mole fractions of propane and butane. As the mole fraction of propane and butane contained in LPG differs between summer and winter, the results for the LPG scenario are presented for each season. The simulations show that the cumulative profit in the fuel gas production scenario and the LPG production rate increase with the capacity of the refineries. Moreover, the investment return time in the LPG production method declines and then rises with increasing C3 and C4 content; the minimum return time occurs at propane-plus-butane concentrations of 0.7, 0.6, and 0.7 in cases I, II, and III, respectively. Based on a comparison of investment return time and cumulative profit, fuel gas production is the superior scenario for all three case studies.
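
For readers unfamiliar with the evaluation criterion, a minimal R sketch of the Net Present Value comparison follows; the cash flows and discount rate are hypothetical placeholders, not the refineries' figures.

```r
# NPV of each scenario: capital outlay in year 0, then annual operating
# profits over the project horizon (all figures hypothetical, million $)
npv <- function(cashflows, rate) {
  sum(cashflows / (1 + rate)^(seq_along(cashflows) - 1))
}

fuel_gas <- c(-20, rep(6, 7))   # capex, then yearly profit
lpg      <- c(-28, rep(7, 7))
c(fuel_gas = npv(fuel_gas, 0.10), lpg = npv(lpg, 0.10))
```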

Keywords: flare gas reduction, liquefied petroleum gas, fuel gas, net present value method, sensitivity analysis

Procedia PDF Downloads 122
331 Various Factors Affecting Students' Performance in a Saudi Medical School

Authors: Raneem O. Salem, Najwa Al-Mously, Nihal Mohamed Nabil, Abdulmohsen H. Al-Zalabani, Abeer F. Al-Dhawi, Nasser Al-Hamdan

Abstract:

Objective: Various demographic and educational factors affect the academic performance of undergraduate medical students. The objective of this study is to identify these factors and correlate them with the GPA of the students. Methods: A cross-sectional study design utilizing the grade point averages (GPAs) of two cohorts of students in both levels of the pre-clinical phase. In addition, a self-administered questionnaire was used to evaluate the effect of these factors on students with poor and good cumulative GPA. Results: Among the various factors studied, gender, marital status, and the transportation used to reach the faculty significantly affected the academic performance of students. Students with a cumulative GPA of 3.0 or greater differed significantly from those with a GPA of less than 3.0: a high GPA was more frequent among female students and married students, and varied with the type of transportation used to reach the college. Factors including age, educational factors, and the type of transportation used were shown to create a significant difference in GPA between males and females. Conclusion: Factors such as age, gender, marital status, learning resources, study time, and the transportation used significantly affect medical students' GPA, both for the batch as a whole and when analyzed by gender.

Keywords: academic performance, educational factors, learning resources, study time, gender, socio-demographic factors

Procedia PDF Downloads 240
330 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization

Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu

Abstract:

This paper investigated a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing method was applied to image data frames that conformed to the Manhattan world assumption; when similar data frames appeared subsequently, they could be used to eliminate drift in sensor pose estimation, thereby reducing cumulative errors and optimizing mapping and positioning. Experimental verification showed that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of indoor environments. This provides effective technical support for applications such as indoor navigation and robot control.

Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loop closure detection

Procedia PDF Downloads 30
329 Application of Hyperbinomial Distribution in Developing a Modified p-Chart

Authors: Shourav Ahmed, M. Gulam Kibria, Kais Zaman

Abstract:

Control charts graphically verify variation in quality parameters. Attribute-type control charts deal with quality parameters that can only hold two states, e.g., good or bad, yes or no, etc. At present, the p-control chart is most commonly used for attribute-type data. In the construction of a p-control chart using the binomial distribution, the value of the proportion non-conforming must be known or estimated from limited sample information. Because the hyperbinomial distribution treats the fraction non-conforming (p) as a random variable, unlike the constant value assumed in the binomial case, it reduces the risk of false detection. In this study, a statistical control chart based on the hyperbinomial distribution is proposed for the situation where no prior estimate of the proportion non-conforming is available and it must be estimated from limited sample information. We developed the control limits of the proposed modified p-chart using the mean and variance of the hyperbinomial distribution. The proposed modified p-chart can also utilize additional sample information when available. The study also validates the use of the modified p-chart by comparing with the result obtained using the cumulative distribution function of the hyperbinomial distribution. The study clearly indicates that using the hyperbinomial distribution in the construction of a p-control chart yields much more accurate estimates of quality parameters than using the binomial distribution.
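
For context, the classical 3-sigma p-chart that the paper modifies can be sketched in R as below; the modified chart would substitute the hyperbinomial mean and variance of the fraction non-conforming (whose moment formulas are not reproduced here) for the binomial ones.

```r
# Classical binomial p-chart limits: pbar +/- 3 * sqrt(pbar * (1 - pbar) / n).
# The modified chart in the paper replaces this mean/variance pair with the
# corresponding hyperbinomial moments.
p_chart_limits <- function(pbar, n) {
  se <- sqrt(pbar * (1 - pbar) / n)
  c(LCL = max(0, pbar - 3 * se), CL = pbar, UCL = min(1, pbar + 3 * se))
}
p_chart_limits(pbar = 0.08, n = 100)  # illustrative values
```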

Keywords: binomial distribution, control charts, cumulative distribution function, hyperbinomial distribution

Procedia PDF Downloads 236
328 Modified CUSUM Algorithm for Gradual Change Detection in Time Series Data

Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin

Abstract:

The main objective in a change detection problem is to develop algorithms for efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm, based on a likelihood ratio test procedure, to detect the start and end of a time-varying linear drift in the mean of a time series. The design, implementation and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detection of a gradual change in mean. The algorithm is then applied to randomly generated time series data with a gradual linear trend in mean to demonstrate its usefulness.
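
As background for the comparison, a standard two-sided CUSUM recursion (reference value k, decision threshold h) is sketched below in R; the MCUSUM of the paper extends this likelihood-ratio construction to target a linear drift rather than an abrupt shift. The injected drift is purely illustrative.

```r
# Standard two-sided CUSUM: alarm when either cumulative sum exceeds h
cusum_alarm <- function(x, mu0, k, h) {
  s_hi <- 0; s_lo <- 0
  for (t in seq_along(x)) {
    s_hi <- max(0, s_hi + (x[t] - mu0) - k)
    s_lo <- max(0, s_lo - (x[t] - mu0) - k)
    if (s_hi > h || s_lo > h) return(t)  # first alarm time
  }
  NA  # no alarm raised
}

set.seed(2)
x <- rnorm(200) + c(rep(0, 100), 0.02 * (1:100))  # gradual drift after t = 100
cusum_alarm(x, mu0 = 0, k = 0.5, h = 4)
```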

Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test

Procedia PDF Downloads 261
327 Spatial Cluster Analysis of Human Cases of Crimean Congo Hemorrhagic Fever Reported in Pakistan

Authors: Tariq Abbas, Younus Muhammad, Sayyad Aun Muhammad

Abstract:

Background: Crimean Congo hemorrhagic fever (CCHF) is a tick-borne viral zoonotic disease that has been notified from almost all regions of Pakistan. The aim of this study was to investigate the spatial distribution of CCHF cases reported to the National Institute of Health, Islamabad, during 2013. Methods: Spatial statistics tools were applied to detect the extent of spatial autocorrelation and clusters of the disease, based on the adjusted cumulative incidence per million population for each district. Results: The data analyses revealed a large multi-district cluster of high values in the uplands of Balochistan province near the Afghanistan border. Conclusion: The cluster included the following districts: Pishin; Qilla Abdullah; Qilla Saifullah; Quetta; Sibi; Zhob; and Ziarat. These districts may be given priority in CCHF surveillance, control programs, and further epidemiological research. The location of the cluster close to the borders of Afghanistan and Iran highlights the importance of the findings for organizations dealing with the disease at national, regional and global levels.

Keywords: Crimean Congo hemorrhagic fever, Pakistan, spatial autocorrelation, clusters, adjusted cumulative incidence

Procedia PDF Downloads 380
326 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step-Stress Accelerated Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of using progressive Type-I right censoring on the design of a simple step-stress accelerated life test, using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the p-th percentile of the failure-time distribution. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. Moreover, the results show that the use of direct or indirect priors affects the precision of the test.
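
For concreteness, the cumulative exposure model for a simple step-stress Weibull test with common shape can be sketched in R as follows; the shape, scale parameters and stress-change time are hypothetical, and the key step is the equivalent-time shift that keeps the CDF continuous at the stress change.

```r
# Cumulative exposure CDF for a simple step-stress Weibull test:
# stress 1 (scale eta1) until tau, then stress 2 (scale eta2 < eta1),
# with an equivalent starting time tau * eta2 / eta1 so F is continuous at tau.
F_ce <- function(t, tau, eta1, eta2, beta) {
  ifelse(t <= tau,
         1 - exp(-(t / eta1)^beta),
         1 - exp(-(((t - tau) + tau * eta2 / eta1) / eta2)^beta))
}
tt <- c(50, 100, 150, 300)
F_ce(tt, tau = 100, eta1 = 1000, eta2 = 300, beta = 1.5)  # illustrative values
```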

Keywords: reliability, accelerated life testing, cumulative exposure model, Bayesian estimation, progressive type-I censoring, Weibull distribution

Procedia PDF Downloads 471
325 A Study of The Factors Predicting Radiation Exposure to Contacts of Saudi Patients Treated With Low-Dose Radioactive Iodine (I-131)

Authors: Khalid A. Salman, Shereen Wagih, Tariq Munshi, Musaed Almalki, Safwan Zatari, Zahid Khan

Abstract:

Aim: To measure exposure levels of family members and caregivers of Saudi patients treated with low-dose I-131 therapy, and the household radiation exposure rate, in order to identify the factors that can affect radiation exposure. Patients and methods: All adult self-dependent patients with hyperthyroidism or thyroid cancer referred for low-dose radioactive I-131 therapy on an outpatient basis were included. Radiation protection procedures were explained in detail to the participants and their family members. TLDs were dispensed to each participant in sufficient quantity for his/her family members living in the household. TLDs were collected on the fifth day after dispensing from patients who agreed to have a home visit, during which the household was inspected and the level of radiation contamination of surfaces was measured. Results: Thirty-two patients were enrolled in the current study, with a mean age of 43.1 ± 17.1 years; 25 of them (78%) were female. I-131 therapy was given for thyroid cancer in twenty patients (63%) and for toxic goiter in the remaining twelve patients (37%), with an overall mean I-131 dose of 24.1 ± 7.5 mCi that was relatively higher in the former group. The overall number of household family members and helpers was 139, of whom 77 were female (55.4%) and 62 were male (44.6%), with a mean age of 29.8 ± 17.6 years. The mean period of contact with the patient was 7.6 ± 5.6 hours. The cumulative radiation exposure to all family members was below the exposure constraint (1 mSv), with a range of 109 to 503 uSv and a mean value of 220.9 ± 91 uSv. The data show a slightly higher exposure rate for family members of patients who received the higher I-131 doses (thyroid cancer patients) and for household members who spent longer with the patient, yet the difference is statistically insignificant (P>0.05). Besides, no significant correlation was found between the cumulative exposure of family members and their gender, age, socioeconomic standard, educational level or residential factors. In the 21 home visits, all readings from bedrooms, reception areas and kitchens were below hazardous limits (0.5 uSv/h), apart from bathrooms, which gave a slightly higher reading of 0.57 ± 0.39 uSv/h in the thyroid cancer group, who received the higher radiation dose. A statistically significant difference was found between the radiation exposure rate in bathrooms used by the patient and those used by family members only, with mean exposure rates of 0.701 ± 0.21 uSv/h and 0.17 ± 0.82 uSv/h respectively (p-value 0.018, <0.05). Conclusion: Family members of patients treated with low-dose I-131 on an outpatient basis show good compliance with radiation protection instructions if these are given properly, with cumulative radiation exposure evidently below the radiation exposure constraint of 1 mSv. The given I-131 dose, hours spent with the patient, age, gender, socioeconomic standard, educational level and residential factors showed no significant correlation with the cumulative radiation exposure. The patient's bathroom exhibits a higher radiation exposure rate, requiring stricter instructions regarding patient bathroom use and hygiene.

Keywords: family members, radiation exposure, radioactive iodine therapy, radiation safety

Procedia PDF Downloads 242
324 Parametric Modeling for Survival Data with Competing Risks Using the Generalized Gompertz Distribution

Authors: Noora Al-Shanfari, M. Mazharul Islam

Abstract:

The cumulative incidence function (CIF) is a fundamental approach for analyzing survival data in the presence of competing risks; it estimates the marginal probability of each competing event. Parametric modeling of the CIF has the advantage of fitting various shapes of CIF and estimating the impact of covariates with maximum efficiency. To calculate the covariate influence on the total CIF using a parametric model, it is essential to parametrize the baseline of the CIF. As the CIF is an improper function by nature, it is necessary to utilize an improper distribution when applying parametric models. The Gompertz distribution, which can serve as an improper distribution, is limited in its applicability as it only accounts for monotone hazard shapes. The generalized Gompertz distribution, however, can accommodate a wider range of hazard shapes, including unimodal, bathtub, and monotonically increasing or decreasing hazards. In this paper, the generalized Gompertz distribution is used to parametrize the baseline of the CIF, and the parameters of the proposed model are estimated using the maximum likelihood approach. The proposed model is compared with the existing Gompertz model using the Akaike information criterion, and appropriate statistical test procedures and model-fitting criteria are used to test the adequacy of the model. Both models are applied to the ‘colon’ dataset, which is available in the “biostat3” package in R.
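
Under one common parametrization of the generalized Gompertz distribution (an assumption on our part; the paper's exact parametrization may differ), the baseline CDF can be sketched in R as below. With a negative rate parameter the distribution is improper, i.e., F(infinity) < 1, which is precisely the property needed for a CIF baseline.

```r
# Generalized Gompertz CDF, one common parametrization:
# F(t) = (1 - exp(-(lambda / rate) * (exp(rate * t) - 1)))^theta
ggomp_cdf <- function(t, lambda, rate, theta) {
  (1 - exp(-(lambda / rate) * (exp(rate * t) - 1)))^theta
}

# With rate < 0 the CDF plateaus below 1 (improper), so the plateau can play
# the role of the marginal probability of the competing event.
ggomp_cdf(c(1, 5, 20, 1e6), lambda = 0.3, rate = -0.2, theta = 1.5)
```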

Keywords: competing risks, cumulative incidence function, improper distribution, parametric modeling, survival analysis

Procedia PDF Downloads 55
323 Enhancement of Mulberry Leaf Yield and Water Productivity in Eastern Dry Zone of Karnataka, India

Authors: Narayanappa Devakumar, Chengalappa Seenappa

Abstract:

Field experiments were conducted during Rabi 2013 and summer 2014 at the College of Sericulture, Chintamani, Chickaballapur district, Karnataka, India, to determine the response of mulberry to different methods and levels of irrigation and to mulching. The results showed that the leaf yield and water productivity of mulberry were significantly influenced by the methods and levels of irrigation and by mulching. Subsurface drip with a lower level of irrigation at 0.8 CPE (Cumulative Pan Evaporation) recorded higher leaf yield and water productivity (42857 kg ha-1 yr-1 and 364.41 kg ha-cm-1) than surface drip with a higher level of irrigation at 1.0 CPE (38809 kg ha-1 yr-1 and 264.10 kg ha-cm-1) and micro spray jet (39931 kg ha-1 yr-1 and 271.83 kg ha-cm-1). Further, subsurface drip used the least water to produce one kg of leaf and to earn one rupee of profit (283 L and 113 L) compared to surface drip (390 L and 156 L) and micro spray jet (379 L and 152 L) irrigation methods. Mulberry leaf yield increased and water productivity decreased with increasing levels of irrigation. Overall, these results indicate that irrigating mulberry with subsurface drip increased leaf yield and water productivity while saving 20% of irrigation water compared with surface drip and micro spray jet irrigation methods in the Eastern Dry Zone (EDZ) of Karnataka.

Keywords: cumulative pan evaporation, mulberry, subsurface drip irrigation, water productivity

Procedia PDF Downloads 243
322 Performance Effects of Demergers in India

Authors: Pavak Vyas, Hiral Vyas

Abstract:

Spin-offs, commonly known as demergers in India, represent the dismantling of conglomerates, a common phenomenon in financial markets across the world. Demergers are carried out with different motives. A demerger generally refers to a corporate restructuring in which a large company divests its stake in its subsidiary and distributes the shares of the demerged entity to the existing shareholders without any consideration. Demergers in Indian companies are over a decade-old phenomenon, with many companies opting for them. This study examines the demerger regulations in Indian capital markets and the announcement-period price reaction of demergers during 2010-2015. We study a total of 97 demerger announcements by companies listed in India and seek to establish that demergers result in abnormal returns for the shareholders of the parent company. Using event study methodology, we analyze the security price performance over an event window from 10 days before to 10 days after the demerger announcement. We find significant out-performance of the security over the benchmark index around demerger announcements. The cumulative average abnormal returns range from 3.71% on the day of announcement of a private demerger to 2.08% over the 10 days surrounding the announcement, and from 5.67% on the day of announcement of a public demerger to 4.15% over the 10 days surrounding the announcement.
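
A minimal market-model event study of the kind described, with simulated returns standing in for the actual stock and benchmark index data, might look like the following R sketch; the window lengths and return series are hypothetical.

```r
# Market-model event study: estimate alpha/beta in an estimation window,
# compute abnormal returns in the event window, then cumulate them (CAR).
set.seed(3)
r_mkt   <- rnorm(250, 0, 0.01)                 # benchmark index returns
r_stock <- 0.0002 + 1.1 * r_mkt + rnorm(250, 0, 0.012)
r_stock[230] <- r_stock[230] + 0.03            # hypothetical announcement-day jump

est_win <- 1:200                               # estimation window
evt_win <- 220:240                             # -10 to +10 around day 230
fit <- lm(r_stock[est_win] ~ r_mkt[est_win])
ar  <- r_stock[evt_win] - (coef(fit)[1] + coef(fit)[2] * r_mkt[evt_win])
car <- cumsum(ar)                              # cumulative abnormal returns
tail(car, 1)
```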

Keywords: demergers, event study, spin offs, stock returns

Procedia PDF Downloads 266
321 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain instead; inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the "curse of dimensionality" intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method cover a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types than credit risk.
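
To give a flavour of the machinery, the sketch below recovers a density from its characteristic function via the COS (Fourier-cosine) expansion; a standard normal characteristic function stands in for the portfolio-loss one, and the truncation range and number of terms are illustrative choices. The CDF needed for the risk metrics follows by integrating the cosine terms analytically.

```r
# COS method: approximate a density from its characteristic function phi
# on a truncation interval [a, b] using N cosine terms.
cos_density <- function(phi, x, a, b, N = 128) {
  u  <- (0:(N - 1)) * pi / (b - a)
  Fk <- (2 / (b - a)) * Re(phi(u) * exp(-1i * u * a))
  Fk[1] <- Fk[1] / 2                       # first term gets weight 1/2
  vapply(x, function(xx) sum(Fk * cos(u * (xx - a))), numeric(1))
}

phi_norm <- function(u) exp(-0.5 * u^2)    # stand-in CF (standard normal)
xs <- c(-2, 0, 2)
cbind(cos = cos_density(phi_norm, xs, -10, 10), exact = dnorm(xs))
```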

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 116
320 The Disruptive Effect of COVID-19 on the Informativeness of Dividend Increases: Some Evidence from Johannesburg Stock Exchange-Listed Companies

Authors: Faustina Masocha

Abstract:

This study sought to determine whether the Covid-19 pandemic played a disruptive role in the signalling effect of dividend increases for the Top 40 companies listed on the Johannesburg Stock Exchange. Using event study methodologies, it was found that dividend increases announced in the 2018 and 2019 financial years resulted in Cumulative Abnormal Returns (CARs) that were significantly different from zero, as confirmed by a p-value of 0.0300. This led to the conclusion that, under normal circumstances, dividend increases follow the precepts outlined in signalling theories, which indicate that the announcement of a dividend increase sends positive signals about the expected financial performance of a company. Supporting the notion that Covid-19 disrupted the signalling hypothesis, both parametric and non-parametric tests of significance showed that CARs related to dividend increases announced during the 2020 and 2021 financial years, when the Covid-19 pandemic was at its peak, were not significantly different from zero. Therefore, although the dividend increases still resulted in some CARs, these CARs were not statistically different from zero, failing to confirm the signalling hypothesis. A p-value of 0.9830 from parametric t-tests and a p-value of 0.8971 from the Wilcoxon signed-rank test led to the conclusion that Covid-19 had a disruptive effect on the signalling process of dividend increases.

Keywords: cumulative abnormal returns, dividend increases, event study methodology, signalling

Procedia PDF Downloads 81
319 Qualitative and Quantitative Research Methodology Theoretical Framework and Descriptive Theory: PhD Construction Management

Authors: Samuel Quashie

Abstract:

PhD researchers in Construction Management often design their methods based on those established in the social sciences, using theoretical models to collect, gather and analyse data to answer research questions. The aim of this work is to apply qualitative and quantitative methods for data analysis, with descriptive theory as part of the theoretical framework, in order to improve the replicability of the research's contribution to knowledge. A practical triangulation approach is used, covering interviews and observations, literature review and (archival) document studies, project-based case studies, questionnaire surveys, and a review of integrated systems used in construction and construction-related industries. The study clarifies the organisational context and management delivery that influence organisational performance, product quality and measures. Results illustrate the improved reliability of this research approach when interpreting real-world phenomena: cumulative research results can be applied with confidence in similar environments. This supports the validity of the PhD research outcomes and strengthens the confidence to apply cumulative research results under similar conditions in Built Environment research systems, which have been criticised for a lack of reliability in their approaches when interpreting real-world phenomena.

Keywords: case studies, descriptive theory, theoretical framework, qualitative and quantitative research

Procedia PDF Downloads 341
318 UV Functionalised Short Implants as an Alternative to Avoid Crestal Sinus Lift Procedure: Controlled Case Series

Authors: Naira Ghambaryan, Gagik Hakobyan

Abstract:

Purpose: The study evaluated the survival rate of short implants (5-6 mm) functionalized with UV radiation and placed in the posterior segments of the atrophied maxilla. Materials and Methods: The study included 47 patients with unilateral/bilateral missing teeth and vertical atrophy of the posterior maxillary area. A total of 64 short UV-functionalized implants and 62 standard implants over 10 mm in length were placed. The clinical indices included the following parameters: ISQ, MBL, and the OHIP-G scale. Results: For short implants, the median ISQ was 62.2 at placement (primary stability) and 69.6 at 5 months. For standard implants, the mean ISQ was 64.3 at placement and 71.6 after 5 months. The mean MBL of short implants was 0.87 mm after 6 months, 1.13 mm after 1 year, and 1.48 mm after 5 years. The mean MBL of standard implants was 0.84 mm after 6 months, 1.24 mm after 1 year, and 1.58 mm after 5 years. Mean OHIP-G scores were: patient satisfaction with the implant, 4.8 ± 0.3; satisfaction with the operation, 4.6 ± 0.4; satisfaction with prosthetics, 4.7 ± 0.5. The cumulative 5-year survival rate was 96.7% for short implants and 97.4% for standard implants, and the cumulative prosthesis survival rate was 97.2%. Conclusions: Short implants with ultraviolet functionalization for prosthetic rehabilitation of the posterior resorbed maxilla are a reliable, reasonable alternative to sinus lift, demonstrating fewer complications and satisfactory survival over a 5-year follow-up period while reducing the number of additional surgical interventions and postoperative complications.

Keywords: short implant, ultraviolet functionalization, atrophic posterior maxilla, prosthodontic rehabilitation

Procedia PDF Downloads 48
317 Investigation of Influence of Maize Stover Components and Urea Treatment on Dry Matter Digestibility and Fermentation Kinetics Using in vitro Gas Techniques

Authors: Anon Paserakung, Chaloemphon Muangyen, Suban Foiklang, Yanin Opatpatanakit

Abstract:

Improving the nutritive value and digestibility of maize stover is an alternative way to increase its utilization in ruminants and to reduce air pollution from the open burning of maize stover in northern Thailand. In the present study, a 2x3 factorial arrangement in a completely randomized design was used to investigate the effects of maize stover components (whole and upper stover, cut above the 5th node) and urea treatment at levels of 0, 3, and 6% DM on the dry matter digestibility and fermentation kinetics of maize stover, using in vitro gas production. After 21 days of urea treatment, the results showed no interaction between maize stover components and urea treatment on 48-h in vitro dry matter digestibility (IVDMD). IVDMD was unaffected by maize stover components (P > 0.05); the average IVDMD was 55%. However, whole maize stover gave higher cumulative gas production and gas kinetic parameters than upper stover (P<0.05). Treating maize stover by ensiling with urea resulted in a significant linear increase in IVDMD (P<0.05): IVDMD increased from 42.6% to 53.9% when the urea concentration was increased from 0 to 3%, and the maximum IVDMD (65.1%) was observed when maize stover was ensiled with 6% urea. Maize stover treated with urea at 0, 3, and 6% showed linear increases in cumulative gas production at 96 h (31.1 vs 50.5 and 59.1 ml, respectively) and in all gas kinetic parameters except the gas production from the immediately soluble fraction (P<0.05). The results indicate that maize stover treated with 6% urea has enhanced in vitro dry matter digestibility and fermentation kinetics. This study provides a practical approach to increasing the utilization of maize stover in feeding ruminant animals.
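
The gas kinetic parameters referred to above are typically obtained by fitting an exponential gas-accumulation curve; a sketch using the common b(1 - e^(-kt)) form follows, where b is the asymptotic gas pool and k the fractional rate. The incubation times and gas volumes are hypothetical readings, not the trial's data.

```r
# Fit a cumulative gas production curve gas(t) = b * (1 - exp(-k * t)).
# Incubation times (h) and gas volumes (ml) below are hypothetical readings.
t_h <- c(2, 4, 8, 12, 24, 48, 72, 96)
gas <- c(4, 8, 15, 21, 33, 46, 52, 56)
fit <- nls(gas ~ b * (1 - exp(-k * t_h)), start = list(b = 60, k = 0.03))
coef(fit)  # b: asymptotic gas production; k: fractional rate per hour
```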

Keywords: maize stover, urea treatment, ruminant feed, gas production

Procedia PDF Downloads 186
316 Exploring Students’ Self-Evaluation on Their Learning Outcomes through an Integrated Cumulative Grade Point Average Reporting Mechanism

Authors: Suriyani Ariffin, Nor Aziah Alias, Khairil Iskandar Othman, Haslinda Yusoff

Abstract:

An Integrated Cumulative Grade Point Average (iCGPA) is a mechanism and strategy to ensure that the curriculum of an academic programme is constructively aligned to the expected learning outcomes, and that student performance based on the attainment of those learning outcomes is reported objectively in a spider web. Much effort and time has been spent on developing a viable mechanism and on training academics to utilize the platform for reporting. The question is: how well do learners conceive the idea of their achievement via iCGPA, and have quality learner attributes been nurtured through the iCGPA mechanism? This paper presents the architecture of an integrated CGPA mechanism intended to provide a holistic evaluation, from the evaluation of course learning outcomes to the attainment of aligned programme learning outcomes. The paper then discusses the students' understanding of the mechanism and their evaluation of their achievement from the generated spider web. A set of questionnaires was distributed to a group of students with iCGPA reporting, and frequency analysis was used to compare the students' perspectives on their performance. In addition, the questionnaire explored how they conceive the idea of integrated, holistic reporting and how it generates their motivation to improve. The iCGPA group was found to be receptive to what they had achieved throughout their study period. They agreed that the achievement level shown in their spider web allows them to develop interventions and enhance the programme learning outcomes before they graduate.

Keywords: learning outcomes attainment, iCGPA, programme learning outcomes, spider web, iCGPA reporting skills

Procedia PDF Downloads 181
315 The Long-Term Impact of Health Conditions on Social Mobility Outcomes: A Modelling Study

Authors: Lise Retat, Maria Carmen Huerta, Laura Webber, Franco Sassi

Abstract:

Background: Intra-generational social mobility (ISM) can be defined as the extent to which individuals change their socio-economic position over a period of time or during their entire life course. The relationship between poor health and ISM is established. Therefore, quantifying the impact that potential health policies have on ISM now and into the future would provide evidence for how social inequality could be reduced. This paper takes the condition of overweight and obesity as an example and estimates the mean earning change per individual if the UK were to introduce policies to effectively reduce overweight and obesity. Methods: The HealthLumen individual-based model was used to estimate the impact of obesity on social mobility measures, such as earnings, occupation, and wealth. The HL tool models each individual's probability of experiencing downward ISM as a result of their overweight and obesity status. For example, one outcome of interest was the cumulative mean earning per person of implementing a policy which would reduce adult overweight and obesity by 1% each year between 2020 and 2030 in the UK. Results: Preliminary analysis showed that by reducing adult overweight and obesity by 1% each year between 2020 and 2030, the cumulative additional mean earnings would be ~1,000 Euro per adult by 2030. Additional analysis will include other social mobility indicators. Conclusions: These projections are important for illustrating the role of health in social mobility and for providing evidence for how health policy can make a difference to social mobility outcomes and, in turn, help to reduce inequality.

Keywords: modelling, social mobility, obesity, health

Procedia PDF Downloads 91
314 Analysis in Mexico on Workers Performing Highly Repetitive Movements with Sensory Thermography on the Surface of the Wrists and Elbows

Authors: Sandra K. Enriquez, Claudia Camargo, Jesús E. Olguín, Juan A. López, German Galindo

Abstract:

Companies are currently experiencing an increased number of cumulative trauma disorders (CTDs), which are rising significantly due to the highly repetitive movements (HRM) performed at workstations and which cause economic losses to businesses through temporary and permanent disabilities of workers. This analysis focuses on the prevention of disorders caused by repetitiveness, duration and effort, and on reducing cumulative trauma disorders as occupational diseases, using sensory thermography as a non-invasive method to evaluate the injuries workers could sustain when performing repetitive motions. Objectives: The aim is to define rest periods or job rotation before a CTD develops, using sensory thermography to analyze changes in temperature patterns on the wrists and elbows while the worker performs HRM over a period of 2 hours and 30 minutes. Information on non-work variables such as wrist and elbow injuries, weight, gender and age, and on work variables such as workspace temperature, repetitiveness and duration, was also collected. Methodology: The analysis was conducted on 4 industrial designers (2 men and 2 women) in normal health at a company over a period of 12 days, using the following time ranges: on the first day, for every 90 minutes of continuous work the subjects were asked to rest 5 minutes; on the second day, for every 90 minutes of continuous work they were asked to rest 10 minutes; and likewise for 60 and 30 minutes of continuous work. Each worker was tested with 6 different ranges at least twice. The analysis was performed in a room with a controlled temperature between 20 and 25 °C, allowing 20 minutes for the temperature of the wrists and elbows to stabilize at the beginning and end of the analysis. Results: The maximum temperature (Tmax) on the wrists and elbows was registered in the range of 90 minutes of continuous work with a 5-minute rest: Tmax was 35.79 °C, with a difference of 2.79 °C between the initial and final temperature of the left elbow, presented by subject 4 at minute 86. Conclusions: With sensory thermography, it is possible to predict rest or rotation schedules to prevent CTDs in work activities involving HRM, thereby reducing occupational disease and charges by health agencies and increasing the quality of life of workers, making this technology cost-beneficial in the future.

Keywords: sensory thermography, temperature, cumulative trauma disorder (CTD), highly repetitive movement (HRM)

Procedia PDF Downloads 390
313 Working Children and Adolescents and the Vicious Circle of Poverty from the Perspective of Gunnar Myrdal’s Theory of Circular Cumulative Causation: Analysis and Implementation of a Probit Model to Brazil

Authors: J. Leige Lopes, L. Aparecida Bastos, R. Monteiro da Silva

Abstract:

The objective of this paper is to study the work of children and adolescents and the vicious circle of poverty from the perspective of Gunnar Myrdal's Theory of Circular Cumulative Causation. The aim is to show that if a person starts working in the juvenile phase of life, they are more likely to be classified as poor or extremely poor as an adult, which can be observed in the case of Brazil, more specifically in the north and northeast. The methodology used was statistical and econometric analysis through a probit model. The main results show that residing in the northeastern region of Brazil, having a low educational level, and starting one's professional life before the age of 18 all increase the likelihood of being poor or extremely poor. There is a consensus in the literature that one of the causes of the intergenerational transmission of poverty is child labor: when one starts one's professional life in childhood or adolescence, one ends up sacrificing one's studies. Because of their low level of education, these children or adolescents are forced to perform low-paid work and abandon school, becoming adults who will be classified as poor or extremely poor. As a result of poverty, parents may be forced to send their children out to work when they are young, so that in the future they will also become poor adults, a process characterized as the "vicious circle of poverty."
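
A minimal R sketch of the kind of probit specification described, estimated on simulated data with hypothetical covariate names rather than the Brazilian survey microdata:

```r
# Probit model for the probability of being poor/extremely poor as an adult.
# Data are simulated; variable names are hypothetical stand-ins for the
# northeast-residence, schooling, and early-work covariates discussed above.
set.seed(4)
n <- 5000
d <- data.frame(
  northeast  = rbinom(n, 1, 0.3),               # resides in the northeast
  schooling  = pmax(0, round(rnorm(n, 8, 3))),  # years of education
  worked_u18 = rbinom(n, 1, 0.25)               # started working before age 18
)
lp <- -0.4 + 0.4 * d$northeast - 0.12 * d$schooling + 0.5 * d$worked_u18
d$poor <- rbinom(n, 1, pnorm(lp))

fit <- glm(poor ~ northeast + schooling + worked_u18,
           family = binomial(link = "probit"), data = d)
summary(fit)$coefficients
```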

Keywords: children, adolescents, Gunnar Myrdal, poverty, vicious circle

Procedia PDF Downloads 241
312 The Effect of Magnetite Particle Size on Methane Production by Fresh and Degassed Anaerobic Sludge

Authors: E. Al-Essa, R. Bello-Mendoza, D. G. Wareham

Abstract:

Anaerobic batch experiments were conducted to investigate the effect of magnetite-supplementation (7 mM) on methane production from digested sludge undergoing two different microbial growth phases, namely fresh sludge (exponential growth phase) and degassed sludge (endogenous decay phase). Three different particle sizes were assessed: small (50 - 150 nm), medium (168 – 490 nm) and large (800 nm - 4.5 µm) particles. Results show that, in the case of the fresh sludge, magnetite significantly enhanced the methane production rate (up to 32%) and reduced the lag phase (by 15% - 41%) as compared to the control, regardless of the particle size used. However, the cumulative methane produced at the end of the incubation was comparable in all treatment and control bottles. In the case of the degassed sludge, only the medium-sized magnetite particles increased significantly the methane production rate (12% higher) as compared to the control. Small and large particles had little effect on the methane production rate but did result in an extended lag phase which led to significantly lower cumulative methane production at the end of the incubation period. These results suggest that magnetite produces a clear and positive effect on methane production only when an active and balanced microbial community is present in the anaerobic digester. It is concluded that, (i) the effect of magnetite particle size on increasing the methane production rate and reducing lag phase duration is strongly influenced by the initial metabolic state of the microbial consortium, and (ii) the particle size would positively affect the methane production if it is provided within the nanometer size range.

Keywords: anaerobic digestion, iron oxide, methanogenesis, nanoparticle

Procedia PDF Downloads 110
311 Land Use Influence on the 2014 Catastrophic Flood in the Northeast of Peninsular Malaysia

Authors: Zulkifli Yusop

Abstract:

The severity of the December 2014 flood on the east coast of Peninsular Malaysia has raised concern over the adequacy of existing land use practices and policies. This article assesses flood responses to selective logging, plantation establishment (oil palm and rubber) and their subsequent management regimes. The hydrological impacts were evaluated on two levels: on-site (mostly upstream) and off-site, to reflect the cumulative impact downstream. Results of experimental catchment studies suggest that the on-site impact on floods can be kept to a minimum when selective logging strictly adheres to the existing guidelines. However, increases in flood potential and sedimentation rate were observed with logging intensity and slope steepness. Forest conversion to plantation shows the highest impacts. Except on heavily compacted surfaces, ground revegetation is usually rapid within two years of the cessation of logging operations. The hydrological impacts of plantation opening and replanting can be significantly reduced once the cover crop is fully established, which normally takes three to six months after sowing. However, as oil palms become taller and the canopy closes, the cover crop tends to die off due to light competition, and its protective function gradually diminishes. The exposed soil is further compacted by harvesting machinery, which subsequently leads to greater overland flow and erosion rates. As such, the hydrological properties of mature oil palm plantations are generally poorer than those of young plantations. In hilly areas, the undergrowth in rubber plantations is usually denser than under oil palm, and the soil under rubber trees is less compacted, as latex collection is done manually. Considering the cumulative effects of land use over space and time, selective logging poses the least impact on flood potential, followed by planting rubber for latex, oil palm and Latex Timber Clone (LTC). The cumulative hydrological impact of LTC plantations is the most severe because of their short replanting rotation (12 to 15 years) compared to oil palm (25 years) and rubber for latex (35 years). Furthermore, the areas gazetted for LTC are mostly located on steeper slopes, which are more susceptible to landslides and erosion. Forest has a limited capability to store excess rainfall and is only effective in attenuating regular floods; once the hydrologic storage is exceeded, the excess rainfall appears as flood water. Therefore, for big floods, the rainfall regime has a much bigger influence than land use.

Keywords: selective logging, plantation, extreme rainfall, debris flow

Procedia PDF Downloads 317
310 Detection of Trends and Break Points in Climatic Indices: The Case of Umbria Region in Italy

Authors: A. Flammini, R. Morbidelli, C. Saltalippi

Abstract:

The increase of air surface temperature at the global scale is a fact, amounting to around 0.85 ºC since the late nineteenth century, as is a significant change in the main features of the rainfall regime. Nevertheless, the detected climatic changes are not equally distributed all over the world, but exhibit specific characteristics in different regions. Therefore, studying the evolution of climatic indices in different geographical areas with a prefixed standard approach becomes very useful for analyzing the existence of climatic trends and comparing results. In this work, a methodology to investigate climatic change and its effects on a wide set of climatic indices is proposed and applied at the regional scale to the case study of a Mediterranean area, the Umbria region in Italy. From the data of the available temperature stations, nine temperature indices were obtained, and the existence of trends was checked by applying the non-parametric Mann-Kendall test, while the non-parametric Pettitt test and the parametric Standard Normal Homogeneity Test (SNHT) were applied to detect the presence of break points. In addition, to characterize the rainfall regime, data from 11 rainfall stations were used, and a trend analysis was performed on cumulative annual rainfall depth, daily rainfall, rainy days, and dry period length. The results show a general increase in all temperature indices, albeit with a trend pattern dependent on the index and station, and a general decrease in cumulative annual rainfall and average daily rainfall, with a distribution of rainfall over the year that differs from the past.
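
For reference, the Mann-Kendall statistic used in the trend analysis can be computed directly as below; the series is simulated with a mild trend, and the variance formula shown assumes no ties.

```r
# Mann-Kendall trend test (no-ties variance formula)
mann_kendall <- function(x) {
  n <- length(x)
  S <- 0
  for (i in 1:(n - 1)) S <- S + sum(sign(x[(i + 1):n] - x[i]))
  varS <- n * (n - 1) * (2 * n + 5) / 18
  z <- (S - sign(S)) / sqrt(varS)          # continuity-corrected z score
  c(S = S, z = z, p = 2 * pnorm(-abs(z)))
}

set.seed(5)
mann_kendall(rnorm(60) + 0.03 * (1:60))    # simulated series with a mild trend
```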

Keywords: climatic change, temperature, rainfall regime, trend analysis

Procedia PDF Downloads 82
309 Weibull Cumulative Distribution Function Analysis with Life Expectancy Endurance Test Result of Power Window Switch

Authors: Miky Lee, K. Kim, D. Lim, D. Cho

Abstract:

This paper presents the planning, rationale for test specification derivation, sampling requirements, test facilities, and result analysis used to conduct lifetime expectancy endurance tests on power window switches (PWS), considering thermally induced mechanical stress under diurnal cyclic temperatures during normal operation (power cycling). The detailed process of analysis and the test results on the selected PWS set are discussed in this paper. A statistical approach to 'lifetime expectancy' was applied to the measurement standards dealing with PWS lifetime determination through endurance tests, and the choice of approach, within the framework of the task, is explained. The present task was dedicated to voltage drop measurement to derive lifetime expectancy, while others mostly consider contact or surface resistance. The measurements to perform and the main instruments used are fully described. The failure data from the tests were analyzed to derive lifetime expectancy by a statistical method using the Weibull cumulative distribution function. The first goal of this task is to develop a realistic worst-case lifetime endurance test specification, because the large number of existing switch test standards cannot induce the degradation mechanisms that make switches less reliable. The second goal is to assess the quantitative reliability status of currently manufactured PWS based on the test specification newly developed through this project. The last and most important goal is to satisfy customers' requirements regarding product reliability.
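
The statistical step, fitting a two-parameter Weibull to observed cycles-to-failure and reading off lifetime quantities, can be sketched in R as follows; the failure data are hypothetical and a complete (uncensored) sample is assumed for simplicity.

```r
library(MASS)  # fitdistr

# Hypothetical cycles-to-failure from the endurance test (complete sample)
cycles <- c(18200, 21500, 24100, 26800, 29900,
            31200, 34600, 38800, 41500, 47200)

fit    <- fitdistr(cycles, "weibull")
wshape <- fit$estimate["shape"]
wscale <- fit$estimate["scale"]

qweibull(0.10, wshape, wscale)                       # B10 life (10% failed)
pweibull(20000, wshape, wscale, lower.tail = FALSE)  # reliability at 20,000 cycles
```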

Keywords: power window switch, endurance test, Weibull function, reliability, degradation mechanism

Procedia PDF Downloads 209
308 Earthquake Forecasting Procedure Due to Diurnal Stress Transfer by the Core to the Crust

Authors: Hassan Gholibeigian, Kazem Gholibeigian

Abstract:

In this paper, our goal is the determination of loading versus time in the crust. To this end, we present a computational procedure to construct a cumulative strain energy time profile which can be used to predict the approximate location and time of the next major earthquake (M > 4.5) along a specific fault, and which, we believe, is more accurate than many of the methods presently in use. In the coming pages, after a short review of the research presently going on in the area of earthquake analysis and prediction, earthquake mechanisms in both the jerk and sequence earthquake directions are discussed. Our computational procedure is then presented, using the differential equations of equilibrium which govern the nonlinear dynamic response of a system of finite elements, modified with an extra term to account for the jerk produced during the quake. We then employ the stress-strain model developed by von Mises in our calculations, modified with the addition of an extra term to account for thermal effects. For the calculation of the strain energy, the idea of the Pulsating Mantle Hypothesis (PMH) is used. This hypothesis, in brief, states that the mantle is under diurnal cyclic pulsating loads due to the unbalanced gravitational attraction of the sun and the moon. A brief discussion of the Denali fault as a case study is provided. The cumulative strain energy is then graphically represented versus time. At the end, based on some hypothetical earthquake data, the final results are verified.

Keywords: pulsating mantle hypothesis, inner core’s dislocation, outer core’s bulge, constitutive model, transient hydro-magneto-thermo-mechanical load, diurnal stress, jerk, fault behaviour

Procedia PDF Downloads 247
307 The Triple Threat: Microplastic, Nanoplastic, and Macroplastic Pollution and Their Cumulative Impacts on Marine Ecosystem

Authors: Tabugbo B. Ifeyinwa, Josephat O. Ogbuagu, Okeke A. Princewill, Victor C. Eze

Abstract:

The increasing amount of plastic pollution in marine settings poses a substantial risk to the functioning of ecosystems and the preservation of biodiversity. This comprehensive analysis combines the most recent data on the environmental effects of pollution from macroplastics, microplastics, and nanoplastics within marine ecosystems. Our goal is to provide a comprehensive understanding of the cumulative impacts of accumulated plastic waste on marine life by outlining the origins, processes, and ecological repercussions associated with each size category of plastic debris. Whereas macroplastics primarily contribute to physical harm through entanglement and ingestion by marine fauna, microplastics and nanoplastics have more insidious, chemically mediated effects: they can pass through biological barriers and affect health at the cellular and whole-organism levels. The review underlines a vital need for research that crosses disciplinary boundaries to untangle the intricate interactions of the various sizes of plastic pollution with marine animals, evaluate the long-term ecological repercussions, and identify effective measures for mitigating plastic pollution. Additionally, we urge governmental interventions and worldwide cooperation to address this pervasive environmental concern; in particular, we identify significant knowledge gaps in the detection and impact assessment of nanoplastics. To protect marine biodiversity and preserve ecosystem services, this review highlights the urgency of addressing the full size spectrum of plastic pollution.

Keywords: macroplastic pollution, marine ecosystem, microplastic pollution, nanoplastic pollution

Procedia PDF Downloads 17
306 Risk Factors for Postoperative Recurrence in Indian Patients with Crohn’s Disease

Authors: Choppala Pratheek, Vineet Ahuja

Abstract:

Background: Crohn's disease (CD) recurrence following surgery is a common challenge, and current detection methods rely on risk factors identified in Western populations. This study aimed to investigate the risk factors and rates of postoperative CD recurrence in a tuberculosis-endemic region like India. Retrospective data were collected from a structured database at a specialty IBD clinic by reviewing case files from January 2005 to December 2021. Inclusion criteria comprised CD patients diagnosed based on the ECCO-ESGAR consensus guidelines who had undergone at least one intestinal resection and had a minimum follow-up of one year at the IBD clinic. Results: A total of 90 patients were followed up for a median period of 45 months (IQR, 20.75-72.00). Of the 90 patients, 61 had received anti-tubercular therapy (ATT) prior to surgery, with a mean delay in diagnosis of 2.5 years, although this was statistically non-significant (P=0.078). Clinical recurrence occurred in 50% of patients, with the cumulative rate increasing from 13.3% at one year to 40% at three years. Among the 63 patients who underwent endoscopy, 65.7% showed evidence of endoscopic recurrence, with the cumulative rate increasing from 31.7% at one year to 55.5% at four years. Smoking was identified by Cox regression analysis as a significant risk factor for early endoscopic recurrence (P=0.001), but no other risk factors were identified. Initiating post-operative medication prior to clinical recurrence delayed its onset (P=0.004). Subgroup analysis indicated that endoscopic monitoring aided the early identification of recurrence (P=0.001). The findings contribute to enhancing post-operative CD management strategies in regions where the disease burden is escalating.

Keywords: Crohn's disease, post-operative, tuberculosis-endemic, risk factors

Procedia PDF Downloads 36
305 Evaluation of Immunology of Asthma and Chronic Obstructive Pulmonary Disease

Authors: Milad Gholizadeh

Abstract:

Asthma and chronic obstructive pulmonary disease (COPD) are very common inflammatory diseases of the airways. Both cause airway narrowing and are increasing in prevalence throughout the world, imposing huge burdens on health care. It is now recognized that some asthmatic inflammation is neutrophilic, controlled by the TH17 subset of helper T cells, and that some eosinophilic inflammation is controlled by type 2 innate lymphoid cells (ILC2 cells) acting together with basophils. Patients who have severe asthma, or asthmatic patients who smoke, show features of COPD-like inflammation and might benefit from treatments targeting neutrophils, including macrolides, CXCR2 antagonists, phosphodiesterase-4 inhibitors, p38 mitogen-activated protein kinase inhibitors, and antibodies against IL-1 and IL-17. Viral and bacterial infections not only cause acute exacerbations of COPD but also amplify and perpetuate chronic inflammation in stable COPD through pathogen-associated molecular patterns. Current treatment strategies are focused on titration of inhaled therapies such as long-acting bronchodilators, with increasing interest in the use of targeted biologic therapies aimed at the underlying inflammatory mechanisms. Studies suggest that the mucosal IgA response is reduced in COPD, and a deficient transport of IgA across the bronchial epithelium has been identified, possibly involving neutrophil proteinases, which may damage the Ig receptor mediating this transepithelial routing. Future directions for research will focus on elucidating the diverse inflammatory signatures leading to asthma and COPD, the development of reliable diagnostic criteria and biomarkers of disease, and refining clinical management with an eye toward targeted therapies.

Keywords: immunology, asthma, COPD, CXCR2 antagonists

Procedia PDF Downloads 131