Search results for: intervals of normality
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 699

639 Nonstationary Increments and Causality in the Aluminum Market

Authors: Andrew Clark

Abstract:

McCauley, Bassler, and Gunaratne show that integrated I(d) processes as used in economics and finance do not necessarily produce stationary increments, which are required to determine causality in both the short term and the long term. This paper follows their lead and shows that I(d) aluminum cash and futures log prices at daily and weekly intervals do not have stationary increments, which means prior causality studies using I(d) processes need to be re-examined. Wavelets based on undifferenced cash and futures log prices do have stationary increments and are used along with transfer entropy (rather than cointegration) to measure causality. Wavelets exhibit causality at most daily time scales out to 1 year, and at weekly time scales out to 1 year and beyond. To determine stationarity, localized stationary wavelets (LSWs) are used. LSWs have the benefit, versus other means of testing for stationarity, of using multiple hypothesis tests to determine stationarity. As informational flows exist between cash and futures at daily and weekly intervals, the aluminum market is efficient. Therefore, producers and consumers of aluminum who hedge need not be greatly concerned about underestimating hedge ratios. Questions about arbitrage given efficiency are addressed in the paper.
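
As a hedged illustration of the causality measure named above, the following minimal Python sketch estimates transfer entropy between two series with a plain histogram estimator; the binning, the one-step lag, and the toy data are assumptions for illustration, and the paper's wavelet preprocessing and stationarity testing are not reproduced.

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Histogram estimate of transfer entropy T(X -> Y), in bits.

    Measures how much the past of x reduces uncertainty about the next
    value of y beyond what y's own past provides:
        T = sum p(y+, y, x) log2 [ p(y+ | y, x) / p(y+ | y) ]
    """
    # Discretize the series into equal-width bins
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]

    def joint_prob(*cols):
        counts = {}
        for row in zip(*cols):
            counts[row] = counts.get(row, 0) + 1
        n = len(cols[0])
        return {k: v / n for k, v in counts.items()}

    p_xyz = joint_prob(y_next, y_now, x_now)
    p_yz = joint_prob(y_now, x_now)
    p_yy = joint_prob(y_next, y_now)
    p_y = joint_prob(y_now)

    te = 0.0
    for (yn, yc, xc), p in p_xyz.items():
        te += p * np.log2(p * p_y[(yc,)] / (p_yz[(yc, xc)] * p_yy[(yn, yc)]))
    return te

# Toy example: x leads y by one step, so T(x -> y) should exceed T(y -> x)
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```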

Keywords: transfer entropy, nonstationary increments, wavelets, localized stationary wavelets

Procedia PDF Downloads 170
638 Computer Simulations of Stress Corrosion Studies of Quartz Particulate Reinforced ZA-27 Metal Matrix Composites

Authors: K. Vinutha

Abstract:

The stress corrosion resistance of ZA-27/TiO2 metal matrix composites (MMCs) in high-temperature acidic media has been evaluated using an autoclave. The liquid melt metallurgy technique using the vortex method was used to fabricate the MMCs. TiO2 particulates of 50-80 µm in size were added to the matrix, and composites containing 2, 4, and 6 weight percent TiO2 were prepared. Stress corrosion tests were conducted by the weight loss method for different exposure times, normalities, and temperatures of the acidic medium. The corrosion rates of the composites were lower than that of the matrix ZA-27 alloy under all conditions.
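
For context, the weight-loss method referred to above conventionally converts mass loss into a corrosion rate with an ASTM G1-style formula; the form below, with its constant and units, is the standard one and is given here as an assumption, since the abstract does not state the exact expression used.

```latex
% Corrosion rate from weight loss (conventional ASTM G1-style form):
\[
  \mathrm{CR}\ (\mathrm{mm/yr}) \;=\; \frac{87.6\,W}{D\,A\,t}
\]
% W = mass loss (mg), D = alloy density (g/cm^3),
% A = exposed area (cm^2), t = exposure time (h).
```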

Keywords: autoclave, MMCs, stress corrosion, vortex method

Procedia PDF Downloads 435
637 An Analysis of the Relation between Need for Psychological Help and Psychological Symptoms

Authors: İsmail Ay

Abstract:

In this study, it was aimed to determine the relations between the need for psychological help and psychological symptoms. The sample of the study consists of 530 university students educated at Atatürk University in the 2015-2016 academic year. The Need for Psychological Help Scale and the Brief Symptom Inventory were used to collect data. In the data analysis, correlation analysis and structural equation modeling with latent variables were used. Normality and homogeneity analyses were used to check the basic assumptions of parametric tests. The findings show that as psychological symptoms increase, the need for psychological help also increases. The findings are discussed in relation to the literature.
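
A minimal sketch of the preliminary checks and the correlation step described above, with simulated scores standing in for the two scales (all numbers are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative scale totals for 530 students (the real study used the Need
# for Psychological Help Scale and the Brief Symptom Inventory)
need_for_help = rng.normal(50, 10, 530)
symptoms = 0.6 * need_for_help + rng.normal(0, 8, 530)

# Check the normality assumption of the parametric tests
for name, v in [("need", need_for_help), ("symptoms", symptoms)]:
    w, p = stats.shapiro(v)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Correlation between psychological symptoms and need for help
r, p = stats.pearsonr(symptoms, need_for_help)
print(f"r = {r:.2f}, p = {p:.2g}")   # positive r: symptoms up, need up
```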

Keywords: psychological symptoms, need for psychological help, structural equation model, correlation

Procedia PDF Downloads 335
636 Still Hepatocellular Carcinoma Risk Despite Proper Treatment of Chronic Viral Hepatitis

Authors: Sila Akhan, Muge Toygar, Murat Sayan, Simge Fidan

Abstract:

Chronic viral hepatitis B, C, and D can cause hepatocellular carcinoma (HCC), cirrhosis, and death. Proper treatment greatly reduces the risk of developing HCC, but does not eliminate it. Materials and Methods: We retrospectively analyzed the chronic viral hepatitis B, C, and D patients who attended our Infectious Diseases outpatient clinic between 2004 and 2018. Of 589 biopsy-proven chronic hepatitis patients, 3 developed hepatocellular carcinoma during our follow-up. The first case is a 74-year-old patient diagnosed with HCV infection 8 years ago. His first treatment, pegylated interferon plus ribavirin, lasted only 28 weeks because of an HCV RNA breakthrough under treatment. In 2013 he was retreated with telaprevir plus pegylated interferon and ribavirin for 24 weeks, but at the end of therapy his HCV RNA was 1,290,000 IU/mL. He had abdominal ultrasonography (US) and alpha-fetoprotein (AFP) controls at 6-month intervals. All seemed normal until 2015, when abdominal magnetic resonance imaging (MRI) revealed HCC by chance. After the HCC was verified by biopsy, his treatment began in the Oncology Clinic, and sofosbuvir/ledipasvir was then given for HCV for 24 weeks. Sustained virologic response (SVR) was obtained. He is cured of his HCV infection and under Oncology follow-up for HCC. The second patient is a 36-year-old man who has known of his HBV infection since 2008: HBsAg and HBeAg positive, HDV RNA negative. Liver biopsy revealed grade 4, stage 3-4 disease according to the modified Knodell scoring system. In 2010 tenofovir treatment was begun. His abdominal US and AFP were normal. His controls took place at 6-month intervals, and HBV DNA remained negative and US and AFP normal until 2016, when AFP was found to be 37, above the normal range, and HCC was then found on MRI. The third patient is a 57-year-old man who already had cirrhosis when hepatitis B was first diagnosed and was started on tenofovir. He developed HCC within a short time despite normal AFP values. Conclusion: In Mediterranean countries, including Turkey, naturally occurring pre-S/S variants are found in more than 75% of all chronic hepatitis B patients. These variants may contribute to the development of progressive liver damage and hepatocarcinogenesis. HCV-induced development of HCC is a gradual process affected by the duration of disease and viral genotype. All chronic viral hepatitis patients should be followed up at 6-month intervals for HCC, and not only with US and AFP. Even with proper treatment, there is always a risk of developing HCC, so chronic hepatitis patients cannot be dropped from follow-up even when treated well.

Keywords: HCC, HCV, HBV, DAA

Procedia PDF Downloads 108
635 Oral Biofilm and Stomatitis Denture: Local Implications and Cardiovascular Risks

Authors: Adriana B. Ribeiro, Camila B. Araujo, Frank L. Bueno, Luiz Eduardo V. Silva, Caroline V. Fortes, Helio C. Salgado, Rubens Fazan Jr., Claudia H. L. da Silva

Abstract:

Denture-related stomatitis (DRS) has recently been associated with deleterious cardiovascular effects, including hypertension. This study evaluated salivary parameters, blood pressure (BP), and heart rate variability (HRV) before and after DRS treatment in edentulous patients (n=14). Collection of unstimulated and stimulated saliva, as well as BP measurements and electrocardiogram recordings, was performed before and after 10 days of DRS treatment. The salivary flow (mL/min) was similar at both time points, while the pH was lower (more neutral) after treatment (7.3 ± 2.2 vs. 7.1 ± 0.24). Systolic BP (mmHg) showed a trend toward reduction after DRS treatment, but not a significant one (158 ± 25.68 vs. 148 ± 16.72, p=0.062), while diastolic BP was similar at both time points (86 ± 13.93 and 84 ± 9.38). Overall HRV, measured by the standard deviation of RR intervals, was not affected by DRS treatment (24 ± 4 vs. 18 ± 2 ms), but the differences of successive RR intervals (an index of parasympathetic cardiac modulation) increased after the treatment (26 ± 4 vs. 19 ± 2 ms). Moreover, another index of vagal modulation of the heart, the power of the RR interval spectrum at high frequency, was also markedly higher after DRS treatment (236 ± 63 vs. 135 ± 32 ms²). Such findings strongly suggest that DRS is linked to an autonomic imbalance with sympathetic overactivity, which is markedly deleterious, increasing cardiovascular risk and the incidence of diseases such as hypertension. Acknowledgment: This study is supported by FAPESP, CNPq.
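
The HRV indices named above (standard deviation of RR intervals, successive RR differences, high-frequency power) can be sketched as follows; the resampling rate, Welch settings, and the toy tachogram are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def hrv_indices(rr_ms):
    """Time- and frequency-domain HRV indices from RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # overall HRV
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # successive differences (vagal)

    # High-frequency (0.15-0.4 Hz) power: resample the tachogram at 4 Hz,
    # then integrate the Welch spectrum over the HF band (units: ms^2)
    t = np.cumsum(rr) / 1000.0                  # beat times in s
    fs = 4.0
    ti = np.arange(t[0], t[-1], 1 / fs)
    rri = interp1d(t, rr)(ti)
    f, pxx = welch(rri - rri.mean(), fs=fs, nperseg=min(256, len(rri)))
    band = (f >= 0.15) & (f <= 0.4)
    hf = np.trapz(pxx[band], f[band])
    return sdnn, rmssd, hf

# Toy tachogram: ~75 bpm with a 0.3 Hz respiratory modulation
rng = np.random.default_rng(2)
n = 300
rr = 800 + 30 * np.sin(2 * np.pi * 0.3 * np.arange(n) * 0.8) \
        + 10 * rng.standard_normal(n)
print(hrv_indices(rr))
```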

Keywords: biofilm, denture stomatitis, HRV, blood pressure

Procedia PDF Downloads 198
634 High-Resolution ECG Automated Analysis and Diagnosis

Authors: Ayad Dalloo, Sulaf Dalloo

Abstract:

Electrocardiogram (ECG) recordings are prone to complications on analysis by physicians, due to noise and artifacts, creating ambiguity that can lead to diagnostic error. Such drawbacks may be overcome with high-resolution methods such as discrete wavelet analysis and digital signal processing (DSP) techniques. The ECG signal analysis is implemented in three stages: ECG preprocessing, feature extraction, and classification, with the aim of realizing high-resolution ECG diagnosis and improved detection of abnormal heart conditions. The preprocessing stage removes spurious artifacts (noise) due to factors such as muscle contraction, motion, and respiration. ECG features are extracted by applying DSP techniques and a proposed sloping method. The measured features represent the peak amplitudes and intervals of the P, Q, R, S, R', and T waves on the ECG, together with other features such as ST elevation, QRS width, heart rate, electrical axis, and the QR and QT intervals. The classification is performed using these extracted features and the criteria for cardiovascular diseases. The ECG diagnostic system was successfully applied to 12-lead ECG recordings for 12 cases. The system is provided with information enabling it to diagnose 15 different diseases. The physician's and the computer's diagnoses agreed in 90% of cases, with respect to the physician's diagnosis, and the time taken for diagnosis is 2 seconds. All of these operations are programmed in the MATLAB environment.
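
A minimal sketch of the R-wave detection step that underlies QRS and interval measurement is given below (Python rather than the authors' MATLAB; the filter band, thresholds, and toy trace are assumptions for illustration).

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Band-pass filter to the QRS band, then find prominent peaks."""
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)              # zero-phase: peaks stay aligned
    peaks, _ = find_peaks(filtered,
                          height=2 * filtered.std(),  # amplitude criterion
                          distance=int(0.25 * fs))    # ~250 ms refractory period
    return peaks

# Toy trace: unit "R waves" once per second buried in baseline noise
fs = 360                                         # Hz
rng = np.random.default_rng(3)
ecg = 0.05 * rng.standard_normal(10 * fs)
ecg[::fs] += 1.0

peaks = detect_r_peaks(ecg, fs)
rr = np.diff(peaks) / fs                         # R-R intervals (s)
print(f"heart rate ~ {60 / rr.mean():.0f} bpm")  # expect ~60 bpm
```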

Keywords: ECG diagnostic system, QRS detection, ECG baseline removal, cardiovascular diseases

Procedia PDF Downloads 269
633 Geotechnical and Mineralogical Properties of Clay Soils in the Second Organized Industrial Region, Konya, Turkey

Authors: Mustafa Yıldız, Ali Ulvi Uzer, Murat Olgun

Abstract:

In this study, the geotechnical and mineralogical properties of the gypsum-bearing clay deposits that form the ground of the Second Organized Industrial Zone in Konya province have been investigated through comprehensive field and laboratory tests. Although sufficient geotechnical research has not yet been performed, intensive construction in the region continues at present. The study area consists of mid-lake sediments of soft, gypsum-bearing silt-clay extending over a large area. To determine the soil profile and geotechnical properties, 18 boreholes were drilled, and disturbed/undisturbed soil samples were taken with Shelby tubes at 1.5 m intervals. Tests were performed on these samples to determine the index and strength properties of the soil. In addition, Standard Penetration Tests were carried out at 1.5 m intervals in all boreholes. To determine the mineralogical characteristics of the soil, whole-rock and X-ray diffraction (XRD) analyses were carried out on 6 samples taken from various depths through the soil profile. Strength and compressibility characteristics of the soil were defined with correlations using laboratory and field test results. The unconfined compressive strength, undrained cohesion, and compression index vary between 16 and 405.4 kN/m², 6.5 and 72 kN/m², and 0.066 and 0.864, respectively.

Keywords: Konya second organized industrial region, strength, compressibility, soft clay

Procedia PDF Downloads 268
632 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and only information on the auxiliary variable is collected. During the second phase, a smaller sample is selected, either from the first-phase sample or from the entire population, using a suitable sampling design, and information on both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easy and cheap to collect compared with the study variable, and if the relationship between the study and auxiliary variables is strong. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article we consider how one can use the data collected in the first phase for parameter estimation, stratification, and sample selection, and their combinations, in the second phase, in a unified setup applicable to any sampling design and to wide classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative. If an estimator of variance is negative, it cannot be used to construct confidence intervals, test hypotheses, or measure sampling error. The non-negativity properties of the variance estimators are also studied in detail.
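
As a concrete illustration of the setup above, here is a minimal Python sketch of the classical double-sampling ratio estimator with a textbook variance approximation (Cochran-style, finite population correction ignored); the population, sample sizes, and linear model are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Artificial population: auxiliary x cheap to measure, study variable y costly
N = 100_000
x = rng.gamma(4.0, 2.0, N)
y = 3.0 * x + rng.normal(0, 2.0, N)

# Phase 1: large SRS, auxiliary variable only
n1 = 2_000
s1 = rng.choice(N, n1, replace=False)
xbar1 = x[s1].mean()

# Phase 2: subsample of phase 1, both variables observed
n2 = 200
s2 = rng.choice(s1, n2, replace=False)
x2, y2 = x[s2], y[s2]

R = y2.mean() / x2.mean()
ybar_ratio = R * xbar1                    # double-sampling ratio estimator

# Textbook variance approximation (fpc ignored)
sy2 = y2.var(ddof=1)
sx2 = x2.var(ddof=1)
sxy = np.cov(x2, y2)[0, 1]
v = sy2 / n1 + (1 / n2 - 1 / n1) * (sy2 + R**2 * sx2 - 2 * R * sxy)

print(ybar_ratio, y.mean())               # estimate vs. true mean
print("95% CI half-width:", 1.96 * np.sqrt(v))
```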

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 556
631 Gender and Total Compensation, in an ‘Age’ of Disruption

Authors: Daniel J. Patricio Jiménez

Abstract:

The term 'total compensation' refers to salary, training, innovation, and development, and of course, motivation; total compensation is an open and flexible system which must facilitate personal and family conciliation and therefore cannot be isolated from social reality. Today, the challenge for any company that wants to have a future is to be sustainable, and women play a 'special' role in this. Spain, in its statutory and conventional development, has not given a sufficient response to new phenomena such as 'bonuses', 'stock options', or 'fringe benefits' (constructed dogmatically and by court decisions), or to the new digital reality, where cryptocurrency and new collaborative models of service provision, such as remote work, are always ahead of the law. To talk about compensation is to talk about the gender gap, and with the entry into force of RD 902/2020 on 14 April 2021, certain measures are necessary under the principle of salary transparency; the valuation of jobs, the pay register (RD 6/2019), and the pay audit are examples of this. Analyzing the methodologies, and in particular the determination and weighting of the factors (so that the system itself is not discriminatory), is essential. The wage gap in Spain is smaller than in Europe, but the sources do not reflect the reality, and since the beginning of the pandemic, there has been a clear stagnation. A living wage is not the minimum wage; it is identified with rights and needs; based on internal equity, it reflects the competitiveness of the company in terms of human capital. Spain has lost and has not recovered the relative weight of its wages; this has a direct impact on our competitiveness, and consequently on the precariousness of employment and, undoubtedly, on levels of extreme poverty. Training is becoming more than ever a strategic factor; the new digital reality requires that each component of the system be connected; transversality is imposed on us, and this forces us to redefine content and respond to the demands of the new normality, because technology and robotization are changing the concept of employability. The presence of women in this context is necessary, and there is a long way to go. So-called emotional compensation becomes particularly relevant at a time when pandemics, silence, and disruption are leaving after-effects; technostress (in all its manifestations) is just one of them. Talking about motivation today makes no sense without first being aware that mental health is a priority and that it must be treated and communicated in an inclusive way, because this increases satisfaction, productivity, and engagement. There is a clear conclusion to all this: compensation systems do not respond to the 'new normality'; diversity, and in particular women, cannot be invisible in human resources policies if the company wants to be sustainable.

Keywords: diversity, gender gap, human resources, sustainability

Procedia PDF Downloads 132
630 Modelling Phytoremediation Rates of Aquatic Macrophytes in Aquaculture Effluent

Authors: E. A. Kiridi, A. O. Ogunlela

Abstract:

Pollutants from aquacultural practices constitute environmental problems, and phytoremediation could offer a cheaper, environmentally sustainable alternative, since equipment for advanced treatment of fish tank effluent is expensive to import, install, operate, and maintain, especially in developing countries. The main objective of this research was, therefore, to develop a mathematical model for phytoremediation by aquatic plants in aquaculture wastewater. Other objectives were to evaluate the effect of retention time on phytoremediation rates using the model and to measure the nutrient level of the aquaculture effluent and the phytoremediation rates of three aquatic macrophytes, namely water hyacinth (Eichhornia crassipes), water lettuce (Pistia stratiotes), and morning glory (Ipomoea asarifolia). A completely randomized experimental design was used in the study. Approximately 100 g of each macrophyte were introduced into the hydroponic units, and the phytoremediation indices were monitored at 8 intervals from the first to the 28th day. The water quality parameters measured were pH and electrical conductivity (EC). Others were the concentrations of ammonium-nitrogen (NH₄⁺-N), nitrite-nitrogen (NO₂⁻-N), nitrate-nitrogen (NO₃⁻-N), and phosphate-phosphorus (PO₄³⁻-P), and the biomass value. The biomass produced by water hyacinth was 438.2 g, 600.7 g, 688.2 g, and 725.7 g at four 7-day intervals. The corresponding values for water lettuce were 361.2 g, 498.7 g, 561.2 g, and 623.7 g, and for morning glory 417.0 g, 567.0 g, 642.0 g, and 679.5 g. The coefficient of determination was greater than 80% for EC, total dissolved solids (TDS), NO₂⁻-N, and NO₃⁻-N, and 70% for NH₄⁺-N using any of the macrophytes, and the predicted values were within the 95% confidence interval of the measured values. Therefore, the model is valuable in the design and operation of phytoremediation systems for aquaculture effluent.
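
The abstract does not give the model's functional form; as a hedged sketch, the snippet below fits a first-order (exponential) nutrient decay, one common choice for such systems, and recovers the kind of fit statistics quoted above from purely illustrative data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative monitoring data: nutrient concentration (mg/L) over 28 days
days = np.array([0, 2, 4, 7, 10, 14, 21, 28], dtype=float)   # 8 sampling times
conc = np.array([12.0, 9.6, 8.1, 6.0, 4.7, 3.3, 1.9, 1.1])

def first_order(t, c0, k):
    """First-order (exponential) uptake: C(t) = C0 * exp(-k t)."""
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(first_order, days, conc, p0=(conc[0], 0.1))

# Coefficient of determination for the fitted model
pred = first_order(days, c0, k)
r2 = 1 - np.sum((conc - pred) ** 2) / np.sum((conc - conc.mean()) ** 2)
print(f"k = {k:.3f} /day, R^2 = {r2:.3f}")

# Retention time needed to reach a target effluent concentration
target = 2.0
print(f"required retention: {np.log(c0 / target) / k:.1f} days")
```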

Keywords: aquaculture effluent, macrophytes, mathematical model, phytoremediation

Procedia PDF Downloads 187
629 Sublethal Effects of Entomopathogenic Nematodes and Fungus against the Red Palm Weevil, Rhynchophorus Ferrugineus (Olivier) (Curculionidae: Coleoptera)

Authors: M. Manzoor, J. N. Ahmad, R. M. Giblin Davis, N. Javed, M. S. Haider

Abstract:

The invasive red palm weevil (RPW), Rhynchophorus ferrugineus (Olivier) (Coleoptera: Curculionidae), is one of the most destructive palm pests in the world. Synthetic pesticides, the environmentally hazardous control strategy used in the past, have given rise to an emerging need for eco-friendly biological approaches, including microbial entomopathogens, for RPW management. The sublethal effects of a single entomopathogenic fungus (EPF), Beauveria bassiana (WG-11) (Ascomycota: Hypocreales), and two entomopathogenic nematode (EPN) species, Heterorhabditis bacteriophora (Poinar) and Steinernema carpocapsae (Weiser) (Nematoda: Rhabditida), were evaluated in various combinations against laboratory-reared 3rd, 5th, and 8th instar larvae of RPW in laboratory assays. Individual and combined effects of both entomopathogens were observed after pre-application of the B. bassiana fungus at 1- to 2-week intervals. A number of parameters, such as diet consumption, development, frass production, mortality, and weight gain, were measured after the application of sub-lethal doses of the EPF. Combined treatments were tested for additive and synergistic effects. Synergism was observed more frequently in B. bassiana plus S. carpocapsae treatments than in B. bassiana plus H. bacteriophora combinations. Early instar larvae of RPW were more susceptible than older instars. Synergistic effects were observed in the 3rd and 5th instars exposed to B. bassiana and S. carpocapsae at 0-, 7-, and 14-day intervals, whereas in 8th instar larvae the synergistic effect was observed only in B. bassiana and S. carpocapsae treatments at 0- and 7-day intervals. EPN treatments decreased pupation, egg hatching, and the emergence of adults. Lethal effects of the nematodes were also observed in all growth stages of R. ferrugineus: reduced larval weight, increased larval, pre-pupal, and pupal duration, and reduced adult weight and life span. Sub-lethal concentrations of both entomopathogens induced variations in the different developmental stages and reduced food consumption, frass production, growth, and weight gain. On the basis of these results, it is concluded that synthetic pesticides should be replaced with environmentally friendly, sustainable biopesticides.

Keywords: H. bacteriophora, S. carpocapsae, B. bassiana, mortality

Procedia PDF Downloads 125
628 Gaussian Probability Density for Forest Fire Detection Using Satellite Imagery

Authors: S. Benkraouda, Z. Djelloul-Khedda, B. Yagoubi

Abstract:

We present a method for early detection of forest fires from a thermal infrared satellite image, using a matrix of the probability that each pixel belongs to the background. The principle of the method is to compare a theoretical mathematical model with an experimental one. We treat each line of the image matrix as a realization of a non-stationary random process. Since the distribution of pixels in the satellite image is statistically dependent, we divide these lines into small stationary and ergodic intervals in order to characterize the image with an adequate mathematical model. A standard deviation was chosen to generate the random variables, so each interval behaves naturally like white Gaussian noise. The latter was selected as the mathematical model representing the large majority of pixels, which can be considered the image background. After some pretreatment, the parameters of the theoretical Gaussian model were extracted from the modeled image; these parameters are used to calculate the probability that each interval of the modeled image belongs to the theoretical Gaussian model. High-intensity pixels are regarded as foreign to the background, so they will have a low probability, while pixels belonging to the background will have a high probability. Finally, we present the inverse of the matrix of interval probabilities for better fire detection.
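
A simplified numerical sketch of this background-modeling idea follows; the interval length, the scoring of each interval by the likelihood of its mean, and the toy image are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def background_probability(image, interval=32):
    """Per-interval probability of belonging to a Gaussian background model.

    Each row is split into short intervals (treated as locally stationary);
    a Gaussian fitted to the whole image serves as the background model, and
    each interval is scored by the likelihood of its mean under that model.
    """
    mu, sigma = image.mean(), image.std()
    rows, cols = image.shape
    prob = np.zeros((rows, cols // interval))
    for i in range(rows):
        for j in range(prob.shape[1]):
            chunk = image[i, j * interval:(j + 1) * interval]
            # Sampling distribution of the interval mean under the model
            prob[i, j] = norm.pdf(chunk.mean(), mu, sigma / np.sqrt(interval))
    # Hot (fire) pixels are foreign to the background, hence low probability;
    # invert so that candidate fire regions light up.
    return prob.max() - prob

# Toy thermal image: Gaussian background plus one hot spot
rng = np.random.default_rng(5)
img = rng.normal(100, 5, (64, 256))
img[20:24, 128:140] += 40
fire_map = background_probability(img)
print(np.unravel_index(fire_map.argmax(), fire_map.shape))  # near row 20
```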

Keywords: forest fire, forest fire detection, satellite image, normal distribution, theoretical Gaussian model, thermal infrared matrix image

Procedia PDF Downloads 111
627 Pharmacokinetics, Dosage Regimen and in Vitro Plasma Protein Binding of Danofloxacin following Intravenous Administration in Adult Buffaloes

Authors: Zahid Manzoor, Shaukat Hussain Munawar, Zahid Iqbal, Imran Ahmad Khan, Abdul Aziz, Hafiz Muhammad Qasim

Abstract:

The present study aimed to investigate the pharmacokinetic behavior and optimal dosage regimen of danofloxacin in 8 healthy adult buffaloes of a local breed (Nili Ravi) following a single intravenous administration at a dose of 2.5 mg/kg body weight. Plasma drug concentrations at various time intervals were measured by an HPLC method. In vitro plasma protein binding was determined employing the ultrafiltration technique. The distribution and elimination of danofloxacin were rapid, as indicated by the values (mean±SD) of the distribution half-life (t1/2α = 0.25±0.09 hours) and the elimination half-life (t1/2β = 3.26±0.43 hours), respectively. The volume of distribution at steady state (Vss) was 1.14±0.12 L/kg, indicating extensive distribution into various body fluids and tissues. The high value of the AUC (9.80±2.14 µg/mL·h) reflected substantial overall drug exposure. The mean residence time was 4.78±0.52 hours. On the basis of these pharmacokinetic parameters, a suitable intravenous regimen for danofloxacin in adult buffaloes would be 6.5 mg/kg repeated at 12-hour intervals. This is the first pharmacokinetic study of danofloxacin in this local species and should provide a valuable contribution to the local manufacturing of danofloxacin in Pakistan.
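
To make the derivation of such parameters concrete, here is a minimal non-compartmental Python sketch; the concentration-time points are invented for illustration (they merely mimic the magnitudes above), and the choice of log-linear tail is an assumption.

```python
import numpy as np

# Illustrative plasma concentration-time data after an IV bolus (2.5 mg/kg)
t = np.array([0.083, 0.25, 0.5, 1, 2, 4, 6, 8, 12.0])            # h
c = np.array([4.8, 3.1, 2.4, 2.0, 1.6, 1.05, 0.68, 0.44, 0.18])  # ug/mL

# AUC and AUMC by the trapezoidal rule (no extrapolation shown)
auc = np.trapz(c, t)
aumc = np.trapz(c * t, t)

# Terminal slope from the log-linear tail -> elimination half-life
beta = -np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]
t_half = np.log(2) / beta

dose = 2.5            # mg/kg
mrt = aumc / auc
cl = dose / auc       # clearance, L/h/kg (ug/mL is the same as mg/L)
vss = cl * mrt        # volume of distribution at steady state

print(f"t1/2 = {t_half:.2f} h, MRT = {mrt:.2f} h, Vss = {vss:.2f} L/kg")
```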

Keywords: danofloxacin, pharmacokinetics, plasma protein binding, buffaloes, dosage regimen

Procedia PDF Downloads 574
626 Therapeutic Application of Light and Electromagnetic Fields to Reduce Hyper-Inflammation Triggered by COVID-19

Authors: Blanche Aguida, Marootpong Pooam, Nathalie Jourdan, Margaret Ahmad

Abstract:

COVID-19-related morbidity is associated with exaggerated inflammation and cytokine production in the lungs, leading to acute respiratory failure. The cellular mechanisms underlying these so-called 'cytokine storms' are regulated through the Toll-like receptor 4 (TLR4) signaling pathway and by reactive oxygen species (ROS). Both light (photobiomodulation) and magnetic field (e.g., pulsed electromagnetic field) stimulation are non-invasive therapies known to confer anti-inflammatory effects and to regulate ROS signaling pathways. Here we show that daily exposure to two 10-minute intervals of moderate-intensity infrared light significantly lowered the inflammatory response induced via the TLR4 receptor signaling pathway in human cell cultures. Anti-inflammatory effects were likewise achieved by electromagnetic field exposure of cells to daily 10-minute intervals of either pulsed electromagnetic fields (PEMF) or low-level static magnetic fields. Because current illumination and electromagnetic field therapies have no known side effects and are already approved for some medical uses, we have developed protocols for verification in clinical trials of COVID-19 infection. These treatments are affordable, simple to implement, and may help to resolve the acute respiratory distress of COVID-19 patients both in the home and in the hospital.

Keywords: COVID-19, electromagnetic field therapy, inflammation, photobiomodulation therapy

Procedia PDF Downloads 116
625 Effect of Microwave Radiations on Natural Dyes’ Application on Cotton

Authors: Rafia Asghar, Abdul Hafeez

Abstract:

The current research concerned the extraction of natural dye from the powdered bark of Neem (Azadirachta indica) and the characterization of this dye under the influence of microwave radiation. Both the cotton fabric and the dye powder were exposed to microwave rays for different time intervals (2, 4, 6, 8, and 10 minutes) using a conventional oven. Aqueous, 60% methanol, and ethyl acetate extracts obtained from Neem (Azadirachta indica) bark were also exposed to microwave rays for different time intervals (2, 4, 6, 8, and 10 minutes). Pre-, meta-, and post-mordanting with alum (2%, 4%, 6%, 8%, and 10%) was done to improve the color strength of the extracted dye. Exposure of the Neem (Azadirachta indica) bark extract and the cotton to microwave rays enhanced the extraction and dyeing processes by reducing the extraction time, dyeing time, and dyeing temperature. Microwave treatment had a very strong influence on the color fastness and color strength properties of cotton when the Neem (Azadirachta indica) bark was extracted for 30 minutes and the cotton was dyed with that extract for 75 minutes at 30°C. Among pre-, meta-, and post-mordanting, the results indicated that a 5% concentration of alum in meta-mordanting exhibited the maximum color strength.

Keywords: dyes, natural dyeing, ecofriendly dyes, microwave treatment

Procedia PDF Downloads 658
624 Bringing the Confidence Intervals into Choropleth Mortality Map: An Example of Tainan, Taiwan

Authors: Tzu-Jung Tseng, Pei-Hsuen Han, Tsung-Hsueh Lu

Abstract:

Background: The choropleth mortality map is commonly used to identify areas with higher mortality risk. However, use of a choropleth map alone might result in misinterpretation of differences in mortality rates between areas: two areas with different color shades might not actually have a significant difference in mortality rates, and the mortality rate estimated for an area with a small population is less stable. We suggest bringing 95% confidence intervals (CI) into the choropleth mortality map to help users interpret areal mortality rate differences more properly. Method: In the first choropleth mortality map, we used only three colors to indicate the standardized mortality ratio (SMR) for each district in Tainan, Taiwan. Red denotes that the SMR of the district was significantly higher than the Tainan average; conversely, green indicates that the SMR was significantly lower than the Tainan average; yellow indicates that the SMR was not statistically significantly different from the Tainan average. In the second choropleth mortality map, we used a traditional sequential color scheme (color ramp) for the SMRs of the 37 districts in Tainan City, with a bar chart of each SMR with its 95% CI, in which users could examine whether the 95% CI lines of two districts overlapped (a nonsignificant difference). Results: The all-causes SMR of the districts in Tainan for 2008 to 2013 ranged from 0.77 (95% CI 0.75 to 0.80) in East District to 1.39 (95% CI 1.25 to 1.52) in Beimen. In the first choropleth mortality map, only 16 of the 37 districts were red and 8 districts were green. For different causes of death, the number of red districts differed. In the second choropleth mortality map, the added bar chart with the 95% CI of each district's SMR lets users visualize the SMR differences between districts. Conclusion: Through the use of 95% CIs, users can interpret areal mortality differences more properly.
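
A small Python sketch of the CI computation behind such a three-color rule is given below; an exact Poisson (chi-square based) interval is assumed here, since the abstract does not state which CI method was used.

```python
from scipy.stats import chi2

def smr_ci(observed, expected, alpha=0.05):
    """SMR with an exact Poisson 95% CI (chi-square formulation)."""
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return observed / expected, lower / expected, upper / expected

# A district is "red" if its whole CI lies above 1 (significantly high),
# "green" if the CI lies below 1, and "yellow" otherwise.
smr, lo, hi = smr_ci(observed=130, expected=100)
color = "red" if lo > 1 else "green" if hi < 1 else "yellow"
print(f"SMR = {smr:.2f} (95% CI {lo:.2f} to {hi:.2f}) -> {color}")
```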

Keywords: choropleth map, small area variation, standardized mortality ratio (SMR), Taiwan

Procedia PDF Downloads 289
623 Classification of EEG Signals Based on Dynamic Connectivity Analysis

Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović

Abstract:

In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained on the results of dynamic connectivity analysis between different brain regions are used for classification. Dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods: the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding-window analysis with a wide window, and the high susceptibility to noise encountered in constant sliding-window analysis with a narrow window. It does so by dynamically adjusting the window size using the RICI rule. This method extracts information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed, based on the same analysis method. As far as we know, through this research we have shown for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
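
As a hedged sketch of the connectivity measure itself (not of the adaptive RICI windowing, which is the paper's contribution), the snippet below computes the imaginary part of the complex Pearson correlation between two toy channels.

```python
import numpy as np
from scipy.signal import hilbert

def im_cpcc(x, y):
    """Imaginary part of the complex Pearson correlation coefficient.

    Computed between the analytic (Hilbert-transformed) signals; the
    imaginary part suppresses zero-lag (volume-conduction) coupling.
    """
    xa, ya = hilbert(x), hilbert(y)
    xa = (xa - xa.mean()) / xa.std()
    ya = (ya - ya.mean()) / ya.std()
    return np.imag(np.mean(xa * np.conj(ya)))

# Two channels coupled with a quarter-cycle lag show a large imCPCC
fs, f = 256, 10
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(6)
x = np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * f * t - np.pi / 2) + 0.5 * rng.standard_normal(t.size)
print(im_cpcc(x, y))   # large in magnitude for a 90-degree lag, ~0 at zero lag
```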

Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients

Procedia PDF Downloads 168
622 Characterization of Petrophysical Properties of Reservoirs in Bima Formation, Northeastern Nigeria: Implication for Hydrocarbon Exploration

Authors: Gabriel Efomeh Omolaiye, Jimoh Ajadi, Olatunji Seminu, Yusuf Ayoola Jimoh, Ubulom Daniel

Abstract:

Identification and characterization of the petrophysical properties of reservoirs in the Bima Formation were undertaken to understand their spatial distribution and impact on hydrocarbon saturation in this highly heterolithic siliciclastic sequence. The study was carried out using nine well logs from the Maiduguri and Baga/Lake sub-basins within the Borno Basin. The different log curves were combined to decipher the lithological heterogeneity of the serrated sand facies and to aid geologic correlation of the sand bodies within the sub-basins. Evaluation of the formation reveals largely undifferentiated to highly serrated and lenticular sand bodies, from which twelve reservoirs, named Bima Sand-1 to Bima Sand-12, were identified. The reservoir sand bodies are bifurcated by shale beds, which variably reduce their thicknesses from 0.61 to 6.1 m. The shale content in the sand bodies ranges from 11.00% (relatively clean) to 88.00%. The formation also has variable porosity, with calculated total porosity ranging from as low as 10.00% to as high as 35.00%. Similarly, effective porosity values span 2.00 to 24.00%. The irregular porosity values also account for the wide range of field-average permeability estimates computed for the formation, which measure between 0.03 and 319.49 mD. Hydrocarbon saturation (Sh) in the thin lenticular sand bodies also varies from 40.00 to 78.00%. Hydrocarbon was encountered in three intervals in Ga-1, four intervals in Da-1, two intervals in Ar-1, and one interval in Ye-1. The Ga-1 well encountered a 30.78 m hydrocarbon column in 14 thin sand lobes in Bima Sand-1, with thicknesses from 0.60 m to 5.80 m and an average saturation of 51.00%, while Bima Sand-2 intercepted a 45.11 m hydrocarbon column in 12 thin sand lobes with an average saturation of 61.00%, and Bima Sand-9 holds a 6.30 m column in 4 thin sand lobes. Da-1 has hydrocarbon in Bima Sand-8 (5.30 m, Sh of 58.00% in 5 sand lobes), Bima Sand-10 (13.50 m, Sh of 52.00% in 6 sand lobes), Bima Sand-11 (6.20 m, Sh of 58.00% in 2 sand lobes), and Bima Sand-12 (16.50 m, Sh of 66% in 6 sand lobes). In the Ar-1 well, hydrocarbon occurs in Bima Sand-3 (2.40 m column, Sh of 48% in a sand lobe) and Bima Sand-9 (6.0 m, Sh of 58% in a sand lobe). The Ye-1 well intersected only 0.5 m of hydrocarbon in Bima Sand-1, with 78% saturation. Although the Bima Formation has variable hydrocarbon saturation, mainly gas, in the Maiduguri and Baga/Lake sub-basins of the research area, its very thin, serrated sand beds, coupled in part with very low effective porosity and permeability, would pose a significant exploitation challenge. The sediments were deposited in a fluvio-lacustrine environment, resulting in very thinly laminated or serrated alternations of sand and shale lithofacies.

Keywords: Bima, Chad Basin, fluvio-lacustrine, lithofacies, serrated sand

Procedia PDF Downloads 135
621 Memory and Narratives Rereading before and after One Week

Authors: Abigail M. Csik, Gabriel A. Radvansky

Abstract:

As people read through event-based narratives, they construct an event model that captures information about the characters, goals, location, time, and causality. Memory for such narratives is represented at different levels, namely the surface form, textbase, and event model levels. Rereading has been shown to decrease surface form memory while at the same time increasing textbase and event model memories. More generally, distributed practice has consistently shown memory benefits over massed practice for different types of materials, including texts. However, little research has investigated distributed practice of narratives at different inter-study intervals and its effects on these three levels of memory. Recent work in our lab has indicated that there may be dramatic changes in patterns of forgetting around one week, which may affect the three levels of memory. The present experiment aimed to determine the effects of rereading on the three levels of memory as a function of whether the texts were reread before versus after one week. Participants (N = 42) read a set of stories, re-read them either before or after one week (with an inter-study interval of three days, seven days, or fourteen days), and then took a recognition test, from which the three levels of representation were derived. Signal detection results from this study reveal differential patterns at the three levels as a function of whether the narratives were re-read prior to or after one week. In particular, an ANOVA revealed that surface form memory was lower (p = .08), while textbase (p = .02) and event model memory (p = .04) were greater, when narratives were re-read 14 days later compared with 3 days later. These results have implications for which types of memory benefit from distributed practice at various inter-study intervals.

Keywords: memory, event cognition, distributed practice, consolidation

Procedia PDF Downloads 190
620 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of economics, finance, and actuarial science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherent non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by solving iteratively the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis it is assumed that the error terms are distributed normally, and hence the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared with the widely used least square estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of economics and finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
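
The linearization step described above can be sketched as follows (a generic statement of Tiku-style modified maximum likelihood; the specific g(.) for the skew t likelihood is not reproduced here).

```latex
% The intractable term g(.) in the likelihood equations, evaluated at the
% standardized order statistics z_{(i)}, is replaced by the first two terms
% of a Taylor expansion about the corresponding expected quantiles t_{(i)}:
\[
  g\!\left(z_{(i)}\right) \;\approx\; \alpha_i + \beta_i\, z_{(i)},
  \qquad
  \alpha_i = g\!\left(t_{(i)}\right) - t_{(i)}\, g'\!\left(t_{(i)}\right),
  \quad
  \beta_i = g'\!\left(t_{(i)}\right).
\]
% With this linear form, the likelihood equations become linear in the
% location and scale parameters, so the estimates come out in closed form.
```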

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 375
619 Core Stability Index for Healthy Young Sri Lankan Population

Authors: V. M. B. K. T. Malwanage, S. Samita

Abstract:

Core stability is one of the major determinants that contribute to preventing injuries, enhancing performance, and improving quality of life. The endurance of the four major muscle groups of the central 'core' of the human body is identified as the most reliable determinant of core stability amongst the numerous other contributing causes. This study aimed to develop a 'Core Stability Index' conferring a single value for an individual's core stability based on the four endurance test scores. Since it is possible that at least some of the test scores are not independent, the possibility of constructing a single index using the multivariate method of exploratory factor analysis was investigated in the study. The study sample consisted of 400 healthy young individuals with a mean age of 23.74 ± 1.51 years and a mean BMI (Body Mass Index) of 21.1 ± 4.18. The correlation analysis revealed highly significant (P < 0.0001) correlations between test scores, and thus constructing an index from these highly interrelated test scores using factor analysis was justified. The mean values of all test scores were significantly different between males and females (P < 0.0001), and therefore two separate core stability indices were constructed for the two gender groups. Moreover, eigenvalues of 3.103 and 2.305 for males and females, respectively, indicated that one factor exists for all four test scores, and thus a single-factor index was constructed. The 95% reference intervals constructed using the index scores were -1.64 to 2.00 and -1.56 to 2.29 for males and females, respectively. These intervals can effectively be used to diagnose those who need improvement in core stability. Practitioners should find that, with a single-value measure, they can be more consistent among themselves.
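
A minimal Python sketch of building such a single-factor index and its 95% reference interval follows; the four test names, loadings, and simulated scores are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(7)

# Illustrative endurance scores (s) for four core muscle-group tests, driven
# by a single latent "core stability" factor (as the eigenvalues suggest)
n = 400
latent = rng.standard_normal(n)
scores = np.column_stack([
    60 + 15 * latent + 5 * rng.standard_normal(n),   # trunk flexors
    80 + 20 * latent + 6 * rng.standard_normal(n),   # trunk extensors
    45 + 10 * latent + 4 * rng.standard_normal(n),   # right side bridge
    44 + 10 * latent + 4 * rng.standard_normal(n),   # left side bridge
])

fa = FactorAnalysis(n_components=1).fit(scores)
index = fa.transform(scores).ravel()                 # core stability index

# 95% reference interval from the index distribution
lo, hi = np.percentile(index, [2.5, 97.5])
print(f"95% reference interval: {lo:.2f} to {hi:.2f}")
```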

Keywords: construction of indices, endurance test scores, muscle endurance, quality of life

Procedia PDF Downloads 131
618 Extensions to Chen's Minimizing Equal Mass Parallelogram Solutions

Authors: Abdalla Manur, Daniel Offin, Alessandro Arsie

Abstract:

In this paper, we study the extension of the minimizing equal-mass parallelogram solutions derived by Chen in 2001. Chen's solution was shown to be minimizing over one quarter of the period, [0, T], where numerical integration was used in the proof. This paper focuses on extending the minimization property to the time intervals [0, 2T] and [0, 4T].

Keywords: action, Hamiltonian, N-body, symmetry

Procedia PDF Downloads 1643
617 A Two-Week and Six-Month Stability of Cancer Health Literacy Classification Using the CHLT-6

Authors: Levent Dumenci, Laura A. Siminoff

Abstract:

Health literacy has been shown to predict a variety of health outcomes. Reliable identification of persons with limited cancer health literacy (LCHL) has proved questionable with existing instruments that use an arbitrary cut point along a continuum. The CHLT-6, however, uses a latent mixture modeling approach to identify persons with LCHL. The purpose of this study was to estimate the two-week and six-month stability of identifying persons with LCHL using the CHLT-6, with a discrete latent variable approach as the underlying measurement structure. Using a test-retest design, the CHLT-6 was administered to cancer patients at two-week (N=98) and six-month (N=51) intervals. The two-week and six-month latent test-retest agreements were 89% and 88%, respectively. The chance-corrected latent agreements estimated from Dumenci's latent kappa were 0.62 (95% CI: 0.41 to 0.82) and 0.47 (95% CI: 0.14 to 0.80) for the two-week and six-month intervals, respectively. High levels of latent test-retest agreement between the limited and adequate categories of the cancer health literacy construct, coupled with moderate to good levels of chance-corrected latent agreement, indicate that the CHLT-6 classification of limited versus adequate cancer health literacy is relatively stable over time. In conclusion, the measurement structure underlying the instrument allows for estimating classification errors, circumventing the limitations of the arbitrary approaches adopted by all other instruments. The CHLT-6 can be used to identify persons with LCHL in oncology clinics and intervention studies and to accurately estimate treatment effectiveness.
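
For intuition about "chance-corrected agreement", here is a manifest (non-latent) Cohen's kappa sketch on toy test-retest classifications; Dumenci's latent kappa used in the study additionally corrects for classification error in the latent classes and is not implemented here.

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two categorical ratings."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                       # observed
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # by chance
    return (po - pe) / (1 - pe)

# Test-retest classification: 0 = limited, 1 = adequate cancer health literacy
test = np.array([0, 0, 1, 1, 1, 0, 1, 1, 0, 1])
retest = np.array([0, 1, 1, 1, 1, 0, 1, 0, 0, 1])
print(f"kappa = {cohens_kappa(test, retest):.2f}")
```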

Keywords: limited cancer health literacy, the CHLT-6, discrete latent variable modeling, latent agreement

Procedia PDF Downloads 146
616 The Destruction of Confucianism and Socialism in Chinese Popular Comedy Films

Authors: Shu Hui

Abstract:

Since 2010, comedy has become the predominant genre in the Chinese film market. However, in contrast to their huge commercial success, these films have received severe public criticism: they are referred to as trash (lan pian) by the public because of their fragmented narratives, unprofessional photography, and advocacy of money worship. The paper aims to explain this contradiction between high box office and poor word of mouth within hegemony theory. Four popular comedies that ranked in the top 20 in domestic revenue in the year of their release are chosen to analyze their popularity in general. Unlike other popular films, these comedies' popularity is generated by their disruptive pleasures rather than by good stories or photography. The destruction of Confucianism and socialism formulates the public consent, or popularity, and causes the public criticism as well. Moreover, the happy endings restore normality at a superficial level.

Keywords: Confucianism, destruction, reconciliation, socialism

Procedia PDF Downloads 100
615 More Precise: Patient-Reported Outcomes after Stroke

Authors: Amber Elyse Corrigan, Alexander Smith, Anna Pennington, Ben Carter, Jonathan Hewitt

Abstract:

Background and Purpose: Morbidity secondary to stroke is highly heterogeneous, but it is important to both patients and clinicians in post-stroke management and adjustment to life after stroke. Post-stroke morbidity has been poorly measured, both clinically and from the patient perspective. Patient-reported outcome measures (PROs) help close this knowledge gap in morbidity assessment. The primary aim of this study was to consider the association between PRO outcomes and stroke predictors. Methods: A multicenter prospective cohort study assessed 549 stroke patients at 19 hospital sites across England and Wales during 2019. Following a stroke event, demographic, clinical, and PRO measures were collected. The prevalence of morbidity within PRO measures was calculated with associated 95% confidence intervals. Predictors of domain outcome were calculated using a multilevel generalized linear model. Associated P-values and 95% confidence intervals are reported. Results: Data were collected from 549 participants, 317 men (57.7%) and 232 women (42.3%), with ages ranging from 25 to 97 (mean 72.7). PRO morbidity was high post-stroke: 93.2% of the cohort reported post-stroke PRO morbidity. Previous stroke, diabetes, and gender were associated with worse patient-reported outcomes across both the physical and cognitive domains. Conclusions: This large-scale multicenter cohort study illustrates the high proportion of morbidity in PRO measures. Further, we demonstrate key predictors of adverse outcomes (diabetes, previous stroke, and gender), congruent with clinical predictors. The PRO has been demonstrated to be an informative and useful measure in stroke care when considering patient-reported outcomes and has wider implications for the consideration of PROs in clinical management. Future longitudinal follow-up with PROs is needed to assess associations with long-term morbidity.
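
As an illustration of attaching a 95% CI to a reported prevalence such as the 93.2% above, here is a small Python sketch using the Wilson score interval; the choice of interval method is an assumption, as the paper does not state the one used.

```python
import numpy as np

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a proportion (one common choice)."""
    p = k / n
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return center - half, center + half

# Reported PRO morbidity: 93.2% of 549 participants
k = round(0.932 * 549)
print(wilson_ci(k, 549))   # roughly (0.91, 0.95)
```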

Keywords: morbidity, patient-reported outcome, PRO, stroke

Procedia PDF Downloads 100
614 Reading as Moral Afternoon Tea: An Empirical Study on the Compensation Effect between Literary Novel Reading and Readers’ Moral Motivation

Authors: Chong Jiang, Liang Zhao, Hua Jian, Xiaoguang Wang

Abstract:

The belief that there is a strong relationship between reading narrative and morality has generally become a basic assumption of scholars, philosophers, critics, and cultural critics. The virtuality constructed by literary novels inspires readers to regard the narrative as a thought experiment, creating distance between readers and events so that they can freely and morally experience the positions of different roles. The virtual narrative combined with literary characteristics is therefore often considered a "moral laboratory." Well-established findings reveal that people show less lying and deceptive behavior in the morning than in the afternoon, called the morning morality effect. As a limited self-regulation resource, morality is constantly depleted with the rhythm of the day under the influence of the morning morality effect. It can also be compensated and restored in various ways, such as eating, sleeping, etc. As a common form of entertainment in modern society, literary novel reading gives people virtual experience and emotional catharsis, just like a relaxing afternoon tea that helps people break away from fast-paced work, restore their strength, and relieve stress in a short period of leisure. In this paper, inspired by compensatory control theory, we ask whether reading literary novels in the digital environment could replenish a kind of spiritual energy for self-regulation, compensating for people's moral loss in the afternoon. Based on this assumption, we leverage the social annotation text generated by readers in digital reading to represent readers' reading attention. We then recognize the semantics, calculate the readers' moral motivation expressed in the annotations, and investigate the fine-grained dynamics of moral motivation across each time slot within the 24 hours of a day. Comprehensively comparing different divisions of the time intervals, extensive experiments showed that the moral motivation reflected in annotations made in the afternoon is significantly higher than in the morning. The results robustly verify the hypothesis that reading compensates for moral motivation, which we call the moral afternoon tea effect. Moreover, we quantitatively identified that such moral compensation lasts until 14:00 in the afternoon and 21:00 in the evening. In addition, it is interesting to find that dividing the time intervals into units of different lengths impacts the identification of moral rhythms: four-hour time slots bring more insight into moral rhythms than three-hour or six-hour slots.
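
A hedged sketch of the time-slot analysis described above follows; the hourly scores are invented, and the pandas binning merely illustrates the four-hour-slot comparison, not the study's semantic-scoring pipeline.

```python
import pandas as pd

# Illustrative annotation stream: hour of day plus a moral-motivation score
# (in the study, scores come from semantic analysis of annotation text)
df = pd.DataFrame({
    "hour": [8, 9, 10, 11, 13, 14, 15, 16, 20, 21, 22, 23],
    "moral_motivation": [0.31, 0.28, 0.30, 0.33, 0.42, 0.45, 0.41, 0.44,
                         0.39, 0.37, 0.33, 0.30],
})

# Bin into four-hour slots (the granularity the study found most informative)
df["slot"] = pd.cut(df["hour"], bins=range(0, 25, 4), right=False)
print(df.groupby("slot", observed=True)["moral_motivation"].mean())

# Morning vs. afternoon contrast
morning = df.loc[df["hour"] < 12, "moral_motivation"]
afternoon = df.loc[(df["hour"] >= 12) & (df["hour"] < 18), "moral_motivation"]
print(afternoon.mean() - morning.mean())   # positive under the afternoon-tea effect
```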

Keywords: digital reading, social annotation, moral motivation, morning morality effect, control compensation

Procedia PDF Downloads 120
613 Does Creatine Supplementation Improve Swimming Performance?

Authors: Catrin Morgan, Atholl Johnston

Abstract:

Creatine supplementation should theoretically increase total muscle creatine and so enhance the generation of intramuscular phosphocreatine and subsequent ATP formation. The use of creatine as a potential ergogenic aid in sport has been an area of significant scientific research for a number of years. However, the effect of creatine supplementation on swimming performance is a relatively new area of research and is the subject of this review. In swimming, creatine supplementation could help maintain maximal power output, aid recovery, and increase lean body mass. After investigating the underlying theory and science behind creatine supplementation, a literature review was conducted to identify the best evidence on the effect of creatine supplementation on swimming performance. The search identified 27 potential studies, and of these 17 were selected for review. The studies were then categorised into single sprint performance, which involves swimming a short-distance race, or repeated interval performance, which involves swimming a series of sprints with intervals of rest between them. None of the studies controlled for the multiple confounding factors associated with the measurement of swimming performance. The sample sizes in the studies were limited, which reduced their reliability and introduced the possibility of bias. The studies reviewed provided insufficient evidence to determine whether creatine supplementation is beneficial to swimming performance. However, what data there were supported the use of creatine supplementation in repeated interval swimming rather than in single sprint swimming. From a review of the studies, it was calculated that, on average, there was a 1.37% increase in swimming performance with the use of creatine for repeated intervals and a 0.86% increase for single sprints. While this may seem minor, it should be remembered that swimming races are often won by much smaller margins: in the 2012 London Olympics, the men's 100 metres freestyle was won by a margin of only 0.01 of a second. Therefore, any potential benefit could make a dramatic difference to the final outcome of a race. Overall, more research is warranted before the benefits of creatine supplementation for swimming performance can be further clarified.

Keywords: creatine supplementation, repeated interval, single sprint, swimming performance

Procedia PDF Downloads 393
612 Pomegranate Peel Based Edible Coating Treatment for Safety and Quality of Chicken Nuggets

Authors: Muhammad Sajid Arshad, Sadaf Bashir

Abstract:

In this study, the effects of a pomegranate peel based edible coating on the safety and quality of chicken nuggets were determined. Four treatment groups were prepared: control (without coating), coating with sodium alginate (SA) (1.5%), pomegranate peel powder (PPP) (1.5%), and the combination of SA and PPP. Significant variation was observed with respect to coating treatments and storage intervals. The chicken nuggets were subjected to refrigerated storage (4°C) and were analyzed at regular intervals of 0, 7, 14, and 21 days. The microbiological quality was determined by total aerobic and coliform counts. Total aerobic (5.09±0.05 log CFU/g) and coliform (3.91±0.06 log CFU/g) counts were highest in uncoated chicken nuggets, whereas the lowest counts were observed in nuggets coated with the combination of SA and PPP. Likewise, the antioxidant potential of the chicken nuggets was assessed via total phenolic contents (TPC) and DPPH activity. The highest TPC (135.66 GAE/100 g) and DPPH (64.65%) were found with the combination of SA and PPP, whereas the minimum TPC (91.38) and DPPH (41.48) were observed in uncoated chicken nuggets. Regarding the stability analysis of the chicken nuggets, thiobarbituric acid reactive substances (TBARS) and peroxide value (POV) were determined; the highest TBARS (1.62±0.03 MDA/kg) and POV (0.92±0.03 meq peroxide/kg) were found in uncoated chicken nuggets. Hunter color values were also recorded for both uncoated and coated chicken nuggets. Sensory attributes were evaluated by trained panelists. The highest sensory scores for appearance, color, taste, texture, and overall acceptability were observed in the control (uncoated), while the scores of the coated treatments remained within acceptable limits. In a nutshell, the combination of SA and PPP enhanced the overall quality, antioxidant potential, and stability of chicken nuggets.

Keywords: chicken nuggets, edible coatings, pomegranate peel powder, sodium alginate

Procedia PDF Downloads 109
611 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar performance comparisons hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
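
The quantile-of-the-minimum computation at the heart of the method can be sketched numerically as follows (Python with scipy standing in for the R package mvtnorm mentioned above; the equicorrelated covariance and the number of candidate models are toy assumptions).

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

rng = np.random.default_rng(8)

# Toy stand-in for the asymptotic setup: centered GIC values of M candidate
# models jointly Gaussian with an (assumed) equicorrelated covariance
M, rho = 5, 0.6
cov = np.full((M, M), rho) + (1 - rho) * np.eye(M)
mvn = multivariate_normal(np.zeros(M), cov)

# P(min_j Z_j <= q) = 1 - P(all Z_j > q); by symmetry of the centered
# Gaussian, P(all Z_j > q) equals the MVN CDF evaluated at (-q, ..., -q).
def min_cdf(q):
    return 1.0 - mvn.cdf(np.full(M, -q))

# Invert numerically for the 95% point of the minimum's distribution
q95 = brentq(lambda q: min_cdf(q) - 0.95, -10.0, 10.0)

# Monte Carlo check of the quantile (the role the bootstrap plays above)
z = rng.multivariate_normal(np.zeros(M), cov, 200_000)
print(q95, np.quantile(z.min(axis=1), 0.95))
```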

Keywords: model selection inference, generalized information criteria, post-model selection inference, asymptotic theory

Procedia PDF Downloads 57
610 The Determination of Phosphorus Solubility in Iron as a Function of the Other Components

Authors: Andras Dezső, Peter Baumli, George Kaptay

Abstract:

Phosphorus is an important component in steels because it changes the mechanical properties and can modify the structure. Phosphorus can form the compound Fe3P, which segregates at ferrite grain boundaries at the nano- to microscale. This intermetallic compound degrades the mechanical properties; for example, it causes blue brittleness, the embrittlement created by the segregated particles at 200 to 300 °C. This work describes phosphide solubility as a function of the other components. We performed calculations for the effects of the elements Ni, Mo, Cu, S, V, C, Si, Mn, and Cr using the Thermo-Calc software and described these effects by approximate functions. The binary Fe-P system has a solubility line with the following equation: ln w0 = -3.439 - 1.903/T, where w0 is the maximum dissolved phosphorus concentration in weight percent and T is the temperature in Kelvin. The equation shows that P becomes more soluble as the temperature increases. Nickel, molybdenum, vanadium, silicon, manganese, and chromium affect the maximum dissolved concentration; these effects depend on the concentrations of the elements, and are smaller at the lower concentrations at which we add them to our steels. Copper, sulphur, and carbon have no effect on the phosphorus solubility. We predict that in all cases the maximum solubility increases as the temperature rises. Between 473 K and 673 K, the phase diagrams of these systems contain mostly two- or three-phase eutectoid regions and single-phase ferritic intervals. In the eutectoid regions, ferrite, iron phosphide, and the metal(III) phosphide are in equilibrium. With this modelling we predicted which elements are good for avoiding phosphide segregation and which are not. These data are important when making or choosing steels in which phosphide segregation limits our possibilities.
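
Taking the fitted binary Fe-P relation above at face value, a tiny Python sketch evaluates it over the temperature range discussed (the constants are copied from the text as printed and are not independently verified).

```python
import numpy as np

def max_p_solubility(T):
    """Maximum dissolved P concentration (wt%) from the relation above.

    Uses ln w0 = -3.439 - 1.903/T as printed in the abstract; both
    constants are taken at face value from the text (an assumption).
    """
    return np.exp(-3.439 - 1.903 / T)

for T in (473.0, 573.0, 673.0):
    print(f"T = {T:.0f} K -> w0 = {max_p_solubility(T):.4f} wt%")
# w0 rises (slightly) with temperature, as stated in the abstract
```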

Keywords: phosphorus, steel, segregation, Thermo-Calc software

Procedia PDF Downloads 596