Search results for: interval coefficients
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1688

1328 Genetic Structure Analysis through Pedigree Information in a Closed Herd of the New Zealand White Rabbits

Authors: M. Sakthivel, A. Devaki, D. Balasubramanyam, P. Kumarasamy, A. Raja, R. Anilkumar, H. Gopi

Abstract:

The New Zealand White breed of rabbit is one of the most commonly used, well-adapted exotic breeds in India. Earlier studies were limited to analyzing the environmental factors affecting growth and reproductive performance. In the present study, the population of New Zealand White rabbits in a closed herd was evaluated for its genetic structure. Pedigree data (n=2508) covering 18 years (1995-2012) were utilized for the study. Pedigree analysis and the estimation of population genetic parameters based on gene origin probabilities were performed using the software program ENDOG (version 4.8). The analysis revealed that the mean values of the generation interval, coefficient of inbreeding and equivalent inbreeding were 1.489 years, 13.233 percent and 17.585 percent, respectively. The proportion of the population inbred was 100 percent. The estimated mean values of average relatedness and the individual increase in inbreeding were 22.727 and 3.004 percent, respectively. The percent increase in inbreeding over generations was 1.94, 3.06 and 3.98 estimated through maximum generations, equivalent generations, and complete generations, respectively. The number of ancestors contributing 50% of the genes (fₐ₅₀) to the gene pool of the reference population was 4, which might have led to reduced genetic variability and an increased amount of inbreeding. The extent of genetic bottleneck, assessed by calculating the effective number of founders (fₑ) and the effective number of ancestors (fₐ) and expressed as the fₑ/fₐ ratio, was 1.1, which is indicative of the absence of stringent bottlenecks. Up to the 5th generation, 71.29 percent of the pedigree was complete, reflecting well-maintained pedigree records. The maximum number of known generations was 15 with an average of 7.9, and the average number of equivalent generations traced was 5.6, indicating a fairly good pedigree depth.
The realized effective population size was 14.93, which is critically low, and given the increasing trend of inbreeding, the situation is expected to worsen in the future. The proportion of animals with a genetic conservation index (GCI) greater than 9 was 39.10 percent; animals with a higher GCI can be preferentially used to maintain balanced contributions from the founders. The study made it evident that the herd was completely inbred, with a very high inbreeding coefficient and a critically low effective population size. Recommendations were made to reduce the probability of deleterious effects of inbreeding and to improve the genetic variability in the herd. The present study can serve as a template for similar studies aimed at meeting the demand for animal protein in developing countries.
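The inbreeding coefficient F that ENDOG reports can be illustrated with a minimal sketch of Wright's pedigree-based definition: F(x) is the coancestry of x's parents, computed by the recursive tabular method. The five-animal pedigree below is hypothetical, not the herd data.

```python
# Hypothetical pedigree: {animal: (sire, dam)}, ancestors listed before
# descendants; None marks an unknown founder parent.
pedigree = {
    "A": (None, None), "B": (None, None),
    "C": ("A", "B"), "D": ("A", "B"),
    "X": ("C", "D"),                      # X comes from a full-sib mating
}
order = {a: n for n, a in enumerate(pedigree)}

def kinship(i, j):
    """Coancestry f(i, j) by the recursive tabular method."""
    if i is None or j is None:
        return 0.0
    if i == j:
        s, d = pedigree[i]
        return 0.5 * (1.0 + kinship(s, d))
    if order[i] > order[j]:               # recurse through the younger animal
        i, j = j, i
    s, d = pedigree[j]
    return 0.5 * (kinship(i, s) + kinship(i, d))

def inbreeding(x):
    """Wright's F(x): the coancestry of x's sire and dam."""
    s, d = pedigree[x]
    return kinship(s, d)
```

A full-sib mating such as X's gives the classic F = 0.25, which is the same order of magnitude as the herd-level coefficients reported above.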

Keywords: effective population size, genetic structure, pedigree analysis, rabbit genetics

Procedia PDF Downloads 271
1327 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially supermarkets. Point of sale (POS) systems make it possible to record customers' daily purchasing behaviors in an identification point of sale (ID-POS) database, which can be used to analyze the customer behavior of a supermarket. The customer value is an ID-POS-based indicator for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of one supermarket chain. Three primary problems arose during modeling: customer values were not directly comparable, the customer value and distance data were multicollinear, and the number of valid partial regression coefficients was limited. An improved customer value, Huff's gravity model, and the inverse attractiveness frequency were considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitor influence analysis. In numerical experiments, all types of models were useful for loyal customer classification. The model combining all three methods was the most effective for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
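Huff's gravity model, one of the three methods named above, assigns each customer a probability of patronizing each store from the store's attractiveness and the home-to-store distance. A minimal sketch (the attractiveness values, distances, and exponents below are illustrative, not the study's data):

```python
# Huff's gravity model: P(customer chooses store j) is proportional to
# S_j**alpha / d_j**beta, where S_j is the store's attractiveness
# (e.g. floor area) and d_j the distance to it.  alpha and beta are
# illustrative exponent choices, not calibrated values.
def huff_probabilities(attract, dist, alpha=1.0, beta=2.0):
    utils = [s ** alpha / d ** beta for s, d in zip(attract, dist)]
    total = sum(utils)
    return [u / total for u in utils]

# One customer, three candidate supermarkets; the middle store is the
# smallest but by far the closest.
probs = huff_probabilities(attract=[2000, 1000, 1500], dist=[1.0, 0.5, 2.0])
```

With the default exponents, the nearby small store captures the highest choice probability, which is the kind of competitor effect the models above try to separate from loyalty.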

Keywords: customer value, Huff's gravity model, POS, retailer

Procedia PDF Downloads 99
1326 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations

Authors: Kuniyoshi Abe

Abstract:

The bi-conjugate gradient method (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been the method most often used for efficiently solving linear equations, but its convergence behavior can exhibit a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed. It may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction. The resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants through numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
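The Bi-CG coefficients whose accuracy the stagnation discussion turns on are the α and ω computed from inner products each iteration. A minimal sketch of the textbook (van der Vorst) Bi-CGSTAB without preconditioning, not the paper's parallel variants:

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, maxiter=200):
    """Textbook Bi-CGSTAB sketch (no preconditioning, no breakdown guards)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    r_hat = r.copy()                       # fixed shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros(n)
    p = np.zeros(n)
    for _ in range(maxiter):
        rho_new = r_hat @ r                # inner product -> global reduction
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)          # Bi-CG coefficient
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)          # stabilizing minimal-residual step
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

# Small nonsymmetric, diagonally dominant demo system.
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x = bicgstab(A, b)
```

Each iteration contains the two inner-product groups (for α and for ω) whose global reductions are the synchronization bottleneck the parallel variants rearrange.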

Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant

Procedia PDF Downloads 139
1325 Effect of Microstructure on Transition Temperature of Austempered Ductile Iron (ADI)

Authors: A. Ozel

Abstract:

The ductile-to-brittle transition temperature is a very important criterion for the selection of materials in some applications, especially under low-temperature conditions. For that reason, in this study the transition temperature of as-cast and austempered unalloyed ductile iron was investigated in the temperature interval from -60 to +100 °C. The microstructures of the samples were examined by light microscopy. The impact energy values obtained from the experiments were found to depend on the austempering time and temperature.

Keywords: Austempered Ductile Iron (ADI), Charpy test, microstructure, transition temperature

Procedia PDF Downloads 480
1324 Application of an Analytical Model to Obtain Daily Flow Duration Curves for Different Hydrological Regimes in Switzerland

Authors: Ana Clara Santos, Maria Manuela Portela, Bettina Schaefli

Abstract:

This work assesses the performance of an analytical model framework for generating daily flow duration curves, FDCs, based on the climatic characteristics of the catchments and on their streamflow recession coefficients. According to the analytical model framework, precipitation is considered to be a stochastic process, modeled as a marked Poisson process, and recession is considered to be deterministic, with parameters that can be computed based on different models. The analytical model framework was tested on three case studies with different hydrological regimes located in Switzerland: pluvial, snow-dominated and glacier. For that purpose, five time intervals were analyzed (the four meteorological seasons and the civil year) and two developments of the model were tested: one considering a linear recession model and the other adopting a nonlinear recession model. Those developments were combined with recession coefficients obtained from two different approaches: forward and inverse estimation. The performance of the analytical framework with forward parameter estimation is poor in comparison with inverse estimation for both the linear and the nonlinear model. For the pluvial catchment, the inverse estimation shows exceptionally good results, especially for the nonlinear model, clearly suggesting that the model has the ability to describe FDCs. For the snow-dominated and glacier catchments, the seasonal results are better than the annual ones, suggesting that the model can describe streamflows in those conditions and that future efforts should focus on improving and combining seasonal curves instead of considering single annual ones.
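The target that such an analytical model is judged against is the empirical FDC: daily flows sorted in descending order and paired with their exceedance probabilities. A minimal sketch using the Weibull plotting position (the five flow values are illustrative):

```python
import numpy as np

def flow_duration_curve(daily_flows):
    """Empirical FDC: exceedance probability P(Q >= q) for each sorted flow."""
    q = np.sort(np.asarray(daily_flows, dtype=float))[::-1]   # descending
    n = len(q)
    exceedance = np.arange(1, n + 1) / (n + 1.0)              # Weibull position
    return exceedance, q

exceedance, q = flow_duration_curve([1.0, 5.0, 3.0, 2.0, 4.0])
```

An analytical FDC model replaces this record-by-record curve with a closed form parameterized by climate statistics and the recession coefficient, which is what the forward/inverse estimation comparison above evaluates.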

Keywords: analytical streamflow distribution, stochastic process, linear and non-linear recession, hydrological modelling, daily discharges

Procedia PDF Downloads 138
1323 Information Communication Technology (ICT) Using Management in Nursing College under the Praboromarajchanok Institute

Authors: Suphaphon Udomluck, Pannathorn Chachvarat

Abstract:

Information and communication technology (ICT) management is essential for effective decision making in an organization. The Concerns-Based Adoption Model (CBAM) was employed as the conceptual framework. The purpose of the study was to assess the state of ICT management in the colleges of nursing under the Praboromarajchanok Institute. The sample was a multi-stage sample from 10 participating colleges of nursing and included directors, vice directors, heads of learning groups, teachers, system administrators and staff responsible for ICT, for a total of 280 participants. The instruments used were questionnaires comprising four parts: general information, ICT management, the Stages of Concern (SoC) questionnaire, and the Levels of Use (LoU) of ICT questionnaire. Reliability was tested; the alpha coefficients were 0.967 for ICT management, 0.884 for SoC and 0.945 for LoU. The data were analyzed by frequency, percentage, mean, standard deviation, Pearson product-moment correlation and multiple regression. The findings were as follows. The overall score of ICT management was at a high level; its components were administration, hardware, software, and peopleware. The overall SoC of ICT score was at a high level, and the overall LoU of ICT score was at a moderate level. ICT management had a positive relationship with both the SoC of ICT and the LoU of ICT (p < .01). Multiple regression revealed that administration, hardware, software and peopleware could predict SoC of ICT (18.5%) and LoU of ICT (20.8%). The factor that significantly influenced SoC was peopleware; the factors that significantly influenced LoU of ICT were administration, hardware and peopleware.
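The reported reliability values (0.967, 0.884, 0.945) are Cronbach's alpha coefficients. A minimal sketch of the computation on a toy item-response matrix (rows = respondents, columns = items; the scores are illustrative, not the survey data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                            # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy 4-respondent, 3-item Likert-style matrix.
scores = [[3, 4, 3], [4, 5, 4], [2, 3, 2], [5, 5, 5]]
alpha = cronbach_alpha(scores)
```

Alpha approaches 1 as items covary strongly; values above roughly 0.9, like those reported, indicate highly internally consistent scales.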

Keywords: information communication technology (ICT), management, the concerns-based adoption model (CBAM), stages of concern (SoC), levels of use (LoU)

Procedia PDF Downloads 287
1322 Macroeconomic Policy Coordination and Economic Growth Uncertainty in Nigeria

Authors: Ephraim Ugwu, Christopher Ehinomen

Abstract:

Despite efforts by the Nigerian government to harmonize macroeconomic policy implementation by establishing various committees to resolve disputes between the fiscal and monetary authorities, it is still evident that the federal government has continued its expansionary policy by increasing spending, thus creating a huge budget deficit. This study evaluates the effect of macroeconomic policy coordination on economic growth uncertainty in Nigeria from 1980 to 2020. Employing the autoregressive distributed lag (ARDL) bounds testing procedure, the empirical results show that the error correction term, ECM(-1), has a negative sign and is statistically significant, with a t-statistic of -5.612882. Therefore, the gap between the long-run equilibrium value and the actual value of the dependent variable is corrected at a speed of adjustment equal to 77% yearly. The long-run coefficient of the intercept term indicates that, other things remaining the same (ceteris paribus), economic growth uncertainty will continue to fall by 7.32%. The coefficient of the fiscal policy variable, PUBEXP, has a positive sign and is statistically significant. This implies that as government expenditure increases by 1%, economic growth uncertainty increases by 1.67%. The coefficient of the monetary policy variable, MS, also has a positive sign but is statistically insignificant. The coefficients of the merchandise trade variable, TRADE, and the exchange rate, EXR, show negative signs and are statistically significant. This indicates that as the country's merchandise trade and the rate of exchange increase by 1%, economic growth uncertainty falls by 0.38% and 0.06%, respectively. This study therefore advocates proper coordination of monetary, fiscal and exchange rate policies in order to actualize the goal of achieving stable economic growth.
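The 77% yearly speed of adjustment has a simple arithmetic reading: each year, 77% of the remaining gap to long-run equilibrium closes, so the fraction of the gap still open after t years is (1 - 0.77)^t. A toy illustration (not part of the study's estimation):

```python
# Remaining fraction of the disequilibrium gap after a number of years,
# given an ECM speed-of-adjustment coefficient.
def remaining_disequilibrium(speed, years):
    return (1 - speed) ** years

gap_after_2_years = remaining_disequilibrium(0.77, 2)   # ~ 0.0529, i.e. ~5%
```

So a shock to the long-run relationship is almost fully absorbed within two to three years under the estimated coefficient.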

Keywords: macroeconomic, policy coordination, growth uncertainty, ARDL, Nigeria

Procedia PDF Downloads 65
1321 Cognitive Function and Coping Behavior in the Elderly: A Population-Based Cross-Sectional Study

Authors: Ryo Shikimoto, Hidehito Niimura, Hisashi Kida, Kota Suzuki, Yukiko Miyasaka, Masaru Mimura

Abstract:

Introduction: In Japan, the most aged country in the world, it is important to explore predictive factors of cognitive function among the elderly. Coping behavior relieves chronic stress and improves lifestyle, and consequently may reduce the risk of cognitive impairment. One of the most widely investigated frameworks in previous studies distinguishes approach-oriented from avoidance-oriented coping strategies. The purpose of this study is to investigate the relationship between cognitive function and coping strategies among elderly residents of urban areas of Japan. Method: This is part of the cross-sectional Arakawa geriatric cohort study of 1,099 residents (aged 65 to 86 years; mean [SD] = 72.9 [5.2]). Participants were assessed for cognitive function using the Mini-Mental State Examination (MMSE) and diagnosed by psychiatrists in face-to-face interviews. Their coping behaviors and coping strategies (approach- and avoidance-oriented coping) were then investigated using a stress and coping inventory. A multiple regression analysis was used to investigate the relationship between the MMSE score and each coping strategy. Results: Of the 1,099 participants, the mean MMSE score was 27.2 (SD = 2.7), and the numbers diagnosed as normal, mild cognitive impairment (MCI), and dementia were 815 (74.2%), 248 (22.6%), and 14 (1.3%), respectively. The approach-oriented coping score was significantly associated with the MMSE score (B [partial regression coefficient] = 0.12, 95% confidence interval = 0.05 to 0.19) after adjusting for confounding factors including age, sex, and education. Avoidance-oriented coping did not show a significant association with the MMSE score (B = -0.02, 95% confidence interval = -0.09 to 0.06). Conclusion: Approach-oriented coping was clearly associated with neurocognitive function in this Japanese population.
A future longitudinal trial is warranted to investigate the protective effects of coping behavior on cognitive function.

Keywords: approach-oriented coping, cognitive impairment, coping behavior, dementia

Procedia PDF Downloads 107
1320 Investigation on Correlation of Earthquake Intensity Parameters with Seismic Response of Reinforced Concrete Structures

Authors: Semra Sirin Kiris

Abstract:

Nonlinear dynamic analysis is permitted for structures without any restrictions. The important issue is the selection of the design earthquake for the analyses, since quite different responses may be obtained using ground motion records from the same general area, even when they result from the same earthquake. In seismic design codes, the method requires scaling earthquake records to a specified hazard level based on the site response spectrum. Many studies have indicated that this selection rule can cause a large scatter in response, and that ground motion characteristics obtained in other ways may demonstrate better correlation with peak seismic response. For this reason, the influence of eleven different ground motion parameters on the peak displacement of reinforced concrete systems is examined in this paper. From 7020 nonlinear time history analyses of single-degree-of-freedom systems, the most effective earthquake parameters are given for ranges of the initial periods and strength ratios of the structures. In this study, a hysteresis model for reinforced concrete called Q-hyst is used, which does not take strength and stiffness degradation into account. The post-yielding to elastic stiffness ratio is taken as 0.15. The initial period T ranges from 0.1 s to 0.9 s in 0.1 s intervals, and three different strength ratios are used. The 260 earthquake records selected all have magnitudes higher than M = 6.
The earthquake parameters considered, related to the energy content, duration or peak values of the ground motion records, are PGA (peak ground acceleration), PGV (peak ground velocity), PGD (peak ground displacement), MIV (maximum incremental velocity), EPA (effective peak acceleration), EPV (effective peak velocity), teff (effective duration), A95 (Arias intensity-based parameter), SPGA (significant peak ground acceleration), ID (damage factor) and Sa (spectral acceleration). Observing the correlation coefficients between the ground motion parameters and the peak displacement of the structures, different earthquake parameters govern the peak displacement demand in the ranges formed by the different periods and strength ratios of reinforced concrete systems. The influence of Sa tends to decrease for high values of the strength ratio and T = 0.3 s - 0.6 s. ID and PGD are not suitable as measures of earthquake effect, since no high correlation with displacement demand is observed. The influence of A95 is high for T = 0.1 s but low for higher values of T and the strength ratio. PGA, EPA and SPGA show the highest correlation for T = 0.1 s, but their effectiveness decreases with higher T. Considering the whole range of structural parameters, MIV is the most effective parameter.
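The screening described above amounts to ranking candidate intensity measures by the absolute Pearson correlation of each parameter with the peak displacement demand across records. A minimal sketch on toy data (the two intensity measures and demands below are illustrative, not the 260-record set):

```python
import numpy as np

def rank_intensity_measures(params, demand):
    """params: dict of name -> array of IM values, one per record.
    Returns (name, |Pearson r| with demand) sorted best-first."""
    scores = {name: abs(np.corrcoef(vals, demand)[0, 1])
              for name, vals in params.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

toy = {
    "PGV": np.array([1.0, 2.0, 3.0, 4.0]),
    "PGA": np.array([4.0, 1.0, 3.0, 2.0]),
}
demand = np.array([2.1, 3.9, 6.2, 7.8])      # tracks PGV almost linearly
ranking = rank_intensity_measures(toy, demand)
```

Repeating this ranking for each (T, strength ratio) cell is what produces period-dependent winners like A95 at short periods and MIV overall.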

Keywords: earthquake parameters, earthquake resistant design, nonlinear analysis, reinforced concrete

Procedia PDF Downloads 132
1319 Removal of Polycyclic Aromatic Hydrocarbons Present in Tyre Pyrolytic Oil Using Low Cost Natural Adsorbents

Authors: Neha Budhwani

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are formed during the pyrolysis of scrap tyres to produce tyre pyrolytic oil (TPO). Owing to their carcinogenic, mutagenic, and toxic properties, PAHs are priority pollutants. Hence it is essential to remove PAHs from TPO before utilising it as a petroleum fuel alternative (to run an engine). Agricultural wastes have a promising future as biosorbents due to their cost effectiveness, abundant availability, high biosorption capacity and renewability. Various low-cost adsorbents were prepared from natural sources. The uptake of PAHs present in tyre pyrolytic oil was investigated using various low-cost adsorbents of natural origin, including sawdust (shisham), coconut fiber, neem bark, chitin and activated charcoal. Adsorption experiments with different PAHs, viz. naphthalene, acenaphthene, biphenyl and anthracene, were carried out at ambient temperature (25°C) and at pH 7. It was observed that for any given PAH, the adsorption capacity increases with the lignin content. The Freundlich constants kf and 1/n were evaluated, and it was found that the adsorption isotherms of the PAHs were in agreement with the Freundlich model, while the uptake capacity for PAHs followed the order: activated charcoal > sawdust (shisham) > coconut fiber > chitin. The partition coefficients in acetone-water and the adsorption constants at equilibrium could be linearly correlated with octanol-water partition coefficients. Natural adsorbents were thus found to be a good alternative for PAH removal. Sawdust of Dalbergia sissoo, a by-product of sawmills, was found to be a promising adsorbent for the removal of the PAHs present in TPO, comparable to some conventional adsorbents.
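The Freundlich constants kf and 1/n mentioned above are usually obtained by linear regression on the log form of the isotherm, log q = log kf + (1/n) log C. A minimal sketch on synthetic data (the concentrations and the true kf = 3, 1/n = 0.7 are illustrative, not the measured PAH values):

```python
import numpy as np

def fit_freundlich(C_eq, q_eq):
    """Fit q = Kf * C**(1/n) by least squares on log10 q vs log10 C."""
    slope, intercept = np.polyfit(np.log10(C_eq), np.log10(q_eq), 1)
    return 10 ** intercept, slope          # (Kf, 1/n)

C = np.array([0.5, 1.0, 2.0, 4.0])         # equilibrium concentration, mg/L
q = 3.0 * C ** 0.7                          # synthetic uptake, mg/g
Kf, one_over_n = fit_freundlich(C, q)
```

On real data the points scatter around the log-log line, and the goodness of that fit is what "in agreement with the Freundlich model" asserts.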

Keywords: natural adsorbent, PAHs, TPO, coconut fiber, wood powder (shisham), naphthalene, acenaphthene, biphenyl and anthracene

Procedia PDF Downloads 206
1318 Injunctions, Disjunctions, Remnants: The Reverse of Unity

Authors: Igor Guatelli

Abstract:

The universe of aesthetic perception entails impasses about sensitive divergences that each text or visual object may be subjected to. If approached through intertextuality that is not based on the misleading notion of kinships or similarities a priori admissible, the possibility of anachronistic, heterogeneous - and non-diachronic - assemblies can enhance the emergence of interval movements, intermediate, and conflicting, conducive to a method of reading, interpreting, and assigning meaning that escapes the rigid antinomies of the mere being and non-being of things. In negative, they operate in a relationship built by the lack of an adjusted meaning set by their positive existences, with no remainders; the generated interval becomes the remnant of each of them; it is the opening that obscures the stable positions of each one. Without the negative of absence, of that which is always missing or must be missing in a text, concept, or image made positive by history, nothing is perceived beyond what has been already given. Pairings or binary oppositions cannot lead only to functional syntheses; on the contrary, methodological disturbances accumulated by the approximation of signs and entities can initiate a process of becoming as an opening to an unforeseen other, transformation until a moment when the difficulties of [re]conciliation become the mainstay of a future of that sign/entity, not envisioned a priori. A counter-history can emerge from these unprecedented, misadjusted approaches, beginnings of unassigned injunctions and disjunctions, in short, difficult alliances that open cracks in a supposedly cohesive history, chained in its apparent linearity with no remains, understood as a categorical historical imperative. Interstices are minority fields that, because of their opening, are capable of causing opacity in that which, apparently, presents itself with irreducible clarity. 
Resulting from an incomplete and maladjusted [at the least dual] marriage between the signs/entities that originate them, this interval may destabilize and cause disorder in these entities and their own meanings. The interstitials offer a hyphenated relationship: a simultaneous union and separation, a spacing between the entity’s identity and its otherness, or alterity. One and the other may no longer be seen without the crack or fissure that now separates them while uniting them, by a space-time lapse. Ontological, semantic shifts are caused by this fissure, an absence between one and the other, one with and against the other. Based on an improbable approximation between some conceptual and semantic shifts within the design production of architect Rem Koolhaas and the textual production of the philosopher Jacques Derrida, this article questions the notion of unity, coherence, affinity, and complementarity in the process of construction of thought from these ontological, epistemological, and semiological fissures that rattle the signs/entities and their stable meanings. Fissures in a thought that is considered coherent, cohesive, and formatted are the negativity that constitutes the interstices that allow us to move towards what still remains as non-identity, which allows us to begin another story.

Keywords: clearing, interstice, negative, remnant, spectrum

Procedia PDF Downloads 115
1317 Surface Thermodynamics Approach to Mycobacterium tuberculosis (M-TB) – Human Sputum Interactions

Authors: J. L. Chukwuneke, C. H. Achebe, S. N. Omenyi

Abstract:

This research work presents a surface thermodynamics approach to M-TB/HIV-human sputum interactions. It involved the use of the Hamaker coefficient concept as a surface energetics tool in determining the interaction processes, with the surface interfacial energies explained using the van der Waals concept of particle interactions. The Lifshitz derivation for van der Waals forces was applied as an alternative to the contact angle approach, which has been widely used in other biological systems. The methodology involved taking sputum samples from twenty infected persons and from twenty uninfected persons for absorbance measurement using a digital ultraviolet-visible spectrophotometer. The variables required for the computations with the Lifshitz formula were derived from the absorbance data. Matlab software tools were used in the mathematical analysis of the data produced from the experiments (absorbance values). The Hamaker constants and the combined Hamaker coefficients were obtained using the values of the dielectric constant together with the Lifshitz equation. The absolute combined Hamaker coefficients A132abs and A131abs on both infected and uninfected sputum samples gave the values A132abs = 0.21631 × 10⁻²¹ J for M-TB infected sputum and A132abs = 0.18825 × 10⁻²¹ J for M-TB/HIV infected sputum. The significance of this result is the positive value of the absolute combined Hamaker coefficient, which suggests the existence of net positive van der Waals forces, demonstrating an attraction between the bacteria and the macrophage. This implies that infection can occur. It was also shown that in the presence of HIV, the interaction energy is reduced by 13%, confirming the adverse effects observed in HIV patients suffering from tuberculosis.

Keywords: absorbance, dielectric constant, Hamaker coefficient, Lifshitz formula, macrophage, Mycobacterium tuberculosis, van der Waals forces

Procedia PDF Downloads 250
1316 Evaluation of Hepatic Metabolite Changes for Differentiation Between Non-Alcoholic Steatohepatitis and Simple Hepatic Steatosis Using Long Echo-Time Proton Magnetic Resonance Spectroscopy

Authors: Tae-Hoon Kim, Kwon-Ha Yoon, Hong Young Jun, Ki-Jong Kim, Young Hwan Lee, Myeung Su Lee, Keum Ha Choi, Ki Jung Yun, Eun Young Cho, Yong-Yeon Jeong, Chung-Hwan Jun

Abstract:

Purpose: To assess hepatic metabolite changes for the differentiation of non-alcoholic steatohepatitis (NASH) from simple steatosis on proton magnetic resonance spectroscopy (1H-MRS) in both humans and an animal model. Methods: The local institutional review board approved this study, and subjects gave written informed consent. 1H-MRS measurements were performed on a localized voxel of the liver using a point-resolved spectroscopy (PRESS) sequence, and the hepatic metabolites alanine (Ala), lactate/triglyceride (Lac/TG), and TG were analyzed in the NASH, simple steatosis and control groups. Group differences were tested with ANOVA and Tukey's post-hoc tests, and diagnostic accuracy was tested by calculating the area under the receiver operating characteristic (ROC) curve. The associations between metabolite concentrations and pathologic grades or non-alcoholic fatty liver disease (NAFLD) activity scores were assessed by Pearson's correlation. Results: Patients with NASH showed elevated Ala (p < 0.001), Lac/TG (p < 0.001) and TG (p < 0.05) concentrations when compared with patients who had simple steatosis and with healthy controls. The NASH patients had higher levels of Ala (mean±SEM, 52.5±8.3 vs 2.0±0.9; p < 0.001) and Lac/TG (824.0±168.2 vs 394.1±89.8; p < 0.05) than those with simple steatosis. The area under the ROC curve for distinguishing NASH from simple steatosis was 1.00 (95% confidence interval: 1.00, 1.00) with Ala and 0.782 (95% confidence interval: 0.61, 0.96) with Lac/TG. The Ala and Lac/TG levels were well correlated with steatosis grade, lobular inflammation, and NAFLD activity scores. The metabolic changes seen in humans were reproducible in a mouse model induced by streptozotocin injection and a high-fat diet. Conclusion: 1H-MRS would be useful for differentiating patients with NASH from those with simple hepatic steatosis.

Keywords: non-alcoholic fatty liver disease, non-alcoholic steatohepatitis, 1H MR spectroscopy, hepatic metabolites

Procedia PDF Downloads 305
1315 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling

Authors: Vibha Devi, Shabina Khanam

Abstract:

Hemp (Cannabis sativa) possesses a rich content of the essential fatty acids ω-6 linoleic and ω-3 linolenic acid in the ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study employs a supercritical fluid extraction (SFE) approach on hemp seed at various conditions of the parameters temperature (40 - 80) °C, pressure (200 - 350) bar, flow rate (5 - 15) g/min, particle size (0.430 - 1.015) mm and amount of co-solvent (0 - 10) % of solvent flow rate, through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques, which create a large number of datasets through resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement. For jackknife resampling, the sample size is therefore 31 (eliminating one observation), repeated 32 times. Bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, repeated 100 times. The estimands considered for these resampling techniques are the mean, standard deviation, coefficient of variation and standard error of the mean.
For the ω-6 linoleic acid concentration, the mean value was approximately 58.5 for both resampling methods, i.e., the average (central value) of the sample means over all data points. Similarly, for the ω-3 linolenic acid concentration, the mean was 22.5 for both resampling methods. Variance measures the spread of the data about its mean; a greater variance indicates a wider range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66 %) and 6 for ω-3 linolenic acid (ranging from 16.71 to 26.2 %). Further, the low standard deviation (approx. 1 %), low standard error of the mean (< 0.8), and low coefficient of variation (< 0.2) reflect the accuracy of the sample for prediction. All estimates of the coefficient of variation, standard deviation, and standard error of the mean fall within the 95 % confidence interval.
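The two resampling schemes described above can be sketched in a few lines; the function names and the sample values used below are illustrative, not the study's data:

```python
import random
import statistics

def jackknife(data, stat=statistics.mean):
    """Leave-one-out resampling: N subsamples of size N-1, no replacement."""
    n = len(data)
    reps = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    est = statistics.mean(reps)
    # standard jackknife formula for the standard error of the statistic
    se = ((n - 1) / n * sum((r - est) ** 2 for r in reps)) ** 0.5
    return est, se

def bootstrap(data, stat=statistics.mean, n_boot=100, seed=1):
    """Resampling with replacement: n_boot samples of the original size."""
    rng = random.Random(seed)
    reps = [stat(rng.choices(data, k=len(data))) for _ in range(n_boot)]
    return statistics.mean(reps), statistics.stdev(reps)
```

For the mean statistic, the average of the jackknife replicates equals the original sample mean, which makes a convenient sanity check on an implementation.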

Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation

Procedia PDF Downloads 122
1314 Postoperative Budesonide Nasal Irrigation vs Normal Saline Irrigation for Chronic Rhinosinusitis: A Systematic Review and Meta-Analysis

Authors: Rakan Hassan M. Alzahrani, Ziyad Alzahrani, Bader Bashrahil, Abdulrahman Elyasi, Abdullah a Ghaddaf, Rayan Alzahrani, Mohammed Alkathlan, Nawaf Alghamdi, Dakheelallah Almutairi

Abstract:

Background: Corticosteroid irrigations, which commonly involve the off-label use of budesonide mixed with normal saline in high-volume sinonasal irrigations, are increasingly used in the management of post-operative chronic rhinosinusitis (CRS). Objective: This article measures the efficacy of post-operative budesonide nasal irrigation compared to normal saline-alone nasal irrigation in the management of CRS through a systematic review and meta-analysis of randomized controlled trials (RCTs). Methods: The PubMed, Embase, and Cochrane Central Register of Controlled Trials databases were searched by two independent authors. Only RCTs comparing budesonide irrigation to normal saline-alone irrigation for CRS, with or without polyposis, after functional endoscopic sinus surgery (FESS) were eligible. A random-effects meta-analysis of the reported CRS-related quality of life (QOL) measures and the objective endoscopic assessment scales of the disease was performed. Results: Only 6 RCTs met the eligibility criteria, with a total of 356 participants. Compared to normal saline irrigation, budesonide nasal irrigation showed statistically significant improvements in both CRS-related QOL (MD = -4.22, confidence interval [CI]: -5.63, -2.82 [P < 0.00001]) and endoscopic findings (SMD = -0.50, CI: -0.93, -0.06 [P = 0.03]). Conclusion: Both intervention arms showed improvements in CRS-related QOL and endoscopic findings in post-FESS chronic rhinosinusitis with or without polyposis. However, budesonide irrigation seems to have a slight edge over conventional normal saline irrigation, with no reported serious side effects, including hypothalamic-pituitary-adrenal (HPA) axis suppression.
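Random-effects pooling of this kind is commonly performed with the DerSimonian-Laird estimator; a minimal sketch is given below under that assumption (the effect sizes and variances in the test are illustrative, not taken from the reviewed RCTs):

```python
import math

def dersimonian_laird(effects, variances, z=1.96):
    """Pool per-study effect sizes with a random-effects model.
    Returns (pooled effect, (CI lower, CI upper))."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)          # between-study variance
    wr = [1.0 / (v + tau2) for v in variances]             # random-effects weights
    pooled = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    return pooled, (pooled - z * se, pooled + z * se)
```

When all studies report the same effect, the pooled estimate reduces to that effect and the between-study variance is zero, which is a useful check.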

Keywords: Budesonide, chronic rhinosinusitis, corticosteroids, nasal irrigation, normal saline

Procedia PDF Downloads 55
1313 Using Derivative Free Method to Improve the Error Estimation of Numerical Quadrature

Authors: Chin-Yun Chen

Abstract:

Numerical integration is an essential tool for deriving different physical quantities in engineering and science. The effectiveness of a numerical integrator depends on several factors, the crucial one being error estimation. This work presents an error estimator that incorporates a derivative-free method to improve the performance of verified numerical quadrature.
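The abstract does not specify how the estimator is constructed; one standard derivative-free approach compares the same rule at two step sizes (Richardson extrapolation), sketched here as an assumption rather than the paper's actual method:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def integrate_with_error(f, a, b, n=64):
    """Derivative-free error estimate by comparing step sizes h and h/2."""
    coarse = trapezoid(f, a, b, n)
    fine = trapezoid(f, a, b, 2 * n)
    err_est = abs(fine - coarse) / 3.0       # Richardson estimate for an O(h^2) rule
    improved = fine + (fine - coarse) / 3.0  # extrapolated (Simpson-equivalent) value
    return improved, err_est
```

No derivatives of the integrand are evaluated: the error information comes entirely from the difference between the two quadrature values.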

Keywords: numerical quadrature, error estimation, derivative free method, interval computation

Procedia PDF Downloads 435
1312 Effect of Temperature on the Binary Mixture of Imidazolium Ionic Liquid with Pyrrolidin-2-One: Volumetric and Ultrasonic Study

Authors: T. Srinivasa Krishna, K. Narendra, K. Thomas, S. S. Raju, B. Munibhadrayya

Abstract:

The densities, speeds of sound, and refractive indices of the binary mixture of the ionic liquid (IL) 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([BMIM][Imide]) and pyrrolidin-2-one (PY) were measured at atmospheric pressure over the temperature range T = (298.15 to 323.15) K. The excess molar volume, excess isentropic compressibility, excess speed of sound, partial molar volumes, and isentropic partial molar compressibility were calculated from the experimental density and speed of sound. From the experimental data, excess thermal expansion coefficients and the isothermal pressure coefficient of excess molar enthalpy at 298.15 K were calculated. The results were analyzed and discussed from the point of view of structural changes. Excess properties were calculated and correlated by the Redlich-Kister and Legendre polynomial equations, and binary coefficients were obtained. Values of excess partial volumes at infinite dilution for the binary system at different temperatures were calculated from the adjustable parameters obtained from the Legendre polynomial and Redlich-Kister smoothing equations. The deviation in refractive index, ΔnD, and the deviation in molar refraction, ΔRm, were calculated from the measured refractive index values. Equations of state and several mixing rules were used to predict the refractive indices of the binary mixtures; comparison with the experimental values by means of the standard deviation showed excellent agreement. Using Prigogine-Flory-Patterson (PFP) theory, the above thermodynamic mixing functions were calculated, and the results obtained from this theory were compared with the experimental results.
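A Redlich-Kister correlation of an excess property, Y^E = x1·x2·Σ A_k (x1 - x2)^k, can be fitted by ordinary least squares; the sketch below uses synthetic coefficients, not the measured [BMIM][Imide] + PY system:

```python
import numpy as np

def redlich_kister_fit(x1, yE, order=3):
    """Fit Y^E = x1*x2 * sum_k A_k (x1 - x2)^k by linear least squares."""
    x1 = np.asarray(x1, dtype=float)
    yE = np.asarray(yE, dtype=float)
    x2 = 1.0 - x1
    # each column of the design matrix is x1*x2*(x1-x2)^k
    M = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order)])
    A, *_ = np.linalg.lstsq(M, yE, rcond=None)
    return A

def redlich_kister_eval(x1, A):
    x1 = np.asarray(x1, dtype=float)
    x2 = 1.0 - x1
    return x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(A))
```

By construction the correlation vanishes at both pure-component limits (x1 = 0 and x1 = 1), as an excess property must.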

Keywords: density, refractive index, speeds of sound, Prigogine-Flory-Patterson theory

Procedia PDF Downloads 382
1311 The Effect of Multiple Environmental Conditions on Acacia senegal Seedling’s Carbon, Nitrogen, and Hydrogen Contents: An Experimental Investigation

Authors: Abdelmoniem A. Attaelmanan, Ahmed A. H. Siddig

Abstract:

This study was conducted in light of continual global climate changes that project increasing aridity, changes in soil fertility, and pollution. Plant growth and development largely depend on the combination of available water and nutrients in the soil, and changes in the climate and atmospheric chemistry can seriously affect these growth factors. Plant carbon (C), nitrogen (N), and hydrogen (H) play a fundamental role in the maintenance of ecosystem structure and function. Hashab (Acacia senegal), which produces gum Arabic, supports dryland ecosystems in tropical zones through its potential to restore degraded soils; hence it is ecologically and economically important for the dry areas of sub-Saharan Africa. The study aims at investigating the effects of water stress (simulated drought) and poor soil type on Acacia senegal C, N, and H contents. Seven-day-old seedlings were assigned to the treatments in a split-plot design for four weeks. The main plot was irrigation interval (well-watered and water-stressed), and the subplot was soil type (silt and sand soils). Seedling C%, N%, and H% were measured using a CHNS-O analyzer, applying the standard test method. Irrigation interval and soil type had no effect on whole-seedling or leaf C%, N%, and H%; irrigation interval affected stem C% and H%; both irrigation interval and soil type affected root N%; and an interaction effect of water and soil was found on leaf and root N%. Combined application of well-watered irrigation with soil rich in N and other nutrients would yield the greatest seedling C, N, and H content, which would enhance growth and biomass accumulation and can play a crucial role in ecosystem productivity and services in dryland regions.

Keywords: Acacia senegal, Africa, climate change, drylands, nutrients biomass, Sub-Saharan, Sudan

Procedia PDF Downloads 90
1310 The Study of Heat and Mass Transfer for Ferrous Materials' Filtration Drying

Authors: Dmytro Symak

Abstract:

Drying is a complex technological, thermal, and energy process. In many cases it is the most energy-intensive stage of production and can account for over 50% of total costs. In Ukraine, over 85% of Portland cement is produced by the wet process, in which energy accounts for almost 60% of the cost of the finished product. Wet cement production consumes over 5500 kJ/kg of clinker, versus only 3100 kJ/kg for the dry process; switching to dry Portland cement production would therefore nearly halve energy costs. Studying the drying of raw materials in Portland cement manufacture is thus a highly relevant task. Filtration drying, one of the most intense methods, is recommended for fine ferrous materials (small pyrites, red mud, clay Kyoko). The essence of the filtration method lies in filtering the heat agent through a stationary layer of wet material located on a perforated partition, in the system "layer of dispersed material - perforated partition." For optimal drying, it is necessary to establish the dependence of the pressure loss in the layer of dispersed material, and of the heat and mass transfer values, on the velocity of the filtering gas flow. In our research, the experimentally determined pressure loss in the layer of dispersed material was generalized by means of dimensionless complexes, as were the heat-exchange coefficients. We also determined the relation between the mass and heat transfer coefficients. As a result of theoretical and experimental investigations, a methodology was developed for calculating the optimal parameters of the thermal agent and the main parameters of the filtration drying installation. A comparison, by established operating-cost methods, of small-pyrite drying in a rotating drum versus filtration drying shows savings of up to 618 kWh per 1,000 kg of dry material, and of 700 kWh for filtration drying of clay.

Keywords: drying, cement, heat and mass transfer, filtration method

Procedia PDF Downloads 239
1309 Extension and Closure of a Field for Engineering Purpose

Authors: Shouji Yujiro, Memei Dukovic, Mist Yakubu

Abstract:

Fields are important objects of study in algebra since they provide a useful generalization of many number systems, such as the rational numbers, real numbers, and complex numbers. In particular, the usual rules of associativity, commutativity, and distributivity hold. Fields also appear in many other areas of mathematics. When abstract algebra was first being developed, the definition of a field usually did not include commutativity of multiplication, and what we today call a field would have been called either a commutative field or a rational domain. In contemporary usage, a field is always commutative. A structure which satisfies all the properties of a field except possibly commutativity is today called a division ring, a division algebra, or sometimes a skew field; the term non-commutative field is also still widely used. In French, fields are called corps (literally, body), generally regardless of their commutativity. When necessary, a (commutative) field is called a corps commutatif and a skew field a corps gauche. The German word for body is Körper, and this word is used to denote fields; hence the use of the blackboard bold 𝕂 to denote a field. The concept of a field was first (implicitly) used to prove that there is no general formula expressing, in terms of radicals, the roots of a polynomial with rational coefficients of degree 5 or higher. An extension of a field k is simply a field K containing k as a subfield. One distinguishes between extensions having various qualities. For example, an extension K of a field k is called algebraic if every element of K is a root of some polynomial with coefficients in k; otherwise, the extension is called transcendental. The aim of Galois theory is the study of algebraic extensions of a field. Given a field k, various kinds of closures of k may be introduced: for example, the algebraic closure, the separable closure, the cyclic closure, et cetera. 
The idea is always the same: if P is a property of fields, then a P-closure of k is a field K containing k, having property P, and minimal in the sense that no proper subfield of K containing k has property P. For example, if we take P(K) to be the property 'every non-constant polynomial f in K[t] has a root in K', then a P-closure of k is just an algebraic closure of k. In general, if P-closures exist for some property P and field k, they are all isomorphic. However, there is in general no preferred isomorphism between two closures.
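The algebraic extension Q(√2) of Q can be made concrete in code: elements a + b√2 form a field because every nonzero element has a multiplicative inverse via the norm a² - 2b², which is nonzero since √2 is irrational. A small sketch (the class name is illustrative):

```python
from fractions import Fraction

class QSqrt2:
    """Element a + b*sqrt(2) of the algebraic extension Q(sqrt(2)) of Q."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        return QSqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def inverse(self):
        # field norm a^2 - 2b^2; nonzero for any nonzero element
        n = self.a ** 2 - 2 * self.b ** 2
        return QSqrt2(self.a / n, -self.b / n)

    def __eq__(self, other):
        return self.a == other.a and self.b == other.b
```

Every element satisfies a polynomial with rational coefficients (e.g. √2 satisfies t² - 2), so this extension is algebraic in the sense defined above.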

Keywords: field theory, mechanic maths, supertech, rolltech

Procedia PDF Downloads 343
1308 Simulation of Multistage Extraction Process of Co-Ni Separation Using Ionic Liquids

Authors: Hongyan Chen, Megan Jobson, Andrew J. Masters, Maria Gonzalez-Miquel, Simon Halstead, Mayri Diaz de Rienzo

Abstract:

Ionic liquids offer excellent advantages over conventional solvents for the industrial extraction of metals from aqueous solutions, where such extraction processes bring opportunities for recovery, reuse, and recycling of valuable resources and more sustainable production pathways. Recent research on the use of ionic liquids for extraction confirms their high selectivity and low volatility, but there is relatively little focus on how their properties can be best exploited in practice. This work addresses gaps in research on process modelling and simulation, to support the development, design, and optimisation of these processes, focusing on the separation of the highly similar transition metals cobalt and nickel. The study exploits published experimental results, as well as new experimental results, relating to the separation of Co and Ni using trihexyl(tetradecyl)phosphonium chloride. This extraction agent is attractive because it is cheaper, more stable, and less toxic than fluorinated hydrophobic ionic liquids. The process modelling work concerns the selection and/or development of suitable models for the physical properties, the distribution coefficients, the mass transfer phenomena, the extractor unit, and the multi-stage extraction flowsheet. The distribution coefficient model for cobalt and HCl represents an anion exchange mechanism, supported by the literature and COSMO-RS calculations. Parameters of the distribution coefficient models are estimated by fitting the model to published experimental extraction equilibrium results. The mass transfer model applies Newman's hard sphere model. Diffusion coefficients in the aqueous phase are obtained from the literature, while diffusion coefficients in the ionic liquid phase are fitted to dynamic experimental results. The mass transfer area is calculated from the surface-mean diameter of liquid droplets of the dispersed phase, estimated from the Weber number inside the extractor. 
New experiments measure the interfacial tension between the aqueous and ionic phases. The empirical models for predicting the density and viscosity of solutions under different metal loadings are also fitted to new experimental data. The extractor is modelled as a continuous stirred tank reactor with mass transfer between the two phases and perfect phase separation of the outlet flows. A multistage separation flowsheet simulation is set up to replicate a published experiment and compare model predictions with the experimental results. This simulation model is implemented in gPROMS software for dynamic process simulation. The results of single stage and multi-stage flowsheet simulations are shown to be in good agreement with the published experimental results. The estimated diffusion coefficient of cobalt in the ionic liquid phase is in reasonable agreement with published data for the diffusion coefficients of various metals in this ionic liquid. A sensitivity study with this simulation model demonstrates the usefulness of the models for process design. The simulation approach has potential to be extended to account for other metals, acids, and solvents for process development, design, and optimisation of extraction processes applying ionic liquids for metals separations, although a lack of experimental data is currently limiting the accuracy of models within the whole framework. Future work will focus on process development more generally and on extractive separation of rare earths using ionic liquids.
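A far simpler stage model than the paper's dynamic gPROMS flowsheet shows how a distribution coefficient propagates through a multistage separation; the cross-current sketch below assumes ideal equilibrium stages, fresh solvent per stage, and a constant D, which is a deliberate simplification of the authors' model:

```python
def crosscurrent_raffinate(c0, D, phase_ratio, n_stages):
    """Aqueous (raffinate) concentration after n ideal equilibrium stages,
    each contacted with fresh ionic-liquid solvent.
    D = c_IL / c_aq at equilibrium; phase_ratio = V_IL / V_aq per stage."""
    c = c0
    for _ in range(n_stages):
        # stage mass balance: V_aq*c_in = V_aq*c_out + V_IL*(D*c_out)
        c = c / (1.0 + D * phase_ratio)
    return c
```

With D = 2 and equal phase volumes, each stage leaves one third of the incoming metal in the aqueous phase, so three stages leave (1/3)³ of the feed; a real counter-current flowsheet with concentration-dependent D requires the kind of simultaneous stage equations solved in the paper.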

Keywords: distribution coefficient, mass transfer, COSMO-RS, flowsheet simulation, phosphonium

Procedia PDF Downloads 160
1307 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings

Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier

Abstract:

Anvil staking is a cold-forming process used in the assembly of plain spherical bearings into a rod-end housing. This process ensures that the bearing outer lip conforms to the chamfer in the matching rod end to produce a lightweight mechanical joint with sufficient strength to meet the push-out load requirement of the assembly. Finite element (FE) analysis is used extensively to predict the behaviour of metal flow in cold-forming processes to support industrial manufacturing and product development. Ongoing research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in material properties, load-dependent friction coefficients, and strain-rate sensitivity. The improved confidence in these models aims to eliminate the costly and time-consuming experimental trials in the introduction of new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold-forming operations; however, the understanding of this phenomenon varies significantly, and it is rarely implemented in FE models. In this paper, a new approach to evaluating the normal contact pressure versus friction coefficient relationship is outlined, using friction calibration charts generated via iterative FE models and ring compression tests. When compared to previous research, this new approach greatly improves the prediction of the forming geometry and the forming load during the staking operation. This paper also aims to standardise the FE approach to modelling ring compression tests and determining the friction calibration charts.

Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests

Procedia PDF Downloads 185
1306 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine

Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta

Abstract:

Stricter legislation and greater public demand regarding gas emissions and their effects on the environment and on human health have led the automotive industry to reinforce research focused on reducing contamination levels. This reduction can be achieved through improvements to internal combustion engines that lower both specific fuel consumption and air pollutant emissions. These improvements can be obtained through numerical simulation, a technique that works together with experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine to analyze the impact of varying the swirl and tumble coefficients on the performance and air pollutant emissions of an engine. Initially, the discharge coefficient is calculated with the Converge CFD 3D software, given that it is an input parameter in GT-Power. Mesh sensitivity tests are performed on the 3D geometry built for this purpose, using the mass flow rate at the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and data such as the mass trapped in the cylinder, the heat release rate, and the accumulated released energy are calculated, so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced in their corresponding objects so that their influence can be observed in comparison with the previously obtained results.

Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient

Procedia PDF Downloads 80
1305 Examination of Porcine Gastric Biomechanics in the Antrum Region

Authors: Sif J. Friis, Mette Poulsen, Torben Strom Hansen, Peter Herskind, Jens V. Nygaard

Abstract:

Gastric biomechanics governs a large range of scientific and engineering fields, from gastric health issues to interaction mechanisms between external devices and the tissue. Determination of mechanical properties of the stomach is, thus, crucial, both for understanding gastric pathologies as well as for the development of medical concepts and device designs. Although the field of gastric biomechanics is emerging, advances within medical devices interacting with the gastric tissue could greatly benefit from an increased understanding of tissue anisotropy and heterogeneity. Thus, in this study, uniaxial tensile tests of gastric tissue were executed in order to study biomechanical properties within the same individual as well as across individuals. With biomechanical tests in the strain domain, tissue from the antrum region of six porcine stomachs was tested using eight samples from each stomach (n = 48). The samples were cut so that they followed dominant fiber orientations. Accordingly, from each stomach, four samples were longitudinally oriented, and four samples were circumferentially oriented. A step-wise stress relaxation test with five incremental steps up to 25 % strain with 200 s rest periods for each step was performed, followed by a 25 % strain ramp test with three different strain rates. Theoretical analysis of the data provided stress-strain/time curves as well as 20 material parameters (e.g., stiffness coefficients, dissipative energy densities, and relaxation time coefficients) used for statistical comparisons between samples from the same stomach as well as in between stomachs. Results showed that, for the 20 material parameters, heterogeneity across individuals, when extracting samples from the same area, was in the same order of variation as the samples within the same stomach. 
For samples from the same stomach, the mean deviation percentage for all 20 parameters was 21 % for longitudinal and 18 % for circumferential orientations, compared to 25 % and 19 %, respectively, for samples across individuals. This observation was also supported by a nonparametric one-way ANOVA analysis, whose results showed that the 20 material parameters from each of the six stomachs came from the same distribution (P > 0.05). Direction-dependency was also examined, and it was found that the maximum stress for longitudinal samples was significantly higher than for circumferential samples. However, there were no significant differences in the 20 material parameters, with the exception of the equilibrium stiffness coefficient (P = 0.0039) and two other stiffness coefficients found from the relaxation tests (P = 0.0065, 0.0374). Nor did the stomach tissue show any significant differences between the three strain rates used in the ramp test. Heterogeneity within the same region has not been examined before, yet the importance of the sampling area has been demonstrated in this study. All material parameters found are essential for understanding the passive mechanics of the stomach and may be used for mathematical and computational modeling. Additionally, an extension of the protocol used may be relevant for compiling a comparative study between the human stomach and the pig stomach.
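The grouping comparison behind such an analysis can be illustrated with the classical one-way ANOVA F statistic; the abstract's nonparametric variant (e.g. Kruskal-Wallis) replaces the raw values by ranks but follows the same between-group versus within-group logic. The data in the test are illustrative, not the study's measurements:

```python
def one_way_anova_f(groups):
    """F statistic for k independent groups:
    (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

An F value near 1 indicates the group means differ no more than expected from within-group scatter, consistent with the "same distribution" finding above.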

Keywords: antrum region, gastric biomechanics, loading-unloading, stress relaxation, uniaxial tensile testing

Procedia PDF Downloads 400
1304 Recurrence of Pterygium after Surgery and the Effect of Surgical Technique on the Recurrence of Pterygium in Patients with Pterygium

Authors: Luksanaporn Krungkraipetch

Abstract:

A pterygium is an ocular surface lesion that begins in the limbal conjunctiva and progresses to the cornea. The lesion is more common at the nasal limbus than the temporal and has a distinctive wing-like appearance. Indications for surgery, in decreasing order of significance, are growth over the corneal center, decreased vision due to corneal deformation, documented growth, sensations of discomfort, and aesthetic concerns. Recurrent pterygium results in lost time, the expense of therapy, and potential vision impairment. The objective of this study is to determine how often pterygium recurs after surgery, what effect the surgical technique has, and what factors drive recurrence. Materials and Methods: A retrospective observational case-control study involving 164 patient samples. Data analysis comprised descriptive statistics, i.e., basic details of pterygium surgery and the risk of recurrent pterygium; for factor analysis, the inferential statistics odds ratio (OR) with 95% confidence interval (CI) and ANOVA were used. A p-value below 0.05 was deemed statistically significant. Results: The majority of patients were female (60.4%). Twenty-four of the 164 (14.6%) patients who underwent surgery exhibited recurrent pterygium. The average age was 55.33 years. Postoperative recurrence was reported in 19 cases (79.2%) after the bare sclera technique and five cases (20.8%) after the conjunctival autograft technique. The mean recurrence interval was 10.25 months, most commonly (54.17 percent) 12 months. In 91.67 percent of cases, all follow-ups were completed. The most common recurrence grade was 1 (25%). The main surgical complication was subconjunctival hemorrhage (33.33 percent). Comparison of the surgical techniques among patients with recurrent pterygium showed no significant difference (F = 1.13, p = 0.339). 
Age significantly affected the recurrence of pterygium (OR = 20.78; 95% CI, 6.79-63.56; P < 0.001). Conclusion: This study found a 14.6% rate of pterygium recurrence after pterygium surgery. Across all surgeries and patients, the rate of recurrence was four times higher with the bare sclera method than with conjunctival autograft. The researchers advise selecting a more conventional surgical technique to avoid recurrence.
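An odds ratio with its 95% confidence interval, as reported above, is conventionally computed from a 2x2 table via the standard error of the log odds; a minimal sketch with illustrative counts, not the study's actual table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    Returns (OR, (CI lower, CI upper)) using the log-OR standard error."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)
```

Because the interval is built on the log scale and exponentiated, it is asymmetric about the OR, as in the interval quoted above.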

Keywords: pterygium, recurrence pterygium, pterygium surgery, excision pterygium

Procedia PDF Downloads 69
1303 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure

Authors: Esra Zengin, Sinan Akkar

Abstract:

Reliable and accurate prediction of nonlinear structural response requires specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectra is a commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, a Uniform Hazard Spectrum (UHS), a Conditional Mean Spectrum (CMS), or a Conditional Spectrum (CS). Different sets of criteria exist among the developed methodologies for selecting and scaling ground motions with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between the scaled median and the target spectra, where the dispersion of the earthquake shaking is preserved along the period interval. The impact of the spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimations, results are compared with those obtained by the CMS-based scaling methodology. 
The variability in fragility curves due to the consideration of dispersion in ground motion selection process is also examined.
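The scaling step, minimizing the misfit between a scaled record spectrum and the target over the period interval, is often posed in log-spectral space, where the optimal factor reduces to a geometric-mean ratio. The sketch below assumes that common formulation; the paper's exact objective may differ:

```python
import math

def optimal_scale_factor(record_sa, target_sa):
    """Scale factor minimizing the sum of squared log-spectral differences
    over the period interval. Minimizing sum((log s + log r_i - log t_i)^2)
    gives log s = mean(log(t_i / r_i)), the geometric-mean ratio."""
    logs = [math.log(t / r) for r, t in zip(record_sa, target_sa)]
    return math.exp(sum(logs) / len(logs))
```

Working in log space weights all periods multiplicatively, which matches how response-spectrum dispersion is usually defined (lognormal spectral ordinates).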

Keywords: ground motion selection, scaling, uncertainty, fragility curve

Procedia PDF Downloads 564
1302 Kinetics of Sugar Losses in Hot Water Blanching of Water Yam (Dioscorea alata)

Authors: Ayobami Solomon Popoola

Abstract:

Yam is mainly a carbohydrate food grown in most parts of the world. It can be boiled, fried, or roasted for consumption in a variety of ways. Blanching is an established heat pre-treatment given to fruits and vegetables prior to further processing such as dehydration, canning, freezing, etc. The loss of soluble solids during blanching has been a great problem because a considerable quantity of the water-soluble nutrients is inevitably leached into the blanching water. Without blanching, however, the high residual levels of reducing sugars after extended storage produce a dark, bitter-tasting product because of Maillard reactions of reducing sugars at frying temperature. Measurement and prediction of such losses are necessary for economic efficiency in production and to establish the level of effluent treatment of the blanching water. This paper addresses this problem by investigating the effects of cube size and temperature on the rate of diffusional losses of reducing sugars and total sugars during hot water blanching of water-yam. The study was carried out at four temperature levels (65, 70, 80 and 90 °C) and two cube sizes (0.02 m³ and 0.03 m³) at four time intervals (5, 10, 15 and 20 min). The obtained data were fitted to Fick's non-steady-state equation, from which diffusion coefficients (Da) were obtained. The Da values were subsequently fitted to an Arrhenius plot to obtain activation energies (Ea values) for diffusional losses. The diffusion coefficients were independent of cube size and time but highly temperature-dependent. The diffusion coefficients were ≥ 1.0 ×10⁻⁹ m²s⁻¹ for reducing sugars and ≥ 5.0 × 10⁻⁹ m²s⁻¹ for total sugars. The Ea values ranged from 68.2 to 73.9 kJ mol⁻¹ for reducing sugar losses and from 7.2 to 14.30 kJ mol⁻¹ for total sugar losses. Predictive equations for estimating the amounts of reducing sugars and total sugars with blanching time of water-yam at various temperatures are also presented. 
The equation could be valuable in process design and optimization. However, amount of other soluble solids that might have leached into the water along with reducing and total sugars during blanching was not investigated in the study.
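The Arrhenius step described above, extracting Ea from the slope of ln Da versus 1/T, can be sketched as a simple linear regression; the diffusion coefficients in the test are synthetic, not the paper's data:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius_fit(temps_k, d_values):
    """Least-squares line through ln D vs 1/T.
    D = D0 * exp(-Ea/(R*T))  =>  ln D = ln D0 - (Ea/R)*(1/T),
    so the slope is -Ea/R and the intercept is ln D0."""
    x = [1.0 / t for t in temps_k]
    y = [math.log(d) for d in d_values]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
             / sum((xi - xm) ** 2 for xi in x))
    ea = -slope * R                 # activation energy, J/mol
    d0 = math.exp(ym - slope * xm)  # pre-exponential factor
    return ea, d0
```

With Ea and D0 in hand, the same expression predicts Da at any blanching temperature in the fitted range.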

Keywords: blanching, kinetics, sugar losses, water yam

Procedia PDF Downloads 137
1301 When Conducting an Analysis of Workplace Incidents, It Is Imperative to Meticulously Calculate Both the Frequency and Severity of Injuries Sustained

Authors: Arash Yousefi

Abstract:

Experts suggest that relying exclusively on parameters to convey a situation or establish a condition may not be adequate. Assessing and appraising incidents in a system based on accident parameters, such as accident frequency, lost workdays, or fatalities, may not always be precise and is occasionally erroneous. The frequency rate of accidents is a metric that relates the number of accidents causing lost work time due to injuries to the total working hours of personnel over a year. Traditionally, this has been calculated per one million working hours, but the American Occupational Safety and Health Administration (OSHA) has updated its standards, and a base of 200,000 working hours is now used to compute the frequency rate of accidents. It is crucial that the total working hours of employees be represented on the same basis when calculating individual event and incident numbers. The accident severity rate is a metric used to determine the amount of time lost during a given period, often a year, relative to the total number of working hours. It measures the share of work hours lost compared to the total useful working hours, which provides valuable insight into the number of days lost due to work-related incidents per working hour. Calculating the severity of an incident can be difficult if a worker suffers permanent disability or death; to determine lost days in such cases, the coefficients specified in the "tables of days equivalent to OSHA or ANSI standards" for disabling injuries are used. The accident frequency coefficient denotes the rate at which accidents occur, while the accident severity coefficient quantifies the extent of damage and injury caused by these accidents. These coefficients are crucial for accurately assessing the magnitude and impact of accidents.
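The two rates follow directly from the definitions above; a sketch using the 200,000-hour base (the incident counts and hours in the test are illustrative):

```python
OSHA_BASE_HOURS = 200_000  # roughly 100 full-time workers x 40 h/week x 50 weeks

def frequency_rate(lost_time_incidents, total_hours_worked, base=OSHA_BASE_HOURS):
    """Number of incidents causing lost work time, normalized per `base` hours."""
    return lost_time_incidents * base / total_hours_worked

def severity_rate(days_lost, total_hours_worked, base=OSHA_BASE_HOURS):
    """Days lost (including table-equivalent days for disabling injuries),
    normalized per `base` hours."""
    return days_lost * base / total_hours_worked
```

Setting `base=1_000_000` reproduces the traditional per-million-hour convention mentioned above, so the two conventions differ only by a constant factor of five.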

Keywords: incidents, safety, analysis, frequency, severity, injuries, determine

Procedia PDF Downloads 60
1300 Effectiveness of High-Intensity Interval Training in Overweight Individuals between 25-45 Years of Age Registered in Sports Medicine Clinic, General Hospital Kalutara

Authors: Dimuthu Manage

Abstract:

Introduction: The prevalence of obesity and obesity-related non-communicable diseases is becoming a massive health concern worldwide. Physical activity is recognized as an effective solution to this problem, but published data on the effectiveness of High-Intensity Interval Training (HIIT) in improving health parameters in overweight and obese individuals in Sri Lanka are sparse; hence, this study was conducted. Methodology: This quasi-experimental study was conducted at the Sports Medicine Clinic, General Hospital, Kalutara. Participants engaged in a HIIT programme three times per week for six weeks. Data collection was based on precise measurements using structured and validated methods. Ethical clearance was obtained. Results: Forty-eight participants registered for the study, and only 52% completed it. The mean age was 32 (SD=6.397) years, and 64% were male. All the anthropometric measurements assessed (i.e., waist circumference (P<0.001), weight (P<0.001) and BMI (P<0.001)), body fat percentage (P<0.001), VO2 max (P<0.001), and lipid profile (i.e., HDL (P=0.016), LDL (P<0.001), cholesterol (P<0.001), triglycerides (P<0.010) and LDL:HDL ratio (P<0.001)) showed statistically significant improvement after the intervention with the HIIT programme. Conclusions: This study confirms HIIT as a time-saving and effective exercise method that helps prevent obesity as well as non-communicable diseases; it markedly improves body anthropometry, fat percentage, cardiopulmonary status, and lipid profile in overweight and obese individuals. As with the majority of studies, the design of the current study is subject to some limitations. First, it was a single-group pre-post study; a comparative design against other training programmes would have given more validity.
Second, although validated tools were used and the same tools were applied on pre- and post-exercise occasions with the available facilities, it would have been better to measure some variables using gold-standard methods. This evidence should therefore be further assessed in larger-scale trials with comparative groups to establish the generalizability of the HIIT programme's efficacy.

Keywords: HIIT, lipid profile, BMI, VO2 max

Procedia PDF Downloads 38
1299 Adhesion Enhancement of Boron Carbide Coatings on Aluminum Substrates Utilizing an Intermediate Adhesive Layer

Authors: Sharon Waichman, Shahaf Froim, Ido Zukerman, Shmuel Barzilai, Shmual Hayun, Avi Raveh

Abstract:

Boron carbide is a ceramic material with superior properties such as high chemical and thermal stability, high hardness and high wear resistance. Moreover, it has a large cross-section for neutron absorption and can therefore be employed in nuclear applications. However, efficient attachment of boron carbide to a metal such as aluminum can be very challenging, mainly because of the formation of aluminum-carbon bonds that are unstable in humid environments, the affinity of oxygen to the metal, and the different thermal expansion coefficients of the two materials, which may cause internal stresses and subsequent failure of the bond. Here, we aimed to achieve a strong and durable attachment between the boron carbide coating and the aluminum substrate. For this purpose, we applied Ti as a thin intermediate layer that provides a gradual change in the thermal expansion coefficients of the configured layers. This layer is continuous and therefore prevents the formation of aluminum-carbon bonds. Boron carbide coatings with a thickness of 1-5 µm were deposited on the aluminum substrate by pulsed-DC magnetron sputtering. Prior to the deposition of the boron carbide layer, the surface was pretreated by energetic ion plasma, followed by deposition of the Ti intermediate adhesive layer in a continuous process. The properties of the Ti intermediate layer were adjusted by the bias applied to the substrate. The boron carbide/aluminum bond was evaluated by various methods and complementary techniques, such as SEM/EDS, XRD, XPS, FTIR spectroscopy and Glow Discharge Spectroscopy (GDS), in order to explore the structure, composition and properties of the layers and to study the adherence mechanism of the boron carbide/aluminum contact. Based on the interfacial bond characteristics, we propose a desirable solution for improved adhesion of boron carbide to aluminum using a highly efficient intermediate adhesive layer.
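The internal stress from the expansion mismatch mentioned above can be estimated with the standard biaxial film-stress relation σ ≈ E·Δα·ΔT/(1−ν). The sketch below uses typical handbook values for the material constants, not figures from the paper:

```python
# Rough magnitude of the thermal-mismatch stress in a thin coating on a
# substrate, sigma = E * (alpha_sub - alpha_film) * dT / (1 - nu).
# Material constants are typical literature values (assumptions).

def mismatch_stress(E_gpa, nu, alpha_film, alpha_sub, dT):
    """Biaxial thermal-mismatch stress (GPa) after a temperature change dT (K)."""
    return E_gpa * (alpha_sub - alpha_film) * dT / (1 - nu)

ALPHA_B4C = 5.6e-6   # boron carbide CTE, 1/K
ALPHA_AL  = 23.0e-6  # aluminum CTE, 1/K
E_B4C     = 450.0    # boron carbide Young's modulus, GPa
NU_B4C    = 0.18     # boron carbide Poisson's ratio

# Cooling by ~300 K from deposition temperature to room temperature.
sigma = mismatch_stress(E_B4C, NU_B4C, ALPHA_B4C, ALPHA_AL, 300.0)
print(round(sigma, 2))  # GPa
```

Stresses on the order of gigapascals comfortably exceed typical coating adhesion strengths, which is why a graded intermediate layer such as Ti is attractive here.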

Keywords: adhesion, boron carbide coatings, ceramic/metal bond, intermediate layer, pulsed-DC magnetron sputtering

Procedia PDF Downloads 142