Search results for: lower hopper knuckle
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5752

4852 The Contribution of Hip Strategy in Dynamic Postural Control in Recurrent Ankle Sprain

Authors: Radwa El Shorbagy, Alaa El Din Balbaa, Khaled Ayad, Waleed Reda

Abstract:

Introduction: Ankle sprain is a common lower limb injury that is complicated by a high recurrence rate. The cause of recurrence is not clear; however, changes in motor control have been postulated. Objective: To determine the contribution of proximal hip strategy to dynamic postural control in patients with recurrent ankle sprain. Methods: Fifteen subjects with recurrent ankle sprain (group A) and fifteen healthy control subjects (group B) participated in this study. Control of the abductor-adductor as well as flexor-extensor hip musculature was abolished by fatigue using the Biodex Isokinetic System. Dynamic postural control was measured before and after fatigue by the Biodex Balance System. Results: Repeated measures MANOVA was used to compare between- and within-group differences. In group A, fatiguing of the hip muscles (flexors-extensors and abductors-adductors) significantly increased the overall stability index (OASI), anteroposterior stability index (APSI) and mediolateral stability index (MLSI) (p = 0.00), whereas in group B fatiguing of the hip flexors-extensors significantly increased OASI and APSI only (p = 0.017, 0.010, respectively), while fatiguing of the hip abductors-adductors had no significant effect on these variables. Moreover, patients with ankle sprain had significantly lower dynamic balance after hip muscle fatigue compared to the control group. Specifically, after hip flexor-extensor fatigue, the OASI, APSI and MLSI were significantly higher than the control values (p = 0.002, 0.011, and 0.003, respectively), whereas fatiguing of the hip abductors-adductors significantly increased OASI and APSI only (p = 0.012, 0.026, respectively). Conclusion: To maintain dynamic balance, patients with recurrent ankle sprain seem to rely more on the hip strategy; that is, they depend on a top-down instead of a bottom-up strategy. Clinical relevance: Patients with recurrent ankle sprain are less efficient at maintaining dynamic postural control due to changes in motor strategies, indicating that health care providers and rehabilitation specialists should treat CAI as a global/central condition and not just as a simple local or peripheral injury.
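
For readers who want to reproduce this type of analysis, the sketch below shows a simplified between-group MANOVA on the three post-fatigue stability indices. It is not the study's code (the study used a repeated-measures design), and the column names, group labels and randomly generated values are placeholders standing in for the measured data.

```python
# Minimal sketch (not the study's repeated-measures analysis): a between-group
# MANOVA on the three Biodex stability indices using statsmodels.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["sprain"] * 15 + ["control"] * 15,   # hypothetical labels
    "OASI": rng.normal(3.0, 0.5, 30),   # overall stability index (placeholder)
    "APSI": rng.normal(2.0, 0.4, 30),   # anteroposterior index (placeholder)
    "MLSI": rng.normal(1.5, 0.3, 30),   # mediolateral index (placeholder)
})

# Wilks' lambda, Pillai's trace, etc. for the group effect
print(MANOVA.from_formula("OASI + APSI + MLSI ~ group", data=df).mv_test())
```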

Keywords: ankle sprain, fatigue hip muscles, dynamic balance

Procedia PDF Downloads 300
4851 Modelling Retirement Outcomes: An Australian Case Study

Authors: Colin O’Hare, Zili Zho, Thomas Sneddon

Abstract:

The Australian superannuation system has received high praise for its participation rates and level of funding in retirement, yet it is only 25 years old. In recent years, with increasing longevity and persistently lower rates of investment return, how adequate will the funds accumulated through a superannuation system be? In this paper we take Australia as a case study, build a stochastic model of the accumulation and decumulation of funds, and determine the expected number of years a fund may last an individual in retirement.
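
The sketch below illustrates the kind of accumulation-decumulation simulation described: a balance grows with contributions and random returns during working life and is then drawn down until exhausted. All parameters (salary, contribution rate, return distribution, drawdown) are illustrative assumptions, not the paper's calibrated model.

```python
# Minimal Monte Carlo sketch of stochastic accumulation/decumulation.
import numpy as np

rng = np.random.default_rng(42)
n_paths, work_years, salary, contrib = 10_000, 40, 80_000, 0.095
drawdown = 45_000          # assumed annual spending in retirement
mu, sigma = 0.06, 0.12     # assumed annual investment return distribution

years_lasted = np.zeros(n_paths)
for i in range(n_paths):
    bal = 0.0
    for _ in range(work_years):                 # accumulation phase
        bal = bal * (1 + rng.normal(mu, sigma)) + salary * contrib
    t = 0
    while bal > 0 and t < 60:                   # decumulation phase
        bal = bal * (1 + rng.normal(mu, sigma)) - drawdown
        t += 1
    years_lasted[i] = t

print(f"expected years the fund lasts in retirement: {years_lasted.mean():.1f}")
```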

Keywords: component, mortality, stochastic models, superannuation

Procedia PDF Downloads 245
4850 [Keynote Talk]: Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method

Authors: Lina Wu, Jia Liu, Ye Li

Abstract:

The goal of this project is to investigate constant properties (the Liouville-type problem) for a p-stable map as a local or global minimum of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The First and Second Variation Formulas for a p-energy functional have been applied in the calculus of variations as computational techniques. Stokes' Theorem, the Cauchy-Schwarz Inequality, Hardy-Sobolev type Inequalities, and the Bochner Formula have been used as estimation techniques to estimate the lower bound and the upper bound of the derived p-Harmonic Stability Inequality. One challenging point in this project is to construct a family of variation maps such that the images of the variation maps are guaranteed to lie in a closed half-ellipsoid. The other challenging point is to find a contradiction between the lower bound and the upper bound in the analysis of the p-Harmonic Stability Inequality when a p-energy minimizing map is not constant. Therefore, the possibility of a non-constant p-energy minimizing map has been ruled out and the constant property for a p-energy minimizing map has been obtained. Our research finding is the constant property for a p-stable map from a Euclidean space into a closed half-ellipsoid in a certain range of p. This range of p is determined by the dimension values of the Euclidean space (the domain) and the ellipsoid (the target space). The range of p is also bounded by the curvature values on the ellipsoid (that is, the ratio of the longest axis to the shortest axis). Regarding Liouville-type results for a p-stable map, our research finding on an ellipsoid is a generalization of mathematicians' results on a sphere. Our result is also an extension of mathematicians' Liouville-type results from a special ellipsoid with only one parameter to any ellipsoid with (n+1) parameters in the general setting.
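
For orientation, the p-energy functional and the Euler-Lagrange (p-harmonic map) equation obtained from its first variation can be written in the standard form below. These are textbook definitions, not necessarily the exact normalisation or target geometry used by the authors.

```latex
% Standard definitions (not quoted from the paper): the p-energy of a map
% u : \mathbb{R}^n \to N into a target manifold N, and the p-harmonic map
% equation from its first variation.
\[
  E_p(u) \;=\; \frac{1}{p}\int_{\mathbb{R}^n} |\nabla u|^{p}\, dx , \qquad p \ge 2,
\]
\[
  \frac{d}{dt}\Big|_{t=0} E_p(u_t) = 0
  \;\Longrightarrow\;
  \operatorname{div}\!\left(|\nabla u|^{p-2}\,\nabla u\right)
  + |\nabla u|^{p-2}\, A(u)(\nabla u,\nabla u) = 0,
\]
% where A is the second fundamental form of N in its ambient space; u is
% p-stable when the second variation is nonnegative for all admissible
% variations u_t with images constrained to lie in N.
```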

Keywords: Bochner formula, calculus of variations, Stokes' Theorem, Cauchy-Schwarz Inequality, first and second variation formulas, Liouville-type problem, p-harmonic map

Procedia PDF Downloads 274
4849 Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength

Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph

Abstract:

Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft and binder yarns or tows woven together into a preform. The mechanical performance of a composite material is largely controlled by the properties of the preform. As a result, the bulk of recent textile research has focused on the design of high-strength preform architectures, while studies looking at optimisation of the weaving process have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters involved in causing this yarn damage are not fully understood. Recent studies indicate that poor yarn tension control may be an influencing factor: as tension is increased, the yarn-to-yarn and yarn-to-weaving-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and weaving damage severity has never been adequately researched or quantified. A novel study is therefore needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study quantified the variation of yarn tension throughout weaving and sought to link the impact of tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving using a hand-held tension meter. Sections of the same yarns were subsequently cut from the loom and tensile tested. A comparison was made between the tensile strength of pristine and tensioned yarns to determine the induced weaving damage. Yarns from bobbins at the rear of the creel were under the least tension (0.5-2.0 N) compared to yarns positioned at the front of the creel (1.5-3.5 N). This increase in tension has been linked to the sharp turn in the yarn path between the bobbins at the front of the creel and the creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the more highly tensioned yarns. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a lower tension (3.0-4.5 N) than the lower shed yarns (4.0-5.5 N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the more highly tensioned yarns. Interestingly, the most severely damaged yarn was exposed to both the largest creel and shedding tensions. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to a variance of mechanical properties across the woven preform and ultimately the final composite part. The outcome of this study highlights the need for optimised yarn tension control during preform manufacture to minimise yarn-induced weaving damage.
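
The damage metric used above is simply the percentage drop in breaking load relative to the pristine yarn. The sketch below spells out that arithmetic; the breaking loads are hypothetical values chosen only so the losses fall in the reported 3-14% range.

```python
# Minimal sketch (illustrative only): percentage tensile-strength loss of a
# woven (tensioned) yarn relative to the pristine yarn.
def strength_loss(pristine_n, woven_n):
    """Return the percentage loss in tensile strength after weaving."""
    return 100.0 * (pristine_n - woven_n) / pristine_n

# Hypothetical breaking loads in newtons, not the measured data.
print(f"rear-creel yarn:  {strength_loss(100.0, 97.0):.0f}% loss")   # ~3%
print(f"lower-shed yarn:  {strength_loss(100.0, 86.0):.0f}% loss")   # ~14%
```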

Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage

Procedia PDF Downloads 236
4848 One Pot Synthesis of Ultrasmall NiMo Catalysts Supported on Amorphous Alumina with Enhanced type 2 Sites for Hydrodesulfurization Reaction: A Combined Experimental and Theoretical Study

Authors: Shalini Arora, Sri Sivakumar

Abstract:

The deep removal of high molecular weight sulphur compounds (e.g., 4,6-dimethyldibenzothiophene) is challenging due to their steric hindrance. The hydrogenation desulfurization (HYD) pathway is the main pathway to remove these sulphur compounds, and it is mainly governed by the number of type 2 sites. The formation of type 2 sites can be enhanced by modulating the pore structure and the interaction between the active metal and the support. To this end, we report the enhanced HDS catalytic activity of ultrasmall NiMo supported on amorphous alumina (A-Al₂O₃) catalysts prepared by a one-pot colloidal synthesis method followed by calcination and sulfidation. Amorphous alumina (A-Al₂O₃) was chosen as the support due to its lower surface energy, better physicochemical properties, and enhanced acidic sites (due to the dominance of tetra- and penta-coordinated [Al] sites) compared to the crystalline alumina phase. At 20% metal oxide composition, the NiMo supported on A-Al₂O₃ catalyst showed 1.4 and 1.2 times higher reaction rate constant and turnover frequency (TOF), respectively, than the conventional catalyst (wet-impregnated NiMo catalyst) for the HDS reaction of the dibenzothiophene reactant molecule. The A-Al₂O₃-supported catalysts showed enhanced type 2 site formation (this catalyst possesses a higher sulfidation degree (80%) and more NiMoS sites (19.3 x 10¹⁷ sites/mg) with the desired optimum stacking degree (2.5) than the wet-impregnated catalyst at the same metal oxide composition of 20%), along with higher active metal dispersion and Mo edge site fraction. The experimental observations were also supported by DFT simulations. Lower heat of adsorption values (< 4.2 eV for the MoS₂ interaction and < 3.15 eV for the Ni-doped MoS₂ interaction) for A-Al₂O₃ confirmed the presence of a weaker metal-support interaction in A-Al₂O₃ in contrast to crystalline γ-Al₂O₃. The weak metal-support interaction for the prepared catalysts clearly suggests the higher formation of type 2 sites, which leads to higher catalytic activity for the HDS reaction.

Keywords: amorphous alumina, colloidal, desulfurization, metal-support interaction

Procedia PDF Downloads 267
4847 Exploring Social and Economic Barriers in Adoption and Expansion of Agricultural Technologies in Woliatta Zone, Southern Ethiopia

Authors: Akalework Mengesha

Abstract:

The adoption of improved agricultural technologies has been connected with higher earnings and lower poverty, enhanced nutritional status, lower staple food prices, and increased employment opportunities for landless laborers. The adoption and extension of these technologies are vastly crucial in that they enable countries to achieve the Millennium Development Goals (MDGs) of reducing extreme poverty and hunger. Efforts have been directed toward the development and provision of modern crop varieties in sub-Saharan Africa over the past 30 years. Nevertheless, by and large, adoption and expansion rates for improved technologies have lagged behind other regions. This research aims to assess social and economic barriers in the adoption and expansion of agricultural technologies by local communities living around a private agricultural farm in Woliatta Zone, Southern Ethiopia. The study was carried out among rural households located in the three localities selected for the study in the Woliatta Zone. A cross-sectional mixed-methods design was used to address the study objective. The qualitative component employed in-depth interviews, key-informant interviews, and focus group discussions, involving a total of 42 in-depth informants, 17 key-informant interviews, and 2 focus group discussions comprising 10 individuals in each group, selected through purposive sampling techniques. The survey method was mainly used to examine the impact of attitudinal, demographic, and socioeconomic variables on farmers' adoption of agricultural technologies for the quantitative data. The findings of the study revealed that the Amibara commercial farm has not made a resolute and well-organized effort to extend agricultural technology to the surrounding local community. A comprehensive agricultural technology transfer scheme has not been put in place by the commercial farm ever since it commenced operating in the study area. Besides, there is an ongoing conflict of interest between the farm and the community, which has kept widening over time and risks becoming irreversible.

Keywords: adoption, technology transfer, agriculture, barriers

Procedia PDF Downloads 153
4846 Comparing Trastuzumab-Related Cardiotoxicity between Elderly and Younger Patients with Breast Cancer: A Prospective Cohort Study

Authors: Afrah Aladwani, Alexander Mullen, Mohammad AlRashidi, Omamah Alfarisi, Faisal Alterkit, Abdulwahab Aladwani, Asit Kumar, Emad Eldosouky

Abstract:

Introduction: Trastuzumab is a HER-2-targeted humanized monoclonal antibody that significantly improves the therapeutic outcomes of metastatic and non-metastatic breast cancer. However, it is associated with an increased risk of cardiotoxicity that ranges from a mild decline in the cardiac ejection fraction to permanent cardiomyopathy. Concerns have been raised about treating eligible older patients. This study compares trastuzumab outcomes between two age cohorts in the Kuwait Cancer Control Centre (KCCC). Methods: In a prospective comparative observational study, 93 HER-2-positive breast cancer patients undergoing different chemotherapy protocols plus trastuzumab were included and divided into two cohorts based on their age (<60 and ≥60 years old). The baseline left ventricular ejection fraction (LVEF) was assessed and monitored every three months during trastuzumab treatment. A cardiotoxicity event was defined as a ≥10% decline in the LVEF from the baseline. The lower accepted normal limit of the LVEF was 50%. Results: The median baseline LVEF was 65% in both age cohorts (IQR 8% and 9% for older and younger patients, respectively), whereas the median LVEF post-trastuzumab treatment was 51% and 55% in older and younger patients, respectively (IQR 8%; p-value = 0.22), despite the fact that older patients had significantly lower exposure to anthracyclines compared to younger patients (60% and 84.1%, respectively; p-value <0.001). 86.7% and 55.6% of older and younger patients, respectively, developed a ≥10% decline in their LVEF from the baseline. Among those, only 29% of older and 27% of younger patients reached an LVEF value below 50% (p-value = 0.88). Statistically, age was the only factor that significantly correlated with trastuzumab-induced cardiotoxicity (OR 4; p-value <0.012), but it did not increase the requirement for permanent discontinuation of treatment. A baseline LVEF value below 60% contributed to developing a post-treatment value below the normal range (50%). Conclusion: Breast cancer patients aged 60 years and above in Kuwait were at 4-fold higher risk of developing a ≥10% decline in their LVEF from the baseline than younger patients during trastuzumab treatment. Surprisingly, previous exposure to anthracyclines and multiple comorbidities were not associated with a significantly increased risk of cardiotoxicity.
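
The sketch below shows the event definition and the kind of age-group comparison described above: flag a ≥10-percentage-point LVEF fall as an event, then compare groups with a 2x2 table. The group sizes and counts are illustrative placeholders (chosen only to match the reported 86.7% and 55.6% rates), not the study's patient data.

```python
# Minimal sketch (not the study's analysis): cardiotoxicity event flag and an
# odds ratio from a 2x2 table using scipy.
import numpy as np
from scipy.stats import fisher_exact

def cardiotoxic(baseline_lvef, post_lvef, threshold=10.0):
    """True if LVEF fell by >= `threshold` percentage points from baseline."""
    return (baseline_lvef - post_lvef) >= threshold

# Illustrative 2x2 table: rows = age >=60 / <60, columns = event / no event.
table = np.array([[26, 4],     # e.g. 26/30 = 86.7% of older patients
                  [35, 28]])   # e.g. 35/63 = 55.6% of younger patients
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio ~ {odds_ratio:.1f}, p = {p_value:.3f}")
```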

Keywords: breast cancer, elderly, Trastuzumab, cardiotoxicity

Procedia PDF Downloads 205
4845 Diagonal Vector Autoregressive Models and Their Properties

Authors: Usoro Anthony E., Udoh Emediong

Abstract:

Diagonal Vector Autoregressive Models are special classes of the general vector autoregressive models, identified under certain conditions, in which the parameters are restricted to the diagonal elements of the coefficient matrices. The variance, autocovariance, and autocorrelation properties of the upper and lower diagonal VAR models are derived. The new set of VAR models is verified with empirical data and is found to perform favourably compared with the general VAR models. The advantage of the diagonal models over the existing models is that the new models are parsimonious, given the reduction in the interactive coefficients of the general VAR models.
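
To illustrate the restriction, the sketch below simulates a bivariate VAR(1) whose coefficient matrix keeps only its diagonal, so each series depends only on its own lag, and then computes sample variance and lag-1 autocovariance matrices. The coefficient and covariance values are arbitrary illustrative choices, not the authors' fitted models.

```python
# Minimal sketch: simulate a diagonal VAR(1) and check its second moments.
import numpy as np

rng = np.random.default_rng(1)
A = np.diag([0.6, -0.3])              # diagonal coefficient matrix (assumed)
Sigma = np.array([[1.0, 0.2],         # innovation covariance (assumed)
                  [0.2, 0.5]])
T = 5_000
y = np.zeros((T, 2))
eps = rng.multivariate_normal(np.zeros(2), Sigma, size=T)
for t in range(1, T):
    y[t] = A @ y[t - 1] + eps[t]      # each series uses only its own lag

# Sample variance Gamma(0) and lag-1 autocovariance Gamma(1)
gamma0 = np.cov(y.T)
gamma1 = ((y[1:] - y.mean(0)).T @ (y[:-1] - y.mean(0))) / (T - 1)
print("Gamma(0):\n", gamma0, "\nGamma(1):\n", gamma1)
```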

Keywords: VAR models, diagonal VAR models, variance, autocovariance, autocorrelations

Procedia PDF Downloads 116
4844 A Case Study on the Estimation of Design Discharge for Flood Management in Lower Damodar Region, India

Authors: Susmita Ghosh

Abstract:

The catchment area of the Damodar River, India, experiences seasonal rains due to the south-west monsoon every year, and depending upon the intensity of the storms, floods occur. During the monsoon season, the rainfall in the area is mainly due to active monsoon conditions. The upstream reach of the Damodar river system has five dams that store water for various purposes, viz. irrigation, hydro-power generation, municipal supplies and, last but not least, flood moderation. The downstream reach of the Damodar River, known as the Lower Damodar region, however, suffers severely and frequently from floods due to heavy monsoon rainfall and releases from the upstream reservoirs. Therefore, an effective flood management study is required to understand in depth the nature and extent of the flood, water logging, and erosion-related problems, the affected area, and the damages in the Lower Damodar region by conducting a mathematical model study. The design flood or discharge is needed as input to the respective model for generating several scenarios from the simulation runs. The ultimate aim is to achieve a sustainable flood management scheme from the several alternatives. There are various methods for estimating the flood discharges to be carried through the rivers and their tributaries for quick drainage from inundated areas due to drainage congestion and excess rainfall. In the present study, flood frequency analysis is performed to decide the design flood discharge of the study area. This, on the other hand, has limitations in respect of the availability of a long peak-flood data record for correctly determining the type of probability density function. If sufficient past records are available, the maximum flood on a river with a given frequency can safely be determined. The floods of different frequencies for the Damodar have been calculated using five candidate distributions, i.e., generalized extreme value, extreme value-I, Pearson type III, log-Pearson and normal. Annual peak discharge series are available at the Durgapur barrage for the period 1979 to 2013 (35 years). The available series is subjected to frequency analysis. The primary objective of the flood frequency analysis is to relate the magnitude of extreme events to their frequencies of occurrence through the use of probability distributions. The design floods for return periods of 10, 15 and 25 years at the Durgapur barrage are estimated by the flood frequency method. It is necessary to develop flood hydrographs for the above floods to facilitate the mathematical model studies to find the depth and extent of inundation, etc. The null hypothesis that the distributions fit the data at 95% confidence is checked with a goodness of fit test, i.e., the chi-square test. The goodness of fit test reveals that all five distributions show a good fit to the sample population and are therefore accepted. However, there is considerable variation in the estimation of the frequency flood. It is therefore considered prudent to average out the results of these five distributions for the required frequencies. The inundated area from past data is well matched using this design flood.
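
The sketch below shows the core of such a flood frequency analysis for one of the five candidate distributions (generalized extreme value): fit the annual peak series, read off quantiles at the required return periods, and run a chi-square goodness-of-fit check. The synthetic peak series is a random placeholder for the 1979-2013 Durgapur barrage record.

```python
# Minimal sketch (illustrative): GEV fit, design floods, chi-square GOF test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
annual_peaks = rng.gumbel(loc=4000, scale=1200, size=35)   # m^3/s, placeholder

shape, loc, scale = stats.genextreme.fit(annual_peaks)
for T in (10, 15, 25):
    q = stats.genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T}-yr design flood ~ {q:.0f} m^3/s")

# Goodness of fit: observed vs expected bin counts under the fitted GEV
obs, edges = np.histogram(annual_peaks, bins=6)
cdf = stats.genextreme.cdf(edges, shape, loc=loc, scale=scale)
exp = np.diff(cdf) * len(annual_peaks)
exp *= obs.sum() / exp.sum()                 # rescale so totals match
print(stats.chisquare(obs, exp, ddof=3))     # 3 fitted parameters
```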

Keywords: design discharge, flood frequency, goodness of fit, sustainable flood management

Procedia PDF Downloads 201
4843 Mechanical and Tribological Performances of (Nb: H-D: a-C) Thin Films for Biomedical Applications

Authors: Sara Khamseh, Kambiz Javanruee, Hamid Khorsand

Abstract:

Plenty of metallic materials are used for biomedical applications like hip joints and screws. However, it is reported that metal platforms such as stainless steel show significant deterioration because of wear and friction. The surfaces of metal substrates have been coated with a variety of multicomponent coatings to overcome these problems. Carbon-based multicomponent coatings such as metal-added amorphous carbon and diamond coatings are crucially important because of their remarkable tribological performance and chemical stability. In the current study, H-D-containing Nb: (a-C) multicomponent coatings (H-D: hexagonal diamond, a-C: amorphous carbon) were deposited on A 304 steel substrates using an unbalanced magnetron (UBM) sputtering system. The effects of Nb and H-D content and the ID/IG ratio on the microstructure and the mechanical and tribological characteristics of the (Nb: H-D: a-C) composite coatings were investigated. The results of Raman spectroscopy showed that an a-C phase with a graphite-like structure (GLC, with a high value of sp² carbon bonding) is formed, and its domain size increased with increasing Nb content of the coatings. Moreover, Nb acted as a catalyst for the formation of the H-D phase. The nanoindentation hardness values of the coatings ranged between ~17 and ~35 GPa, and the (Nb: H-D: a-C) composite coatings with more H-D content exhibited higher hardness and plasticity index. It seems that the presence of extra-hard H-D particles directly increased the hardness. The tribological performance of the coatings was evaluated using the pin-on-disc method in a wet environment of SBF (simulated body fluid). The COF value of the (Nb: H-D: a-C) coatings decreased with increasing ID/IG ratio. The lower coefficient of friction is a result of the lamelliform array of graphitic domains. Also, the wear rate of the coatings decreased with increasing H-D content of the coatings. Based on the literature, a-C coatings with high hardness and H³/E² ratio exhibit lower wear rates and better tribological performance. According to the nanoindentation analysis, the hardness and H³/E² ratio of the (Nb: H-D: a-C) multicomponent coatings increased with increasing H-D content, which in turn decreased the wear rate of the coatings. The mechanical and tribological performance of the (Nb: H-D: a-C) composite coatings on A 304 steel substrates paves the way for the development of innovative advanced coatings to ameliorate the performance of A 304 steel for biomedical applications.
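
The H³/E² ratio invoked above is a simple indicator of resistance to plastic deformation computed from the nanoindentation hardness H and elastic modulus E. The sketch below shows that calculation with hypothetical values (the moduli were not reported in the abstract).

```python
# Minimal sketch (illustrative): H^3/E^2 from nanoindentation hardness and
# elastic modulus, both in GPa. Values are assumptions, not the measured data.
def h3_over_e2(hardness_gpa, modulus_gpa):
    """Return H^3/E^2 in GPa."""
    return hardness_gpa**3 / modulus_gpa**2

for name, H, E in [("low H-D content", 17.0, 220.0),
                   ("high H-D content", 35.0, 300.0)]:
    print(f"{name}: H^3/E^2 = {h3_over_e2(H, E):.3f} GPa")
```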

Keywords: COF, mechanical properties, (Nb: H-D: a-C) coatings, wear rate

Procedia PDF Downloads 103
4842 Evaluation of Flow Alteration under Climate Change Scenarios for Disaster Risk Management in Lower Mekong Basin: A Case Study in Prek Thnot River in Cambodia

Authors: Vathanachannbo Veth, Ilan Ich, Sophea Rom Phy, Ty Sok, Layheang Song, Sophal Try, Chantha Oeurng

Abstract:

Climate change is one of the major global challenges, inducing disaster risks and threatening livelihoods and communities through adverse impacts on food and water security, ecosystems, and services. The Prek Thnot River Basin of Cambodia is one of the largest tributaries in the Lower Mekong and has been exposed to hazards and disasters, particularly floods, which are said to be an effect of climate change. Therefore, the assessment of precipitation and streamflow changes under the effect of climate change was carried out in this river basin using the Soil and Water Assessment Tool (SWAT) model and different flow indices under a baseline (1997 to 2011) and climate change scenarios (RCP2.6 and RCP8.5 with three General Circulation Models (GCMs): GFDL, GISS, and IPSL) over two time horizons: the near future (2030s: 2021 to 2040) and the medium future (2060s: 2051 to 2070). Both intensity and frequency indices, compared with the historical extreme rainfall indices, change significantly in GFDL under RCP8.5 for both the 2030s and 2060s. The average rate of change of Rx1day, Rx10day, SDII, and R20mm in the 2030s and 2060s under both RCP2.6 and RCP8.5 was found to increase in GFDL and decrease in both GISS and IPSL. The mean percentage change of the flow analyzed in the IHA tool (Group 1) indicated that the flow in the Prek Thnot River increases in GFDL for both RCP2.6 and RCP8.5 in both the 2030s and 2060s, whereas in GISS the flow decreases. Moreover, under IPSL the flow increases in five months (January, February, October, November, and December) and decreases in the other seven months. This study provides water resources managers and policymakers with a wide range of precipitation and water flow projections within the Prek Thnot River Basin in the context of plausible climate change scenarios.
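
The extreme-rainfall indices named above (Rx1day, Rx10day, SDII, R20mm) are computed from a daily precipitation series. The sketch below shows one way to derive them with pandas; the synthetic series is a random placeholder, not basin data, and a 1 mm wet-day threshold is assumed for SDII.

```python
# Minimal sketch (illustrative): annual extreme-precipitation indices.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
days = pd.date_range("1997-01-01", "2011-12-31", freq="D")
pr = pd.Series(rng.gamma(0.4, 12.0, len(days)), index=days)   # mm/day placeholder

wet = pr[pr >= 1.0]                                   # wet days (>= 1 mm assumed)
rx1day = pr.resample("Y").max()                       # max 1-day precipitation
rx10day = pr.rolling(10).sum().resample("Y").max()    # max 10-day precipitation
sdii = wet.resample("Y").mean()                       # simple daily intensity index
r20mm = (pr >= 20.0).resample("Y").sum()              # count of days >= 20 mm

print(pd.DataFrame({"Rx1day": rx1day, "Rx10day": rx10day,
                    "SDII": sdii, "R20mm": r20mm}).head())
```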

Keywords: IHA, climate change, disaster risk, Prek Thnot River Basin, Cambodia

Procedia PDF Downloads 102
4841 Effect of Lowering the Proportion of Chlorella vulgaris in Fish Feed on Tilapia's Immune System

Authors: Hamza A. Pantami, Khozizah Shaari, Intan S. Ismail, Chong C. Min

Abstract:

Introduction: Tilapia is the second-most harvested freshwater fish species in Malaysia, available in almost all fish farms and markets. Unfortunately, tilapia culture in Malaysia is highly affected by Aeromonas hydrophila and Streptococcus agalactiae, which affect the production rate and consequently pose a direct negative economic impact. Reliance on drugs to control or reduce bacterial infections has led to contamination of water bodies and the development of drug resistance, and has given rise to toxicity issues in downstream fish products. Resorting to vaccines has helped curb the problem to a certain extent, but a more effective solution is still required. Using microalgae-based feed to enhance fish immunity against bacterial infection offers a promising alternative. Objectives: This study aims to evaluate the efficacy of Chlorella vulgaris at lower percentage incorporation in feeds for an immune boost of tilapia in a shorter time. Methods: The study was in two phases: a safety study at a concentration of 500 mg/kg, and the administration of cultured C. vulgaris biomass via incorporation into fish feed for five different groups over three weeks. Group 1 was the control (0% incorporation), whereas groups 2, 3, 4 and 5 received 0.625%, 1.25%, 2.5% and 5% incorporation, respectively. The parameters evaluated were the blood profile, serum lysozyme activity (SLA), serum bactericidal activity (SBA), phagocytosis activity (PA), respiratory burst activity (RBA), and lymphoproliferation activity (LPA). The data were analyzed via ANOVA using SPSS (version 16). Further testing was done using Tukey's test. All tests were performed at the 95% confidence interval (p < 0.05). Results: There were no toxic signs in tilapia at 500 mg/kg. The treated groups showed significantly better immune parameters compared to the control group (p < 0.05). Conclusions: C. vulgaris crude biomass in fish meal at an incorporation level as low as 5% can increase specific and non-specific immunity in tilapia in a shorter time duration.
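
The statistical workflow described (one-way ANOVA followed by Tukey's test across the five inclusion levels) can be reproduced outside SPSS as sketched below. The immune-parameter values and group size are random placeholders, not the measured data.

```python
# Minimal sketch (not the study's SPSS workflow): ANOVA + Tukey's HSD.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)
levels = ["0%", "0.625%", "1.25%", "2.5%", "5%"]
groups = {lvl: rng.normal(10 + i, 1.5, 12) for i, lvl in enumerate(levels)}

print(f_oneway(*groups.values()))                      # overall group effect
df = pd.DataFrame({"sla": np.concatenate(list(groups.values())),
                   "feed": np.repeat(levels, 12)})
print(pairwise_tukeyhsd(df["sla"], df["feed"], alpha=0.05))
```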

Keywords: Chlorella vulgaris, hematology profile, immune boost, lymphoproliferation

Procedia PDF Downloads 110
4840 Application of Flue Gas Recirculation in Fluidized Bed Combustor for Energy Efficiency Enhancement

Authors: Chien-Song Chyang

Abstract:

For a fluidized-bed combustion system, the excess air ratio (EAR) and superficial velocity are major operating parameters affecting combustion behavior, and these two factors are interdependent since both the fluidizing gas and the combustion-supporting agent are air. The EAR changes when the superficial velocity is altered, so the effect of superficial velocity and/or EAR on combustion behavior cannot be examined under a specific condition. When staged combustion is executed, one can discuss the effect of EAR at a certain superficial velocity, but the flow rate of secondary air and the EAR remain dependent. In order to investigate the effect of the excess air ratio on the combustion behavior of a fluidized combustion system, flue gas recirculation was adopted by the author in 2007. We can maintain a fixed flow rate of primary gas or secondary gas and change the excess oxygen as an independent variable by adjusting the recirculated flue gas appropriately. In other words, we can investigate the effect of excess oxygen on the combustion behavior at a certain primary gas flow, or at certain hydrodynamic conditions. This technique can also be used at a lower turndown ratio to maintain the residual oxygen in the flue gas at a certain value. All the experiments were conducted in a pilot-scale fluidized bed combustor. The fluidized bed combustor can be divided into four parts, i.e., windbox, distributor, combustion chamber, and freeboard. The combustion chamber, with a cross-section of 0.8 m × 0.4 m, was constructed of 6 mm carbon steel lined with 150 mm refractory to reduce heat loss. Above the combustion chamber, the freeboard is 0.64 m in inner diameter. A total of 27 tuyeres with orifices of 5 and 3 mm inside diameter mounted on a 6 mm stainless-steel plate were used as the gas distributor, with an open-area ratio of 0.52%. The primary gas and secondary gas were fixed at 3 Nm³/min and 1 Nm³/min, respectively. The bed temperature was controlled by three heat transfer tubes inserted into the bubbling bed zone. The experimental data show that the bed temperature, CO and NO emissions increase with the stoichiometric oxygen of the primary gas. NO emissions decrease with the stoichiometric oxygen of the primary gas. Compared with substituting part of the primary air with nitrogen, lower NO emissions can be obtained when flue gas recirculation is applied as part of the primary air.
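
The superficial velocity referred to above follows from the primary (fluidizing) gas flow and the bed cross-section quoted in the abstract (0.8 m x 0.4 m, 3 Nm³/min), corrected from normal conditions to the bed temperature. The sketch below shows that calculation; the 850 °C bed temperature is an assumed placeholder, not a value stated in the abstract.

```python
# Minimal sketch (illustrative): superficial velocity through the distributor.
A_bed = 0.8 * 0.4                        # m^2, combustion-chamber cross-section
q_primary_n = 3.0 / 60.0                 # Nm^3/s (3 Nm^3/min primary gas)
T_bed, T_norm = 850.0 + 273.15, 273.15   # K; bed temperature is an assumption

q_primary = q_primary_n * T_bed / T_norm   # ideal-gas correction to bed conditions
u_superficial = q_primary / A_bed          # m/s
print(f"superficial velocity ~ {u_superficial:.2f} m/s")
```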

Keywords: fluidized bed combustion, flue gas circulation, NO emission, recycle

Procedia PDF Downloads 179
4839 The Characteristics of Static Plantar Loading in the First-Division College Sprint Athletes

Authors: Tong-Hsien Chow

Abstract:

Background: Plantar pressure measurement is an effective method for assessing plantar loading and can be applied to evaluating the movement performance of the foot. The purpose of this study is to explore sprint athletes' plantar loading characteristics and pain profiles in static standing. Methods: Experiments were undertaken on 80 first-division college sprint athletes and 85 healthy non-sprinters. The 'JC Mat', an optical plantar pressure measurement system, was applied to examine the differences between the two groups in the arch index (AI), three regional and six distinct sub-regional plantar pressure distributions (PPD), and footprint characteristics. Pain assessment and self-reported health status in the sprint athletes were examined to evaluate their common pain areas. Results: In the control group, the males' AI fell into the normal range, yet the females' AI was classified as the high-arch type. The AI values of the sprint group were found to be significantly lower than those of the control group. PPD were higher at the medial metatarsal bone of both feet and the lateral heel of the right foot in the sprint group, the males in particular, whereas they were lower at the medial and lateral longitudinal arches of both feet. The footprint characteristics tended to support the results of the AI and PPD, and reflected the corresponding pressure profiles. For the sprint athletes, the lateral knee joint and biceps femoris were the most common sites of musculoskeletal pain. Conclusions: The sprint athletes' AI values were generally classified as high arches, and their PPD were categorized between the features of runners and high-arched runners. These findings also correspond to the profiles of patellofemoral pain syndrome (PFPS)-related plantar pressure. The pain profiles appeared to correspond to the symptoms of high-arched runners and PFPS. The findings point to a possible link between high arches and PFPS. The correlation between high-arched runners and PFPS development is worth further study.
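
The arch index is commonly computed by dividing the footprint (toes excluded) into three equal-length regions and taking the midfoot contact area over the total contact area, with cut-offs of roughly 0.21 and 0.26 separating high, normal and low arches. The sketch below shows that calculation with hypothetical areas; it is not the JC Mat vendor algorithm, and the cut-offs are the commonly cited ones rather than values from this study.

```python
# Minimal sketch (illustrative): arch index from footprint contact areas.
def arch_index(fore_area, mid_area, rear_area):
    """AI = midfoot area / total footprint area (toes excluded)."""
    return mid_area / (fore_area + mid_area + rear_area)

ai = arch_index(fore_area=58.0, mid_area=18.0, rear_area=44.0)  # cm^2, assumed
category = ("high arch" if ai < 0.21 else
            "normal" if ai <= 0.26 else "low/flat arch")        # common cut-offs
print(f"AI = {ai:.2f} -> {category}")
```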

Keywords: sprint athletes, arch index, plantar pressure distributions, high arches, patellofemoral pain syndrome

Procedia PDF Downloads 339
4838 Evaluation of Non-Pharmacological Method-Transcervical Foley Catheter and Misoprostol to Intravaginal Misoprostol for Preinduction Cervical Ripening

Authors: Krishna Dahiya, Esha Charaya

Abstract:

Induction of labour is a common obstetric intervention; around 1 in every 4 patients undergoes induction of labour for different indications. Purpose: To study the efficacy of the combination of a Foley bulb and vaginal misoprostol in comparison to vaginal misoprostol alone for cervical ripening and induction of labour. Methods: A prospective randomised study was conducted on 150 patients with term singleton pregnancy admitted for induction of labour. Seventy-five patients were induced with both a Foley bulb and vaginal misoprostol, and another 75 were given vaginal misoprostol alone for induction of labour. The two groups were then compared with respect to the change in Bishop score, induction to active phase of labour interval, induction-delivery interval, duration of labour, maternal complications and neonatal outcomes. Data were analysed using the statistical software SPSS version 11.5. Tests with p < 0.05 were considered significant. Results: The two groups were comparable with respect to maternal age, parity, gestational age, indication for induction, and initial Bishop scores. Both groups had a significant change in Bishop score (2.99 ± 1.72 and 2.17 ± 1.48, respectively), with a statistically significant difference between them (p = 0.001; 95% CI -0.1978 to 0.8378). The mean induction to delivery interval was significantly lower in the combination group (11.76 ± 5.89 hours) than in the misoprostol group (14.54 ± 7.32 hours); the difference was 2.78 hours (p = 0.018; 95% CI -5.1042 to -0.4558). The induction to delivery interval was also significantly lower in nulliparous women of the combination group (13.64 ± 5.75 hours) than of the misoprostol group (18.4 ± 7.09 hours), a difference of 4.76 hours (p = 0.002; 95% CI 1.0465 to 14.7335). There was no difference between the groups in the mode of delivery, infant weight, Apgar score or intrapartum complications. Conclusion: From the present study it is concluded that the addition of a Foley catheter to vaginal misoprostol has a synergistic effect and results in earlier cervical ripening and delivery. These results suggest that the combination may be used to achieve timely and safe delivery in the presence of an unfavourable cervix. The combination of the Foley bulb and vaginal misoprostol resulted in a shorter induction-to-delivery time when compared with vaginal misoprostol alone, without increasing labour complications.

Keywords: Bishop score, Foley catheter, induction of labor, misoprostol

Procedia PDF Downloads 306
4837 Effects of Magnetization Patterns on Characteristics of Permanent Magnet Linear Synchronous Generator for Wave Energy Converter Applications

Authors: Sung-Won Seo, Jang-Young Choi

Abstract:

The rare earth magnets used in synchronous generators offer many advantages, including high efficiency and greatly reduced size and weight. The permanent magnet linear synchronous generator (PMLSG) allows for direct drive without the need for a mechanical device. Therefore, the PMLSG is well suited to translational applications, such as wave energy converters and free-piston energy converters. This manuscript compares the effects of different magnetization patterns on the characteristics of double-sided PMLSGs with slotless stator structures. The Halbach array has a higher flux density in the air gap than the vertical array, and the advantages of its performance and efficiency are widely known. To verify the advantage of the Halbach array, we apply a finite element method (FEM) and an analytical method. In general, an FEM and an analytical method are used in electromagnetic analysis for determining model characteristics, and the FEM is preferable for magnetic field analysis. However, the FEM is often slow and inflexible, whereas the analytical method requires little time and produces an accurate analysis of the magnetic field. The flux density in the air gap and the back-EMF are therefore obtained by the FEM, and the results from the analytical method correspond well with the FEM results. The model of the Halbach array exhibits less copper loss than the model of the vertical array because of the Halbach array's high output power density. The model of the vertical array has lower core loss than the model of the Halbach array because of the lower flux density in the air gap; therefore, the current density in the vertical model is higher for identical power output. The completed manuscript will include the magnetic field characteristics and structural features of both models, comparing various results, and a specific comparative analysis will be presented for the determination of the best model for application in a wave energy converting system.

Keywords: wave energy converter, permanent magnet linear synchronous generator, finite element method, analytical method

Procedia PDF Downloads 301
4836 Measuring the Effect of Ventilation on Cooking in Indoor Air Quality by Low-Cost Air Sensors

Authors: Andres Gonzalez, Adam Boies, Jacob Swanson, David Kittelson

Abstract:

Concern about indoor air quality (IAQ) has been increasing due to its risk to human health. Smoking, sweeping, and stove and stovetop use are the activities that contribute most to indoor air pollution. Outdoor air pollution also affects IAQ. The most important factors for IAQ from cooking activities are the materials, fuels, foods, and ventilation. Low-cost, mobile air quality monitoring (LCMAQM) sensors are an accessible technology for assessing IAQ because of their lower cost compared to conventional instruments. IAQ was assessed, using LCMAQM sensors, during cooking activities in University of Minnesota graduate housing, evaluating different ventilation systems. The gases measured are carbon monoxide (CO) and carbon dioxide (CO2). The particles measured are particulate matter smaller than 2.5 micrometers (PM2.5) and lung-deposited surface area (LDSA). The measurements were conducted during April 2019 in the Como Student Community Cooperative (CSCC), a graduate housing facility at the University of Minnesota. An electric stove was used for cooking, and the amount and type of food and oil used for cooking were the same for each measurement. There were six measurements: two measuring air quality without any ventilation, two using an extractor as mechanical ventilation, and two using the extractor with windows open as combined mechanical and natural ventilation. The results of the experiments show that natural ventilation is the most efficient system for controlling particles and CO2. Natural ventilation reduced the concentration by 79% for LDSA and 55% for PM2.5, compared to no ventilation. In the same way, the CO2 concentration was reduced by 35%. A well-mixed vessel model was implemented to assess particle formation and decay rates. Removal rates by the extractor were significantly higher for LDSA, which is dominated by smaller particles, than for PM2.5, but in both cases much lower compared to natural ventilation. There was significant day-to-day variation in particle concentrations under nominally identical conditions. This may be related to the fat content of the food. Further research is needed to assess the impact of the fat in food on particle generation.
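
The well-mixed vessel (single-zone box) model mentioned above treats the room as one uniformly mixed volume: concentration rises with the source term S/V and decays at the combined air-exchange and removal rate. The sketch below integrates that balance for a cooking episode; every parameter value is an assumption for illustration, not a measured one.

```python
# Minimal sketch (illustrative): single-zone box model dC/dt = S/V - (a + k)*C.
import numpy as np

V = 30.0          # m^3, kitchen volume (assumed)
S = 5.0e3         # particle source strength while cooking (assumed units/min)
a = 0.5 / 60.0    # air-exchange rate, 1/min (assumed 0.5 per hour)
k = 1.0 / 60.0    # deposition + extractor removal, 1/min (assumed)

dt, t_end = 0.1, 120.0                       # minutes
t = np.arange(0.0, t_end, dt)
C = np.zeros_like(t)
for i in range(1, len(t)):
    source = S / V if t[i] < 30.0 else 0.0   # cooking for the first 30 min
    C[i] = C[i - 1] + dt * (source - (a + k) * C[i - 1])

# After the source stops, ln C decays linearly with slope -(a + k); fitting
# that slope to measured data is how decay (removal) rates are estimated.
print(f"peak concentration ~ {C.max():.0f}, decay rate = {a + k:.3f} 1/min")
```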

Keywords: cooking, indoor air quality, low-cost sensor, ventilation

Procedia PDF Downloads 113
4835 Teaching the Temperature Dependence of Electrical Resistance of Materials through Arduino Investigation

Authors: Vinit Srivastava, Abhay Singh Thakur, Shivam Dubey, Rahul Vaish, Bharat Singh Rajpurohit

Abstract:

This study examines the problem of students' poor comprehension of the thermal dependence of resistance by investigating this idea using an evidence-based inquiry approach. It suggests a practical exercise to improve secondary school students' comprehension of how materials' resistance changes with temperature. The suggested exercise uses an Arduino and a Peltier device to test the resistance of aluminum and graphite at various temperatures. The study attempts to close the knowledge gap between the theoretical and practical facets of the subject, which students frequently find difficult to grasp. With the help of a variety of resistors made of various materials and pencils of varying grades, the Arduino experiment investigates the resistance of a metallic conductor (aluminum) and a semiconductor (graphite) at various temperatures. The purpose of the research is to clarify for students the relationship between temperature and resistance and to emphasize the importance of resistor material choice and measurement methods in obtaining precise and stable resistance values over dynamic temperature variations. The findings show that while the resistance of graphite decreases with temperature, the resistance of metallic conductors rises with temperature. The results also show that as softer or lower-grade lead pencils are used, the resistance values of the resistors drop. In addition, resistors showed greater stability at lower temperatures when their temperature coefficients of resistance (TCR) were smaller. Overall, the results of this article show that the suggested experiment is a useful and practical method for teaching students about the relationship between resistance and temperature. It emphasizes how crucial it is to take into account the resistor material selection and the resistance measurement technique when designing and selecting resistors for various uses. The results of the study are anticipated to guide the creation of more efficient teaching methods to close the gap between science education's theoretical and practical components.
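
The behaviour the activity demonstrates is captured by the linear temperature-coefficient model R(T) = R0 * (1 + alpha * (T - T0)): a positive alpha (metals such as aluminium) raises resistance with temperature, while a negative alpha (graphite) lowers it. The sketch below evaluates that model with typical textbook coefficients, not the students' measured values.

```python
# Minimal sketch (illustrative): linear TCR model of resistance vs temperature.
def resistance(r0_ohm, alpha_per_c, temp_c, ref_c=20.0):
    """Resistance at temp_c, given r0_ohm at the reference temperature ref_c."""
    return r0_ohm * (1.0 + alpha_per_c * (temp_c - ref_c))

# Approximate textbook coefficients (assumptions): aluminium positive,
# graphite slightly negative.
for name, r0, alpha in [("aluminium", 1.0, +0.0039),
                        ("graphite", 1.0, -0.0005)]:
    print(name, [round(resistance(r0, alpha, T), 3) for T in (0, 20, 40, 60)])
```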

Keywords: electrical resistance, temperature dependence, science education, inquiry-based activity, resistor stability

Procedia PDF Downloads 76
4834 The Influence of Ecologically-Valid High- and Low-Volume Resistance Training on Muscle Strength and Size in Trained Men

Authors: Jason Dellatolla, Scott Thomas

Abstract:

Much of the current literature pertaining to resistance training (RT) volume prescription lacks ecological validity, and very few studies investigate true high-volume ranges. Purpose: The present study sought to investigate the effects of ecologically-valid high- vs low-volume RT on muscular size and strength in trained men. Methods: This study systematically randomized trained, college-aged men into two groups: low-volume (LV; n = 4) and high-volume (HV; n = 5). The sample size was affected by COVID-19 limitations. Subjects followed an ecologically-valid 6-week RT program targeting both muscle size and strength. RT occurred 3x/week on non-consecutive days. Over the course of six weeks, the LV and HV groups gradually progressed from 15 to 23 sets/week and from 30 to 46 sets/week of lower-body RT, respectively. Muscle strength was assessed via 3RM tests in the squat, stiff-leg deadlift (SL DL), and leg press. Muscle hypertrophy was evaluated through a combination of DXA, BodPod, and ultrasound (US) measurements. Results: Two-way repeated-measures ANOVAs indicated that strength in all 3 compound lifts increased significantly in both groups (p < 0.01); between-group differences only occurred in the squat (p = 0.02) and SL DL (p = 0.03), both of which favored the HV group. Significant pre-to-post-study increases in indicators of hypertrophy were found for lean body mass in the legs via DXA, overall fat-free mass via BodPod, and US measures of muscle thickness (MT) for the rectus femoris, vastus intermedius, vastus medialis, vastus lateralis, long head of the biceps femoris, and total MT. Between-group differences were only found for MT of the vastus medialis, favoring the HV group. Moreover, each additional weekly set of lower-body RT was associated with an average increase in MT of 0.39% in the thigh muscles. Conclusion: We conclude that ecologically-valid RT regimens significantly improve muscular strength and indicators of hypertrophy. When high-volume RT is compared to low-volume RT, it provides significantly greater gains in muscular strength but has no greater effect on hypertrophy over the course of 6 weeks in trained, college-aged men.

Keywords: ecological validity, hypertrophy, resistance training, strength

Procedia PDF Downloads 114
4833 Central Vascular Function and Relaxibility in Beta-thalassemia Major Patients vs. Sickle Cell Anemia Patients by Abdominal Aorta and Aortic Root Speckle Tracking Echocardiography

Authors: Gehan Hussein, Hala Agha, Rasha Abdelraof, Marina George, Antoine Fakhri

Abstract:

Background: β-Thalassemia major (TM) and sickle cell disease (SCD) are inherited hemoglobin disorders resulting in chronic hemolytic anemia. Cardiovascular involvement is an important cause of morbidity and mortality in these groups of patients, and there is a narrow border between overt myocardial dysfunction and clinically silent left ventricular (LV) and/or right ventricular (RV) dysfunction in these patients. 3D speckle tracking echocardiography (3D STE) is a novel method for the detection of subclinical myocardial involvement. We aimed to study myocardial involvement in SCD and TM using 3D STE, comparing it with conventional echocardiography and correlating it with serum ferritin level and lactate dehydrogenase (LDH). Methodology: Thirty SCD and thirty β-TM patients, age range 4-18 years, were compared to a 30-subject healthy age- and sex-matched control group. Cases were subjected to clinical examination and laboratory measurement of hemoglobin level, serum ferritin, and LDH. Transthoracic color Doppler echocardiography, 3D STE, tissue Doppler echocardiography, and aortic speckle tracking were performed. Results: There was a significant reduction in global longitudinal strain (GLS), global circumferential strain (GCS), and global area strain (GAS) in SCD and TM compared with controls (p < 0.001), and aortic speckle tracking was significantly lower in patients with TM and SCD than in controls (p < 0.001). LDH was significantly higher in SCD than in both TM and controls, and in SCD but not TM it correlated significantly and positively with mitral inflow E (p = 0.022 and 0.072; r = 0.416 and -0.333, respectively), lateral E/E' (p < 0.001 and 0.818; r = 0.618 and -0.044, respectively) and septal E/E' (p = 0.007 and 0.753; r = 0.485 and -0.060, respectively), while LDH showed a negative correlation with aortic root speckle tracking (p = 0.681; r = -0.078). LDH showed potential diagnostic accuracy in predicting vascular dysfunction, as represented by aortic root GCS, with a sensitivity of 74%, and aortic root GCS was predictive of LV dysfunction in SCD patients with a sensitivity of 100%. Conclusion: 3D STE detected LV and RV systolic dysfunction in spite of normal values on conventional echocardiography. SCD showed significantly lower right ventricular dysfunction and aortic root GCS than TM and controls. LDH can be used to screen patients for cardiac dysfunction in SCD but not in TM.

Keywords: thalassemia major, sickle cell disease, 3d speckle tracking echocardiography, LDH

Procedia PDF Downloads 170
4832 Ergonomics Aspects of Work with Computers

Authors: Leena Korpinen, Rauno Pääkkönen, Fabriziomaria Gobba

Abstract:

This paper is based on a large questionnaire study. It presents how all participants and subgroups (upper- and lower-level white-collar workers) answered the question, 'Have you had an ache, pain, or numbness, which you associate with desktop computer use, in the different body parts during the last 12 months?' 14.6% of participants (19.4% of women and 8.2% of men) reported that they had often or very often had physical symptoms in the neck. Even if our results cannot prove a causal relation between symptoms and computer use, they show that workers believe that computer use can influence their wellbeing. This is important when devising treatment modalities to decrease these physical symptoms.

Keywords: ergonomics, work, computer, symptoms

Procedia PDF Downloads 403
4831 From Poverty to Progress: A Comparative Analysis of Mongolia with PEER Countries

Authors: Yude Wu

Abstract:

Mongolia, grappling with significant socio-economic challenges, faces pressing issues of inequality and poverty, as evidenced by a high Gini coefficient and the highest poverty rate among the top 20 largest Asian countries. Despite government efforts, Mongolia's poverty rate experienced only a slight reduction from 29.6 percent in 2016 to 27.8 percent in 2020. PEER countries, such as South Africa, Botswana, Kazakhstan, and Peru, share characteristics with Mongolia, including reliance on the mining industry and classification as lower middle-income countries. Successful transitions of these countries to upper middle-income status between 1994 and the 2010s provide valuable insights. Drawing on secondary analyses of existing research and PEER country profiles, the study evaluates past policies, identifies gaps in current approaches, and proposes recommendations to combat poverty sustainably. The hypothesis includes a reliance on the mining industry and a transition from lower to upper middle-income status. Policies from these countries, such as the GEAR policy in South Africa and economic diversification in Botswana, offer insights into Mongolia's development. This essay aims to illuminate the multidimensional nature of underdevelopment in Mongolia through a secondary analysis of existing research and PEER country profiles, evaluating past policies, identifying gaps in current approaches, and providing recommendations for sustainable progress. Drawing inspiration from PEER countries, Mongolia can implement policies such as economic diversification to reduce vulnerability and create stable job opportunities. Emphasis on infrastructure, human capital, and strategic partnerships for Foreign Direct Investment (FDI) aligns with successful strategies implemented by PEER countries, providing a roadmap for Mongolia's development objectives.

Keywords: inequality, PEER countries, comparative analysis, nomadic animal husbandry, sustainable growth

Procedia PDF Downloads 63
4830 The Cost-Effectiveness of Pancreatic Surgical Cancer Care in the US vs. the European Union: Results of a Review of the Peer-Reviewed Scientific Literature

Authors: Shannon Hearney, Jeffrey Hoch

Abstract:

While all cancers are costly to treat, pancreatic cancer is a notoriously costly and deadly form of cancer. Across the world there is a variety of treatment centers, ranging from small clinics to large, high-volume hospitals, as well as differing structures of payment and access. It has been noted that centers that treat a high volume of pancreatic cancer patients have higher quality of care, but it is unclear whether that care is cost-effective. In the US there is no clear consensus on the cost-effectiveness of high-volume centers for the surgical care of pancreatic cancer. European countries like Finland and Italy have shown that high-volume centers have lower mortality rates and can have lower costs; however, there is still a gap in knowledge about these centers' cost-effectiveness globally. This paper seeks to review the current literature in Europe and the US to gain a better understanding of the state of high-volume pancreatic surgical centers' cost-effectiveness while considering the contextual differences in health system structure. A review of major reference databases such as Medline, Embase and PubMed will be conducted for cost-effectiveness studies on the surgical treatment of pancreatic cancer at high-volume centers. Possible MeSH terms to be included, but not limited to, are: "pancreatic cancer", "cost analysis", "cost-effectiveness", "economic evaluation", "pancreatic neoplasms", "surgical", "Europe", "socialized medicine", "privatized medicine", "for-profit", and "high-volume". Studies must also have been available in the English language. This review will encompass European scientific literature as well as that of the US. Based on our preliminary findings, we anticipate high-volume hospitals to provide better care at greater costs. We anticipate that high-volume hospitals may be cost-effective in different contexts depending on the national structure of a healthcare system. Countries with more centralized and socialized healthcare may yield results that are more cost-effective. High-volume centers may differ in their cost-effectiveness for the surgical care of pancreatic cancer internationally, especially when comparing those in the United States to others throughout Europe.

Keywords: cost-effectiveness analysis, economic evaluation, pancreatic cancer, scientific literature review

Procedia PDF Downloads 91
4829 A Shift in Approach from Cereal Based Diet to Dietary Diversity in India: A Case Study of Aligarh District

Authors: Abha Gupta, Deepak K. Mishra

Abstract:

The food security issue in India has centred on the availability and accessibility of cereals, which are regarded as the only food group to check hunger and improve nutrition. The significance of fruits, vegetables, meat and other food products has been totally neglected, despite the fact that they provide essential nutrients to the body. There is a need to shift the emphasis from a cereal-based approach to a more diverse diet so that the aim of achieving food security may change from just reducing hunger to overall health. This paper attempts to analyse how far dietary diversity has been achieved across different socio-economic groups in India. For this purpose, the present paper sets objectives to determine (a) the percentage share of different food groups in total food expenditure and consumption by background characteristics, (b) the source of and preference for all food items and, (c) the diversity of diet across socio-economic groups. A cross-sectional survey covering 304 households selected through proportional stratified random sampling was conducted in six villages of Aligarh district of Uttar Pradesh, India. Information on the amount of food consumed, the source of consumption and the expenditure on food (74 food items grouped into 10 major food groups) was collected with a recall period of seven days. Per capita per day food consumption/expenditure was calculated by dividing household consumption/expenditure by household size and by seven. The food variety score was estimated by assigning a value of 0 to those food groups/items which had not been eaten and 1 to those which had been consumed by the household in the last seven days; adding up the scores of all food groups/items gave the food variety score. The diversity of diet was computed using the Herfindahl-Hirschman index. The findings of the paper show that the cereal, milk, and roots and tubers food groups contribute a major share of total consumption/expenditure. Consumption of these food groups varies across socio-economic groups, whereas fruit, vegetable, meat and other food consumption remains low and uniform. The estimation of dietary diversity shows a high concentration of diet due to the higher consumption of cereals, milk, and root and tuber products, and dietary diversity varies slightly across background groups. Muslims, Scheduled Castes, small farmers, the lower income class, the food insecure, those below the poverty line and labour families show a higher concentration of diet compared to their counterpart groups. These groups also evince a lower mean number of food items consumed in a week due to economic constraints and the resultant lower accessibility to a number of expensive food items. The results advocate a shift from a cereal-based diet to dietary diversity which includes not only cereal and milk products but also nutrition-rich food items such as fruits, vegetables, meat and other products. Integrating a dietary diversity approach in the food security programmes of the country would help to achieve nutrition security, as hidden hunger is widespread among the Indian population.
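
The two measures described above are simple to compute: the food variety score sums 0/1 consumption flags across food groups/items, and the Herfindahl-Hirschman index sums the squared expenditure shares (1 means all spending concentrated in one group, lower values mean a more diverse diet). The sketch below illustrates both for a hypothetical cereal-heavy household; the shares are placeholders, not survey data.

```python
# Minimal sketch (illustrative): food variety score and HHI of diet.
def food_variety_score(consumed_flags):
    """Sum of 0/1 flags over food groups/items eaten in the last seven days."""
    return sum(consumed_flags)

def hhi(expenditure):
    """Herfindahl-Hirschman index of expenditure shares (1 = fully concentrated)."""
    total = sum(expenditure)
    return sum((x / total) ** 2 for x in expenditure)

# Hypothetical weekly expenditure across the 10 major food groups (cereal-heavy)
spend = [55, 15, 8, 6, 5, 4, 3, 2, 1, 1]
flags = [1 if x > 0 else 0 for x in spend]
print(f"food variety score = {food_variety_score(flags)}, HHI = {hhi(spend):.2f}")
```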

Keywords: dietary diversity, food security, India, socio-economic groups

Procedia PDF Downloads 340
4828 Primary and Secondary Big Bangs Theory of Creation of Universe

Authors: Shyam Sunder Gupta

Abstract:

The current theory for the creation of the universe, the Big Bang theory, is widely accepted but leaves some questions unanswered. It does not explain the origin of the singularity or what causes the Big Bang, nor does it explain why there is such a huge amount of dark energy and dark matter in our universe. There is also the question of whether there is one universe or multiple universes. This research addresses these questions using the Bhagvat Puran and other Vedic scriptures as the basis. There is a Unique Pure Energy Field that is eternal, infinite, and the finest of all, and that never transforms when in its original form. The carrier particles of this Unique Pure Energy are Param-anus, the Fundamental Energy Particles. Param-anus and combinations of these particles create bigger particles from which the Universe gets created. For creation to initiate, the Unique Pure Energy is represented in three phases: positive phase energy, neutral phase eternal time energy, and negative phase energy. Positive phase energy further expands into three forms of creative energies (CE1, CE2 and CE3). From CE1 energy, three energy modes were created: the mode of activation, the mode of action, and the mode of darkness. From these three modes, 16 Principles, the subtlest forms of energies, namely Pradhan, Mahat-tattva, Time, Ego, Intellect, Mind, Sound, Space, Touch, Air, Form, Fire, Taste, Water, Smell, and Earth, get created. In the Mahat-tattva, dominant in the Mode of Darkness, CE1 energy creates innumerable primary singularities from seven principles: Pradhan, Mahat-tattva, Ego, Sky, Air, Fire, and Water. CE1 energy gets divided as CE2 and enters, along with the three modes and time, into each singularity; the primary Big Bang takes place, and innumerable Invisible Universes get created. Each Universe has seven coverings of the seven principles, and each layer is 10 times thicker than the previous layer. Energy CE2 divides the space inside the Invisible Universe, under the coverings, into two halves. In the lower half, the process of evolution gets initiated, and the seeds of 24 elements get created, out of which the 5 fundamental elements, the building blocks of matter, Sky, Air, Fire, Water and Earth, create the seeds of stars, planets, galaxies and all other matter. Since the 5 fundamental elements get created out of the mode of darkness, this explains why there is so much dark energy and dark matter in our Universe. This process of creation in the lower half of the Invisible Universe continues for 2.16 billion years. Further, in the lower part of the energy field, exactly at the centre of the Invisible Universe, a Secondary Singularity is created, through which, by the force of the Mode of Action, the secondary Big Bang takes place and the Visible Universe gets created in the shape of a lotus flower, expanding into the upper part. Visible matter starts appearing after a gap of 360,000 years. Within the Visible Universe, a small part gets created, known as the Phenomenal Material World, which is our Solar System, with the sun at its centre. The diameter of the Solar planetary system is 6.4 billion km.

Keywords: invisible universe, phenomenal material world, primary Big Bang, secondary Big Bang, singularities, visible universe

Procedia PDF Downloads 89
4827 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance

Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian

Abstract:

Researchers, power companies, and state politicians have given concentrated solar power (CSP) much attention due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of solar resources. The Linear Fresnel Reflector (LFR) is a CSP technology known for being inexpensive and having a low land-use factor, but it suffers from low optical efficiency. The LFR has been considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because of its simpler design, which often outweighs its lower efficiency. The LFR has been found to be a promising option for producing steam directly for a thermal cycle in order to generate low-cost electricity, and it has also been shown to be promising for indirect steam generation. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) LFR power plants using molten salt and other Heat Transfer Fluids (HTF), and to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the optimal solar field size that provides the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. Optimizing the solar field size, the solar multiple (SM) is found to lie between 1.2 and 1.5 in order to achieve an LCOE as low as 9 cents/kWh for the direct steam generation configuration. The plant is capable of producing around 141 GWh annually with a capacity factor of up to 36%, whereas the ISG configuration produces less energy at a higher cost. The optimization results show that DSG outperforms ISG, producing around 3% more annual energy at a 2% lower LCOE and 28% lower capital cost.
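
As an illustration of the economic metric used in the comparison, the sketch below computes a simplified LCOE from an annualised capital cost and an annual O&M cost; the capital cost, O&M cost, discount rate and lifetime are hypothetical placeholders, with only the roughly 141 GWh annual yield taken from the abstract.

# Simplified LCOE sketch in Python: (annualised capital cost + annual O&M) / annual energy.
# All cost inputs below are hypothetical and not values reported in the study.

def capital_recovery_factor(rate, years):
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, annual_om, annual_energy_kwh, rate=0.07, years=25):
    crf = capital_recovery_factor(rate, years)
    return (capex * crf + annual_om) / annual_energy_kwh  # USD per kWh

annual_energy_kwh = 141e6   # ~141 GWh/year, as reported for the DSG case
capex = 110e6               # hypothetical capital cost, USD
annual_om = 3e6             # hypothetical O&M cost, USD/year
print(round(lcoe(capex, annual_om, annual_energy_kwh) * 100, 1), "cents/kWh")  # ~8.8 with these inputs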

Keywords: concentrated solar power, levelized cost of electricity, linear Fresnel reflectors, steam generation

Procedia PDF Downloads 111
4826 Study of Morning-Glory Spillway Structure in Hydraulic Characteristics by CFD Model

Authors: Mostafa Zandi, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, ensuring the stability of the dam and downstream areas during floods. The morning-glory spillway is a common spillway type for discharging overflow water from behind dams and is typically constructed for dams with small reservoirs. In this research, the hydraulic flow characteristics of a morning-glory spillway are investigated with a CFD model. The two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method, with the PISO scheme applied for velocity-pressure coupling. The widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, and ε equations, and the VOF method (geometric reconstruction algorithm) was adopted for interface simulation. The results show that a fine computational grid, a velocity-inlet condition at the flow inlet boundary, and a pressure-outlet condition at the boundaries in contact with air provide the best results. The standard wall function was chosen for the near-wall treatment, and the standard k-ε turbulence model gave the results most consistent with the experiments. As the jet approaches the end of the basin, the differences between the computational and experimental results increase, and the lower profile of the water jet is less sensitive than the upper jet profile. In the pressure test, it was also found that the numerical pressure values at the lower landing number differ considerably from the experimental results. Overall, the characteristics of the complex flow over a morning-glory spillway were studied numerically using a RANS solver. The grid study showed that the results of a 57512-node grid had the best agreement with the experimental values. A downstream channel length of 1.5 m was preferred, and the standard k-ε turbulence model produced the best results for the morning-glory spillway. The numerical free-surface profiles followed the theoretical equations very well.
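
For reference, the standard k-ε model mentioned above is usually written as the following pair of transport equations with an eddy-viscosity closure (a textbook formulation, not reproduced from the paper):

\mu_t = \rho C_\mu \frac{k^2}{\varepsilon}, \qquad
\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_i)}{\partial x_i}
  = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon,
\frac{\partial(\rho \varepsilon)}{\partial t} + \frac{\partial(\rho \varepsilon u_i)}{\partial x_i}
  = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k},

with the usual constants C_\mu = 0.09, C_{1\varepsilon} = 1.44, C_{2\varepsilon} = 1.92, \sigma_k = 1.0 and \sigma_\varepsilon = 1.3, where P_k is the production of turbulent kinetic energy.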

Keywords: morning-glory spillway, CFD model, hydraulic characteristics, wall function

Procedia PDF Downloads 77
4825 The Effectiveness of Probiotics in the Treatment of Minimal Hepatic Encephalopathy Among Patients with Cirrhosis: An Expanded Meta-Analysis

Authors: Erwin Geroleo, Higinio Mappala

Abstract:

Introduction: Overt hepatic encephalopathy (OHE) is the most dreaded outcome of liver cirrhosis. Aside from the triggering factors already known to precipitate OHE, there is growing evidence that an altered gut microbiota profile (dysbiosis) can also trigger OHE. Minimal hepatic encephalopathy (MHE) is the mildest form of hepatic encephalopathy (HE), affecting about one-third to close to 80% of patients with cirrhosis, and manifests as abnormalities in central nervous system function. Since these symptoms are subclinical, most patients are not treated to prevent OHE. Modulation of the gut microbiota has been evaluated by several studies as a therapeutic option for MHE, especially for decreasing ammonia levels and thus preventing progression to OHE. Objectives: This study aims to evaluate the efficacy of probiotics in reducing ammonia levels in patients with minimal hepatic encephalopathy and to determine whether probiotics have a role in preventing progression to overt hepatic encephalopathy in adult patients with MHE. Methods and Analysis: The literature search was restricted to human studies in adult subjects from 2004 to 2022. The Jadad score was used to assess the studies included in the final analysis; eight (8) studies were retained. Cochrane's RevMan Web, the fixed-effects model and the Z-test were used in the overall analysis of the outcomes. A p value of less than 0.0005 was considered statistically significant. Results: The results show that probiotics significantly lower ammonia levels in cirrhotic patients and that the use of probiotics significantly prevents the progression of MHE to OHE. The overall risk-of-bias graph indicates a low risk of publication bias among the studies included in the meta-analysis. Main findings: Plasma ammonia concentration was lower among participants treated with probiotics (p<0.00001); the ammonia level of the probiotics group was lower by 13.96 μmol/L on average. The overall risk of developing overt hepatic encephalopathy in the probiotics group was decreased by 15% compared to the placebo group. Conclusion: Compared with placebo, probiotics can decrease serum ammonia, may improve MHE and may prevent OHE.
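
A minimal sketch of the inverse-variance fixed-effects pooling and Z-test described above is shown below; the mean differences and standard errors are invented for illustration and are not the data of the eight included trials.

# Fixed-effects (inverse-variance) pooling of mean differences in ammonia (μmol/L),
# followed by a two-sided Z-test on the pooled estimate. Study data are hypothetical.
import math

studies = [(-15.2, 4.1), (-12.8, 3.6), (-10.5, 5.0), (-16.0, 4.4)]  # (mean difference, SE)

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled / pooled_se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value

print(f"pooled MD = {pooled:.2f} μmol/L, SE = {pooled_se:.2f}, Z = {z:.2f}, p = {p:.2g}")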

Keywords: minimal hepatic encephalopathy, probiotics, liver cirrhosis, overt hepatic encephalopathy

Procedia PDF Downloads 46
4824 The Effects of Leadership on the Claim of Responsibility

Authors: Katalin Kovacs

Abstract:

In most forms of violence the perpetrators intend to hide their identities. Terrorism is different: terrorist groups often take responsibility for their attacks and consequently reveal their identities. This unique characteristic of terrorism has been largely overlooked, and scholars are still puzzled as to why terrorist groups claim responsibility for their attacks. The claim of responsibility is certainly worth analysing. It would help to build a clearer picture of what terrorist groups try to achieve and how, and to develop an understanding of the strategic planning of terrorist attacks and the message the terrorists intend to deliver. The research aims to answer the question of why terrorist groups choose to claim responsibility for some of their attacks and not for others. To do so, the claim of responsibility is treated as a tactical choice, based on the assumption that terrorists weigh the costs and benefits of claiming responsibility. The main argument is that terrorist groups do not claim responsibility when there is no tactical advantage to be gained from doing so. The idea that the claim of responsibility has tactical value offers the opportunity to test these assertions using a large-scale empirical analysis. The claim of responsibility as a tactical choice depends on other tactical choices, such as the choice of target, the internationality of the attack, the number of victims, and whether the group occupies territory or operates as an underground group. The structure of the terrorist group and the level of decision making also affect the claim of responsibility. Terrorists at lower levels are less disciplined than the leaders: they pay less attention to the strategic objectives, engage more readily in indiscriminate violence, and are consequently less likely to claim responsibility. The research therefore argues that terrorists at the highest level of decision making, who take the strategic objectives into account, are the ones who claim responsibility for attacks. Because most studies on terrorism fail to provide definitions, the research field is fragmented and the results are hard to compare; separate, isolated studies do not support comprehensive thinking. It is also important to note that only a few studies use quantitative methods. The aim of this research is to develop a new and comprehensive overview of the claim of responsibility based on strong quantitative evidence. Using well-established definitions and operationalisation, the current research focuses on a broad range of attributes that can have tactical value in order to determine the circumstances under which terrorists are more likely to claim responsibility.
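
Purely as an illustration of how such a quantitative analysis could be operationalised, and not as the author's actual design, the sketch below fits a logistic regression of a claimed/unclaimed indicator on a few of the tactical attributes listed above; the variable names and data are invented.

# Hypothetical illustration: logistic regression of claim of responsibility (1 = claimed)
# on tactical attributes of an attack. All data below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: international attack (0/1), number of victims, territory-holding group (0/1),
#          decision taken at leadership level (0/1)
X = np.array([
    [1, 12, 1, 1],
    [0,  3, 0, 0],
    [1, 45, 1, 1],
    [0,  1, 0, 0],
    [0,  8, 1, 1],
    [1,  2, 0, 0],
    [0, 20, 1, 1],
    [1,  5, 0, 0],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = responsibility claimed

model = LogisticRegression().fit(X, y)
print(dict(zip(["international", "victims", "territory", "leadership_level"],
               model.coef_[0].round(2))))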

Keywords: claim of responsibility, leadership, tactical choice, terrorist group

Procedia PDF Downloads 313
4823 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model

Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson

Abstract:

The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the sustainable development goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools such as household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain a timely insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and have thus seen limited downstream application, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 data at 10 m per pixel for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69 to 0.79). This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, from which key markers of poverty and slums, such as roofing and road quality, are discernible. It is important to note, however, that the human readers did not receive any training before rating, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for the current frontier of deep learning in this domain of scholarship, namely eXplainable Artificial Intelligence, pursued through a collaborative rather than a comparative framework.
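
As an illustration of the rank-correlation comparison described above, the sketch below computes Spearman's rho between ground-truth wealth quintiles and the ratings of a human reader and of a model for a handful of clusters; the numbers are invented and are not the study's data.

# Hypothetical example: Spearman rank correlation between DHS wealth quintiles and
# welfare ratings from a human reader and from a model. All values are invented.
from scipy.stats import spearmanr

wealth_quintiles = [1, 2, 2, 3, 4, 5, 3, 1, 5, 4]   # ground truth per cluster
human_ratings    = [2, 1, 3, 3, 2, 4, 5, 1, 4, 3]   # human reader's estimates
model_ratings    = [1, 2, 3, 3, 4, 5, 4, 1, 5, 3]   # model's estimates

rho_human, _ = spearmanr(wealth_quintiles, human_ratings)
rho_model, _ = spearmanr(wealth_quintiles, model_ratings)
print(f"human rho = {rho_human:.2f}, model rho = {rho_model:.2f}")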

Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania

Procedia PDF Downloads 105