Search results for: fuel volume estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5872

2362 Influence of Yield Stress and Compressive Strength on Direct Shear Behaviour of Steel Fibre-Reinforced Concrete

Authors: Bensaid Boulekbache, Mostefa Hamrat, Mohamed Chemrouk, Sofiane Amziane

Abstract:

This study examines the influence of paste yield stress and compressive strength on the behaviour of fibre-reinforced concrete (FRC) under direct shear. The parameters studied are the steel fibre content, the aspect ratio of the fibres, and the concrete strength. Prismatic specimens of dimensions 10×10×35 cm, made of concretes of various yield stresses reinforced with end-hooked steel fibres at three fibre volume fractions (0, 0.5, and 1%) and two aspect ratios (65 and 80), were tested in direct shear. Three types of concrete with various compressive strengths and yield stresses were tested: an ordinary concrete (OC), a self-compacting concrete (SCC), and a high strength concrete (HSC). The compressive strengths investigated were 30 MPa for OC, 60 MPa for SCC, and 80 MPa for HSC. The results show that shear strength and ductility are improved very significantly by the fibre content, fibre aspect ratio, and concrete strength. As the compressive strength and the fibre volume fraction increase, the shear strength increases. The yield stress of the concrete, however, has an important influence on the orientation and distribution of the fibres in the matrix. Ductility was much higher for the ordinary and self-compacting concretes (concretes with good workability). Ductility in direct shear depends on the fibre orientation and is significantly improved when the fibres are perpendicular to the shear plane. On the contrary, in concrete with poor workability, inadequate distribution and orientation of the fibres occurred, leading to a weak contribution of the fibres to the direct shear behaviour.

Keywords: concrete, fibre, direct shear, yield stress, orientation, strength

Procedia PDF Downloads 535
2361 Analysis of the Treatment of Hemorrhagic Stroke in Multidisciplinary City Hospital №1, Nur-Sultan

Authors: M. G. Talasbayen, N. N. Dyussenbayev, Y. D. Kali, R. A. Zholbarysov, Y. N. Duissenbayev, I. Z. Mammadinova, S. M. Nuradilov

Abstract:

Background. Hemorrhagic stroke is an acute cerebrovascular accident resulting from rupture of a cerebral vessel or from increased wall permeability with imbibition of blood into the brain parenchyma. Arterial hypertension is a common cause of hemorrhagic stroke, and male gender and age over 55 years are risk factors for intracerebral hemorrhage. Treatment of intracerebral hemorrhage targets the primary pathophysiological links: relief of coagulopathy and control of arterial hypertension. Early surgical treatment can limit cerebral compression and prevent the toxic effects of blood on the brain parenchyma. Despite progress in neuroimaging and the use of minimally invasive techniques and navigation systems, mortality from intracerebral hemorrhage remains high. Materials and methods. The study included 78 patients (62.82% male and 37.18% female) with a verified diagnosis of hemorrhagic stroke in the period from 2019 to 2021. The age of the patients ranged from 25 to 80 years; the average age was 54.66±11.9 years. Demographic data, brain CT data (localization and volume of hematomas), methods of treatment, and disease outcome were analyzed. Results. The retrospective analysis demonstrated that 78.2% of all patients underwent surgical treatment: decompressive craniectomy in 37.7%, craniotomy with hematoma evacuation in 29.5%, and hematoma drainage in 24.59% of cases. The proportion of deaths, analyzed by the volume of intracerebral hemorrhage, was higher in the group with a hematoma volume of more than 60 ml. Evaluation of the relationship between time to surgery and mortality showed that the most favorable outcome is observed with surgical treatment in the interval from 3 to 24 hours. Mortality did not differ significantly between age groups. Analysis of the impact of surgery type on mortality revealed that decompressive craniectomy with or without hematoma evacuation led to an unfavorable outcome in 73.9% of cases, while craniotomy with hematoma evacuation and drainage led to mortality in only 28.82% of cases. Conclusion. Despite multimodal approaches, advances in surgical techniques and equipment, and the selection of optimal conservative therapy, the tactics of managing and treating hemorrhagic stroke remain controversial. Nevertheless, our experience shows that surgical intervention within 24 hours of admission, together with craniotomy and hematoma evacuation, improves the prognosis of treatment outcomes.

Keywords: hemorrhagic stroke, intracerebral hemorrhage, surgical treatment, stroke mortality

Procedia PDF Downloads 101
2360 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for Class-Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Introduction: The problem of unbalanced data sets commonly appears in real-world applications. Due to unequal class distributions, many studies have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application to class-imbalanced data of diabetes risk groups. Methods: Data on 599 staff members from a health project in a government hospital in Bangkok were obtained for the classification problem. The staff were assigned to one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped to 50 and 100 samples of 599 observations each, for additional estimation of the misclassification error rate. Each data set was examined for departure from multivariate normality and for equality of the covariance matrices of the three risk groups; both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were applied to the 50 and 100 bootstrap samples and to the original data. In finding the optimal classification rule, the prior probabilities were set either to equal proportions (0.33:0.33:0.33) or to unequal proportions of (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} set to {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data of diabetes risk groups.
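For readers who want to experiment with the prior-weighted k-nearest neighbors rule described above, the following is a minimal sketch (not the authors' code) of one standard way to fold prior probabilities into a k-NN posterior; the density-style correction, the toy data, and the feature choice are assumptions of this illustration.

```python
import numpy as np
from collections import Counter

def knn_prior_classify(X_train, y_train, x, k=3, priors=None):
    """Classify x by k-NN with class priors.

    Score for class c is proportional to (k_c / n_c) * prior_c, where
    k_c = neighbours of class c among the k nearest and n_c = training
    size of class c (a density-style correction for class imbalance)."""
    classes = np.unique(y_train)
    if priors is None:
        priors = {c: 1.0 / len(classes) for c in classes}
    d = np.linalg.norm(X_train - x, axis=1)     # Euclidean distances
    nearest = y_train[np.argsort(d)[:k]]        # labels of k nearest points
    counts, n_c = Counter(nearest), Counter(y_train)
    scores = {c: (counts.get(c, 0) / n_c[c]) * priors[c] for c in classes}
    return max(scores, key=scores.get)

# Hypothetical (HbA1c-like, BMI-like) training points for illustration
X = np.array([[5.1, 22], [5.6, 24], [6.0, 30], [7.2, 31], [8.0, 33]])
y = np.array([0, 0, 0, 1, 2])                   # 0=non-risk, 1=risk, 2=diabetic
print(knn_prior_classify(X, y, np.array([7.0, 30.5]), k=3,
                         priors={0: 0.90, 1: 0.05, 2: 0.05}))
```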

Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors

Procedia PDF Downloads 429
2359 Development of Coir Reinforced Composite for Automotive Parts Application

Authors: Okpala Charles Chikwendu, Ezeanyim Okechukwu Chiedu, Onukwuli Somto Kenneth

Abstract:

The demand for lightweight and fuel-efficient automobiles has led to the use of fibre-reinforced polymer composites in place of traditional metal parts. Coir, a natural fibre, offers qualities such as low cost, good tensile strength, and biodegradability, making it a potential filler material for automotive components. However, poor interfacial adhesion between coir and polymeric matrices, due to the fibre's moisture content and method of preparation, has been a challenge; to address it, the extracted coir was chemically treated with NaOH. To develop a side-view mirror encasement, the mechanical effects of fibre percentage composition, fibre length, and epoxy percentage composition in a coir-fibre-reinforced composite were investigated; polyester was adopted as the resin for the mould, epoxy as the resin for the product, and coir served as the filler material. Specimens with varied fibre loadings (15, 30, and 45%), fibre lengths (10, 15, 20, 30, and 45 mm), and epoxy resin weight fractions (55, 70, and 85%) were fabricated using the hand lay-up technique and then subjected to mechanical tests (tensile, flexural, and impact). The test results showed that the optimal solution for the input factors is coir at 45%, epoxy at 54.543%, and a 45 mm coir length, which was used for the development of a vehicle's side-view mirror encasement. The optimal values of the response parameters are 49.333 MPa for tensile strength, 57.118 MPa for flexural strength, 34.787 kJ/m² for impact strength, 4.788 GPa for Young's modulus, 4.534 kN for load, and 20.483 mm for deflection. The models developed using Design Expert software showed that the input factors can achieve the response parameters with 94% desirability. The study showed that coir is durable enough as a filler material in an epoxy composite for automobile applications, and that fibre loading and length have a significant effect on the mechanical behaviour of coir-fibre-reinforced epoxy composites. The coir's low density, considerable tensile strength, and biodegradability contribute to its eco-friendliness and its potential for reducing the environmental hazards of synthetic automotive components.
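Design Expert's multi-response optimisation is typically based on desirability functions; the snippet below sketches that general idea with purely illustrative target ranges, not the study's fitted models or limits.

```python
import numpy as np

def desirability_max(y, low, target):
    """Desirability for a 'maximize' response: 0 at/below `low`,
    1 at/above `target`, linear in between (Derringer-style)."""
    return float(np.clip((y - low) / (target - low), 0.0, 1.0))

# Hypothetical (value, low, target) triples for illustration only
responses = {
    "tensile_MPa":  (49.333, 20.0, 50.0),
    "flexural_MPa": (57.118, 30.0, 60.0),
    "impact_kJm2":  (34.787, 10.0, 40.0),
}
d = [desirability_max(y, lo, t) for y, lo, t in responses.values()]
overall = float(np.prod(d) ** (1.0 / len(d)))   # geometric mean of d_i
print(f"overall desirability = {overall:.3f}")
```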

Keywords: coir, composite, coir fiber, coconut husk, polymer, automobile, mechanical test

Procedia PDF Downloads 54
2358 Development and Validation of High-Performance Liquid Chromatography Method for the Determination and Pharmacokinetic Study of Linagliptin in Rat Plasma

Authors: Hoda Mahgoub, Abeer Hanafy

Abstract:

Linagliptin (LNG) belongs to the dipeptidyl peptidase-4 (DPP-4) inhibitor class. DPP-4 inhibitors represent a new therapeutic approach to the treatment of type 2 diabetes in adults. The aim of this work was to develop and validate an accurate and reproducible HPLC method for the determination of LNG in rat plasma with high sensitivity. The method involved separation of both LNG and pindolol (internal standard) at ambient temperature on a Zorbax Eclipse XDB C18 column, with a mobile phase composed of 75% methanol : 25% formic acid 0.1% (pH 4.1) at a flow rate of 1.0 mL.min-1. UV detection was performed at 254 nm. The method was validated in compliance with ICH guidelines and found to be linear in the range of 5-1000 ng.mL-1. The limit of quantification (LOQ) was 5 ng.mL-1 based on 100 µL of plasma. The variations for intra- and inter-assay precision were less than 10%, and the accuracy values ranged between 93.3% and 102.5%. The extraction recovery (R%) was more than 83%. The method involved a single extraction step from a very small plasma volume (100 µL). The assay was successfully applied to an in-vivo pharmacokinetic study of LNG in rats administered a single oral dose of 10 mg.kg-1 LNG. The maximum concentration (Cmax) was 927.5 ± 23.9 ng.mL-1, and the area under the plasma concentration-time curve (AUC0-72) was 18285.02 ± 605.76 h.ng.mL-1. In conclusion, the good accuracy and low LOQ of the bioanalytical HPLC method were suitable for monitoring the full pharmacokinetic profile of LNG in rats. The main advantages of the method are its sensitivity, small sample volume, single-step extraction procedure, and short time of analysis.
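As an illustration of how Cmax and AUC0-72 are typically extracted from concentration-time data (the abstract does not state the software used), here is a minimal sketch with hypothetical sampling times and concentrations:

```python
import numpy as np

# Hypothetical concentration-time data (h, ng/mL) for illustration only
t = np.array([0, 0.5, 1, 2, 4, 8, 24, 48, 72], dtype=float)
c = np.array([0, 310, 620, 927.5, 850, 600, 310, 120, 40], dtype=float)

cmax = c.max()              # maximum observed plasma concentration
tmax = t[c.argmax()]        # time at which Cmax occurs
auc_0_72 = np.trapz(c, t)   # linear trapezoidal rule for AUC(0-72)

print(f"Cmax = {cmax} ng/mL at t = {tmax} h, AUC(0-72) = {auc_0_72:.1f} h*ng/mL")
```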

Keywords: HPLC, linagliptin, pharmacokinetic study, rat plasma

Procedia PDF Downloads 237
2357 Performance of HVOF Sprayed Ni-20Cr and Cr3C2-NiCr Coatings on Fe-Based Superalloy in an Actual Industrial Environment of a Coal-Fired Boiler

Authors: Tejinder Singh Sidhu

Abstract:

Hot corrosion has been recognized as a severe problem in steam-powered electricity generation plants and industrial waste incinerators, as it consumes material at an unpredictably rapid rate. Consequently, the load-carrying ability of components degrades quickly, eventually leading to catastrophic failure. The inability to either totally prevent hot corrosion or at least detect it at an early stage has resulted in several accidents, leading to loss of life and/or destruction of infrastructure. A number of countermeasures are currently in use or under investigation to combat hot corrosion, such as using inhibitors, controlling the process parameters, designing suitable industrial alloys, and depositing protective coatings. However, the protection system selected for a particular application must be practical, reliable, and economically viable. With the continuously rising cost of materials and increased material requirements, coating techniques have gained much greater importance in recent times; coatings can add value to products of up to 10 times the cost of the coating itself. Among the different coating techniques, thermal spraying has grown into a well-accepted industrial technology for applying overlay coatings onto the surfaces of engineering components, allowing them to function under extreme conditions of wear, erosion-corrosion, high-temperature oxidation, and hot corrosion. In this study, the hot corrosion performance of Ni-20Cr and Cr₃C₂-NiCr coatings developed by the High Velocity Oxy-Fuel (HVOF) process has been studied. The coatings were developed on an Fe-based superalloy, and experiments were performed in the actual industrial environment of a coal-fired boiler. The cyclic study was carried out around the platen superheater zone, where the temperature was around 1000°C, and was conducted for 10 cycles, each cycle consisting of 100 hours of heating followed by 1 hour of cooling at ambient temperature. Both coatings imparted better hot corrosion resistance to the Fe-based superalloy than the uncoated condition, and the Ni-20Cr-coated superalloy performed better than the Cr₃C₂-NiCr-coated one in the actual working conditions of the coal-fired boiler. It is found that the formation of chromium oxide at the boundaries of the Ni-rich splats of the coating blocks the inward permeation of oxygen and other corrosive species to the substrate.

Keywords: hot corrosion, coating, HVOF, oxidation

Procedia PDF Downloads 76
2356 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems open opportunities for new research, ranging from hardware design to the discovery of new imaging applications. A simulation system that accurately models an imaging modality provides a platform for imaging developments that might be inconvenient in physical experimental systems due to expense, unnecessary radiation exposure, and technological difficulties. The aim of the present study is to validate a Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for a Siemens e-cam single-head gamma camera. After validation of the gamma camera simulation model by comparison of physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging was carried out. Methods: A thyroid phantom was defined geometrically, comprising two lobes 80 mm in diameter, 1 hot spot, and 3 cold spots, accurately resembling the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired with the simulation model and with the actual experimental setup. After image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the image contrast, the radioactivity distribution in the image, and the dimensions of the hot spot. The algorithm for each quantification is described in detail. The differences between estimated and actual values were analyzed, for both simulation and experiment, for the radioactivity distribution and the dimensions of the hot spot. Results: The difference between the contrast levels of the simulated and experimental images is within 2%, and the difference in total counts between the simulation and the actual study is 0.4%. The activity estimation shows relative differences between estimated and actual activity of 4.62% for the experiment and 3.03% for the simulation. The deviation in the estimated diameter of the hot spot is the same for both the simulation and the experimental study, namely 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and the experimental data.
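The count and contrast quantifications can be sketched in a few lines; the ROI positions, radii, and the Poisson stand-in image below are illustrative assumptions, not the study's actual algorithm parameters:

```python
import numpy as np

def roi_mean(img, cy, cx, r):
    """Mean counts inside a circular ROI of radius r centred at (cy, cx)."""
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    return img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2].mean()

# Stand-in 128x128 planar image (Poisson counts) for illustration only
img = np.random.poisson(30.0, size=(128, 128)).astype(float)

total_counts = img.sum()
hot = roi_mean(img, 40, 64, 5)    # hypothetical hot-spot ROI
bg = roi_mean(img, 90, 64, 10)    # hypothetical background ROI
contrast = (hot - bg) / bg        # one common ROI contrast definition
print(total_counts, contrast)
```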

Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 265
2355 The Use of a Novel Visual Kinetic Demonstration Technique in Student Skill Acquisition of the Sellick Cricoid Force Manoeuvre

Authors: L. Nathaniel-Wurie

Abstract:

The Sellick manoeuvre, i.e., the application of cricoid force (CF), was first described by Brian Sellick in 1961. CF is the application of digital pressure against the cricoid cartilage, the posterior force compressing the oesophagus against the vertebrae. This is designed to prevent passive regurgitation of gastric contents, which is a major cause of morbidity and mortality during emergency airway management inside and outside the hospital. To the author's knowledge, there is no universally standardised training modality and, therefore, no reliable way to examine whether outcomes are appropriate. If force is not measured during training, one cannot conclude that appropriate, accurate, or precise amounts of force are being used routinely. Poor homogeneity in teaching and untested outcomes will correlate with reduced efficacy and increased adverse effects. For this study, the accuracy of force delivery was tested in 20 trained operating department practitioners (with a mean of 5.3 years of experience performing CF), and outcomes were contrasted with those of 40 novice students randomised into one of two arms. Arm A had the procedure explained and demonstrated, then performed CF while the force generated was measured three times. Arm B followed the same process as Arm A but, before being tested, had 10 and 30 newtons applied to their hands to build an intuitive understanding of the required force; they were then asked to reproduce the equivalent force against a visible force meter and hold it for 20 seconds, allowing direct visualisation and correction of any over- or under-estimation. Arm B were then asked to perform the manoeuvre, with the force generated measured three times. This study shows a wide distribution of the force produced by trained professionals and by novices performing the procedure for the first time. Our methodology for teaching the manoeuvre improved accuracy, precision, and homogeneity within the group compared to novices, and even outperformed trained practitioners. In conclusion, if this methodology is adopted, it may correlate with better clinical outcomes, fewer adverse events, and more successful airway management in critical medical scenarios.

Keywords: airway, cricoid, medical education, sellick

Procedia PDF Downloads 74
2354 Numerical Analysis of Core-Annular Blood Flow in Microvessels at Low Reynolds Numbers

Authors: L. Achab, F. Iachachene

Abstract:

In microvessels, red blood cells (RBCs) tend to migrate towards the vessel centre, establishing a core-annular flow pattern. The core region, marked by a high concentration of RBCs, is governed by significantly non-Newtonian viscosity, while the annular layer, composed of cell-free plasma, is characterized by low Newtonian viscosity. This property enables the plasma layer to act as a lubricant for the vessel walls, efficiently reducing resistance to the movement of blood cells. In this study, we investigate the factors influencing blood flow in microvessels and the thickness of the annular plasma layer using an immiscible-fluids approach in a 2D axisymmetric geometry. The governing equations of incompressible unsteady flow are solved numerically with the Volume of Fluid (VOF) method to track the interface between the two immiscible fluids. To model blood viscosity in the core region, we adopt the Quemada constitutive law, which accurately captures the shear-thinning rheology of blood over a wide range of shear rates. Our results are compared with an established theoretical approach under identical flow conditions, particularly for the radial velocity profile and the thickness of the annular plasma layer. The simulation findings at low Reynolds numbers demonstrate notable agreement with the theoretical solution, emphasizing the pivotal role of the blood's rheological properties in the core region in determining the thickness of the annular plasma layer.
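Several parameterisations of the Quemada law exist in the literature; one commonly used shear-rate-dependent form is sketched below, with parameter values chosen for illustration rather than taken from the study:

```python
import numpy as np

def quemada_viscosity(gamma_dot, phi, mu_p, k0, k_inf, gamma_c):
    """One common form of the Quemada model:
        mu = mu_p * (1 - 0.5 * k * phi)**-2
        k  = (k0 + k_inf * sqrt(gd/gc)) / (1 + sqrt(gd/gc))
    gamma_dot: shear rate (1/s), phi: RBC volume fraction,
    mu_p: plasma viscosity (Pa*s), gamma_c: critical shear rate (1/s)."""
    s = np.sqrt(gamma_dot / gamma_c)
    k = (k0 + k_inf * s) / (1.0 + s)
    return mu_p * (1.0 - 0.5 * k * phi) ** -2

# Illustrative parameters only (not fitted values from this study)
print(quemada_viscosity(10.0, 0.45, 1.2e-3, k0=4.0, k_inf=1.8, gamma_c=1.9))
```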

Keywords: core-annular flows, microvessels, Quemada model, plasma layer thickness, volume of fluid method

Procedia PDF Downloads 44
2353 An Analytical Formulation of Pure Shear Boundary Condition for Assessing the Response of Some Typical Sites in Mumbai

Authors: Raj Banerjee, Aniruddha Sengupta

Abstract:

An earthquake event, associated with a typical fault rupture, initiates at the source, propagates through rock or soil, and finally daylights at a surface that might be a populous city. The detrimental effects of an earthquake are often quantified in terms of the responses of superstructures resting on the soil; hence, there is a need to estimate the amplification of bedrock motions due to local site conditions. In the present study, field borehole log data for the Mangalwadi and Walkeswar sites in Mumbai are considered. The data consist of the variation of SPT N-value with soil depth. A correlation between shear wave velocity (Vₛ) and SPT N-value for various soil profiles of Mumbai has been developed from existing correlations and is used for the site response analysis. A MATLAB program is developed for the ground response analysis, performing two-dimensional linear and equivalent-linear analyses for some typical Mumbai soil sites using a pure shear (multi-point constraint) boundary condition. The model is validated in the linear elastic and equivalent-linear domains against the popular commercial program DEEPSOIL. Three actual earthquake motions are selected based on their frequency contents and durations and scaled to a PGA of 0.16g for the present ground response analyses. The results are presented in terms of peak acceleration time history with depth, peak shear strain time history with depth, Fourier amplitude versus frequency, and the response spectrum at the surface. The peak ground acceleration amplification factors are found to be about 2.374, 3.239, and 2.4245 for the Mangalwadi site and 3.42, 3.39, and 3.83 for the Walkeswar site using the 1979 Imperial Valley, 1989 Loma Gilroy, and 1987 Whittier Narrows earthquakes, respectively. In the absence of any site-specific response spectrum for the chosen sites in Mumbai, the generated surface spectrum may be utilized for the design of superstructures at these locations.
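As a sketch of the calculations involved, a power-law Vs-SPT correlation and the resulting amplification factor could look like the following; the coefficients shown are generic literature-style values (Imai-Tonouchi type), not the site-specific correlation developed in the paper:

```python
import numpy as np

def vs_from_spt(n_spt, a=97.0, b=0.314):
    """Empirical power-law correlation Vs = a * N**b (m/s).
    The coefficients a, b are placeholders of the Imai-Tonouchi type;
    the paper fits its own correlation for Mumbai soil profiles."""
    return a * np.asarray(n_spt, dtype=float) ** b

# Amplification factor = peak surface acceleration / input bedrock PGA
pga_input = 0.16              # g, scaled input motion as in the study
pga_surface = 0.16 * 2.374    # g, hypothetical analysis output
print(vs_from_spt([10, 25, 50]), pga_surface / pga_input)
```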

Keywords: deepsoil, ground response analysis, multi point constraint, response spectrum

Procedia PDF Downloads 176
2352 Switching of Series-Parallel Connected Modules in an Array for Partially Shaded Conditions in a Pollution Intensive Area Using High Powered MOSFETs

Authors: Osamede Asowata, Christo Pienaar, Johan Bekker

Abstract:

Photovoltaic (PV) modules may become a trend for future PV systems because of their greater flexibility in distributed system expansion, easier installation due to their modular nature, and higher system-level energy harvesting capability under shaded or PV manufacturing mismatch conditions, as compared to single or multi-string inverters. Novel residential-scale PV arrays are commonly connected to the grid either by a single DC-AC inverter connected to a series, parallel, or series-parallel string of PV panels, or by many small DC-AC inverters which connect one or two panels directly to the AC grid. With increasing worldwide interest in sustainable energy production and use, there is renewed focus on the power electronic converter interface for DC energy sources. Three specific examples of such DC energy sources that will have a role in distributed generation and sustainable energy systems are the photovoltaic (PV) panel, the fuel cell stack, and batteries of various chemistries. A high-efficiency inverter using Metal Oxide Semiconductor Field-Effect Transistors (MOSFETs) for all active switches is presented for non-isolated photovoltaic and AC-module applications. The proposed configuration features high efficiency over a wide load range, low ground leakage current, and low output AC-current distortion, with no need for split capacitors. The detailed power stage operating principles, pulse width modulation scheme, multilevel bootstrap power supply, and integrated gate drivers for the proposed inverter are described. Experimental results from a hardware prototype show not only that the MOSFETs operate efficiently in the system, but also that the ground leakage current issues are alleviated in the proposed inverter, with a maximum driver circuit efficiency of 98% achieved. This, in turn, supports a possible photovoltaic panel switching technique that will help reduce the effect of cloud movements and improve the overall efficiency of the system.

Keywords: grid connected photovoltaic (PV), Matlab efficiency simulation, maximum power point tracking (MPPT), module integrated converters (MICs), multilevel converter, series connected converter

Procedia PDF Downloads 117
2351 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species

Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel

Abstract:

Populus spp. (poplar) are the fastest-growing trees in North America, making them ideal for a range of applications, as they achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar undergoes biochemical conversion to fuels without complexity, making it one of the most promising purpose-grown woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, the preservation of robust forest ecosystems, and economic prospects for rural communities. To better understand the potential of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA), an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA focused on a short-rotation coppice system employing single-pass cut-and-chip harvesting for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how different costs affect the economic performance of the poplar cropping system. This analysis determined the minimum average delivered selling price for one metric ton of biomass necessary to achieve a desired rate of return over the cropping period. To inform the TEA, data on establishment, crop care activities, and crop yields were derived from field studies conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and the Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
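A minimal sketch of how such a breakeven (minimum) selling price can be computed from discounted cash flows is shown below; all cost, yield, and discount-rate figures are placeholders, not results of the study:

```python
# Find the delivered biomass price (per Mg) at which NPV = 0 for a
# target rate of return. All numbers below are hypothetical.

def breakeven_price(costs, yields_mg, rate):
    """costs[t]: cash outflow in year t; yields_mg[t]: biomass sold (Mg)
    in year t. Returns the price per Mg giving NPV = 0 at discount `rate`."""
    pv_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    pv_yield = sum(y / (1 + rate) ** t for t, y in enumerate(yields_mg))
    return pv_cost / pv_yield

costs = [1200, 250, 250, 400, 250, 250, 400]   # $/ha: establishment + upkeep
yields_mg = [0, 0, 0, 25, 0, 0, 25]            # Mg/ha harvested (coppice cycles)
print(f"minimum selling price = {breakeven_price(costs, yields_mg, 0.06):.2f} $/Mg")
```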

Keywords: biomass, populus species, sensitivity analysis, technoeconomic analysis

Procedia PDF Downloads 77
2350 Platelet Volume Indices: Emerging Markers of Diabetic Thrombocytopathy

Authors: Mitakshara Sharma, S. K. Nema

Abstract:

Diabetes mellitus (DM) is a metabolic disorder prevalent in pandemic proportions, incurring significant morbidity and mortality due to the associated vascular angiopathies. Platelet-related thrombogenesis plays a key role in the pathogenesis of these complications. Most patients with type II DM suffer from preventable vascular complications, and early diagnosis can help manage these successfully. The complications are attributed to platelet activation, which can be recognised by an increase in the platelet volume indices (PVI), viz. mean platelet volume (MPV) and platelet distribution width (PDW). This study was undertaken with the aim of finding a relationship between PVI and the vascular complications of diabetes mellitus, assessing their importance as a causal factor in these complications, and evaluating their use as markers for the early detection of impending vascular complications in patients with poor glycaemic status. This cross-sectional study was conducted over 2 years with a total of 930 subjects. The subjects were segregated into three groups on the basis of glycosylated haemoglobin (HbA1c): (a) diabetic, (b) non-diabetic, and (c) subjects with impaired fasting glucose (IFG), with 300 individuals in each of the IFG and non-diabetic groups and 330 in the diabetic group. The diabetic group was further divided into two groups: (a) diabetics with diabetes-related vascular complications and (b) diabetics without such complications. Samples for HbA1c and platelet indices were collected using ethylenediaminetetraacetic acid (EDTA) as the anticoagulant and processed on a SYSMEX XS-800i autoanalyser. The study revealed a stepwise increase in PVI from non-diabetics to IFG to diabetics. The MPV values of diabetics, IFG subjects, and non-diabetics were 17.60 ± 2.04, 11.76 ± 0.73, and 9.93 ± 0.64 fl, and the PDW values were 19.17 ± 1.48, 15.49 ± 0.67, and 10.59 ± 0.67 fl, respectively, with a significant p-value of 0.00 and significant positive correlations (MPV-HbA1c r = 0.951; PDW-HbA1c r = 0.875). However, a significant negative correlation was found between glycaemic levels and total platelet count (PC-HbA1c r = -0.164). The MPV and PDW of subjects with and without diabetes-related complications were (15.14 ± 1.04) fl and (17.51 ± 0.39) fl, and (18.96 ± 0.83) fl and (20.09 ± 0.98) fl, respectively, with a significant p-value of 0.00. The current study demonstrates raised platelet indices and reduced platelet counts with rising glycaemic levels and with diabetes-related vascular complications across the study groups, showing that platelet morphology is altered with increasing glycaemic levels. These changes can be detected by measuring the PVI, which are simple, cost-effective, and readily available indicators of impending vascular complications in patients with deranged glycaemic control. PVI should be researched and explored further as surrogate markers to develop a clinical tool for the early recognition of diabetes-related vascular changes and thereby help prevent them; they can prove especially useful in developing countries with limited resources. This multi-parameter, comprehensive, and adequately powered study represents a pioneering effort in India, in that both platelet indices (MPV and PDW) and platelet count have been evaluated together for the first time in diabetics, non-diabetics, and patients with IFG, as well as in diabetic patients with and without diabetes-related vascular complications.

Keywords: diabetes, HbA1C, IFG, MPV, PDW, PVI

Procedia PDF Downloads 232
2349 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study

Authors: Thomas Arink, Isam Janajreh

Abstract:

The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil, and oily sludge. Drill cuttings are a product of offshore drilling rigs and contain wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different onshore sites and also contains TPH. The oily sludge is mainly residue or tank-bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method in which the waste material is heated to 450ºC in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock showed that less than 50% of the TPH is released; the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), making its calorific value too low for gasification on its own. Since the gasification process occurs at 900ºC and above, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive-flow CFD model, and 3. experimental work on a drop tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis), and calorific value measurements (bomb calorimeter) to obtain the input parameters for the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization together with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.) for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
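The zero-dimensional equilibrium calculation can be illustrated with a constrained Gibbs minimisation over a small species set; the dimensionless standard chemical potentials and feed composition below are placeholders, and a real model would use temperature-dependent thermodynamic data:

```python
import numpy as np
from scipy.optimize import minimize

# Minimal Gibbs-energy-minimisation sketch for an ideal-gas C-H-O system.
# mu0_RT are placeholder values of mu0/RT at the gasifier temperature.
species = ["CO", "CO2", "H2", "H2O", "CH4"]
mu0_RT = np.array([-25.0, -47.0, -2.0, -28.0, -5.0])
A = np.array([[1, 1, 0, 0, 1],    # C balance
              [0, 0, 2, 2, 4],    # H balance
              [1, 2, 0, 1, 0]])   # O balance
b = np.array([1.0, 2.4, 1.0])     # element moles in the feed (hypothetical)

def gibbs(n):
    n = np.maximum(n, 1e-12)
    # G/RT = sum_i n_i * (mu0_i/RT + ln y_i), assuming P = 1 atm
    return float(n @ (mu0_RT + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.full(5, 0.2), method="SLSQP",
               bounds=[(1e-12, None)] * 5,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
print(dict(zip(species, res.x / res.x.sum())))  # equilibrium mole fractions
```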

Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires

Procedia PDF Downloads 513
2348 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction

Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun

Abstract:

Usability has become a basic product requirement from the consumer's perspective, and a product that fails this requirement ends up unused. Identifying usability issues by analyzing the quantitative and qualitative data collected from usability testing and evaluation activities aids the product design process, yet the lack of studies on analysis methodologies for qualitative text data in the usability field limits the potential of these data for more useful applications. Analyzing such qualitative text data has become possible with the rapid development of data analysis fields such as natural language processing, for understanding human language computationally, and machine learning, which provides predictive models and clustering tools. Therefore, this research aims to study the capability of text processing algorithms in the analysis of qualitative text data collected from usability activities. The research utilized datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text processing algorithm, includes mapping the comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the comment-vector clusters. The results show 'volume and music control button' as the usability feature that matches best with the clusters of comment vectors: the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, participants experienced less confusion, and thus the comments mentioned only the buttons' positions. When the volume and music control were designed as a single button, participants experienced interface issues with the buttons, such as the operating methods of functions and confusion between function buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms for analyzing qualitative text data from usability testing and evaluations.
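A minimal sketch of the comment-to-vector-space-and-cluster pipeline (using TF-IDF and k-means as stand-ins for the study's exact algorithm, with invented toy comments) might look like this:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for survey comments (the study used LG headset data)
comments = [
    "volume button is placed too far back",
    "music control button position feels awkward",
    "hard to tell volume and music buttons apart",
    "single button confuses volume with track control",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(comments)                  # comments -> vector space
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Centroid comment of each cluster = comment closest to the cluster centre
for c in range(km.n_clusters):
    idx = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[c], axis=1)
    print(c, comments[idx[np.argmin(d)]])
```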

Keywords: usability, qualitative data, text-processing algorithm, natural language processing

Procedia PDF Downloads 278
2347 Dataset Quality Index: Development of a Composite Indicator Based on Standard Data Quality Indicators

Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros

Abstract:

Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes considerable time to data quality processes, while a data project without such awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the most time-consuming processes is defining the expectations and measurements of data quality, because the expectations differ with the purpose of each data project. In particular, a big data project that involves many datasets and stakeholders can take a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that describes the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we identified indicators that can be measured directly on the data within the datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required in the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results showed that: (1) the developed indicators and measurements comprise ten useful indicators; (2) in developing the data quality dimensions based on statistical characteristics, the ten indicators could be reduced to four dimensions; and (3) the developed composite indicator, the SDQI, describes the overall quality of each dataset and separates datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall, meaningful description of data quality within datasets. The SDQI can be used to assess all data in a data project, for effort estimation, and for prioritization. It also works well with agile methods, by using the SDQI for assessment in the first sprint; after passing this initial evaluation, more specific data quality indicators can be added in the next sprint.
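A toy sketch of the indicator-to-dimension-to-composite aggregation is shown below; the indicator names, grouping, weights, and level thresholds are illustrative assumptions, not the study's factor-analysis results:

```python
import numpy as np

# Indicator scores are assumed already normalised to [0, 1].
indicators = {"completeness": 0.92, "validity": 0.85, "uniqueness": 0.99,
              "consistency": 0.70, "timeliness": 0.60}
dimensions = {"intrinsic": ["completeness", "validity", "uniqueness"],
              "contextual": ["consistency", "timeliness"]}
dim_weights = {"intrinsic": 0.6, "contextual": 0.4}   # hypothetical weights

# Aggregate indicators into dimensions, then dimensions into the composite
dim_scores = {d: np.mean([indicators[i] for i in names])
              for d, names in dimensions.items()}
sdqi = sum(dim_weights[d] * s for d, s in dim_scores.items())
level = "Good" if sdqi >= 0.8 else "Acceptable" if sdqi >= 0.5 else "Poor"
print(f"SDQI = {sdqi:.2f} -> {level} Quality")
```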

Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis

Procedia PDF Downloads 129
2346 Nonlinear Vibration of FGM Plates Subjected to Acoustic Load in Thermal Environment Using Finite Element Modal Reduction Method

Authors: Hassan Parandvar, Mehrdad Farid

Abstract:

In this paper, a finite element model is presented for the large-amplitude vibration of functionally graded material (FGM) plates subjected to combined random pressure and thermal load. The material properties of the plates are assumed to vary continuously in the thickness direction according to a simple power-law distribution in terms of the volume fractions of the constituents. The material properties depend on temperature, whose distribution along the thickness can be expressed explicitly. The von Kármán large-deflection strain-displacement relations and the extended Hamilton's principle are used to obtain the governing system of equations of motion in structural node degrees of freedom (DOF) using the finite element method. A three-node triangular Mindlin plate element with a shear correction factor is used. The nonlinear equations of motion in structural degrees of freedom are reduced by the modal reduction method, and the reduced equations of motion are solved numerically by a fourth-order Runge-Kutta scheme. In this study, the random pressure is generated using the Monte Carlo method. The modeling is verified, and the nonlinear dynamic response of FGM plates is studied for various volume fractions and sound pressure levels under different thermal loads. Snap-through-type behavior of the FGM plates is studied as well.
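The simple power-law through-thickness distribution mentioned above can be written down directly; the property values in the example are generic ceramic/metal figures used for illustration only:

```python
import numpy as np

def fgm_property(z, h, p_top, p_bottom, n):
    """Power-law through-thickness distribution commonly used for FGM plates:
        P(z) = (P_t - P_b) * (z/h + 1/2)**n + P_b,   -h/2 <= z <= h/2
    where n is the volume-fraction exponent."""
    return (p_top - p_bottom) * (z / h + 0.5) ** n + p_bottom

h = 0.01                               # plate thickness (m), hypothetical
z = np.linspace(-h / 2, h / 2, 5)
# Illustrative Young's moduli: ceramic top, metal bottom (Pa)
print(fgm_property(z, h, p_top=380e9, p_bottom=70e9, n=2.0))
```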

Keywords: nonlinear vibration, finite element method, functionally graded material (FGM) plates, snap-through, random vibration, thermal effect

Procedia PDF Downloads 256
2345 Evaluation of the Self-Organizing Map and the Adaptive Neuro-Fuzzy Inference System Machine Learning Techniques for the Estimation of Crop Water Stress Index of Wheat under Varying Application of Irrigation Water Levels for Efficient Irrigation Scheduling

Authors: Aschalew C. Workneh, K. S. Hari Prasad, C. S. P. Ojha

Abstract:

The crop water stress index (CWSI) is a cost-effective, non-destructive, and simple technique for tracking the onset of crop water stress. This study investigated the feasibility of using CWSI derived from canopy temperature to detect the water status of wheat crops. Artificial intelligence (AI) techniques have become increasingly popular in recent years for determining the CWSI. In this study, the performance of two AI techniques, the adaptive neuro-fuzzy inference system (ANFIS) and self-organizing maps (SOM), is compared in determining the CWSI of the crop. Field experiments were conducted under varying irrigation water applications during two seasons in 2022 and 2023 at the irrigation field laboratory of the Civil Engineering Department, Indian Institute of Technology Roorkee, India. The ANFIS- and SOM-simulated CWSI values were compared with the experimentally calculated CWSI (EP-CWSI). Multiple regression analysis was used to determine the upper and lower CWSI baselines: the upper baseline was found to be a function of crop height and wind speed, while the lower baseline was a function of crop height, air vapour pressure deficit, and wind speed. The performance of ANFIS and SOM was compared on the basis of mean absolute error (MAE), mean bias error (MBE), root mean squared error (RMSE), index of agreement (d), Nash-Sutcliffe efficiency (NSE), and coefficient of correlation (R²). Both models successfully estimated the CWSI of the crop, with high correlation coefficients and low statistical errors; however, ANFIS (R²=0.81, NSE=0.73, d=0.94, RMSE=0.04, MAE=0.00-1.76, MBE=-2.13 to 1.32) outperformed the SOM model (R²=0.77, NSE=0.68, d=0.90, RMSE=0.05, MAE=0.00-2.13, MBE=-2.29 to 1.45). Overall, the results suggest that ANFIS is a more reliable tool than SOM for accurately determining the CWSI in wheat crops.
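For reference, the empirical CWSI and one of the reported goodness-of-fit metrics can be sketched as follows; the canopy temperatures, baselines, and "model output" values are hypothetical:

```python
import numpy as np

def cwsi(t_canopy, t_lower, t_upper):
    """Empirical CWSI: 0 = no stress, 1 = fully stressed.
    t_lower / t_upper are the well-watered (lower) and non-transpiring
    (upper) canopy-temperature baselines."""
    return (t_canopy - t_lower) / (t_upper - t_lower)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect match."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = cwsi(np.array([28.0, 30.5, 33.0]), 26.0, 36.0)   # hypothetical data
sim = np.array([0.18, 0.48, 0.66])                     # e.g. an ANFIS output
print(obs, nse(obs, sim), np.sqrt(np.mean((obs - sim) ** 2)))  # NSE, RMSE
```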

Keywords: adaptive neuro-fuzzy inference system, canopy temperature, crop water stress index, self-organizing map, wheat

Procedia PDF Downloads 43
2344 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture

Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani

Abstract:

Global warming and its impact on climate change is one of the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. While the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling them. Oxy-fuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural-gas-fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is completed by means of process modelling. The heat capacity variation and supercritical CO2 with gaseous admixtures are considered in a multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. Pinch-point analysis is performed to extract the composite and grand composite curves for the heat exchanger. The relationship between the cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger is also evaluated: an increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the required power for the recycled gas compressor. The main challenge in the design of heat exchangers in power plants is the trade-off between capital and operational costs: achieving a lower ∆Tmin requires a larger heat exchanger, which means a higher capital cost but better heat recovery and a lower operational cost. ∆Tmin is therefore selected at the minimum point of the combined capital and operational cost curves. This study provides insight into the performance analysis and operating conditions of the NetPower oxy-combustion cycle based on its heat exchanger design.
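The capital/operational trade-off that fixes ∆Tmin can be illustrated with two hypothetical cost curves; the cost models below are placeholders chosen only to show the shape of the argument, not figures from the study:

```python
import numpy as np

# Illustrative trade-off: capital cost rises at small dT_min (larger area),
# operational cost rises at large dT_min (poorer heat recovery).
dt_min = np.linspace(2.0, 30.0, 200)        # K
capital = 5.0e6 / dt_min                    # hypothetical $/K-inverse model
operational = 4.0e4 * dt_min                # hypothetical linear model
total = capital + operational

best = dt_min[np.argmin(total)]             # minimum of the combined curve
print(f"optimum dT_min ~ {best:.1f} K")
```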

Keywords: carbon capture and storage, oxy-combustion, netpower cycle, oxy turbine cycles, zero emission, heat exchanger design, supercritical carbon dioxide, oxy-fuel power plant, pinch point analysis

Procedia PDF Downloads 199
2343 Relationships Between the Petrophysical and Mechanical Properties of Rocks and Shear Wave Velocity

Authors: Anamika Sahu

Abstract:

The Himalayas, like many mountainous regions, are susceptible to multiple hazards, and in recent times the frequency of such disasters has been continuously increasing due to extreme weather phenomena. These natural hazards are responsible for irreparable human and economic losses. The Indian Himalayas have repeatedly been ruptured by great earthquakes in the past and have the potential for a future large seismic event, as they fall within a seismic gap. Damage caused by earthquakes differs from locality to locality: during earthquakes, damage to structures is associated with the subsurface conditions and the quality of construction materials. So, for sustainable mountain development, prior site characterization is valuable for designing and constructing built space and for efficient mitigation of seismic risk. Both geotechnical and geophysical investigations of the subsurface are required to describe subsurface complexity. In mountainous regions, geophysical methods are gaining popularity, as areas can be studied without disturbing the ground surface, and these methods are time- and cost-effective. The MASW method is used to calculate Vs30, the average shear wave velocity for the top 30 m of soil. Shear wave velocity is considered the best stiffness indicator, and the average shear wave velocity up to 30 m is used in the National Earthquake Hazards Reduction Program (NEHRP) provisions (BSSC, 1994) and the Uniform Building Code (UBC, 1997) classifications. Parameters obtained through the geotechnical investigation have been integrated with the findings of the subsurface geophysical survey, and the joint interpretation has been used to establish inter-relationships of mineral constituents, various textural parameters, and unconfined compressive strength (UCS) with shear wave velocity. The results obtained through the MASW method fit well with the laboratory tests. In both cases, the mineral constituents and textural parameters (grain size, grain shape, grain orientation, and degree of interlocking) control the petrophysical and mechanical properties of the rocks and the behaviour of the shear wave velocity.
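A small sketch of the Vs30 computation and the associated NEHRP-style site classification is given below; the layered profile is hypothetical, and the class boundaries follow the commonly quoted NEHRP values:

```python
import numpy as np

def vs30(thicknesses_m, vs_m_s):
    """Time-averaged shear wave velocity of the top 30 m:
        Vs30 = 30 / sum(d_i / Vs_i),  with sum(d_i) = 30 m."""
    d, v = np.asarray(thicknesses_m, float), np.asarray(vs_m_s, float)
    assert abs(d.sum() - 30.0) < 1e-6
    return 30.0 / np.sum(d / v)

def nehrp_class(v):
    """NEHRP site classes (m/s): A >1500, B 760-1500, C 360-760,
    D 180-360, E <180."""
    return ("A" if v > 1500 else "B" if v > 760 else
            "C" if v > 360 else "D" if v > 180 else "E")

v = vs30([5, 10, 15], [180, 300, 500])   # hypothetical layered profile
print(f"Vs30 = {v:.0f} m/s, site class {nehrp_class(v)}")
```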

Keywords: MASW, mechanical, petrophysical, site characterization

Procedia PDF Downloads 81
2342 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model as a damage prediction method for accidental hydrogen explosions occurring in a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code "NuFD/FrontFlowRed". For simulating the unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model express the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we simulated two types of previous hydrogen explosion tests. The first is an open-space explosion test, in which the source was a prismatic 5.27 m3 volume containing a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m from the front surface of the source, which was ignited at the bottom centre by a spark. The other is a vented enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m2 on one side; the test was performed with ignition at the centre of the wall opposite the vent, using hydrogen-air mixtures with hydrogen concentrations close to 18 vol%. The results of the numerical simulations were compared with the previous experimental data to establish the accuracy of the numerical model, and we verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two previous explosion tests.

Keywords: deflagration, large eddy simulation, turbulent combustion, vented enclosure

Procedia PDF Downloads 236
2341 Economic and Environmental Assessment of Heat Recovery in Beer and Spirit Production

Authors: Isabel Schestak, Jan Spriet, David Styles, Prysor Williams

Abstract:

Breweries and distilleries are well known for their high water usage. The water consumption of a UK brewery reportedly ranges from 3-9 L per litre of beer, and that of a distillery from 7-45 L per litre of spirit. This includes product water such as mashing water, but also water for wort and distillate cooling and for cleaning tanks, casks, and kegs. When cooling towers are used, cooling water can be the dominant water consumer in a brewery or distillery. Interlinked with the high water use is a substantial heating requirement for mashing, wort boiling, or distillation, typically met by the combustion of fossil fuels such as gasoil. Many water and wastewater streams leave these processes hot, such as the returning cooling water or the pot ales; therefore, several options exist to improve the water and energy efficiency of beer and spirit production through heat recovery. Although these options are known in the sector, they are often not applied in practice due to the planning effort involved or financial obstacles. In this study, different possibilities and design options for heat recovery systems are explored in four breweries/distilleries in the UK and assessed from an economic as well as an environmental point of view. The eco-efficiency methodology, according to ISO 14045, is applied to combine both assessment criteria and determine the optimum solution for heat recovery in practice. The economic evaluation is based on the total value added (TVA), while the Life Cycle Assessment (LCA) methodology is applied to account for the environmental impacts of the installations required for heat recovery. The four case study businesses differ in (a) production scale, with mashing volumes ranging from 2,500 to 40,000 L, (b) the heating and cooling technology used, and (c) the extent to which heat recovery is or is not applied. This enables the evaluation of different cases for heat recovery based on empirical data. The analysis provides guidelines for practitioners in the brewing and distilling sector, in and outside the UK, for realising heat recovery measures. Financial and environmental payback times are showcased for heat recovery systems in the four case studies, which operate at different production scales. The results are expected to encourage the application of heat recovery where it is environmentally and economically beneficial, and ultimately to contribute to reducing the water and energy footprint of brewing and distilling businesses.

Keywords: brewery, distillery, eco-efficiency, heat recovery from process and waste water, life cycle assessment

Procedia PDF Downloads 116
2340 Organic Co-Polymer Monolithic Columns for Liquid Chromatography Mixed Mode Protein Separations

Authors: Ahmed Alkarimi, Kevin Welham

Abstract:

Organic mixed-mode monolithic columns were fabricated from glycidyl methacrylate-co-ethylene dimethacrylate-co-stearyl methacrylate, using glycidyl methacrylate and stearyl methacrylate as co-monomers representing 30% and 70%, respectively, of the liquid volume, with ethylene dimethacrylate as the crosslinker and 2,2-dimethoxy-2-phenylacetophenone as the free radical initiator. The monomers were mixed with a binary porogenic solvent comprising propan-1-ol and methanol (0.825 mL each). The monolith was formed by photopolymerization (365 nm) inside a borosilicate glass tube (1.5 mm ID, 3 mm OD, 50 mm length). The monolith was observed to have formed correctly by optical examination and generated a reasonable backpressure, approximately 650 psi at a flow rate of 0.2 mL min⁻¹ of 50:50 acetonitrile:water. The morphological properties of the monolithic columns were investigated using scanning electron microscopy images and Brunauer-Emmett-Teller analysis; the results showed that the monolith was formed properly, with a 19.98 ± 0.01 mm² surface area, 0.0205 ± 0.01 cm³ g⁻¹ pore volume, and 6.93 ± 0.01 nm average pore size. The polymer monolith was further investigated using proton nuclear magnetic resonance and Fourier transform infrared spectroscopy. The monolithic columns were then tested by high-performance liquid chromatography for their ability to separate different samples with a range of properties. The columns displayed both hydrophobic/hydrophilic and hydrophobic/ion exchange interactions with the compounds tested, indicating true mixed-mode separations, and exhibited significant separation of proteins.

Keywords: LC separation, protein separation, monolithic column, mixed mode

Procedia PDF Downloads 155
2339 Effect of Knowledge of Bubble Point Pressure on Estimating PVT Properties from Correlations

Authors: Ahmed El-Banbi, Ahmed El-Maraghi

Abstract:

PVT properties are needed as input data in all reservoir, production, and surface facilities engineering calculations. In the absence of PVT reports on valid reservoir fluid samples, engineers rely on PVT correlations to generate the required PVT data. The accuracy of PVT correlations varies, and no correlation group has been found to provide accurate results for all oil types. The effect of inaccurate PVT data on engineering calculations can be significant and is well documented in the literature. Bubble point pressure, however, can sometimes be obtained from external sources. In this paper, we show how to utilize a known bubble point pressure to improve the accuracy of PVT properties calculated from correlations. We conducted a systematic study using around 250 reservoir oil samples to quantify the effect of prior knowledge of the bubble point pressure. The samples spanned a wide range of oils, from very volatile oils to black oils and all the way to low-GOR oils. A method for shifting both the undersaturated and saturated sections of the PVT property curves to the correct bubble point is explained. Seven PVT correlation families were used in this study. All PVT properties (e.g., solution gas-oil ratio, formation volume factor, density, viscosity, and compressibility) were calculated using both the correct bubble point pressure and the correlation-estimated bubble point pressure, and the calculated properties were compared with actual laboratory-measured values. It was found that prior knowledge of the bubble point pressure, combined with the shifting technique presented in the paper, improved the correlation-estimated values by 10% to more than 30%, with the greatest improvement seen in the solution gas-oil ratio and the formation volume factor.
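The paper describes its own shifting technique; the sketch below shows one plausible implementation of re-anchoring a correlation-derived curve to a known bubble point, offered purely as an illustration rather than the authors' exact method:

```python
import numpy as np

def shift_to_known_pb(p_corr, prop_corr, pb_corr, pb_known):
    """Re-anchor a correlation-derived property curve to a known bubble
    point (one plausible scheme, not necessarily the paper's): rescale
    the pressure axis of the saturated branch by pb_known/pb_corr and
    translate the undersaturated branch so the curve stays continuous
    at pb_known."""
    p_corr = np.asarray(p_corr, float)
    p_new = np.where(p_corr <= pb_corr,
                     p_corr * pb_known / pb_corr,      # saturated branch
                     p_corr + (pb_known - pb_corr))    # undersaturated branch
    return p_new, np.asarray(prop_corr, float)

# Hypothetical Bo curve: correlation puts Pb at 2500 psia, lab says 2800 psia
p = np.array([500, 1500, 2500, 3500, 4500])
bo = np.array([1.10, 1.25, 1.40, 1.37, 1.34])
print(shift_to_known_pb(p, bo, pb_corr=2500.0, pb_known=2800.0)[0])
```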

Keywords: PVT data, PVT properties, PVT correlations, bubble point pressure

Procedia PDF Downloads 55
2338 Assessing the Feasibility of Italian Hydrogen Targets with the Open-Source Energy System Optimization Model TEMOA - Italy

Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

Hydrogen is expected to become a game changer in the energy transition, especially by enabling sector-coupling possibilities and the decarbonization of hard-to-abate end-uses. The Italian National Recovery and Resilience Plan identifies hydrogen as one of the key elements of the ecological transition needed to meet international decarbonization objectives, and includes it in several pilot projects for its early development in Italy. This matches the European energy strategy, which aims to make hydrogen a leading energy carrier of the future and sets ambitious goals to be accomplished by 2030. The huge efforts needed to achieve the announced targets require a careful investigation of their feasibility in terms of economic expenditure and technical aspects. In order to quantitatively assess the hydrogen potential within the Italian context and the feasibility of the planned investments and projects, this work uses the TEMOA-Italy energy system model to study pathways that meet the strict objectives cited above. The possible development of hydrogen has been studied on both the supply side and the demand side of the energy system, including storage options and distribution chains. The assessment covers alternative hydrogen production technologies competing in the market, reflecting the several possible investments set out in the Italian National Recovery and Resilience Plan to boost the development and spread of this infrastructure, and including the sector-coupling potential with natural gas through the currently existing infrastructure as well as CO2 capture for the production of synfuels. On the other hand, the hydrogen end-use phase covers a wide range of consumption alternatives: from fuel-cell vehicles, for which both road and non-road transport categories are considered, to uses in the steel and chemical industries and cogeneration for residential and commercial buildings. The model includes both high- and low-TRL technologies in order to provide outcomes as consistent for the future decades as for the present day, and since it is developed as an open-source code instance with an open database, transparency and accessibility are fully guaranteed.
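
To give a feel for the least-cost technology competition that an energy system optimization model such as TEMOA resolves, the sketch below poses a single-period hydrogen supply choice as a linear program. It is illustrative only: the technologies, cost and emission coefficients, capacity limits, and constraint set are hypothetical placeholders, not the TEMOA-Italy formulation or database.

```python
from scipy.optimize import linprog

# Decision variables: annual hydrogen output (kt) from three routes.
# All figures are hypothetical, chosen only to make the example solvable.
techs = ["electrolysis", "SMR+CCS", "biomass gasification"]
cost = [4.5, 2.0, 3.2]   # relative production cost per kt
co2 = [0.5, 1.5, 0.3]    # kt CO2 emitted per kt H2

demand = 500.0           # kt H2 per year that must be supplied
co2_cap = 400.0          # kt CO2 per year emissions budget

# linprog minimizes cost @ x subject to A_ub @ x <= b_ub.
res = linprog(
    c=cost,
    A_ub=[[-1.0, -1.0, -1.0],   # total output must cover demand
          co2],                 # total emissions must respect the cap
    b_ub=[-demand, co2_cap],
    bounds=[(0.0, 300.0)] * 3,  # per-technology capacity limits (kt)
    method="highs",
)
for tech, x in zip(techs, res.x):
    print(f"{tech}: {x:.1f} kt/yr")
```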

Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA

Procedia PDF Downloads 96
2337 An Economic Study for Fish Production in Egypt

Authors: Manal Elsayed Elkheshin, Rasha Saleh Mansour, Mohamed Fawzy Mohamed Eldnasury, Mamdouh Elbadry Mohamed

Abstract:

This research aims to identify the main factors affecting fish production and consumption in Egypt through econometric estimation of various functional forms for fish production and consumption during the period 1991-2014, and to forecast Egyptian fish production and consumption up to 2020 by selecting the best-fitting ARIMA models. The research also studies the economic feasibility of fish production in aquaculture farms; the investment cost represents the value of land, buildings, equipment and irrigation. The aquaculture operation raises three types of fish (tilapia, carp and mullet) on a total farm area of about one acre, with an annual fish production of about 3.5 tons. The annual investment costs are about 50,500 pounds. The analysis finds that the project can repay its investment costs after about 4 years and 5 months, with an internal rate of return (IRR) of about 22.1%; each pound invested in the project therefore earns an estimated annual return of 0.221 pounds, more than the opportunity cost, so implementation of the project is recommended. Recommendations: 1. Increase aquaculture to reduce the animal protein gap. 2. Increase the number of mechanized fishing boats and provide transport equipped to maintain the quality of fish production. 3. Encourage and attract local and foreign investment, providing advice to investors in the aquaculture field. 4. Publish awareness newsletters on the importance of such projects, which yield a net profit after payback in less than five years, with an IRR of about 23%, much more than the opportunity cost of a bank interest rate of about 7%; such projects help create employment opportunities for graduates, contribute to reducing fish imports, and improve the performance of the food trade balance.
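
As a quick check on figures of this kind, the bisection sketch below computes an internal rate of return from a cash flow series; the initial outlay matches the stated investment cost, but the annual inflows are hypothetical stand-ins, not the study's data.

```python
def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection: the discount rate at which
    the net present value (NPV) of the cash flow series equals zero."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:  # NPV falls as the rate rises for an outlay-first project
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical stand-in: 50,500-pound outlay followed by ten years of
# net inflows of 13,000 pounds; this yields an IRR near the reported 22%.
flows = [-50500] + [13000] * 10
print(f"IRR ~ {irr(flows):.1%}")
```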

Keywords: equation model, individual share, red meat, consumption, production, endogenous variable, exogenous variable, financial performance evaluation, fish culture, feasibility study, fish production, aquaculture

Procedia PDF Downloads 358
2336 Measuring the Embodied Energy of Construction Materials and Their Associated Cost Through Building Information Modelling

Authors: Ahmad Odeh, Ahmad Jrade

Abstract:

Energy assessment is an evidently significant factor when evaluating the sustainability of structures, especially at the early design stage. Today's design practices revolve around the selection of materials that reduce operational energy while still meeting disciplinary needs. Operational energy represents a substantial part of a building's lifecycle energy usage, but embodied energy remains an important, often unaccounted-for aspect of the carbon footprint. At the moment, little or no consideration is given to embodied energy, mainly due to the complexity of its calculation and the various factors involved. The equipment used, the fuel needed, and the electricity required for each material vary with location, so the embodied energy will differ for each project. Moreover, the methods and techniques used in manufacturing, transporting and placing a material significantly influence its embodied energy. This variability has made such energy usage difficult to calculate or even benchmark. This paper presents a model aimed at helping designers select construction materials based on their embodied energy. Moreover, it presents a systematic approach that uses an efficient method of calculation and ultimately provides new insight into construction material selection. The model is developed in a BIM environment and targets the quantification of embodied energy for construction materials through the three main stages of their life: manufacturing, transportation and placement. The model contains three major databases, each holding a set of the most commonly used construction materials. The first dataset holds information about the energy required to manufacture each type of material, the second includes information about the energy required to transport the materials, and the third stores information about the energy required by the tools and cranes needed to place an item in its intended location. The model provides designers with sets of all available construction materials and their associated embodied energies to use during the design process. Through geospatial data and dimensional material analysis, the model can also automatically calculate the distance between the factories and the construction site. To remain within the sustainability criteria set by LEED, a final database is created and used to calculate the overall construction cost based on RSMeans cost data and then automatically recalculate the costs for any modifications. Design criteria covering both operational and embodied energies will lead designers to re-evaluate current material selections for cost, energy, and, most importantly, sustainability.
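
The following sketch shows one plausible way to structure the three-stage calculation the model performs per material: manufacturing, transportation scaled by factory-to-site distance, and placement. The class, field names, and every intensity value are hypothetical illustrations, not entries from the paper's databases.

```python
from dataclasses import dataclass

@dataclass
class Material:
    name: str
    mass_kg: float                 # quantity taken off the BIM model
    manuf_mj_per_kg: float         # manufacturing (cradle-to-gate) intensity
    transport_mj_per_kg_km: float  # transport intensity per km hauled
    placement_mj_per_kg: float     # cranes/tools energy on site

def embodied_energy(m: Material, distance_km: float) -> float:
    """Sum the three life stages tracked by the model: manufacturing,
    transportation (scaled by distance), and placement."""
    return m.mass_kg * (m.manuf_mj_per_kg
                        + m.transport_mj_per_kg_km * distance_km
                        + m.placement_mj_per_kg)

# Hypothetical intensities for a concrete take-off and a 35 km haul.
concrete = Material("ready-mix concrete", 24_000, 1.1, 0.0013, 0.05)
print(f"{embodied_energy(concrete, distance_km=35):,.0f} MJ")
```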

Keywords: building information modelling, energy, life cycle analysis, sustainablity

Procedia PDF Downloads 265
2335 Experimental Study on Shaft Grouting Bearing Capacity of Small Diameter Bored Piles

Authors: Trung Le Thanh

Abstract:

Bored piles are often the optimal solution for high-rise building foundations. They have many advantages, such as large diameter, large pile length and constructability in a wide range of geological conditions. However, due to the characteristics of their construction, the load-bearing capacity of bored piles is often suboptimal, because shaft friction is reduced by poor contact between the pile and the surrounding soil. Grouting along the pile body therefore helps improve the load-bearing capacity of bored piles significantly by increasing the skin resistance between the pile and the surrounding soil. The improvement in skin resistance depends on the grouting parameters, especially grout volume, mortar viscosity and mortar strength, among others, and on the geological conditions. Studies show that pile grouting is more effective in sandy soil than in clay. This article presents an experimental model to determine the load-bearing capacity of bored piles with a diameter of 400 mm and a length of 3 m in sand, with different grout volumes, in Tan Uyen city, Binh Duong province. On that basis, the correlation between the load-bearing capacities of bored piles with and without shaft grouting is analyzed. The results show that the shaft resistance of grouted piles increases 2-3 times compared to ungrouted piles, and the pile's load-bearing capacity increases significantly. This research provides scientific support for consulting work on the design of shaft-grouted bored piles.
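
As a rough illustration of how the reported gain feeds into capacity, the sketch below uses the standard decomposition of ultimate pile capacity into shaft and base resistance and applies a multiplier to the shaft term; the unit resistances and the 2.5x factor are hypothetical placeholders within the 2-3x range the study reports.

```python
import math

def pile_capacity(d_m, length_m, fs_kpa, qb_kpa, shaft_factor=1.0):
    """Ultimate capacity (kN) as shaft plus base resistance:
    Q = fs * (pi * D * L) * shaft_factor + qb * (pi * D**2 / 4),
    where shaft_factor > 1 represents the gain from shaft grouting."""
    shaft_area = math.pi * d_m * length_m  # m^2
    base_area = math.pi * d_m ** 2 / 4     # m^2
    return fs_kpa * shaft_area * shaft_factor + qb_kpa * base_area

# Hypothetical unit resistances for a 400 mm x 3 m test pile in sand.
plain = pile_capacity(0.4, 3.0, fs_kpa=30, qb_kpa=1500)
grouted = pile_capacity(0.4, 3.0, fs_kpa=30, qb_kpa=1500, shaft_factor=2.5)
print(f"plain ~ {plain:.0f} kN, shaft-grouted ~ {grouted:.0f} kN")
```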

Keywords: bored pile, shaft grouting, bearing capacity, pile shaft resistance

Procedia PDF Downloads 59
2334 Agricultural Mechanization for Transformation

Authors: Lawrence Gumbe

Abstract:

Kenya Vision 2030 is the country's programme for transformation covering the period 2008 to 2030. Its objective is to help transform Kenya into a newly industrializing, middle-income country (with income exceeding US$10,000) providing a high quality of life to all its citizens by 2030, in a clean and secure environment. Increased agricultural production and productivity are crucial for the realization of Vision 2030, and mechanizing agriculture to achieve greater yields is the only way to achieve these objectives. There are contending groups and views on the strategy for agricultural mechanization. The first group comprises those who oppose the widespread adoption of advanced technologies (mostly internal combustion engines and tractors) in agricultural mechanization as entirely inappropriate in most situations in developing countries. This group argues that mechanically powered agricultural mechanization often leads to the displacement of labour and hence increased unemployment, which results in a host of other socio-economic problems, among them rural-urban migration, inequitable distribution of wealth and, in many cases, an increase in absolute poverty, as well as balance-of-payments problems due to the need to import machinery, fuel and sometimes technical assistance to manage them. The second group comprises those who view improved hand tools and animal-powered technology as a transitional step between the most rudimentary stage of technological development (characterized by entire reliance on human muscle power) and the advanced technologies (characterized by reliance on tractors and other machinery). The third group comprises those who regard these intermediate technologies (i.e., improved hand tools and draught animal technology in agriculture) as a 'delaying' tactic and advocate the mechanical technologies as the most appropriate. This group argues that alternatives to the mechanical technologies either do not exist as a practical matter or, if available, are inefficient and cannot be compared to the mechanical technologies in terms of economics and productivity. The fourth group advocates a compromise between the second and third groups above. It views improved hand tools and draught animal technology as more of an 18th-century technology, and the modern tractor and combine harvester as too advanced for developing countries; this group has been busy designing an 'intermediate', 'appropriate', 'mini' or 'micro' tractor for use by farmers in developing countries. This paper analyses and draws conclusions on the different agricultural mechanization strategies available to Kenya and other third-world countries.

Keywords: agriculture, mechanization, transformation, industrialization

Procedia PDF Downloads 330
2333 The Grade Six Pupils' Learning Styles and Their Achievements and Difficulties on Fractions Based on Kolb's Model

Authors: Faiza Abdul Latip

Abstract:

One of the ultimate goals of any nation is to produce competitive manpower, and the Philippines is no exception. Competence in the field of mathematics has a significant role in achieving this goal. However, mathematics and its topics are considered by most people to be the most difficult subject matter to learn. This is manifested in the low performance of students in national and international assessments. Educators have widely used learning style models to identify the ways students learn; moreover, such models can be the first step in identifying the difficulties each learner has with a particular topic, specifically concepts pertaining to fractions. As many educators have observed, students show difficulties in doing mathematical tasks, and to a great degree in dealing with fractions, most notably in the district of Datu Odin Sinsuat, Maguindanao. This study focused on the learning styles of grade six pupils in the Datu Odin Sinsuat district, along with their achievements and difficulties in learning concepts on fractions. Five hundred thirty-two pupils from ten different public elementary schools of the Datu Odin Sinsuat districts were purposively selected as the respondents of the study. A descriptive survey research design was employed. Quantitative analyses were made of the pupils' learning styles, based on the Kolb Learning Style Inventory (KLSI), and of their scores on a mathematics diagnostic test on fraction concepts. Simple frequency and percentage counts were used to analyze the pupils' learning styles and their achievements in fractions. To determine the pupils' difficulties in fractions, the index of difficulty of every item was computed. Lastly, the Kruskal-Wallis test was used to determine whether there was a significant difference in the pupils' achievements in fractions when classified by learning style, as sketched below. The test was set at the 0.05 level of significance, with a minimum H-value of 7.82 used to determine significance. The results revealed that the pupils of the Datu Odin Sinsuat districts learn fractions in varied ways, as they have different learning styles. However, their achievements in fractions are low regardless of their learning styles. Difficulties in learning fractions were found mostly in the areas of estimation, comparing/ordering, and the division interpretation of fractions. Most of the pupils find it very difficult to use a fraction as a measure, to compare or arrange series of fractions, and to use the concept of a fraction as a quotient.
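
The sketch below shows how a Kruskal-Wallis comparison of this kind could be run; the four score lists are invented placeholders grouped by Kolb learning style, not the study's data, and with four groups (df = 3) the 0.05 critical value of H is the 7.82 threshold cited above.

```python
from scipy.stats import kruskal

# Hypothetical diagnostic-test scores grouped by Kolb learning style.
diverger     = [12, 15, 9, 14, 11, 13]
assimilator  = [10, 13, 12, 9, 14, 11]
converger    = [14, 12, 16, 11, 13, 15]
accommodator = [9, 11, 10, 13, 12, 8]

h_stat, p_value = kruskal(diverger, assimilator, converger, accommodator)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")  # compare H with 7.82
```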

Keywords: difficulties in fraction, fraction, Kolb's model, learning styles

Procedia PDF Downloads 207