Search results for: abundance estimation
303 Assessing the Impact of Climate Change on Pulses Production in Khyber Pakhtunkhwa, Pakistan
Authors: Khuram Nawaz Sadozai, Rizwan Ahmad, Munawar Raza Kazmi, Awais Habib
Abstract:
Climate change and crop production are intrinsically linked. This research study therefore assesses the impact of climate change on pulses production in the southern districts of Khyber Pakhtunkhwa (KP) Province, Pakistan. Two pulses (chickpea and mung bean) were selected for this study with respect to climate change. Climatic variables such as temperature, humidity, and precipitation, along with pulses production and the area under pulses cultivation, were the major variables of this study. Secondary data on the climatic and crop variables for a period of thirty-four years (1986-2020) were obtained from the Pakistan Meteorological Department and the Agriculture Statistics of KP, respectively. Panel data sets for the chickpea and mung bean crops were estimated separately. The analysis validates that both data sets are balanced panels. The Hausman specification test was run separately on both panel data sets; its findings suggested that the fixed effect model is appropriate for the chickpea panel data, whereas the random effect model is appropriate for the mung bean panel data. Major findings confirm that maximum temperature is statistically significant for chickpea yield: if maximum temperature increases by 1 °C, chickpea yield can increase by 0.0463 units. The impact of precipitation, however, was insignificant. Furthermore, humidity was statistically significant and positively associated with chickpea yield. In the case of mung bean, minimum temperature contributed significantly to yield. This study concludes that temperature and humidity can significantly enhance pulses yield. It is recommended that capacity building of pulses growers be undertaken so that they can adopt climate change adaptation strategies.
Moreover, the government should ensure the availability of climate-change-resistant pulse varieties to encourage pulses cultivation.
Keywords: climate change, pulses productivity, agriculture, Pakistan
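For readers unfamiliar with the fixed-effects estimation selected here via the Hausman test, the core "within" transformation can be sketched in a few lines. This is a minimal illustration on hypothetical districts and a hypothetical temperature-yield slope, not the study's data or code:

```python
from collections import defaultdict

def within_estimator(panel):
    """Fixed-effects (within) slope of y on x.

    panel: list of (entity, x, y) tuples. Demeaning each entity's
    observations removes time-invariant entity effects before OLS.
    """
    groups = defaultdict(list)
    for entity, x, y in panel:
        groups[entity].append((x, y))

    num = den = 0.0
    for obs in groups.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Two hypothetical districts whose yields respond to temperature with
# slope 0.05 but have different district-level intercepts.
panel = [("A", 30, 1.0 + 0.05 * 30), ("A", 32, 1.0 + 0.05 * 32),
         ("B", 28, 2.0 + 0.05 * 28), ("B", 31, 2.0 + 0.05 * 31)]
print(round(within_estimator(panel), 4))  # recovers 0.05
```

The demeaning step removes each district's time-invariant effect, which is why the fixed-effects estimator is robust to unobserved district-level heterogeneity.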
Procedia PDF Downloads 443
302 Perceptual Image Coding by Exploiting Internal Generative Mechanism
Authors: Kuo-Cheng Liu
Abstract:
In perceptual image coding, the objective is to shape the coding distortion so that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. Although most research focuses on color image coding, perceptual-based quantizers developed for luminance signals are often applied directly to chrominance signals, making such color image compression methods inefficient. In this paper, the internal generative mechanism is integrated into the design of a color image compression method. A working model of the internal generative mechanism, based on structure-based spatial masking, is used to assess subjective distortion visibility thresholds that are more consistent with human vision. An estimation method for the structure-based distortion visibility thresholds of the color components is then presented in a locally adaptive way to design the quantization process in a wavelet color image compression scheme. Since the lowest subband coefficient matrix in the wavelet domain preserves the local properties of the image in the spatial domain, the error visibility threshold of each coefficient in the lowest subband of each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold of each coefficient in the other subbands of each color component is then estimated in a locally adaptive fashion based on distortion energy allocation. Because the error visibility thresholds are estimated from predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer requires no side information.
Experimental results show that the entropies of the three color components obtained with the proposed IGM-based color image compression scheme are lower than those obtained with an existing color image compression method at perceptually lossless visual quality.
Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain
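The threshold-driven quantization idea can be illustrated with a minimal uniform quantizer whose step is tied to a local visibility threshold, so the reconstruction error never exceeds it. The coefficients and thresholds below are hypothetical; the paper's actual scheme operates on wavelet coefficients with structure-masking-derived thresholds:

```python
def quantize(coeff, threshold):
    """Uniform quantizer whose step equals twice the local visibility
    threshold, so the reconstruction error stays within +/- threshold."""
    step = 2.0 * threshold
    index = round(coeff / step)
    return index, index * step

coeffs = [12.3, -4.1, 0.6]
thresholds = [1.5, 1.5, 0.5]   # hypothetical per-coefficient thresholds
for c, t in zip(coeffs, thresholds):
    idx, rec = quantize(c, t)
    assert abs(c - rec) <= t   # distortion stays below the threshold
```

A larger threshold means a coarser step and fewer distinct indices, which is how perceptual headroom is traded for bit-rate savings.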
Procedia PDF Downloads 248
301 Gas Chromatography Analysis, Antioxidant, Anti-Inflammatory, and Anticancer Activities of Some Extracts and Fractions of Linum usitatissimum
Authors: Eman Abdullah Morsi, Hend Okasha, Heba Abdel Hady, Mortada El-Sayed, Mohamed Abbas Shemis
Abstract:
Context: Linum usitatissimum (Linn.), known as flaxseed, is one of the most important medicinal plants, traditionally used for various health and nutritional purposes. Objective: To estimate total phenolic and flavonoid contents; to evaluate antioxidant activity using the α,α-diphenyl-β-picrylhydrazyl (DPPH), 2,2'-azinobis(3-ethylbenzthiazoline-6-sulphonic acid) (ABTS), and total antioxidant capacity (TAC) assays; and to investigate anti-inflammatory activity by the bovine serum albumin (BSA) assay and anticancer activity against the hepatocellular carcinoma cell line (HepG2) and breast cancer cell line (MCF7), applied to the hexane, ethyl acetate, n-butanol, and methanol extracts and to fractions of the methanol extract (hexane, ethyl acetate, and n-butanol). Materials and Methods: Phenolic and flavonoid contents were determined using spectrophotometric and colorimetric assays. Antioxidant and anti-inflammatory activities were estimated in vitro. The anticancer activity of the extracts and methanol-extract fractions was tested on HepG2 and MCF7 cells. Results: The methanol extract and its ethyl acetate fraction contained the highest total phenol and flavonoid contents. In addition, the methanol extract had the highest antioxidant activity. The butanol and ethyl acetate fractions yielded the highest percent inhibition of protein denaturation. Meanwhile, the ethyl acetate fraction and the methanol extract showed anticancer activity against HepG2 and MCF7 (IC50 = 60 ± 0.24 and 29.4 ± 0.12 µg/mL, and IC50 = 94.7 ± 0.21 and 227 ± 0.48 µg/mL, respectively). In gas chromatography-mass spectrometry (GC-MS) analysis, the methanol extract contained 32 compounds, whereas the ethyl acetate and butanol fractions contained 40 and 36 compounds, respectively. Conclusion: Flaxseed contains a variety of biologically active compounds with good and varied activities that can protect the human body against several diseases.
Keywords: phenolic content, flavonoid content, HepG2, MCF7, hemolysis-assay, flaxseed
Procedia PDF Downloads 126
300 Risk Assessment of Natural Gas Pipelines in Coal Mined Gobs Based on Bow-Tie Model and Cloud Inference
Authors: Xiaobin Liang, Wei Liang, Laibin Zhang, Xiaoyan Guo
Abstract:
Pipelines inevitably pass through coal mined gobs in mining areas, and the stability of these gobs has a great influence on pipeline safety. Extensive literature study and field research showed that there are few risk assessment methods for coal mined gob pipelines and a lack of data on gob sites. The fuzzy comprehensive evaluation method based on expert opinions is therefore widely used. However, the subjectivity or limited experience of individual experts may lead to inaccurate evaluation results, so the accuracy of the results needs to be further improved. This paper presents a comprehensive approach to this purpose by combining a bow-tie model with cloud inference. The evaluation process is as follows. First, a bow-tie model composed of a fault tree and an event tree is established to graphically illustrate the probability and consequence indicators of pipeline failure. Second, indicators are scored as intervals using the interval estimation method to improve the accuracy of the results, and the censored mean algorithm removes the maximum and minimum scores to improve their stability. The golden section method is used to determine the indicator weights and reduce the subjectivity of index weighting. Third, the failure probability and failure consequence scores of the pipeline are converted into three numerical features using cloud inference, which better describes the ambiguity and volatility of the risk level. Finally, cloud drop graphs of failure probability and failure consequences are produced, which intuitively and accurately illustrate the ambiguity and randomness of the results. A case study of a coal mined gob pipeline carrying natural gas validates the utility of the proposed method.
The evaluation results of this case show that the probability of pipeline failure is very low while the consequences of failure are serious, which is consistent with reality.
Keywords: bow-tie model, natural gas pipeline, coal mine gob, cloud inference
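The censored mean step described above (dropping the highest and lowest expert scores before averaging) can be sketched as follows; the scores below are hypothetical interval midpoints, not values from the case study:

```python
def censored_mean(scores):
    """Drop one maximum and one minimum before averaging, reducing the
    pull of extreme expert opinions (needs at least three scores)."""
    if len(scores) < 3:
        raise ValueError("need at least 3 scores")
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

# Hypothetical scores from five experts for one failure indicator:
# the outliers 9.5 and 2.0 are discarded, the middle three are averaged.
print(censored_mean([6.0, 7.5, 7.0, 9.5, 2.0]))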
Procedia PDF Downloads 250
299 Flood Simulation and Forecasting for Sustainable Planning of Response in Municipalities
Authors: Mariana Damova, Stanko Stankov, Emil Stoyanov, Hristo Hristov, Hermand Pessek, Plamen Chernev
Abstract:
We will present one of the first use cases on the DestinE platform, a joint initiative of the European Commission, the European Space Agency, and EUMETSAT that provides access to global Earth observation, meteorological, and statistical data, and emphasize the good practice of intergovernmental agencies acting in concert. Further, we will discuss the importance of space-based disruptive solutions for balancing the ever-increasing water-related disasters caused by climate change against the need to minimize their economic and societal impact. The use case focuses on forecasting floods and estimating the impact of flood events on the urban environment and the ecosystems of the affected areas, with the purpose of helping municipal decision-makers analyze and plan resource needs and of strengthening human-environment relationships by providing farmers with insightful information for improving their agricultural productivity. For the forecast, we adopt an EO4AI method from our ISME-HYDRO platform, in which a pipeline of neural networks is applied to in-situ measurements and satellite data of the meteorological factors influencing the hydrological and hydrodynamic status of rivers and dams (precipitation, soil moisture, vegetation index, snow cover) to model flood events and their extent. The ISME-HYDRO platform is an e-infrastructure for water resources management based on linked data, extended with further intelligence that generates forecasts with the method described above, raises alerts, formulates queries, provides superior interactivity, and drives communication with users. It provides synchronized visualization of table views, graph views, and interactive maps, and will be federated with the DestinE platform.
Keywords: flood simulation, AI, Earth observation, e-Infrastructure, flood forecasting, flood areas localization, response planning, resource estimation
Procedia PDF Downloads 21
298 The Impact of Human Intervention on Net Primary Productivity for the South-Central Zone of Chile
Authors: Yannay Casas-Ledon, Cinthya A. Andrade, Camila E. Salazar, Mauricio Aguayo
Abstract:
The sustainable management of available natural resources is a crucial question for policy-makers, economists, and the research community. Land is one of the most critical of these resources, being intensively appropriated by human activities that produce ecological stress and reduce ecosystem services. In this context, net primary production (NPP) has been considered a feasible proxy indicator for estimating the impact of human intervention on land-use intensity. Accordingly, the human appropriation of NPP (HANPP) was calculated for the south-central regions of Chile between 2007 and 2014. HANPP was defined as the difference between the potential NPP of natural vegetation (NPP0, i.e., the vegetation that would exist without any human interference) and the NPP remaining in the field after harvest (NPPeco), expressed in gC/m² yr. Other NPP flows taken into account in the HANPP estimation were the harvested NPP (NPPh) and the NPP lost through land conversion (NPPluc). ArcGIS 10.4 software was used to assess the spatial and temporal HANPP changes. HANPP as a percentage of NPP0 was estimated for each land-cover type, with 2007 and 2014 as the reference years. The spatial results depicted a negative impact on land-use efficiency between 2007 and 2014, showing negative HANPP changes for the whole region. The harvest and land-conversion biomass-loss components are the leading causes of the loss of land-use efficiency. Furthermore, HANPP was higher in 2014 than in 2007, reaching 50% of NPP0 across all land-cover classes relative to 2007. This result was mainly related to the higher volume of biomass harvested for agriculture; consequently, cropland showed the highest HANPP, followed by plantation. This performance highlights the strong positive correlation between HANPP and the economic activities developed in the region.
This finding constitutes the basis for a better understanding of the main driving forces influencing biomass productivity and a powerful metric for supporting the sustainable management of land use.
Keywords: human appropriation, land-use changes, land-use impact, net primary productivity
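The HANPP definition used above reduces to simple per-cell arithmetic; a minimal sketch with hypothetical values (gC/m² yr), not values from the Chilean data set:

```python
def hanpp(npp0, npp_eco):
    """HANPP as defined in the abstract: potential NPP of the natural
    vegetation minus the NPP remaining in the field after harvest."""
    return npp0 - npp_eco

def hanpp_percent(npp0, npp_eco):
    """HANPP expressed as a percentage of NPP0, the scale used to
    compare land-cover classes."""
    return 100.0 * (npp0 - npp_eco) / npp0

# Hypothetical cropland cell: potential NPP 800, remaining NPP 400.
assert hanpp(800.0, 400.0) == 400.0
assert hanpp_percent(800.0, 400.0) == 50.0  # the 50%-of-NPP0 scale reported
```

Expressing HANPP relative to NPP0 rather than in absolute terms is what makes cells with very different natural productivity comparable.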
Procedia PDF Downloads 136
297 Derivation of Fragility Functions of Marine Drilling Risers Under Ocean Environment
Authors: Pranjal Srivastava, Piyali Sengupta
Abstract:
The performance of marine drilling risers is crucial in the offshore oil and gas industry to ensure safe drilling operations with minimum downtime. Experimental investigations of marine drilling risers are limited in the literature owing to the expensive and exhaustive test setup required to replicate a realistic riser model and ocean environment in the laboratory. Therefore, this study presents an analytical model of a marine drilling riser for determining its fragility under ocean environmental loading. The riser is idealized as a continuous beam with a concentric circular cross-section. The hydrodynamic loading acting on the riser is determined by Morison's equations. By considering the equilibrium of forces on the riser in the connected and normal drilling conditions, the governing partial differential equations in the independent variables z (depth) and t (time) are derived. Subsequently, the Runge-Kutta and finite difference methods are employed to solve the partial differential equations arising from the analytical model. The proposed analytical approach is successfully validated against experimental results from the literature. From the dynamic analysis results, the critical design parameters of marine drilling risers are determined: peak displacements, upper and lower flex joint rotations, and von Mises stresses. An extensive parametric study explores the effects of top tension, drilling depth, ocean current speed, and platform drift on these critical design parameters. Thereafter, incremental dynamic analysis is performed to derive fragility functions of shallow-water and deep-water marine drilling risers under ocean environmental loading.
The proposed methodology can also be adopted for downtime estimation of marine drilling risers, incorporating the ranges of uncertainty associated with the ocean environment, especially in deep and ultra-deep water.
Keywords: drilling riser, marine, analytical model, fragility
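Morison's equation, used above for the hydrodynamic loading, combines a drag term and an inertia term. A minimal per-unit-length sketch follows; the density and force coefficients are assumed typical values, since the paper's actual coefficients are not given here:

```python
import math

def morison_force(u, du_dt, d, rho=1025.0, cd=1.2, cm=2.0):
    """Hydrodynamic force per unit length (N/m) on a circular riser
    from Morison's equation: drag plus inertia.
    u: water particle velocity (m/s), du_dt: acceleration (m/s^2),
    d: riser outer diameter (m). rho, cd, cm are assumed values."""
    drag = 0.5 * rho * cd * d * u * abs(u)               # velocity-squared drag
    inertia = rho * cm * (math.pi * d ** 2 / 4.0) * du_dt  # added-mass inertia
    return drag + inertia

# Hypothetical 0.5 m riser in a 1 m/s current with 0.5 m/s^2 acceleration.
print(morison_force(1.0, 0.5, 0.5), "N/m")
```

The u*|u| form preserves the sign of the drag force when the flow reverses, which matters once wave-induced oscillatory velocities are superimposed on the current.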
Procedia PDF Downloads 146
296 Organ Dose Calculator for Fetus Undergoing Computed Tomography
Authors: Choonsik Lee, Les Folio
Abstract:
Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida at gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses at consecutive slice locations ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the scan range of interest. We then compared the dose conversion coefficients for major fetal organs in abdominal-pelvis CT scans of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ conversion coefficients was established for the eight developing fetuses undergoing CT and implemented into an in-house graphical user interface-based computer program for convenient estimation of fetal organ doses from the CT technical parameters and the age of the fetus. We found that the esophagus received the lowest dose, whereas the kidneys received the greatest dose, in all fetuses in AP scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranges from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks).
The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose in emergency CT scans as well as patient dose monitoring.
Keywords: computed tomography, fetal dose, pregnant women, radiation dose
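The helical-scan approximation described above (summing axial 1 cm-slice contributions scaled by CTDIvol) can be sketched as follows. The conversion coefficients below are hypothetical placeholders, not values from the phantom library:

```python
def organ_dose(ctdi_vol, slice_coeffs, start, stop):
    """Approximate helical-scan organ dose (mGy) as the sum of axial
    1 cm-slice contributions: CTDIvol times the dose conversion
    coefficient (mGy/mGy) of each slice in the range [start, stop)."""
    return ctdi_vol * sum(slice_coeffs[start:stop])

# Hypothetical coefficients for five consecutive slices spanning an organ.
coeffs = [0.02, 0.15, 0.40, 0.35, 0.10]
dose = organ_dose(10.0, coeffs, 1, 4)   # 10 mGy CTDIvol, slices 1-3
print(dose, "mGy")
```

Because the coefficients are normalized per unit CTDIvol, the same library entry can be reused for any scanner output once the scan's CTDIvol is known.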
Procedia PDF Downloads 140
295 Constraints on Source Rock Organic Matter Biodegradation in the Biogenic Gas Fields in the Sanhu Depression, Qaidam Basin, Northwestern China: A Study of Compound Concentration and Concentration Ratio Changes Using GC-MS Data
Authors: Mengsha Yin
Abstract:
Extractable organic matter (EOM) from thirty-six biogenic gas source rocks from the Sanhu Depression in the Qaidam Basin, northwestern China, was obtained via Soxhlet extraction. Twenty-nine of the extracts were subjected to SARA (saturates, aromatics, resins, and asphaltenes) separation for bulk composition analysis. The saturated and aromatic fractions of all extracts were analyzed by gas chromatography-mass spectrometry (GC-MS) to investigate their compound compositions. More abundant n-alkanes, naphthalene, phenanthrene, dibenzothiophene, and their alkylated products occur in samples at shallower depths. From 2000 m downward, the concentrations of these compounds increase sharply, and the concentration ratios of more- over less-biodegradation-susceptible compounds decrease dramatically at the same depth. The ∑iC15-16, 18-20/∑nC15-16, 18-20 and hopanoids/∑n-alkanes concentration ratios, and the mono- and tri-aromatic sterane concentrations and concentration ratios, fluctuate frequently with depth rather than trending with it, reflecting effects of organic input and paleoenvironment rather than biodegradation. The saturated and aromatic compound distributions in the saturates and aromatics total ion chromatogram (TIC) traces of the samples display different degrees of biodegradation. The dramatic, simultaneous variations in compound concentrations and their ratios at 2000 m, and their changes with depth below it, jointly confirm the crucial control of burial depth on the scale of organic matter biodegradation in source rocks and support the proposition that 2000 m is the bottom depth boundary for active microbial activity in this study.
The study helps to better constrain the depth range over which effective source rocks occur in the Sanhu biogenic gas fields and calls for additional attention to source rock pore size estimation during biogenic gas source rock appraisals.
Keywords: pore space, Sanhu depression, saturated and aromatic hydrocarbon compound concentration, source rock organic matter biodegradation, total ion chromatogram
Procedia PDF Downloads 156
294 Synthesis and Thermoluminescence Investigations of Doped LiF Nanophosphor
Authors: Pooja Seth, Shruti Aggarwal
Abstract:
Thermoluminescence dosimetry (TLD) is one of the most effective methods for the assessment of dose during diagnostic radiology and radiotherapy applications. In these applications, monitoring of the absorbed dose is essential to prevent patients from undue exposure and to evaluate the risks that may arise from exposure. LiF-based thermoluminescence (TL) dosimeters are promising materials for the estimation, calibration, and monitoring of dose owing to their favorable dosimetric characteristics: tissue equivalence, high sensitivity, energy independence, and dose linearity. As the TL efficiency of a phosphor depends strongly on the preparation route, it is interesting to investigate the TL properties of LiF-based phosphors in nanocrystalline form. LiF doped with magnesium (Mg), copper (Cu), sodium (Na), and silicon (Si) in nanocrystalline form was prepared using a chemical co-precipitation method, forming cubic LiF nanostructures. TL dosimetry properties were investigated by exposing the material to gamma rays. The TL glow curve of the nanocrystalline form consists of a single peak at 419 K, in contrast to the multiple peaks observed in the microcrystalline form. A consistent glow curve structure with maximum TL intensity at an annealing temperature of 573 K and a linear dose response from 0.1 to 1000 Gy are observed, which is advantageous for radiotherapy applications. Good reusability, low fading (5% over a month), and negligible residual signal (0.0019%) are observed. Photoluminescence measurements show a wide emission band at 360-550 nm in undoped LiF, whereas an intense peak at 488 nm is observed in the doped LiF nanophosphor. The phosphor also exhibits intense optically stimulated luminescence. The nanocrystalline LiF: Mg, Cu, Na, Si phosphor prepared by the co-precipitation method showed a simple glow curve structure, linear dose response, reproducibility, negligible residual signal, good thermal stability, and low fading.
The LiF: Mg, Cu, Na, Si phosphor in nanocrystalline form thus has tremendous potential in diagnostic radiology, radiotherapy, and high-energy radiation applications.
Keywords: thermoluminescence, nanophosphor, optically stimulated luminescence, co-precipitation method
Procedia PDF Downloads 404
293 Assessment of the Performance of the Sonoreactors Operated at Different Ultrasound Frequencies to Remove Pollutants from Aqueous Media
Authors: Gabriela Rivadeneyra-Romero, Claudia del C. Gutierrez Torres, Sergio A. Martinez-Delgadillo, Victor X. Mendoza-Escamilla, Alejandro Alonzo-Garcia
Abstract:
Ultrasonic degradation is currently used in sonochemical reactors to degrade pollutant compounds in aqueous media, such as emerging contaminants (e.g., pharmaceuticals, drugs, and personal care products), because these can have ecological impacts on the environment. For this reason, it is important to develop appropriate water and wastewater treatments able to reduce pollution and increase reuse. Pollutants such as textile dyes, aromatic and phenolic compounds, chlorobenzene, bisphenol-A, carboxylic acids, and other organic pollutants can be removed from wastewaters by sonochemical oxidation. The removal of pollutants depends on the ultrasonic frequency used; however, few studies have addressed the behavior of the fluid in sonoreactors operated at different ultrasonic frequencies. It is therefore necessary to study the hydrodynamic behavior of the liquid generated by ultrasonic irradiation in order to design efficient sonoreactors that reduce treatment times and costs. In this work, the hydrodynamic behavior of the fluid in sonochemical reactors was studied at different frequencies (250 kHz, 500 kHz, and 1000 kHz). The performance of the sonoreactors at these frequencies was simulated using computational fluid dynamics (CFD). Because there is a large sound speed gradient between the piezoelectric transducer and the fluid, k-ε models were used. The piezoelectric transducer was defined as a vibrating surface to evaluate the effect of the different frequencies on the fluid in the sonochemical reactor. Structured hexahedral cells were used to mesh the computational liquid domain, and fine triangular cells were used to mesh the piezoelectric transducers. Unsteady-state conditions were used in the solver. The dissipation rate, flow field velocities, Reynolds stresses, and turbulent quantities were estimated by CFD and 2D-PIV measurements.
Test results show that increasing the ultrasonic frequency does not necessarily increase pollutant degradation; moreover, the reactor geometry and power density are important factors that should be considered in sonochemical reactor design.
Keywords: CFD, reactor, ultrasound, wastewater
Procedia PDF Downloads 190
292 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications
Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso
Abstract:
The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera, and a dedicated set of software algorithms encompassing interferometry, tomography, and photogrammetry. The MIMO radar sensor proposed in this work provides extremely high sensitivity to displacements, making the system able to react to tiny deformations (down to tens of microns) on time scales spanning from milliseconds to hours. The MIMO architecture makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped onto the azimuth-range directions with notable resolution in both dimensions and an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performance processing unit senses the observed scene with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct a three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user with high-frequency three-dimensional motion/vibration estimates for each point of the reconstructed image.
At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
Keywords: interferometry, MIMO RADAR, SAR, tomography
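The repeat-pass interferometric displacement estimate rests on the standard two-way phase-to-displacement relation d = λΔφ/(4π). A minimal sketch follows; the operating frequency is an assumption for illustration, since the paper does not state the sensor's band here, and sign conventions are omitted:

```python
import math

def los_displacement(delta_phase, wavelength):
    """Line-of-sight displacement (m) from an interferometric phase
    change (rad) for a two-way (repeat-pass) radar path:
    d = wavelength * delta_phi / (4 * pi)."""
    return wavelength * delta_phase / (4.0 * math.pi)

# Hypothetical 17.2 GHz sensor -> wavelength of roughly 17.4 mm.
wavelength = 3e8 / 17.2e9
d = los_displacement(math.pi / 2, wavelength)
print(d * 1e6, "micrometers")
```

The 4π (rather than 2π) denominator reflects the round trip: a displacement of λ/4 toward the sensor already produces a full π phase change, which is why millimetric wavelengths resolve micron-scale motion.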
Procedia PDF Downloads 195
291 Physico-Chemical Characterization of Vegetable Oils from Oleaginous Seeds (Croton megalocarpus, Ricinus communis L., and Gossypium hirsutum L.)
Authors: Patrizia Firmani, Sara Perucchini, Irene Rapone, Raffella Borrelli, Stefano Chiaberge, Manuela Grande, Rosamaria Marrazzo, Alberto Savoini, Andrea Siviero, Silvia Spera, Fabio Vago, Davide Deriu, Sergio Fanutti, Alessandro Oldani
Abstract:
According to the Renewable Energy Directive II, the use of palm oil in diesel will be gradually reduced from 2023 and should reach zero in 2030 due to the deforestation caused by its production. Eni aims to find alternative feedstocks for its biorefineries in order to eliminate the use of palm oil by 2023. The ideal vegetable oils for bio-refineries are therefore those obtainable from plants that grow on marginal land and with low impact on the food-and-feed chain; hence, Eni research is studying the possibility of using oleaginous seeds, such as castor, croton, and cotton, to extract oils to be exploited as feedstock in bio-refineries. To verify their suitability for the upgrading processes, an analytical protocol for their characterization has been drawn up and applied. The analytical characterization includes determination of water and ash content, elemental analysis (CHNS analysis, X-ray fluorescence, inductively coupled plasma optical emission spectroscopy, and ICP mass spectrometry), and total acid number determination. Gas chromatography coupled to a flame ionization detector (GC-FID) is used to quantify the lipid content in terms of free fatty acids, mono-, di-, and triacylglycerols, and fatty acid composition. Finally, nuclear magnetic resonance and Fourier transform infrared spectroscopies are exploited, together with GC-MS and Fourier transform ion cyclotron resonance, to study the composition of the oils. This work focuses on the GC-FID analysis of the lipid fraction of these oils, as the main constituent and the one of greatest interest for bio-refinery processes.
Specifically, the lipid component of the extracted oil was quantified after sample silanization and transmethylation: silanization allows the elution of high-boiling compounds and is useful for determining the quantity of free acids and glycerides in oils, while transmethylation converts the glycerides into a mixture of fatty acid esters and glycerol, thus allowing the glyceride composition to be evaluated in terms of fatty acid methyl esters (FAME). Cotton oil was extracted from cotton oilcake; croton oil was obtained by seed pressing and by ASE extraction of seeds and oilcake; castor oil came from seed pressing (not performed in Eni laboratories). GC-FID analyses show that the cotton oil is 90% triglycerides and about 6% diglycerides, with about 2% free fatty acids. In terms of FAME, C18 acids make up 70% of the total, and linoleic acid is the major constituent. Palmitic acid is present at 17.5%, while the other acids are at low concentrations (<1%). Both analyses show the presence of non-gas-chromatographable compounds. Croton oils from seed pressing and extraction mainly contain triglycerides (98%). Concerning FAME, the main component is linoleic acid (approx. 80%). Oilcake croton oil shows a higher abundance of diglycerides (6% vs. ca. 2%) and a lower content of triglycerides (38% vs. 98%) compared with the previous oils. Finally, castor oil consists mostly of triacylglycerols (about 69%), followed by diglycerides (about 10%). About 85.2% of the total FAME is ricinoleic acid, a constituent of triricinolein, the most abundant triglyceride of castor oil. Based on the analytical results, these oils represent feedstocks of interest for possible exploitation as advanced biofuels.
Keywords: analytical protocol, biofuels, biorefinery, gas chromatography, vegetable oil
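FAME compositions like those reported above are commonly expressed as area-normalized percentages of the GC-FID peaks. A minimal sketch of that normalization follows; the peak names and areas are hypothetical, and response factors are deliberately ignored:

```python
def area_percent(peak_areas):
    """Express each GC-FID peak as a percentage of the total peak area.
    A simplified area-normalization sketch: detector response factors
    are assumed equal for all FAMEs."""
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}

# Hypothetical peak areas for a castor-like FAME profile.
profile = area_percent({"ricinoleic": 852.0, "oleic": 60.0, "linoleic": 88.0})
print(profile)
```

With real data, per-compound response factors would be applied before normalization, but the proportional logic is the same.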
Procedia PDF Downloads 144
290 Evaluation of a Piecewise Linear Mixed-Effects Model in the Analysis of Randomized Cross-over Trial
Authors: Moses Mwangi, Geert Verbeke, Geert Molenberghs
Abstract:
Cross-over designs are commonly used in randomized clinical trials to estimate the efficacy of a new treatment with respect to a reference treatment (placebo or standard). The main advantage of a cross-over design over a conventional parallel design is its flexibility: every subject becomes his or her own control, thereby reducing confounding effects. Jones & Kenward discuss more recent developments in the analysis of cross-over trials in detail. We revisit the simple piecewise linear mixed-effects model proposed by Mwangi et al. (in press) for its first application to the analysis of cross-over trials. We compared the performance of the proposed piecewise linear mixed-effects model with two commonly cited statistical models, namely (1) the Grizzle model and (2) the Jones & Kenward model, in the estimation of the treatment effect in the analysis of a randomized cross-over trial. We estimated two performance measures (mean square error (MSE) and coverage probability) for the three methods, using data simulated from the proposed piecewise linear mixed-effects model. The piecewise linear mixed-effects model yielded the lowest MSE estimates compared with the Grizzle and Jones & Kenward models for both small (Nobs=20) and large (Nobs=600) sample sizes. Its coverage probabilities were the highest compared with the Grizzle and Jones & Kenward models for both small and large sample sizes. The piecewise linear mixed-effects model is thus a better estimator of the treatment effect than its two competitors (the Grizzle and Jones & Kenward models) in the analysis of cross-over trials. The data-generating mechanism used in this paper captures two time periods for a simple 2-treatments x 2-periods cross-over design. Its application is extendible to more complex cross-over designs with multiple treatments and periods.
In addition, it is important to note that, even for single response models, adding more random effects increases the complexity of the model and thus may be difficult or impossible to fit in some cases.Keywords: Evaluation, Grizzle model, Jones & Kenward model, Performance measures, Simulation
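The two performance measures named above can be computed directly from simulated replicates. A minimal Python sketch, where the true effect, the replicate estimates, and their standard errors are hypothetical stand-ins for the study's simulation output:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulation output: true treatment effect and 1000 replicate
# estimates with their standard errors (illustrative, not the study's values).
true_effect = 1.0
n_sim = 1000
estimates = true_effect + rng.normal(0.0, 0.3, n_sim)  # point estimates
std_errors = np.full(n_sim, 0.3)                       # their standard errors

# Mean square error of the estimator across replicates
mse = np.mean((estimates - true_effect) ** 2)

# Coverage probability of the nominal 95% confidence interval
lower = estimates - 1.96 * std_errors
upper = estimates + 1.96 * std_errors
coverage = np.mean((lower <= true_effect) & (true_effect <= upper))

print(f"MSE = {mse:.4f}, coverage = {coverage:.3f}")
```

An estimator is preferred when its MSE is low and its coverage stays close to the nominal 95%, which is the comparison reported for the three models.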
Procedia PDF Downloads 122289 Drivers of Liking: Probiotic Petit Suisse Cheese
Authors: Helena Bolini, Erick Esmerino, Adriano Cruz, Juliana Paixao
Abstract:
The current concern for health has increased demand for low-calorie ingredients and functional foods such as probiotics. Understanding the reasons behind food choice, besides being a challenging task, is an important step in the development and/or reformulation of existing food products. The use of appropriate multivariate statistical techniques, such as the External Preference Map (PrefMap), associated with regression by Partial Least Squares (PLS), can help in determining those factors. Thus, this study aimed to determine, through PLS regression analysis, the sensory attributes considered drivers of liking in strawberry-flavored probiotic petit suisse cheeses sweetened with different sweeteners. Five samples at equivalent sweetness: PROB1 (Sucralose 0.0243%), PROB2 (Stevia 0.1520%), PROB3 (Aspartame 0.0877%), PROB4 (Neotame 0.0025%) and PROB5 (Sucrose 15.2%), determined by just-about-right and magnitude estimation methods, and three commercial samples, COM1, COM2 and COM3, were studied. The analysis was done on data coming from QDA, performed by 12 experts (highly trained assessors) on 20 descriptor terms, correlated with overall liking data from an acceptance test carried out by 125 consumers on all samples. Subsequently, the results were submitted to PLS regression using the XLSTAT software from Byossistemes. The results showed that three sensory descriptor terms may be considered drivers of liking of the probiotic petit suisse cheese samples added with sweeteners (p<0.05). Milk flavor was noticed as a sensory characteristic with a positive impact on acceptance, while the descriptors bitter taste and sweet aftertaste were perceived as descriptor terms with a negative impact on the acceptance of the probiotic petit suisse cheeses. 
It can be concluded that PLS regression analysis is a practical and useful tool for determining drivers of liking of probiotic petit suisse cheeses sweetened with artificial and natural sweeteners, allowing the food industry to understand and improve its formulations, maximizing the acceptability of its products.Keywords: acceptance, consumer, quantitative descriptive analysis, sweetener
Procedia PDF Downloads 446288 Determination of the Phytochemicals Composition and Pharmacokinetics of whole Coffee Fruit Caffeine Extract by Liquid Chromatography-Tandem Mass Spectrometry
Authors: Boris Nemzer, Nebiyu Abshiru, Z. B. Pietrzkowski
Abstract:
Coffee cherry is one of the most ubiquitous agricultural commodities and possesses nutritional and human health beneficial properties. Between the two most widely used coffee species, Coffea arabica (Arabica) and Coffea canephora (Robusta), Coffea arabica remains superior due to its sensory properties and, therefore, remains in great demand in the global coffee market. In this study, the phytochemical contents and pharmacokinetics of Coffeeberry® Energy (CBE), a commercially available Arabica whole coffee fruit caffeine extract, are investigated. For phytochemical screening, 20 mg of CBE was dissolved in an aqueous methanol solution for analysis by mass spectrometry (MS). Quantification of the caffeine and chlorogenic acid (CGA) contents of CBE was performed using HPLC. For the bioavailability study, serum samples were collected from human subjects before ingestion and at 1, 2 and 3 h post-ingestion of 150 mg of CBE extract. Protein precipitation and extraction were carried out using methanol. Identification of compounds was performed using an untargeted metabolomic approach on a Q-Exactive Orbitrap MS coupled to reversed-phase chromatography. Data processing was performed using Thermo Scientific Compound Discoverer 3.3 software. Phytochemical screening identified a total of 170 compounds, including organic acids, phenolic acids, CGAs, diterpenoids and hydroxytryptamine. Caffeine and CGAs make up more than 70% and 9%, respectively, of the total CBE composition. For serum samples, a total of 82 metabolites, representing 32 caffeine- and 50 phenolic-derived metabolites, were identified. Volcano plot analysis revealed 32 differential metabolites (24 caffeine- and 8 phenolic-derived) that showed an increase in serum level post-CBE dosing. Caffeine, uric acid, and trimethyluric acid isomers exhibited a 4- to 10-fold increase in serum abundance post-dosing. 7-Methyluric acid, 1,7-dimethyluric acid, paraxanthine and theophylline exhibited a minimum 1.5-fold increase in serum level. 
Among the phenolic-derived metabolites, iso-feruloyl quinic acid isomers (3-, 4- and 5-iFQA) showed the highest increase in serum level. These compounds were essentially absent in serum collected before dosing. More interestingly, the iFQA isomers were not originally present in the CBE extract, as our phytochemical screen did not identify these compounds. This suggests the potential formation of the isomers during the digestion and absorption processes. Pharmacokinetic parameters (Cmax, Tmax and AUC0-3h) of caffeine- and phenolic-derived metabolites were also investigated. Caffeine was rapidly absorbed, reaching a maximum concentration (Cmax) of 10.95 µg/ml in just 1 hour. Thereafter, the caffeine level steadily dropped from its peak, although it did not return to baseline within the 3-hour dosing period. The disappearance of caffeine from circulation was mirrored by the rise in the concentration of its methylxanthine metabolites. Similarly, the serum concentration of the iFQA isomers steadily increased, reaching a maximum (Cmax: 3-iFQA, 1.54 ng/ml; 4-iFQA, 2.47 ng/ml; 5-iFQA, 2.91 ng/ml) at a Tmax of 1.5 hours. The isomers remained well above baseline during the 3-hour dosing period, allowing them to remain in circulation long enough for absorption into the body. Overall, the current study provides evidence of the potential health benefits of a uniquely formulated whole coffee fruit product. Consumption of this product resulted in a distinct serum profile of bioactive compounds, as demonstrated by the more than 32 metabolites that exhibited a significant change in systemic exposure.Keywords: phytochemicals, mass spectrometry, pharmacokinetics, differential metabolites, chlorogenic acids
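The pharmacokinetic parameters named above can be computed from sampled time-concentration points. A sketch using the reported caffeine Cmax of 10.95 µg/ml at 1 h; the other concentrations are hypothetical values chosen only to illustrate the shape of the curve:

```python
import numpy as np

# Serum caffeine (µg/ml) at 0, 1, 2, 3 h post-dose; only the 1-h peak value
# comes from the abstract, the rest are illustrative assumptions.
t = np.array([0.0, 1.0, 2.0, 3.0])
conc = np.array([0.5, 10.95, 7.2, 4.1])

cmax = conc.max()              # maximum observed concentration
tmax = t[conc.argmax()]        # time at which the maximum occurs
# AUC(0-3h) by the trapezoidal rule
auc_0_3 = float(np.sum((conc[1:] + conc[:-1]) / 2.0 * np.diff(t)))

print(f"Cmax = {cmax} µg/ml at Tmax = {tmax} h; AUC(0-3h) = {auc_0_3:.2f} µg·h/ml")
```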
Procedia PDF Downloads 68287 Two-Level Separation of High Air Conditioner Consumers and Demand Response Potential Estimation Based on Set Point Change
Authors: Mehdi Naserian, Mohammad Jooshaki, Mahmud Fotuhi-Firuzabad, Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee
Abstract:
In recent years, the development of communication infrastructure and smart meters has facilitated the utilization of demand-side resources, which can enhance the stability and economic efficiency of power systems. Direct load control programs can play an important role in the utilization of demand-side resources in the residential sector. However, the investments required for installing control equipment can be a limiting factor in the development of such demand response programs. Thus, the selection of consumers with higher potential is crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which feature relatively high flexibility due to the heat capacity of buildings, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature, and bearing in mind the high investments required for the control systems enabling direct load control demand response programs, in this paper a solution is presented to uncover consumers with high air conditioner demand among a large number of consumers and to measure the demand response potential of such consumers. This can pave the way for estimating the investments needed for the implementation of direct load control programs for residential HVAC systems and for estimating the demand response potentials in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data using the K-means algorithm. Then, by applying a recently proposed algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, the demand response potential of such consumers is estimated based on the equivalent desired temperature setpoint changes.Keywords: communication infrastructure, smart meters, power systems, HVAC system, residential HVAC systems
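The clustering step can be sketched as follows. The hourly load and temperature series are synthetic stand-ins, and the number of clusters and consumer counts are assumptions; the abstract's identification algorithm itself is not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic week of hourly data: 40 AC-heavy consumers whose load tracks
# temperature, and 60 consumers whose load does not (illustrative only).
temp = 25 + 8 * np.sin(np.linspace(0, 14 * np.pi, 168))
ac_heavy = 2 + 0.3 * temp[None, :] + rng.normal(0, 0.5, (40, 168))
others = rng.normal(3, 1.0, (60, 168))
load = np.vstack([ac_heavy, others])

# Feature: correlation between each consumer's hourly load and temperature
corr = np.array([np.corrcoef(row, temp)[0, 1] for row in load])

# Cluster consumers on the correlation feature
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(corr.reshape(-1, 1))
high = km.cluster_centers_.ravel().argmax()   # the high-correlation cluster
candidates = np.flatnonzero(km.labels_ == high)
print(f"{candidates.size} likely high-AC consumers identified")
```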
Procedia PDF Downloads 67286 Application of Groundwater Level Data Mining in Aquifer Identification
Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen
Abstract:
Investigation and research are keys to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually based on geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, and it reflects the system response combining the hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for the identification of confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater recession curve analysis, and a decision tree. The developed methodology is then applied to aquifer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to the time series of groundwater-level observations and analyzes the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater recession curve, i.e., the average rate of groundwater recession, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the abovementioned analytical tools and optimizes the best estimate of the hydrogeological structure. 
The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is an effective tool for identifying hydrogeological structures.Keywords: aquifer identification, decision tree, groundwater, Fourier transform
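The daily-frequency amplitude extraction can be sketched with a Fourier transform of an hourly groundwater-level series. The record below is synthetic: the 0.5 m daily pumping signal and the noise level are assumptions standing in for a pumping-influenced observation well:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hourly groundwater levels over 30 days with a daily pumping cycle
hours = np.arange(30 * 24)
level = 10.0 - 0.5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.05, hours.size)

# One-sided amplitude spectrum of the de-meaned series
amp = np.abs(np.fft.rfft(level - level.mean())) * 2 / hours.size
freq = np.fft.rfftfreq(hours.size, d=1.0)  # cycles per hour

# Amplitude at the daily frequency (1/24 cycles per hour): a strong value
# flags pumping-dominated behavior in the aquifer identification
daily_idx = int(np.argmin(np.abs(freq - 1 / 24)))
print(f"daily-frequency amplitude ≈ {amp[daily_idx]:.3f} m")
```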
Procedia PDF Downloads 157285 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method
Authors: Defne Uz
Abstract:
The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads and bending moments, depending on the operating conditions and maneuvers of the helicopter. In order to ensure that maximum stress levels do not exceed the fatigue limit of the material, and to prevent damage, a numerical analysis approach can be applied through the Finite Element Method. Therefore, in this paper, fatigue analysis of a horizontal tail model is studied numerically to predict the high-cycle and low-cycle fatigue life under the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical element identification of the main load-carrying structural components of the model with rivet holes is performed as a post-process, since critical regions with high stress values are used as input for the fatigue life calculation. Once the maximum stress is obtained at the critical element, along with the related mean and alternating components, it is compared with the endurance limit by applying the Soderberg approach. The constant life straight line provides the limit for several combinations of mean and alternating stresses. A life calculation based on the S-N (Stress-Number of Cycles) curve is also applied with fully reversed loading to determine the number of cycles corresponding to the oscillatory stress with zero mean. The results determine the appropriateness of the design of the model in terms of its fatigue strength and the number of cycles that the model can withstand at the calculated stress. The effect of correctly determining the critical rivet holes is investigated by analyzing stresses at different structural parts of the model. 
In the case of low life prediction, alternative design solutions are developed, and flight hours can be estimated for the fatigue-safe operation of the model.Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration
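The Soderberg check compares a mean/alternating stress pair against the line joining the endurance limit and the yield strength. A minimal sketch; the material properties and stresses below are illustrative assumptions, not values from the model:

```python
# Soderberg criterion: sigma_a / S_e + sigma_m / S_y = 1 / n,
# where n > 1 indicates the point lies inside the safe (infinite-life) region.

S_y = 350.0  # yield strength, MPa (assumed)
S_e = 160.0  # endurance limit, MPa (assumed)

def soderberg_safety_factor(sigma_m: float, sigma_a: float) -> float:
    """Safety factor n from the Soderberg relation."""
    return 1.0 / (sigma_a / S_e + sigma_m / S_y)

# Hypothetical mean/alternating stresses at the critical rivet hole
sigma_mean, sigma_alt = 80.0, 60.0
n = soderberg_safety_factor(sigma_mean, sigma_alt)
print(f"Soderberg safety factor: {n:.2f}")
```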
Procedia PDF Downloads 145284 Estimation of Dynamic Characteristics of a Middle Rise Steel Reinforced Concrete Building Using Long-Term
Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu
Abstract:
In the earthquake-resistant design of buildings, the evaluation of vibration characteristics is important. In recent years, due to the increase in super high-rise buildings, the evaluation of the response is important not only for the first mode but also for higher modes. Knowledge of vibration characteristics in buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, using earthquake observation records of an SRC building and applying a frequency filter to an ARX model, the characteristics of the first and second modes were studied. First, we studied the change of the eigen frequency and the damping ratio during the 3.11 earthquake. The eigen frequency gradually decreases from the time of earthquake occurrence and is almost stable after about 150 seconds have passed. At this time, the decreasing rates of the 1st and 2nd eigen frequencies are both about 0.7. Although the damping ratio has a larger error than the eigen frequency, both the 1st and 2nd damping ratios are 3 to 5%. Also, there is a strong correlation between the 1st and 2nd eigen frequencies, and the regression line is y=3.17x. For the damping ratio, the regression line is y=0.90x. Therefore, the 1st and 2nd damping ratios are approximately the same. Next, we studied the eigen frequency and damping ratio for earthquakes from 1998 to 2014, including the period after the 3.11 earthquake, with all the considered earthquakes connected in order of occurrence. The eigen frequency slowly declined from immediately after completion and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The decreasing rates of both the 1st and 2nd eigen frequencies until about 7 years later are about 0.8. For the damping ratio, both the 1st and 2nd are about 1 to 6%. After the 3.11 earthquake, the 1st increased by about 1% and the 2nd by less than 1%. 
For the eigen frequency, there is a strong correlation between the 1st and 2nd, and the regression line is y=3.17x. For the damping ratio, the regression line is y=1.01x. Therefore, it can be said that the 1st and 2nd damping ratios are approximately the same. Based on the above results, the changes in eigen frequency and damping ratio are summarized as follows. In the long-term study of the eigen frequency, both the 1st and 2nd gradually declined from immediately after completion and tended to stabilize after a few years; they declined further after the 3.11 earthquake. In addition, there is a strong correlation between the 1st and 2nd, and the declining time and the decreasing rate are of the same degree. In the long-term study of the damping ratio, both the 1st and 2nd are about 1 to 6%. After the 3.11 earthquake, the 1st increased by about 1% and the 2nd by less than 1%. Also, the 1st and 2nd are approximately the same.Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records
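Eigen frequency and damping ratio can be extracted from the poles of a fitted ARX/AR model. A minimal single-mode Python sketch on a noise-free synthetic free decay (the mode frequency, damping, and sampling interval are assumptions, and the frequency filtering used in the paper is omitted):

```python
import numpy as np

dt = 0.02                      # sampling interval, s (assumed)
t = np.arange(0, 20, dt)
f_true, zeta_true = 1.2, 0.03  # hypothetical 1st-mode frequency and damping

# Synthetic free-decay response of a single mode
wn_true = 2 * np.pi * f_true
wd = wn_true * np.sqrt(1 - zeta_true**2)
y = np.exp(-zeta_true * wn_true * t) * np.cos(wd * t)

# Fit an AR(2) model y[n] = a1*y[n-1] + a2*y[n-2] by least squares
A = np.column_stack([y[1:-1], y[:-2]])
a1, a2 = np.linalg.lstsq(A, y[2:], rcond=None)[0]

# Discrete pole -> continuous eigenvalue -> eigen frequency and damping ratio
pole = np.roots([1.0, -a1, -a2])[0]  # one of the complex-conjugate pair
s = np.log(pole) / dt
wn = abs(s)
f_est, zeta_est = wn / (2 * np.pi), -s.real / wn
print(f"f = {f_est:.3f} Hz, zeta = {zeta_est:.4f}")
```

Tracking these two quantities window by window over an earthquake record yields the time histories of eigen frequency and damping discussed above.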
Procedia PDF Downloads 217283 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique
Authors: Sahar Tabarroki, Ahad Nazari
Abstract:
The design process is one of the key processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive once they reach the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors, so the identification of risks is necessary. First, a literature review on the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questions in the questionnaire were based on the “similar service description of study and supervision of architectural works” published by the “Vice Presidency of Strategic Planning & Supervision of I.R. Iran” as the basis of architects’ tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled by the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as “defining the current and future requirements of the project”, “studies and space planning”, and “time and cost estimation of the suggested solution” have a higher error risk than others. Moreover, the most important causes include “unclear goals of a client”, “time pressure from a client”, and “lack of knowledge of architects about the requirements of end-users”. 
In detecting errors in the case study, the lack of standards and design criteria, and the lack of coordination among them, was a barrier. Nevertheless, “lack of coordination between the architectural design and the electrical and mechanical facilities”, “violation of the standard dimensions and sizes in space design”, and “design omissions” were identified as the most important design errors.Keywords: architectural design, design error, risk management, risk factor
Procedia PDF Downloads 130282 Experimental Study of Sand-Silt Mixtures with Torsional and Flexural Resonant Column Tests
Authors: Meghdad Payan, Kostas Senetakis, Arman Khoshghalb, Nasser Khalili
Abstract:
Dynamic properties of soils, especially in the range of very small strains, are of particular interest in geotechnical engineering practice for characterizing the behavior of geo-structures subjected to a variety of stress states. This study reports on the small-strain dynamic properties of sand-silt mixtures, with particular emphasis on the effect of non-plastic fines content on the small-strain shear modulus (Gmax), Young’s modulus (Emax), material damping (Ds,min) and Poisson’s ratio (v). Several clean sands with a wide range of grain size characteristics and particle shapes were mixed with variable percentages of a silica non-plastic silt as fines content. Prepared specimens of sand-silt mixtures at different initial void ratios were subjected to sequential torsional and flexural resonant column tests, with elastic dynamic properties measured along an isotropic stress path up to 800 kPa. It is shown that while at low percentages of fines content there is a significant difference between the dynamic properties of the various samples, due to the different characteristics of the sand portion of the mixtures, this variance diminishes as the fines content increases and the soil behavior becomes mainly silt-dominated, leaving no significant influence of sand properties on the elastic dynamic parameters. Indeed, beyond a specific portion of fines content, around 20% to 30%, typically denoted as the threshold fines content, silt controls the behavior of the mixture. Using the experimental results, new expressions for the prediction of the small-strain dynamic properties of sand-silt mixtures are developed, accounting for the percentage of silt and the characteristics of the sand portion. These expressions are general in nature and are capable of evaluating the elastic dynamic properties of sand-silt mixtures with any type of parent sand over the whole range of silt percentage. 
The inefficiency of the skeleton void ratio concept in the estimation of the small-strain stiffness of sand-silt mixtures is also illustrated.Keywords: damping ratio, Poisson’s ratio, resonant column, sand-silt mixture, shear modulus, Young’s modulus
Procedia PDF Downloads 250281 Prediction of Formation Pressure Using Artificial Intelligence Techniques
Authors: Abdulmalek Ahmed
Abstract:
Formation pressure is the main factor that affects the economics and efficiency of the drilling operation. Knowing the pore pressure and the parameters that affect it will help to reduce the cost of the drilling process. Many empirical models reported in the literature were used to calculate the formation pressure based on different parameters. Some of these models used only drilling parameters to estimate pore pressure, while other models predicted the formation pressure based on log data. All of these models required identifying trends, such as normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then with only one or at most two AI methods. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity, reflected in its estimation of pore pressure without the need for trend identification, in contrast to other models, which require distinguishing between normal and abnormal pressure trends. Moreover, comparing the AI tools with each other indicates that SVM has the advantage in pore pressure prediction due to its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). 
Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).Keywords: Artificial Intelligence (AI), Formation pressure, Artificial Neural Networks (ANN), Fuzzy Logic (FL), Support Vector Machine (SVM), Functional Networks (FN), Radial Basis Function (RBF)
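As an illustration of the SVM branch of the comparison, a support vector regression sketch with scikit-learn. The seven inputs mirror those listed in the abstract, but the synthetic data-generating relation, the hyperparameters, and the error level are assumptions, since the field data are not public:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Synthetic stand-in for the field data: 7 normalized inputs (WOB, RPM, ROP,
# mud weight, bulk density, porosity, delta sonic time) vs pore pressure (psi)
X = rng.uniform(0, 1, size=(300, 7))
y = 5000 + 2000 * X[:, 3] + 1500 * X[:, 6] - 800 * X[:, 5] + rng.normal(0, 50, 300)

# Standardize inputs; train SVR on a standardized target for a stable fit
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.05))
y_mu, y_sd = y[:250].mean(), y[:250].std()
model.fit(X[:250], (y[:250] - y_mu) / y_sd)

# Average absolute percentage error (AAPE) on held-out samples
pred = model.predict(X[250:]) * y_sd + y_mu
aape = np.mean(np.abs((pred - y[250:]) / y[250:])) * 100
print(f"AAPE on held-out samples: {aape:.2f}%")
```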
Procedia PDF Downloads 149280 International Entrepreneurial Orientation and Institutionalism: The Effect on International Performance for Latin American SMEs
Authors: William Castillo, Hugo Viza, Arturo Vargas
Abstract:
The Pacific Alliance is a trade bloc composed of four emerging economies: Chile, Colombia, Peru, and Mexico. These economies have gained macroeconomic stability in the past decade and, as a consequence, promise future economic progress. Under this positive scenario, international business firms have flourished. However, the literature on this region remains widely unexamined. Therefore, it is critical to fill this theoretical gap, especially considering that Latin America is starting to become a global player and possesses a different institutional context than developed markets. This paper analyzes the effect of international entrepreneurial orientation and institutionalism on the international performance of Pacific Alliance small-to-medium enterprises (SMEs). The literature considers international entrepreneurial orientation to be a powerful managerial capability, in line with the resource-based view, that firms can leverage to obtain satisfactory international performance, thereby obtaining a competitive advantage through the correct allocation of key resources to exploit the capabilities involved. Entrepreneurial orientation is defined around five factors: innovation, proactiveness, risk-taking, competitive aggressiveness, and autonomy. Nevertheless, the institutional environment, both local and foreign, can adversely affect international performance; this is especially the case for emerging markets with uncertain scenarios. In this way, the study analyzes entrepreneurial orientation, a key endogenous variable of international performance, and institutionalism, an exogenous variable. The survey data consist of Pacific Alliance SMEs that have foreign operations in at least one other country in the trade bloc. Findings are part of an ongoing research process. The study will subsequently undertake structural equation modeling (SEM) using the variance-based partial least squares estimation procedure, with SmartPLS as the software. 
This research contributes to the theoretical discussion of a largely postponed topic, SMEs in Latin America, which has received limited academic research. It also has practical implications for decision-makers and policy-makers, providing insights into what is behind international performance.Keywords: institutional theory, international entrepreneurial orientation, international performance, SMEs, Pacific Alliance
Procedia PDF Downloads 248279 Carrying Capacity Estimation for Small Hydro Plant Located in Torrential Rivers
Authors: Elena Carcano, James Ball, Betty Tiko
Abstract:
Carrying capacity refers to the maximum population that a given level of resources can sustain over a specific period. In undisturbed environments, the maximum population is determined by the availability and distribution of resources, as well as by competition for their utilization. This information is typically obtained through long-term data collection. In regulated environments, where resources are artificially modified, populations must adapt to changing conditions, which can lead to additional challenges due to fluctuations in resource availability over time and throughout development. An example of this is observed in hydropower plants, which alter water flow and impact fish migration patterns and behaviors. To assess how fish species can adapt to these changes, specialized surveys are conducted, which provide valuable information on fish populations, sample sizes, and density before and after flow modifications. In such situations, it is highly recommended to conduct hydrological and biological monitoring to gain insight into how flow reductions affect species adaptability and to prevent unfavorable exploitation conditions. This analysis involves several planned steps that help design appropriate hydropower production while simultaneously addressing environmental needs. Consequently, the study aims to strike a balance between technical assessment, biological requirements, and societal expectations. Beginning with a small hydro project that requires restoration, this analysis focuses on the lower tail of the Flow Duration Curve (FDC), where both hydrological and environmental goals can be met. The proposed approach involves determining the threshold condition that is tolerable for the most vulnerable species sampled (Telestes muticellus) by identifying a low-flow value from the long-term FDC. 
The results establish a practical connection between hydrological and environmental information and simplify the process by establishing a single reference flow value that represents the minimum environmental flow that should be maintained.Keywords: carrying capacity, fish bypass ladder, long-term streamflow duration curve, eta-beta method, environmental flow
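Reading a low-flow threshold off the lower tail of the FDC amounts to taking a high-exceedance percentile of the long-term record. A sketch in Python; the flow record is a synthetic log-normal stand-in for the gauge data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 20-year daily flow record (m³/s) for a torrential stream
flows = rng.lognormal(mean=0.5, sigma=0.9, size=365 * 20)

def fdc_flow(exceedance_pct: float) -> float:
    """Flow exceeded `exceedance_pct` percent of the time (a point on the FDC)."""
    return float(np.percentile(flows, 100 - exceedance_pct))

q95 = fdc_flow(95)  # low-flow index, a common environmental-flow reference
q50 = fdc_flow(50)  # median flow, for comparison
print(f"Q95 = {q95:.2f} m³/s, Q50 = {q50:.2f} m³/s")
```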
Procedia PDF Downloads 40278 The Impact of Informal Care on Health Behavior among Older People with Chronic Diseases: A Study in China Using Propensity Score Matching
Abstract:
Improvement of health behavior among people with chronic diseases is vital for increasing longevity and enhancing quality of life. This paper investigates the causal effects of informal care on compliance with doctors’ health advice – smoking control, dietary regulation, weight control and keeping exercising – among older people with chronic diseases in China, which is facing the challenge of aging. We addressed selection bias by using propensity score matching in the estimation process. We used the 2011-2012 national baseline data of the China Health and Retirement Longitudinal Study (CHARLS). Our results showed that informal care can help improve the health behavior of older people. First, informal care improved compliance with smoking control: smoking status, smoking frequency, and the time lag between waking up and the first cigarette were all lower for older people with informal care. Second, for dietary regulation, older people with informal care had more meals every day than older people without informal care. Third, three variables (BMI, whether the person gained weight and whether the person lost weight) were used to measure the outcome of weight control. There were no significant differences between the group with informal care and that without for BMI and the possibility of losing weight, while older people with informal care had a lower possibility of gaining weight than those without. Last, for the advice of keeping exercising, informal care increased the probability of walking exercise; however, the differences between groups for moderate and vigorous exercise were not significant. Our results indicate that policy makers who aim to improve elders’ health behavior should take informal care into account and provide appropriate policies to meet the demand for informal care. Our birth policy and postponed retirement policy may decrease informal caregiving hours, so adjustments of these policies are important and urgent to meet the current situation of an aging population. 
In addition, the government could give more support to developing organizations that provide formal care, such as nursing homes. We infer that formal care is also useful for health behavior improvement.Keywords: chronic diseases, compliance, CHARLS, health advice, informal care, older people, propensity score matching
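Propensity score matching as used above proceeds in three steps: estimate each person's probability of receiving informal care from covariates, match care recipients to non-recipients with similar scores, and difference the matched outcomes. A synthetic sketch with a known treatment effect of 1.0 (all variables and coefficients are hypothetical, not CHARLS values):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)

# Synthetic sample: 3 covariates, treatment = receives informal care,
# outcome = health-behavior score with a true treatment effect of 1.0
n = 2000
X = rng.normal(0, 1, (n, 3))
p_treat = 1 / (1 + np.exp(-(0.8 * X[:, 2] - 0.3 * X[:, 0])))  # selection bias
treated = rng.uniform(size=n) < p_treat
outcome = 1.0 * treated + 0.5 * X[:, 0] + rng.normal(0, 1, n)

# Step 1: estimate propensity scores
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the nearest control on the score
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# Step 3: average treatment effect on the treated from matched pairs
att = outcome[treated].mean() - outcome[~treated][idx.ravel()].mean()
print(f"Matched ATT estimate: {att:.2f} (true effect 1.0)")
```

Because selection into treatment depends on the covariates, a naive group difference would be biased; matching on the estimated score recovers an estimate close to the true effect.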
Procedia PDF Downloads 405277 Serum Vitamin D and Carboxy-Terminal TelopeptideType I Collagen Levels: As Markers for Bone Health Affection in Patients Treated with Different Antiepileptic Drugs
Authors: Moetazza M. Al-Shafei, Hala Abdel Karim, Eitedal M. Daoud, Hassan Zaki Hassuna
Abstract:
Epilepsy is a common neurological disorder affecting all age groups. It is one of the world's most prevalent non-communicable diseases. There is increasing evidence suggesting that long-term usage of antiepileptic drugs can have adverse effects on bone mineralization and bone modeling. The aim was to study these effects and to give guidelines to support bone health through early intervention. From the Neurology Out-Patient Clinic of Kaser Elaini University Hospital, 60 patients were enrolled: 40 patients on antiepileptic drugs for at least two years and 20 controls matched for age and sex, epileptic but before starting treatment, both chosen under specific criteria. Patients were divided into four groups: three groups on monotherapy treated with either phenytoin, valproic acid or carbamazepine, and a fourth group treated with both valproic acid and carbamazepine. Estimation of the serum Carboxy-Terminal Telopeptide of Type I Collagen (ICTP) bone resorption marker, serum 25(OH) vit D3, calcium, magnesium and phosphorus was done. The results showed that all patients on AEDs had significantly low levels of 25(OH) vit D3 (p<0.001), with significant elevation of ICTP (P<0.05) versus controls. The group treated with phenytoin showed a highly significant elevation of the ICTP marker and decreases in both serum 25(OH) vit D3 (P<0.0001) and serum calcium (P<0.05) versus controls. The double-drug group showed a significant decrease in serum 25(OH) vit D3 (P<0.0001) and a decrease in phosphorus (P<0.05) versus controls. Serum magnesium showed no significant differences between the studied groups. We concluded that antiepileptic drugs appear to be an aggravating factor for bone mineralization, so therapeutically it can be worthwhile to supplement calcium and vitamin D even before the initiation of antiepileptic therapy. The ICTP marker can be used to evaluate changes in bone resorption before and during AED therapy.Keywords: antiepileptic drugs, bone minerals, carboxy terminal telopeptide type-1-collagen bone resorption marker, vitamin D
Procedia PDF Downloads 493
276 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class-Unbalanced Data of Diabetes Risk Groups
Authors: Lily Ingsrisawang, Tasanee Nacharoen
Abstract:
Introduction: The problem of unbalanced data sets generally appears in real-world applications. Due to unequal class distribution, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application to class-imbalanced data of diabetes risk groups. Methods: Data from a health project for 599 staff members in a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped up to 50 and 100 samples of 599 observations each for additional estimation of the misclassification error rate. Each data set was examined for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were fitted over 50 and 100 bootstrap samples and applied to the original data.
In finding the optimal classification rule, the prior probabilities were set to either equal proportions (0.33:0.33:0.33) or one of three unequal choices: (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} set to {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data of diabetes risk groups.
Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors
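The prior-adjusted k-nearest neighbors rule and the bootstrap error estimate described above can be sketched in Python. This is a minimal illustration on synthetic data, not the study's actual analysis; the function names and the synthetic class sizes (90:5:5, mirroring the reported proportions) are our own assumptions.

```python
import numpy as np

def knn_classify(X_train, y_train, x, k, priors):
    # Prior-adjusted k-NN discriminant rule: each class's share of the
    # k nearest neighbours is weighted by its prior over its training
    # frequency; priors must be ordered to match np.unique(y_train).
    d = np.linalg.norm(X_train - x, axis=1)
    nn = y_train[np.argsort(d)[:k]]
    classes = np.unique(y_train)
    scores = []
    for c, p in zip(classes, priors):
        n_c = np.sum(y_train == c)   # class size in the training set
        k_c = np.sum(nn == c)        # class count among the k neighbours
        scores.append(p * k_c / n_c) # posterior up to a constant
    return classes[int(np.argmax(scores))]

def bootstrap_error(X, y, k, priors, n_boot=50, seed=0):
    # Misclassification rate averaged over bootstrap resamples,
    # evaluated on the out-of-bag observations of each resample.
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        oob = np.setdiff1d(np.arange(len(y)), idx)
        preds = np.array([knn_classify(X[idx], y[idx], X[i], k, priors)
                          for i in oob])
        errs.append(np.mean(preds != y[oob]))
    return float(np.mean(errs))
```

With a heavily imbalanced synthetic sample, raising the minority priors above their 0.05 frequencies shifts the rule toward the rare classes, which is the effect the study exploits.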
Procedia PDF Downloads 434
275 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE
Authors: Parimalah Velo, Ahmad Zakaria
Abstract:
Introduction: Monte Carlo simulation of preclinical imaging systems opens up research that ranges from hardware design to the discovery of new imaging applications. A simulation system that accurately models an imaging modality provides a platform for imaging developments that might be impractical in physical experiments due to expense, unnecessary radiation exposure, and technological difficulties. The aim of the present study is to validate a Geant4-GATE Monte Carlo simulation of thyroid phantom imaging for the Siemens e-cam single-head gamma camera. After validating the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging was carried out. Methods: A thyroid phantom was defined geometrically, comprising two lobes 80 mm in diameter, one hot spot, and three cold spots; this geometry accurately matches the dimensions of the physical thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired using the simulation model and in the actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the image contrast, the radioactivity distribution in the image, and the dimensions of the hot spot. The algorithm for each quantification is described in detail. The differences between estimated and actual values in both the simulation and the experimental setup were analyzed for radioactivity distribution and hot-spot dimensions. Results: The results show that the difference between the contrast levels of the simulated and experimental images is within 2%. The difference in total counts between the simulation and the actual study is 0.4%. The activity estimation shows a relative difference between estimated and actual activity of 4.62% for the experiment and 3.03% for the simulation.
The deviation in the estimated hot-spot diameter is the same for the simulation and the experimental study, at 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and experimental data.
Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging
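The quantitative comparisons reported above (contrast within 2%, relative activity differences of 3–4%) rest on simple region-of-interest statistics. A minimal Python sketch follows; the (ROI − background) / background mean-count contrast is an assumed convention, since the abstract does not state the exact formula:

```python
import numpy as np

def roi_contrast(img, roi_mask, bkg_mask):
    # Contrast of a region of interest relative to background counts,
    # using the (ROI - BKG) / BKG mean-count definition (an assumed
    # convention; the abstract does not give the exact formula).
    roi_mean = img[roi_mask].mean()
    bkg_mean = img[bkg_mask].mean()
    return (roi_mean - bkg_mean) / bkg_mean

def relative_difference(estimated, actual):
    # Percent relative difference between an estimated quantity
    # (e.g. activity recovered from the image) and its true value.
    return 100.0 * abs(estimated - actual) / actual
```

Applying `relative_difference` to the recovered and administered activities of the simulated and physical acquisitions would reproduce comparisons of the kind reported (4.62% and 3.03%).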
Procedia PDF Downloads 271
274 The Use of a Novel Visual Kinetic Demonstration Technique in Student Skill Acquisition of the Sellick Cricoid Force Manoeuvre
Authors: L. Nathaniel-Wurie
Abstract:
The Sellick manoeuvre, i.e. the application of cricoid force (CF), was first described by Brian Sellick in 1961. CF is the application of digital pressure against the cricoid cartilage, the posterior force compressing the oesophagus against the vertebrae. This is designed to prevent passive regurgitation of gastric contents, a major cause of morbidity and mortality during emergency airway management inside and outside the hospital. To the author's knowledge, there is no universally standardised training modality and, therefore, no reliable way to examine whether outcomes are appropriate. If force is not measured during training, one cannot conclude that appropriate, accurate, or precise amounts of force are being used routinely. Poor homogeneity in teaching and untested outcomes will correlate with reduced efficacy and increased adverse effects. For this study, the accuracy of force delivery by trained professionals was tested, and outcomes were contrasted against a novice control and a novice study group. Twenty operating department practitioners were tested (with a mean of 5.3 years of experience performing CF) and contrasted with 40 novice students who were randomised into one of two arms. Arm A had the procedure explained and demonstrated, then performed CF with the force measured three times. Arm B followed the same process as Arm A but, before being tested, had 10 N and 30 N applied to their hands to build an intuitive sense of the required force; they were then asked to apply the equivalent force against a visible force meter and to hold it for 20 seconds, allowing direct visualisation and correction of any over- or underestimation. Following this, Arm B were asked to perform the manoeuvre, and the force generated was measured three times.
This study shows a wide distribution of force produced by trained professionals and by novices performing the procedure for the first time. Our methodology for teaching the manoeuvre shows improved accuracy, precision, and homogeneity within the group when compared to novices, and it even outperforms trained practitioners. In conclusion, if this methodology is adopted, it may correlate with better clinical outcomes, fewer adverse events, and more successful airway management in critical medical scenarios.
Keywords: airway, cricoid, medical education, Sellick
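The group comparisons of accuracy, precision, and homogeneity above reduce to simple descriptive statistics on the measured forces. A minimal sketch, assuming a 30 N target (matching the 30 N reference used in Arm B's training); the function name and inputs are illustrative, not the study's analysis code:

```python
import numpy as np

def force_summary(forces_n, target_n=30.0):
    # Accuracy: mean deviation (bias) of the group from the target force.
    # Precision: sample standard deviation of the group's measurements.
    forces = np.asarray(forces_n, dtype=float)
    bias = float(forces.mean() - target_n)
    spread = float(forces.std(ddof=1))
    return bias, spread
```

Comparing `spread` across the trained, Arm A, and Arm B groups would quantify the homogeneity difference the abstract reports.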
Procedia PDF Downloads 79