Search results for: slip parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2255

155 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale

Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal

Abstract:

Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and with the current state of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as an evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and 2015, with 1835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recoverable (EUR). The range of EUR from each basin was loaded into the Palisade Risk software, and a lognormal distribution typical of Barnett shale wells was fitted to the dataset. Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e. P10, P50 and P90. The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate/year) and to identify the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008-2015. The major findings of this study were that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett shale wells were not economic at any of the finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
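As a rough illustration of the screening workflow the abstract describes, the sketch below samples EUR from a lognormal distribution, rescales a simple type-curve, and tests price/cost scenarios against a 20% rate-of-return and 60-month payback hurdle. The decline shape, operating cost, currency conversion and lognormal parameters are placeholder assumptions, not values from the study.

```python
# Illustrative sketch (not the authors' model): Monte Carlo NPV screening of shale gas
# wells against a 20% rate-of-return and 60-month payback hurdle. The lognormal EUR
# parameters, type-curve shape and operating-cost figure below are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def type_curve(eur_bcf, months=120):
    """Hyperbolic-style monthly production profile rescaled to a target EUR (BCF)."""
    t = np.arange(months)
    q = 1.0 / (1.0 + 0.08 * t) ** 1.2          # arbitrary decline shape
    return q * (eur_bcf * 1e6 / q.sum())       # monthly volumes in MCF

def well_economics(eur_bcf, gas_price, fd_cost, opex=0.8, disc=0.10):
    prod = type_curve(eur_bcf)                  # MCF per month
    cash = prod * (gas_price - opex)            # net revenue per month, $
    months = np.arange(1, prod.size + 1)
    npv10 = -fd_cost + np.sum(cash / (1 + disc) ** (months / 12.0))
    npv20 = -fd_cost + np.sum(cash / 1.20 ** (months / 12.0))   # proxy for 20% rate of return
    cum = np.cumsum(cash) - fd_cost
    payback = months[cum > 0][0] if np.any(cum > 0) else np.inf
    hurdle_met = (npv20 > 0) and (payback <= 60)
    return npv10, hurdle_met

# Lognormal EUR distribution (location/scale are placeholders for a basin fit)
eur_samples = rng.lognormal(mean=np.log(1.5), sigma=0.8, size=1000)   # BCF
p10, p50, p90 = np.percentile(eur_samples, [90, 50, 10])              # petroleum convention

for price in (2, 4, 8, 13):                     # $/MCF, spanning the 2008-2015 Henry Hub range
    npv, ok = well_economics(p50, price, fd_cost=2_000_000 * 1.3)     # £2M, assumed ~$2.6M
    print(f"P50 EUR={p50:.2f} BCF, price=${price}/MCF -> NPV10=${npv:,.0f}, hurdle met: {ok}")
```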

Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recoverable

Procedia PDF Downloads 302
154 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the airfoil S1091 has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained by CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle to visualize the variation of the coefficients during the simulation process. Employing a response surface methodology, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a second-degree polynomial approximation, the drag coefficients for every AoA are determined. Using these values, the terminal speed at each position is calculated for the corresponding Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so that the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag will be almost like the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it could be utilized for several missions, allowing repeatability of microgravity experiments.
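As an illustration of the post-processing step described above, the sketch below fits a second-degree polynomial to hypothetical (AoA, Cd) design points and converts each fitted Cd into a terminal speed via the drag-weight balance. The mass, reference area and Cd samples are assumed values, not the study's results.

```python
# Illustrative sketch: fit a 2nd-degree polynomial to drag-coefficient design points
# and estimate the terminal speed at each angle of attack. The mass, reference area
# and the (AoA, Cd) samples below are hypothetical placeholders, not the study's data.
import numpy as np

aoa_deg = np.array([2, 20, 40, 60, 80])                 # design points, degrees
cd_pts  = np.array([0.10, 0.35, 0.70, 1.00, 1.18])      # assumed Cd values at those points

coeffs = np.polyfit(aoa_deg, cd_pts, deg=2)             # response-surface style quadratic fit
cd_fit = np.poly1d(coeffs)

rho, g = 1.225, 9.81                                    # air density (kg/m^3), gravity (m/s^2)
mass, area = 12.0, 0.8                                  # hypothetical vehicle mass (kg) and area (m^2)

for aoa in range(2, 81, 13):
    cd = float(cd_fit(aoa))
    v_term = np.sqrt(2 * mass * g / (rho * cd * area))  # terminal speed where drag = weight
    print(f"AoA={aoa:3d} deg  Cd={cd:.2f}  terminal speed ~ {v_term:.1f} m/s")
```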

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 173
153 Synthesis of LiMₓMn₂₋ₓO₄ Doped Co, Ni, Cr and Its Characterization as Lithium Battery Cathode

Authors: Dyah Purwaningsih, Roto Roto, Hari Sutrisno

Abstract:

Manganese dioxide (MnO₂) and its derivatives are among the most widely used materials for the positive electrode in both primary and rechargeable lithium batteries. The MnO₂ derivative compound LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is one of the leading candidates for positive electrode materials in lithium batteries as it is abundant, low cost and environmentally friendly. Over the years, synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) has been carried out using various methods including sol-gel, gas condensation, spray pyrolysis, and ceramic routes. Problems with these methods persist, including high cost (making them commercially inapplicable) and the need for high temperatures (environmentally unfriendly). This research aims to: (1) synthesize LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by a reflux technique; (2) develop a microstructure analysis method for LiMₓMn₂₋ₓO₄ XRD powder data using the two-stage method; (3) study the electrical conductivity of LiMₓMn₂₋ₓO₄. This research developed the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by reflux. The starting materials, Mn(CH₃COO)₂·4H₂O and Na₂S₂O₈, were refluxed for 10 hours at 120°C to form β-MnO₂. The doping with Co, Ni and Cr was carried out using a solid-state method with LiOH to form LiMₓMn₂₋ₓO₄. The instruments used included XRD, SEM-EDX, XPS, TEM, SAA, TG/DTA, FTIR, an LCR meter and an eight-channel battery analyzer. Microstructure analysis of LiMₓMn₂₋ₓO₄ was carried out on XRD powder data by the two-stage method using the FullProf program integrated into WinPlotR and the Oscail program, as well as on binding energy data from XPS. The morphology of LiMₓMn₂₋ₓO₄ was studied with SEM-EDX, TEM, and SAA. The thermal stability test was performed with TG/DTA, and the electrical conductivity was studied from the LCR meter data. The specific capacity of LiMₓMn₂₋ₓO₄ as a lithium battery cathode was tested using an eight-channel battery analyzer. The results showed that the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) was successfully carried out by reflux. The optimal calcination temperature is 750°C. XRD characterization shows that LiMn₂O₄ has a cubic crystal structure with the Fd3m space group. Using CheckCell in WinPlotR, increasing the Li/Mn mole ratio does not result in changes in the LiMn₂O₄ crystal structure. The doping of Co, Ni and Cr in LiMₓMn₂₋ₓO₄ (x = 0.02, 0.04, 0.06, 0.08, 0.10) does not change the cubic Fd3m crystal structure. All the formed crystals are polycrystals with sizes of 100-450 nm. Characterization of the LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) microstructure by the two-stage method shows shrinkage of the lattice parameter and cell volume. Based on its range of capacitance, the conductivity obtained for LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is an ionic conductivity with varying capacitance. The specific battery capacities at a voltage of 4799.7 mV for LiMn₂O₄, Li₁.₀₈Mn₁.₉₂O₄, LiCo₀.₁Mn₁.₉O₄, LiNi₀.₁Mn₁.₉O₄ and LiCr₀.₁Mn₁.₉O₄ are 88.62 mAh/g, 2.73 mAh/g, 89.39 mAh/g, 85.15 mAh/g, and 1.48 mAh/g, respectively.
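For context on the reported specific capacities, the short sketch below computes the textbook one-electron theoretical gravimetric capacity Q = nF/(3.6M) of the spinel compositions from tabulated atomic masses; this is a generic reference value, not a figure from the study.

```python
# Illustrative sketch: theoretical gravimetric capacity Q = n*F/(3.6*M) in mAh/g,
# as a reference point for the measured specific capacities; assumes one-electron
# Li extraction per formula unit and tabulated atomic masses.
F = 96485.0                       # Faraday constant, C/mol
masses = {"Li": 6.94, "Mn": 54.94, "O": 16.00, "Co": 58.93, "Ni": 58.69, "Cr": 52.00}

def capacity(formula):            # formula given as {element: stoichiometry}
    M = sum(masses[el] * n for el, n in formula.items())
    return 1000.0 * F / (3600.0 * M)   # mAh/g for a one-electron process

print(f"LiMn2O4:        {capacity({'Li': 1, 'Mn': 2, 'O': 4}):.0f} mAh/g")
print(f"LiCo0.1Mn1.9O4: {capacity({'Li': 1, 'Co': 0.1, 'Mn': 1.9, 'O': 4}):.0f} mAh/g")
```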

Keywords: LiMₓMn₂₋ₓO₄, solid-state, reflux, two-stage method, ionic conductivity, specific capacity

Procedia PDF Downloads 194
152 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with gas-phase compositions of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model and was used to determine: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter, the Root Mean Squared Error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known gas geothermometers (previously developed), was statistically evaluated using an external database to avoid bias. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (México). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
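A minimal sketch of the kind of Levenberg-Marquardt-trained, single-hidden-layer network described above is given below, using scipy's least-squares LM solver on synthetic log-ratio inputs. The data, hidden-layer size and network form are illustrative assumptions, not the authors' calibrated geothermometers.

```python
# Illustrative sketch (not the authors' code): a one-hidden-layer neural network for
# predicting bottomhole temperature (BHT) from log-transformed gas ratios, trained by
# Levenberg-Marquardt via scipy.optimize.least_squares. The data below are synthetic.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # e.g. ln(CO2/H2) and ln(H2S/H2), standardized
bht = 250 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(0, 5, 200)   # synthetic target, deg C

n_in, n_hid = X.shape[1], 5                   # hidden-layer size would be tuned in practice

def unpack(p):
    i = 0
    W1 = p[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = p[i:i + n_hid]; i += n_hid
    W2 = p[i:i + n_hid]; i += n_hid
    b2 = p[i]
    return W1, b1, W2, b2

def predict(p, X):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    return h @ W2 + b2

def residuals(p):
    return predict(p, X) - bht

p0 = rng.normal(scale=0.1, size=n_in * n_hid + n_hid + n_hid + 1)
fit = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt

pred = predict(fit.x, X)
rmse = np.sqrt(np.mean((pred - bht) ** 2))
r = np.corrcoef(pred, bht)[0, 1]
print(f"RMSE = {rmse:.1f} degC, R = {r:.3f}")
```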

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 354
151 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí

Abstract:

A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest in order to achieve efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. However, 3D fabric reinforcement is of particular interest because of the versatility in the weaving pattern, with binder yarn and in-plane yarn arrangements, to manufacture thick composite parts, overcome delamination limitations, improve toughness, etc. To predict the permeability based on the available pore spaces between the inter-yarn spaces, unit-cell-based computational fluid dynamics models have used the Stokes-Darcy model. Typically, the preform consists of an arrangement of yarns with spacing on the order of mm, wherein each yarn consists of thousands of filaments with spacing on the order of μm. The fluid flow during infusion exchanges mass between the intra- and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores in the yarn. Several studies have employed the Brinkman equation to take into account the flow through dual-scale porosity reinforcement and estimate its permeability. Furthermore, to reduce the computational effort of dual-scale flow, a scale-separation criterion based on the ratio between yarn permeability and yarn spacing was also proposed to delimit the dual-scale and negligible micro-scale flow regimes for the prediction of mesoscale permeability. In the present work, the influence of intra-yarn permeability on the mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing in the plain weave, as well as the position of the binder yarn and the number of in-plane yarn layers in the 3D weave fabric. The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with the idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale-separation criterion has been established based on various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarns from Gebart's model. It was observed that the variation of the mesoscale permeability Kxx stayed within 30% when isotropic porous yarn was considered for a 3D fabric with binder yarn. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
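For reference, the sketch below evaluates Gebart's (1992) analytical expressions for the axial and transverse intra-yarn permeability of a hexagonally packed yarn, the kind of micro-scale input used in the scale-separation check described above; the filament radius and fibre volume fractions are placeholder values.

```python
# Illustrative sketch of Gebart's (1992) analytical model for intra-yarn (micro-scale)
# permeability from filament radius and fibre volume fraction, here for hexagonal
# packing; the filament radius and volume fraction values are placeholders.
import math

def gebart_hexagonal(r_f, vf):
    """Axial and transverse yarn permeability [m^2] for hexagonal fibre packing."""
    c = 53.0                                   # shape constant for flow along the fibres
    vf_max = math.pi / (2.0 * math.sqrt(3.0))  # maximum packing fraction, about 0.9069
    c1 = 16.0 / (9.0 * math.pi * math.sqrt(6.0))
    k_axial = (8.0 * r_f**2 / c) * (1.0 - vf) ** 3 / vf**2
    k_trans = c1 * (math.sqrt(vf_max / vf) - 1.0) ** 2.5 * r_f**2
    return k_axial, k_trans

r_f = 3.5e-6       # filament radius, m (placeholder)
for vf in (0.5, 0.6, 0.7):
    ka, kt = gebart_hexagonal(r_f, vf)
    print(f"Vf={vf:.2f}: K_axial={ka:.2e} m^2, K_transverse={kt:.2e} m^2")
```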

Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding

Procedia PDF Downloads 97
150 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people's opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinion while interacting. Furthermore, it is not clear whether different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people's opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language ("agree" or "disagree"). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, we repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called "continuous opinion") ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8 will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to "agree" were more likely to move to higher levels of continuous opinion, while people exposed to "disagree" were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded on experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
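The sketch below is a minimal toy update rule consistent with the qualitative observations reported (a small pull toward the shown "agree"/"disagree" answer plus a larger random fluctuation, with the continuous opinion bounded at ±10). It is illustrative only and is not the fitted model that explains more than 80% of the variance.

```python
# Illustrative sketch (not the authors' fitted model): a minimal update rule in which
# the "continuous opinion" (opinion sign x certainty, range -10..+10) shifts slightly
# toward the displayed opinion and is dominated by random fluctuation, as the study
# reports. The coefficients below are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n = 200
opinion = rng.uniform(-10, 10, n)              # initial continuous opinions

def interact(opinion, influence=0.6, noise_sd=1.5):
    partner = rng.permutation(n)               # each agent sees one partner's answer
    shown = np.sign(opinion[partner])          # only the sign (agree/disagree) is shown
    drift = influence * shown                  # small pull toward "agree" (+) or "disagree" (-)
    fluct = rng.normal(0, noise_sd, n)         # random fluctuation larger than the drift
    return np.clip(opinion + drift + fluct, -10, 10)

for _ in range(50):
    opinion = interact(opinion)
print("mean |opinion| after 50 rounds:", round(np.mean(np.abs(opinion)), 2))
```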

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 110
149 Optical and Near-UV Spectroscopic Properties of Low-Redshift Jetted Quasars in the Main Sequence Context

Authors: Shimeles Terefe Mengistue, Ascensión Del Olmo, Paola Marziani, Mirjana Pović, María Angeles Martínez-Carballo, Jaime Perea, Isabel M. Árquez

Abstract:

Quasars have historically been classified into two distinct classes, radio-loud (RL) and radio-quiet (RQ), according to the presence or absence of relativistic radio jets, respectively. The absence of spectra with a high S/N ratio led to the impression that all quasars (QSOs) are spectroscopically similar. Although different attempts have been made to unify these two classes, there is a long-standing open debate involving the possibility of a real physical dichotomy between RL and RQ quasars. In this work, we present new high-S/N spectra of 11 extremely powerful jetted quasars with radio-to-optical flux density ratio > 1000 that simultaneously cover the low-ionization emission of MgIIλ2800 and Hβ as well as the FeII blends in the redshift range 0.35 < z < 1, observed at Calar Alto Observatory (Spain). This work aims to quantify broad emission line differences between RL and RQ quasars by using the four-dimensional eigenvector 1 (4DE1) parameter space and its main sequence (MS), and to check the effect of powerful radio ejection on the low-ionization broad emission lines. Emission lines are analysed using two complementary approaches: a multicomponent non-linear fitting to account for the individual components of the broad emission lines, and an analysis of the full profile of the lines through parameters such as total widths, centroid velocities at different fractional intensities, asymmetry, and kurtosis indices. It is found that the broad emission lines show large redward asymmetry in both Hβ and MgIIλ2800. The location of our RL sources in the UV plane looks similar to the optical one, with weak FeII UV emission and broad MgIIλ2800. We supplement the 11 sources with large samples from previous work to gain some general inferences. The results show that, compared to RQ, our extreme RL quasars show a larger median Hβ full width at half maximum (FWHM), weaker FeII emission, larger M_BH, lower L_bol/L_Edd, and a restricted space occupation in the optical and UV MS planes. The differences are more elusive when the comparison is carried out by restricting the RQ population to the region of the MS occupied by RL quasars, albeit an unbiased comparison matching M_BH and L_bol/L_Edd suggests that the most powerful RL quasars show the highest redward asymmetries in Hβ.
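As a simple illustration of the full-profile parameters mentioned above (widths and centroid velocities at fractional intensities), the sketch below measures them numerically on a synthetic, redward-asymmetric line profile; it is not the multicomponent fitting used in the study.

```python
# Illustrative sketch: measuring the full width and the centroid velocity at fractional
# peak intensities from a broad emission-line profile, the kind of full-profile parameter
# used here; the asymmetric double-Gaussian profile below is synthetic.
import numpy as np

v = np.linspace(-15000, 15000, 3001)                       # velocity grid, km/s
flux = np.exp(-0.5 * (v / 2500) ** 2) + 0.4 * np.exp(-0.5 * ((v - 3000) / 4000) ** 2)

def width_and_centroid(v, flux, frac):
    level = frac * flux.max()
    above = np.where(flux >= level)[0]
    v_blue, v_red = v[above[0]], v[above[-1]]              # blue and red crossings
    return (v_red - v_blue), 0.5 * (v_red + v_blue)        # width and centroid at that level

fwhm, c_half = width_and_centroid(v, flux, 0.5)
_, c_quarter = width_and_centroid(v, flux, 0.25)
print(f"FWHM ~ {fwhm:.0f} km/s, c(1/2) ~ {c_half:.0f} km/s, c(1/4) ~ {c_quarter:.0f} km/s")
```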

Keywords: galaxies: active, line: profiles, quasars: emission lines, supermassive black holes

Procedia PDF Downloads 60
148 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model

Authors: T. Thein, S. Kalyar Myo

Abstract:

Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings and the inherent low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems; one of them is lip reading. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding the underlying speech by processing the movement of the lips. Therefore, a lip reading system is one of the supportive technologies for hearing-impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar to learn how to pronounce words precisely by identifying the features of lip movement. The proposed research will build a lip reading system for Myanmar consonants: the one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and the two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)). The proposed system comprises three subsystems: the first is the lip localization system, which localizes the lips in the digital input; the second is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and the third is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with the Active Contour Model (ACM) will be used for extraction of lip movement features. A Support Vector Machine (SVM) classifier is used for finding the class parameters and class number in the training and testing sets. Experiments will then be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements, which is useful for visual speech in the Myanmar language. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons use it as a language learning application. It can also be useful for normal-hearing persons in noisy environments or conditions, where they can find out what was said by other people without hearing their voice.
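A minimal sketch of the proposed 2D-DCT, LDA and SVM chain is shown below using scipy and scikit-learn, with random arrays standing in for ACM-localized lip regions of interest. The ROI size, the number of retained DCT coefficients and the train/test split are illustrative assumptions, not the system's actual configuration.

```python
# Illustrative sketch of the feature/classification chain described (2D-DCT features,
# LDA reduction, SVM classifier); the lip-region images here are random placeholders
# for ACM-localized mouth ROIs.
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_classes, n_per_class, roi = 8, 20, (32, 32)   # e.g. the 8 one-syllable consonants

def dct_features(img, keep=8):
    """Keep the low-frequency top-left block of the 2D-DCT as the feature vector."""
    return dctn(img, norm="ortho")[:keep, :keep].ravel()

X, y = [], []
for c in range(n_classes):
    for _ in range(n_per_class):
        img = rng.random(roi) + 0.05 * c        # placeholder for a lip ROI frame
        X.append(dct_features(img))
        y.append(c)
X, y = np.array(X), np.array(y)

clf = make_pipeline(LinearDiscriminantAnalysis(n_components=n_classes - 1), SVC(kernel="rbf"))
clf.fit(X[::2], y[::2])                          # simple alternating train/test split
print("test accuracy:", clf.score(X[1::2], y[1::2]))
```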

Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)

Procedia PDF Downloads 286
147 Muscle and Cerebral Regional Oxygenation in Preterm Infants with Shock Using Near-Infrared Spectroscopy

Authors: Virany Diana, Martono Tri Utomo, Risa Etika

Abstract:

Background: Shock is a severe condition that is a major cause of morbidity and mortality in the Neonatal Intensive Care Unit. Preterm infants are very susceptible to shock caused by many complications such as asphyxia, patent ductus arteriosus, intraventricular haemorrhage, necrotizing enterocolitis, persistent pulmonary hypertension of the newborn, and septicaemia. Limited hemodynamic monitoring for early detection of shock causes delayed intervention and compromises the outcomes. Clinical parameters are still used in neonatal shock detection, such as capillary refill time (CRT), heart rate, cold extremities, and urine production. Blood pressure is most frequently used to evaluate the preterm infant's circulation, but hypotension indicates uncompensated shock. Near-infrared spectroscopy (NIRS) is known as a noninvasive tool for monitoring and detecting states of inadequate tissue perfusion. Muscle oxygen saturation shows decreased cardiac output earlier than systemic parameters of tissue oxygenation, while cerebral regional oxygen saturation is still stabilized by autoregulation. However, to our best knowledge, until now, no study has analyzed the decrease of muscle regional oxygen saturation (mRSO₂) and the ratio of muscle to cerebral regional oxygen saturation (mRSO₂/cRSO₂) by NIRS in preterm infants with shock. Purpose: The purpose of this study is to analyze the decrease of mRSO₂ and the ratio of muscle to cerebral regional oxygen saturation (mRSO₂/cRSO₂) by NIRS in preterm infants with shock. Patients and Methods: This cross-sectional study was conducted on preterm infants of 28-34 weeks gestational age admitted to the NICU of Dr. Soetomo Hospital from November to January 2022. Patients were classified into two groups: shock and non-shock. The diagnosis of shock was based on clinical criteria (tachycardia, prolonged CRT, cold extremities, decreased urine production, and mean arterial blood pressure (MAP) less than GA in weeks). Measurement of mRSO₂ and cRSO₂ by NIRS was performed by the doctor in charge when the patient arrived in the NICU. Results: We enrolled 40 preterm infants. The initial conventional hemodynamic parameters used as the basis for the diagnosis of shock showed significant differences in all variables. Preterm infants with shock had a higher mean HR (186.45±1.5), lower MAP (29.8±2.1), and lower SBP (45.1±4.28) than non-shock infants, and most had a prolonged CRT. The patients' outcomes were not significantly different between shock and non-shock patients. The mean mRSO₂ in the shock and non-shock groups was 33.65 ± 11.32 vs. 69.15 ± 3.96 (p=0.001), and the mean ratio mRSO₂/cRSO₂ was 0.45 ± 0.12 vs. 0.84 ± 0.43 (p=0.001); both were significantly different. The mean cRSO₂ in the shock and non-shock groups was 71.60 ± 4.90 vs. 81.85 ± 7.85 (p=0.082), not significantly different. Conclusion: The decrease of mRSO₂ and the ratio of mRSO₂/cRSO₂ can differentiate between shock and non-shock in preterm infants when cRSO₂ is still normal.

Keywords: preterm infant, regional muscle oxygen saturation, regional cerebral oxygen saturation, NIRS, shock

Procedia PDF Downloads 91
146 Assessment of Rainfall Erosivity, Comparison among Methods: Case of Kakheti, Georgia

Authors: Mariam Tsitsagi, Ana Berdzenishvili

Abstract:

Rainfall intensity change is one of the main indicators of climate change. It has a great influence on agriculture as one of the main factors causing soil erosion. Splash and sheet erosion are among the most prevalent and harmful types for agriculture. They are invisible to the eye at first, but the process gradually develops into stream-cutting erosion. Our study provides an assessment of rainfall erosivity potential using modern research methods in the Kakheti region. The region is the major provider of wheat and wine in the country. Kakheti is located in the eastern part of Georgia and is characterized by quite a variety of natural conditions. The climate is dry subtropical. To assess the exact rate of rainfall erosion potential, several years of rainfall data at short intervals are needed. Unfortunately, of the 250 meteorological stations active during the Soviet period, only 55 are active now, of which 5 are in the Kakheti region. Data on rainfall intensity in this region have been recorded since 1936, and rainfall erosive potential was assessed in some earlier papers, but since 1990 we have had no data on this factor, which in turn is a necessary parameter for determining rainfall erosivity potential. On the other hand, researchers and local communities suppose that rainfall intensity has been changing and that the number of haily days has been increasing. Therefore, finding a method that allows us to determine rainfall erosivity potential as accurately as possible in the Kakheti region is very important. The study period was divided into three sections: 1936-1963, 1963-1990 and 1990-2015. Rainfall erosivity potential was determined from the scientific literature and old meteorological station data for the first two periods. It is known that in eastern Georgia, at the boundary between the steppe and forest zones, rainfall erosivity in 1963-1990 was 20-75% higher than that in 1936-1963. For the third period (1990-2015), we do not have rainfall intensity data. There are a variety of studies in which alternative ways of calculating the rainfall erosivity potential in the absence of such data are discussed, e.g., based on daily rainfall data, average annual rainfall data and the elevation of the area, etc. It should be noted that these methods give totally different results under different climatic conditions and sometimes produce large errors. Three of the most common methods were selected for our research; one such proxy is sketched below. Each of them was tested on the first two sections of the study period. According to the outcomes, the method most suitable for the regional climatic conditions was selected, and with it we determined the rainfall erosivity potential for the third section of our study period. Outcome data such as attribute tables and graphs were linked to the spatial database of Kakheti, and appropriate thematic maps were created. The results allowed us to analyze the changes in rainfall erosivity potential from 1936 to the present and to make projections for the future. We have successfully implemented a method which can also be used for other regions of Georgia.
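As one example of the alternative-data approaches mentioned above (erosivity estimated without pluviograph records), the sketch below computes the Modified Fournier Index from monthly precipitation totals. The monthly values are hypothetical, and this index is not necessarily one of the three methods compared in the study.

```python
# Illustrative sketch of one common proxy used when high-resolution rainfall records are
# missing: the Modified Fournier Index, MFI = sum(p_i^2) / P, computed from monthly
# precipitation totals p_i and the annual total P. The monthly values are hypothetical.
monthly_mm = [28, 31, 42, 55, 78, 70, 48, 44, 51, 49, 40, 30]   # placeholder station data

annual = sum(monthly_mm)
mfi = sum(p ** 2 for p in monthly_mm) / annual
print(f"Annual rainfall = {annual} mm, Modified Fournier Index = {mfi:.1f}")
```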

Keywords: erosivity potential, Georgia, GIS, Kakheti, rainfall

Procedia PDF Downloads 225
145 Identification of Phenolic Compounds and Study of the Antimicrobial Property of Elaeocarpus Ganitrus Fruits

Authors: Velvizhi Dharmalingam, Rajalaksmi Ramalingam, Rekha Prabhu, Ilavarasan Raju

Abstract:

Background: The use of herbal products for various therapeutic regimens has increased tremendously in developing countries. Elaeocarpus ganitrus (Rudraksha) is a broad-leaved tree belonging to the family Elaeocarpaceae, found in tropical and subtropical areas. It is popular in indigenous systems of medicine like Ayurveda, Siddha, and Unani. According to Ayurvedic medicine, Rudraksha is used in the management of blood pressure, asthma, mental disorders, diabetes, gynaecological disorders, neurological disorders such as epilepsy, and liver diseases. Objectives: The present study aimed to determine the physicochemical parameters of Elaeocarpus ganitrus (fruits), to identify the phenolic compounds (gallic acid, ellagic acid, and chebulinic acid), and to estimate the microbial load and the antibacterial activity of Elaeocarpus ganitrus extract against selected pathogens. Methodology: The dried powdered fruit of Elaeocarpus ganitrus was evaluated for physicochemical parameters (such as loss on drying, alcohol-soluble extractive, water-soluble extractive, total ash and acid-insoluble ash), and pH was measured. The dried coarse powdered fruit of Elaeocarpus ganitrus was extracted successively with hexane, chloroform, ethyl acetate and aqueous alcohol by the cold percolation method. Identification of phenolic compounds (gallic acid, ellagic acid, chebulinic acid) was done by an HPTLC method and confirmed by co-TLC using different solvent systems. The successive extracts of Elaeocarpus ganitrus and the standards (gallic acid, ellagic acid, and chebulinic acid) were approximately weighed and made up with alcohol. HPTLC (CAMAG) analysis was performed on TLC silica gel 60F254 precoated aluminium plates, layer thickness 0.2 mm (E. Merck, Germany), using ATS4, Visualizer and Scanner at wavelengths of 254 nm and 366 nm, and derivatized with different reagents. The microbial load, including total bacterial count, total fungal count, Enterobacteria, Escherichia coli, Salmonella species, Staphylococcus aureus and Pseudomonas aeruginosa, was determined by the serial dilution method, and antibacterial activity was measured by the Kirby-Bauer method against selected pathogens. Results: The physicochemical parameters of Elaeocarpus ganitrus were studied for standardization of the crude drug. Phenolic compounds were identified in the successive extracts, and the Elaeocarpus ganitrus extract showed potent antibacterial activity against gram-positive and gram-negative bacteria.

Keywords: antimicrobial activity, Elaeocarpus ganitrus, HPTLC, phenolic compounds

Procedia PDF Downloads 342
144 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates the direct oxidation of ammonical nitrogen under anaerobic conditions with nitrite as the electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an anammox hybrid reactor (AHR) combining the dual advantages of suspended and attached growth media for the biodegradation of ammonical nitrogen in wastewater. The experimental unit consisted of four AHRs of 5 L capacity, inoculated with mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH₄-N and NO₂-N in the ratio 1:1 at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/l. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with the nitrogen loading rate (NLR) increasing from 0.4 to 4.8 kg N/m³·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. The filter media in the AHR contributed an additional 27.2% ammonium removal, in addition to a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for successful operation of high-rate bioreactors. The effluent nitrate concentration, which is one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d⁻¹. Model validation revealed that the Grau second-order model was more precise and predicted the effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N₂ gas production in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R²=0.986) and predicted N₂ gas with the least error of precision (0.12±8.49%). SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2 to 1.5 μm. Owing to the enhanced nitrogen removal efficiency coupled with meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be the most competitive reactor configuration for dealing with nitrogen-laden wastewater.
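The sketch below illustrates, on synthetic HRT/effluent data, how the two kinetic models named above are commonly fitted: a first-order rate constant from (S0 − Se)/HRT = k1·Se, and the linearized Grau second-order form (S0·HRT)/(S0 − Se) = a + b·HRT. It is not the authors' dataset or code.

```python
# Illustrative sketch (with synthetic data) of fitting the two kinetic models mentioned:
# first-order removal, (S0 - Se)/HRT = k1 * Se, and the linearized Grau second-order
# model, (S0 * HRT)/(S0 - Se) = a + b * HRT. Real fits would use the measured AHR data.
import numpy as np

S0 = 1200.0                                            # influent total nitrogen, mg/L
hrt = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0])        # d
Se = np.array([420.0, 210.0, 60.0, 45.0, 38.0, 30.0])  # synthetic effluent N, mg/L

# First-order rate constant from a least-squares fit through the origin
k1 = np.sum(((S0 - Se) / hrt) * Se) / np.sum(Se ** 2)

# Grau second-order: linear regression of (S0*HRT)/(S0-Se) against HRT
y = S0 * hrt / (S0 - Se)
b, a = np.polyfit(hrt, y, 1)
Se_grau = S0 * (1.0 - hrt / (a + b * hrt))             # model-predicted effluent N

print(f"first-order k1 ~ {k1:.1f} 1/d; Grau a ~ {a:.3f} d, b ~ {b:.3f}")
print("Grau-predicted effluent N (mg/L):", np.round(Se_grau, 1))
```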

Keywords: anammox, filter media, kinetics, nitrogen removal

Procedia PDF Downloads 382
143 Freight Forwarders’ Liability: A Need for Revival of Unidroit Draft Convention after Six Decades

Authors: Mojtaba Eshraghi Arani

Abstract:

Freight forwarders, who are known as the architects of transportation, play a vital role in supply chain management. The package of various services which they provide has made the legal nature of freight forwarders very controversial, so that they might be qualified in one case as principal or carrier and, on other occasions, as agent of the shipper, as the case may be. They could even be involved in the transportation process as the agent of the shipping line, which makes the situation much more complicated. The courts in all countries have long had trouble distinguishing the "forwarder as agent" from the "forwarder as principal" (as illustrated in the prominent case of Vastfame Camera Ltd v Birkart Globistics Ltd And Others, 2005, Hong Kong). In the case of a claim against the forwarder, it is not fully known which particular parameter the judge would use among the multiple, and sometimes contradictory, tests for determining the scope of the forwarder's liability. In particular, every country has its own legal parameters for qualifying freight forwarders, completely different from those of other countries, as is the case in France in comparison with Germany and England. The unpredictability of the courts' decisions in this regard has provided freight forwarders with the opportunity to impose any limitation or exception of liability while pretending to play the role of a principal, consequently making the cargo interests incur ever-increasing damage. The transportation industry needs to remove such uncertainty by unifying the national laws governing freight forwarders' liability. A long time ago, in 1967, the International Institute for the Unification of Private Law (UNIDROIT) prepared a draft convention called the "Draft Convention on Contract of Agency for Forwarding Agents Relating to International Carriage of Goods" (hereinafter called the "UNIDROIT draft convention"). The UNIDROIT draft convention provided a clear and certain framework for the liability of the freight forwarder in each capacity, as agent or carrier, but it failed to become a convention and was eventually consigned to oblivion. Today, nearly six decades later, the necessity of such a convention can be felt clearly. However, one might reason that the same grounds, in particular the resistance by the forwarders' association, FIATA, still exist, and thus it is not logical to revive a forgotten draft convention after such a long period of time. It is argued in this article that the main reason for resisting the UNIDROIT draft convention in the past was the pending effort to develop the 1980 United Nations Convention on International Multimodal Transport of Goods. However, the latter convention failed to come into force in due time, with no new accessions since 1996, as a result of which the UNIDROIT draft convention must be strongly revived and immediately submitted to the relevant diplomatic conference. A qualitative method based on the interpretation of the collected data has been used in this manuscript. The sources of the data are international conventions and case law.

Keywords: freight forwarder, revival, agent, principal, unidroit, draft convention

Procedia PDF Downloads 75
142 Association of Brain Derived Neurotrophic Factor with Iron as well as Vitamin D, Folate and Cobalamin in Pediatric Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

The impact of metabolic syndrome (MetS) on cognition and brain function is being investigated. Iron deficiency and deficiencies of vitamins B9 (folate) and B12 (cobalamin) are the best-known nutritional anemias. They are associated with cognitive disorders and learning difficulties. The antidepressant effects of vitamin D are known, and the deficiency state affects mental functions negatively. The aim of this study is to investigate possible correlations of MetS with serum brain-derived neurotrophic factor (BDNF), iron, folate, cobalamin and vitamin D in pediatric patients. Thirty children whose age- and sex-dependent body mass index (BMI) percentiles varied between 15 and 85, and sixty morbidly obese (MO) children with percentiles above the 99th, constituted the study population. Anthropometric measurements were taken. BMI values were calculated. Age- and sex-dependent BMI percentile values were obtained using the appropriate tables prepared by the World Health Organization (WHO). Obesity classification was performed according to WHO criteria. Those with MetS were evaluated according to MetS criteria. Serum BDNF was determined by enzyme-linked immunosorbent assay. Serum folate was analyzed by an immunoassay analyzer. Serum cobalamin concentrations were measured using electrochemiluminescence immunoassay. Vitamin D status was determined by the measurement of 25-hydroxycholecalciferol [25-hydroxyvitamin D3, 25(OH)D] using high performance liquid chromatography. Statistical evaluations were performed using SPSS for Windows, version 16. p values less than 0.05 were accepted as statistically significant. Although statistically insignificant, lower folate and cobalamin values were found in MO children compared to those observed for children with normal BMI. For iron and BDNF values, no alterations were detected among the groups. Significantly decreased vitamin D concentrations were noted in MO children with MetS in comparison with those in children with normal BMI (p ≤ 0.05). The positive correlation observed between iron and BDNF in the normal-BMI group was not found in the two MO groups. In the MetS group, the partial correlation among iron, BDNF, folate, cobalamin and vitamin D, controlling for waist circumference and BMI, was r = -0.501 (p ≤ 0.05). No such correlation was found in the MO and normal-BMI groups. In conclusion, vitamin D should also be considered during the assessment of pediatric MetS. Waist circumference and BMI should be evaluated collectively during the evaluation of MetS in children. Within this context, BDNF appears to be a key biochemical parameter during the examination of the degree of obesity in terms of mental functions, cognition and learning capacity. The association observed between iron and BDNF in children with normal BMI was not detected in the MO groups, possibly due to the development of inflammation and other obesity-related pathologies. It is suggested that this finding may contribute to the mental function impairments commonly observed among obese children.

Keywords: brain-derived neurotrophic factor, iron, vitamin B9, vitamin B12, vitamin D

Procedia PDF Downloads 121
141 Effect of Minimalist Footwear on Running Economy Following Exercise-Induced Fatigue

Authors: Jason Blair, Adeboye Adebayo, Mohamed Saad, Jeannette M. Byrne, Fabien A. Basset

Abstract:

Running economy is a key physiological parameter of an individual's running efficacy and a valid tool for predicting performance outcomes. Of the many factors known to influence running economy (RE), footwear certainly plays a role owing to its characteristics, which vary substantially from model to model. Although minimalist footwear is believed to enhance RE and thereby endurance performance, conclusive research reports are scarce. Indeed, debate remains as to which footwear characteristics most alter RE. The purposes of this study were, therefore, two-fold: (a) to determine whether wearing minimalist shoes results in better RE compared to shod running and to identify relationships with kinematic and muscle activation patterns; (b) to determine whether changes in RE with minimalist shoes are still evident following a fatiguing bout of exercise. Well-trained male distance runners (n=10; 29.0 ± 7.5 yrs; 71.0 ± 4.8 kg; 176.3 ± 6.5 cm) first partook in a maximal O₂ uptake determination test (VO₂ₘₐₓ = 61.6 ± 7.3 ml min⁻¹ kg⁻¹) 7 days prior to the experimental sessions. Second, in a fully randomized fashion, an RE test consisting of three 8-min treadmill runs in shod and minimalist footwear was performed prior to and following exercise-induced fatigue (EIF). The minimalist and shod conditions were tested with a minimum 7-day wash-out period between conditions. The RE bouts, interspaced by 2-min rest periods, were run at 2.79, 3.33, and 3.89 m s⁻¹ with a 1% grade. EIF consisted of 7 times 1000 m at 94-97% VO₂ₘₐₓ interspaced with 3-min recovery. Cardiorespiratory, electromyography (EMG), kinematic, rate of perceived exertion (RPE) and blood lactate measures were taken throughout the experimental sessions. A significant main effect of speed on RE (p=0.001) and stride frequency (SF) (p=0.001) was observed. Pairwise comparisons showed that running at 2.79 m s⁻¹ was less economical than running at 3.33 and 3.89 m s⁻¹ (3.56 ± 0.38, 3.41 ± 0.45, 3.40 ± 0.45 ml O₂ kg⁻¹ km⁻¹, respectively) and that SF increased as a function of speed (79 ± 5, 82 ± 5, 84 ± 5 strides min⁻¹). Further, EMG analyses revealed that root mean square EMG significantly increased as a function of speed for all muscles (biceps femoris, gluteus maximus, gastrocnemius, tibialis anterior, vastus lateralis). During EIF, the statistical analysis revealed a significant main effect of time on lactate production (from 2.7 ± 5.7 to 11.2 ± 6.2 mmol L⁻¹), RPE scores (from 7.6 ± 4.0 to 18.4 ± 2.7) and peak HR (from 171 ± 30 to 181 ± 20 bpm), except for the recovery period. Surprisingly, a significant main effect of footwear on running speed during the intervals was observed (p=0.041). Participants ran faster in minimalist shoes than shod (3:24 ± 0:44 min [95%CI: 3:14-3:34] vs. 3:30 ± 0:47 min [95%CI: 3:19-3:41]). Although EIF altered lactate production and RPE scores, no other effect was noticeable on RE, EMG, and SF pre- and post-EIF, except for the expected speed effect. The significant footwear effect on running speed during EIF was unforeseen but could be due to shoe mass and/or heel-toe-drop differences. We also cannot discard an effect of speed on foot-strike pattern and, therefore, running performance.

Keywords: exercise-induced fatigue, interval training, minimalist footwear, running economy

Procedia PDF Downloads 248
140 Schema Therapy as Treatment for Adults with Autism Spectrum Disorder and Comorbid Personality Disorder: A Multiple Baseline Case Series Study Testing Cognitive-Behavioral and Experiential Interventions

Authors: Richard Vuijk, Arnoud Arntz

Abstract:

Rationale: To our knowledge, treatment of personality disorder comorbidity in adults with autism spectrum disorder (ASD) is understudied and still in its infancy: we do not know whether treatments for personality disorders are applicable to adults with ASD. In particular, it is unknown whether patients with ASD benefit from the experiential techniques that are part of schema therapy developed for the treatment of personality disorders. Objective: The aim of the study is to investigate the efficacy of a schema mode focused treatment in adult clients with ASD and comorbid personality pathology (i.e. at least one personality disorder). Specifically, we investigate whether they can benefit from both cognitive-behavioral and experiential interventions. Study design: A multiple baseline case series study. Study population: Adult individuals (age > 21 years) with ASD and at least one personality disorder. Participants will be recruited from the Sarr expertise center for autism in Rotterdam. The study requires 12 participants. Intervention: The treatment protocol consists of 35 weekly sessions, followed by 10 monthly booster sessions. A multiple baseline design will be used, with baseline length varying from 5 to 10 weeks with weekly supportive sessions. After baseline, a 5-week exploration phase follows, with weekly sessions during which current and past functioning, psychological symptoms and schema modes are explored and information about the treatment is given. Then 15 weekly sessions with cognitive-behavioral interventions and 15 weekly sessions with experiential interventions will be given. Finally, there will be a 10-month follow-up phase with monthly booster sessions. Participants are randomly assigned to baseline length, respond weekly during treatment and monthly at follow-up on the belief strength of negative core beliefs (by VAS), and fill out the SMI, SCL-90 and SRS-A 7 times: during the screening procedure (i.e. before baseline), after baseline, after exploration, after the cognitive and behavioral interventions, after the experiential interventions, and after 5- and 10-month follow-up. The SCID-II will be administered during the screening procedure (i.e. before baseline) and at the 5- and 10-month follow-ups. Main study parameters: The primary study parameter is negative core beliefs. Secondary study parameters include schema modes, personality disorder manifestations, psychological symptoms, and social interaction and communication. Discussion: To the best of the authors' knowledge, no study has so far been published on the application of schema mode focused interventions in adult patients with ASD and comorbid PD(s). This study offers the first systematic test of the application of schema therapy for adults with ASD. The results of this study will provide initial evidence for the effectiveness of schema therapy in treating adults with both ASD and PD(s). The study intends to provide valuable information for the future development and implementation of therapeutic interventions for adults with both ASD and PD(s).

Keywords: adults, autism spectrum disorder, personality disorder, schema therapy

Procedia PDF Downloads 239
139 Combined Treatment of Estrogen-Receptor Positive Breast Microtumors with 4-Hydroxytamoxifen and Novel Non-Steroidal Diethyl Stilbestrol-Like Analog Produces Enhanced Preclinical Treatment Response and Decreased Drug Resistance

Authors: Sarah Crawford, Gerry Lesley

Abstract:

This research is a preclinical assessment of the anti-cancer effects of novel non-steroidal diethylstilbestrol-like estrogen analogs in estrogen-receptor-positive/progesterone-receptor-positive human breast cancer microtumors of the MCF-7 cell line. The tamoxifen analog formulation (Tam A1) was used as a single agent or in combination with therapeutic concentrations of 4-hydroxytamoxifen, currently used as a long-term treatment for the prevention of breast cancer recurrence in women with estrogen-receptor-positive/progesterone-receptor-positive malignancies. At concentrations ranging from 30-50 microM, Tam A1 induced microtumor disaggregation and cell death. Incremental cytotoxic effects correlated with increasing concentrations of Tam A1. Live tumor microscopy showed that microtumors displayed diffuse borders and that substrate-attached cells were rounded up and poorly adherent. A complete cytotoxic effect was observed using 40-50 microM Tam A1, with time course kinetics similar to 4-hydroxytamoxifen. Combined treatment with Tam A1 (30-50 microM) and 4-hydroxytamoxifen (10-15 microM) induced a highly cytotoxic, synergistic combined treatment response that was more rapid and complete than 4-hydroxytamoxifen used as a single-agent therapeutic. Microtumors completely dispersed or formed necrotic foci, indicating a highly cytotoxic combined treatment response. Moreover, breast cancer microtumors treated with both 4-hydroxytamoxifen and Tam A1 displayed lower levels of long-term post-treatment regrowth, a critical parameter of primary drug resistance, than observed for 4-hydroxytamoxifen used as a single-agent therapeutic. Tumor regrowth at 6 weeks post-treatment with either single-agent 4-hydroxytamoxifen, Tam A1 or the combined treatment was assessed for the development of drug resistance. Breast cancer cells treated with both 4-hydroxytamoxifen and Tam A1 displayed significantly lower levels of post-treatment regrowth, indicative of decreased drug resistance, than observed for either single treatment modality. The preclinical data suggest that combined treatment involving the use of tamoxifen analogs may be a novel clinical approach for long-term maintenance therapy in patients with estrogen-receptor-positive/progesterone-receptor-positive breast cancer receiving hormonal therapy to prevent disease recurrence. Detailed data on time-course, IC50 and tumor regrowth assays post-treatment, as well as a proposed mechanism of action to account for the observed synergistic drug effects, will be presented.

Keywords: 4-hydroxytamoxifen, tamoxifen analog, drug-resistance, microtumors

Procedia PDF Downloads 69
138 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia, has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada's largest contaminated sites. Since its closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediate the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of the remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multiple-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, harbour sediments. Additionally, the longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality in addition to remedial efforts were evaluated, which included a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, due to natural recovery. Measurements of sediment indicator parameter concentrations confirmed that the natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured parameters in sediments showed little temporal variability, even when using different sampling methodologies, during three years of remediation compared to baseline, except for the detection of significant increases in total PAH concentrations noted during one year of remediation monitoring. The data confirmed the effectiveness of the mitigation measures implemented during construction relative to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour.

Keywords: contaminated sediment, monitoring, recovery, remediation

Procedia PDF Downloads 237
137 Importance of Different Spatial Parameters in Water Quality Analysis within Intensive Agricultural Area

Authors: Marina Bubalo, Davor Romić, Stjepan Husnjak, Helena Bakić

Abstract:

Even though European Council Directive 91/676/EEC, known as the Nitrates Directive, was adopted in 1991, the issue of water quality preservation in areas of intensive agricultural production still persists all over Europe. High nitrate nitrogen concentrations in surface water and groundwater originating from diffuse sources are one of the most important environmental problems in modern intensive agriculture. The fate of nitrogen in soil, surface water and groundwater in agricultural areas is mostly determined by anthropogenic activity (i.e., agricultural practice) and by hydrological and climatological conditions. The aim of this study was to identify the impact of land use, soil type, soil vulnerability to pollutant percolation, and natural aquifer vulnerability on nitrate occurrence in surface water and groundwater within an intensive agricultural area. The study was set in Varaždin County (northern Croatia), which is under the significant influence of the large rivers Drava and Mura; as a result, the entire area is dominated by alluvial soil with a shallow active profile, mainly on a gravel base. The negative agricultural impact on water quality in this area is evident, and therefore half of the selected county is part of the delineated nitrate vulnerable zones (NVZ). Data on water quality were collected from 7 surface water and 8 groundwater monitoring stations in the county. A recent study of the area also included a detailed inventory of agricultural production and fertilizer use, with the aim of producing a new agricultural land use database as one of the dominant parameters. The analysis of this database, done using ArcGIS 10.1, showed that 52.7% of the total county area is agricultural land and that 59.2% of the agricultural land is used for intensive agricultural production. On the other hand, 56% of the soil within the county is classified as vulnerable to pollutant percolation. The situation is similar for natural aquifer vulnerability; the northern part of the county ranges from high to very high aquifer vulnerability. Statistical analysis of the water quality data was done using SPSS 13.0. Cluster analysis grouped both the surface water and the groundwater stations into two groups according to nitrate nitrogen concentrations. The mean nitrate nitrogen concentration in surface water group 1 ranges from 4.2 to 5.5 mg/l and in surface water group 2 from 24 to 42 mg/l. The results are similar, but evidently higher, in the groundwater samples; the mean nitrate nitrogen concentration in group 1 ranges from 3.9 to 17 mg/l and in group 2 from 36 to 96 mg/l. ANOVA confirmed the statistical significance of this grouping of stations. The previously listed parameters (land use, soil type, etc.) were then used in a factorial correspondence analysis (FCA) to detect the importance of each parameter for local water quality. Since most of these parameters cannot be altered, more precise and better adapted land management is clearly needed under such conditions.
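
As an illustration of the statistical workflow described above (two-group clustering of stations by nitrate nitrogen concentration followed by ANOVA), the following Python sketch reproduces the idea with hypothetical station values; it is not the authors' SPSS 13.0 analysis.

```python
# Illustrative sketch (not the authors' SPSS workflow): two-group clustering of
# monitoring stations by mean nitrate-nitrogen concentration, followed by a
# one-way ANOVA on the resulting groups. Station values are hypothetical.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

# Mean NO3-N concentrations (mg/l) per monitoring station (hypothetical)
no3_n = np.array([4.2, 4.8, 5.5, 24.0, 31.5, 42.0, 5.1]).reshape(-1, 1)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(no3_n)
group1 = no3_n[labels == 0].ravel()
group2 = no3_n[labels == 1].ravel()

# One-way ANOVA: do the two groups of stations differ significantly?
f_stat, p_value = stats.f_oneway(group1, group2)
print(f"group means: {group1.mean():.1f} vs {group2.mean():.1f} mg/l, p = {p_value:.4f}")
```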

Keywords: agricultural area, nitrate, factorial correspondence analysis, water quality

Procedia PDF Downloads 259
136 Evaluation of Tensile Strength of Natural Fibres Reinforced Epoxy Composites Using Fly Ash as Filler Material

Authors: Balwinder Singh, Veerpaul Kaur Mann

Abstract:

A composite material is formed by the combination of two or more phases or materials. Basalt fibre, derived from natural minerals, is being introduced into the polymer composite industry because its mechanical properties are comparable to those of synthetic fibres while remaining low cost and environmentally friendly. There is also a rising trend towards using industrial wastes as fillers in polymer composites with the aim of improving composite properties. The mechanical properties of fibre-reinforced polymer composites are influenced by factors such as fibre length, fibre weight %, filler weight % and filler size. A detailed characterization of short-chopped basalt fibre-reinforced polymer matrix composites using fly ash as filler has therefore been carried out. Taguchi’s L9 orthogonal array was used to develop the composites, with fibre length (6, 9 and 12 mm), fibre weight % (25, 30 and 35%) and filler weight % (0, 5 and 10%) as input parameters at their respective levels, and a thorough analysis of the mechanical characteristics (tensile strength and impact strength) was done using ANOVA in MINITAB 14. The investigation revealed that fibre weight % is the most significant parameter affecting tensile strength, followed by fibre length, while the impact characterization showed that fibre length is the most significant factor, followed by fly ash weight %. The introduction of fly ash proved beneficial in both characterizations, with enhanced values up to 5% fly ash weight. The present study also examines natural fibre-reinforced epoxy composites using fly ash as filler material in order to study the effect of the input parameters on tensile strength and to maximize the tensile strength of the composites. Composites were fabricated based on a Taguchi L9 orthogonal array design of experiments using three factors, fibre type, fibre weight % and fly ash %, each at three levels. Optimization of the composition of the natural fibre-reinforced composites using ANOVA to obtain maximum tensile strength showed that natural fibres together with fly ash can be successfully used with epoxy resin to prepare polymer matrix composites with good mechanical properties. Paddy fibre gives the composite high elasticity owing to the approximately hexagonal structure of the cellulose present in the fibre. Coir fibre gives a lower tensile strength than paddy fibre because coir is brittle in nature and breaks when pulled. Banana fibre has the lowest tensile strength of the three owing to its lower cellulose content. Higher fibre weight leads to a reduction in tensile strength because of an increased number of air-pocket nuclei. Increasing the fly ash content also reduces tensile strength, because the fly ash particles do not bond with the natural fibres and fly ash is not as strong as the epoxy resin.
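
The following Python sketch illustrates the kind of Taguchi L9 main-effects analysis described above, using the three factor levels from the abstract and hypothetical tensile-strength responses; the study itself used ANOVA in MINITAB 14, so this is only an indicative reconstruction.

```python
# Minimal sketch of a Taguchi L9 main-effects analysis, assuming three
# three-level factors (fibre length, fibre wt%, fly ash wt%); the tensile
# strengths are hypothetical placeholders, not measured study data.
import pandas as pd

L9 = pd.DataFrame({
    "fibre_length_mm": [6, 6, 6, 9, 9, 9, 12, 12, 12],
    "fibre_wt_pct":    [25, 30, 35, 25, 30, 35, 25, 30, 35],
    "fly_ash_wt_pct":  [0, 5, 10, 5, 10, 0, 10, 0, 5],   # standard L9 third column
    "tensile_MPa":     [41, 46, 39, 44, 49, 43, 42, 47, 40],  # hypothetical
})

# Main effect of each factor: mean response at every level; the factor with
# the largest range (delta) is ranked as most significant in Taguchi analysis.
for factor in ["fibre_length_mm", "fibre_wt_pct", "fly_ash_wt_pct"]:
    means = L9.groupby(factor)["tensile_MPa"].mean()
    print(factor, means.to_dict(), "delta =", round(means.max() - means.min(), 2))
```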

Keywords: tensile strength, epoxy resin, basalt fiber, Taguchi, polymer matrix, natural fiber

Procedia PDF Downloads 49
135 Removal of Heavy Metal Ions from Aqueous Solution by Polymer Enhanced Ultrafiltration Using Unmodified Starch as Biopolymer

Authors: Nurul Huda Baharuddin, Nik Meriam Nik Sulaiman, Mohammed Kheireddine Aroua

Abstract:

The effects of pH, polymer concentration and metal ion feed concentration on the removal of four selected heavy metals, Zn (II), Pb (II), Cr (III) and Cr (VI), were tested using polymer-enhanced ultrafiltration (PEUF). An alternative biopolymer, unmodified starch, is proposed as a binding reagent and compared with the commonly used water-soluble polymers polyethylene glycol (PEG) and polyethyleneimine (PEI) for the removal of the four heavy metal ions. The speciation profiles of the selected Zn (II), Pb (II), Cr (III) and Cr (VI) complexes, and the presence of hydroxide ions (OH-) as variously charged species, were investigated with available speciation software over the pH range of interest. To identify the potential complexation behaviour between the metal ions and the polymers, potentiometric titration studies were carried out before the experimental work. The experiments were performed on a laboratory bench-scale ultrafiltration system equipped with a 10 kDa polysulfone hollow-fibre membrane. Throughout the laboratory work, the rejection coefficient and permeate flux were found to be significantly affected by the main operating parameters, namely pH, polymer composition and metal ion concentration. Complexation with the two binding polymers unmodified starch and PEG occurred through physical attraction of the metal ions to the polymer molecular surface, with a high possibility of chemical interaction, whereas the selected metal ions were mainly complexed by polymer functional groups when interacting with PEI. For single metal ion solutions, Zn (II) rejections approaching over 90% were obtained at pH 7 for each tested polymer. The behaviour was similar for Pb (II), Cr (III) and Cr (VI), where the rejections were obtained at lower, acidic pH values and increased at the neutral pH of 7. Different behaviour was found for Cr (VI) ions, where a high rejection was only achieved in the acidic pH region with PEI. Polymer concentration and metal ion concentration were found to have a significant effect on the rejections. For mixed metal ion solutions, the rejection behaviour with respect to pH was similar to that of the single metal ion solutions. Rejection values were high at pH 7 for Zn (II) and Cr (III) ions, corresponding to higher rejections with unmodified starch, while Pb (II) ions showed high rejections with PEG in the mixed metal ion solutions. High Cr (VI) rejection was found with PEI in both single and mixed metal ion solutions in the neutral pH range. The influence of the starch granule structure on the rejection of these four metal ions is attributed to attraction in a non-ionic manner. No significant effects on permeate flux were observed at the different pH values, polymer concentrations and metal ion feed concentrations, for either single or mixed metal ion solutions. The Canizares model was employed as the theoretical model to predict permeate flux and metal ion retention in this study of heavy metal ion removal.
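
For readers unfamiliar with the two quantities tracked throughout PEUF experiments, the sketch below shows the standard definitions of the rejection coefficient, R = 1 - Cp/Cf, and the permeate flux, J = V/(A*t); the numerical values are hypothetical and are not taken from the study.

```python
# Back-of-the-envelope sketch of the two quantities reported in PEUF studies:
# rejection coefficient R = 1 - Cp/Cf and permeate flux J = V/(A*t).
# Concentrations, membrane area and volumes below are hypothetical.
def rejection_coefficient(c_permeate: float, c_feed: float) -> float:
    """Fraction of metal ions retained by the polymer/membrane system."""
    return 1.0 - c_permeate / c_feed

def permeate_flux(volume_l: float, area_m2: float, time_h: float) -> float:
    """Permeate flux in L per m^2 per hour."""
    return volume_l / (area_m2 * time_h)

# Example: Zn(II) feed of 10 mg/l with 0.8 mg/l found in the permeate at pH 7
print(f"R = {rejection_coefficient(0.8, 10.0):.2%}")          # ~92% rejection
print(f"J = {permeate_flux(0.5, 0.01, 1.0):.1f} L/(m^2 h)")   # hypothetical flux
```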

Keywords: polyethyleneimine, polyethylene glycol, polymer-enhanced ultrafiltration, unmodified starch

Procedia PDF Downloads 178
134 Chebyshev Collocation Method for Solving Heat Transfer Analysis for Squeezing Flow of Nanofluid in Parallel Disks

Authors: Mustapha Rilwan Adewale, Salau Ayobami Muhammed

Abstract:

This study focuses on the heat transfer analysis of magneto-hydrodynamic (MHD) squeezing flow between parallel disks, considering a viscous incompressible fluid. The upper disk exhibits both upward and downward motion, while the lower disk remains stationary but permeable. By employing similarity transformations, the complex governing equations are simplified into a system of nonlinear ordinary differential equations that describes the flow behaviour. To solve this system, a numerical approach, namely the Chebyshev collocation method, is utilized; it provides accurate approximations of the nonlinear equations and enables efficient computation of the heat transfer properties. The study investigates the influence of the flow parameters and compares the obtained results with the existing literature, establishing the validity and consistency of the numerical approach. The significance of this research lies in understanding the heat transfer characteristics of MHD squeezing flow, which has practical implications in various engineering and industrial applications. The major findings shed light on the influence of the flow parameters on the heat transfer characteristics of the squeezing flow: the analysis reveals the impact of the squeeze number (S), the suction/injection parameter (A), the Hartmann number (M), the Prandtl number (Pr), the modified Eckert number (Ec) and the dimensionless length (δ), which together capture effects such as magnetic field strength, disk motion and fluid viscosity, on the heat transfer rate between the disks. These findings contribute to a comprehensive understanding of the system's behaviour and provide insights for optimizing heat transfer processes in similar configurations. In conclusion, this study presents a thorough heat transfer analysis of magneto-hydrodynamic squeezing flow between parallel disks. The numerical solutions obtained through the Chebyshev collocation method demonstrate the feasibility and accuracy of the approach, and the agreement of the results with previous literature further strengthens the reliability of the findings. These outcomes have practical implications for engineering applications and pave the way for further research in related areas.
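
To make the numerical approach concrete, the sketch below applies Chebyshev collocation to a toy nonlinear boundary-value problem (u'' + u^2 = 1 with u(-1) = u(1) = 0); it is not the MHD squeezing-flow system, only a minimal Python illustration of how collocation at Chebyshev nodes turns a nonlinear ODE into an algebraic system.

```python
# Toy illustration of the Chebyshev collocation idea on a simple nonlinear
# boundary-value problem; the MHD squeezing-flow equations are more involved,
# but the discretization principle is the same.
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import fsolve

N = 16                                     # polynomial degree
x = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev-Gauss-Lobatto nodes
V0 = C.chebvander(x, N)                    # values of the basis T_k at the nodes

def residual(a):
    u = V0 @ a                                       # u at the nodes
    d2a = C.chebder(a, 2)                            # coefficients of u''
    u_xx = C.chebvander(x, len(d2a) - 1) @ d2a       # u'' at the nodes
    res = u_xx + u**2 - 1.0                          # ODE residual at all nodes
    res[0] = u[0]                                    # boundary condition at x = +1
    res[-1] = u[-1]                                  # boundary condition at x = -1
    return res

a = fsolve(residual, np.zeros(N + 1))                # solve the algebraic system
print("u(0) =", C.chebval(0.0, a))
```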

Keywords: squeezing flow, magneto-hydrodynamics (MHD), Chebyshev collocation method (CCM), parallel disks, finite difference method (FDM)

Procedia PDF Downloads 77
133 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂

Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang

Abstract:

CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, the reduction of costs is crucial, and for a large cost reduction every part of the CCS chain must contribute. By increasing the heat transfer efficiency during liquefaction of CO₂, which is a necessary step for, e.g., ship transportation, the costs associated with the process are reduced. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface. The vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of the condensed droplets. This, in turn, exposes fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves have a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation, due to its impact on the solid-liquid contact angle: a low surface tension usually results in a low contact angle and hence in spreading of the condensed liquid on the surface. CO₂ has a very low surface tension compared to water. At the temperatures and pressures relevant for CO₂ condensation, its surface tension is comparable to that of organic compounds such as pentane, and dropwise condensation of CO₂ is a completely new field of research. Therefore, knowledge of several important parameters such as contact angle and drop size distribution must be gained in order to understand the nature of the condensation. A new setup has been built to measure these relevant parameters. The main parts of the experimental setup are a pressure chamber in which the condensation occurs and a high-speed camera. The process of CO₂ condensation is visually monitored, and one can determine the contact angle, the contact angle hysteresis and hence the surface adhesion of the liquid. CO₂ condensation on different surfaces, e.g. copper, aluminium and stainless steel, can be analysed. The experimental setup is built for accurate measurement of the temperature difference between the surface and the condensing vapour and for accurate pressure measurements in the vapour; the temperature will be measured directly underneath the condensing surface. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards non-polar organic liquids are candidate surface structures for dropwise condensation of CO₂.

Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces

Procedia PDF Downloads 278
132 Evolution of Microstructure through Phase Separation via Spinodal Decomposition in Spinel Ferrite Thin Films

Authors: Nipa Debnath, Harinarayan Das, Takahiko Kawaguchi, Naonori Sakamoto, Kazuo Shinozaki, Hisao Suzuki, Naoki Wakiya

Abstract:

Nowadays, spinel ferrite magnetic thin films have drawn considerable attention due to their interesting magnetic and electrical properties together with enhanced chemical and thermal stability. Spinel ferrite magnetic films can be implemented in magnetic data storage, sensors, and spin filters or microwave devices. It is well established that the structural, magnetic and transport properties of magnetic thin films depend on their microstructure. Spinodal decomposition (SD) is a phase separation process whereby a material system spontaneously separates into two phases with distinct compositions. A periodic microstructure is the characteristic feature of SD; thus, SD can be exploited to control the microstructure at the nanoscale level. In bulk spinel ferrites with the general formula MₓFe₃₋ₓO₄ (M = Co, Mn, Ni, Zn), phase separation via SD has been reported only for cobalt ferrite (CFO); however, long post-annealing times are required for the spinodal decomposition to occur. We have found that SD occurs in CFO thin films without any post-deposition annealing process if a magnetic field is applied during thin film growth. Dynamic Aurora pulsed laser deposition (PLD) is a specially designed PLD system through which an in-situ magnetic field (up to 2000 G) can be applied during thin film growth. The in-situ magnetic field suppresses the recombination of ions in the plume. In addition, the intensity of the ion peaks in the plume spectra increases when the magnetic field is applied. As a result, ions with high kinetic energy strike the substrate, so ion impingement occurs under the magnetic field during thin film growth. The driving force of SD is this ion impingement towards the substrate induced by the in-situ magnetic field. In this study, we report the occurrence of phase separation through SD and the evolution of the microstructure after phase separation in spinel ferrite thin films. The surface morphology of the phase-separated films shows a checkerboard-like domain structure, and their cross-sectional microstructure reveals columnar-type phase separation. Herein, the decomposition wave propagates in the lateral direction, which has been confirmed from the lateral composition modulations in the spinodally decomposed films. Large magnetic anisotropy has been found in spinodally decomposed nickel ferrite (NFO) thin films. This approach confirms that the magnetic field is also an important thermodynamic parameter for inducing phase separation through the enhancement of uphill diffusion in thin films. This thin film deposition technique could be an efficient alternative for the fabrication of self-organized, phase-separated thin films and could be employed to control microstructure at the nanoscale level.

Keywords: Dynamic Aurora PLD, magnetic anisotropy, spinodal decomposition, spinel ferrite thin film

Procedia PDF Downloads 367
131 Global-Scale Evaluation of Two Satellite-Based Passive Microwave Soil Moisture Data Sets (SMOS and AMSR-E) with Respect to Modelled Estimates

Authors: A. Alyaari, J. P. Wigneron, A. Ducharne, Y. Kerr, P. de Rosnay, R. de Jeu, A. Govind, A. Al Bitar, C. Albergel, J. Sabater, C. Moisy, P. Richaume, A. Mialon

Abstract:

Global Level-3 surface soil moisture (SSM) maps from the passive microwave Soil Moisture and Ocean Salinity (SMOS) satellite have been released. To further improve the Level-3 retrieval algorithm, an evaluation of the accuracy of the spatio-temporal variability of the SMOS Level-3 products (referred to here as SMOSL3) is necessary. In this study, a comparative analysis of SMOSL3 with an SSM product derived from the observations of the Advanced Microwave Scanning Radiometer (AMSR-E), computed by implementing the Land Parameter Retrieval Model (LPRM) algorithm and referred to here as AMSRM, is presented. Both products (SMOSL3 and AMSRM) were compared against SSM products produced by a numerical weather prediction system (SM-DAS-2) at ECMWF (European Centre for Medium-Range Weather Forecasts) for the 03/2010-09/2011 period at the global scale. The latter product was considered here the 'reference' product for the inter-comparison of the SMOSL3 and AMSRM products. Three statistical criteria were used for the evaluation: the correlation coefficient (R), the root-mean-squared difference (RMSD), and the bias. Global maps of these criteria were computed, taking into account vegetation information in terms of biome types and Leaf Area Index (LAI). We found that both the SMOSL3 and AMSRM products captured the spatio-temporal variability of the SM-DAS-2 SSM products well in most of the biomes. In general, the AMSRM products overestimated (i.e., wet bias) while the SMOSL3 products underestimated (i.e., dry bias) SSM in comparison to the SM-DAS-2 SSM products. In terms of correlation values, the SMOSL3 products better captured the SSM temporal dynamics in highly vegetated biomes ('Tropical humid', 'Temperate humid', etc.), while the best results for AMSRM were obtained over arid and semi-arid biomes ('Desert temperate', 'Desert tropical', etc.). When the seasonal cycles were removed from the SSM time series to compute anomaly values, better correlations with the SM-DAS-2 SSM anomalies were obtained with SMOSL3 than with AMSRM in most of the biomes, with the exception of desert regions. Finally, we showed that the accuracy of the remotely sensed SSM products is strongly related to LAI. Both the SMOSL3 and AMSRM (slightly better) SSM products correlate well with the SM-DAS-2 products over regions with sparse vegetation for values of LAI < 1 (these regions represent almost 50% of the pixels considered in this global study). In regions where LAI > 1, SMOSL3 outperformed AMSRM with respect to SM-DAS-2: SMOSL3 had almost consistent performance up to LAI = 6, whereas AMSRM performance deteriorated rapidly with increasing values of LAI.
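
The three evaluation criteria named above (R, RMSD and bias), as well as the anomaly correlation obtained after removing a seasonal cycle, can be computed as in the following Python sketch; the soil moisture series used here are synthetic placeholders for a single grid pixel, not SMOS, AMSR-E or SM-DAS-2 data.

```python
# Sketch of the evaluation criteria (R, RMSD, bias) and of an anomaly
# correlation after removing a seasonal cycle; the series below are synthetic
# stand-ins for one pixel of a satellite and a reference SSM product.
import numpy as np

def evaluate(sat, ref):
    r = np.corrcoef(sat, ref)[0, 1]              # correlation coefficient
    rmsd = np.sqrt(np.mean((sat - ref) ** 2))    # root-mean-squared difference
    bias = np.mean(sat - ref)                    # > 0: wet bias, < 0: dry bias
    return r, rmsd, bias

rng = np.random.default_rng(0)
t = np.arange(365)
ref = 0.25 + 0.10 * np.sin(2 * np.pi * t / 365) + 0.02 * rng.standard_normal(365)
sat = ref - 0.03 + 0.03 * rng.standard_normal(365)   # noisier, dry-biased retrieval

print("R = %.2f, RMSD = %.3f, bias = %.3f m3/m3" % evaluate(sat, ref))

def seasonal(x, window=35):
    """Crude seasonal-cycle estimate via a moving average."""
    return np.convolve(x, np.ones(window) / window, mode="same")

# Anomaly correlation: remove each series' seasonal cycle before computing R
print("anomaly R = %.2f" % np.corrcoef(sat - seasonal(sat), ref - seasonal(ref))[0, 1])
```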

Keywords: remote sensing, microwave, soil moisture, AMSR-E, SMOS

Procedia PDF Downloads 357
130 Prominent Lipid Parameters Correlated with Trunk-to-Leg and Appendicular Fat Ratios in Severe Pediatric Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

The examination of both serum lipid fractions and the body's lipid composition is quite informative during the evaluation of obesity stages. Within this context, alterations in lipid parameters are commonly observed, and variations in the fat distribution of the body are also noteworthy. Total cholesterol (TC), triglycerides (TRG), low density lipoprotein-cholesterol (LDL-C) and high density lipoprotein-cholesterol (HDL-C) are considered the basic lipid fractions. Fat deposited in the trunk and extremities may give a considerable amount of information and convey different messages in distinct health states, and ratios can be derived from the fat distribution in these areas. The trunk-to-leg fat ratio (TLFR) and trunk-to-appendicular fat ratio (TAFR) are the most recently introduced of these ratios. In this study, the lipid fractions as well as TLFR and TAFR were evaluated, and the distinctions among healthy, obese (OB), and morbid obese (MO) groups were investigated. Three groups [normal body mass index (N-BMI), OB, MO] were constituted from a population aged 6 to 18 years, and the ages and sexes of the groups were matched. The study protocol was approved by the Non-interventional Ethics Committee of Tekirdag Namik Kemal University, and written informed consent forms were obtained from the parents of the participants. Anthropometric measurements (height, weight, waist circumference, hip circumference, head circumference, neck circumference) were obtained and recorded during the physical examination, and body mass index values were calculated. Total, trunk, leg, and arm fat mass values were obtained by TANITA bioelectrical impedance analysis and used to calculate TLFR and TAFR. Systolic (SBP) and diastolic blood pressures (DBP) were measured. Routine biochemical tests including TC, TRG, LDL-C, HDL-C, and insulin were performed. Data were evaluated using SPSS software, and a p value smaller than 0.05 was accepted as statistically significant. There was no difference among the age values and gender ratios of the groups. No statistically significant difference was observed in DBP, TLFR or the serum lipid fractions. Higher SBP values were measured in both OB and MO children than in those with N-BMI. TAFR showed a significant difference between the N-BMI and OB groups. Statistically significant increases were detected between the insulin values of the N-BMI group and those of the OB as well as MO groups. There were bivariate correlations between LDL-C and TLFR (r=0.396; p=0.037) as well as TAFR values (r=0.413; p=0.029) in the MO group. When adjusted for SBP and DBP, partial correlations were calculated as (r=0.421; p=0.032) and (r=0.438; p=0.025) for LDL-TLFR and LDL-TAFR, respectively. Much stronger partial correlations were obtained for the same pairs (r=0.475; p=0.019 and r=0.473; p=0.020, respectively) upon controlling for TRG and HDL-C. The much stronger partial correlations observed in MO children emphasize the potential transition from morbid obesity to metabolic syndrome. These findings suggest that LDL-C may be a discriminating parameter between OB and MO children.
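
As a worked illustration of the ratios and of the partial-correlation analysis mentioned above, the Python sketch below computes TLFR = trunk fat / leg fat, TAFR = trunk fat / (arm fat + leg fat), and a partial correlation via regression residuals; all numbers are hypothetical and do not come from the study data.

```python
# Sketch of the two fat-distribution ratios and of a partial correlation
# (LDL-C vs. TLFR controlling for TRG and HDL-C) computed from regression
# residuals; every value below is hypothetical, not study data.
import numpy as np

def tlfr(trunk_fat, leg_fat):
    return trunk_fat / leg_fat                      # trunk-to-leg fat ratio

def tafr(trunk_fat, arm_fat, leg_fat):
    return trunk_fat / (arm_fat + leg_fat)          # trunk-to-appendicular fat ratio

def partial_corr(x, y, covars):
    """Correlation between x and y after regressing out the covariates."""
    Z = np.column_stack([np.ones(len(x))] + list(covars))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n = 30                                              # hypothetical MO group size
ldl = rng.normal(110, 25, n)                        # LDL-C (mg/dl), hypothetical
tlfr_vals = 0.9 + 0.002 * ldl + rng.normal(0, 0.05, n)
trg = rng.normal(130, 40, n)                        # triglycerides, hypothetical
hdl = rng.normal(45, 8, n)                          # HDL-C, hypothetical

print("partial r(LDL-C, TLFR | TRG, HDL-C) =",
      round(partial_corr(ldl, tlfr_vals, [trg, hdl]), 3))
```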

Keywords: children, lipid parameters, obesity, trunk-to-leg fat ratio, trunk-to-appendicular fat ratio

Procedia PDF Downloads 113
129 Printed Electronics for Enhanced Monitoring of Organ-on-Chip Culture Media Parameters

Authors: Alejandra Ben-Aissa, Martina Moreno, Luciano Sappia, Paul Lacharmoise, Ana Moya

Abstract:

Organ-on-Chip (OoC) stands out as a highly promising approach for drug testing, presenting a cost-effective and ethically superior alternative to conventional in vivo experiments. These cutting-edge devices emerge from the integration of tissue engineering and microfluidic technology, faithfully replicating the physiological conditions of targeted organs. Consequently, they offer a more precise understanding of drug responses without the ethical concerns associated with animal testing. In addressing the limitations imposed on OoC by conventional, time-consuming analysis techniques, Lab-on-Chip (LoC) emerges as a disruptive technology capable of providing real-time monitoring without compromising sample integrity. This work develops LoC platforms that can be integrated with OoC platforms to monitor essential culture media parameters, including glucose, oxygen, and pH, facilitating the straightforward exchange of sensing units within a dynamic and controlled environment without disrupting the cultures. This approach preserves the experimental setup, minimizes the impact on cells, and enables efficient, prolonged measurement. The LoC system is fabricated following the patented methodology protected by EU patent EP4317957A1. One of the key challenges, integrating sensors in a biocompatible, feasible, robust, and scalable manner, is addressed through fully printed sensors, ensuring a customized, cost-effective, and scalable solution. With this technique, sensor reliability is enhanced, providing high sensitivity and selectivity for accurate parameter monitoring. In the present study, the LoC is validated by measuring a complete culture medium. The oxygen sensor provided a measurement range from 0 mg O2/L to 6.3 mg O2/L, the pH sensor demonstrated a measurement range spanning pH 2 to pH 9.5, and the glucose sensor achieved a measurement range from 0 mM to 11 mM. All measurements were performed with the sensors integrated in the LoC. In conclusion, this study showcases the impactful synergy of OoC technology with LoC systems using fully printed sensors, marking a significant step forward in ethical and effective biomedical research, particularly in drug development. This innovation not only meets current demands but also lays the groundwork for future advancements in precision and customization within scientific exploration. Acknowledgments: This work was financially supported by the Catalan Government through the funding grant ACCIÓ-Eurecat (Project Traça-IMPULSENS).

Keywords: organ on chip, lab on chip, real time monitoring, biosensors

Procedia PDF Downloads 24
128 Item-Trait Pattern Recognition of Replenished Items in Multidimensional Computerized Adaptive Testing

Authors: Jianan Sun, Ziwen Ye

Abstract:

Multidimensional computerized adaptive testing (MCAT) is a popular research topic in psychometrics. It is important for practitioners to know clearly the item-trait patterns of administered items when a test such as an MCAT is operated. Item-trait pattern recognition refers to detecting which latent traits in a psychological test are measured by each of the specified items. If the item-trait patterns of the replenished items in an MCAT item pool are well detected, the interpretability of the items can be improved, which in turn helps the abilities of the examinees taking the MCAT to be estimated accurately. This research addresses the item-trait pattern recognition problem for replenished items in the MCAT item pool from the perspective of statistical variable selection. The popular multidimensional item response theory model, the multidimensional two-parameter logistic model, is assumed to fit the MCAT response data. The proposed method uses the least absolute shrinkage and selection operator (LASSO) to detect the item-trait patterns of replenished items based on the essential information in the item responses and the ability estimates of examinees collected from a designed MCAT procedure. Several advantages of the proposed method are outlined. First, the proposed method does not strictly depend on the relative order between the replenished items and the selected operational items, so it allows the replenished items to be mixed into the operational items in a reasonable order, for example one that respects content constraints or other test requirements. Second, the LASSO used in this research improves the interpretability of the multidimensional replenished items in MCAT. Third, the proposed method exploits the advantages of shrinkage-based variable selection, so it can help to check item quality and the key dimensional features of replenished items, and it saves more time and labour in response data collection than the traditional factor analysis method. Moreover, the proposed method ensures that the dimensions of the replenished items are recognized consistently with the dimensions of the operational items in the MCAT item pool. Simulation studies are conducted to investigate the performance of the proposed method under different conditions of item pool dimensionality, latent trait correlation, item discrimination, test length and item selection criterion in MCAT. Results show that the proposed method can accurately detect the item-trait patterns of the replenished items in two-dimensional and three-dimensional item pools. Selecting enough operational items from an item pool consisting of highly discriminating items by Bayesian A-optimality in MCAT improves the recognition accuracy of the item-trait patterns of replenished items for the proposed method. The pattern recognition accuracy for conditions with correlated traits is better than for those with independent traits, especially for item pools consisting of comparatively low-discriminating items. To sum up, the proposed data-driven method based on the LASSO can accurately and efficiently detect the item-trait patterns of replenished items in MCAT.
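
A minimal Python sketch of the underlying idea, L1-penalised (LASSO-type) selection of the traits measured by one replenished item from examinees' ability estimates and item responses, is given below; the data are simulated from a multidimensional two-parameter logistic model, and the code is illustrative rather than the authors' implementation.

```python
# Minimal sketch: with ability estimates (theta) from the MCAT and responses
# to one replenished item, an L1-penalised logistic regression retains only
# the traits the item actually measures. Simulated data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, K = 2000, 3                        # examinees, latent traits
theta = rng.multivariate_normal(np.zeros(K), np.eye(K), n)   # ability estimates

a_true = np.array([1.2, 0.0, 0.9])    # true item-trait pattern: traits 1 and 3
b_true = -0.3
p = 1.0 / (1.0 + np.exp(-(theta @ a_true + b_true)))          # M2PL probability
y = rng.binomial(1, p)                # observed responses to the replenished item

lasso = LogisticRegression(penalty="l1", C=0.1, solver="liblinear").fit(theta, y)
pattern = (np.abs(lasso.coef_.ravel()) > 1e-6).astype(int)
print("estimated item-trait pattern:", pattern)   # expected [1, 0, 1]
```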

Keywords: item-trait pattern recognition, least absolute shrinkage and selection operator, multidimensional computerized adaptive testing, variable selection

Procedia PDF Downloads 131
127 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation

Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang

Abstract:

The use of left ventricle assist devices (LVADs) in patients with heart failure has been a proven and effective therapy for patients with severe end-stage heart failure. Due to the limited availability of suitable donor hearts, LVADs will probably become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability and reduced size, a better understanding of implantation management becomes critical in order to achieve better long-term blood supply and fewer post-surgical complications such as thrombus generation. Important issues related to LVAD implantation include the location of the outflow grafting (OG), the angle of the OG, the combination of LVAD and native heart pumping, and uniform versus pulsatile flow at the OG. We have hypothesized that the optimal implantation of an LVAD is patient specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of the cardiac output at the aortic arch together with other pertinent hemodynamic quantities for each patient under various implantation scenarios, aiming to obtain an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling for both image segmentation and fluid dynamics with cutting-edge GPU parallel computing. It first segments the aorta from the patient's CT image, then seamlessly feeds the extracted morphology, together with the velocity waveform from echocardiographic ultrasound images of the same patient, into the computational model to quantify the 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes; it therefore has great potential for massive numerical simulation and analysis. The systematic evaluation for one patient includes three OG anastomoses (ascending aorta, descending thoracic aorta, and subclavian artery), three combinations of LVAD and native heart pumping (1:1, 1:2, and 1:3), three angles of OG anastomosis (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the fifty-four scenarios. To confirm the hypothesis, 5 random patient cases will be evaluated.
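
The fifty-four scenarios follow directly from the full factorial combination of the design factors listed above (3 anastomosis sites x 3 pumping ratios x 3 graft angles x 2 inflow conditions), as the short Python sketch below enumerates; the labels are taken from the abstract.

```python
# Enumerating the evaluation scenarios from the design factors in the abstract:
# 3 anastomosis sites x 3 pump ratios x 3 graft angles x 2 inflow conditions = 54.
from itertools import product

og_sites   = ["ascending aorta", "descending thoracic aorta", "subclavian artery"]
pump_ratio = ["1:1", "1:2", "1:3"]            # LVAD : native heart pumping
og_angle   = ["inclined upward", "perpendicular", "inclined downward"]
inflow     = ["uniform", "pulsatile"]

scenarios = list(product(og_sites, pump_ratio, og_angle, inflow))
print(len(scenarios))                         # 3 * 3 * 3 * 2 = 54
```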

Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics

Procedia PDF Downloads 134
126 The Administration of Infectious Diseases During the COVID-19 Pandemic and the Role of Differential Diagnosis with the Biomarker VB10

Authors: Sofia Papadimitriou

Abstract:

INTRODUCTION: The differential diagnosis between acute viral and bacterial infections is an important cost-effectiveness parameter at the treatment stage: it allows the maximum benefit of the therapeutic intervention to be achieved at minimum cost and ensures the proper use of antibiotics. The discovery of sensitive and robust molecular diagnostic tests based on the host response to infection has enhanced the accurate diagnosis and differentiation of infections. METHOD: The study used six independent blood sample datasets (756 samples in total) linked to human protein-protein interactions, in which each gene at the transcription stage expresses a different host-network response to viral versus bacterial infection. The individual blood samples are subjected to a sequence of computational filters that identify a gene panel corresponding to an autonomous diagnostic score. The dataset and the gene panel underpin a new diagnostic approach, Bangalore-Viral Bacterial (BL-VB). FINDING: We use a blood-based biomarker of 10 genes (Panel-VB) with important prognostic value for distinguishing viral from bacterial infections, with a weighted average AUROC of 0.97 (95% CI: 0.96-0.99) in eleven independent datasets (n=898). We derived a patient-level score (VB10) based on the panel, which has significant diagnostic value with a weighted average AUROC of 0.94 (95% CI: 0.91-0.98) in 2,996 patient samples from 56 public datasets from 19 different countries. We also studied VB10 in a new South Indian cohort (BL-VB, n=56) and found 97% accuracy in confirmed cases of viral and bacterial infections. We found that VB10 (a) accurately identifies the type of infection even in culture-negative, unspecified cases, (b) reflects the patient's clinical recovery, and (c) applies to all age groups, covering a wide range of acute bacterial and viral infections, including those caused by non-specific pathogens. We applied our VB10 score to publicly available COVID-19 data and found that it diagnosed viral infection in the patient samples. RESULTS: The results of the study showed the diagnostic power of the VB10 biomarker as a test for the accurate diagnosis of acute infections and for monitoring recovery. We anticipate that it will assist clinical decisions on antibiotic prescribing and can be integrated into antibiotic stewardship policies. CONCLUSIONS: Overall, we have developed a new RNA-based biomarker and a new blood test to differentiate between viral and bacterial infections, to assist physicians in designing the optimal treatment regimen, to contribute to the proper use of antibiotics and to reduce the burden of antimicrobial resistance (AMR).
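
The weighted average AUROC values quoted above are sample-size-weighted means across independent validation datasets; the short Python sketch below shows one plausible way such a summary is computed, using hypothetical per-dataset AUROCs and sizes.

```python
# Sketch of a sample-size-weighted average AUROC across independent validation
# datasets, the summary statistic quoted for Panel-VB and VB10 above; the
# per-dataset AUROCs and sample sizes here are hypothetical.
import numpy as np

aurocs = np.array([0.98, 0.95, 0.93, 0.97, 0.96])   # AUROC per dataset (hypothetical)
sizes  = np.array([120, 260, 90, 310, 118])         # samples per dataset (hypothetical)

weighted_auroc = np.average(aurocs, weights=sizes)
print(f"weighted average AUROC = {weighted_auroc:.3f} over n = {sizes.sum()} samples")
```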

Keywords: acute infections, antimicrobial resistance, biomarker, blood transcriptome, systems biology, classifier diagnostic score

Procedia PDF Downloads 156