Search results for: j-r curve
137 Detailed Analysis of Mechanism of Crude Oil and Surfactant Emulsion
Authors: Riddhiman Sherlekar, Umang Paladia, Rachit Desai, Yash Patel
Abstract:
A number of surfactants that exhibit ultra-low interfacial tension and excellent microemulsion phase behavior with crude oils of low to medium gravity are not sufficiently soluble at optimum salinity to produce stable aqueous solutions. Such solutions often show phase separation after a few days at reservoir temperature, which defeats the purpose, since this period is short compared to the residence time of a surfactant flood in a reservoir. The addition of polymer often exacerbates the problem, although the poor stability of the surfactant at high salinity remains the pivotal issue. Surfactants such as SDS and CTAB, with large hydrophobes, produce the lowest IFT but are often not sufficiently water-soluble at the desired salinity. Hydrophilic co-solvents and/or co-surfactants are needed to make the surfactant-polymer solution stable at the desired salinity. This study contrasts the effects of adding a co-solvent on the stability of a surfactant-oil emulsion. The idea is to use a co-surfactant to increase the stability of the emulsion. Stability is enhanced through the formation of a micro-emulsion, which is verified both visually and with a particle size analyzer at varying concentrations of salt, surfactant and co-surfactant. The lab experimental method is described in detail to allow readers to reproduce all results. The stability of the oil-water emulsion is observed with respect to time, temperature, salinity of the brine and concentration of the surfactant. The nonionic surfactant TX-100, when used as a co-surfactant, increases the stability of the oil-water emulsion. The stability of the prepared emulsion is checked by observing the particle size distribution: for a stable emulsion, the volume% vs. particle size curve should peak at a particle size of 5-50 nm, while for an unstable emulsion larger particles are observed.
UV-visible spectroscopy is also used to visualize the fraction of oil that plays an important role in the formation of micelles in the stable emulsion. This is important because the study will help decide the applicability of surfactant-based EOR for a reservoir containing a specific type of crude. The use of a nonionic surfactant as a co-surfactant would also increase the efficiency of surfactant EOR. With the decline in oil discoveries over recent decades, EOR technologies are expected to play a key role in meeting energy demand in the years to come. With this in mind, the work focuses on optimizing secondary recovery (water flooding) with the help of surfactants and/or co-surfactants by creating the desired conditions in the reservoir.
Keywords: co-surfactant, enhanced oil recovery, micro-emulsion, surfactant flooding
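The stability criterion described in the abstract above (a volume% vs. particle size peak within roughly 5-50 nm for a stable micro-emulsion) can be sketched as a small classifier. All names and distributions below are invented for illustration, not the study's data.

```python
# Hypothetical helper illustrating the stated PSD-peak criterion: an emulsion
# is judged stable (micro-emulsion) when the dominant peak of the volume% vs.
# particle size distribution lies within roughly 5-50 nm.

def classify_emulsion(sizes_nm, volume_pct, stable_range=(5.0, 50.0)):
    """Return 'stable' if the dominant PSD peak lies in stable_range, else 'unstable'."""
    peak_size = max(zip(volume_pct, sizes_nm))[1]  # size at maximum volume%
    lo, hi = stable_range
    return "stable" if lo <= peak_size <= hi else "unstable"

# Illustrative distributions (not measured data)
micro = classify_emulsion([10, 30, 200, 800], [5, 55, 25, 15])
coarse = classify_emulsion([10, 30, 200, 800], [5, 10, 30, 55])
print(micro, coarse)  # stable unstable
```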
Procedia PDF Downloads 251

136 Groundwater Potential Mapping using Frequency Ratio and Shannon’s Entropy Models in Lesser Himalaya Zone, Nepal
Authors: Yagya Murti Aryal, Bipin Adhikari, Pradeep Gyawali
Abstract:
The Lesser Himalaya zone of Nepal consists of thrusting and folding belts, which play an important role in the sustainable management of groundwater in the Himalayan regions. The study area is located in the Dolakha and Ramechhap Districts of Bagmati Province, Nepal. Geologically, these districts are situated in the Lesser Himalayas and partly encompass the Higher Himalayan rock sequence, which includes low-grade to high-grade metamorphic rocks. Following the Gorkha Earthquake in 2015, numerous springs dried up, and many others are currently experiencing depletion due to the distortion of the natural groundwater flow. The primary objective of this study is to identify potential groundwater areas and determine suitable sites for artificial groundwater recharge. Two distinct statistical approaches were used to develop models: the frequency ratio (FR) and Shannon entropy (SE) methods. The study utilized both primary and secondary datasets and incorporated significant role and controlling factors derived from fieldwork and literature reviews. Field data collection involved spring inventory, soil analysis, lithology assessment, and hydro-geomorphology study. Additionally, slope, aspect, drainage density, and lineament density were extracted from a digital elevation model (DEM) using GIS and transformed into thematic layers. For training and validation, the 114 springs were split in a 70/30 ratio, with an equal number of non-spring pixels. After assigning weights to each class based on the two proposed models, a groundwater potential map was generated using GIS, classifying the area into five levels: very low, low, moderate, high, and very high. The model outcomes reveal that over 41% of the area falls into the low and very low potential categories, while only 30% of the area demonstrates a high probability of groundwater potential. To evaluate model performance, accuracy was assessed using the area under the curve (AUC).
The success-rate AUC values for the FR and SE methods were determined to be 78.73% and 77.09%, respectively, while the prediction-rate AUC values were 76.31% and 74.08%. The results indicate that the FR model exhibits greater predictive capability than the SE model in this case study.
Keywords: groundwater potential mapping, frequency ratio, Shannon’s Entropy, Lesser Himalaya Zone, sustainable groundwater management
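The frequency ratio weighting used above can be sketched in a few lines: for each class of a conditioning factor, FR is the ratio of the percentage of springs falling in the class to the percentage of area the class occupies (FR > 1 means springs are over-represented). The counts below are toy values, not the study's data.

```python
# Minimal sketch of the Frequency Ratio (FR) weighting, with invented counts.
# FR(class) = (springs in class / total springs) / (pixels in class / total pixels)

def frequency_ratio(spring_counts, pixel_counts):
    total_springs = sum(spring_counts.values())
    total_pixels = sum(pixel_counts.values())
    return {
        cls: (spring_counts[cls] / total_springs) / (pixel_counts[cls] / total_pixels)
        for cls in spring_counts
    }

# Toy example: a slope factor with three classes
springs = {"gentle": 20, "moderate": 8, "steep": 2}
pixels = {"gentle": 4000, "moderate": 4000, "steep": 2000}
fr = frequency_ratio(springs, pixels)
print(round(fr["gentle"], 2))  # 1.67 -> springs over-represented on gentle slopes
```

Summing the FR weights of each pixel's classes over all thematic layers then yields the groundwater potential index that is binned into the five levels.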
Procedia PDF Downloads 81

135 The Quantitative Analysis of the Influence of the Superficial Abrasion on the Lifetime of the Frog Rail
Authors: Dong Jiang
Abstract:
The turnout is essential railway equipment and, owing to increasingly serious frog rail failures, is among the most heavily demanded railway infrastructure components. In cooperation with the German company DB Systemtechnik AG, our research team focuses on the quantitative analysis of frog rails to predict their lifetimes; suggestions for timely and effective maintenance are also made to improve the economy of the frog rails. The lifetime of the frog rail depends strongly on the internal damage of the running surface up to the point where breakages occur. On the basis of the Hertzian theory of contact mechanics, the dynamic loads on the running surface are calculated in the form of the contact pressures on the running surface and the equivalent tensile stress inside it. Following material mechanics, the strength of the frog rail is determined quantitatively in the form of a stress-cycle (S-N) curve. Under the interaction between the dynamic loads and the strength, the internal damage of the running surface is calculated by means of the linear damage hypothesis of Miner's rule. The emergence of the first breakage on the running surface defines the failure criterion: the damage degree equals 1.0. From the microscopic perspective, the running surface of the frog rail is divided into numerous segments for detailed analysis; the internal damage of a segment grows slowly at first and disproportionately quickly towards the end, until the breakage emerges. From the macroscopic perspective, the internal damage of the running surface develops almost linearly over the lifetime. Given this linear growth, the lifetime of the frog rail can be predicted simply from the slope of the linear trend. However, the superficial abrasion plays an essential role in the internal damage results from both perspectives.
The influence of the superficial abrasion on the lifetime is described in the form of the abrasion rate, which has two contradictory effects. On the one hand, an insufficient abrasion rate concentrates the damage accumulation at the same position below the running surface, accelerating rail failure. On the other hand, an excessive abrasion rate hastens the removal of the head-hardened surface layer of the frog rail, resulting in untimely breakage at the surface. Thus, the relationship between the abrasion rate and the lifetime divides into an initial phase of increasing lifetime followed by a phase of more rapidly decreasing lifetime as the abrasion rate continues to grow. By balancing these two effects, the critical abrasion rate that yields the optimal lifetime is discussed.
Keywords: breakage, critical abrasion rate, frog rail, internal damage, optimal lifetime
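The damage calculation named in this abstract can be sketched as follows: a Basquin-type S-N curve gives the allowable cycles N(S) at stress S, and Miner's linear damage hypothesis sums n_i/N_i over the load spectrum, with breakage expected once the damage degree reaches 1.0. All constants below are illustrative, not the study's rail data.

```python
# Toy Miner's-rule damage accumulation over a two-level load spectrum.
# S-N constants C and m are invented for illustration only.

def cycles_to_failure(stress_mpa, C=1e16, m=3.0):
    """Basquin-type S-N curve: N = C / S^m (illustrative constants)."""
    return C / stress_mpa ** m

def miner_damage(load_spectrum):
    """load_spectrum: list of (stress amplitude in MPa, applied cycles)."""
    return sum(n / cycles_to_failure(s) for s, n in load_spectrum)

# Two contact-stress levels with their accumulated wheel-passage cycles
spectrum = [(400.0, 5_000_000), (500.0, 1_000_000)]
damage = miner_damage(spectrum)
print(damage, damage >= 1.0)  # failure criterion: damage degree >= 1.0
```

Under the near-linear macroscopic damage growth noted above, the remaining lifetime could then be extrapolated from the slope of damage versus accumulated cycles.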
Procedia PDF Downloads 225

134 Bovine Sperm Capacitation Promoters: The Comparison between Serum and Non-serum Albumin originated from Fish
Authors: Haris Setiawan, Phongsakorn Chuammitri, Korawan Sringarm, Montira Intanon, Anucha Sathanawongs
Abstract:
Capacitation is a prerequisite for sperm to acquire the competence to penetrate the oocyte; it occurs naturally in vivo throughout the female reproductive tract, where sperm encounter secretory fluid and epithelial cells. One of the crucial compounds in the oviductal fluid that promotes capacitation is albumin, secreted at major concentrations. However, the difficulty of collecting oviductal fluid and the inconsistency of its composition throughout the estrous cycle have led to its function being replaced with serum-based albumins such as bovine serum albumin (BSA). BSA has been widely used and shown to have a stabilizing effect that keeps the acrosome intact during the capacitation process, modulates hyperactivation, and elevates the number of sperm bound to the zona pellucida. Contrary to its benefits, the use of blood-derived products in the culture system is not sustainable and increases the risk of disease transmission, such as Creutzfeldt-Jakob disease (CJD) and bovine spongiform encephalopathy (BSE). Moreover, it has been asserted that this substance is an aeroallergen that produces allergies and respiratory problems. In an effort to identify an alternative, sustainable and non-toxic albumin source, the present work evaluated sperm responses to a capacitation medium containing albumin derived from the flesh of the snakehead fish (Channa striata). Before examining the ability of this non-serum albumin to promote capacitation in bovine sperm, albumin was detected using bromocresol purple (BCP) at a level of 25% in the snakehead fish extract. Following SDS-PAGE and densitometric analysis, two major bands at 40 kDa and 47 kDa, comprising 57% and 16% of the total protein loaded, were detected as potential albumin-related bands. Significant differences were observed in all kinematic parameters upon incubation in the capacitation medium.
Moreover, consistently higher values were obtained for the kinematic parameters related to hyperactivation, such as amplitude of lateral head displacement (ALH), curvilinear velocity (VCL), and linearity (LIN), when sperm were treated with 3 mg/mL of snakehead fish albumin compared with the other treatments. Likewise, substantially higher proportions of intact acrosomes were present in sperm incubated with various concentrations of snakehead fish albumin for 90 minutes, indicating that this level of snakehead fish albumin can be used to replace bovine serum albumin. However, further study is required to purify the albumin from snakehead fish extract for more reliable findings.
Keywords: capacitation promoter, snakehead fish, non-serum albumin, bovine sperm
Procedia PDF Downloads 112

133 Analysis of Waterjet Propulsion System for an Amphibious Vehicle
Authors: Nafsi K. Ashraf, C. V. Vipin, V. Anantha Subramanian
Abstract:
This paper reports the design of a waterjet propulsion system for an amphibious vehicle based on the circulation distribution over the camber line for the sections of the impeller and stator. In contrast with the conventional waterjet design, the inlet duct is straight, so that water enters parallel to and in line with the nozzle exit. The extended nozzle after the stator bowl makes the flow more axial, further improving thrust delivery. A waterjet works on the principle of volume flow rate through the system and, unlike the propeller, it is an internal-flow system. The major difference between the propeller and the waterjet occurs in the flow passing the actuator. Though a ducted propeller could constitute the equivalent of waterjet propulsion, in a realistic situation the nozzle area of the waterjet would be proportionately larger relative to the inlet area and propeller disc area. Moreover, the flow rate through the impeller disk is controlled by the nozzle area. For these reasons, the waterjet design is based on pump systems rather than propellers, and it is therefore important to bring out the characteristics of the flow from this point of view. The analysis is carried out using computational fluid dynamics. The design of the waterjet propulsion is carried out by adapting axial-flow pump design, and the performance analysis was done with a three-dimensional computational fluid dynamics (CFD) code. Given the varying environmental conditions, the need for high discharge and low head, and the space confinement of the given amphibious vehicle, an axial pump design is suitable. The major problem of the inlet velocity distribution is the large variation of velocity in the circumferential direction, which gives rise to heavy blade loading that varies with time. The cavitation criteria have also been taken into account as per hydrodynamic pump design. Generally, a waterjet propulsion system can be divided into the inlet, the pump, the nozzle and the steering device.
The pump further comprises an impeller and a stator. Analytical and numerical approaches, such as a RANSE solver, have been undertaken to understand the performance of the designed waterjet propulsion system. Unlike in the case of propellers, the analysis was based on the head-flow curve together with efficiency and power curves. The modeling of the impeller is performed using the rigid body motion approach. The realizable k-ϵ model has been used for turbulence modeling. Appropriate boundary conditions are applied to the domain, and domain-size and grid-dependence studies are carried out.
Keywords: amphibious vehicle, CFD, impeller design, waterjet propulsion
Procedia PDF Downloads 228

132 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances
Authors: P. Mounnarath, U. Schmitz, Ch. Zhang
Abstract:
Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to the various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed, with artificial ground motion sets having peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g taken as input. Soil-structure interaction and P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits severe vulnerability compared to the other, more sophisticated bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states but a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis
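A fragility curve of the kind discussed above is commonly expressed as the probability of exceeding a damage state as a lognormal CDF of PGA. The sketch below illustrates that standard form only; the median and dispersion values are invented, not the study's results.

```python
# Illustrative lognormal fragility curve: P(damage state exceeded | PGA).
# median_g and beta are hypothetical parameters, not fitted values.

from math import erf, log, sqrt

def fragility(pga_g, median_g, beta):
    """P(demand/capacity >= 1 | PGA) as a lognormal CDF."""
    return 0.5 * (1.0 + erf(log(pga_g / median_g) / (beta * sqrt(2.0))))

# Probability of exceeding a hypothetical slight-damage state at several PGA levels
for pga in (0.1, 0.5, 1.0):
    print(pga, round(fragility(pga, median_g=0.5, beta=0.6), 3))
```

By construction, the exceedance probability is exactly 0.5 at the median PGA, which is why fragility results are often reported as a median capacity plus a dispersion.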
Procedia PDF Downloads 435

131 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel, remote from the impact locations, were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended here to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from impact at each potential location. The problem can be categorized as under-determined (fewer sensors than impact locations), even-determined (as many sensors as impact locations), or over-determined (more sensors than impact locations). The under-determined case studied here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force.
Truncated singular value decomposition (TSVD) and Tikhonov regularization are independently applied to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and generalized cross-validation (GCV) methods. In addition, the effect of different widths of the signal window on the reconstructed force is examined. It is observed that the impact force generated by the instrumented hammer is sensitive to the impact location on the structure, having a shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match well with the actual forces in terms of magnitude and duration.
Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
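The core operation discussed above, Tikhonov-regularized deconvolution, can be sketched on synthetic data: with responses y = H f (H a convolution matrix built from an impulse response), the regularized solution is f = (HᵀH + λI)⁻¹Hᵀy. The half-sine force and decaying impulse response below are invented for illustration, not the panel's measured transfer function.

```python
# Toy Tikhonov-regularized deconvolution of a synthetic half-sine impact force.
import numpy as np

def tikhonov_deconvolve(y, h, lam):
    n = len(y)
    # Convolution matrix H (lower-triangular Toeplitz): y = H @ f
    H = np.array([[h[i - j] if 0 <= i - j < len(h) else 0.0
                   for j in range(n)] for i in range(n)])
    # Regularized normal equations: (H^T H + lam I) f = H^T y
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

t = np.arange(50)
f_true = np.where(t < 10, np.sin(np.pi * t / 10), 0.0)  # half-sine impact
h = np.exp(-0.2 * np.arange(20))                        # invented impulse response
y = np.convolve(f_true, h)[:50]                         # simulated sensor signal
f_rec = tikhonov_deconvolve(y, h, lam=1e-6)
print(np.corrcoef(f_true, f_rec)[0, 1] > 0.99)  # correlation criterion from the text
```

In practice λ is not fixed by hand as here but chosen via the L-curve or GCV, as the abstract describes.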
Procedia PDF Downloads 535

130 Estimating Affected Croplands and Potential Crop Yield Loss of an Individual Farmer Due to Floods
Authors: Shima Nabinejad, Holger Schüttrumpf
Abstract:
Farmers living in flood-prone areas such as coasts are exposed to storm surges intensified by climate change. Crop cultivation is the most important economic activity of farmers, and during flooding, agricultural lands are subject to inundation. Additionally, overflowing saline water causes more severe damage than riverine flooding: agricultural crops are more vulnerable to salinity than other land uses, so the economic damage may continue for a number of years even after the flooding and affect farmers' decision-making in the following year. It is therefore essential to assess to what extent the agricultural areas are flooded and how large the associated flood damage to each individual farmer is. To address these questions, we integrated farmers' decision-making at farm scale with flood risk management. The integrated model includes identification of hazard scenarios, failure analysis of structural measures, derivation of hydraulic parameters for the inundated areas and analysis of the economic damages experienced by each farmer. The present study has two aims: first, to investigate the flooded cropland and potential crop damages for the whole area; second, to compare them among farmers' fields for three flood scenarios, which differ in the breach locations of the flood protection structure. To achieve this, the spatial distribution of the farmers' fields and cultivated crops was fed into the flood risk model, and a 100-year storm surge hydrograph was selected as the flood event. The study area was Pellworm Island, located in the German Wadden Sea National Park and surrounded by the North Sea. Due to the high salt content of North Sea seawater, crops cultivated in the agricultural areas of Pellworm Island are 100% destroyed by storm surges, which was taken into account in developing the depth-damage curve for the analysis of consequences.
As a result, inundated cropland and the economic damage to crops were estimated for the whole island and then compared for six selected farmers under the three flood scenarios. The results demonstrate the significance and flexibility of the proposed model for flood risk assessment of flood-prone areas by integrating flood risk management and decision-making.
Keywords: crop damages, flood risk analysis, individual farmer, inundated cropland, Pellworm Island, storm surges
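With the 100% crop-loss depth-damage relation for saline flooding stated above, the per-farmer damage tally reduces to summing flooded area times crop value over each farmer's fields. The field records and values below are invented for illustration.

```python
# Hypothetical per-farmer damage tally under a 100% loss depth-damage relation:
# loss on each inundated field = field area x crop value per area.

def farmer_damage(fields, inundated):
    """fields: {field_id: (area_ha, crop_value_per_ha)}; inundated: set of flooded ids."""
    return sum(area * value for fid, (area, value) in fields.items() if fid in inundated)

# Invented field records for one farmer and one breach scenario
fields = {"F1": (2.0, 1500.0), "F2": (3.5, 900.0), "F3": (1.0, 2000.0)}
flooded = {"F1", "F3"}
print(farmer_damage(fields, flooded))  # 2*1500 + 1*2000 = 5000.0
```

Running the same tally per breach scenario is what allows the scenario-wise comparison across farmers described in the abstract.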
Procedia PDF Downloads 257

129 Rapid Plasmonic Colorimetric Glucose Biosensor via Biocatalytic Enlargement of Gold Nanostars
Authors: Masauso Moses Phiri
Abstract:
Frequent glucose monitoring is essential to the management of diabetes. Plasmonic enzyme-based glucose biosensors have the advantages of greater specificity, simplicity and rapidity. The aim of this study was to develop a rapid plasmonic colorimetric glucose biosensor based on biocatalytic enlargement of gold nanostars (AuNS) guided by glucose oxidase (GOx). Gold nanoparticles 18 nm in diameter were synthesized using the citrate method; using these as seeds, a modified seeded method was followed for the synthesis of monodispersed gold nanostars. Both the spherical and star-shaped nanoparticles were characterized using ultraviolet-visible spectroscopy, agarose gel electrophoresis, dynamic light scattering, high-resolution transmission electron microscopy and energy-dispersive X-ray spectroscopy. The feasibility of a plasmonic colorimetric assay through growth of AuNS by silver coating in the presence of hydrogen peroxide was investigated through several control and optimization experiments. Conditions for reliable sensing, such as the concentrations of the detection solution in the presence of 20 µL AuNS, 10 mM 2-(N-morpholino)ethanesulfonic acid (MES), ammonia and hydrogen peroxide, were optimized. Using the optimized conditions, the glucose assay was developed by adding 5 mM GOx to the solution together with varying concentrations of glucose. Kinetic readings as well as color changes were observed. The results showed that the absorbance values of the AuNS increased, with a blue shift, as the glucose concentration was raised. Control experiments indicated no growth of AuNS in the absence of GOx, glucose or molecular O₂, while increased glucose concentration led to enhanced growth of AuNS. Detection of glucose was also possible by the naked eye, with color development near complete in approximately 10 minutes. The kinetic readings, monitored at 450 and 560 nm, showed that the assay could discriminate between different glucose concentrations within approximately 50 seconds and was near complete at approximately 120 seconds.
A calibration curve for the quantitative measurement of glucose was derived. The magnitude of the wavelength shifts and the absorbance values increased concomitantly with glucose concentration up to 90 µg/mL, beyond which the response leveled off. Glucose produced a blue shift in the localized surface plasmon resonance (LSPR) absorption maxima over the range 10-90 µg/mL, and the limit of detection was 0.12 µg/mL. This enabled the construction of a direct-sensitivity plasmonic colorimetric glucose assay using AuNS that is rapid, sensitive and cost-effective, with naked-eye detection, and has great potential for technology transfer to point-of-care devices.
Keywords: colorimetric, gold nanostars, glucose, glucose oxidase, plasmonic
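The calibration-curve step above can be sketched with the common 3σ/slope definition of the limit of detection: fit the linear range of shift versus concentration, then divide three times the blank standard deviation by the slope. The shift data and blank noise below are invented and do not reproduce the reported 0.12 µg/mL.

```python
# Toy LOD derivation from a linear calibration curve (LOD = 3*sigma_blank/slope).
# All data points are hypothetical.

def linear_fit(x, y):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def lod(sigma_blank, slope):
    return 3.0 * sigma_blank / slope

# Hypothetical calibration: LSPR shift (nm) vs glucose (ug/mL), linear up to 90 ug/mL
conc = [10, 30, 50, 70, 90]
shift = [2.1, 6.0, 10.2, 13.9, 18.1]
slope, intercept = linear_fit(conc, shift)
print(round(lod(sigma_blank=0.05, slope=slope), 3))
```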
Procedia PDF Downloads 152

128 A Lower Dose of Topiramate with Enough Antiseizure Effect: A Realistic Therapeutic Range of Topiramate
Authors: Seolah Lee, Yoohyk Jang, Soyoung Lee, Kon Chu, Sang Kun Lee
Abstract:
Objective: The International League Against Epilepsy (ILAE) currently suggests a topiramate serum level range of 5-20 mg/L. However, numerous institutions have observed a substantial drug response at lower levels. This study aims to investigate the correlation between topiramate serum levels, drug responsiveness, and adverse events to establish a more accurate and tailored therapeutic range. Methods: We retrospectively analyzed topiramate serum samples collected between January 2017 and January 2022 at Seoul National University Hospital. Clinical data, including serum levels, antiseizure regimens, seizure frequency, and adverse events, were collected. Patient responses were categorized as "insufficient" (reduction in seizure frequency <50%) or "sufficient" (reduction ≥50%); within the "sufficient" group, seizure-free and tolerable-seizure subgroups were further distinguished. A population pharmacokinetic model estimated serum levels from spot measurements, and ROC curve analysis determined the optimal serum level cut-off. Results: A total of 389 epilepsy patients, with 555 samples, were reviewed, with a mean dose of 178.4±117.9 mg/day and a mean serum level of 3.9±2.8 mg/L. Only 5.6% of samples (n=31) exhibited an insufficient response, with a mean serum level of 3.6±2.5 mg/L, whereas 94.4% (n=524) demonstrated a sufficient response, with a mean serum level of 4.0±2.8 mg/L; this difference was not statistically significant (p = 0.45). Among the 78 reported adverse events, logistic regression analysis identified a significant association between ataxia and serum concentration (p = 0.04), with an optimal cut-off value of 6.5 mg/L. In the subgroup of patients receiving monotherapy, those in the tolerable-seizure group exhibited a significantly higher serum level than the seizure-free group (4.8±2.0 mg/L vs 3.4±2.3 mg/L, p < 0.01).
Notably, patients in the tolerable-seizure group displayed a higher likelihood of progressing to drug-resistant epilepsy during follow-up visits than the seizure-free group. Significance: This study proposes an optimal therapeutic concentration for topiramate based on patients' responsiveness to the drug and the incidence of adverse effects. We employed a population pharmacokinetic model and analyzed topiramate serum levels, and we recommend a serum level below 6.5 mg/L to mitigate the risk of ataxia-related side effects. Our findings also indicate that topiramate dose elevation is unnecessary for suboptimal responders, as the drug's effectiveness plateaus at minimal doses.
Keywords: topiramate, therapeutic range, low dose, antiseizure effect
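An ROC-derived cut-off like the 6.5 mg/L threshold above is often chosen by maximizing the Youden index, J = sensitivity + specificity − 1, over candidate thresholds. The sketch below shows that selection rule on invented serum levels, not patient data.

```python
# Toy optimal cut-off selection via the Youden index over candidate thresholds.
# The serum levels are invented for illustration.

def youden_cutoff(levels_event, levels_no_event):
    """Return (cut-off, J) maximizing J = sensitivity + specificity - 1."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(levels_event + levels_no_event)):
        sens = sum(x >= cut for x in levels_event) / len(levels_event)
        spec = sum(x < cut for x in levels_no_event) / len(levels_no_event)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical serum levels (mg/L) for patients with and without ataxia
with_ataxia = [5.8, 6.9, 7.4, 8.1]
without_ataxia = [2.1, 3.0, 3.8, 4.5, 5.9]
cut, j = youden_cutoff(with_ataxia, without_ataxia)
print(cut, round(j, 2))
```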
Procedia PDF Downloads 55

127 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)
Authors: Azimollah Aleshzadeh, Enver Vural Yavuz
Abstract:
The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), for Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from the interpretation of high-resolution satellite images and earlier reports, as well as field surveys. In total, 42 landslide polygons were mapped using ArcGIS 10.4.1 software and randomly split 70/30: a construction dataset of 30 landslides for building the EWM, EBF, and ICM models, with the remaining 12 landslides used for verification. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, slope orientation, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the three statistical models (EWM, EBF, and ICM). The model results were validated against landslide incidences not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success-rate (construction data) and prediction-rate (verification data) curves. The results revealed AUC success rates of 0.7055, 0.7221, and 0.7368 and prediction rates of 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively.
Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the proportion of construction and verification landslide incidences falling in the high and very high susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models all produced satisfactory accuracy. The resulting landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning for environmental protection.
Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping
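The final mapping step described above, summing the class weights of each pixel over the thematic layers and binning the resulting index into five classes, can be sketched as follows. The weights, class breaks, and pixel below are invented, not values from any of the three models.

```python
# Toy per-pixel susceptibility index and five-class binning, with invented weights.

def susceptibility_index(pixel, layer_weights):
    """pixel: {layer_name: class_label}; layer_weights: {layer: {class: weight}}."""
    return sum(layer_weights[layer][cls] for layer, cls in pixel.items())

def classify(index, breaks=(0.5, 1.0, 1.5, 2.0)):
    """Bin an index into the five susceptibility classes used in the text."""
    labels = ("very low", "low", "moderate", "high", "very high")
    for b, label in zip(breaks, labels):
        if index < b:
            return label
    return labels[-1]

weights = {"slope": {"gentle": 0.2, "steep": 0.9},
           "lithology": {"hard": 0.1, "weak": 0.8}}
pixel = {"slope": "steep", "lithology": "weak"}
print(classify(susceptibility_index(pixel, weights)))  # high
```

In practice the breaks are usually set by natural-breaks or quantile classification of the index raster rather than fixed values as here.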
Procedia PDF Downloads 132

126 Layer-By-Layer Deposition of Poly (Amidoamine) and Poly (Acrylic Acid) on Grafted-Polylactide Nonwoven with Different Surface Charge
Authors: Sima Shakoorjavan, Mahdieh Eskafi, Dawid Stawski, Somaye Akbari
Abstract:
In this study, the dendritic material poly(amidoamine) (PAMAM) and poly(acrylic acid) (PAA), as polycation and polyanion, were deposited on surface-charged polylactide (PLA) nonwoven to study the relationship between the dye absorption capacity of the layered PLA and the number of deposited layers. To produce negatively charged PLA, acrylic acid (AA) was grafted onto the PLA surface (PLA-g-AA) through a chemical redox reaction with a strong oxidizing agent. Spectroscopic analysis, water contact angle measurement, and FTIR-ATR analysis confirmed the successful grafting of AA onto the PLA surface via the chemical redox reaction. In detail, an increase in dye absorption of 19% and the immediate absorption of water droplets demonstrated the hydrophilicity of the PLA-g-AA surface, while a new carbonyl band at 1530 cm⁻¹ and a broad hydroxyl band at 3680-3130 cm⁻¹ confirmed the AA grafting. In addition, PLA, as a linear polyester, can undergo aminolysis, the cleavage of ester bonds and their replacement with amide bonds when exposed to an aminolysis agent. Therefore, to produce positively charged PLA, PAMAM, an amine-terminated dendritic material, was introduced into the PLA molecular chains under different conditions: (1) at 60 °C for 0.5, 1, 1.5, and 2 hours of aminolysis, and (2) at room temperature (RT) for 1, 2, 3, and 4 hours of aminolysis. Weight changes and spectrophotometric measurements showed maxima in the weight-gain graph and the K/S value curve, indicating the highest PAMAM attachment at 60 °C for 1 hour and at RT for 2 hours, which are considered the optimum conditions. Furthermore, new peaks around 1650 cm⁻¹, corresponding to N-H bending vibration, and a broad band at 3670-3170 cm⁻¹, corresponding to N-H stretching vibration, confirmed PAMAM attachment under the selected optimum conditions. Subsequently, depending on the initial surface charge of the grafted PLA, layer-by-layer (lbl) deposition was performed, starting with either PAA or PAMAM.
FTIR-ATR results confirm chemical changes in the samples due to deposition of the first layer (PAA or PAMAM). Generally, spectroscopy analysis indicated that increasing the layer number reduced the dye absorption capacity. This can be attributed to partial deposition of each new layer on the previously deposited layer; therefore, more PAMAM is available at the first layer than at the third. In detail, for layered PLA starting LbL deposition with the negatively charged surface, having PAMAM as the top layer (PLA-g-AA/PAMAM) showed the highest absorption of both the cationic and the anionic model dye.
Keywords: surface modification, layer-by-layer technique, dendritic materials, PAMAM, dye absorption capacity, PLA nonwoven
Procedia PDF Downloads 84
125 Tuning of Indirect Exchange Coupling in FePt/Al₂O₃/Fe₃Pt System
Authors: Rajan Goyal, S. Lamba, S. Annapoorni
Abstract:
An indirect exchange coupled system consists of two ferromagnetic layers separated by a non-magnetic spacer layer. The exchange coupling may be either ferromagnetic or antiferromagnetic depending on the thickness of the spacer layer. In the present work, the strength of exchange coupling in FePt/Al₂O₃/Fe₃Pt has been investigated by varying the thickness of the Al₂O₃ spacer layer. The FePt/Al₂O₃/Fe₃Pt trilayer structure was fabricated on a Si <100> single-crystal substrate using the sputtering technique. The thicknesses of FePt and Fe₃Pt were fixed at 60 nm and 2 nm, respectively, while the thickness of the Al₂O₃ spacer layer was varied from 0 to 16 nm. Normalized hysteresis loops recorded at room temperature in both the in-plane and out-of-plane configurations reveal that the easy axis lies in the plane of the film. The hysteresis loop for ts = 0 nm does not exhibit any knee around H = 0, indicating that the hard FePt layer and the soft Fe₃Pt layer are strongly exchange coupled. However, inserting an Al₂O₃ spacer layer of thickness ts = 0.7 nm produces a minor knee around H = 0, suggesting a weakening of the exchange coupling between FePt and Fe₃Pt. The disappearance of the knee with a further increase in spacer thickness up to 8 nm suggests the co-existence of ferromagnetic (FM) and antiferromagnetic (AFM) exchange interactions between FePt and Fe₃Pt. In addition, the out-of-plane hysteresis loop shows an asymmetry around H = 0: the exchange field Hex = (Hc↑ − Hc↓)/2, where Hc↑ and Hc↓ are the coercivities estimated from the lower and upper branches of the hysteresis loop, increases from ~150 Oe to ~700 Oe. This behavior may be attributed to uncompensated moments at the interface between the hard FePt layer and the soft Fe₃Pt layer. Further insight into the variation in indirect exchange coupling was obtained using recoil curves.
It is observed that almost closed recoil curves are obtained for ts = 0 nm up to a reverse field of ~5 kOe. On the other hand, the appearance of appreciably open recoil curves at a lower reverse field of ~4 kOe for ts = 0.7 nm indicates that the uncoupled soft phase undergoes irreversible magnetization reversal at a lower reverse field, again suggesting weakened exchange coupling. The openness of the recoil curves decreases with increasing spacer thickness up to 8 nm. This behavior may be attributed to competition between the FM and AFM exchange interactions: the FM coupling between FePt and Fe₃Pt, due to the porous nature of Al₂O₃, decreases much more slowly than the weak AFM coupling arising from the interaction between Fe ions of FePt and Fe₃Pt via O ions of Al₂O₃. The hysteresis loop has been simulated using Monte Carlo simulations based on the Metropolis algorithm to investigate the variation in the strength of exchange coupling in the FePt/Al₂O₃/Fe₃Pt trilayer system.
Keywords: indirect exchange coupling, MH loop, Monte Carlo simulation, recoil curve
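As an illustrative aside, the exchange field defined in the abstract, Hex = (Hc↑ − Hc↓)/2, can be extracted from the zero crossings of the two hysteresis branches. The sketch below (plain Python, with invented branch data rather than the paper's measurements) shows one way to do it:

```python
def exchange_field(hc_up, hc_down):
    """Exchange (bias) field Hex = (Hc_up - Hc_down) / 2, where hc_up and
    hc_down are the coercivities of the lower (up-sweep) and upper
    (down-sweep) branches of the hysteresis loop, in oersted."""
    return (hc_up - hc_down) / 2.0

def coercivity(h, m):
    """Field at which magnetization crosses zero along one branch,
    found by linear interpolation. h, m: equal-length samples along the
    branch; m must change sign exactly once."""
    for i in range(len(m) - 1):
        if m[i] == 0:
            return h[i]
        if m[i] * m[i + 1] < 0:
            # interpolate between the two points bracketing M = 0
            t = -m[i] / (m[i + 1] - m[i])
            return h[i] + t * (h[i + 1] - h[i])
    raise ValueError("branch does not cross M = 0")
```

With branch coercivities of, say, +850 Oe and −550 Oe, the loop asymmetry gives Hex = 700 Oe, the upper value quoted in the abstract.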
Procedia PDF Downloads 190
124 Investigation of Rehabilitation Effects on Fire Damaged High Strength Concrete Beams
Authors: Eun Mi Ryu, Ah Young An, Ji Yeon Kang, Yeong Soo Shin, Hee Sun Kim
Abstract:
As the number of fire incidents has increased, so has their damage to the economy and human lives. Especially when high-strength reinforced concrete is exposed to high temperature due to a fire, deterioration occurs, such as loss of strength and elastic modulus, cracking, and spalling of the concrete. Therefore, it is important to understand the risk to structural safety in buildings by studying the structural behavior and rehabilitation of fire-damaged high-strength concrete structures. This paper investigates the rehabilitation effect on fire-damaged high-strength concrete beams using experimental and analytical methods. In the experiments, flexural specimens with high-strength concrete were exposed to high temperatures according to the ISO 834 standard time-temperature curve. After heating, the fire-damaged reinforced concrete (RC) beams, having different cover thicknesses and fire exposure times, were rehabilitated by removing the damaged part of the cover thickness and filling the removed part with polymeric mortar. Results from four-point loading tests show that the maximum loads of the rehabilitated RC beams are 1.8~20.9% higher than those of the non-fire-damaged RC beam. On the other hand, the ductility ratios of the rehabilitated RC beams are lower than that of the non-fire-damaged RC beam. In addition, structural analyses were performed using ABAQUS 6.10-3 under the same conditions as the experiments to provide accurate predictions of the structural and mechanical behavior of the rehabilitated RC beams. For the rehabilitated RC beam models, integrated temperature-structural analyses were performed first to obtain the geometries of the fire-damaged RC beams. After the spalled and damaged parts were removed, the rehabilitated part was added to the damaged model with the material properties of polymeric mortar. Three-dimensional continuum brick elements were used for both the temperature and structural analyses.
The same loading and boundary conditions as in the experiments were applied to the rehabilitated beam models, and geometrically nonlinear analyses were performed. The analytical results show good rehabilitation effects when the predictions from the rehabilitated models are compared with the structural behavior of the non-damaged RC beams. In this study, fire-damaged high-strength concrete beams were rehabilitated using polymeric mortar. The four-point loading tests show that such rehabilitation can restore the structural performance of fire-damaged beams to a level similar to that of non-damaged RC beams. The predictions from the finite element models agree well with the experimental results, and the modeling approach can be used to investigate the applicability of various rehabilitation methods in further studies.
Keywords: fire, high strength concrete, rehabilitation, reinforced concrete beam
Procedia PDF Downloads 445
123 A Grid Synchronization Method Based on Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for a SPV (solar photovoltaic) system, along with an MPPT (maximum power point tracking) technique. An efficient grid synchronization technique offers proficient detection of the components of the grid signal, such as phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization unit to bring the system in line with grid codes and power quality standards; hence, the grid synchronization unit plays an important role in grid-connected SPV systems. Since the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature, and wind, MPPT control is required to track the maximum power point of the PV array and maintain a constant DC voltage at the VSC (voltage source converter) input. In this work, a variable-step-size P&O (perturb and observe) MPPT technique with a DC/DC boost converter is used in the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e., zone 0, zone 1, and zone 2. A fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid, maintaining the amplitude, phase, and frequency parameters as well as improving power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed, and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB.
The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters, such as PV output power, PV voltage, PV current, DC link voltage, PCC (point of common coupling) voltage, grid voltage, grid current, voltage source converter current, and power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory.
Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique
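As a hedged illustration of the three-zone idea (the zone boundary and step sizes below are invented for the example, not values from the paper), one iteration of a variable-step P&O tracker might look like:

```python
def variable_step_po(p_now, v_now, p_prev, v_prev,
                     fine_step=0.5, large_step=2.0, zone0_limit=1.0):
    """One iteration of a variable-step Perturb & Observe MPPT.

    The |dP/dV| boundary (zone0_limit) and the step sizes are illustrative
    assumptions. Returns the next perturbation of the reference voltage:
    its sign gives the direction, its magnitude the step size."""
    dv = v_now - v_prev
    dp = p_now - p_prev
    if dv == 0:
        return fine_step  # perturb to obtain a new operating point
    # zone 0: |dP/dV| small, near the MPP -> fine step for low ripple;
    # zones 1 and 2: steep slope, far from the MPP -> large step for speed
    step = fine_step if abs(dp / dv) < zone0_limit else large_step
    # classic P&O rule: keep the direction if power rose, otherwise reverse
    return step if dp * dv > 0 else -step
```

In practice the returned perturbation would be added to the boost converter's voltage reference on every sampling period.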
Procedia PDF Downloads 594
122 Resonant Tunnelling Diode Output Characteristics Dependence on Structural Parameters: Simulations Based on Non-Equilibrium Green Functions
Authors: Saif Alomari
Abstract:
The paper gives physical and mathematical descriptions of how the structural parameters of a resonant tunnelling diode (RTD) affect its output characteristics, specifically the peak voltage, peak current, peak-to-valley current ratio (PVCR), and the differences between peak and valley voltages and currents, ΔV and ΔI. A simulation-based approach using the non-equilibrium Green function (NEGF) formalism in the Silvaco ATLAS simulator is employed to conduct a series of designed experiments. These experiments show how the doping concentrations in the emitter and collector layers, their thicknesses, and the widths of the barriers and the quantum well influence the above-mentioned output characteristics. Each of these parameters was systematically varied while holding the others fixed in each set of experiments; factorial experiments are outside the scope of this work and will be investigated in the future. The physics involved in the operation of the device is explained in detail, and mathematical models based on curve fitting and the underlying physical principles are deduced. The models can be used to design devices with predictable output characteristics; such models were absent from the literature the author scanned. Results show that the doping concentration in each region affects the value of the peak voltage: increasing the carrier concentration in the collector region shifts the peak to lower values, whereas increasing it in the emitter shifts the peak to higher values. In the collector's case, the shift is controlled either by the built-in potential resulting from the concentration gradient or by the conductivity enhancement in the collector. The shift to higher voltages is also related to the location of the Fermi level. The thicknesses of these layers play a role in the location of the peak as well.
It was found that increasing the thickness of each region shifts the peak to higher values up to a specific characteristic length, after which the peak becomes independent of the thickness. Finally, it is shown that the thickness of the barriers can be optimized for a particular well width to produce the highest PVCR or the highest ΔV and ΔI. The location of the peak voltage is important in optoelectronic applications of RTDs, where the operating point of the device is usually the peak voltage point. Furthermore, the PVCR, ΔV, and ΔI are of great importance for building RTD-based oscillators, as they affect the frequency response and output power of the oscillator.
Keywords: peak to valley ratio, peak voltage shift, resonant tunneling diodes, structural parameters
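The figures of merit discussed above (PVCR, ΔV, ΔI) can be read off a sampled I-V curve directly. A minimal sketch, assuming a single negative-differential-resistance region (current rises to one peak, then drops to one valley):

```python
def rtd_figures_of_merit(v, i):
    """Peak/valley figures of merit of an RTD I-V curve.

    v, i: voltage and current samples ordered by increasing voltage, with a
    single NDR region. Returns (PVCR, delta_V, delta_I)."""
    i_peak = max(i)
    p = i.index(i_peak)            # index of the current peak
    i_valley = min(i[p:])          # the valley follows the peak
    q = p + i[p:].index(i_valley)  # index of the current valley
    pvcr = i_peak / i_valley       # peak-to-valley current ratio
    return pvcr, v[q] - v[p], i_peak - i_valley
```

Given, for example, a peak of 5 mA at 0.2 V and a valley of 1 mA at 0.4 V, this returns a PVCR of 5, ΔV of 0.2 V, and ΔI of 4 mA.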
Procedia PDF Downloads 142
121 Real-Time Quantitative Polymerase Chain Reaction Assay for the Detection of microRNAs Using Bi-Directional Extension Sequences
Authors: Kyung Jin Kim, Jiwon Kwak, Jae-Hoon Lee, Soo Suk Lee
Abstract:
MicroRNAs (miRNAs) are a class of endogenous, single-stranded, small, non-protein-coding RNA molecules, typically 20-25 nucleotides long. They are thought to regulate the expression of a broad range of other genes by binding to the 3'-untranslated regions (3'-UTRs) of specific mRNAs. The detection of miRNAs is very important for understanding the function of these molecules and in the diagnosis of a variety of human diseases. However, miRNA detection is very challenging because of their short length and the high sequence similarity within miRNA families, so a simple-to-use, low-cost, and highly sensitive detection method is desirable. In this study, we demonstrate a novel bi-directional extension (BDE) assay. In the first step, a specific linear RT primer is hybridized to 6-10 base pairs from the 3'-end of a target miRNA molecule and then reverse transcribed to generate a cDNA strand. After reverse transcription, the cDNA is hybridized at its 3'-end to the BDE sequence, which serves as the PCR template. The template was amplified in a SYBR Green-based quantitative real-time PCR. To prove the concept, we used human brain total RNA; it could be detected quantitatively over a range of seven orders of magnitude with excellent linearity and reproducibility. To evaluate the performance of the BDE assay, we compared its sensitivity and specificity against a commercially available poly(A) tailing method using let-7e miRNA extracted from A549 human epithelial lung cancer cells. The BDE assay performed well compared with the poly(A) tailing method in terms of specificity and sensitivity: the CT values differed by 2.5, and the melting curve was sharper than that of the poly(A) tailing method. We have demonstrated an innovative, cost-effective BDE assay that improves sensitivity and specificity in the detection of miRNAs.
The dynamic range of the SYBR Green-based RT-qPCR for miR-145 could be represented quantitatively over seven orders of magnitude, from 0.1 pg to 1.0 μg of human brain total RNA. Finally, the BDE assay for detecting miRNA species such as let-7e performs well compared with the poly(A) tailing method in terms of specificity and sensitivity. BDE thus provides a simple, low-cost, and highly sensitive assay for various miRNAs and should contribute significantly to research on miRNA biology and to disease diagnostics with miRNAs as targets.
Keywords: bi-directional extension (BDE), microRNA (miRNA), poly (A) tailing assay, reverse transcription, RT-qPCR
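A seven-order linear dynamic range implies a straight CT-versus-log10(input) standard curve, from whose slope the amplification efficiency is conventionally derived as E = 10^(−1/slope) − 1. A small sketch of that calculation (illustrative only, not the authors' analysis pipeline):

```python
import math

def qpcr_efficiency(quantities, ct_values):
    """Amplification efficiency from a dilution-series standard curve.

    Fits CT = slope * log10(quantity) + intercept by least squares and
    returns (slope, efficiency), where efficiency = 10**(-1/slope) - 1;
    a value of 1.0 corresponds to perfect doubling every cycle."""
    x = [math.log10(q) for q in quantities]
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(ct_values) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, ct_values))
    sxx = sum((a - mean_x) ** 2 for a in x)
    slope = sxy / sxx
    return slope, 10.0 ** (-1.0 / slope) - 1.0
```

For an ideal assay the slope is about −3.32 cycles per decade, giving an efficiency of 1.0 (100%).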
Procedia PDF Downloads 166
120 Dose Saving and Image Quality Evaluation for Computed Tomography Head Scanning with Eye Protection
Authors: Yuan-Hao Lee, Chia-Wei Lee, Ming-Fang Lin, Tzu-Huei Wu, Chih-Hsiang Ko, Wing P. Chan
Abstract:
Computed tomography (CT) of the head is a good method for investigating cranial lesions. However, radiation-induced oxidative stress can accumulate in the eyes and promote carcinogenesis and cataract formation. We therefore aimed to protect the eyes with barium sulfate shields during CT scans and to investigate the resulting image quality and radiation dose to the eye. Patients who underwent health examinations were selectively enrolled in this study in compliance with the protocol approved by the Ethics Committee of the Joint Institutional Review Board at Taipei Medical University. Participants' brains were scanned, together with a water-based marker, by a multislice CT scanner (SOMATOM Definition Flash) under either a fixed tube current-time setting or automatic tube current modulation (TCM). The lens dose was measured by Gafchromic films, whose dose-response curve had previously been fitted using thermoluminescent dosimeters, with or without a barium sulfate or bismuth-antimony shield laid above. For the assessment of image quality, CT images at slice planes showing the regions of interest on the zygomatic, orbital, and nasal bones of the head phantom, as well as the water-based marker, were used to calculate the signal-to-noise and contrast-to-noise ratios. The barium sulfate and bismuth-antimony shields decreased the lens dose by 24% and 47% on average, respectively. Under topogram-based TCM, the dose-saving power of the bismuth-antimony shield was mitigated, whereas that of the barium sulfate shield was enhanced. On the other hand, the signal-to-noise and contrast-to-noise ratios of the dual-source CT images were decreased by the barium sulfate and bismuth-antimony shields separately, resulting in an overall reduction of the CNR. In contrast, integrating topogram-based TCM elevated the signal difference between the ROIs on the zygomatic bones and the eyeballs, while preferentially decreasing the signal-to-noise ratios when the barium sulfate shield was used.
The results of this study indicate that the balance between eye exposure and image quality can be optimized by combining eye shields with topogram-based TCM on the multislice scanner. Eye shielding can change the photon attenuation characteristics of tissues close to the shield; the application of either shield for eye protection is therefore not recommended when seeking intraorbital lesions.
Keywords: computed tomography, barium sulfate shield, dose saving, image quality
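For reference, one common way the signal-to-noise and contrast-to-noise ratios in such ROI-based image-quality assessments are defined (definitions vary between studies; this is a sketch, not necessarily the authors' exact formulae):

```python
def snr(roi_mean, noise_sd):
    """Signal-to-noise ratio: mean ROI attenuation value over the noise
    standard deviation (often measured in a uniform region such as the
    water-based marker)."""
    return roi_mean / noise_sd

def cnr(roi_mean_a, roi_mean_b, noise_sd):
    """Contrast-to-noise ratio between two ROIs, e.g. zygomatic bone
    versus eyeball, normalized by the same noise estimate."""
    return abs(roi_mean_a - roi_mean_b) / noise_sd
```

A shield that raises image noise lowers both metrics even when the mean ROI values are unchanged, which is the trade-off the abstract describes.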
Procedia PDF Downloads 268
119 Experimental Studies of the Reverse Load-Unloading Effect on the Mechanical, Linear and Nonlinear Elastic Properties of n-AMg6/C60 Nanocomposite
Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy, Vyacheslav M. Prokhorov
Abstract:
The paper presents the results of an experimental study of the effect of reverse mechanical load-unloading on the mechanical, linear, and nonlinear elastic properties of the n-AMg6/C60 nanocomposite. Samples were obtained by grinding polycrystalline AMg6 alloy with 0.3 wt% of C60 fullerite in a planetary mill under an argon atmosphere. The resulting product consisted of 200-500 micron agglomerates of nanoparticles; X-ray coherent scattering analysis showed an average nanoparticle size of 40-60 nm. The resulting preform was extruded at high temperature, with the C60 fullerite additive hindering recrystallization at the grain boundaries. For the n-AMg6/C60 samples, the load curve was measured: the dependence of the mechanical stress σ on the strain ε of the sample under a multi-cycle load-unloading process continued up to failure. A hysteresis dependence σ = σ(ε) was observed, and an insignificant residual strain ε < 0.005 was recorded. At σ ≈ 500 MPa and ε ≈ 0.025 the sample failed, and the fracture was brittle. Microhardness was measured before and after failure; the load-unloading process was found to increase the microhardness. The effect of reversible mechanical stress on the linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite was studied experimentally by an ultrasonic method on the automated Ritec RAM-5000 SNAP SYSTEM. The velocities of longitudinal and shear bulk waves were measured with the pulse method, and all the second-order elastic coefficients and their dependence on the magnitude of the reversible mechanical stress applied to the sample were calculated. The nonlinear elastic properties of the n-AMg6/C60 nanocomposite under reversible load-unloading were studied with the spectral method.
At arbitrary strains of the sample (up to failure), the dependence of the amplitude of the second longitudinal acoustic harmonic at a frequency of 2f = 10 MHz on the amplitude of the first harmonic at f = 5 MHz was measured. Based on these measurements, the values of the nonlinear acoustic parameter in the n-AMg6/C60 nanocomposite sample at different mechanical stresses were determined. The results can be used in solid-state physics and materials science, and for the development of new techniques for nondestructive testing of structural materials using nonlinear acoustic diagnostics. This study was supported by the Russian Science Foundation (project No. 14-22-00042).
Keywords: nanocomposite, generation of acoustic harmonics, nonlinear acoustic parameter, hysteresis
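In the spectral method described above, the quadratic growth of the second harmonic with the fundamental is what defines the acoustic nonlinearity. A hedged sketch of the standard reduction (the textbook absolute parameter β = 8A₂/(k²xA₁²) also needs the wavenumber k and propagation distance x, which the measurement calibrates separately; the abstract does not specify which convention the authors used):

```python
def relative_nonlinear_parameter(a1, a2):
    """Relative acoustic nonlinearity parameter beta' = A2 / A1**2,
    where a1 is the fundamental amplitude at f and a2 the second-harmonic
    amplitude at 2f. Proportional to the absolute beta for fixed geometry."""
    return a2 / a1 ** 2

def absolute_nonlinear_parameter(a1, a2, k, x):
    """Textbook absolute parameter beta = 8 * A2 / (k**2 * x * A1**2),
    with wavenumber k and propagation distance x."""
    return 8.0 * a2 / (k ** 2 * x * a1 ** 2)
```

Tracking beta' against the applied static stress is what reveals the stress dependence of the nonlinearity reported in the abstract.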
Procedia PDF Downloads 151
118 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is a huge gap between these tools' academic prevalence and their industry adoption. For the fuzzy front-end in particular, there is a wide range of tools to choose from, including the Kano model, the House of Quality, and many others. In fact, there are so many tools that it can be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, the tools are rarely used because their learning curve is too steep and they become too complex to manage over time. In essence, they are commonly believed to be simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of three years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows a team to play through an example of a new product development in order to understand the process and the tools before applying them to their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools in use as well as deeper discussions of each topic, allowing teams to adapt the process to their skills, preferences, and product type.
Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer-requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop that helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers' needs and wants.
Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer
Procedia PDF Downloads 108
117 Overcoming Obstacles in UHT High-Protein Whey Beverages by Microparticulation Process: Scientific and Technological Aspects
Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh, Seyed Jalal Razavi Zahedkolaei
Abstract:
Herein, a shelf-stable (no refrigeration required), UHT-processed, aseptically packaged whey protein drink was formulated using a new strategy in the microparticulation process. By applying thermal and two-dimensional mechanical treatments simultaneously, a modified protein (MWPC-80) was produced. The physical, thermal, and thermodynamic properties of MWPC-80 were then assessed using particle size analysis, dynamic temperature sweep (DTS), and differential scanning calorimetry (DSC). Finally, a new RTD beverage was formulated with MWPC-80, and its shelf stability was assessed for three months at ambient temperature (25 °C). A non-isothermal dynamic temperature sweep was performed, and the results were analyzed by a combination of the classic rate equation, the Arrhenius equation, and the time-temperature relationship. Overall, the temperature dependency of the modified sample was significantly (p < 0.05) lower than that of the control containing WPC-80. The elastic modulus of the MWPC did not show any critical point at any of the processing stages, whereas the control sample showed two critical points, during the heating (82.5 °C) and cooling (71.10 °C) stages. The thermal properties of the samples (WPC-80 and MWPC-80) were assessed by DSC at a heating rate of 4 °C/min over a 20-90 °C range. The MWPC DSC curve showed no thermal peak, suggesting high thermal resistance. In contrast, the WPC-80 sample showed a significant thermal peak with thermodynamic properties of ΔG = 942.52 kJ/mol, ΔH = 857.04 kJ/mol, and ΔS = −1.22 kJ/(mol·K). Dynamic light scattering showed average particle sizes of 0.7 µm and 15 nm for the MWPC-80 and WPC-80 samples, respectively; moreover, the particle size distributions of MWPC-80 and WPC-80 were Gaussian-Lorentzian and normal, respectively.
After verification of the microparticulation process by the DTS, PSD, and DSC analyses, a 10% whey protein beverage (10% w/w MWPC-80, 0.6% w/w vanilla flavoring agent, 0.1% masking flavor, 0.05% stevia natural sweetener, and 0.25% citrate buffer) was formulated, and UHT treatment was performed at 137 °C for 4 s. The shelf-life study showed no gelation or precipitation of the MWPC-80 beverage during three months of storage at ambient temperature, whereas the WPC-80 beverage showed significant precipitation and gelation after thermal processing, even at 3% w/w concentration. Consumer awareness of the nutritional advantages of whey protein has increased demand for this protein in different food systems, especially RTD beverages, and these results could make a substantial difference in this industry.
Keywords: high protein whey beverage, microparticulation, two-dimensional mechanical treatments, thermodynamic properties
Procedia PDF Downloads 74
116 Transgenerational Impact of Intrauterine Hyperglycaemia to F2 Offspring without Pre-Diabetic Exposure on F1 Male Offspring
Authors: Jun Ren, Zhen-Hua Ming, He-Feng Huang, Jian-Zhong Sheng
Abstract:
An adverse intrauterine stimulus during critical or sensitive periods of early life may lead to health risks not only later in the life span but also in further generations. Intrauterine hyperglycaemia, a major feature of gestational diabetes mellitus (GDM), is a typical adverse environment for the development of both the F1 fetus and F1 gamete cells. However, there is scarce information on phenotypic differences in metabolic memory between somatic cells and germ cells exposed to intrauterine hyperglycaemia, and the direct transmission effect of intrauterine hyperglycaemia per se has not been assessed either. In this study, we built a GDM mouse model and selected male GDM offspring without a pre-diabetic phenotype as founders, to exclude postnatal diabetic influence on the gametes, thereby investigating the direct transmission effect of intrauterine hyperglycaemia exposure on F2 offspring; we further compared the metabolic differences between affected F1-GDM male offspring and F2 offspring. A GDM mouse model of intrauterine hyperglycaemia was established by intraperitoneal injection of streptozotocin after pregnancy. Pups of GDM mothers were fostered by normal control mothers, and all mice were fed standard food. Male GDM offspring without a metabolic dysfunction phenotype were crossed with normal female mice to obtain F2 offspring. Body weight, glucose tolerance test, insulin tolerance test, and the homeostasis model assessment of insulin resistance (HOMA-IR) index were measured in both generations at 8 weeks of age. Some of the F1-GDM male mice showed impaired glucose tolerance (p < 0.001), but none showed impaired insulin sensitivity, and their body weight did not differ significantly from that of control mice. Some of the F2-GDM offspring exhibited impaired glucose tolerance (p < 0.001), and all of them exhibited a higher HOMA-IR index (p < 0.01 for normal-glucose-tolerance individuals vs. control; p < 0.05 for glucose-intolerant individuals vs. control).
All the F2-GDM offspring exhibited a higher ITT curve than controls (p < 0.001 for normal-glucose-tolerance individuals, p < 0.05 for glucose-intolerant individuals, vs. control) and a higher body weight than control mice (p < 0.001 for both normal-glucose-tolerance and glucose-intolerant individuals vs. control). While glucose intolerance is the only phenotype that F1-GDM male mice may exhibit, the F2 male offspring of healthy F1-GDM fathers showed insulin resistance, increased body weight, and/or impaired glucose tolerance. These findings imply that intrauterine hyperglycaemia exposure affects germ cells and somatic cells differently; thus, the F1 and F2 offspring demonstrated distinct metabolic dysfunction phenotypes. Intrauterine hyperglycaemia exposure per se has a strong influence on the F2 generation, independent of postnatal metabolic dysfunction exposure.
Keywords: inheritance, insulin resistance, intrauterine hyperglycaemia, offspring
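For context, the HOMA-IR index used in both generations is conventionally computed from fasting glucose and insulin; the 22.5 normalizing constant comes from the original human model of Matthews et al., and its application to mouse data is a common but debated convention:

```python
def homa_ir(fasting_glucose_mmol_per_l, fasting_insulin_uu_per_ml):
    """HOMA-IR = fasting glucose [mmol/L] * fasting insulin [uU/mL] / 22.5.
    Higher values indicate greater insulin resistance."""
    return fasting_glucose_mmol_per_l * fasting_insulin_uu_per_ml / 22.5
```

For example, a fasting glucose of 5.0 mmol/L with a fasting insulin of 9.0 µU/mL gives a HOMA-IR of 2.0.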
Procedia PDF Downloads 238
115 Low SPOP Expression and High MDM2 Expression Are Associated with Tumor Progression and Predict Poor Prognosis in Hepatocellular Carcinoma
Authors: Chang Liang, Weizhi Gong, Yan Zhang
Abstract:
Purpose: Hepatocellular carcinoma (HCC) is a malignant tumor with a high mortality rate and poor prognosis worldwide. Murine double minute 2 (MDM2) regulates the tumor suppressor p53, increasing cancer risk and accelerating tumor progression. Speckle-type POX virus and zinc finger protein (SPOP), a key subunit of the Cullin-RING E3 ligase, inhibits tumorigenesis and progression through the ubiquitination of its downstream substrates. This study aimed to clarify whether SPOP and MDM2 are mutually regulated in HCC and how they correlate with the prognosis of HCC patients. Methods: First, the expression of SPOP and MDM2 in HCC tissues was examined in the TCGA database. Then, 53 paired samples of HCC tumor and adjacent tissue were collected to evaluate the expression of SPOP and MDM2 using immunohistochemistry. The chi-square test or Fisher's exact test was used to analyze the relationship between clinicopathological features and the expression levels of SPOP and MDM2. In addition, Kaplan-Meier curve analysis and the log-rank test were used to investigate the effects of SPOP and MDM2 on the survival of HCC patients. Finally, a multivariate Cox proportional hazards regression model was used to analyze whether the expression levels of SPOP and MDM2 were independent risk factors for the prognosis of HCC patients. Results: Bioinformatics analysis revealed that low expression of SPOP and high expression of MDM2 were related to worse prognosis in HCC patients, and the expression of SPOP and MDM2 showed opposite trends in relation to tumor stem-like features. Immunohistochemistry showed that SPOP protein was significantly downregulated, while MDM2 protein was significantly upregulated, in HCC tissue compared with para-cancerous tissue. Tumors with low SPOP expression were associated with worse T stage and Barcelona Clinic Liver Cancer (BCLC) stage, while tumors with high MDM2 expression were associated with worse T stage, M stage, and BCLC stage.
Kaplan–Meier curves showed that HCC patients with high SPOP expression and low MDM2 expression had better survival than those with low SPOP expression and high MDM2 expression (P < 0.05). A multivariate Cox proportional hazards regression model confirmed that high MDM2 expression was an independent risk factor for poor prognosis in HCC patients (P < 0.05). Conclusion: The expression of SPOP protein was significantly downregulated, while the expression of MDM2 was significantly upregulated in HCC. Low expression of SPOP and high expression of MDM2 were associated with malignant progression and poor prognosis in HCC patients, indicating a potential therapeutic target for HCC. Keywords: hepatocellular carcinoma, murine double minute 2, speckle-type POX virus and zinc finger protein, ubiquitination
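The Kaplan–Meier analysis described above can be sketched in a few lines of plain Python; a minimal product-limit estimator with made-up survival data (not the authors' analysis code):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate S(t) at each distinct event time.

    times  -- follow-up time for each patient
    events -- 1 if the event (death) was observed, 0 if censored
    """
    s, curve = 1.0, []
    for t in sorted(set(ti for ti, ei in zip(times, events) if ei)):
        n = sum(1 for ti in times if ti >= t)                          # at risk at t
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)  # events at t
        s *= 1.0 - d / n
        curve.append((t, s))
    return curve
```

A log-rank test comparing two such curves (e.g. high- vs. low-SPOP groups) is what yields the P < 0.05 reported above.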
Procedia PDF Downloads 144
114 Improving Predictions of Coastal Benthic Invertebrate Occurrence and Density Using a Multi-Scalar Approach
Authors: Stephanie Watson, Fabrice Stephenson, Conrad Pilditch, Carolyn Lundquist
Abstract:
Spatial data detailing both the distribution and density of functionally important marine species are needed to inform management decisions. Species distribution models (SDMs) have proven helpful in this regard; however, models often focus only on species occurrences derived from spatially expansive datasets and lack the resolution and detail required to inform regional management decisions. Boosted regression trees (BRT) were used to produce high-resolution (250 m) SDMs at two spatial scales predicting probability of occurrence, abundance (count per sample unit), density (count per km²), and uncertainty for seven coastal seafloor taxa that vary in habitat usage and distribution, to examine prediction differences and implications for coastal management. We investigated whether small-scale, regionally focused models (82,000 km²) can provide improved predictions compared to data-rich national-scale models (4.2 million km²). We explored the variability in predictions across model type (occurrence vs. abundance) and model scale to determine whether specific taxa models or model types are more robust to geographical variability. National-scale occurrence models correlated well with broad-scale environmental predictors, resulting in higher AUC (area under the receiver operating characteristic curve) and deviance explained scores; however, they tended to overpredict in the coastal environment and lacked spatially differentiated detail for some taxa. Regional models had lower overall performance, but for some taxa, spatial predictions were more differentiated at a localised ecological scale. National density models were often spatially refined and highlighted areas of ecological relevance, producing more useful outputs than regional-scale models. The utility of a two-scale approach aids the selection of the most suitable combination of models to create a spatially informative density model, as results contrasted for specific taxa between model type and scale.
However, it is vital that robust predictions of occurrence and abundance are generated as inputs for the combined density model, as areas that do not spatially align between models can be discarded. This study demonstrates the variability in SDM outputs created over different geographical scales and highlights implications and opportunities for managers utilising these tools for regional conservation, particularly in data-limited environments. Keywords: benthic ecology, spatial modelling, multi-scalar modelling, marine conservation
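AUC, the headline metric for the occurrence models above, has a simple rank interpretation: the probability that a randomly chosen presence site receives a higher predicted score than a randomly chosen absence site. A minimal pure-Python sketch with hypothetical scores (not the authors' BRT pipeline):

```python
def auc(scores_presence, scores_absence):
    """Mann-Whitney form of the area under the ROC curve: the fraction
    of (presence, absence) score pairs ranked correctly, ties counting half."""
    wins = 0.0
    for p in scores_presence:
        for a in scores_absence:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5
    return wins / (len(scores_presence) * len(scores_absence))
```

Perfect separation gives 1.0 and uninformative scores give 0.5, which is why the national models' higher AUC can coexist with coastal overprediction: AUC rewards ranking, not calibration.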
Procedia PDF Downloads 77
113 Application of Geosynthetics for the Recovery of Located Road on Geological Failure
Authors: Rideci Farias, Haroldo Paranhos
Abstract:
The present work deals with the use of a drainage geocomposite as a deep drainage element and a geogrid to reinforce the base of the embankment supporting the road pavement over geological faults in a stretch of the TO-342 Highway, between the cities of Miracema and Miranorte, in the State of Tocantins, Brazil, which for many years was the main link between TO-010 and BR-153 beyond the city of Palmas, also in Tocantins. For this application, geotechnical and geological studies were carried out by means of SPT percussion and rotary drilling to understand the problem, identifying the type of faults and the filling material and defining the water table. According to these studies, the defined route passes through a fault zone longitudinal to the roadway, with strong fracturing, voids, intense alteration and advanced argillization of the rock, and with parts of the faults filled by organic, compressible soils leached from other horizons. An aggravating geotechnical factor is that this medium combines a high hydraulic head with very low penetration resistance. For more than 20 years, the region presented constant, excessive deformations in the upper layers of the pavement; even after routine regularization, reshaping and recompaction of the layers and reapplication of the asphalt coating, the faults quickly propagated back to the surface of the asphalt pavement, generating a longitudinal shear and forming steps (unevenness) of close to 40 cm, causing numerous accidents and discomfort to drivers, since the alignment lay on a horizontal curve. Several projects were presented to the region's highway department to solve the problem.
Due to the need for only partial closure of the roadway and the short time available for execution, the use of geosynthetics was proposed; the solution considered most adequate took into account the movement of the existing geological faults and the position of the water table in relation to the pavement layers and the faults. To prevent any flow of water into the body of the embankment and into the fault-filling material, a drainage curtain was executed at 4.0 meters depth using the drainage geocomposite, and, as a reinforcement element to inhibit possible fault movements, a geogrid with a tensile strength of 200 kN/m was inserted at the base of the reconstituted embankment. Recent evaluations, after 13 years of application of the solution, show the efficiency of the technique used, supported by the geotechnical studies carried out in the area. Keywords: geosynthetics, geocomposite, geogrid, road, recovery, geological failure
Procedia PDF Downloads 170
112 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID
Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis
Abstract:
Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest x-rays and unpredictable staff shortages placed a huge strain on the department's workload. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology. There could therefore be demand for a program that detects acute changes in imaging compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal and 77 abnormal chest x-rays from patients with confirmed coronavirus infection was used to generate an algorithm that could be trained, validated and then tested. DICOM and PNG image formats were selected for their lossless compression. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training of the model involved teaching a convolutional neural network what constituted a normal study and which changes on the x-rays were compatible with coronavirus infection. The weightings were then modified, and the model was executed again. The training samples were processed in batch sizes of 8 over 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest x-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis.
Following a discussion with our programmer, there are areas where modifications in the weighting of the algorithm can be made to improve the detection rates. Given the program's high detection rate and potential ease of implementation, it would be effective in assisting staff who are not trained in radiology to detect otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program and widening its scope to detect multiple pathologies, such as lung masses, will greatly assist both the radiology department and our colleagues by increasing workflow and detection rates. Keywords: artificial intelligence, COVID, neural network, machine learning
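The training arithmetic reported above (100 training images, batch size 8, 25 epochs) can be checked directly; a tiny sketch, assuming the common convention of one weight update per batch with a final partial batch:

```python
import math

def training_steps(n_samples, batch_size, epochs):
    """Number of gradient updates: batches per epoch (the last batch
    may be partial) multiplied by the number of epochs."""
    steps_per_epoch = math.ceil(n_samples / batch_size)
    return steps_per_epoch * epochs

# 100 training images in batches of 8 -> 13 batches per epoch,
# so 25 epochs perform 325 weight updates in total.
```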
Procedia PDF Downloads 93
111 A Preliminary in vitro Investigation of the Acetylcholinesterase and α-Amylase Inhibition Potential of Pomegranate Peel Extracts
Authors: Zoi Konsoula
Abstract:
The increasing prevalence of Alzheimer’s disease (AD) and diabetes mellitus (DM) makes them major global health problems. Recently, inhibition of key enzymes has been considered a potential treatment for both diseases. Specifically, inhibition of acetylcholinesterase (AChE), the key enzyme involved in the breakdown of the neurotransmitter acetylcholine, is a promising approach for the treatment of AD, while inhibition of α-amylase retards the hydrolysis of carbohydrates and thus reduces hyperglycemia. Unfortunately, commercially available AChE and α-amylase inhibitors are reported to have side effects. Consequently, there is a need to develop safe and effective treatments for both diseases. In the present study, pomegranate peel (PP) was extracted using various solvents of increasing polarity, and two extraction methods were employed: conventional maceration and ultrasound-assisted extraction (UAE). The concentrations of bioactive phytoconstituents, such as total phenolics (TPC) and total flavonoids (TFC), in the prepared extracts were evaluated by the Folin-Ciocalteu and aluminum-flavonoid complex methods, respectively. Furthermore, the anti-neurodegenerative and anti-hyperglycemic activities of all extracts were determined using AChE and α-amylase inhibitory activity assays, respectively. The inhibitory activity of the extracts against AChE and α-amylase was characterized by estimating their IC₅₀ values from dose-response curves, with galanthamine and acarbose used as positive controls, respectively. Finally, the kinetics of AChE and α-amylase in the presence of the most potent extracts were determined by Lineweaver-Burk plots. The methanolic extract prepared using UAE contained the highest amount of phytoconstituents, followed by the respective ethanolic extract.
All extracts inhibited acetylcholinesterase in a dose-dependent manner, and the increased anticholinesterase activity of the methanolic (IC₅₀ = 32 μg/mL) and ethanolic (IC₅₀ = 42 μg/mL) extracts was positively correlated with their TPC content. Furthermore, the activity of these extracts was comparable to that of galanthamine. Similar results were obtained in the case of α-amylase; however, all extracts showed a lower inhibitory effect on the carbohydrate-hydrolyzing enzyme than on AChE, with IC₅₀ values ranging from 84 to 100 μg/mL. The α-amylase inhibitory effect of the extracts was also lower than that of acarbose. Finally, the methanolic and ethanolic extracts prepared by UAE inhibited both enzymes in a mixed (competitive/noncompetitive) manner, since the Kₘ value of both enzymes increased in the presence of the extracts while the Vₘₐₓ value decreased. The results of the present study indicate that PP may be a useful source of active compounds for the management of AD and DM. Moreover, considering that PP is an agro-industrial waste product, its valorization could not only improve economic efficiency but also reduce environmental pollution. Keywords: acetylcholinesterase, Alzheimer’s disease, α-amylase, diabetes mellitus, pomegranate
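The Lineweaver-Burk analysis used above linearizes the Michaelis-Menten equation as 1/v = (Kₘ/Vₘₐₓ)(1/[S]) + 1/Vₘₐₓ, so Kₘ and Vₘₐₓ fall out of a straight-line fit to (1/[S], 1/v). A minimal sketch with synthetic, not experimental, data:

```python
def lineweaver_burk(substrate, rates):
    """Least-squares fit of 1/v = slope * (1/[S]) + intercept;
    recover Vmax = 1/intercept and Km = slope * Vmax."""
    xs = [1.0 / s for s in substrate]
    ys = [1.0 / v for v in rates]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    vmax = 1.0 / intercept
    km = slope * vmax
    return km, vmax

# Rates generated from v = Vmax*[S]/(Km+[S]) with Km=5, Vmax=2 are
# recovered exactly; a mixed inhibitor raises Km and lowers Vmax,
# which is the pattern reported for the UAE extracts.
```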
Procedia PDF Downloads 122
110 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have targeted protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient, alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated over the 37,503 data points to reduce the generalization error and cost. Maximum likelihood estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the receiver operating characteristic (ROC) curve was determined. The generalization performance of PNRI was assessed in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms.
The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and then to 0.718 for the first, second, and third iterations, respectively. The iterations terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC AUC of 0.97, indicating improved predictive ability, and identified both protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identified the protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes. Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
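The core of the PNRI described above, logistic regression trained by gradient steps on the log-likelihood with a sigmoid activation and a tunable decision threshold, can be sketched in plain Python. This is a toy one-feature illustration with made-up data, not the authors' six-feature model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=500):
    """Stochastic gradient ascent on the log-likelihood of y ~ sigmoid(w*x + b).
    The per-sample gradient of the log-likelihood is (y - sigmoid(w*x + b))."""
    w = b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = y - sigmoid(w * x + b)
            w += lr * err * x
            b += lr * err
    return w, b

def classify(x, w, b, threshold=0.5):
    """The threshold plays the role of the model's dynamic cutoff."""
    return 1 if sigmoid(w * x + b) >= threshold else 0
```

Sweeping `threshold` and scoring each choice is one simple reading of the dynamic thresholding step; the ROC curve is traced out by exactly that sweep.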
Procedia PDF Downloads 68
109 Prediction of Coronary Artery Stenosis Severity Based on Machine Learning Algorithms
Authors: Yu-Jia Jian, Emily Chia-Yu Su, Hui-Ling Hsu, Jian-Jhih Chen
Abstract:
The coronary arteries are the major suppliers of myocardial blood flow. When fat and cholesterol are deposited in the coronary arterial wall, narrowing and stenosis of the artery occur, which may lead to myocardial ischemia and eventually infarction. According to the World Health Organization (WHO), an estimated 7.4 million people died of coronary heart disease in 2015. According to statistics from the Ministry of Health and Welfare in Taiwan, heart disease (excluding hypertensive diseases) ranked second among the top 10 causes of death from 2013 to 2016, and it still shows a growing trend. According to the American Heart Association (AHA), the risk factors for coronary heart disease include age (> 65 years), sex (men to women at a 2:1 ratio), obesity, diabetes, hypertension, hyperlipidemia, smoking, family history, lack of exercise, and more. We collected a dataset of 421 patients who received coronary computed tomography (CT) angiography at a hospital in northern Taiwan. There were 300 males (71.26%) and 121 females (28.74%), with ages ranging from 24 to 92 years and a mean age of 56.3 years. Prior to coronary CT angiography, basic data of the patients, including age, gender, body mass index (BMI), diastolic blood pressure, systolic blood pressure, diabetes, hypertension, hyperlipidemia, smoking, family history of coronary heart disease, and exercise habits, were collected and used as input variables. The output variable of the prediction module is the degree of coronary artery stenosis. In this study, the dataset was randomly divided into 80% as the training set and 20% as the test set. Four machine learning algorithms, including logistic regression, stepwise regression, neural network, and decision tree, were used to generate prediction results. We used area under the curve (AUC) and accuracy (Acc.)
to compare the four models. The best model was the neural network, followed by stepwise logistic regression, decision tree, and logistic regression, with AUC/accuracy of 0.68/79%, 0.68/74%, 0.65/78%, and 0.65/74%, respectively. The sensitivity of the neural network was 27.3% and its specificity 90.8%; stepwise logistic regression had a sensitivity of 18.2% and a specificity of 92.3%; the decision tree had a sensitivity of 13.6% and a specificity of 100%; and logistic regression had a sensitivity of 27.3% and a specificity of 89.2%. Based on these results, we hope to improve the accuracy in the future by tuning the model parameters or other methods, and to address the problem of low sensitivity by adjusting the imbalanced proportion of positive and negative data. Keywords: decision support, computed tomography, coronary artery, machine learning
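The sensitivity and specificity figures quoted above come straight from the confusion matrix; a small helper (illustrative only, with hypothetical counts) makes the definitions explicit:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }
```

The pattern of low sensitivity alongside high specificity and respectable accuracy is exactly what a class-imbalanced test set produces, which is the issue the abstract proposes to address by rebalancing the positive and negative data.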
Procedia PDF Downloads 229
108 Change of Substrate in Solid State Fermentation Can Produce Proteases and Phytases with Extremely Distinct Biochemical Characteristics and Promising Applications for Animal Nutrition
Authors: Paula K. Novelli, Margarida M. Barros, Luciana F. Flueri
Abstract:
The utilization of agricultural by-products, wheat bran and soybean bran, as substrates for solid-state fermentation (SSF) was studied, aiming to obtain enzymes from Aspergillus sp. with distinct biochemical characteristics and to apply them in animal nutrition. Aspergillus niger and Aspergillus oryzae were studied, as they showed very high yields of phytase and protease production, respectively. Phytase activity was measured using p-nitrophenyl phosphate as substrate and a standard curve of p-nitrophenol, the enzymatic activity unit being the quantity of enzyme necessary to release one μmol of p-nitrophenol. Protease activity was measured using azocasein as substrate. Activity for both phytase and protease increased substantially when the different biochemical characteristics were considered in the study. The optimum pH and pH stability of the phytase produced by A. niger on wheat bran were between 4.0 and 5.0, and the optimum temperature of activity was 37 °C. Phytase fermented on soybean bran showed constant optimum-pH and stability values at all pHs studied, but low production. Phytase on both substrates showed stable activity at temperatures higher than 80 °C. Protease from A. niger showed very distinct optimum-pH behavior, acidic for wheat bran and basic for soybean bran, with optimal temperature and stability values at 50 °C. Phytase produced by A. oryzae on wheat bran had an optimum pH and temperature of 9 and 37 °C, respectively, but was very unstable. On the other hand, the proteases were stable at high temperatures and at all pHs studied, and showed very high yields when fermented on wheat bran; when fermented on soybean bran, however, production was very low. Subsequently, the scaled-up phytase from A. niger and proteases from A. oryzae were applied as enzyme additives in fish feed for digestibility studies.
Phytases and proteases were produced with stable enzyme activities of 7,000 U·g⁻¹ and 2,500 U·g⁻¹, respectively. When these enzymes were applied in a plant-protein-based fish diet for digestibility studies, they increased protein, mineral, energy, and lipid availability, showing that these new enzymes can improve animal production and performance. In conclusion, the substrate, as well as the microorganism species, can affect the biochemical character of the enzyme produced. Moreover, the production of these enzymes by SSF can be up to 90% cheaper than that of commercial enzymes produced with the same fungal species but by submerged fermentation. In addition, these inexpensive enzymes can easily be applied as animal feed additives to improve production and performance. Keywords: agricultural by-products, animal nutrition, enzymes production, solid state fermentation
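The activity figures above (U·g⁻¹) follow from the unit definition given earlier, the amount of enzyme releasing one μmol of p-nitrophenol, combined with a standard-curve conversion. A minimal sketch with hypothetical assay numbers (the standard-curve slope, incubation time, and sample mass are made up, and the per-minute convention is an assumption, since the abstract defines the unit only as the amount releasing one μmol):

```python
def activity_units_per_gram(absorbance, curve_slope, minutes, grams):
    """Convert a p-nitrophenol absorbance reading to enzyme activity.

    curve_slope -- absorbance per umol p-nitrophenol (standard curve)
    minutes     -- incubation time; assumes U = umol released per minute
    grams       -- mass of fermented solids assayed
    """
    umol_released = absorbance / curve_slope
    return umol_released / (minutes * grams)

# e.g. A = 0.6 with a slope of 0.3 A/umol -> 2 umol released; over
# 10 min from 0.001 g of fermented solids -> 200 U per gram.
```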
Procedia PDF Downloads 326