Search results for: experimental simulation

834 Morphological Differentiation and Temporal Variability in Essential Oil Yield and Composition among Origanum vulgare ssp. hirtum L., Origanum onites L. and Origanum x intercedens from Ikaria Island (Greece)

Authors: A. Assariotakis, P. Vahamidis, P. Tarantilis, G. Economou

Abstract:

Owing to its geographical location and particular climatic conditions, Greece presents a high biodiversity of Medicinal and Aromatic Plants. Among them, the genus Origanum not only has a wide distribution but also great economic importance. After extensive surveys on Ikaria Island (Greece), three species of the genus Origanum were identified, namely Origanum vulgare ssp. hirtum (Greek oregano), Origanum onites (Turkish oregano) and Origanum x intercedens, a naturally occurring hybrid between O. hirtum and O. onites. The purpose of this study was to determine their morphological differentiation as well as the temporal variability in their essential oil yield and composition under field conditions. For this reason, a plantation of each species was created using vegetative propagation and established at the experimental field of the Agricultural University of Athens (A.U.A.). From the establishment year and for the following two years (three years of observations), several observations were taken during each growing season in order to identify the morphological differences among the studied species. Each year, plant material collected at the bloom stage was air-dried at room temperature in the shade. The essential oil content was determined by hydrodistillation using a Clevenger-type apparatus. The chemical composition of the essential oils was investigated by Gas Chromatography-Mass Spectrometry (GC-MS). Significant differences were observed among the three oregano species in terms of plant height, leaf size and inflorescence features, as well as their biological cycle. The O. intercedens inflorescence presented more similarities with O. hirtum than with O. onites. It was found that calyx morphology could serve as a clear distinguishing feature between O. intercedens and O. hirtum: the calyx in O. hirtum presents five isometric teeth, whereas in O. intercedens it presents two longer and three shorter teeth. Essential oil content was significantly affected by genotype and year. O. hirtum presented a higher essential oil content than the other two species during the first year of cultivation; however, during the second year the hybrid (O. intercedens) recorded the highest values. Carvacrol, p-cymene and γ-terpinene were the main essential oil constituents of the three studied species. In O. hirtum, carvacrol content varied from 84.28% to 93.35%, in O. onites from 86.97% to 91.89%, whereas O. intercedens recorded the highest carvacrol content, from 89.25% to 97.23%.

Keywords: variability, oregano biotypes, essential oil, carvacrol

Procedia PDF Downloads 123
833 An Evaluation of the Lae City Road Network Improvement Project

Authors: Murray Matarab Konzang

Abstract:

The Lae Port Development Project, the Four Lane Highway, and other developments in the extraction industry that have a direct road link to Lae City are predicted to have a significant impact on its road network system. This paper evaluates the Lae roads improvement program, with forecasts on planning, economics, and the installation of bypasses to ease congestion, provide effective and convenient transport of bulk goods, and reduce travel time. A land-use transportation study and plans for a local area traffic management scheme are also considered. City roads face increased traffic volumes, inadequate pavement widths, poor transport plans, and insufficient facilities to meet this transportation demand. Lae also has a drainage system that might not withstand a 100-year flood. Proper evaluation, planning, design, and intersection analysis are needed to assess the road network system, recommend improvements, and estimate future growth. Repetitive, cyclic loading by heavy commercial vehicles with different axle configurations acts on the flexible pavement, weakening and tearing the pavement surface so that small cracks occur; rainwater seeps through and, over time, creates potholes. Effective planning starts from experimental research and appropriate design standards that enable firm embankments, proper drains, and quality pavement material. This paper addresses traffic problems as well as road pavement condition, intersection capacities, and pedestrian flow during peak hours. The outcome of this research is to identify heavily trafficked road sections and recommend treatments to reduce traffic congestion, propose a road classification, and put forward bypass routes and improvements. The first part of the study describes transport and traffic related problems within the city; the second part identifies the challenges imposed by these problems; the third part recommends solutions after analyzing traffic data that indicate the current capacities of road intersections, together with recommended treatments for improvement and future growth.
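
Where the abstract discusses repetitive axle loading of the flexible pavement, a common way to turn a mixed traffic stream into a design traffic figure is the AASHTO equivalent single axle load (ESAL) with its fourth-power approximation. The sketch below is only illustrative; the axle spectrum and design period are hypothetical values, not Lae traffic data.

```python
# Minimal sketch: converting mixed axle loads into equivalent standard axle loads (ESALs)
# using the classical AASHTO "fourth-power" approximation. The traffic mix below is
# purely hypothetical and not taken from the Lae study.

STANDARD_AXLE_KN = 80.0  # standard single-axle load (~18,000 lb)

def esal_factor(axle_load_kn: float) -> float:
    """Approximate damage of one axle pass relative to one standard axle pass."""
    return (axle_load_kn / STANDARD_AXLE_KN) ** 4

# hypothetical daily axle spectrum: (axle load in kN, passes per day)
axle_spectrum = [(60.0, 400), (80.0, 250), (100.0, 120), (130.0, 30)]

daily_esals = sum(esal_factor(load) * passes for load, passes in axle_spectrum)
design_esals = daily_esals * 365 * 20  # 20-year design period, no growth factor

print(f"Daily ESALs:  {daily_esals:,.0f}")
print(f"20-yr ESALs: {design_esals:,.0f}")
```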

Keywords: Lae, road network, highway, vehicle traffic, planning

Procedia PDF Downloads 355
832 Recursion, Merge and Event Sequence: A Bio-Mathematical Perspective

Authors: Noury Bakrim

Abstract:

Formalization is indeed foundational to Mathematical Linguistics, as demonstrated by the pioneering works. While dialoguing with this frame, we nonetheless propose, in our approach of language as a real object, a mathematical linguistics/biosemiotics defined as a dialectical synthesis between induction and computational deduction. Therefore, relying on the parametric interaction of cycles, rules, and features giving way to a sub-hypothetic biological point of view, we first hypothesize a factorial equation as an explanatory principle within Category Mathematics of the Ergobrain: our computation proposal of Universal Grammar rules per cycle or a scalar determination (multiplying right/left columns of the determinant matrix and right/left columns of the logarithmic matrix) of the transformable matrix for rule addition/deletion and cycles within representational mapping/cycle heredity, based on the factorial example, being the logarithmic exponent or power of rule deletion/addition. This enables us to propose an extension of minimalist merge/label notions to a Language Merge (as a computing principle) within cycle recursion relying on combinatorial mapping of rule hierarchies on the external Entax of the Event Sequence. Therefore, to define combinatorial maps as language merge of features and combinatorial hierarchical restrictions (governing, commanding, and other rules), we secondly hypothesize from our results feature/hierarchy exponentiation on graph representation deriving from Gromov's Symbolic Dynamics, where combinatorial vertices from Fe are set to combinatorial vertices of Hie and edges from Fe to Hie such that, for every combinatorial group, there are restriction maps representing different derivational levels that are subgraphs: the intersection on I defines pullbacks and deletion rules (under restriction maps), then under disjunction edges H such that for the combinatorial map P belonging to Hie exponentiation by intersection there are pullbacks and projections that are equal to restriction maps RM₁ and RM₂. The model will draw on experimental biomathematics as well as structural frames, with focus on Amazigh and English (cases from phonology/micro-semantics and syntax) and the shift from structure to event (especially the Amazigh formant principle resolving its morphological heterogeneity).

Keywords: rule/cycle addition/deletion, bio-mathematical methodology, general merge calculation, feature exponentiation, combinatorial maps, event sequence

Procedia PDF Downloads 122
831 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale

Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal

Abstract:

Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and with the current state of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett Shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett Shale, using net present value as an evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend-Arch Basin), with the analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and 2015, with 1835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend-Arch Basin and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The range of EUR from each basin was loaded into the Palisade Risk software, and a log-normal distribution, typical of Barnett Shale wells, was fitted to the dataset. A Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50 and P90. The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate per year), as well as to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008 to 2015. One of the major findings of this study was that wells in the Bend-Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett Shale wells were not economic at any of the finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic over different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
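
A minimal sketch of the probabilistic workflow described above: sample a log-normal EUR distribution, take the P10/P50/P90 values, and feed them into a simplified NPV calculation at a 10% discount rate. The distribution parameters, decline profile and cost treatment below are hypothetical simplifications for illustration, not the study's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical log-normal EUR distribution for one basin (BCF per well);
# the mean/sigma below are illustrative, not the Barnett values from the study.
eur_samples = rng.lognormal(mean=np.log(1.5), sigma=0.8, size=1000)  # 1,000 iterations
p10, p50, p90 = np.percentile(eur_samples, [90, 50, 10])  # petroleum convention: P10 = high case

def npv(eur_bcf, gas_price_per_mcf, fd_cost, discount_rate=0.10, years=15):
    """Very simplified NPV: spread the EUR over an exponentially declining profile.
    Revenue only; operating costs are omitted for brevity."""
    decline = 0.35
    q = np.array([(1 - decline) ** t for t in range(years)])
    production = eur_bcf * 1e6 * q / q.sum()          # MCF produced each year (1 BCF = 1e6 MCF)
    cash_flows = production * gas_price_per_mcf       # annual revenue
    discounted = cash_flows / (1 + discount_rate) ** np.arange(1, years + 1)
    return discounted.sum() - fd_cost                 # subtract finding & development cost

for label, eur in [("P90", p90), ("P50", p50), ("P10", p10)]:
    print(label, round(npv(eur, gas_price_per_mcf=4.0, fd_cost=4e6)))
```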

Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recoverable

Procedia PDF Downloads 296
830 Effects of Lateness Gene on Yield and Related Traits in Indica Rice

Authors: B. B. Rana, M. Yokota, Y. Shimizu, Y. Koide, I. Takamure, T. Kawano, M. Murai

Abstract:

Various genes which control or affect heading time have been found in rice. Among them, the Se1 and E1 loci play important roles in determining heading time by controlling photosensitivity. An isogenic-line pair of late and early lines was developed from progenies of the F1 from Suweon 258 × 36U. A lateness gene, tentatively designated "Ex", was found to control the difference in heading time between the early and late lines mentioned above. The present study was conducted to examine the effect of Ex on yield and related traits. The indica-type variety Suweon 258 was crossed with 36U, which is an Ur1 (Undulate rachis-1) isogenic line of IR36. In the F2 population, comparatively early-heading, late-heading and intermediate-heading plants segregated. Segregation into the same three heading types was observed in the F3 and later generations. A late-heading plant and an early-heading plant were selected in the F8 population derived from an intermediate-heading F7 plant, for developing the L and E lines of the isogenic-line pair, respectively. Experiments on L and E were conducted in a randomized block design with three replications. Transplanting was conducted on May 3 at a planting distance of 30 cm × 15 cm, with two seedlings per hill, in an experimental field of the Faculty of Agriculture, Kochi University. Chemical fertilizers containing N, P₂O₅ and K₂O were applied at total nitrogen levels of 4 g/m², 9 g/m² and 18 g/m², denoted "N4", "N9" and "N18", respectively. Yield, yield components and other traits were measured. Ex delayed 80%-heading by 17 or 18 days in L compared with E. In total brown rice yield (g/m²), L was 635, 606 and 590, and E was 577, 548 and 501, respectively, at N18, N9 and N4, indicating that Ex increased this trait by 10% to 18%. Ex increased the yield on a 1.5 mm sieve (g/m²) by 9% to 15% at the three fertilizer levels. Ex increased the spikelet number per panicle by 16% to 22%. As a result, the spikelet number per m² was increased by 11% to 18% at the three fertilizer levels. Ex decreased 1000-grain weight (g) by 2% to 4%. L was not significantly different from E in ripened-grain percentage, fertilized-spikelet percentage or the percentage of ripened grains to fertilized spikelets. Hence, it is inferred that Ex increased yield by increasing the spikelet number per panicle. Therefore, Ex could be utilized to develop high-yielding varieties for warmer districts.
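
The yield-component reasoning above follows the standard multiplicative decomposition of grain yield. As a rough consistency check (our formulation, not a calculation reported by the authors):

```latex
% Standard multiplicative decomposition of grain yield (our notation, not the authors'):
\text{Yield}\ \left(\mathrm{g\,m^{-2}}\right)
  \;=\; N_{\mathrm{spk}} \times \frac{R}{100} \times \frac{W_{1000}}{1000},
```

where N_spk is the spikelet number per m², R the ripened-grain percentage, and W₁₀₀₀ the 1000-grain weight in grams. Under this identity, an 11% to 18% rise in spikelets per m² with an essentially unchanged ripened-grain percentage and a 2% to 4% lower 1000-grain weight is consistent with the reported 10% to 18% yield increase.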

Keywords: heading time, lateness gene, photosensitivity, yield, yield components

Procedia PDF Downloads 194
829 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and because of the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and the vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained through CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle body to visualize the variation of the coefficients along the simulation process. Employing a response surface methodology as a statistical approximation, the case study is parametrized considering the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA were determined. Using these values, the terminal speed at each position is calculated considering the specific Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag will be almost like the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it can be utilized for several missions, allowing repeatability of microgravity experiments.
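
The terminal-speed step above follows from balancing drag against weight. A minimal sketch, with the vehicle mass, reference area and air density chosen as hypothetical values (the abstract reports only the maximum Cd of 1.18):

```python
import math

RHO_AIR = 1.225   # kg/m^3, sea-level air density (assumption)
G = 9.81          # m/s^2

def terminal_speed(mass_kg, cd, ref_area_m2, rho=RHO_AIR):
    """Speed at which drag 0.5*rho*v^2*Cd*A balances weight m*g."""
    return math.sqrt(2.0 * mass_kg * G / (rho * cd * ref_area_m2))

# Hypothetical vehicle: 5 kg, 0.5 m^2 reference area, Cd values spanning the AoA sweep
for cd in (0.3, 0.7, 1.18):
    v_t = terminal_speed(mass_kg=5.0, cd=cd, ref_area_m2=0.5)
    print(f"Cd = {cd:4.2f} -> terminal speed ≈ {v_t:4.1f} m/s")
```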

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 169
828 Particle Deflection in a PDMS Microchannel Caused by a Plane Travelling Surface Acoustic Wave

Authors: Florian Keipert, Hagen Schmitd

Abstract:

The size-selective separation of different species in a microfluidic system is a topical task in biological and medical research. Previous works dealt with the utilisation of the acoustic radiation force (ARF) caused by a plane travelling Surface Acoustic Wave (tSAW). In the literature, the ARF is described by a dimensionless parameter κ, which depends on the wavelength and the particle diameter. To our knowledge, research has been done for values 0.2 < κ < 5.8, showing that the ARF dominates the acoustic streaming force (ASF) for κ > 1.2. As a consequence, particle separation is limited by κ. In addition, the dependence on the applied electrical power level was examined, but only for κ > 1, pointing to an increased particle deflection at higher electrical power levels. Nevertheless, a detailed study of the ASF and ARF, especially for κ < 1, is still missing. In our setup, we used a tSAW with a wavelength λ = 90 µm and 3 µm PS particles, corresponding to κ = 0.3. With this, the influence of the applied electrical power level on the particle deflection in a polydimethylsiloxane (PDMS) microchannel was investigated. Our results show an increased particle deflection for an increased electrical power level, which coincides with the reported results for κ > 1. Therefore, particle separation is, in contrast to the literature, also possible for lower κ values. The experimental setup can thereby be generally simplified by an electrical power level matched to the specific particle size. Furthermore, this raises the question of whether this particle deflection is caused only by the ARF, as assumed so far, or by the ASF, or by the sum of both forces. To investigate this, a 0%-24% saline solution was used, and thus the mismatch between the compressibility of the PS particles and the working fluid could be changed. In this way, it is possible to change the relative strength between the ARF and ASF and consequently the particle deflection. We observed a decrease in the particle deflection as the NaCl content increased up to a 12% saline solution, and subsequently an increase in the particle deflection. Our observation can be explained by the acoustic contrast factor Φ, which depends on the compressibility mismatch. The compressibility of water is changed by the NaCl, and the range of a 0%-24% saline solution covers the PS particle compressibility. Hence, the particle deflection reaches a minimum value when the compressibilities of the PS particles and the saline solution coincide. This minimum value can be estimated as the particle deflection caused only by the ASF. Knowing the particle deflection due to the ASF, the particle deflection caused by the ARF can be calculated and thus, finally, the relation between the two forces. In conclusion, the particle deflection, and therefore the size-selective particle separation, generated by a tSAW can be achieved for values κ < 1, simplifying current setups by adjusting the electrical power level. Beyond that, we have studied for the first time the relative strength between the ARF and ASF to characterise the particle deflection in a microchannel.
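
The quoted value κ = 0.3 can be reproduced from the abstract's numbers if κ is taken as the usual acoustofluidic size parameter κ = πd/λ_f, with λ_f the acoustic wavelength in the fluid at the SAW excitation frequency. The sound speeds below (water, 128° YX LiNbO₃ substrate) are typical literature values and an assumption on our part, not data from the paper.

```python
import math

# Assumed material data (not given in the abstract)
C_WATER = 1497.0       # speed of sound in water, m/s
C_SAW_LINBO3 = 3990.0  # SAW speed on 128° YX LiNbO3, m/s (typical value)

lambda_saw = 90e-6     # tSAW wavelength from the abstract, m
d_particle = 3e-6      # PS particle diameter from the abstract, m

# Acoustic wavelength in the fluid for the same excitation frequency
f = C_SAW_LINBO3 / lambda_saw
lambda_fluid = C_WATER / f

# Common definition of the dimensionless size parameter
kappa = math.pi * d_particle / lambda_fluid
print(f"f ≈ {f/1e6:.1f} MHz, lambda_fluid ≈ {lambda_fluid*1e6:.1f} µm, kappa ≈ {kappa:.2f}")
# -> kappa ≈ 0.28, consistent with the κ = 0.3 quoted above
```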

Keywords: ARF, ASF, particle separation, saline solution, tSAW

Procedia PDF Downloads 253
827 Vibroacoustic Modulation of Wideband Vibrations and its Possible Application for Windmill Blade Diagnostics

Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu

Abstract:

Wind turbines have become one of the most popular forms of energy production. However, blade failure and maintenance costs have become significant issues in the wind power industry, so it is essential to detect initial blade defects in order to avoid the collapse of the blades and the structure. This paper aims to apply the modulation of high-frequency blade vibrations by the low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with environmental air turbulence, and the low-frequency modulation is produced by the alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2 and 0.4 Hz and can reach up to 2 Hz in strong wind. The main difference between this study and previous work on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the vibroacoustic modulation are considered using a simple model of a breathing crack. This model considers a simple mechanical oscillator whose parameters vary with the low-frequency blade rotation: during the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM using a wideband probe noise signal. A small-amplitude cyclic load was used as the pump wave applied to the damaged test sample, and a small transducer generated a wideband probe wave. Demodulation of the received signal was conducted using the Detection of Envelope Modulation on Noise (DEMON) approach. In addition, the experimental results were compared with the modulation index (MI) technique based on a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for the early detection of invisible cracks. Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing, since it eliminates the need to repeat the test for various harmonic probe frequencies and to adjust the probe frequency.
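
A minimal sketch of the DEMON-style envelope demodulation described above, run on synthetic data: wideband noise (standing in for the turbulence-driven blade vibration) amplitude-modulated at a low pump frequency in the 0.2-0.4 Hz range. The sampling rate, modulation depth and filter settings are illustrative assumptions, not the authors' laboratory parameters.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 10_000                     # sampling rate, Hz
t = np.arange(0, 10.0, 1 / fs)  # 10 s record
f_pump = 0.3                    # low-frequency modulation, Hz (blade-rotation range)

wideband = np.random.default_rng(0).normal(size=t.size)          # probe noise
received = (1.0 + 0.2 * np.sin(2 * np.pi * f_pump * t)) * wideband

# 1) envelope via the analytic signal
envelope = np.abs(hilbert(received))

# 2) low-pass the envelope to keep only the slow modulation
b, a = butter(4, 5.0 / (fs / 2))          # 5 Hz cut-off
envelope_lp = filtfilt(b, a, envelope)

# 3) spectrum of the envelope: a peak at f_pump indicates modulation (possible damage)
spec = np.abs(np.fft.rfft(envelope_lp - envelope_lp.mean()))
freqs = np.fft.rfftfreq(envelope_lp.size, 1 / fs)
print("Dominant envelope frequency ≈", freqs[np.argmax(spec)], "Hz")
```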

Keywords: vibro-acoustic modulation, detecting of envelope modulation on noise, damage, turbine blades

Procedia PDF Downloads 93
826 A Local Tensor Clustering Algorithm to Annotate Uncharacterized Genes with Many Biological Networks

Authors: Paul Shize Li, Frank Alber

Abstract:

A fundamental task of clinical genomics is to unravel the functions of genes and their associations with disorders. Although experimental biology has made great efforts to discover and elucidate the molecular mechanisms of individual genes over the past decades, still about 40% of human genes have unknown functions, not to mention the diseases they may be related to. For biologists who are interested in a particular gene with unknown functions, a powerful computational method tailored for inferring the functions and disease relevance of uncharacterized genes is strongly needed. Studies have shown that genes strongly linked to each other in multiple biological networks are more likely to have similar functions. This indicates that densely connected subgraphs in multiple biological networks are useful for the functional and phenotypic annotation of uncharacterized genes. Therefore, in this work, we have developed an integrative network approach to identify frequent local clusters, defined as densely connected subgraphs that occur frequently in multiple biological networks and contain the query gene, i.e., a gene that has few or no disease or function annotations. This is a local clustering algorithm that models multiple biological networks sharing the same gene set as a three-dimensional matrix, the so-called tensor, and employs a tensor-based optimization method to efficiently find the frequent local clusters. Specifically, massive public gene expression data sets that comprehensively cover dynamic, physiological, and environmental conditions are used to generate hundreds of gene co-expression networks. By integrating these gene co-expression networks, for a given uncharacterized gene of interest to a biologist, the proposed method can be applied to identify the frequent local clusters that contain this uncharacterized gene. Finally, those frequent local clusters are used for the function and disease annotation of this uncharacterized gene. This local tensor clustering algorithm outperformed a competing tensor-based algorithm in both module discovery and running time. We also demonstrated the use of the proposed method on real data comprising hundreds of gene co-expression networks and showed that it can comprehensively characterize the query gene. Therefore, this study provides a new tool for annotating uncharacterized genes and has great potential to assist clinical genomic diagnostics.
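
A minimal sketch of the data structure implied above: adjacency matrices of several co-expression networks over the same gene set stacked into a third-order tensor, with an edge-density score for a candidate local cluster containing the query gene. The thresholds and the density measure below are illustrative assumptions, not the authors' tensor-based optimization method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_networks = 50, 8

# Hypothetical stack of co-expression adjacency matrices (same gene set in every network)
tensor = (rng.random((n_networks, n_genes, n_genes)) > 0.9).astype(float)
tensor = np.maximum(tensor, tensor.transpose(0, 2, 1))            # symmetric slices
tensor[:, np.arange(n_genes), np.arange(n_genes)] = 0.0           # no self-edges

def cluster_density(tensor, genes):
    """Edge density of the subgraph induced by `genes` in each network slice."""
    sub = tensor[np.ix_(range(tensor.shape[0]), genes, genes)]
    k = len(genes)
    possible = k * (k - 1)          # ordered pairs, diagonal excluded
    return sub.sum(axis=(1, 2)) / possible

query_gene = 7
candidate = [query_gene, 3, 11, 19, 25]        # candidate local cluster containing the query
densities = cluster_density(tensor, candidate)
frequent_in = (densities > 0.5).sum()          # networks in which the cluster is dense
print("per-network densities:", np.round(densities, 2), "| dense in", frequent_in, "networks")
```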

Keywords: local tensor clustering, query gene, gene co-expression network, gene annotation

Procedia PDF Downloads 160
825 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. The paper discusses a comprehensive development framework, covering the SSD end to end from design to assembly, in-line inspection and in-line testing, that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability margin investigation with focus on assembly process attributes, process equipment control, in-process metrology, and a forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for the design validation process, the reliability prediction, specifically a solder joint simulation, is established. The SSDs are stratified into non-operating and operating tests with focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis. The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, it is subjected to the monitor phase, whereby Design for Assembly (DFA) rules are updated. At this stage, the design, process and equipment parameters are under control. Predictable product reliability early in product development enables on-time sample qualification delivery to the customer, optimizes product development validation and development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows focus on increasing the product margin, which increases customer confidence in product reliability.

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 168
824 Nitriding of Super-Ferritic Stainless Steel by Plasma Immersion Ion Implantation in Radio Frequency and Microwave Plasma System

Authors: H. Bhuyan, S. Mändl, M. Favre, M. Cisternas, A. Henriquez, E. Wyndham, M. Walczak, D. Manova

Abstract:

The 470 Li-24 Cr and 460Li-21 Cr are two alloys belonging to the next generation of super-ferritic, nickel-free stainless steel grades, containing titanium (Ti), niobium (Nb) and small percentages of carbon (C) and nitrogen (N). The addition of Ti and Nb improves, in general, the corrosion resistance, while the low interstitial content of C and N assures finer precipitates and greater ductility compared to conventional ferritic grades. These grades are considered an economic alternative to AISI 316L and 304 due to comparable or superior corrosion resistance. However, since 316L and 304 can be nitrided to improve mechanical surface properties such as hardness and wear resistance, it is hypothesized that the tribological properties of these super-ferritic stainless steel grades can also be improved by plasma nitriding. Thus, two sets of plasma immersion ion implantation experiments have been carried out, one with a high-pressure capacitively coupled radio frequency plasma at PUC Chile and the other using a low-pressure microwave plasma at IOM Leipzig, in order to explore further improvements in the mechanical properties of 470 Li-24 Cr and 460Li-21 Cr steel. Nitrided and unnitrided substrates have subsequently been investigated using different surface characterization techniques, including secondary ion mass spectrometry, scanning electron microscopy, energy dispersive X-ray analysis, Vickers hardness, wear resistance, as well as corrosion tests. In most of the characterizations, no major differences have been observed between nitrided 470 Li-24 Cr and 460Li-21 Cr. Due to the ion bombardment, an increase in the surface roughness is observed for higher treatment temperatures, independent of the steel type. The formation of chromium nitride compounds takes place only at treatment temperatures of around 400 °C-450 °C or above. However, corrosion properties deteriorate after treatment at higher temperatures. The physical characterization results show up to 25 at.% nitrogen over a diffusion zone of 4-6 µm, and a 4-5 times increase in hardness for the different experimental conditions. The samples implanted at temperatures higher than 400 °C presented a wear resistance around two orders of magnitude higher than the untreated substrates. The hardness is apparently affected by the different roughness of the samples and their different nitrogen profiles.

Keywords: ion implantation, plasma, RF and microwave plasma, stainless steel

Procedia PDF Downloads 461
823 The Comparison Study of Methanol and Water Extract of Chuanxiong Rhizoma: A Fingerprint Analysis

Authors: Li Chun Zhao, Zhi Chao Hu, Xi Qiang Liu, Man Lai Lee, Chak Shing Yeung, Man Fei Xu, Yuen Yee Kwan, Alan H. M. Ho, Nickie W. K. Chan, Bin Deng, Zhong Zhen Zhao, Min Xu

Abstract:

Background: Chuanxiong Rhizoma (Chuanxiong, CX) is one of the most frequently used herbs in Chinese medicine because of its wide therapeutic effects, such as vasorelaxation and anti-inflammation. Aim: The purposes of this study are (1) to perform non-targeted and targeted analyses of CX methanol extract and water extract, and to compare the present data with previously published LC-MS and GC-MS fingerprints; (2) to examine the difference between CX methanol extract and water extract, for preliminarily evaluating whether the current compound markers of methanol extract from crude CX materials could be suitable for quality control of CX water extract. Method: CX methanol extract was prepared according to the Hong Kong Chinese Materia Medica Standards. The CX water extract was prepared by boiling in pure water three times (one hour each). UHPLC-Q-TOF-MS/MS fingerprint analysis was performed on a C18 column (1.7 µm, 2.1 × 100 mm) with an Agilent 1290 Infinity system. Experimental data were analyzed with the Agilent MassHunter software. A database was established based on 13 published LC-MS and GC-MS CX fingerprint analyses. A total of 18 targeted compounds in the database were selected as markers to compare the present data with previous data; these markers were also used to compare the CX methanol extract and water extract. Result: (1) Non-targeted analysis indicated that 133 compounds were identified in the CX methanol extract, while 325 compounds were identified in the CX water extract, more than double the number in the methanol extract. (2) Targeted analysis further indicated that 9 of the 18 targeted compounds were identified in the CX methanol extract, while 12 of the 18 were identified in the CX water extract, showing a lower loss rate for the water extract compared with the methanol extract. (3) Comparing the CX methanol and water extracts, Senkyunolide A (+1578%), ferulic acid (+529%) and Senkyunolide H (+169%) were significantly higher in the water extract. (4) Other bioactive compounds, such as tetramethylpyrazine, were found only in the CX water extract. Conclusion: Many new compounds in both CX methanol and water extracts were found using UHPLC-Q-TOF-MS/MS analysis when compared with previously published reports. A new standard reference, including non-targeted compound profiling and targeted markers, intended especially for quality control of the CX water extract (herbal decoction), should be established in the future. (This project was supported by Hong Kong Baptist University (FRG2/14-15/109) and the Natural Science Foundation of Guangdong Province (2014A030313414).)

Keywords: Chuanxiong rhizoma, fingerprint analysis, targeted analysis, quality control

Procedia PDF Downloads 491
822 Biomechanical Analysis on Skin and Jejunum of Chemically Prepared Cat Cadavers Used in Surgery Training

Authors: Raphael C. Zero, Thiago A. S. S. Rocha, Marita V. Cardozo, Caio C. C. Santos, Alisson D. S. Fechis, Antonio C. Shimano, Fabrício S. Oliveira

Abstract:

Biomechanical analysis is an important factor in tissue studies. The objective of this study was to determine the feasibility of a new anatomical technique and to quantify the changes in skin and jejunum resistance of cat cadavers throughout the process. Eight adult cat cadavers were used. For every kilogram of body weight, 120 mL of fixative solution (95% 96 GL ethyl alcohol and 5% pure glycerin) was applied via the external common carotid artery. Next, the carcasses were placed in a container with 96 GL ethyl alcohol for 60 days. After fixing, all carcasses were preserved in a 30% sodium chloride solution for 60 days. Before fixation, control samples were collected from the fresh cadavers; after fixation, three skin and jejunum fragments from each cadaver were tested monthly for strength and displacement until complete rupture in a universal testing machine. All results were analyzed by F-test (P < 0.05). In the jejunum, the force required to rupture the fresh samples and the samples fixed in alcohol for 60 days was 31.27 ± 19.14 N and 29.25 ± 11.69 N, respectively. For the samples preserved in the sodium chloride solution for 30 and 60 days, the force was 26.17 ± 16.18 N and 30.57 ± 13.77 N, respectively. Regarding the displacement required to rupture the samples, the values for fresh specimens and those fixed in alcohol for 60 days were 2.79 ± 0.73 mm and 2.80 ± 1.13 mm, respectively. For the samples preserved for 30 and 60 days in sodium chloride solution, the displacement was 2.53 ± 1.03 mm and 2.83 ± 1.27 mm, respectively. There was no statistical difference between the samples (P = 0.68 with respect to strength, and P = 0.75 with respect to displacement). In the skin, the force needed to rupture the fresh samples and the samples fixed for 60 days in alcohol was 223.86 ± 131.5 N and 211.86 ± 137.53 N, respectively. For the samples preserved in sodium chloride solution for 30 and 60 days, the force was 227.73 ± 129.06 N and 224.78 ± 143.83 N, respectively. Regarding the displacement required to rupture the samples, the values for fresh specimens and those fixed in alcohol for 60 days were 3.67 ± 1.03 mm and 4.11 ± 0.87 mm, respectively. For the samples preserved for 30 and 60 days in sodium chloride solution, the displacement was 4.21 ± 0.93 mm and 3.93 ± 0.71 mm, respectively. There was no statistical difference between the samples (P = 0.65 with respect to strength, and P = 0.98 with respect to displacement). The resistance of the skin and intestines of the cat carcasses changed little when subjected to alcohol fixation and preservation in sodium chloride solution, each for 60 days, which is promising for use in surgery training. All experimental procedures were approved by the Municipal Legal Department (protocol 02.2014.000027-1). The project was funded by FAPESP (protocol 2015-08259-9).

Keywords: anatomy, conservation, fixation, small animal

Procedia PDF Downloads 284
821 Evaluating the Small-Strain Mechanical Properties of Cement-Treated Clayey Soils Based on the Confining Pressure

Authors: Muhammad Akmal Putera, Noriyuki Yasufuku, Adel Alowaisy, Ahmad Rifai

Abstract:

Indonesia's government has planned a project for a high-speed railway connecting the capital cities Jakarta and Surabaya, a distance of about 700 km. The planned alignment passes over a lowland soil region. This region comprises cohesive soil with high water content and a high compressibility index, which leads to settlement problems. Among the variety of railway track structures, the ballastless track has been adopted effectively to reduce settlement; it provides a lightweight structure and minimizes the workspace. However, deploying this thin-layer structure above the lowland area comes with several problems, such as a lack of bearing capacity and unfavorable deflection behavior during traffic loading. It is necessary to combine it with ground improvement to assure acceptable settlement behavior on the clayey soil. Considering the assurance of strength increment and the working period, methods such as cement-treated soil were adopted for the substructure of the railway track. In the field, mechanical properties are typically evaluated using the plate load test and the cone penetration test. However, observing the increment of mechanical properties involves uncertainty, especially when evaluating cement-treated soil in the substructure. The current quality control of cement-treated soils is established by laboratory tests. Moreover, small-strain measurements in the laboratory can yield more reliable results that are close to field measurements. The aims of this research are to show the intercorrelation of confining pressure with the initial Young's modulus (E₀), Poisson's ratio (ν₀) and shear modulus (G₀) within small strain ranges, and to investigate the discrepancies between those parameters. The experimental results confirmed the intercorrelation between cement content and confining pressure through a power function. In addition, higher cement ratios exhibited discrepancies, in contrast with low mixing ratios.
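
For reference, the three small-strain parameters named above are linked by isotropic elasticity, G₀ = E₀ / (2(1 + ν₀)), and the reported "intercorrelation with a power function" is typically expressed as G₀ growing with confining pressure as a power law. A minimal sketch with hypothetical coefficients (the fitted values are not given in the abstract):

```python
import numpy as np

def shear_modulus_from_E(E0, nu0):
    """Isotropic elasticity at small strains: G0 = E0 / (2 * (1 + nu0))."""
    return E0 / (2.0 * (1.0 + nu0))

def g0_power_law(p_conf_kpa, A_mpa=80.0, n=0.5, p_ref_kpa=100.0):
    """Hypothetical power-law fit G0 = A * (p'/p_ref)**n (A and n are not from the paper)."""
    return A_mpa * (p_conf_kpa / p_ref_kpa) ** n

for p in (50.0, 100.0, 200.0):                 # confining pressures, kPa
    G0 = g0_power_law(p)
    E0 = 2.0 * (1.0 + 0.2) * G0                # back out E0 assuming nu0 = 0.2
    print(f"p' = {p:5.0f} kPa  ->  G0 ≈ {G0:5.1f} MPa,  E0 ≈ {E0:5.1f} MPa")
```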

Keywords: amount of cement, elastic zone, high-speed railway, lightweight structure

Procedia PDF Downloads 135
820 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty

Abstract:

Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as the continental shelf slope, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important for submarine acoustics, underwater navigation, offshore structures, ocean mixing, and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013 and March-April 2014, representing the monsoon, post-monsoon and pre-monsoon seasons, respectively, during which high-temporal-resolution in-situ data sets are available. The model is initially validated through spectral estimates of density and the baroclinic velocities. From these estimates, it is inferred that internal tides associated with the semi-diurnal frequency are more dominant in both observations and model simulations for November-December and March-April. In August, however, the estimate is found to be maximum near the inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitudes are found to be well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes are sufficient to describe most of the variability of the semi-diurnal internal tides, as they represent 90-95% of the total variance for all seasons. The phase speed, group speed, and wavelength are found to be maximum in the post-monsoon season compared to the other two seasons. The model simulations suggest that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all months. The simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence the local mixing due to the internal tide is maximum at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kJ/m²) in the northern Bay of Bengal and minimum in August (14 kJ/m²). Detailed energy budget calculations are made for all seasons and the results are analysed.
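
For context, the energetics terms quoted above (available potential energy, dissipation, and the keyword "baroclinic energy flux") are usually computed from standard depth-integrated definitions such as the ones below; the abstract does not state the exact formulation the authors adopt, so this is an assumption on our part.

```latex
% Depth-integrated internal-tide energetics, standard definitions (our assumption,
% the abstract does not state the authors' exact formulation):
\mathrm{APE} \;=\; \int_{-H}^{0} \frac{g^{2}\,\langle \rho'^{2}\rangle}{2\,\rho_{0}\,N^{2}}\,dz,
\qquad
\mathbf{F} \;=\; \int_{-H}^{0} \langle \mathbf{u}'\,p'\rangle\,dz,
```

where ρ′, p′ and u′ are the perturbation (baroclinic) density, pressure and velocity, N is the buoyancy frequency, and ⟨·⟩ denotes an average over a tidal cycle; the divergence of F, together with the barotropic-to-baroclinic conversion and the dissipation, closes the energy budget mentioned at the end of the abstract.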

Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal

Procedia PDF Downloads 165
819 Analysis of Reflection Coefficients of Reflected and Transmitted Waves at the Interface Between Viscous Fluid and Hygro-Thermo-Orthotropic Medium

Authors: Anand Kumar Yadav

Abstract:

Purpose – The purpose of this paper is to investigate the variation of the amplitude ratios of the various transmitted and reflected waves. Design/methodology/approach – The reflection and transmission of plane waves at the interface between an orthotropic hygro-thermo-elastic half-space (OHTHS) and a viscous-fluid half-space (VFHS) were investigated in this study with reference to coupled hygro-thermo-elasticity. Findings – The interface, at y = 0, is struck by principal (P) plane waves travelling through the VFHS. As a result, two waves are reflected in the VFHS and four waves are transmitted in the OHTHS, namely the longitudinal displacement P-wave, the thermal diffusion (TD) wave, the moisture diffusion (mD) wave, and the shear vertical (SV) wave. Expressions for the reflection and transmission coefficients are developed for the incidence of a hygrothermal plane wave. These ratios are displayed graphically and are observed under the influence of coupled hygro-thermo-elasticity. Research limitations/implications – According to the existing literature, there is not much study on the model under consideration, which combines OHTHS and VFHS with coupled hygro-thermo-elasticity. Practical implications – The current model can be applied in many different areas, such as soil dynamics, nuclear reactors, high-energy particle accelerators, earthquake engineering, and other areas where coupled hygro-thermo-elasticity is important. In a range of technical and geophysical settings, wave propagation in a viscous fluid-thermoelastic medium with various characteristics, such as initial stress, magnetic field, porosity, temperature, etc., gives essential information regarding the presence of new and modified waves. This model may prove useful to experimental seismologists refining earthquake estimates, to new material designers, and to researchers. Social implications – Researchers may use coupled hygro-thermo-elasticity to categorize materials, where the parameter is a new indicator of a material's ability to conduct heat in interaction with diverse materials. Originality/value – The submitted text is the sole creation of the team of authors, and all authors contributed equally to its creation.

Keywords: hygro-thermo-elasticity, viscous fluid, reflection coefficient, transmission coefficient, moisture concentration

Procedia PDF Downloads 64
818 High Throughput Virtual Screening against ns3 Helicase of Japanese Encephalitis Virus (JEV)

Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri

Abstract:

Japanese Encephalitis is a major infectious disease, with nearly half the world's population living in areas where it is prevalent. Currently, its management involves only supportive care and symptom management, with prevention through vaccination. Due to the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority. For these reasons, simulation studies of drug targets against JEV are important. Towards this purpose, docking experiments with kinase inhibitors were performed against the chosen target, the NS3 helicase, as it is a nucleoside-binding protein. Previous efforts in computational drug design against JEV revealed some lead molecules by virtual screening using public domain software. To be more specific and accurate in finding leads, in this study the proprietary software Schrödinger GLIDE has been used. The druggability of the pockets in the NS3 helicase crystal structure was first calculated with SiteMap. The sites were then screened according to their compatibility with ATP, and the site most compatible with ATP was selected as the target. Virtual screening was performed by acquiring ligands from the databases KinaseSARfari, Kinase Knowledgebase and a published inhibitor set, using GLIDE. The 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and similar human proteins were docked with all the best-scoring ligands; the ligands that scored low against the human proteins were chosen for further studies, while the high-scoring ones were screened out. Seventy-three ligands were listed as the best-scoring ones after performing HTVS. Protein structure alignment of NS3 revealed 3 human proteins with RMSD values lower than 2 Å. Docking results with these three proteins revealed the inhibitors that could interfere with and inhibit human proteins; those inhibitors were screened out. Among the remaining ones, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues that are essential for ligand binding within the active site. Interaction analysis will help to find a strongly interacting scaffold among the hits. This experiment yielded 21 hits with the best docking scores, which could be investigated further for their drug-like properties. Aside from providing suitable leads, specific NS3 helicase-inhibitor interactions were identified. Selection of target modification strategies complementing the docking methodology, which can result in choosing better lead compounds, is in progress. Those enhanced leads can lead to better in vitro testing.
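
The screening logic described above (keep ligands that score well against the NS3 helicase but poorly against the structurally similar human proteins, then apply a score threshold) can be summarized in a few lines. The ligand names, scores and cut-offs below are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of the selectivity filter: keep ligands that dock well to JEV NS3
# helicase but poorly to similar human proteins, then apply a score threshold.
# All scores and the cut-offs are hypothetical, not values from the study.
ligands = {
    "lig_A": {"ns3": -9.2, "human_max": -4.1},
    "lig_B": {"ns3": -8.7, "human_max": -8.9},   # also binds a human homolog -> reject
    "lig_C": {"ns3": -6.1, "human_max": -3.0},   # weak on NS3 -> reject
}

SCORE_CUTOFF = -7.0           # keep NS3 scores at least this good (more negative = better)
SELECTIVITY_MARGIN = 2.0      # NS3 score must beat the best human score by this much

hits = [
    name for name, s in ligands.items()
    if s["ns3"] <= SCORE_CUTOFF and s["ns3"] <= s["human_max"] - SELECTIVITY_MARGIN
]
print("selective hits:", hits)   # -> ['lig_A']
```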

Keywords: antivirals, docking, glide, high-throughput virtual screening, Japanese encephalitis, ns3 helicase

Procedia PDF Downloads 228
817 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process is tedious and time-consuming. The perpetual variation of internal forces within the deck during the construction stages adds complexity, exacerbated further by other load cases such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution for modeling bridge construction stages effectively. This paper presents a novel Finite Element (FE) model focused on the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all launching stages, which conventional models fail to achieve due to their underlying assumptions. Leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. To address this oversight, a new simplified model, termed the "semi-infinite beam" model, is developed. Utilizing this model together with a simple optimization approach, optimal values for the launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical use. In conclusion, the proposed Finite Element model addresses the limitations of previous approaches, emphasizes the importance of the initial construction stages through the "semi-infinite beam" model, and, together with the optimization approach, yields optimal launching nose specifications, contributing to the optimization of incrementally launched bridges for both the construction industry and bridge designers.

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 90
816 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what is known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands by words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around us in six days, just like how we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD's throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count and gave everything its function, attributes, class, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because any input, whether physical, spiritual or by thought, given by any of HIS creatures already has its answer programmed: any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator. If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you would require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, the ability to perform one quadrillion (10¹⁵) floating-point operations per second, a number a human cannot even fathom. To put it more in perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us, to keep up with what the computer is doing and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie that is being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle, to every thought, to every action.
This brings the idea of how scary the Day of Judgment will be when one might realize that it’s going to be a fully immersive video when we would be getting and reading our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 102
815 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations, but it is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at the building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electricity), indoor environment, inhabitants' comfort, occupancy, occupant behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (the Pleiades software), where the building features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the building features that most drive the energy behavior for each end-use. These features are then compared with the collected post-occupancy data. Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap for an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and on the performance gap in this context, such as temperature setpoints, indoor occupancy and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network, for its replicability on a larger scale and for different use cases.
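
For context, calibration quality of building energy models is commonly judged with the ASHRAE Guideline 14 criteria, NMBE and CV(RMSE); the abstract does not state which criteria the authors apply, so the sketch below, with made-up monthly data, is an assumption about the evaluation step rather than the study's procedure.

```python
import numpy as np

def nmbe(measured, simulated):
    """Normalized Mean Bias Error, % (ASHRAE Guideline 14 convention, n-1 in the denominator)."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * (m - s).sum() / ((len(m) - 1) * m.mean())

def cv_rmse(measured, simulated):
    """Coefficient of Variation of the RMSE, %."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = np.sqrt(((m - s) ** 2).sum() / (len(m) - 1))
    return 100.0 * rmse / m.mean()

# Hypothetical monthly heating energy use, kWh (12 values each)
measured  = [820, 760, 640, 420, 230, 120, 90, 100, 210, 430, 650, 790]
simulated = [780, 795, 600, 450, 260, 130, 85, 110, 190, 400, 700, 760]

print(f"NMBE     = {nmbe(measured, simulated):+.1f} %   (Guideline 14: |NMBE| <= 5 % for monthly data)")
print(f"CV(RMSE) = {cv_rmse(measured, simulated):.1f} %   (Guideline 14: <= 15 % for monthly data)")
```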

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 156
814 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs of the quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed, evaluating the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time allows the decision problem to be transformed into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate on each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that this path visits and adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover Algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
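The decision-to-optimization loop described above can be sketched classically; the following NumPy statevector toy (not Q#, and not the authors' circuit) marks tours whose cost falls below the current threshold, applies Grover diffusion, and lowers the threshold after each improving measurement. The tour costs and the retry limit are arbitrary assumptions.

```python
# Classical statevector sketch of the threshold-lowering Grover idea:
# the oracle phase-flips basis states whose tour cost is below the current
# threshold, diffusion amplifies them, and the threshold is lowered with each
# measured improvement (a Durr-Hoyer-style minimum search on toy data).
import numpy as np

rng = np.random.default_rng(0)
costs = np.array([7, 3, 9, 5, 8, 2, 6, 4])   # hypothetical costs of 8 encoded tours
n = len(costs)

def grover_search(threshold, iterations):
    state = np.full(n, 1 / np.sqrt(n))        # Hadamard layer: uniform superposition
    marked = costs < threshold
    for _ in range(iterations):
        state[marked] *= -1                    # oracle: phase flip on "good" tours
        state = 2 * state.mean() - state       # diffusion (inversion about the mean)
    probs = state ** 2
    return rng.choice(n, p=probs / probs.sum())  # simulated measurement

threshold = costs.max() + 1
failures = 0
while failures < 3:                            # retry a few failed samples, then stop
    m = max(int(np.sum(costs < threshold)), 1)
    iterations = max(int(round((np.pi / 4) * np.sqrt(n / m))), 1)
    sample = grover_search(threshold, iterations)
    if costs[sample] < threshold:
        threshold = costs[sample]              # keep the lower cost and repeat
        failures = 0
    else:
        failures += 1
print("minimum tour cost found:", threshold)
```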

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 187
813 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of brine pockets to reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat also plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer and partial solidification and melting together. Properties of brine-spongy ice are obtained using properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to reach a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice: one side is considered as a thermally isolated surface, and the other side is assumed to be suddenly affected by a constant temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen properly to maintain the most stable and fast solution. Variation of temperature, volume fraction of brine and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation. This rate decreases as time passes. Brine pockets are smaller at portions closer to the colder side than at the warmer side. At the start of the solution, the numerical scheme tends to exhibit instabilities. This is because of the sharp variation of temperature at the start of the process. Changing the intervals improves the unstable situation. The analytical model using a numerical scheme is capable of predicting the thermal behavior of brine-spongy ice. This model and numerical solutions are important for modeling the process of freezing of salt water and ice accretion on cold structures.
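A simplified Method of Lines sketch of the boundary-value setup described above (one insulated face, one face suddenly held at a constant temperature) is given below; it uses constant assumed properties and deliberately omits the latent-heat and brine-salinity coupling that the full model includes.

```python
# Simplified Method of Lines sketch for 1-D transient conduction in a slab:
# one face insulated, the other suddenly held at a constant temperature.
# Constant properties; the latent-heat / brine-salinity coupling described in
# the abstract is intentionally omitted. All values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

L = 0.05            # slab thickness, m
N = 51              # spatial nodes
dx = L / (N - 1)
alpha = 1.1e-6      # assumed thermal diffusivity, m^2/s
T_init = -5.0       # initial temperature, deg C
T_cold = -20.0      # suddenly imposed boundary temperature, deg C

def rhs(t, T):
    dTdt = np.zeros_like(T)
    dTdt[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    dTdt[0] = alpha * 2 * (T[1] - T[0]) / dx**2   # insulated face (zero flux)
    dTdt[-1] = 0.0                                # fixed-temperature face
    return dTdt

T0 = np.full(N, T_init)
T0[-1] = T_cold
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF", t_eval=[0, 600, 1800, 3600])
for tk, Tk in zip(sol.t, sol.y.T):
    print(f"t = {tk:5.0f} s, insulated-face temperature = {Tk[0]:6.2f} C")
```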

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 213
812 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements

Authors: Henok Hailemariam, Frank Wuttke

Abstract:

Collapsible soils are weak soils that appear to be stable in their natural state, normally dry condition, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often yield disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large scale sub-surface soil exploration purposes, the spatial sub-surface soil dielectric data over wide areas and high depths of weak (collapsible) soil deposits can be obtained using non-destructive high frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small or large scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions and a good match between the model prediction and experimental results is obtained.
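As an illustration of the chained approach described above, the sketch below converts a measured apparent permittivity to volumetric water content with Topp's empirical relation (used here only as a stand-in for the advanced mixing equation) and feeds it to a purely hypothetical stress-dependent relative-subsidence relation; all coefficients are assumptions.

```python
# Illustrative chain only: permittivity -> water content -> relative subsidence.
# Topp et al. (1980) is a well-known empirical relation for many mineral soils,
# used here in place of the authors' mixing equation; the subsidence relation
# and its coefficients are entirely hypothetical.
import math

def water_content_from_permittivity(eps_r):
    # Topp polynomial: volumetric water content from apparent permittivity
    return -5.3e-2 + 2.92e-2 * eps_r - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3

def relative_subsidence(theta, stress_kpa, a=0.8, b=0.004):
    # Hypothetical placeholder: subsidence grows with wetting and applied stress
    return a * theta * (1.0 - math.exp(-b * stress_kpa))

eps_measured = 12.0   # e.g. from GPR or an open-ended coaxial probe
theta = water_content_from_permittivity(eps_measured)
print(f"estimated water content: {theta:.3f} m3/m3")
print(f"predicted relative subsidence at 200 kPa: {relative_subsidence(theta, 200.0):.4f}")
```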

Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence

Procedia PDF Downloads 358
811 Rhizoremediation of Contaminated Soils in Sub-Saharan Africa: Experimental Insights of Microbe Growth and Effects of Paspalum Spp. for Degrading Hydrocarbons in Soils

Authors: David Adade-Boateng, Benard Fei Baffoe, Colin A. Booth, Michael A. Fullen

Abstract:

Remediation of diesel fuel, oil and grease in contaminated soils obtained from a mine site in Ghana is explored using rhizoremediation technology with different levels of nutrient amendments (i.e. N (nitrogen) in compost (0.2, 0.5 and 0.8%), urea (0.2, 0.5 and 0.8%) and topsoil (0.2, 0.5 and 0.8%)) for a native species. A Ghanaian native grass species, Paspalum spp. from the Poaceae family, widespread across Sub-Saharan Africa, was selected following the development of essential and desirable growth criteria. Vegetative parts of the species were subjected to ten treatments in a Randomized Complete Block Design (RCBD) in three replicates. The plant-associated microbial community was examined in Paspalum spp. An assessment of the influence of Paspalum spp. on the abundance and activity of micro-organisms in the rhizosphere revealed a build-up of microbial communities over a three-month period. This was assessed using the MPN method, which showed that rhizospheric samples from the treatments were significantly different (P < 0.05). Multiple comparisons showed how microbial populations built up in the rhizosphere for the different treatments. Treatments G (0.2% compost), H (0.5% compost) and I (0.8% compost) performed significantly better than the other treatments, while treatments D (0.2% topsoil) and F (0.8% topsoil) showed no significant effect. Furthermore, treatments A (0.2% urea), B (0.5% urea), C (0.8% urea) and E (0.5% topsoil) performed similarly. Residual diesel and oil concentrations (as total petroleum hydrocarbons, TPH, and oil and grease) were measured using infra-red spectroscopy and gravimetric methods, respectively. The presence of a single species successfully enhanced the removal of hydrocarbons from soil. Paspalum spp. subjected to compost levels (0.5% and 0.8%) and topsoil levels (0.5% and 0.8%) showed significantly lower residual hydrocarbon concentrations compared to those treated with urea. A strong relationship (P < 0.001) between the abundance of hydrocarbon-degrading micro-organisms in the rhizosphere and hydrocarbon biodegradation was demonstrated for rhizospheric samples under treatments G (0.2% compost), H (0.5% compost) and I (0.8% compost). Amendment at the 0.8% compost (N) level can improve the application effectiveness. These findings have wide-reaching implications for the environmental management of soils contaminated by hydrocarbons in Sub-Saharan Africa. However, it is necessary to further investigate the in situ rhizoremediation potential of Paspalum spp. at the field scale.

Keywords: rhizoremediation, microbial population, rhizospheric sample, treatments

Procedia PDF Downloads 321
810 Reduction of Nitrogen Monoxide with Carbon Monoxide from Gas Streams by 10% wt. Cu-Ce-Fe-Co/Activated Carbon

Authors: K. L. Pan, M. B. Chang

Abstract:

Nitrogen oxides (NOₓ) are regarded as among the most important air pollutants. They not only cause adverse environmental effects but also harm the human lungs and respiratory system. As a post-combustion treatment, selective catalytic reduction (SCR) possesses the highest NO removal efficiency (≥ 85%) and is considered the most effective technique for removing NO from gas streams. However, injection of a reducing agent such as NH₃ is required, which is costly and may cause secondary pollution. Reduction of NO with carbon monoxide (CO) as the reducing agent has been previously investigated. In this process, the key step involves NO adsorption and dissociation. Also, the high performance mainly relies on the amount of oxygen vacancies on the catalyst surface and the redox ability of the catalyst, because oxygen vacancies can activate the N-O bond to promote its dissociation. Additionally, good redox ability can promote the adsorption of NO and the oxidation of CO. Typically, noble metals such as iridium (Ir), platinum (Pt), and palladium (Pd) are used as catalysts for the reduction of NO with CO; however, high cost has limited their applications. Recently, transition metal oxides have been investigated for the reduction of NO with CO; in particular, CuₓOy, CoₓOy, Fe₂O₃, and MnOₓ are considered effective catalysts. However, deactivation is inevitable when oxygen (O₂) exists in the gas streams because the active sites (oxygen vacancies) of the catalyst are occupied by O₂. In this study, Cu-Ce-Fe-Co is prepared and supported on activated carbon by an impregnation method to form a 10% wt. Cu-Ce-Fe-Co/activated carbon catalyst. Generally, the addition of activated carbon to the catalyst can bring several advantages: (1) NO can be effectively adsorbed by the interaction between the catalyst and activated carbon, resulting in the improvement of NO removal, (2) direct NO decomposition may be achieved over carbon associated with the catalyst, and (3) the reduction of NO could be enhanced by a reducing agent over a carbon-supported catalyst. Therefore, 10% wt. Cu-Ce-Fe-Co/activated carbon may have better performance for the reduction of NO with CO. Experimental results indicate that the NO conversion achieved with 10% wt. Cu-Ce-Fe-Co/activated carbon reaches 83% at 150°C with 300 ppm NO and 10,000 ppm CO. As the temperature is further increased to 200°C, 100% NO conversion can be achieved, implying that the prepared 10% wt. Cu-Ce-Fe-Co/activated carbon has good activity for the reduction of NO with CO. In order to investigate the effect of O₂ on the reduction of NO with CO, 1-5% O₂ is introduced into the system. The results indicate that NO conversions are still maintained at ≥ 90% under 1-5% O₂ conditions at 200°C. It is worth noting that the adverse effect of O₂ on the reduction of NO with CO can be significantly mitigated when carbon is used as the support. It is inferred that the carbon support can react with O₂ to produce CO₂ when O₂ exists in the gas streams. Overall, 10% wt. Cu-Ce-Fe-Co/activated carbon demonstrates good potential for the reduction of NO with CO, and possible mechanisms will be elucidated in this paper.
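The conversion figures quoted above follow from the usual definition of NO conversion as the fractional drop between inlet and outlet concentration; the outlet values in this small check are illustrative back-calculations, not reported measurements.

```python
# Simple check of the quoted conversion figures: NO conversion is the
# fractional drop between inlet and outlet concentration. Outlet values here
# are illustrative assumptions consistent with the reported conversions.
def no_conversion(no_in_ppm, no_out_ppm):
    return (no_in_ppm - no_out_ppm) / no_in_ppm * 100.0

print(no_conversion(300, 51))   # ~83 % conversion at 150 C (outlet assumed ~51 ppm)
print(no_conversion(300, 0))    # 100 % conversion at 200 C
```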

Keywords: nitrogen oxides (NOₓ), carbon monoxide (CO), reduction of NO with CO, carbon material, catalysis

Procedia PDF Downloads 254
809 Qualitative Modeling of Transforming Growth Factor Beta-Associated Biological Regulatory Network: Insight into Renal Fibrosis

Authors: Ayesha Waqar Khan, Mariam Altaf, Jamil Ahmad, Shaheen Shahzad

Abstract:

Kidney fibrosis is an anticipated outcome of possibly all types of progressive chronic kidney disease (CKD). The epithelial-mesenchymal transition (EMT) signaling pathway is responsible for the production of matrix-producing fibroblasts and myofibroblasts in the diseased kidney. In this study, a discrete model of TGF-beta (transforming growth factor) and CTGF (connective tissue growth factor) was constructed using the René Thomas formalism to investigate renal fibrosis turnover. The kinetic logic proposed by René Thomas is a renowned approach for the modeling of Biological Regulatory Networks (BRNs). This modeling approach uses a set of constraints that represent the dynamics of the BRN, thus analyzing the pathway and predicting critical trajectories that lead to a normal or diseased state. The molecular connection between TGF-beta, Smad 2/3 (transcription factor) phosphorylation and CTGF is modeled using GenoTech. The order of the variables in the BRN is CTGF, TGF-B, and SMAD3, respectively. The predicted cycle depicts activation of TGF-B (TGF-β) via cleavage of its own pro-domain (0,1,0) and presentation to the TGFR-II receptor, phosphorylating SMAD3 (Smad2/3) in the state (0,1,1). Later, TGF-B is turned off (0,0,1), thereby activating SMAD3, which further stimulates the expression of CTGF in the state (1,0,1) and itself turns off in (1,0,0). Elevated CTGF expression reactivates TGF-B (1,1,0) and the cycle continues. The predicted model has generated one cycle and two steady states. Cyclic behavior in this study represents the diseased state, in which all three proteins contribute to renal fibrosis. The proposed model is in accordance with the experimental findings of the existing diseased state. An extended cycle results in enhanced CTGF expression through Smad2/3 and Smad4 translocation into the nucleus. The results suggest that the system converges towards organ fibrogenesis if CTGF remains constitutively active along with Smad2/3 and Smad4, which play an important role in kidney fibrosis. Therefore, modeling regulatory pathways of kidney fibrosis will support the development of therapeutic tools and useful real-world applications such as predictive and preventive medicine.
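The reported qualitative cycle over the state vector (CTGF, TGF-B, SMAD3) can be walked through directly; the sketch below simply encodes the six states listed in the abstract as a transition table rather than deriving them from the Thomas logical parameters.

```python
# Sketch of the reported qualitative cycle over (CTGF, TGF-B, SMAD3), taken
# directly from the abstract; the transition table encodes that cycle rather
# than deriving it from the René Thomas formalism.
cycle = [
    (0, 1, 0),  # TGF-B activated via cleavage of its pro-domain
    (0, 1, 1),  # SMAD3 phosphorylated via the TGFR-II receptor
    (0, 0, 1),  # TGF-B switched off, SMAD3 still active
    (1, 0, 1),  # SMAD3 drives CTGF expression
    (1, 0, 0),  # SMAD3 switched off, CTGF high
    (1, 1, 0),  # elevated CTGF reactivates TGF-B
]
successor = {s: cycle[(i + 1) % len(cycle)] for i, s in enumerate(cycle)}

state = (0, 1, 0)
for _ in range(7):                      # walk once around the fibrotic cycle
    print("CTGF=%d TGF-B=%d SMAD3=%d" % state)
    state = successor[state]
```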

Keywords: CTGF, renal fibrosis signaling pathway, system biology, qualitative modeling

Procedia PDF Downloads 176
808 Bioreactor for Cell-Based Impedance Measuring with Diamond Coated Gold Interdigitated Electrodes

Authors: Roman Matejka, Vaclav Prochazka, Tibor Izak, Jana Stepanovska, Martina Travnickova, Alexander Kromka

Abstract:

Cell-based impedance spectroscopy is a suitable method for electrical monitoring of cell activity, especially on substrates that cannot be easily inspected by optical microscopy (without fluorescent markers), such as decellularized tissues, nano-fibrous scaffolds, etc. A special sensor for this measurement was developed. This sensor consists of a Corning glass substrate with gold interdigitated electrodes covered with a diamond layer. This diamond layer provides a biocompatible, non-conductive surface for cells. Also, a special PPFC flow cultivation chamber was developed. This chamber is able to fix the sensor in place. Spring contacts connect the sensor pads with the external measuring device. The construction allows real-time live-cell imaging. Combined with a perfusion system, it allows medium circulation and shear stress stimulation. The experimental evaluation consisted of several setups, including a bare sensor without any coating as well as collagen and fibrin coatings. Adipose-derived stem cells (ASCs) and human umbilical vein endothelial cells (HUVECs) were seeded onto the sensor in the cultivation chamber. Then the chamber was installed into the microscope system for live-cell imaging. The impedance measurement was performed with a vector impedance analyzer. The measured range was from 10 Hz to 40 kHz. These impedance measurements were correlated with live-cell microscopic imaging and immunofluorescent staining. Data analysis of the measured signals showed responses to cell adhesion on the substrates, cell proliferation, and also changes after shear stress stimulation, which are important parameters during cultivation. Further experiments plan to use decellularized tissue as a scaffold fixed on the sensor. This kind of impedance sensor can provide feedback about cell culture conditions on opaque surfaces and scaffolds that can be used in tissue engineering for the development of artificial prostheses. This work was supported by the Ministry of Health, grants No. 15-29153A and 15-33018A.
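A hedged post-processing sketch of the kind of analysis described above: given complex impedance spectra over the reported 10 Hz to 40 kHz range, magnitude, phase, and a simple normalized cell index against a cell-free baseline are computed; the numeric spectra are synthetic placeholders, not measured data.

```python
# Post-processing sketch: magnitude, phase and a normalized "cell index"
# from complex impedance spectra over 10 Hz - 40 kHz. The spectra below are
# synthetic placeholders, not data from the described sensor.
import numpy as np

freqs = np.logspace(1, np.log10(4e4), 6)                 # 10 Hz .. 40 kHz
z_cell_free = np.array([9.0, 7.5, 6.0, 5.2, 4.8, 4.6]) * 1e3 \
              - 1j * np.array([4.0, 2.5, 1.5, 0.9, 0.5, 0.3]) * 1e3
z_with_cells = z_cell_free * np.array([1.6, 1.5, 1.4, 1.3, 1.15, 1.05])

magnitude = np.abs(z_with_cells)
phase_deg = np.degrees(np.angle(z_with_cells))
# cell index: largest relative increase of |Z| caused by the cell layer
cell_index = np.max((np.abs(z_with_cells) - np.abs(z_cell_free)) / np.abs(z_cell_free))

for f, m, p in zip(freqs, magnitude, phase_deg):
    print(f"{f:8.1f} Hz  |Z| = {m:8.1f} ohm  phase = {p:6.1f} deg")
print(f"normalized cell index: {cell_index:.2f}")
```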

Keywords: bio-impedance measuring, bioreactor, cell cultivation, diamond layer, gold interdigitated electrodes, tissue engineering

Procedia PDF Downloads 300
807 Synthesis of Iron Oxide Nanoparticles Using Different Stabilizers and Study of Their Size and Properties

Authors: Mohammad Hassan Ramezan zadeh, Majid Seifi, Hoda Hekmat ara

Abstract:

Magnetic nanoparticles were synthesized from ferric chloride using a co-precipitation technique. For optimal results, ferric chloride at room temperature was added to different surfactants at different metal ion/surfactant ratios. The samples were characterized using transmission electron microscopy, X-ray diffraction and Fourier transform infrared spectroscopy to show the presence of nanoparticles, their structure and morphology. Magnetic measurements were also carried out on the samples using a vibrating sample magnetometer. To show the effect of the surfactant on the size distribution and crystalline structure of the produced nanoparticles, surfactants with various charges, such as cationic cetyl trimethyl ammonium bromide (CTAB), anionic sodium dodecyl sulphate (SDS) and nonionic Triton X-100, were employed. By changing the surfactant and the metal ion/surfactant ratio, the size and crystalline structure of these nanoparticles were controlled. We also show that using the anionic stabilizer leads to the smallest size, the narrowest size distribution and the most crystalline (polycrystalline) structure. In developing our production technique, many parameters were varied. Efforts at reproducing good yields indicated which of the experimental parameters were the most critical and how carefully they had to be controlled. The conditions reported here were the best that we encountered, but the range of possible parameter choices is so large that these probably only represent a local optimum. The samples for our chemical process were prepared by adding 0.675 g of ferric chloride (FeCl₃·6H₂O) to three different surfactants in aqueous solution. The solution was sonicated for about 30 min until a transparent solution was achieved. Then 0.5 g of sodium hydroxide (NaOH) as a reducing agent was added to the reaction drop by drop, which resulted in the precipitation of reddish-brown Fe₂O₃ nanoparticles. After washing with ethanol, the obtained powder was calcined at 600 °C for 2 h. Here, sample 1 contained CTAB as a surfactant with a metal ion/surfactant ratio of 1/2, sample 2 CTAB at 1/1, sample 3 SDS at 1/2, sample 4 SDS at 1/1, sample 5 Triton X-100 at 1/2 and sample 6 Triton X-100 at 1/1.

Keywords: iron oxide nanoparticles, stabilizer, co-precipitation, surfactant

Procedia PDF Downloads 246
806 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data is more prevalent in nature than the normal one. Examples can be quoted from, but not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study a skew t distribution that can be used to model a data that exhibit inherent non-normal behavior is considered. This distribution has tails fatter than a normal distribution and it also exhibits skewness. Although maximum likelihood estimates can be obtained by solving iteratively the likelihood equations that are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact the modified maximum likelihood estimates are equivalent to maximum likelihood estimates, asymptotically. Even in small samples the modified maximum likelihood estimates are found to be approximately the same as maximum likelihood estimates that are obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates that are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical researches have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random error having non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and are explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
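To illustrate the robustness argument (though not the authors' closed-form modified maximum likelihood estimator), the sketch below simulates a regression with skewed, fat-tailed errors and compares ordinary least squares against a Student-t likelihood fit; the data-generating values are arbitrary assumptions.

```python
# Illustrative simulation, not the authors' MML estimator: with skewed,
# fat-tailed regression errors, a Student-t likelihood fit is compared
# against ordinary least squares. All numeric values are assumptions.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 10, n)
errors = rng.standard_t(3, size=n)                      # fat-tailed errors
errors = np.where(errors > 0, 1.8 * errors, errors)     # crude asymmetry (skew)
y = 2.0 + 0.5 * x + errors

# Ordinary least squares
X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Student-t likelihood fit (robust to heavy tails)
def neg_loglik(params):
    b0, b1, log_scale = params
    return -np.sum(stats.t.logpdf(y - b0 - b1 * x, df=3, scale=np.exp(log_scale)))

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print("OLS estimates:    ", np.round(beta_ols, 3))
print("t-likelihood fit: ", np.round(res.x[:2], 3))
```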

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 394
805 Cationic Solid Lipid Nanoparticles Conjugated with Anti-Melanotransferrin and Apolipoprotein E for Delivering Doxorubicin to U87MG Cells

Authors: Yung-Chih Kuo, Yung-I Lou

Abstract:

Cationic solid lipid nanoparticles (CSLNs) with anti-melanotransferrin (AMT) and apolipoprotein E (ApoE) were used to carry antimitotic doxorubicin (Dox) across the blood–brain barrier (BBB) for glioblastoma multiforme (GBM) treatment. Dox-loaded CSLNs were prepared in microemulsion, grafted covalently with AMT and ApoE, and applied to human brain microvascular endothelial cells (HBMECs), human astrocytes, and U87MG cells. Experimental results revealed that an increase in the weight percentage of stearyl amine (SA) from 0% to 20% increased the size of AMT-ApoE-Dox-CSLNs. In addition, an increase in the stirring rate from 150 rpm to 450 rpm decreased the size of AMT-ApoE-Dox-CSLNs. An increase in the weight percentage of SA from 0% to 20% enhanced the zeta potential of AMT-ApoE-Dox-CSLNs. Moreover, an increase in the stirring rate from 150 rpm to 450 rpm reduced the zeta potential of AMT-ApoE-Dox-CSLNs. AMT-ApoE-Dox-CSLNs exhibited a spheroid-like geometry, a minor irregular boundary deviating from a spheroid, and a somewhat distorted surface with a few zigzags and sharp angles. The encapsulation efficiency of Dox in CSLNs decreased with an increasing weight percentage of Dox, and the order in the encapsulation efficiency of Dox was 10% SA > 20% SA > 0% SA. However, the reverse order was true for the release rate of Dox, suggesting that AMT-ApoE-Dox-CSLNs containing 10% SA had better sustained-release characteristics. An increase in the concentration of AMT from 2.5 to 7.5 μg/mL slightly decreased the grafting efficiency of AMT, and an increase in that from 7.5 to 10 μg/mL significantly decreased the grafting efficiency. Furthermore, an increase in the concentration of ApoE from 2.5 to 5 μg/mL slightly reduced the grafting efficiency of ApoE, and an increase in that from 5 to 10 μg/mL significantly reduced the grafting efficiency. Also, an incorporation of 10 μg/mL of ApoE in AMT-ApoE-Dox-CSLNs could slightly reduce the transendothelial electrical resistance (TEER) and increase the permeability of propidium iodide (PI). AMT-ApoE-Dox-CSLNs at 10 μg/mL of AMT and 5-10 μg/mL of ApoE could significantly enhance the permeability of Dox across the BBB. AMT-ApoE-Dox-CSLNs did not induce serious cytotoxicity to HBMECs. The viability of HBMECs was in the following order: AMT-ApoE-Dox-CSLNs = AMT-Dox-CSLNs = Dox-CSLNs > Dox. The order in the efficacy of inhibiting U87MG cells was AMT-ApoE-Dox-CSLNs > AMT-Dox-CSLNs > Dox-CSLNs > Dox. A surface modification with AMT and ApoE could promote the delivery of AMT-ApoE-Dox-CSLNs across the BBB via melanotransferrin and low-density lipoprotein receptors. Thus, AMT-ApoE-Dox-CSLNs have appropriate physicochemical properties and can be a potential colloidal delivery system for brain tumor chemotherapy.
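The encapsulation and grafting efficiencies discussed above are simple fractions of the added drug or ligand that ends up associated with the particles; the worked example below uses hypothetical masses and concentrations only.

```python
# Simple worked example of the efficiency measures discussed above:
# encapsulation efficiency and grafting efficiency as the fraction of added
# drug or ligand that ends up particle-associated. All values are hypothetical.
def efficiency_percent(amount_added, amount_free):
    return (amount_added - amount_free) / amount_added * 100.0

dox_added_mg, dox_free_mg = 5.0, 1.2          # drug in formulation vs. supernatant
amt_added_ug_ml, amt_free_ug_ml = 7.5, 2.1    # antibody offered vs. unbound

print(f"Dox encapsulation efficiency: {efficiency_percent(dox_added_mg, dox_free_mg):.1f} %")
print(f"AMT grafting efficiency:      {efficiency_percent(amt_added_ug_ml, amt_free_ug_ml):.1f} %")
```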

Keywords: anti-melanotransferrin, apolipoprotein E, cationic catanionic solid lipid nanoparticle, doxorubicin, U87MG cells

Procedia PDF Downloads 278