Search results for: particle filtering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2002

412 Challenges in the Characterization of Black Mass in the Recovery of Graphite from Spent Lithium Ion Batteries

Authors: Anna Vanderbruggen, Kai Bachmann, Martin Rudolph, Rodrigo Serna

Abstract:

Recycling of lithium-ion batteries has attracted a lot of attention in recent years and focuses primarily on valuable metals such as cobalt, nickel, and lithium. Despite the growth in graphite consumption and the fact that it is classified as a critical raw material in the European Union, USA, and Australia, there is little work focusing on graphite recycling. Graphite is therefore usually considered waste in recycling treatments, where graphite particles are concentrated in the “black mass”, a fine fraction below 1 mm, which also contains the foils and the active cathode particles such as LiCoO2 or LiNiMnCoO2. To characterize the material, various analytical methods are applied, including X-Ray Fluorescence (XRF), X-Ray Diffraction (XRD), Atomic Absorption Spectrometry (AAS), and SEM-based automated mineralogy. The latter combines scanning electron microscopy (SEM) image analysis with energy-dispersive X-ray spectroscopy (EDS). It is a powerful and well-known method for primary material characterization; however, it has not yet been applied to secondary material such as black mass, which is challenging to analyze because of fine alloy particles and the lack of an existing dedicated database. The aim of this research is to characterize the black mass as a function of the metal recycling process, in order to understand the liberation mechanisms of the active particles from the foils, their effect on the graphite particle surfaces, and their impact on the subsequent graphite flotation. Three industrial processes were taken into account: purely mechanical, pyrolysis-mechanical, and mechanical-hydrometallurgy. In summary, this article explores various common challenges for graphite and secondary material characterization.

Keywords: automated mineralogy, characterization, graphite, lithium ion battery, recycling

Procedia PDF Downloads 247
411 An Investigation of the Fracture Behavior of Model MgO-C Refractories Using the Discrete Element Method

Authors: Júlia Cristina Bonaldo, Christophe L. Martin, Martiniano Piccico, Keith Beale, Roop Kishore, Severine Romero-Baivier

Abstract:

Refractory composite materials employed in steel casting applications are prone to cracking and material damage because of the very high operating temperature (thermal shock) and the mismatched properties of the constituent phases. The fracture behavior of a model MgO-C composite refractory is investigated to quantify and characterize its thermal shock resistance, employing a cold crushing test and a Brazilian test with fractographic analysis. The discrete element method (DEM) is used to generate numerical refractory composites. In DEM, the composite is represented by an assembly of bonded particle clusters forming perfectly spherical aggregates, together with single spherical particles. Representative volume element (RVE) numerical packings are created with various numbers of particles so that the stresses converge with a low standard deviation while the number of particles, and hence the CPU calculation time, remains reasonable. Key microscopic properties are calibrated sequentially by comparing simulated stress-strain curves with experimental crushing data. Comparing simulations with experiments also allows for the evaluation of crack propagation, fracture energy, and strength. The crack propagation during the Brazilian experimental tests is monitored with digital image correlation (DIC). Simulations and experiments reveal three distinct types of fracture: the crack may spread through the aggregate, along the aggregate-matrix interface, or through the matrix.

Keywords: refractory composite, fracture mechanics, crack propagation, DEM

Procedia PDF Downloads 81
410 The Study of Spray Drying Process for Skimmed Coconut Milk

Authors: Jaruwan Duangchuen, Siwalak Pathaveerat

Abstract:

Coconut (Cocos nucifera) belongs to the family Arecaceae. Coconut juice and meat are consumed as food and dessert in several regions of the world. Coconut juice contains little protein, and arginine is the main amino acid present. Coconut meat is the endosperm of the coconut and has nutritional value; it is composed of carbohydrate, protein, and fat. The objective of this study is to utilize a by-product of the virgin coconut oil extraction process by converting skimmed coconut milk into a powder. The skimmed coconut milk was separated from the coconut milk in the virgin coconut oil extraction process and consists of approximately 6.4% protein, 7.2% carbohydrate, 0.27% dietary fiber, 6.27% sugar, 3.6% fat, and 86.93% moisture. This skimmed coconut milk can be converted into a powder as a value-added product by spray drying. The factors affecting the yield and properties of the dried skimmed coconut milk are the inlet air temperature, the outlet air temperature, and the maltodextrin concentration. Maltodextrin contents of 15% and 20%, outlet air temperatures of 80 °C, 85 °C, and 90 °C, and inlet air temperatures of 190 °C, 200 °C, and 210 °C were tested in the spray drying process. The spray dryer air flow rate was kept at 0.2698 m³/s. Moisture content (2.22-3.23%), bulk density (0.4-0.67 g/mL), wettability (4.04-19.25 min), solubility in water, color, and particle size were analyzed for the powder samples. The maximum yield (18.00%) of spray-dried coconut milk powder was obtained at an inlet temperature of 210 °C, an outlet temperature of 80 °C, and 20% maltodextrin, with a drying time of 27.27 seconds. Amino acid analysis by the HPLC method (UV detector) showed that the most abundant amino acids are glutamine (16.28%), arginine (10.32%), and glycine (9.59%).

Keywords: skimmed coconut milk, spray drying, virgin coconut oil process (VCO), maltodextrin

Procedia PDF Downloads 336
409 Design and Radio Frequency Characterization of Radial Reentrant Narrow Gap Cavity for the Inductive Output Tube

Authors: Meenu Kaushik, Ayon K. Bandhoyadhayay, Lalit M. Joshi

Abstract:

Inductive output tubes (IOTs) are widely used as microwave power amplifiers for broadcast and scientific applications. An IOT is capable of amplifying radio frequency (RF) power with very good efficiency, and its compactness, reliability, high efficiency, high linearity, and low operating cost make the device suitable for various applications. The device consists of an integrated electron gun and RF cavity, a collector, and a focusing structure. The working principle of the IOT is a combination of the triode and the klystron. The cathode in the electron gun produces a stream of electrons, and a control grid is placed in close proximity to the cathode. The input part of the IOT is the integrated gridded electron gun structure, which acts as an input cavity and provides the interaction gap where the input RF signal is applied so that it interacts with the electron beam to support the amplification process. The paper presents the design, fabrication, and testing of a radial re-entrant cavity for implementation in the input structure of an IOT at an operating frequency of 350 MHz. The suitability of the model is discussed, and a generalized mathematical relation is introduced for obtaining the proper transverse magnetic (TM) resonant mode in radial narrow-gap RF cavities. The structural modeling has been carried out in the CST and SUPERFISH codes. The cavity is fabricated from aluminum, the RF characterization is done using a vector network analyzer (VNA), and results are presented for the resonant frequency peaks obtained with the VNA.
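
The generalized relation introduced in the paper for the radial narrow-gap cavity is not reproduced in this listing. For orientation only, two standard limiting expressions, which are textbook results rather than the paper's relation, bracket such a design: the pill-box TM010 frequency and the lumped-element estimate for a re-entrant, capacitively loaded gap,

f_{010} = \frac{\chi_{01}\, c}{2\pi R} \quad (\chi_{01} \approx 2.405), \qquad f \approx \frac{1}{2\pi\sqrt{L_{\mathrm{eff}}\, C_{\mathrm{gap}}}},

where R is the cavity radius, L_eff is the effective inductance of the outer toroidal volume, and C_gap is the capacitance of the narrow gap. The capacitive loading of the re-entrant gap pulls the resonance well below the pill-box value, which is what makes a compact 350 MHz cavity feasible.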

Keywords: inductive output tubes, IOT, radial cavity, coaxial cavity, particle accelerators

Procedia PDF Downloads 125
408 Characterization of Nano Coefficient of Friction through LFM of Superhydrophobic/Oleophobic Coatings Applied on 316L SS

Authors: Hamza Shams, Sajid Saleem, Bilal A. Siddiqui

Abstract:

This paper investigates the nano-level coefficient of friction of commercially available superhydrophobic/oleophobic coatings when applied over 316L SS. 316L stainless steel, or marine stainless steel, has been selected for its widespread use in structural, marine, and biomedical applications. The coatings were investigated in harsh sand-storm and seawater environments. The particle size of the sand was carefully selected to simulate sand-storm conditions, and the sand speed was carefully modulated to simulate actual wind speed during a sand-storm. Sample preparation was carried out using the methodology prescribed by the coating manufacturer. The coating's adhesion and thickness were verified before and after the experiment with the use of Scanning Electron Microscopy (SEM). The value of the nano-level coefficient of friction has been determined using Lateral Force Microscopy (LFM). The analysis has been used to formulate a friction coefficient value, which in turn is indicative of the amount of wear the coating can bear before the base substrate is exposed to the harsh environment. The analysis aims to validate the coefficient of friction value as marketed by the coating manufacturers and, more importantly, to test the coating in real-life applications to justify its use. It is expected that the coating would resist exposure to the harsh environment for a considerable amount of time and, further, that it would prevent the sample from corroding in the process.

Keywords: 316L SS, scanning electron microscopy, lateral force microscopy, marine stainless steel, oleophobic coating, superhydrophobic coating

Procedia PDF Downloads 488
407 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mix of power unit outputs across all members of the power system network such that the total fuel cost is minimized while operating requirement limits are satisfied across the entire dispatch period. Many optimization techniques have been proposed to solve this problem. A well-known one is Quadratic Programming (QP). QP is a very simple and fast method, but, like other gradient-based methods, it can become trapped at local minimum solutions and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified for three- and six-generator system test cases. The results obtained are compared to existing results based on QP, GAs, and PSO. They show that differential evolution is superior in obtaining a combination of power outputs that fulfills the problem constraints and minimizes the total fuel cost. DE was found to converge quickly to the optimal power generation loads and to be capable of handling the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission, and maximize the reliability of the power provided to the customers.
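
A minimal sketch of the DE/rand/1/bin scheme applied to an ELD-style cost function is shown below; the quadratic cost coefficients, generator limits, demand, and DE control parameters are hypothetical illustration values, not the paper's three- and six-generator test cases.

```python
# Differential Evolution (DE/rand/1/bin) for a three-generator economic load
# dispatch sketch; power balance is enforced with a simple penalty term.
import numpy as np

rng = np.random.default_rng(0)

# Fuel cost F_i(P) = a_i + b_i*P + c_i*P^2 (hypothetical coefficients)
a = np.array([561.0, 310.0, 78.0])
b = np.array([7.92, 7.85, 7.97])
c = np.array([0.001562, 0.00194, 0.00482])
p_min = np.array([100.0, 100.0, 50.0])
p_max = np.array([600.0, 400.0, 200.0])
demand = 850.0                                   # total load to serve (MW)

def total_cost(P):
    fuel = np.sum(a + b * P + c * P ** 2)
    balance_penalty = 1e4 * abs(np.sum(P) - demand)   # enforce power balance
    return fuel + balance_penalty

NP, F, CR, GENS = 30, 0.6, 0.9, 500
dim = len(p_min)
pop = rng.uniform(p_min, p_max, size=(NP, dim))
fit = np.array([total_cost(x) for x in pop])

for _ in range(GENS):
    for i in range(NP):
        r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])       # mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                  # guarantee one crossover gene
        trial = np.where(cross, mutant, pop[i])          # binomial crossover
        trial = np.clip(trial, p_min, p_max)             # respect generator limits
        f_trial = total_cost(trial)
        if f_trial < fit[i]:                             # greedy selection
            pop[i], fit[i] = trial, f_trial

best = pop[np.argmin(fit)]
print("Dispatch (MW):", np.round(best, 2), "Cost:", round(total_cost(best), 2))
```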

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 283
406 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, even when an implicit time integration scheme is used. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
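
In the notation of this abstract, the optimization solved by the AGM can be summarized as follows; the use of L2 norms, the regularization weight \lambda, the Laplacian order n, and the step size \alpha are illustrative assumptions rather than the paper's exact settings:

J(u_0) = \tfrac{1}{2}\,\lVert u_1(u_0) - v_1 \rVert_2^2 \;+\; \lambda\,\lVert \nabla^{2n} u_0 \rVert_2^2, \qquad u_0^{(k+1)} = u_0^{(k)} - \alpha \left. \frac{\partial J}{\partial u_0} \right|_{u_0^{(k)}},

where \partial J / \partial u_0 is obtained by integrating the adjoint NSE backwards from t = 1 to t = 0, so each AGM iteration consists of one forward NSE solve and one backward adjoint solve.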

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 224
405 Performance Evaluation of Filtration System for Groundwater Recharging Well in the Presence of Medium Sand-Mixed Storm Water

Authors: Krishna Kumar Singh, Praveen Jain

Abstract:

The collection of storm water runoff and its injection into the ground is needed to sustain the groundwater table. However, the runoff entraps various types of sediments and other floating objects whose removal is essential to avoid pollution of the groundwater and blocking of the pores of the aquifer. The recharge structure therefore requires regular cleaning and maintenance due to the problem of clogging. To evaluate the performance of a filter system consisting of coarse sand (CS), gravel (G), and pebble (P) layers, a laboratory experiment was conducted in a rectangular column. The effect of variable thicknesses of the CS, G, and P layers of the filtration unit of the recharge shaft on the recharge rate and the sediment concentration of the effluent water was evaluated. Medium sand (MS) of three particle sizes, viz. 0.150–0.300 mm (T1), 0.300–0.425 mm (T2), and 0.425–0.600 mm, with top-layer thicknesses of 25 cm, 30 cm, and 35 cm, respectively, and seven influent sediment concentrations of 250–3,000 mg/l were used for the experimental study. The performance was evaluated in terms of recharge rates and clogging time. The results indicated that 100% of the suspended solids were entrapped in the upper 10 cm layer of MS and that the recharge rates declined sharply for influent concentrations of more than 1,000 mg/l. Treatments with a greater thickness of MS media showed slightly higher recharge rates than the corresponding treatments with a lower thickness of MS media. The performance of storm water infiltration systems was highly dependent on the formation of a clogging layer at the filter. An empirical relationship between recharge rate, inflow sediment load, MS particle size, and MS thickness was derived using multiple linear regression (MLR).
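
A minimal sketch of the MLR step is given below, fitting recharge rate to inflow sediment load, MS particle size, and MS layer thickness; the data are synthetic placeholders, not the laboratory measurements of this study, and the particle sizes are simply the midpoints of the three size ranges.

```python
# Multiple linear regression of recharge rate on sediment load, MS size and
# MS thickness, using ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 40
sediment = rng.uniform(250, 3000, n)               # influent sediment, mg/l
ms_size = rng.choice([0.225, 0.3625, 0.5125], n)   # mean particle size, mm
thickness = rng.choice([25.0, 30.0, 35.0], n)      # MS layer thickness, cm

# Synthetic recharge rate with the qualitative trends reported above
recharge = (12 - 0.003 * sediment + 6 * ms_size + 0.05 * thickness
            + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), sediment, ms_size, thickness])
coef, *_ = np.linalg.lstsq(X, recharge, rcond=None)
print("intercept, b_sediment, b_size, b_thickness:", np.round(coef, 4))
```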

Keywords: groundwater, medium sand-mixed storm water filter, inflow sediment load

Procedia PDF Downloads 392
404 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy

Authors: P. Selva, B. Lorraina, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard

Abstract:

Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. However, in recent years, significant advances in the fabrication process leading to grain size reduction have been made in order to improve fatigue properties of aircraft turbine discs. Indeed, a change in particle size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements as well as crack initiation detection are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of both the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen and performances of the DIC code are introduced. Second, results for sixteen specimens related to different load ratios are presented. Crack detection, strain amplitude and number of cycles to crack initiation vs. triaxial stress ratio for each loading case are given. Third, from fractographic investigations by scanning electron microscopy it is found that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.

Keywords: cruciform specimen, multiaxial fatigue, nickel-based superalloy

Procedia PDF Downloads 296
403 Key Parameters for Controlling Swell of Expansive Soil-Hydraulic Cement Admixture

Authors: Aung Phyo Kyaw, Kuo Chieh Chao

Abstract:

Expansive soils behave in a more complicated manner than ordinary soils, and soil expansion must be considered when evaluating foundation performance on expansive soil. The primary focus of this study is on mixtures of hydraulic cement and expansive soil, and the research aims to identify the key parameters for controlling the swell of the expansive soil-hydraulic cement mixture. Treatment depths can be determined using hydraulic cement ratios of 4%, 8%, 12%, and 15% for treating expansive soil. To understand the effect of the hydraulic cement percentage on the swelling of the expansive soil-hydraulic cement admixture, performing the consolidation-swell test, which yields the swelling pressure σ''ᶜˢ, is crucial. This investigation primarily focuses on consolidation-swell tests, although the heave index Cₕ is also needed to determine the total heave. The heave index can be determined from the percent swell at the specific inundation stress in the consolidation-swell test together with the swelling pressure of the constant-volume test. Obtaining the relationship between the consolidation-swell swelling pressure σ''ᶜˢ and the constant-volume swelling pressure σ''ᶜⱽ is useful in predicting heave from a single oedometer test. The relationship between σ''ᶜˢ and σ''ᶜⱽ is based on experimental results of expansive soil behavior and facilitates heave prediction for each soil. In this method, the soil property "m" is used as a parameter, and common soil property tests include compaction, particle size distribution, and the Atterberg limits. The Electricity Generating Authority of Thailand (EGAT) provided the soil sample for this study, and all laboratory testing is performed according to American Society for Testing and Materials (ASTM) standards.

Keywords: expansive soil, swelling pressure, total heave, treatment depth

Procedia PDF Downloads 85
402 Development of Hierarchically Structured Tablets with 3D Printed Inclusions for Controlled Drug Release

Authors: Veronika Lesáková, Silvia Slezáková, František Štěpánek

Abstract:

Drug dosage forms consisting of multi-unit particle systems (MUPS) for modified drug release provide a promising route for overcoming the limitations of conventional tablets. Despite the conventional use of pellets as units for MUP systems, 3D printed polymers loaded with a drug are an interesting candidate due to the control over dosing that 3D printing offers. Further, 3D printing offers high flexibility and control over the spatial structuring of a printed object. The final MUPS tablets include PVP and HPC as a granulate with other excipients, enabling the compaction of this mixture with the 3D printed inclusions, also termed minitablets. In this study, we have developed a multi-step production process for MUPS tablets that includes the 3D printing technology. The MUPS tablets with incorporated 3D printed minitablets are a complex drug delivery system providing modified drug release. Such structured tablets promise to reduce drug fluctuations in blood and the risk of local toxicity and to increase bioavailability, resulting in an improved therapeutic effect due to the fast transfer into the small intestine, where the particles are evenly distributed. Drug-loaded 3D printed minitablets were compacted into the excipient mixture, and drug release was influenced through varying parameters such as minitablet size, matrix composition, and compaction parameters. Further, the mechanical properties and morphology of the final MUPS tablets were analyzed, as properties such as plasticity and elasticity can significantly influence the dissolution profile of the drug.

Keywords: 3D printing, dissolution kinetics, drug delivery, hot-melt extrusion

Procedia PDF Downloads 93
401 Experimental Study of Flow Characteristics for a Cylinder with an Attached Flexible Strip Body at Various Reynolds Numbers

Authors: S. Teksin, S. Yayla

Abstract:

The aim of the present study was to investigate the details of the flow structure downstream of a circular cylinder mounted on a flat surface in a rectangular duct with dimensions of 8000 x 1000 x 750 mm in deep water flow for Reynolds numbers of 2500, 5000, and 7500. A flexible strip was attached behind the cylinder, and the results were compared with those of the bare body. The effect of the boundary layer on the flow structure around the cylinder was also analyzed. The diameter of the cylinder was 60 mm, and the length of the flexible splitter plate, which had a certain modulus of elasticity, was 150 mm (L/D=2.5). Time-averaged velocity vectors, vortex contours, and streamwise and transverse velocity components were investigated via Particle Image Velocimetry (PIV). Velocity vectors and vortex contours were displayed for the sections in which the boundary layer effect was not present, whereas streamwise and transverse velocity components were monitored for both cases, i.e. with and without the boundary layer effect. The experimental results showed that the vortex formation occurred over a larger area for L/D=2.5 and that the point where the vortex was maximum, measured from the base of the cylinder, was shifted. Streamwise and transverse velocity component contours were symmetrical with reference to the center of the cylinder for all cases. All Froude numbers based on the Reynolds numbers were well below 1. The velocity component values for the cylinder with the attached strip decreased by approximately twenty-five percent compared to the bare cylinder case.

Keywords: particle image velocimetry, elastic plate, cylinder, flow structure

Procedia PDF Downloads 315
400 Hybrid-Nanoengineering™: A New Platform for Nanomedicine

Authors: Mewa Singh

Abstract:

Nanomedicine, a fusion of nanotechnology and medicine, is an emerging technology ideally suited to targeted therapies. Nanoparticles overcome the low selectivity of anti-cancer drugs toward the tumor as compared to normal tissue and hence result in less severe side effects. Our new technology, HYBRID-NANOENGINEERING™, uses a new molecule (MR007) in the creation of nanoparticles that not only helps in nanonizing the medicine but also provides synergy with the medicine. The simplified manufacturing process results in reduced manufacturing costs, and treatment is made more convenient because hybrid nanomedicines can be produced in oral, injectable, or transdermal formulations. The manufacturing process uses no protein, oil, or detergents. The particle size is below 180 nm with a narrow size distribution. Importantly, these properties confer great stability on the structure: the formulation does not aggregate in plasma, is stable over a wide range of pH, and the final hybrid formulation is stable for at least 18 months as a powder. More than 97 drugs, including paclitaxel, docetaxel, tamoxifen, doxorubicin, prednisone, and artemisinin, have been nanonized in water-soluble formulations. Preclinical studies on tumor cell cultures show promising results. Our HYBRID-NANOENGINEERING™ platform enables the design and development of hybrid nano-pharmaceuticals that combine efficacy with tolerability, giving patients hope for both extended overall survival and improved quality of life. This study discusses this new HYBRID-NANOENGINEERING™ platform, covering targeted drug delivery, synergistic and potentiating effects, barriers to drug delivery, and advanced drug delivery systems.

Keywords: nano-medicine, nano-particles, drug delivery system, pharmaceuticals

Procedia PDF Downloads 486
399 Secure Optimized Ingress Filtering in Future Internet Communication

Authors: Bander Alzahrani, Mohammed Alreshoodi

Abstract:

Information-centric networking (ICN) using architectures such as the Publish-Subscribe Internet Technology (PURSUIT) has been proposed as a new networking model that aims at replacing the current end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than on which network entities are exchanging information, which allows control plane functions such as routing and host location to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple and lightweight mechanism based on Bloom filter technologies to forward the packets. Although this forwarding scheme solves many problems of today's Internet, such as routing table growth and scalability issues, it is vulnerable to brute-force attacks, which are the starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter attacks such as denial-of-service attacks at the ingress of the network. To achieve this, special forwarding nodes called Edge-FW are directly attached to end-user nodes and are used to perform a security test on maliciously injected random packets at the ingress of the path, preventing brute-force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating the forwarding identifier (FId), uses a cryptographically secure hash function to create a 64-bit hash, h, over the formed FId; this hash is included in the packet for authentication purposes. Our proposal prevents the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network. We optimize the FId to the minimum possible filling factor, where ρ ≤ ρm, while still supporting longer delivery trees, so the network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, while the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm. Therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. The preliminary analysis of this proposal indicates that with the designed scheme, the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability.
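
A minimal sketch of the ingress test described above follows. The 256-bit FId length, the five bits per link ID, the threshold ρm = 0.5, and the use of a keyed hash (HMAC) for the 64-bit authentication tag are illustrative assumptions, not PURSUIT parameters; the abstract only specifies a cryptographically secure 64-bit hash over the FId.

```python
# Build a Bloom-filter forwarding identifier (FId) by OR-ing per-link IDs,
# compute its filling factor, attach an authentication tag, and reject
# packets whose FId is over-filled or whose tag does not verify.
import hashlib
import hmac
import secrets

FID_BITS = 256
BITS_PER_LINK = 5
RHO_MAX = 0.5                      # maximum allowed filling factor (rho_m)
KEY = b"topology-manager-secret"   # shared by topology manager and Edge-FW (assumption)

def link_id(name: str) -> int:
    """Deterministically map a link name to a sparse k-bit link ID."""
    lid = 0
    for i in range(BITS_PER_LINK):
        h = hashlib.sha256(f"{name}:{i}".encode()).digest()
        lid |= 1 << (int.from_bytes(h[:4], "big") % FID_BITS)
    return lid

def build_fid(path):
    fid = 0
    for link in path:
        fid |= link_id(link)       # Bloom-filter insertion = bitwise OR
    return fid

def filling_factor(fid: int) -> float:
    return bin(fid).count("1") / FID_BITS

def sign(fid: int) -> bytes:
    return hmac.new(KEY, fid.to_bytes(FID_BITS // 8, "big"),
                    hashlib.sha256).digest()[:8]          # 64-bit tag

def edge_fw_check(fid: int, tag: bytes) -> bool:
    """Ingress test: valid tag AND filling factor within the bound."""
    return hmac.compare_digest(tag, sign(fid)) and filling_factor(fid) <= RHO_MAX

# Legitimate FId issued for a three-hop delivery path
fid = build_fid(["A-B", "B-C", "C-D"])
assert edge_fw_check(fid, sign(fid))

# Attacker injects a random, densely filled FId without knowing the key
bogus = int.from_bytes(secrets.token_bytes(FID_BITS // 8), "big")
print("bogus packet accepted?", edge_fw_check(bogus, b"\x00" * 8))
```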

Keywords: forwarding identifier, filling factor, information centric network, topology manager

Procedia PDF Downloads 154
398 pH-Responsive Carrier Based on Polymer Particle

Authors: Florin G. Borcan, Ramona C. Albulescu, Adela Chirita-Emandi

Abstract:

pH-responsive drug delivery systems are gaining importance because they deliver the drug at a specific time in accordance with pathophysiological need, resulting in improved therapeutic efficacy and patient compliance. Polyurethane materials are well known for industrial applications (elastomers and foams used in various insulation and automotive products), but they are also versatile biocompatible materials with many applications in medicine, such as artificial skin for premature neonates, membranes in the hybrid artificial pancreas, prosthetic heart valves, etc. This study aimed at the physico-chemical characterization of a drug delivery system based on polyurethane microparticles. The synthesis is based on a polyaddition reaction between an aqueous phase (a mixture of polyethylene glycol M=200, 1,4-butanediol, and Tween® 20) and an organic phase (lysine diisocyanate in acetone), combined with simultaneous emulsification. Different active agents (omeprazole, amoxicillin, metoclopramide) were used to verify the release profile of the macromolecular particles in media of different pH. Zetasizer measurements were performed using an instrument based on two modules, a Vasco size analyzer and a Wallis zeta potential analyzer (Cordouan Technol., France), on samples that were kept in various solutions with different pH, and the maximum absorbance in the UV-Vis spectra was collected on a UVi Line 9,400 spectrophotometer (SI Analytics, Germany). The results of this investigation reveal that these particles are suitable for prolonged release in gastric medium, where they can ensure an almost constant concentration of the active agents for 1-2 weeks, while they disassemble faster in media with neutral pH, such as intestinal fluid.

Keywords: lysine diisocyanate, nanostructures, polyurethane, Zetasizer

Procedia PDF Downloads 184
397 BiFeO3-CoFe2O4-PbTiO3 Composites: Structural, Multiferroic and Optical Characteristics

Authors: Nidhi Adhlakha, K. L. Yadav

Abstract:

Three-phase magnetoelectric (ME) composites (1-x)(0.7BiFeO3-0.3CoFe2O4)-xPbTiO3 (or equivalently (1-x)(0.7BFO-0.3CFO)-xPT) with x = 0, 0.30, 0.35, 0.40, 0.45, and 1.0 were synthesized using a hybrid processing route. The effects of PT addition on the structural, multiferroic, and optical properties were subsequently investigated. A detailed Rietveld refinement analysis of the X-ray diffraction patterns has been performed, which confirms the presence of the structural phases of the individual constituents in the composites. Field emission scanning electron microscopy (FESEM) images were taken for microstructural analysis and grain size determination. Transmission electron microscopy (TEM) analysis of 0.7BFO-0.3CFO reveals an average particle size in the range of 8-10 nm. The temperature-dependent dielectric constant at various frequencies (1 kHz, 10 kHz, 50 kHz, 100 kHz, and 500 kHz) has been studied, and the dielectric study reveals an increase in the dielectric constant and a decrease in the average dielectric loss of the composites with the incorporation of PT. The room-temperature ferromagnetic behavior of the composites is confirmed through the observation of magnetization vs. magnetic field (M-H) hysteresis loops. The variation of magnetization with temperature indicates the presence of spin glass behavior in the composites. Magnetoelectric coupling is evidenced in the composites through the dependence of the dielectric constant on the magnetic field, and a magnetodielectric response of 2.05% is observed for 45 mol% addition of PT. The fractional change of the magnetic-field-induced dielectric constant can be expressed as ∆ε_r ~ γM², and the value of γ is found to be ~1.08×10⁻² (emu/g)⁻² for the composite with x=0.40. Fourier transform infrared (FTIR) spectroscopy of the samples was carried out to analyze the various bonds formed in the composites.

Keywords: composite, X-ray diffraction, dielectric properties, optical properties

Procedia PDF Downloads 309
396 Spectroscopic Relation between Open Cluster and Globular Cluster

Authors: Robin Singh, Mayank Nautiyal, Priyank Jain, Vatasta Koul, Vaibhav Sharma

Abstract:

The curiosity to investigate space and its mysteries has always been the main impetus of human interest, since living matter has existed, along with the few other forms of life, from the "début de l'Univers" (beginning of the Universe). The keen drive to uncover the secrets of stars and their unusual behaviour has always been an igniter of stellar investigation. Just as humankind lives in civilizations and states, stars likewise live in communities named 'clusters'. Clusters are separated into two types, i.e., open clusters and globular clusters. An open cluster is a group of up to a few thousand stars that formed from the same giant molecular cloud and, for the most part, contains Population I (very metal-rich) and Population II (moderately metal-rich) stars, whereas globular clusters are roughly spherical groups of more than thirty thousand stars that orbit a galactic core and basically contain Population III (extremely metal-poor) stars. The focus of this paper lies in the spectroscopic investigation of globular clusters such as M92 and NGC419 and open clusters such as M34 and IC2391 in different color bands, using software such as the VIREO virtual observatory, Aladin, CMUNIWIN, and MS-Excel. The resulting Hertzsprung-Russell (HR) diagrams are assessed against classical cosmological models such as the Einstein model, the de Sitter model, and the Planck survey model for a better age estimation of the respective clusters. The colour-magnitude diagrams of these clusters were obtained by photometric analysis in the g and r bands, which were then transformed into the B and V bands, to reveal the nature of the stars present in the individual clusters.

Keywords: color magnitude diagram, globular clusters, open clusters, Einstein model

Procedia PDF Downloads 226
395 Engineering of Filtration Systems in Egyptian Cement Plants: Industrial Case Study

Authors: Mohamed. A. Saad

Abstract:

The paper presents a case study regarding the conversion of electrostatic precipitators (ESPs) into fabric filters (FF). Seven cement production companies were established in Egypt during the period 1927 to 1980, and six new companies were established in the 1980s to cope with the increasing cement demand. The cement production market shares in Egypt indicate that there are six multinational companies in the local market; they are interested in improving environmental conditions and therefore decided to carry out emission reduction projects. The experimental work in the present study is divided into two main parts: (I) measuring the efficiency of filter fabrics, with a detailed description of a purpose-designed apparatus; the paper also reveals the factors that should be optimized in order to assist problem diagnosis and solving and to increase the life of bag filters. (II) Methods to mitigate dust emissions in Egyptian cement plants, with a special focus on converting the electrostatic precipitators (ESPs) into fabric filters (FF) using the same ESP casing, bottom hoppers, dust transportation system, and ESP ductwork. Only the fan system was replaced, to handle the higher pressure drop of the fabric filter. The proper selection of the bag material was a prime factor with regard to gas composition, temperature, and particle size. Fiberglass bags with a PTFE membrane coating were selected. This fabric is rated for a continuous temperature of 250 °C and a surge temperature of 280 °C. The dust emission recorded from the production line fitted with fabric filters was less than 20 mg/m3, which is excellent compared with the stacks of the lines still operating with ESPs.

Keywords: electrostatic precipitator, filtration, dust collectors, cement

Procedia PDF Downloads 255
394 Antibacterial Wound Dressing Based on Metal Nanoparticles Containing Cellulose Nanofibers

Authors: Mohamed Gouda

Abstract:

Antibacterial wound dressings based on cellulose nanofibers containing different metal nanoparticles (CMC-MNPs) were synthesized using an electrospinning technique. First, composites of carboxymethyl cellulose containing different metal nanoparticles (CMC/MNPs), namely copper nanoparticles (CuNPs), iron nanoparticles (FeNPs), zinc nanoparticles (ZnNPs), cadmium nanoparticles (CdNPs), and cobalt nanoparticles (CoNPs), were synthesized, and these composites were then transferred to the electrospinning process. The synthesized CMC-MNPs were characterized using scanning electron microscopy (SEM) coupled with energy-dispersive X-ray spectroscopy (EDX), and UV-visible spectroscopy was used to confirm nanoparticle formation. The SEM images clearly showed regular flat shapes with semi-porous surfaces. All MNPs were well distributed inside the backbone of the cellulose without aggregation. The average particle diameters were 29-39 nm for ZnNPs, 29-33 nm for CdNPs, 25-33 nm for CoNPs, 23-27 nm for CuNPs, and 22-26 nm for FeNPs. The surface morphology, water uptake, release of MNPs from the nanofibers in water, and antimicrobial efficacy were studied. SEM images revealed that the electrospun CMC-MNPs nanofibers are smooth and uniformly distributed without bead formation, with average fiber diameters in the range of 300 to 450 nm. The fiber diameters were not affected by the presence of MNPs. TEM images showed that the MNPs are present in/on the electrospun CMC-MNPs nanofibers, whose diameters were in the range of 300–450 nm, and that the MNPs are spherical in shape. The CMC-MNPs nanofibers showed good hydrophilic properties and had excellent antibacterial activity against the Gram-negative bacterium Escherichia coli and the Gram-positive bacterium Staphylococcus aureus.

Keywords: electrospinning technique, metal nanoparticles, cellulosic nanofibers, wound dressing

Procedia PDF Downloads 329
393 Ultrasound-Assisted Extraction of Carotenoids from Tangerine Peel Using Ostrich Oil as a Green Solvent and Optimization of the Process by Response Surface Methodology

Authors: Fariba Tadayon, Nika Gharahgolooyan, Ateke Tadayon, Mostafa Jafarian

Abstract:

Carotenoid pigments are a diverse group of lipophilic compounds that generate the yellow to red colors of many plants, foods, and flowers. A well-known carotenoid with pro-vitamin A activity is β-carotene. Because of the color of citrus fruit peel, the peel can be a good source of different carotenoids. Ostrich oil is one of the most valuable raw materials in many branches of industry, medicine, cosmetics, and nutrition, and this animal-based oil can be considered an alternative, green solvent. In this study, citrus peel waste is recycled by a simple method, and the extracted carotenoids can enhance the properties of ostrich oil. In this work, a simple and efficient method for the extraction of carotenoids from tangerine peel was designed. Ultrasound-assisted extraction (UAE) showed a significant effect on the extraction rate by increasing the mass transfer rate. Ostrich oil can be used as a green solvent in many studies to eliminate petroleum-based solvents. Since tangerine peel is a complex source of different carotenoids, separation and determination were performed by high-performance liquid chromatography (HPLC). In addition, the abilities of ostrich oil and sunflower oil to extract carotenoids from tangerine peel and carrot were compared. The highest yields of β-carotene extracted from tangerine peel using sunflower oil and ostrich oil were 75.741 and 88.110 mg/L, respectively. Optimization of the process was achieved by response surface methodology (RSM), and the optimal extraction conditions were a tangerine peel powder particle size of 0.180 mm, an ultrasonic intensity of 19 W/cm2, and a sonication time of 30 minutes.

Keywords: β-carotene, carotenoids, citrus peel, ostrich oil, response surface methodology, ultrasound-assisted extraction

Procedia PDF Downloads 316
392 Simplified Empirical Method for Predicting Liquefaction Potential and Its Application to Kaohsiung Areas in Taiwan

Authors: Darn H. Hsiao, Zhu-Yun Zheng

Abstract:

Taiwan is located between the Eurasian and Philippine Sea plates, and earthquakes therefore occur frequently. The coastal plains in western Taiwan are alluvial plains, and the soils of the alluvium come mostly from the Lao-Shan belt in the central mountainous area of southern Taiwan; they are derived mostly from sand/shale and slate. Previous investigations found that the soils in the Kaohsiung area of southern Taiwan are mainly composed of slate, shale, quartz, low-plasticity clay, silt, silty sand, and so on. Past earthquakes have also shown that the soil in Kaohsiung is highly susceptible to subsidence due to liquefaction, and the resulting loss of bearing capacity beneath buildings leads to liquefaction disasters. In this study, borehole drilling data from nine districts within the Love River Basin in the city center were used, and factors affecting liquefaction, including the fines content (FC), the standard penetration test N value (SPT N), the thickness of the clay layer near the ground surface, the thickness of potentially liquefiable soil, and the groundwater level, were further examined with respect to liquefaction potential. The results show that the liquefaction potential is higher in the areas near the riverside, the backfill area, and the western part of the study area. This paper also compares the old paleo-geological map and the soil particle-size distribution curves with the LPI map calculated from the analysis results. After all the parameters were studied for five subzones in the Love River Basin by the maximum-minimum method, it was found that the standard penetration test N value and the thickness of the clay layer are the most influential.

Keywords: liquefaction, western Taiwan, liquefaction potential map, high liquefaction potential areas

Procedia PDF Downloads 119
391 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability

Authors: Akshay B. Pawar, Rohit Y. Parasnis

Abstract:

Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals than ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV and have compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in, or alternatives to, the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method) that has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method), and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through digital filters. After filtering, the following HRV parameters were computed from PPG using each of the five methods and also from ECG using the gold standard method: time domain parameters (SDNN, pNN50, and RMSSD) and frequency domain parameters (very-low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF), and total power (TP)). In addition, Poincaré plots were plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (showing more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV, and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
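
A minimal sketch of the time-domain and Poincaré descriptors compared in this study is given below. It computes SDNN, RMSSD, pNN50, and the SD1/SD2 ratio from any interval series (RR intervals from ECG or pulse intervals from one of the PPG fiducial-point methods); the formulas are the standard textbook definitions and the interval series is synthetic, not data from the study.

```python
# Time-domain HRV parameters and Poincare SD1/SD2 from an interval series.
import numpy as np

def hrv_time_domain(intervals_ms):
    nn = np.asarray(intervals_ms, dtype=float)
    diff = np.diff(nn)
    sdnn = np.std(nn, ddof=1)                          # SDNN (ms)
    rmssd = np.sqrt(np.mean(diff ** 2))                # RMSSD (ms)
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)       # pNN50 (%)
    sd1 = np.std(diff, ddof=1) / np.sqrt(2.0)          # short-term Poincare axis
    sd2 = np.sqrt(max(2.0 * sdnn ** 2 - sd1 ** 2, 0.0))  # long-term Poincare axis
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50, "SD1/SD2": sd1 / sd2}

# Synthetic intervals around 800 ms with respiratory-like modulation
rng = np.random.default_rng(0)
n = 300
rr = 800 + 40 * np.sin(2 * np.pi * 0.25 * np.arange(n) * 0.8) + rng.normal(0, 15, n)
print({key: round(val, 2) for key, val in hrv_time_domain(rr).items()})
```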

Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot

Procedia PDF Downloads 324
390 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay in producing displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept, and it processes continuous GBSAR images unit by unit. Images within a window form a basic unit. By taking this strategy, the RAM requirement is reduced to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected, as the chain keeps temporarily coherent pixels that are present only in certain units but not over the whole observation period. The chain supports real-time processing of the continuous data, and the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images from a single campaign in order to improve the signal-to-noise ratio of the discontinuous data and minimise the loss of coherence. The temporally averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms, such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the level of a few sub-millimetres are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring in a wide range of scientific and practical applications.
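
A minimal sketch of two of the building blocks named above is given below: temporal averaging of a complex image stack from one discontinuous campaign and windowed interferometric coherence estimation between two campaigns. The image sizes, window size, and synthetic data are illustration assumptions, not the package's actual implementation.

```python
# Temporal averaging of a complex GBSAR stack and windowed coherence
# estimation between two temporally averaged campaigns.
import numpy as np

rng = np.random.default_rng(0)

def temporal_average(stack):
    """Average a (n_images, rows, cols) complex stack to suppress noise."""
    return np.mean(stack, axis=0)

def coherence(s1, s2, win=5):
    """Windowed interferometric coherence between two complex images."""
    pad = win // 2
    def box(f):
        out = np.zeros_like(f)
        for i in range(f.shape[0]):
            for j in range(f.shape[1]):
                out[i, j] = f[max(i - pad, 0):i + pad + 1,
                              max(j - pad, 0):j + pad + 1].sum()
        return out
    num = box(s1 * np.conj(s2))
    den = np.sqrt(box(np.abs(s1) ** 2) * box(np.abs(s2) ** 2))
    return np.abs(num) / den

# Two synthetic campaigns: identical scene phase plus independent noise
rows, cols, n_img = 64, 64, 20
scene = np.exp(1j * rng.uniform(0, 2 * np.pi, (rows, cols)))
noise = lambda: 0.5 * (rng.standard_normal((n_img, rows, cols))
                       + 1j * rng.standard_normal((n_img, rows, cols)))
camp1, camp2 = scene + noise(), scene + noise()

gamma = coherence(temporal_average(camp1), temporal_average(camp2))
print("mean coherence after temporal averaging:", round(float(gamma.mean()), 3))
```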

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 163
389 Qualifying Aggregates Produced in Kano-Nigeria for Use in Superpave Design Method

Authors: Ahmad Idris, Bishir Kado, Murtala Umar, Armaya`u Suleiman Labo

Abstract:

Superpave is short for Superior Performing Asphalt Pavement and represents a basis for specifying component materials, asphalt mixture design and analysis, and pavement performance prediction. This technology is the result of long research projects conducted by the Strategic Highway Research Program (SHRP) of the Federal Highway Administration. This research was aimed at examining the suitability of aggregates found in Kano for use in the Superpave design method. Aggregate samples were collected from different sources in Kano, Nigeria, and their engineering properties, as they relate to the Superpave design requirements, were determined. The average coarse aggregate angularity in Kano was found to be 87% for one fractured face and 86% for two or more fractured faces, against standards of 80% and 85%, respectively. The average fine aggregate angularity was found to be 47%, with a requirement of 45% minimum. Flat and elongated particles were found to be 10%, with a maximum criterion of 10%. The sand equivalent was found to be 51%, with a criterion of 45% minimum. Strength tests were also carried out, and the results reflect the requirements of the standards: the tests include the aggregate impact value, aggregate crushing value, and aggregate abrasion tests, and the results are 27.5%, 26.7%, and 13%, respectively, with a maximum criterion of 30%. Specific gravity tests were also carried out and gave an average value of 2.52, against a criterion of 2.6 to 2.9, and water absorption was found to be 1.41%, with a maximum criterion of 0.6%. From the study, the results of the tests indicated that the aggregate properties met the requirements of the Superpave design method based on the specifications of ASTM D5821, ASTM D4791, AASHTO T176, AASHTO T33, and BS 815.

Keywords: Superpave, aggregates, asphalt mix, Kano

Procedia PDF Downloads 391
388 A Linguistic Product of K-Pop: A Corpus-Based Study on the Korean-Originated Chinese Neologism Simida

Authors: Hui Shi

Abstract:

This article examines the online popularity of the Chinese neologism simida, a loanword derived from the Korean declarative sentence-final suffix seumnida. Drawing on corpus data obtained from Weibo, the Chinese counterpart of Twitter, this study analyzes the morphological and syntactic processes behind simida's coinage, as well as the causes of its prevalence on Chinese social media. The findings show that simida is used by Weibo bloggers in two ways: (1) as an alternative word for 'Korea' and 'Korean'; (2) as a redundant sentence-final particle that adds a Korean-like speech style to a statement. Additionally, analysis of Weibo user profiles reveals demographic distribution patterns concerning this neologism and highlights young Weibo users in third-tier cities as its leading adopters. These results are accounted for under the theoretical framework of social indexicality, especially how variations generate style in the indexical field. This article argues that the creation of such an ethnically-targeted neologism is a linguistic demonstration of Chinese netizens' two-sided attitudes toward the previously heated Korean Wave. The exotic suffix seumnida is borrowed into Chinese as simida due to its high frequency in Korean cultural exports. It therefore gradually becomes a replacement for Korea-related lexical items due to markedness, regardless of semantic prosody. Its innovative implantation into Chinese syntax, on the other hand, reflects Chinese netizens' active manipulation of language for their online identity building. This study has implications for research on the linguistic construction of identity and style and lays the groundwork for work on linguistic creativity in Chinese new media.

Keywords: Chinese neologism, loanword, humor, new media

Procedia PDF Downloads 175
387 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of the SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration is extended to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models; however, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
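
A minimal one-dimensional sketch of the direct-deconvolution idea is shown below: an invertible Gaussian filter is applied to a velocity field, inverted in Fourier space with a small regularization constant, and the subfilter-scale stress is rebuilt from the deconvolved field. The synthetic velocity field, filter width, and regularization value are illustration assumptions, not the paper's setup, and the a priori correlation printed at the end is simply the check used here.

```python
# 1-D direct deconvolution with a Gaussian filter: reconstruct the SFS
# stress tau = bar(uu) - bar(u)bar(u) from the deconvolved velocity.
import numpy as np

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
rng = np.random.default_rng(1)

# Synthetic velocity field with a decaying spectrum (illustration only)
amp = np.zeros(N)
mask = np.abs(k) > 0
amp[mask] = np.abs(k[mask]) ** (-5.0 / 6.0)
u = np.real(np.fft.ifft(amp * np.exp(1j * rng.uniform(0, 2 * np.pi, N)))) * N

delta = 8 * dx                                   # filter width (assumption)
G = np.exp(-(k * delta) ** 2 / 24.0)             # Gaussian filter transfer function

def gauss_filter(f):
    return np.real(np.fft.ifft(G * np.fft.fft(f)))

def deconvolve(f_bar, eps=1e-6):
    # Direct inversion of the filter in spectral space; eps avoids blow-up
    return np.real(np.fft.ifft(np.fft.fft(f_bar) / (G + eps)))

u_bar = gauss_filter(u)
u_star = deconvolve(u_bar)                       # approximately recovers u

# Exact SFS stress and a deconvolution-type reconstruction (normal component)
tau_exact = gauss_filter(u * u) - u_bar * u_bar
tau_ddm = gauss_filter(u_star * u_star) - u_bar * u_bar

corr = np.corrcoef(tau_exact, tau_ddm)[0, 1]
print(f"correlation between exact and reconstructed SFS stress: {corr:.3f}")
```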

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 76
386 Simulation, Optimization, and Analysis Approach of Microgrid Systems

Authors: Saqib Ali

Abstract:

Energy sources are classified into two types depending on whether they can be replenished. Sources that cannot be restored to their original form once consumed are considered nonrenewable energy resources (e.g., coal and fuel), whereas those that are replenished even after being consumed are known as renewable energy resources (e.g., wind, solar, and hydel power). Renewable energy is a cost-effective way to generate clean and green electrical energy, and nowadays the majority of countries are paying heed to energy generation from renewable energy sources (RES). Pakistan mostly relies on conventional energy resources, which are largely nonrenewable; coal and fuel are among the major resources, and their prices are increasing with time. On the other hand, RES have great potential in the country, and with their deployment, greater reliability and a more effective power system can be obtained. In this thesis, a similar concept is used, and a hybrid power system is proposed that intermixes renewable and nonrenewable sources. The source side is composed of solar, wind, and fuel cells, which are used in an optimal manner to serve the load. The goal is to provide an economical, reliable, and uninterruptible power supply. This is achieved with optimal controllers (PI, PD, PID, FOPID), and optimization techniques are applied to the controllers to achieve the desired results. Advanced algorithms (particle swarm optimization and the flower pollination algorithm) are used to extract the desired output from the controllers. A detailed comparison in the form of tables and results is provided, highlighting the efficiency of the proposed system.
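
As an illustration of the optimization step described above, the following is a minimal sketch of particle swarm optimization tuning PID gains against a simple first-order plant; the plant model, the ITAE cost function, the gain bounds, and the PSO parameters are illustrative assumptions, not the system studied in the thesis.

```python
# A minimal sketch of PID gain tuning with particle swarm optimization (PSO);
# the first-order plant, ITAE cost, gain bounds, and PSO parameters are
# illustrative assumptions.
import numpy as np

def step_response_cost(gains, dt=0.01, t_end=5.0, tau=0.5):
    """Simulate a unit-step response of a first-order plant under PID control
    and return the ITAE (integral of time-weighted absolute error)."""
    kp, ki, kd = (float(g) for g in gains)
    n = int(t_end / dt)
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(n):
        if not np.isfinite(y):
            return 1e9                      # penalize gain sets that destabilize the loop
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau            # plant: tau * dy/dt = -y + u
        prev_err = err
        cost += (k * dt) * abs(err) * dt    # ITAE
    return cost

def pso(cost_fn, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    """Basic global-best particle swarm optimization over box constraints."""
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    rng = np.random.default_rng(0)
    x = lo + rng.random((n_particles, len(bounds))) * (hi - lo)
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([cost_fn(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost_fn(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved] = x[improved]
        pbest_cost[improved] = costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()

gains, best_cost = pso(step_response_cost, bounds=[(0.0, 10.0), (0.0, 10.0), (0.0, 1.0)])
print(f"tuned (Kp, Ki, Kd) = {np.round(gains, 3)}, ITAE = {best_cost:.4f}")
```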

Keywords: distributed generation, demand-side management, hybrid power system, micro grid, renewable energy resources, supply-side management

Procedia PDF Downloads 98
385 A Novel Rapid Well Control Technique Modelled in Computational Fluid Dynamics Software

Authors: Michael Williams

Abstract:

The ability to control a flowing well is of the utmost importance. During the kill phase, heavy-weight kill mud is circulated around the well; while this increases bottom hole pressure, it also increases damage to the near-wellbore formation. The addition of high-density spherical objects has the potential to minimise this near-wellbore damage, increase bottom hole pressure, and reduce the operational time needed to kill the well. The time saving comes from the rapid deployment of high-density spherical objects instead of building a high-density drilling fluid. The research aims to model the well kill process using computational fluid dynamics (CFD) software. A model has been created as a proof of concept to analyse the flow of micron-sized spherical objects in the drilling fluid. Initial results show that this new methodology of spherical objects in drilling fluid agrees with the traditional streamlines seen in particle-free flow. Additional models demonstrate that areas of higher flow rate around the bit can lead to an increased probability of washout of formations but do not affect the flow of the micron-sized spherical objects. Interestingly, areas that experience dimensional changes, such as tool joints and various BHA components, do not appear at this initial stage to experience increased velocity or to create areas of turbulent flow that could compromise borehole stability. In conclusion, the initial models of this novel well control methodology have not demonstrated any adverse flow patterns, suggesting that the methodology may be viable under field conditions.
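
To make the pressure argument concrete, the following is a minimal sketch of how loading kill mud with high-density spheres raises the effective mixture density and hence the hydrostatic bottom hole pressure; the densities, sphere loading, and well depth are illustrative assumptions, not values from the study.

```python
# A minimal sketch of the hydrostatic effect of adding high-density spheres to
# kill mud: effective mixture density and bottom hole pressure. All numbers
# (mud and sphere densities, loading, depth) are illustrative assumptions.
G = 9.81  # gravitational acceleration, m/s^2

def mixture_density(rho_mud, rho_sphere, phi):
    """Volume-weighted density of mud loaded with a sphere volume fraction phi."""
    return (1.0 - phi) * rho_mud + phi * rho_sphere

def bottom_hole_pressure(rho, tvd):
    """Hydrostatic pressure (Pa) of a fluid column of true vertical depth tvd (m)."""
    return rho * G * tvd

rho_mud = 1200.0      # kg/m^3, base drilling fluid (assumed)
rho_sphere = 7800.0   # kg/m^3, e.g. steel spheres (assumed)
tvd = 3000.0          # m, true vertical depth (assumed)

for phi in (0.0, 0.05, 0.10):
    rho_mix = mixture_density(rho_mud, rho_sphere, phi)
    p = bottom_hole_pressure(rho_mix, tvd)
    print(f"phi = {phi:.2f}: rho = {rho_mix:6.0f} kg/m^3, BHP = {p / 1e6:5.1f} MPa")
```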

Keywords: well control, fluid mechanics, safety, environment

Procedia PDF Downloads 173
384 Synthesis and Study of Properties of Polyaniline/Nickel Sulphide Nanocomposites

Authors: Okpaneje Onyinye Theresa, Ugwu Laeticia Udodiri, Okereke Ngozi Agatha, Okoli Nonso Livinus

Abstract:

This work presents the synthesis and optical characterization of polyaniline/nickel sulphide nanocomposites. Polyaniline (PANI) and nickel sulphide (NiS) nanoparticles were synthesized by oxidative chemical polymerization and the sol-gel method, respectively. The polyaniline/nickel sulphide nanocomposites with various concentrations of NiS were synthesized by in-situ polymerization of the aniline monomer. In each case, the nickel sulphide nanoparticles were uniformly dispersed in the aniline hydrochloride before the initiation of oxidative chemical polymerization using ammonium persulphate. The samples formed were subjected to optical characterization using an ultraviolet-visible (UV-VIS) spectrophotometer (model: 756S UV-VIS). Optical analysis of the synthesized nanoparticles and nanocomposites showed absorption of radiation within the visible region. The Tauc model was used to obtain the optical band gap. The energy band gap values of PANI and NiS were found to be 2.50 eV and 1.95 eV, respectively, while the PANI/NiS nanocomposites have an energy band gap that decreased from 2.25 eV to 1.90 eV as the amount of NiS increased from 0.5 g to 2.0 g. These optical results show that the nanocomposites are potential materials for solar cells and optoelectronic devices. The structural analysis confirmed the formation of polyaniline and hexagonal nickel sulphide with an average crystallite size of 25.521 nm, while the average crystallite sizes of the PANI/NiS nanocomposites ranged from 19.458 nm to 25.108 nm. Average particle sizes obtained from the SEM images ranged from 23.24 nm to 51.88 nm. Compositional results confirmed the presence of the desired elements that make up the nanoparticles and nanocomposites.
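
As an illustration of the band gap extraction step, the following is a minimal sketch of a Tauc analysis for a direct allowed transition, where (αhν)² is plotted against photon energy and extrapolated to zero; the synthetic absorption data and the fitting window are illustrative assumptions, not the measured PANI/NiS spectra.

```python
# A minimal sketch of extracting an optical band gap with the Tauc method,
# (alpha*h*nu)^2 vs. h*nu for a direct allowed transition. The synthetic
# absorption data and the fitting window are illustrative assumptions.
import numpy as np

H = 4.1357e-15          # Planck constant, eV*s
C = 2.9979e8            # speed of light, m/s

wavelength_nm = np.linspace(350, 900, 200)            # UV-VIS range
E = H * C / (wavelength_nm * 1e-9)                    # photon energy, eV

# Synthetic absorption coefficient mimicking an Eg ~ 2.0 eV direct-gap material
Eg_true = 2.0
alpha = 1e4 * np.sqrt(np.clip(E - Eg_true, 0.0, None)) / E   # 1/cm

tauc = (alpha * E) ** 2                               # (alpha*h*nu)^2

# Linear fit over the steep absorption edge and extrapolation to tauc = 0
mask = (tauc > 0.1 * tauc.max()) & (tauc < 0.8 * tauc.max())
slope, intercept = np.polyfit(E[mask], tauc[mask], 1)
Eg_est = -intercept / slope
print(f"estimated band gap: {Eg_est:.2f} eV (true value of synthetic data: {Eg_true} eV)")
```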

Keywords: polyaniline, nickel sulphide, polyaniline-nickel sulphide nanocomposite, optical characterization, structural analysis, morphological properties, compositional properties

Procedia PDF Downloads 116
383 Biochar Affects Compressive Strength of Portland Cement Composites: A Meta-Analysis

Authors: Zhihao Zhao, Ali El-Nagger, Johnson Kau, Chris Olson, Douglas Tomlinson, Scott X. Chang

Abstract:

One strategy to reduce CO₂ emissions from cement production is to reduce the amount of Portland cement produced by replacing it with supplementary cementitious materials (SCMs). Biochar, an eco-friendly and stable porous pyrolytic material, is a potential SCM. However, the effects of biochar addition on the performance of Portland cement composites are not fully understood. This meta-analysis investigated the impact of biochar addition on the 7- and 28-day compressive strength of Portland cement composites based on 606 paired observations. Biochar feedstock type, pyrolysis conditions, pre-treatments and modifications, biochar dosage, and curing type all influenced the compressive strength of Portland cement composites. Biochars obtained from plant-based feedstocks (except rice and hardwood) improved the 28-day compressive strength of Portland cement composites by 3-13%. Biochars produced at pyrolysis temperatures higher than 450 °C, with a heating rate of around 10 °C/min, increased the 28-day compressive strength more effectively. Furthermore, the addition of biochars with small particle sizes increased the compressive strength of Portland cement composites by 2-7% compared to composites without biochar. Biochar dosages of less than 2.5% of the binder weight enhanced both the 7- and 28-day compressive strengths, and common curing methods maintained the effect of biochar addition. However, adding fine and coarse aggregates such as sand and gravel to the mix affects the compressive strength of the resulting concrete and mortar, diminishing the effect of biochar addition and rendering it nonsignificant. We conclude that appropriate biochar addition can maintain or enhance the mechanical performance of Portland cement composites, and future research should explore the mechanisms underlying the effects of biochar on the performance of cement composites.
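
To illustrate the kind of calculation behind such a meta-analysis, the following is a minimal sketch of pooling paired observations with the log response ratio and a DerSimonian-Laird random-effects mean; the example data are made up for illustration and are not the 606 paired observations analyzed in the study.

```python
# A minimal sketch of a meta-analysis on paired observations using the log
# response ratio (lnRR) and a DerSimonian-Laird random-effects mean. The
# example data below are hypothetical, not the study's 606 observations.
import numpy as np

def ln_rr(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Effect size and sampling variance of the log response ratio."""
    es = np.log(mean_t / mean_c)
    var = sd_t**2 / (n_t * mean_t**2) + sd_c**2 / (n_c * mean_c**2)
    return es, var

def random_effects_mean(es, var):
    """DerSimonian-Laird random-effects pooled estimate and its standard error."""
    w = 1.0 / var
    fixed_mean = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - fixed_mean) ** 2)
    df = len(es) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_star = 1.0 / (var + tau2)
    mu = np.sum(w_star * es) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mu, se

# Hypothetical pairs: 28-day strength with biochar (treatment) vs. control (MPa)
data = [  # mean_t, sd_t, n_t, mean_c, sd_c, n_c
    (42.0, 2.1, 3, 40.0, 2.0, 3),
    (38.5, 1.8, 3, 36.0, 1.9, 3),
    (45.2, 2.5, 3, 46.0, 2.4, 3),
]
es, var = map(np.array, zip(*[ln_rr(*row) for row in data]))
mu, se = random_effects_mean(es, var)
print(f"pooled effect: {(np.exp(mu) - 1) * 100:.1f}% change "
      f"(95% CI {(np.exp(mu - 1.96 * se) - 1) * 100:.1f}% "
      f"to {(np.exp(mu + 1.96 * se) - 1) * 100:.1f}%)")
```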

Keywords: biochar, Portland cement, construction, compressive strength, meta-analysis

Procedia PDF Downloads 69