Search results for: optimized geometric
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2146


586 Membrane Distillation Process Modeling: Dynamical Approach

Authors: Fadi Eleiwi, Taous Meriem Laleg-Kirati

Abstract:

This paper presents a complete dynamic model of a membrane distillation process. The model consists of two consistent dynamic sub-models: a 2D advection-diffusion equation for the whole process and a modified heat equation for the membrane itself. The complete model describes the temperature diffusion phenomenon across the feed, the membrane, the permeate containers and the boundary layers of the membrane. It gives an online and complete temperature profile for each point in the domain. It explains the heat conduction and convection mechanisms that take place inside the process in terms of mathematical parameters and justifies the process behavior during the transient and steady-state phases. The process is monitored for any sudden change in performance at any instant of time. In addition, the model assists in maintaining production rates as desired and gives recommendations during the membrane fabrication stages. System performance and parameters can be optimized and controlled using this complete dynamic model. The evolution of the membrane boundary temperature with time, the vapor mass transfer along the process, and the temperature difference between the membrane boundary layers are depicted and included. Simulations were performed with the complete model using real membrane specifications. The plots show consistency between the 2D advection-diffusion model, the expected behavior of the system, and the literature. The evolution of heat inside the membrane, from the transient response until the steady-state response is reached, is illustrated for fixed and varying times.
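For readers unfamiliar with the governing equation, a generic two-dimensional advection-diffusion equation of the kind used for such temperature modeling is sketched below; the notation (velocity components, thermal diffusivity) is assumed here for illustration, and the paper's exact source terms and boundary conditions are not reproduced:

$$\frac{\partial T}{\partial t} + u_x \frac{\partial T}{\partial x} + u_y \frac{\partial T}{\partial y} = \alpha \left( \frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} \right)$$

where T is the temperature field, (u_x, u_y) the flow velocity in the feed and permeate channels, and α the thermal diffusivity.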

Keywords: membrane distillation, dynamical modeling, advection-diffusion equation, thermal equilibrium, heat equation

Procedia PDF Downloads 268
585 Ultrasonic Extraction of Phenolics from Leaves of Shallots and Peels of Potatoes for Biofortification of Cheese

Authors: Lila Boulekbache-Makhlouf, Fatiha Brahmi

Abstract:

This study was carried out with the aim of enriching fresh cheese with food by-products, namely the leaves of shallots and the peels of potatoes. Firstly, the conditions for extracting the total polyphenols using ultrasound were optimized. Then, the total polyphenol (TPP) and flavonoid contents and the antioxidant activity were evaluated for the extracts obtained under the optimal parameters. In addition, we carried out physicochemical, microbiological and sensory analyses of the cheese produced. The maximum total polyphenol content of shallot leaves, 70.44 mg gallic acid equivalent (GAE)/g of dry matter (DM), was reached with 40% (v/v) ethanol, an extraction time of 90 min and a temperature of 10 °C, while the maximum TPP content of potato peels, 45.03 ± 4.16 mg GAE/g DM, was obtained using an ethanol/water mixture (40%, v/v), a time of 30 min and a temperature of 60 °C; the flavonoid contents were 13.99 and 7.52 quercetin equivalent (QE)/g DM, respectively. From the antioxidant tests, we deduced that the potato peels present a higher antioxidant power, with half-maximal inhibitory concentrations (IC50) of 125.42 ± 2.78 μg/mL for 2,2-diphenyl-1-picrylhydrazyl (DPPH), 87.21 ± 7.72 μg/mL for phosphomolybdate and 200.77 ± 13.38 μg/mL for iron chelation, compared with 204.29 ± 0.09, 45.85 ± 3.46 and 1004.10 ± 145.73 μg/mL, respectively, for shallot leaves. The results of the physicochemical analyses showed that the formulated cheese complied with standards. Microbiological analyses showed that the hygienic quality of the cheese produced was satisfactory. According to the sensory analysis, the experts liked the cheese enriched with the powder and pieces of shallot leaves.

Keywords: shallots leaves, potato peels, ultrasound extraction, phenolics, cheese

Procedia PDF Downloads 84
584 A Techno-Economic Simulation Model to Reveal the Relevance of Construction Process Impact Factors for External Thermal Insulation Composite System (ETICS)

Authors: Virgo Sulakatko

Abstract:

The reduction of the energy consumption of the built environment has been one of the topics tackled by the European Commission during the last decade. Increased energy efficiency requirements have increased the renovation rate of apartment buildings covered with an External Thermal Insulation Composite System (ETICS). Because of the fast and optimized application process, quality assurance depends to a large extent on the specific activities of the artisans and is often not controlled. The on-site degradation factors (DFs) have a technical influence on the façade and cause future costs for the owner. Besides thermal conductivity, the building envelope needs to ensure mechanical resistance and stability, fire, noise, corrosion and weather protection, and long-term durability. As the shortcomings of the construction phase become problematic after some years, the overall value of the renovation is reduced. Previous work on the subject has identified and rated the relevance of DFs to the technical requirements and developed a method to reveal the economic value of repair works. These future costs can be traded off against increased quality assurance during the construction process. The proposed framework describes the joint simulation of the technical importance and economic value of the on-site DFs of ETICS. The model provides new knowledge to improve resource allocation during the construction process by making it possible to identify and diminish the most relevant degradation factors and to increase the economic value for the owner.
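To illustrate the economic side of this trade-off, a minimal sketch is given below; it discounts the expected future repair cost caused by a degradation factor back to present value so that it can be compared against the cost of additional quality assurance during construction. All figures, names and the discount rate are hypothetical placeholders, not values from the study.

```python
# Minimal sketch: compare the present value of expected future repair costs
# caused by an on-site degradation factor with the cost of extra quality
# assurance during construction. All figures are hypothetical.

def present_value(cost: float, year: int, discount_rate: float) -> float:
    """Discount a single future cost back to today."""
    return cost / (1.0 + discount_rate) ** year

def expected_repair_pv(repair_cost, failure_probability, repair_year, discount_rate):
    """Expected, discounted cost of one future repair event."""
    return failure_probability * present_value(repair_cost, repair_year, discount_rate)

# Hypothetical example: a poorly executed base coat (DF) leads to a 30% chance
# of a 12,000 EUR facade repair after 15 years, at a 4% discount rate.
repair_pv = expected_repair_pv(12_000, 0.30, 15, 0.04)
extra_qa_cost = 1_500  # hypothetical cost of additional on-site supervision

print(f"Expected repair cost (present value): {repair_pv:,.0f} EUR")
print("Extra QA is worthwhile" if extra_qa_cost < repair_pv else "Extra QA not worthwhile")
```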

Keywords: ETICS, construction technology, construction management, life cycle costing

Procedia PDF Downloads 416
583 A Validated High-Performance Liquid Chromatography-UV Method for Determination of Malondialdehyde-Application to Study in Chronic Ciprofloxacin Treated Rats

Authors: Anil P. Dewani, Ravindra L. Bakal, Anil V. Chandewar

Abstract:

The present work demonstrates the applicability of high-performance liquid chromatography (HPLC) with UV detection for the determination of malondialdehyde, as the malondialdehyde-thiobarbituric acid complex (MDA-TBA), in vivo in rats. The HPLC-UV method for MDA-TBA was run in isocratic mode on a reverse-phase C18 column (250 mm × 4.6 mm) at a flow rate of 1.0 mL min⁻¹, followed by UV detection at 278 nm. The chromatographic conditions were optimized by varying the concentration and pH and then the percentage of organic phase; the optimal mobile phase consisted of a mixture of water (0.2% triethylamine, pH adjusted to 2.3 with ortho-phosphoric acid) and acetonitrile (80:20, v/v). The retention time of the MDA-TBA complex was 3.7 min. The developed method was sensitive: the limits of detection and quantification (LOD and LOQ) for the MDA-TBA complex, calculated from the standard deviation and slope of the calibration curve, were 110 ng/mL and 363 ng/mL, respectively. The method was linear for MDA spiked in plasma and subjected to derivatization at concentrations ranging from 100 to 1000 ng/mL. The precision of the developed method, measured in terms of relative standard deviations for intra-day and inter-day studies, was 1.6–5.0% and 1.9–3.6%, respectively. The HPLC method was applied to monitor MDA levels in rats subjected to chronic treatment with ciprofloxacin (CFL) (5 mg/kg/day) for 21 days. Results were compared with the findings in control-group rats. Mean peak areas of both study groups were subjected to an unpaired Student's t-test to find p-values. The p-value was < 0.001, indicating significant results and suggesting increased MDA levels in rats subjected to 21 days of chronic CFL treatment.
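Since the LOD and LOQ are stated to be derived from the standard deviation and slope of the calibration curve, a minimal sketch of the usual calculation (LOD = 3.3σ/slope, LOQ = 10σ/slope) is shown below; the concentrations and peak areas are hypothetical placeholders, not the study's calibration data.

```python
import numpy as np

# Minimal sketch of an LOD/LOQ calculation from a calibration curve.
# Concentrations (ng/mL) and peak areas below are hypothetical placeholders.
conc = np.array([100, 250, 500, 750, 1000], dtype=float)
area = np.array([1.1e4, 2.6e4, 5.2e4, 7.9e4, 10.4e4], dtype=float)

slope, intercept = np.polyfit(conc, area, 1)                      # linear calibration fit
residual_sd = np.std(area - (slope * conc + intercept), ddof=2)   # sigma of residuals

lod = 3.3 * residual_sd / slope    # limit of detection
loq = 10.0 * residual_sd / slope   # limit of quantification
print(f"LOD ~ {lod:.0f} ng/mL, LOQ ~ {loq:.0f} ng/mL")
```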

Keywords: MDA, TBA, ciprofloxacin, HPLC-UV

Procedia PDF Downloads 319
582 Development of an Interactive Display-Control Layout Design System for Trains Based on Train Drivers’ Mental Models

Authors: Hyeonkyeong Yang, Minseok Son, Taekbeom Yoo, Woojin Park

Abstract:

Human error is the most salient contributing factor to railway accidents. To reduce the frequency of human errors, many researchers and train designers have adopted ergonomic design principles for designing the display-control layout in the rail cab. A number of approaches exist for designing the display-control layout based on optimization methods. However, the ergonomically optimized layout design may not be the best design for train drivers, since the drivers have their own mental models based on their experiences. Consequently, the drivers may prefer the existing display-control layout design over the optimal design, and may even show better driving performance using the existing design than using the optimal design. Thus, in addition to ergonomic design principles, train drivers' mental models also need to be considered when designing the display-control layout in the rail cab. This paper developed an ergonomic assessment system for display-control layout design and an interactive layout design system that can generate design alternatives and calculate an ergonomic assessment score in real time. The design alternatives generated by the interactive layout design system may not include the optimal design from the ergonomics point of view. However, the system's strength is that it considers train drivers' mental models, which can help generate alternatives that are friendlier and easier to use for train drivers. Also, with the developed system, non-experts in ergonomics, such as train drivers, can refine the design alternatives and improve the ergonomic assessment score in real time.

Keywords: display-control layout design, interactive layout design system, mental model, train drivers

Procedia PDF Downloads 302
581 Protein Extraction by Enzyme-Assisted Extraction followed by Alkaline Extraction from Red Seaweed Eucheuma denticulatum (Spinosum) Used in Carrageenan Production

Authors: Alireza Naseri, Susan L. Holdt, Charlotte Jacobsen

Abstract:

In 2014, the global carrageenan production was 60,000 tons, with a value of US$ 626 million. From this number, it can be estimated that the total dried seaweed consumption for this production was at least 300,000 tons/year. The protein content of these types of seaweed is 5–25%. If just half of this total amount of protein could be extracted, 18,000 tons/year of a high-value protein product would be obtained. The overall aim of this study was to develop a technology that ensures further utilization of seaweed that, at present, is used only as a raw material for carrageenan production in a single extraction. More specifically, proteins should be extracted from the seaweed either before or after extraction of the carrageenan, with a focus on maintaining the quality of the carrageenan as the main product. Different mechanical, chemical and enzymatic technologies were evaluated. The optimized process was implemented at lab scale and, based on its results, new experiments were carried out at pilot and larger scales. In order to calculate the efficiency of the new upstream multi-extraction process, the protein content was tested before and after extraction. After this step, the carrageenan was extracted, and the carrageenan content and the effect of the extraction on yield were evaluated. The functionality and quality of the carrageenan were measured based on rheological parameters. The results showed that, by using the new multi-extraction process (patent submitted), it is possible to extract almost 50% of the total protein without any negative impact on the carrageenan quality. Moreover, compared to the routine carrageenan extraction process, the new multi-extraction process could increase the yield of carrageenan, and rheological properties such as the gel strength of the final carrageenan showed a promising improvement. The extracted protein has initially been screened as a plant protein source in typical food applications. Further work will be carried out in order to improve properties such as color, solubility, and taste.

Keywords: carrageenan, extraction, protein, seaweed

Procedia PDF Downloads 282
580 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation

Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski

Abstract:

In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio with the maximum increase of the mean return in proportion to the increase of the risk measure, compared to risk-free investments. In the classical model, following Markowitz, the risk is measured by the variance, thus representing Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization. In particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints in the case of real-life financial decisions based on several thousand scenarios, thus decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on the inverse risk-reward ratio minimization and by taking advantage of LP duality. In the introduced LP model, the number of structural constraints is proportional to the number of instruments, so the simplex method's efficiency is not seriously affected by the number of scenarios, thereby guaranteeing easy solvability. Moreover, we show that, under a natural restriction on the target value, the MAD risk-reward ratio optimization is consistent with the second-order stochastic dominance rules.
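As a rough illustration of the scenario-based quantities involved (the notation below is generic and assumed here, not quoted from the paper), consider T equally probable scenarios with instrument returns r_{jt}, portfolio weights x_j, mean returns μ_j, and risk-free return r_0. The MAD measure and the reward-risk ratio read

$$\delta(x) = \frac{1}{T}\sum_{t=1}^{T}\Big|\sum_{j}(r_{jt}-\mu_j)\,x_j\Big|, \qquad \max_{x}\ \frac{\sum_j \mu_j x_j - r_0}{\delta(x)},$$

and one textbook way to linearize the inverse ratio minimization is a Charnes-Cooper-type rescaling $\tilde{x}_j = z\,x_j$ with constraints $\sum_j \mu_j \tilde{x}_j - r_0 z = 1$ and $\sum_j \tilde{x}_j = z$, minimizing $\frac{1}{T}\sum_t d_t$ subject to the deviation constraints $d_t \ge \pm \sum_j (r_{jt}-\mu_j)\tilde{x}_j$. This sketch omits the dual reformulation with instrument-proportional constraints that the paper actually exploits.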

Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming

Procedia PDF Downloads 400
579 A Low-Power Two-Stage Seismic Sensor Scheme for Earthquake Early Warning System

Authors: Arvind Srivastav, Tarun Kanti Bhattacharyya

Abstract:

The north-eastern, Himalayan, and Eastern Ghats belts of India comprise earthquake-prone, remote, and hilly terrains. Earthquakes have caused enormous damage in these regions in the past. A wireless sensor network based earthquake early warning system (EEWS) is being developed to mitigate the damage caused by earthquakes. It consists of sensor nodes, distributed over the region, that perform majority voting of the output of the seismic sensors in the vicinity and relay a message to a base station to alert the residents when an earthquake is detected. At the heart of the EEWS is a low-power two-stage seismic sensor that continuously tracks seismic events from the incoming three-axis accelerometer signal in the first stage and, in the presence of a seismic event, triggers the second-stage P-wave detector, which detects the onset of the P-wave in an earthquake event. The parameters of the P-wave detector have been optimized for minimizing detection time and maximizing the accuracy of detection. The working of the sensor scheme has been verified with data from seven earthquakes retrieved from IRIS. In all test cases, the scheme detected the onset of the P-wave accurately. Also, it has been established that the P-wave onset detection time reduces linearly with the sampling rate. This has been verified with test data: the detection time for data sampled at 10 Hz was around 2 seconds, which reduced to 0.3 seconds for data sampled at 100 Hz.
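The keywords mention an STA/LTA-type event detector; a minimal sketch of such a first-stage energy trigger is given below. The window lengths, threshold, and synthetic test signal are illustrative assumptions, not the optimized parameters reported in the paper.

```python
import numpy as np

def sta_lta_trigger(signal, fs, sta_win=0.5, lta_win=10.0, threshold=3.0):
    """Classic STA/LTA event trigger: flag samples where the ratio of the
    short-term average to the long-term average of signal energy exceeds a
    threshold. Window lengths (s) and threshold are illustrative only."""
    energy = signal ** 2
    n_sta = int(sta_win * fs)
    n_lta = int(lta_win * fs)
    sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)   # avoid division by zero
    return ratio > threshold               # boolean trigger mask

# Synthetic check: noise with a burst injected after 60 s, sampled at 100 Hz
fs = 100
t = np.arange(0, 120, 1 / fs)
trace = np.random.normal(0, 1, t.size)
trace[60 * fs:62 * fs] += 10 * np.sin(2 * np.pi * 5 * t[:2 * fs])
print("event detected:", sta_lta_trigger(trace, fs).any())
```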

Keywords: earthquake early warning system, EEWS, STA/LTA, polarization, wavelet, event detector, P-wave detector

Procedia PDF Downloads 174
578 Amperometric Biosensor for Glucose Determination Based on a Recombinant Mn Peroxidase from Corn Cross-linked to a Gold Electrode

Authors: Anahita Izadyar, My Ni Van, Kayleigh Amber Rodriguez, Ilwoo Seok, Elizabeth E. Hood

Abstract:

Using a recombinant enzyme derived from corn and a simple modification, we fabricated a facile, fast, and cost-effective biosensor to measure glucose. The Nafion/plant-produced Mn peroxidase (PPMP)-glucose oxidase (GOx)-bovine serum albumin (BSA)/Au electrode showed an excellent amperometric response for detecting glucose. This biosensor is capable of responding to a wide range of glucose concentrations, 20.0 µM to 15.0 mM, and has a detection limit (LOD) of 2.90 µM. The reproducibility of the response, assessed using six electrodes, is also very good and indicates the capability of this biosensor to detect glucose over a wide concentration range of 3.10 ± 0.19 µM to 13.2 ± 1.8 mM. The selectivity of this electrode was investigated in an optimized experimental solution containing 10% diet green tea with citrus, which contains ascorbic acid (AA) and citric acid (CA), over a wide glucose concentration range of 0.02 to 14.0 mM, with an LOD of 3.10 µM. Reproducibility was also investigated using four electrodes in this sample and showed notable results over the wide concentration range of 3.35 ± 0.45 µM to 13.0 ± 0.81 mM. We also evaluated this biosensor using other voltammetry methods: linear sweep voltammetry (LSV) showed a wide glucose detection range of 0.10–15.0 mM with a detection limit of 19.5 µM. The strengths of this enzyme biosensor were its simplicity, wide linear ranges, sensitivity, selectivity, and low limits of detection. We expect that the modified biosensor has the potential for monitoring various biofluids.

Keywords: plant-produced manganese peroxidase, enzyme-based biosensors, glucose, modified gold electrode, glucose oxidase

Procedia PDF Downloads 138
577 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine

Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski

Abstract:

The paper presents a description of the geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed internal combustion Diesel engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output equal to 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and the intake and exhaust ports to optimize the charge exchange process in the engine. The calculations determined a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports, as well as at a CA value of 280°, for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure and streamline contours were generated in important cross-sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. To calculate the swirl ratio [-], the tangential velocity v [m/s] and then the angular velocity ω [rad/s] of the charge were first calculated as the mean over the mesh elements. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine. Acknowledgement: This work has been realized in cooperation with the Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
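A minimal sketch of how such a swirl ratio can be post-processed from per-cell CFD results is shown below, assuming a mass-weighted average of the charge angular velocity divided by the crankshaft angular velocity; the variable names, the averaging choice, and the sample numbers are assumptions for illustration, not the solver's exact post-processing.

```python
import numpy as np

def swirl_ratio(mass, v_tangential, radius, rpm_crank):
    """Mass-weighted swirl ratio: charge angular velocity / crank angular velocity.
    mass, v_tangential and radius are per-cell arrays taken from the CFD solution."""
    omega_cells = v_tangential / radius                        # rad/s per cell
    omega_charge = np.sum(mass * omega_cells) / np.sum(mass)   # mass-weighted mean
    omega_crank = 2 * np.pi * rpm_crank / 60.0
    return omega_charge / omega_crank

# Hypothetical per-cell data for illustration only
m = np.array([1.0e-6, 1.2e-6, 0.8e-6])   # kg
vt = np.array([12.0, 15.0, 9.0])          # m/s
r = np.array([0.02, 0.03, 0.025])         # m
print(f"swirl ratio ~ {swirl_ratio(m, vt, r, 3800):.2f}")
```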

Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine

Procedia PDF Downloads 196
576 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

Ortho-rectification is the process of geometrically correcting an aerial image such that the scale is uniform. The ortho-image formed by the process is corrected for lens distortion, topographic relief, and camera tilt. It can be used to measure true distances, because it is an accurate representation of the Earth's surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired by the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified in the same coordinate system as the existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can then be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters of the video frames with respect to each other; (4) finding the absolute exterior orientation parameters, using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric mapping, which can be compared with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to test our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the collected data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the six control points measured by a GPS receiver, is between 3 and 5 meters.
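The orientation steps are commonly based on the photogrammetric collinearity equations; a generic form is sketched below for reference (the symbols are standard photogrammetric notation assumed here, not quoted from the paper). With focal length f, perspective centre (X₀, Y₀, Z₀), rotation matrix elements r_{ij}, and ground point (X, Y, Z), the image coordinates (x, y) satisfy

$$x = -f\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}, \qquad y = -f\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}.$$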

Keywords: geo-referencing, ortho-rectification, video frame, self-calibration

Procedia PDF Downloads 476
575 Development of a Laboratory Laser-Produced Plasma “Water Window” X-Ray Source for Radiobiology Experiments

Authors: Daniel Adjei, Mesfin Getachew Ayele, Przemyslaw Wachulak, Andrzej Bartnik, Luděk Vyšín, Henryk Fiedorowicz, Inam Ul Ahad, Lukasz Wegrzynski, Anna Wiechecka, Janusz Lekki, Wojciech M. Kwiatek

Abstract:

Laser-produced plasma light sources emitting high-intensity pulses of X-rays and delivering high doses are useful for understanding the mechanisms of high-dose effects on biological samples. In this study, a desk-top laser plasma soft X-ray source developed for radiobiology research is presented. The source is based on a double-stream gas puff target irradiated with a commercial Nd:YAG laser (EKSPLA), which generates laser pulses of 4 ns duration and energy up to 800 mJ at a 10 Hz repetition rate. The source has been optimized for maximum emission in the "water window" wavelength range from 2.3 nm to 4.4 nm by using pure gases (argon, nitrogen and krypton) and spectral filtering. Results of the source characterization measurements and dosimetry of the produced soft X-ray radiation are shown and discussed. The high brightness of the laser-produced plasma soft X-ray source and the low penetration depth of the produced X-ray radiation in biological specimens allow a high dose to be delivered to the specimen: over 28 Gy per shot, and 280 Gy/s at the maximum repetition rate of the laser system. The source has a unique capability for irradiation of cells with a high pulse dose both in vacuum and in a helium environment. The ability of the source to induce DNA double- and single-strand breaks will be discussed.

Keywords: laser-produced plasma, soft X-rays, radiobiology experiments, dosimetry

Procedia PDF Downloads 584
574 Al-Ti-W Metallic Glass Thin Films Deposited by Magnetron Sputtering Technology to Protect Steel Against Hydrogen Embrittlement

Authors: Issam Lakdhar, Akram Alhussein, Juan Creus

Abstract:

With the huge increase in world energy consumption, researchers are working to find alternative sources of energy to replace fossil fuels, which cause many environmental problems such as the production of greenhouse gases. Hydrogen is considered a green energy source whose combustion does not cause environmental pollution. The transport and storage of the gas molecules, or of other products containing this smallest chemical element, in metallic structures (pipelines, tanks) are crucial issues. The dissolution and permeation of hydrogen into the metal lattice lead to the formation of hydride phases and the embrittlement of structures. To protect metallic structures, a surface treatment could be a good solution. Among the different techniques, magnetron sputtering is used to elaborate micrometric coatings capable of slowing down or stopping hydrogen permeation. In the plasma environment, the deposition parameters of new Al-Ti-W thin-film metallic glasses were optimized and controlled in order to obtain a hydrogen barrier. Many characterizations were carried out (SEM, XRD, nano-indentation, etc.) to control the composition and understand the influence of the film microstructure and chemical composition on hydrogen permeation through the coatings. The coating performance was evaluated under two hydrogen production methods: chemical and electrochemical (cathodic protection) techniques. The quantity of hydrogen absorbed was determined experimentally using the Thermal Desorption Spectroscopy (TDS) method. An ideal Al-Ti-W (ATW) thin film was developed and showed excellent behavior against the diffusion of hydrogen.

Keywords: thin films, hydrogen, PVD, plasma technology, electrochemical properties

Procedia PDF Downloads 180
573 Generating 3D Battery Cathode Microstructures using Gaussian Mixture Models and Pix2Pix

Authors: Wesley Teskey, Vedran Glavas, Julian Wegener

Abstract:

Generating battery cathode microstructures is an important area of research, given the proliferation of automotive batteries. Currently, finite element analysis (FEA) is often used to simulate battery cathode microstructures before physical batteries can be manufactured and tested to verify the simulation results. Unfortunately, a key drawback of FEA is that this method of simulation is very slow in terms of computational runtime. Generative AI offers the key advantage of speed when compared to FEA, and because of this, generative AI is capable of evaluating very large numbers of candidate microstructures. Given AI-generated candidate microstructures, a subset of the promising microstructures can be selected for further validation using FEA. Leveraging the speed advantage of AI allows for a better final microstructural selection, because high speed allows for the evaluation of many more candidate microstructures. In the approach presented, 3D battery cathode candidate microstructures are generated using Gaussian Mixture Models (GMMs) and pix2pix. The approach first uses GMMs to generate a population of spheres (representing the "active material" of the cathode). Once spheres have been sampled from the GMM, they are placed within a microstructure. Subsequently, pix2pix sweeps over the 3D microstructure iteratively, slice by slice, and adds details to the microstructure to determine which portions of the microstructure will become electrolyte and which will become binder. In this manner, each subsequent slice of the microstructure is evaluated using pix2pix, where the inputs into pix2pix are the previously processed layers of the microstructure. By feeding into pix2pix previously fully processed layers of the microstructure, pix2pix can be used to ensure candidate microstructures represent a realistic physical reality. More specifically, in order for the microstructure to represent a realistic physical reality, the locations of electrolyte and binder in each layer of the microstructure must reasonably match the locations of electrolyte and binder in previous layers to ensure geometric continuity. Using the above-outlined approach, a 10x to 100x speed increase was possible when generating candidate microstructures using AI compared to using an FEA-only approach for this task. A key metric for evaluating microstructures was the specific power value that the microstructures would be able to produce. The best generative AI result obtained was a 12% increase in specific power for a candidate microstructure compared to what an FEA-only approach was capable of producing. This 12% increase in specific power was verified by FEA simulation.
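To make the first stage concrete, a minimal sketch of sampling sphere parameters (centre and radius) from a fitted Gaussian mixture and voxelizing them into a 3D active-material mask is shown below; the feature choice, grid size, and use of scikit-learn's GaussianMixture are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Minimal sketch: fit a GMM to (x, y, z, radius) features of observed active-material
# spheres, sample new spheres from it, and voxelize them into a 3D occupancy mask.
rng = np.random.default_rng(0)
observed = np.column_stack([
    rng.uniform(0, 1, 500),        # x (normalized units)
    rng.uniform(0, 1, 500),        # y
    rng.uniform(0, 1, 500),        # z
    rng.normal(0.05, 0.01, 500),   # radius (placeholder statistics)
])

gmm = GaussianMixture(n_components=4, random_state=0).fit(observed)
samples, _ = gmm.sample(200)       # 200 candidate spheres

grid = np.zeros((64, 64, 64), dtype=bool)
coords = np.stack(np.meshgrid(*[np.linspace(0, 1, 64)] * 3, indexing="ij"), axis=-1)
for x, y, z, r in samples:
    grid |= np.linalg.norm(coords - np.array([x, y, z]), axis=-1) <= max(r, 0.0)

print(f"active-material volume fraction: {grid.mean():.2f}")
# The remaining voxels would then be assigned to electrolyte or binder,
# e.g. slice by slice with a pix2pix-style image-to-image model.
```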

Keywords: finite element analysis, Gaussian mixture models, generative design, Pix2Pix, structural design

Procedia PDF Downloads 102
572 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model

Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: During an Interventional Radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis and ulceration to appear. In order to prevent these deterministic effects, an accurate calculation of the patient's skin-dose mapping is essential. For most machines, the Dose Area Product (DAP) and the fluoroscopy time are the only information available to the operator. These two parameters are a very poor indicator of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient's skin. If a critical dose is exceeded, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: Two series of comparisons of the skin-dose mapping of our mathematical model with clinical studies were performed: 1. First, clinical tests were performed on patient phantoms. Gafchromic films were placed on the table of the IR machine under PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experiments were performed, constituting a total of 34 acquisition incidences covering all possible exposure configurations. 2. Second, clinical trials were launched on real patients during real Chronic Total Occlusion (CTO) procedures, for a total of 80 cases. Gafchromic films were placed at the back of the patients. We compared the dose values, as well as the distribution and the shape of the irradiation fields, between the skin-dose mapping of our mathematical model and the Gafchromic films. Results: The comparison between the dose values shows a difference of less than 15%. Moreover, our model shows a very good geometric accuracy: all fields have the same shape, size and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool to warn physicians when a high radiation dose is reached. Thus, deterministic effects can be avoided.

Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping

Procedia PDF Downloads 136
571 Enhanced Solar-Driven Evaporation Process via F-Mwcnts/Pvdf Photothermal Membrane for Forward Osmosis Draw Solution Recovery

Authors: Ayat N. El-Shazly, Dina Magdy Abdo, Hamdy Maamoun Abdel-Ghafar, Xiangju Song, Heqing Jiang

Abstract:

Product water recovery and draw solution (DS) reuse constitute the most energy-intensive stage in forward osmosis (FO) technology. Sucrose solution is the most suitable DS for FO applications in food and beverages. However, sucrose DS recovery by conventional pressure-driven or thermal-driven concentration techniques consumes much energy. Herein, we developed a spontaneous and sustainable solar-driven evaporation process based on a photothermal membrane for the concentration and recovery of sucrose solution. The photothermal membrane is composed of a functionalized multi-walled carbon nanotube (f-MWCNT) photothermal layer on a hydrophilic polyvinylidene fluoride (PVDF) substrate. The f-MWCNT photothermal layer, with its rough surface and interconnected network structure, not only improves the light-harvesting and light-to-heat conversion performance but also facilitates the transport of water molecules. The hydrophilic PVDF substrate promotes the rapid transport of water, providing an adequate water supply to the photothermal layer. As a result, the optimized f-MWCNT/PVDF photothermal membrane exhibits an excellent light absorption of 95% and a high surface temperature of 74 °C at 1 kW m⁻². Besides, it achieves an evaporation rate of 1.17 kg m⁻² h⁻¹ for a 5% (w/v) sucrose solution, which is about 5 times higher than that of natural evaporation. The designed photothermal evaporation process is capable of concentrating sucrose solution efficiently from 5% to 75% (w/v), which has great potential in the FO process and in juice concentration.

Keywords: solar, photothermal, membrane, MWCNT

Procedia PDF Downloads 98
570 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine EPB TBM: A Case Study L3 Guadalajara Metro Line (Mexico)

Authors: Silvia Arrate, Waldo Salud, Eloy París

Abstract:

The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming maintenance stops, and keeping an optimum stock of spare parts during the evolution of the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The remarkable evolution of data science in recent years makes it possible to apply it to the analysis of the key and most critical parameters related to the machinery, with the purpose of knowing how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility of using Specific Energy versus data science applied to parameters such as torque, penetration, and contact force, among others, is developed to predict the behavior and status of the cutting tools. The results obtained through both techniques are analyzed and verified as a function of the wear and the field situations observed in the excavation, in order to determine their effectiveness in terms of predictive capacity. In conclusion, the possibilities and improvements offered by the application of digital tools and the programming of calculation algorithms for the analysis of the wear of cutting head elements, compared to purely empirical methods, allow early detection of possible damage to cutting tools, which is reflected in the optimization of excavation performance and a significant improvement in costs and deadlines.
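For reference, one widely used definition of specific energy for rotary excavation (after Teale) combines a thrust term and a rotational term; the symbols below are generic and assumed here, and may differ from the definition actually used on the project:

$$SE = \frac{F}{A} + \frac{2\pi\,N\,T}{A\,p}$$

where F is the thrust force, A the excavated cross-sectional area, N the cutterhead rotational speed, T the torque, and p the penetration rate.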

Keywords: cutting tools, data science, prediction, TBM, wear

Procedia PDF Downloads 44
569 Characterization of 2,4,6-Trinitrotoluene (Tnt)-Metabolizing Bacillus Cereus Sp TUHP2 Isolated from TNT-Polluted Soils in the Vellore District, Tamilnadu, India

Authors: S. Hannah Elizabeth, A. Panneerselvam

Abstract:

Objective: The main objective was to evaluate the degradative properties of Bacillus cereus sp. TUHP2 isolated from TNT-polluted soils in the Vellore District, Tamil Nadu, India. Methods: Among the three bacterial genera isolated from different soil samples, one potent TNT-degrading strain, Bacillus cereus sp. TUHP2, was identified. The morphological, physiological and biochemical properties of the strain were confirmed by conventional methods, and genotypic characterization was carried out using 16S rDNA partial gene amplification and sequencing. The breakdown by-products of TNT in the extract were determined by gas chromatography-mass spectrometry (GC-MS). Supernatant samples from the broth, studied at 24 h intervals, were analyzed by HPLC, and the effects of various nutritional and environmental factors were analyzed and optimized for the isolate. Results: Out of the three isolates, one strain, TUHP2, was found to degrade TNT efficiently and belonged to the genus Bacillus. 16S rDNA gene sequence analysis showed the highest homology (98%) with Bacillus cereus, and the strain was designated Bacillus cereus sp. TUHP2. Based on the energies of the predicted models, the secondary structure predicted by minimum free energy (MFE) showed the more stable structure with a minimum energy. Products of TNT transformation produced a colour change in the medium during cultivation. TNT derivatives such as 2HADNT and 4HADNT were detected by HPLC, and 2ADNT and 4ADNT by GC-MS analysis. Conclusion: Hence, this study presents clear evidence for the biodegradation of TNT by strain Bacillus cereus sp. TUHP2.

Keywords: bioremediation, biodegradation, biotransformation, sequencing

Procedia PDF Downloads 458
568 Chitosan Coated Liposome Incorporated Cyanobacterial Pigment for Nasal Administration in the Brain Stroke

Authors: Kyou Hee Shim, Hwa Sung Shin

Abstract:

When a thrombolysis agent is administered to treat ischemic stroke, excessive reactive oxygen species are generated due to the sudden provision of oxygen, and secondary damage such as cell necrosis occurs. Thus, it is necessary to administer an adjuvant along with the thrombolysis agent to protect the tissue and reduce the damage. As cerebral blood vessels have a specific structure called the blood-brain barrier (BBB), it is not easy to transfer substances from the blood to the tissue. Therefore, the development of a drug carrier is required to increase the efficiency of drug delivery to brain tissue. In this study, a cyanobacterial pigment from blue-green algae, known for its neuroprotective as well as antioxidant effects, was administered nasally to bypass the BBB. In order to deliver the cyanobacterial pigment efficiently, nano-sized liposomes were used as the carrier. The liposomes were coated with positively charged chitosan, since negative residues are present at the nasal mucosa, the first gateway of nasal administration. The characteristics of the liposomes, including morphology, size and zeta potential, were analyzed by transmission electron microscopy (TEM) and a zeta analyzer. Cytotoxicity tests showed that the liposomes were not harmful. Also, when the drug was administered to an ischemic stroke animal model, we confirmed that the pharmacological effect of the pigment delivered by the chitosan-coated liposomes was enhanced compared to that of the non-coated liposomes. Consequently, chitosan-coated liposomes could be considered an optimized drug delivery system for the treatment of acute ischemic stroke.

Keywords: ischemic stroke, cyanobacterial pigment, liposome, chitosan, nasal administration

Procedia PDF Downloads 222
567 Design and Implementation of Low-code Model-building Methods

Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu

Abstract:

This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency but also enhances the flexibility of algorithm integration and simplifies the model deployment process. The core strength of this method lies in its ease of use and efficiency. Users do not need a deep programming background and can complete the design and implementation of complex models with simple drag-and-drop operations. This feature greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the models. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method uses computing resources more efficiently and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability and can adapt to the needs of different application scenarios.

Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment

Procedia PDF Downloads 19
566 Introduction to Multi-Agent Deep Deterministic Policy Gradient

Authors: Xu Jie

Abstract:

As a key network security method, cryptographic services must fully cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. Their complexity and dynamics also make it difficult for traditional static security policies to cope with the ever-changing cyber threat environment. Traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
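As background on the multi-agent method named in the title, the sketch below illustrates the core structure of a MADDPG-style learner: decentralized actors that map local observations to actions, and a centralized critic that scores the joint observation-action vector during training. The network sizes, the use of PyTorch, and the scheduling-specific state/action encoding are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Decentralized actor: maps one agent's local observation to its action."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic: scores the joint observations and joint actions."""
    def __init__(self, n_agents, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_agents * (obs_dim + act_dim), 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, all_obs, all_actions):
        return self.net(torch.cat([all_obs, all_actions], dim=-1))

# Toy forward pass: 3 scheduling agents, each observing an 8-dim local state
n_agents, obs_dim, act_dim = 3, 8, 2
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralCritic(n_agents, obs_dim, act_dim)

obs = torch.randn(n_agents, obs_dim)
actions = torch.stack([actor(o) for actor, o in zip(actors, obs)])
q_value = critic(obs.flatten(), actions.flatten())
print("centralized Q estimate:", q_value.item())
```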

Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents

Procedia PDF Downloads 14
565 Fabrication and Characterisation of Additive Manufactured Ti-6Al-4V Parts by Laser Powder Bed Fusion Technique

Authors: Norica Godja, Andreas Schindel, Luka Payrits, Zsolt Pasztor, Bálint Hegedüs, Petr Homola, Jan Horňas, Jiří Běhal, Roman Ruzek, Martin Holzleitner, Sascha Senck

Abstract:

In order to reduce fuel consumption and CO₂ emissions in the aviation sector, innovative solutions are being sought to reduce the weight of aircraft, including additive manufacturing (AM). Of particular importance are the excellent mechanical properties that are required for aircraft structures. Ti6Al4V alloys, with their high mechanical properties in relation to weight, can reduce the weight of aircraft structures compared to structures made of steel and aluminium. Currently, conventional processes such as casting and CNC machining are used to obtain the desired structures, resulting in high raw material removal, which in turn leads to higher costs and impacts the environment. Additive manufacturing (AM) offers advantages in terms of weight, lead time, design, and functionality and enables the realisation of alternative geometric shapes with high mechanical properties. However, there are currently technological shortcomings that have prevented AM from being approved for structural components with high safety requirements. An assessment of damage tolerance for AM parts is required, and quality control needs to be improved. Pores and other defects cannot be completely avoided at present, but they should be kept to a minimum during manufacture. The mechanical properties of the manufactured parts can be further improved by various treatments. The influence of different treatment methods (heat treatment, CNC milling, electropolishing, chemical polishing) and operating parameters was investigated by scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM/EDX), X-ray diffraction (XRD), electron backscatter diffraction (EBSD) and measurements with a focused ion beam (FIB), taking into account the surface roughness, possible anomalies in the chemical composition of the surface, and possible cracks. The results of the characterisation of the constructed and treated samples are discussed and presented in this paper. These results were generated within the framework of the 3TANIUM project, which is financed by the EU under contract number 101007830.

Keywords: Ti6Al4V alloys, laser powder bed fusion, damage tolerance, heat treatment, electropolishing, potential cracking

Procedia PDF Downloads 80
564 Optimal Management of Forest Stands under Wind Risk in Czech Republic

Authors: Zohreh Mohammadi, Jan Kaspar, Peter Lohmander, Robert Marusak, Harald Vacik, Ljusk Ola Eriksson

Abstract:

Storms are important damaging agents in European forest ecosystems. In recent decades, significant economic losses in European forestry have occurred due to storms. This study investigates the problem of optimal harvest planning when forest stands risk being felled by storms. One of the most applicable mathematical methods being used to optimize forest management is stochastic dynamic programming (SDP). This method belongs to the class of adaptive optimization. Sequential decisions, such as harvest decisions, can be optimized based on sequential information about events that cannot be perfectly predicted, such as future storms and the future states of wind protection from other forest stands. In this paper, stochastic dynamic programming is used to maximize the expected present value of the profits from an area consisting of several forest stands. The region of analysis is the Czech Republic. The harvest decisions in a particular time period should be taken simultaneously in all neighboring stands, because different stands protect each other from possible winds. The optimal harvest age of a particular stand is a function of wind speed and of the different wind protection effects. The optimal harvest age often decreases with wind speed, but it cannot be determined for one stand at a time. When we consider a particular stand, this stand also protects other stands. Furthermore, the particular stand is protected by neighboring stands. In some forest stands, it may even be rational to increase the harvest age under the influence of stronger winds, in order to protect more valuable stands in the neighborhood. It is important to integrate wind risk into forestry decision-making.
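To illustrate the kind of recursion involved, a deliberately simplified single-stand stochastic dynamic program is sketched below; the storm probability, timber values, and discount factor are hypothetical, and the paper's model additionally couples neighboring stands through wind protection.

```python
# Simplified single-stand SDP sketch: each period, either harvest now or wait,
# where waiting risks a storm that fells the stand at a salvage value.
# All numbers are hypothetical illustrations.

ages = list(range(0, 121, 10))                        # stand age grid (years)
value_if_harvested = {a: 50 + 4 * a for a in ages}    # timber value by age
salvage_fraction = 0.4                                # value recovered after a storm
p_storm = 0.05                                        # per-period storm probability
discount = 0.97 ** 10                                 # 10-year period discount factor

def optimal_policy():
    V = {a: 0.0 for a in ages}                        # value function
    for _ in range(200):                              # value iteration to convergence
        V_new = {}
        for a in ages:
            harvest_now = value_if_harvested[a]
            a_next = min(a + 10, ages[-1])
            wait = discount * (
                p_storm * salvage_fraction * value_if_harvested[a_next]
                + (1 - p_storm) * V[a_next]
            )
            V_new[a] = max(harvest_now, wait)
        V = V_new
    return {a: ("harvest" if value_if_harvested[a] >= V[a] else "wait") for a in ages}

print(optimal_policy())
```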

Keywords: Czech Republic, forest stands, stochastic dynamic programming, wind risk

Procedia PDF Downloads 140
563 Check Red Blood Cells Concentrations of a Blood Sample by Using Photoconductive Antenna

Authors: Ahmed Banda, Alaa Maghrabi, Aiman Fakieh

Abstract:

The terahertz (THz) range lies between 0.1 and 10 THz. THz radiation can be generated and detected through different techniques. One of the most familiar techniques uses a photoconductive antenna (PCA). The generation of THz radiation at a PCA involves applying a femtosecond laser pump and a DC voltage difference. A photocurrent is generated at the PCA, whose value is affected by different parameters (e.g., dielectric properties, DC voltage difference and incident power of the laser pump). THz radiation is used for biomedical applications. However, different biomedical fields need new technologies to meet patients' needs (e.g., blood-related conditions). In this work, a novel method to check the red blood cell (RBC) concentration of a blood sample using a PCA is presented. RBCs constitute 44% of the total blood volume. RBCs contain hemoglobin, which transfers oxygen from the lungs to the body's organs and then returns to the lungs carrying carbon dioxide, which the body gets rid of in the process of exhalation. The configuration has been simulated and optimized using COMSOL Multiphysics. Variation of the RBC concentration affects the dielectric properties of the sample (e.g., the relative permittivity of the RBCs in the blood sample). The effects of four blood samples (with different concentrations of RBCs) on the photocurrent value have been tested. The photocurrent peak value and the RBC concentration are inversely related to each other due to the change in the dielectric properties of the RBCs. It was noticed that the photocurrent peak value dropped from 162.99 nA to 108.66 nA when the RBC concentration rose from 0% to 100% of the blood sample. The optimization of this method can help launch new products for diagnosing blood-related conditions (e.g., anemia and leukemia). The resultant electric field from the DC components cannot be used to count the RBCs of the blood sample.

Keywords: biomedical applications, photoconductive antenna, photocurrent, red blood cells, THz radiation

Procedia PDF Downloads 198
562 Estimation and Removal of Chlorophenolic Compounds from Paper Mill Waste Water by Electrochemical Treatment

Authors: R. Sharma, S. Kumar, C. Sharma

Abstract:

A number of toxic chlorophenolic compounds are formed during pulp bleaching. The nature and concentration of these chlorophenolic compounds largely depend upon the amount and nature of the bleaching chemicals used. These compounds are highly recalcitrant and difficult to remove but are partially removed by the biochemical treatment processes adopted by the paper industry. Identification and estimation of these chlorophenolic compounds have been carried out in the primary and secondary clarified effluents from the paper mill by GC-MS. Twenty-six chlorophenolic compounds have been identified and estimated in the paper mill waste waters. Electrochemical treatment is an efficient method for the oxidation of pollutants and has successfully been used to treat textile and oil wastewater. Electrochemical treatment using a less expensive anode material, stainless steel electrodes, has been tried to study their removal. The electrochemical assembly comprised a DC power supply, a magnetic stirrer and stainless steel (316 L) electrodes. The operating conditions were optimized, and the treatment was performed under the optimized conditions. Results indicate that 68.7% and 83.8% of the chlorophenolic compounds are removed during 2 h of electrochemical treatment from the primary and secondary clarified effluents, respectively. Further, there is a reduction of 65.1%, 60% and 92.6% in COD, AOX and color, respectively, for the primary clarified effluent, and of 83.8%, 75.9% and 96.8% in COD, AOX and color, respectively, for the secondary clarified effluent. Electrochemical treatment has also been found to significantly increase the biodegradability index of the wastewater because of the conversion of the non-biodegradable fraction into a biodegradable fraction. Thus, electrochemical treatment is an efficient method for the degradation of chlorophenolic compounds and the removal of color, AOX and other recalcitrant organic matter present in paper mill wastewater.

Keywords: chlorophenolics, effluent, electrochemical treatment, wastewater

Procedia PDF Downloads 382
561 The application of Gel Dosimeters and Comparison with other Dosimeters in Radiotherapy: A Literature Review

Authors: Sujan Mahamud

Abstract:

Purpose: A major challenge in radiotherapy treatment is to deliver a precise dose of radiation to the tumor with a minimum dose to the surrounding healthy tissues. Recently, gel dosimetry has emerged as a powerful tool to measure three-dimensional (3D) dose distributions for complex delivery verification and quality assurance. These dosimeters act both as a phantom and as a detector, thus confirming the versatility of the dosimetry technique. The aim of the study is to review the application of gel dosimeters in radiotherapy and to compare them with 1D and 2D dosimeters. Methods and Materials: The study is based on the gel dosimetry literature. Secondary data and images have been collected from different sources such as guidelines, books, and the internet. Result: Analysis, verification, and comparison of data from the treatment planning system (TPS) show that the gel dosimeter is a powerful tool for measuring three-dimensional (3D) dose distributions. The TPS-calculated data were in very good agreement with the dose distribution measured by the ferrous gel. The overall uncertainty in the ferrous-gel dose determination was considerably reduced using an optimized MRI acquisition protocol and a new MRI scanner. The method developed for comparing measured gel data with calculated treatment plans, the gel dosimetry method, proved to be useful for radiation treatment planning verification. For the 1D and 2D film comparisons, the RMSD values for the depth dose and lateral profiles are 1.8% and 2%, and the maximum (Di − Dj) values are 2.5% and 8%, respectively. For the 2D+ (3D) film-gel and plan-gel comparisons, RMSDstruct and RMSDstoch are 2.3% and 3.6%, and 1% and 1%, respectively, with systematic deviations of −0.6% and 2.5%. These results indicate that the 2D+ (3D) film dosimeter performs better than the 1D and 2D dosimeters. Discussion: Gel dosimeters are a quality control and quality assurance tool that will be used in future clinical applications.

Keywords: gel dosimeters, phantom, RMSD, QC, detector

Procedia PDF Downloads 149
560 Model Predictive Control Applied to Thermal Regulation of Thermoforming Process Based on the Armax Linear Model and a Quadratic Criterion Formulation

Authors: Moaine Jebara, Lionel Boillereaux, Sofiane Belhabib, Michel Havet, Alain Sarda, Pierre Mousseau, Rémi Deterre

Abstract:

Energy efficiency is a major concern for material processing industries such as thermoforming and molding. Indeed, these systems should deliver the right amount of energy at the right time to the processed material. Recent technical developments, as well as the particularities of heating system dynamics, have made Model Predictive Control (MPC) one of the best candidates for the thermal control of several production processes, such as molding and composite thermoforming, to name a few. The main principle of this technique is to use a dynamic model of the process inside the controller in real time in order to anticipate the future behavior of the process, which allows the current timeslot to be optimized while taking future timeslots into account. This study presents a procedure based on predictive control that balances optimality, simplicity, and flexibility of implementation. The development of this approach is progressive, starting from the case of a single zone before its extension to the multi-zone and/or multi-source case, thus taking into account the thermal couplings between adjacent zones. After a quadratic formulation of the MPC criterion to ensure the thermal control, the linear expression is retained in order to reduce calculation time, thanks to the use of ARMAX linear decomposition methods. The effectiveness of this approach is illustrated by experiment and simulation.
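For reference, a generic ARMAX prediction model and a quadratic MPC criterion of the kind referred to above can be written as follows; the notation (polynomials in the shift operator, horizons, weighting matrices) is standard textbook notation assumed here, not necessarily the paper's exact formulation:

$$A(q^{-1})\,y(k) = B(q^{-1})\,u(k) + C(q^{-1})\,e(k), \qquad J = \sum_{i=1}^{N_p}\big\|\hat{y}(k+i) - r(k+i)\big\|_Q^2 + \sum_{i=0}^{N_c-1}\big\|\Delta u(k+i)\big\|_R^2,$$

where y is the zone temperature, u the heating power, r the temperature set-point, e a noise term, N_p and N_c the prediction and control horizons, and Q, R weighting matrices.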

Keywords: energy efficiency, linear decomposition methods, model predictive control, mold heating systems

Procedia PDF Downloads 264
559 Optimization of Smart Beta Allocation by Momentum Exposure

Authors: J. B. Frisch, D. Evandiloff, P. Martin, N. Ouizille, F. Pires

Abstract:

Smart Beta strategies aim to be an asset management revolution with respect to classical cap-weighted indices. Indeed, these strategies allow better control of portfolio risk factors and an optimized asset allocation by taking into account specific risks, or the wish to generate alpha by outperforming the indices called 'Beta'. Among the many strategies used independently, this paper focuses on four of them: the Minimum Variance Portfolio, the Equal Risk Contribution Portfolio, the Maximum Diversification Portfolio, and the Equal-Weighted Portfolio. Their efficiency has been proven under constraints such as momentum or market phenomena, suggesting a reconsideration of cap-weighting. To further increase strategy return efficiency, it is proposed here to compare their strengths and weaknesses within time intervals corresponding to specific, identifiable market phases, in order to define adapted strategies depending on pre-specified situations. Results are presented as performance curves of different combinations compared to a benchmark. If a combination outperforms the applicable benchmark in well-defined actual market conditions, it will be preferred. It is mainly shown that such investment 'rules', based on both historical data and the evolution of Smart Beta strategies, and implemented according to available specific market data, provide very interesting optimal results with higher return performance and lower risk. Such combinations have not been fully exploited yet, which justifies the present approach aimed at identifying the relevant elements that characterize them.
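As a concrete example of one of the four strategies compared, a minimal sketch of an unconstrained, fully-invested minimum variance portfolio is given below; the sample covariance estimate and the closed-form solution w ∝ Σ⁻¹1 are standard textbook choices, and the return data are random placeholders rather than the study's dataset.

```python
import numpy as np

def minimum_variance_weights(returns):
    """Closed-form minimum variance portfolio (fully invested, no box constraints):
    w = Σ⁻¹ 1 / (1ᵀ Σ⁻¹ 1), with Σ the sample covariance of asset returns."""
    cov = np.cov(returns, rowvar=False)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Placeholder data: 250 daily returns for 5 assets
rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=(250, 5))

w = minimum_variance_weights(returns)
print("weights:", np.round(w, 3), "sum =", round(w.sum(), 3))
```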

Keywords: smart beta, minimum variance portfolio, equal risk contribution portfolio, maximum diversification portfolio, equal weighted portfolio, combinations

Procedia PDF Downloads 336
558 Electrochemical Detection of the Chemotherapy Agent Methotrexate in vitro from Physiological Fluids Using Functionalized Carbon Nanotube past Electrodes

Authors: Shekher Kummari, V. Sunil Kumar, K. Vengatajalabathy Gobi

Abstract:

A simple, cost-effective, reusable and reagent-free electrochemical biosensor was developed with a functionalized multiwall carbon nanotube paste electrode (f-CNTPE) for the sensitive and selective determination of the important chemotherapeutic drug methotrexate (MTX), which is widely used for the treatment of various cancers and autoimmune diseases. The electrochemical response of the fabricated electrode towards the detection of MTX was examined by cyclic voltammetry (CV), differential pulse voltammetry (DPV) and square wave voltammetry (SWV). CV studies showed that the f-CNTPE electrode system exhibited excellent electrocatalytic activity towards the oxidation of MTX in phosphate buffer (0.2 M) compared with a conventional carbon paste electrode (CPE): the oxidation peak current was enhanced nearly two-fold. Applying the DPV method under optimized conditions, a linear calibration plot was achieved over a wide concentration range from 4.0×10⁻⁷ M to 5.5×10⁻⁶ M, with a detection limit of 1.6×10⁻⁷ M. Further, by applying the SWV method, a parabolic calibration plot was achieved starting from a very low concentration of 1.0×10⁻⁸ M; the sensor could detect as little as 2.9×10⁻⁹ M MTX in 10 s, and 10 nM was detected in steady-state current-time analysis. The f-CNTPE shows very good selectivity towards the specific recognition of MTX in the presence of important biological interferences. The electrochemical biosensor detects MTX in vitro directly from a pharmaceutical sample, undiluted urine and human blood serum samples at a concentration of 5.0×10⁻⁷ M with good recovery limits.

Keywords: amperometry, electrochemical detection, human blood serum, methotrexate, MWCNT, SWV

Procedia PDF Downloads 304
557 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products

Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad

Abstract:

The proposed method for the speciation, preconcentration and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in a magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on the complexation of Fe(II) with 1,10-phenanthroline (OP), a complexing reagent for Fe(II), immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. Then, the adsorbents were easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a proper reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplified the operating procedure and reduced the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 was obtained for Fe(II) with the modified nano-magnetite. The detection limit and linear range of this method for iron were 1.0 ng mL⁻¹ and 9.0-175 ng mL⁻¹, respectively. Also, the relative standard deviation for five replicate determinations of 30.00 ng mL⁻¹ Fe²⁺ was 2.3%.

Keywords: alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample

Procedia PDF Downloads 290