Search results for: fine particle losses
3329 Investigating Kinetics and Mathematical Modeling of Batch Clarification Process for Non-Centrifugal Sugar Production
Authors: Divya Vats, Sanjay Mahajani
Abstract:
The clarification of sugarcane juice plays a pivotal role in the production of non-centrifugal sugar (NCS), profoundly influencing the quality of the final NCS product. In this study, we investigated the kinetics and mathematical modeling of the batch clarification process. The turbidity of the clarified cane juice (NTU) emerges as the determinant of the end product's color; moreover, this parameter underscores the importance of considering other variables as performance indicators for assessing the efficacy of the clarification process. Temperature-controlled experiments were conducted in a laboratory-scale batch mode. The primary objective was to identify the essential and optimized parameters for improving the clarity of cane juice. Additionally, we explored the impact of pH and flocculant loading on the kinetics. Particle image velocimetry (PIV) was employed to understand the particle-particle and fluid-particle interactions. This technique provided the insight needed for the subsequent multiphase computational fluid dynamics (CFD) simulations using the Eulerian-Lagrangian approach in Ansys Fluent; these simulations accurately reproduced the measured velocity profiles. The mechanism established in this study supports a mathematical model and presents a valuable framework for transitioning from the traditional batch process to a continuous one, with the ultimate aim of higher productivity and consistent product quality.
Keywords: non-centrifugal sugar, particle image velocimetry, computational fluid dynamics, mathematical modeling, turbidity
Procedia PDF Downloads 71
3328 Particle Gradient Generation in a Microchannel Using a Single IDT
Authors: Florian Kiebert, Hagen Schmidt
Abstract:
Standing surface acoustic waves (sSAWs) have already been used to manipulate particles in microfluidic channels made of polydimethylsiloxane (PDMS). Usually, two identical facing interdigital transducers (IDTs) are exploited to form an sSAW. It has also been reported that an sSAW can be generated by a single IDT using a superstrate resonating cavity or a PDMS post; nevertheless, both setups utilising a travelling surface acoustic wave (tSAW) to create an sSAW for particle manipulation are costly. We present a simplified setup with a tSAW and a PDMS channel to form an sSAW. The incident tSAW is reflected at the rear PDMS channel wall and superimposed with the reflected wave. This superposition generates an sSAW, but only in regions where the distance to the rear channel wall is smaller than the attenuation length of the tSAW minus the channel width. Therefore, in a channel of 500 µm width, a tSAW with a wavelength λ = 120 µm produces an sSAW over the whole channel, whereas a tSAW with λ = 60 µm forms an sSAW only next to the rear wall of the channel, taking into account the attenuation length of a tSAW in water. Hence, it is possible to concentrate and trap particles in a defined region of the channel by adjusting the ratio between the channel width and the tSAW wavelength. Moreover, it is possible to generate a particle gradient over the channel width by choosing this ratio appropriately. The particles are moved towards the rear wall by the acoustic streaming force (ASF) and the acoustic radiation force (ARF) caused by the bulk acoustic wave (BAW) generated by the tSAW. In regions of the channel where the sSAW dominates, the ARF focuses the particles in the pressure nodes formed by the sSAW-generated BAW. On the one hand, the ARF generated by the sSAW traps the particles at the center of the tSAW beam, i.e., of the IDT aperture.
On the other hand, the ASF produces two vortices, one on the left and one on the right side of the focus region, deflecting particles out of it. By varying the applied power, it is possible to vary the number of particles trapped in the focus points, because near the rear wall the amplitude of the reflected tSAW is higher and, therefore, the ARF of the sSAW is stronger. Thus, the concentration of particles is highest in the vicinity of the rear wall and decreases with increasing distance from it, forming a particle gradient. The gradient depends on the applied power as well as on the flow rate, so varying these two parameters changes the gradient. Furthermore, we show that the particle gradient can be modified by changing the ratio between the channel width and the tSAW wavelength. In conclusion, a single IDT generating an sSAW in a PDMS microchannel enables particle gradient generation in a well-defined microfluidic flow system, utilising the ARF and ASF of a tSAW and an sSAW.
Keywords: ARF, ASF, particle manipulation, sSAW, tSAW
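The geometric criterion in the abstract (sSAW only where the distance to the rear wall is smaller than the tSAW attenuation length minus the channel width) can be sketched numerically. The rule of thumb that a tSAW attenuates into water over roughly 12 wavelengths is an assumption chosen here because it reproduces the two reported cases; the actual attenuation length would be measured.

```python
# Hypothetical sketch of the abstract's sSAW-region criterion. The attenuation
# factor (~12 wavelengths in water) is an assumption, not a value from the paper.

def ssaw_region_width(channel_width_um: float, wavelength_um: float,
                      attenuation_factor: float = 12.0) -> float:
    """Width (µm) of the near-wall region where an sSAW forms,
    clipped to the interval [0, channel_width]."""
    attenuation_length = attenuation_factor * wavelength_um
    region = attenuation_length - channel_width_um
    return max(0.0, min(region, channel_width_um))

# 120 µm tSAW in a 500 µm channel: the sSAW spans the whole channel.
print(ssaw_region_width(500.0, 120.0))  # 500.0
# 60 µm tSAW: the sSAW forms only within ~220 µm of the rear wall.
print(ssaw_region_width(500.0, 60.0))   # 220.0
```

Under this assumed factor, the sketch matches both wavelength cases described in the abstract.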
Procedia PDF Downloads 334
3327 Comparison of ANFIS Update Methods Using Genetic Algorithm, Particle Swarm Optimization, and Artificial Bee Colony
Authors: Michael R. Phangtriastu, Herriyandi Herriyandi, Diaz D. Santika
Abstract:
This paper presents a comparison of metaheuristic algorithms used to train the antecedent and consequent parameters of an adaptive network-based fuzzy inference system (ANFIS). The algorithms compared are the genetic algorithm (GA), particle swarm optimization (PSO), and artificial bee colony (ABC). The objective of this paper is to benchmark these well-known metaheuristic algorithms. The algorithms are applied to several data sets of different nature, and combinations of the algorithms' parameters are tested: different population sizes in all algorithms, different velocity-update coefficients in PSO, and different abandonment limits in ABC. The experiments show that ABC is more reliable than the other algorithms, achieving a better mean square error (MSE) than GA and PSO on all data sets.
Keywords: ANFIS, artificial bee colony, genetic algorithm, metaheuristic algorithm, particle swarm optimization
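The PSO variant benchmarked above can be illustrated with a minimal sketch: each particle holds a candidate parameter vector, and the swarm minimizes a mean-square-error objective. The toy objective and all coefficients (inertia `w`, cognitive/social weights `c1`, `c2`) are illustrative assumptions, not the paper's settings.

```python
import random

# Minimal PSO sketch of the kind used to tune ANFIS parameters.
# All hyperparameters and the toy MSE objective are illustrative assumptions.

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gpos, gval = pbest[g][:], pbest_val[g]           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gpos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gval:
                    gpos, gval = pos[i][:], val
    return gpos, gval

# Toy MSE objective: squared distance of the parameters from a known target.
target = [1.0, -2.0, 0.5]
mse = lambda p: sum((a - b) ** 2 for a, b in zip(p, target)) / len(p)
best, err = pso(mse, dim=3)
print(err < 0.1)  # the swarm converges close to the target
```

GA and ABC would replace only the inner update loop; the MSE objective and benchmark harness stay the same.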
Procedia PDF Downloads 352
3326 Curvature-Based Methods for Automatic Coarse and Fine Registration in Dimensional Metrology
Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani
Abstract:
Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces with accuracy, reliability, and completeness. The obtained data are aligned and fused into a common coordinate system by a registration technique involving coarse and fine registration. Standardized iterative methods such as Iterative Closest Point (ICP) and its variants have been established for fine registration. For coarse registration, no conventional method has been adopted yet, despite the significant number of techniques developed in the literature to supply an automatic rough matching between data sets. Two main issues are addressed in this paper: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough transformation (HT) and an improved RANSAC transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce registration error using curvature parameters. A specific distance reflecting curvature similarity is combined with the Euclidean distance to define the criterion used to search for correspondences. Additionally, the objective function is improved by combining point-to-point (P-P) minimization and point-to-plane (P-Pl) minimization with automatic weights, which are determined from the curvature features computed beforehand at each point of the workpiece surface. The algorithms are applied to simulated data and to real data acquired by a computed tomography (CT) system.
The obtained results reveal the benefit of the proposed curvature-based registration methods.
Keywords: discrete curvature, RANSAC transformation, Hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computed tomography
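The correspondence criterion described above, a curvature-similarity term mixed with the Euclidean distance, can be sketched as follows. The mixing weight `alpha` and the toy point sets are illustrative assumptions, not the paper's values.

```python
import math

# Sketch of a combined correspondence distance: Euclidean distance plus a
# curvature-similarity term. `alpha` and the points are illustrative assumptions.

def combined_distance(p, q, alpha=0.5):
    """p and q are (x, y, z, curvature) tuples."""
    euclid = math.dist(p[:3], q[:3])
    curv = abs(p[3] - q[3])                    # curvature dissimilarity
    return (1 - alpha) * euclid + alpha * curv

def closest_point(p, cloud, alpha=0.5):
    return min(cloud, key=lambda q: combined_distance(p, q, alpha))

# With pure Euclidean distance (alpha=0) the geometrically nearer point wins;
# adding the curvature term switches the match to the curvature-similar point.
cloud = [(1.0, 0.0, 0.0, 0.9), (1.2, 0.0, 0.0, 0.1)]
p = (0.0, 0.0, 0.0, 0.1)
print(closest_point(p, cloud, alpha=0.0))  # (1.0, 0.0, 0.0, 0.9)
print(closest_point(p, cloud, alpha=0.5))  # (1.2, 0.0, 0.0, 0.1)
```

In an ICP loop, this function would replace the plain nearest-neighbour search at each iteration.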
Procedia PDF Downloads 424
3325 Use of Waste Road-Asphalt as Aggregate in Pavement Block Production
Authors: Babagana Mohammed, Abdulmuminu Mustapha Ali, Solomon Ibrahim, Buba Ahmad Umdagas
Abstract:
This research investigated the possibility of replacing coarse and fine aggregates in concrete production with waste road-asphalt (RWA), sieved appropriately. Interlocking pavement block is used widely in many parts of the world as a modern solution for outdoor flooring applications. Weight-percentage replacements of both coarse and fine aggregates with RWA at 0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, and 90% were carried out using a concrete mix ratio of 1:2:4 and a water-to-cement ratio of 0.45. The interlocking block samples produced were then cured for 28 days, after which their unconfined compressive strength (UCS) and water absorption were tested. Comparison of the RWA-containing samples with the respective control samples shows significant benefits of using RWA in interlocking block production: the UCS results of the RWA-containing samples compared well with those of the control samples, and the RWA content also lowered the water absorption of the samples. Overall, the research shows that it is possible to replace both coarse and fine aggregates with appropriately sieved RWA, indicating that RWA could be recycled beneficially.
Keywords: aggregate, block-production, pavement, road-asphalt, use, waste
Procedia PDF Downloads 195
3324 Infrared Thermography Applications for Building Investigation
Authors: Hamid Yazdani, Raheleh Akbar
Abstract:
Infrared thermography is a modern non-destructive measuring method for the examination of redeveloped and non-renovated buildings. Infrared cameras provide a means for temperature measurement of building constructions from the inside as well as from the outside; thus, thermal bridges can be detected. It has been shown that infrared thermography is applicable for insulation inspection, identifying sources of air leakage and heat loss, finding the exact position of heating tubes, or discovering why mold or moisture is growing in a particular area. It is also used in the conservation field to detect hidden characteristics and degradations of building structures. The paper gives a brief description of the theoretical background of infrared thermography.
Keywords: infrared thermography, examination of buildings, emissivity, heat loss sources
Procedia PDF Downloads 520
3323 Irradion: Portable Small Animal Imaging and Irradiation Unit
Authors: Josef Uher, Jana Boháčová, Richard Kadeřábek
Abstract:
In this paper, we present a multi-robot imaging and irradiation research platform referred to as Irradion, with full capabilities of portable arbitrary-path computed tomography (CT). Irradion is an imaging and irradiation unit based entirely on robotic arms, intended for research on cancer treatment with ion beams on small animals (mice or rats). The platform comprises two subsystems that combine several imaging modalities, such as 2D X-ray imaging, CT, and particle tracking, with precise positioning of a small animal for imaging and irradiation. Computed tomography: the CT subsystem of the Irradion platform is equipped with two 6-joint robotic arms that position a photon-counting detector and an X-ray tube independently and freely around the scanned specimen, allowing image acquisition by computed tomography. Irradion covers nearly all conventional 2D and 3D X-ray imaging trajectories with precisely calibrated and repeatable geometrical accuracy, leading to a spatial resolution of up to 50 µm. In addition, the photon-counting detectors allow X-ray photon energy discrimination, which can suppress scattered radiation and thus improve image contrast; they can also measure absorption spectra and recognize different material (tissue) types. X-ray video recording and real-time imaging options can be applied to studies of dynamic processes, including in vivo specimens. Moreover, Irradion opens the door to exploring new 2D and 3D X-ray imaging approaches; we demonstrate in this publication various novel scan trajectories and their benefits. Proton imaging and particle tracking: the Irradion platform allows several imaging modules to be combined with any required number of robots. The proton tracking module comprises another two robots, each holding particle-tracking detectors with position-, energy-, and time-sensitive Timepix3 sensors. Timepix3 detectors can track particles entering and exiting the specimen and allow accurate guiding of photon/ion beams for irradiation.
In addition, quantifying the energy losses before and after the specimen provides essential information for precise irradiation planning and verification. Work on the small-animal research platform Irradion involved advanced software and hardware development that will offer researchers a novel way to investigate new approaches in (i) radiotherapy, (ii) spectral CT, (iii) arbitrary-path CT, and (iv) particle tracking. The robotic platform for imaging and radiation research developed in this project is an entirely new product on the market: preclinical research systems combining precision robotic irradiation with photon/ion beams and multimodality high-resolution imaging do not currently exist. The researched technology can potentially bring a significant leap forward compared to current first-generation devices.
Keywords: arbitrary path CT, robotic CT, modular, multi-robot, small animal imaging
Procedia PDF Downloads 89
3322 Particle Swarm Optimization Based Method for Minimum Initial Marking in Labeled Petri Nets
Authors: Hichem Kmimech, Achref Jabeur Telmoudi, Lotfi Nabli
Abstract:
The estimation of the minimum initial marking (MIM) is a crucial problem in labeled Petri nets. In the case of multiple choices, the search for the initial marking leads to an optimization problem of minimum resource allocation with two constraints. The first concerns the firing sequence, which must be legal on the initial marking with respect to the firing vector; the second concerns the total number of tokens, which must be minimal. In this article, the MIM problem is solved by the particle swarm optimization (PSO) metaheuristic. The proposed approach exploits the advantages of PSO to satisfy the two previous constraints and to find all possible combinations of minimum initial markings with the best computing time. This method, more efficient than conventional ones, has an excellent impact on the resolution of the MIM problem. We prove the effectiveness of our approach through a set of definitions, lemmas, and examples.
Keywords: marking, production system, labeled Petri nets, particle swarm optimization
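The first constraint above, that a firing sequence must be legal from a candidate initial marking, can be sketched as a simple token-game check. The two-place, two-transition net below is a toy example, not one from the paper.

```python
# Sketch of the legality check behind the first MIM constraint: replay the
# firing sequence from a candidate initial marking, verifying that every
# transition is enabled when fired. The toy net is an illustrative assumption.

def is_legal(marking, sequence, pre, post):
    """pre[t]/post[t] list the tokens consumed/produced per place by transition t."""
    m = list(marking)
    for t in sequence:
        if any(m[p] < pre[t][p] for p in range(len(m))):
            return False  # transition not enabled: sequence illegal from this marking
        for p in range(len(m)):
            m[p] += post[t][p] - pre[t][p]
    return True

# Toy net: t0 moves a token from place 0 to place 1; t1 consumes one from place 1.
pre  = {'t0': [1, 0], 't1': [0, 1]}
post = {'t0': [0, 1], 't1': [0, 0]}
print(is_legal([1, 0], ['t0', 't1'], pre, post))  # True
print(is_legal([0, 0], ['t0', 't1'], pre, post))  # False
```

In a PSO formulation, each particle would encode a candidate marking, with this check (plus the token count) entering the fitness function.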
Procedia PDF Downloads 178
3321 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). 
Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted by maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting. One advantage of this approach is that maximum likelihood methods enable a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse, or a cluster of pulses, to each rain cell; different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
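The DSPP with a Markov-modulated rate described above can be simulated with the standard competing-exponentials construction: in each hidden state, the next event is either a bucket tip or a state switch, whichever exponential clock rings first. All rates below are illustrative assumptions, not fitted values.

```python
import random

# Minimal simulation sketch of a two-state doubly stochastic Poisson process
# (Markov-modulated Poisson process) for bucket-tip times. Rates are assumptions.

def simulate_mmpp(t_end, rates=(0.2, 5.0), switch=(0.1, 0.3), seed=42):
    """Return bucket-tip times on [0, t_end].
    rates[i]: tip rate in hidden state i; switch[i]: rate of leaving state i."""
    rng = random.Random(seed)
    t, state, events = 0.0, 0, []
    while True:
        total = rates[state] + switch[state]
        t += rng.expovariate(total)          # time to the next event of either kind
        if t >= t_end:
            break
        if rng.random() < rates[state] / total:
            events.append(t)                 # a bucket tip is recorded
        else:
            state = 1 - state                # the unobserved Markov state switches
    return events

tips = simulate_mmpp(100.0)
print(len(tips) > 0)  # True: the dry/wet alternation produces clustered tips
```

Fitting would then maximise the likelihood of observed tip times over the rate and switching parameters, conditioning on X(t) as described above.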
Procedia PDF Downloads 278
3320 Recycling of Aggregates from Construction Demolition Wastes in Concrete: Study of Physical and Mechanical Properties
Authors: M. Saidi, F. Ait Medjber, B. Safi, M. Samar
Abstract:
This work focuses on the valorization of recycled concrete aggregates by measuring certain properties of concrete in the fresh and hardened states. Rheological tests and physico-mechanical characterization of concretes and mortars were conducted with recycled concrete aggregates whose geometric properties had been identified. Mortars were elaborated with recycled fine aggregate (0/5 mm), and concretes were manufactured using recycled coarse aggregates (5/12.5 mm and 12.5/20 mm). First, a study of the mortars was conducted to determine the effectiveness of a polycarboxylate superplasticizer on their workability and its deflocculating action on the fine recycled sand, and the rheological behavior of mortars based on recycled fine aggregate was characterized. The results confirm that mortars composed of different fractions of recycled sand (0/5) have better mechanical properties (compressive and flexural strength) than the reference mortar. Likewise, the mechanical strengths of concretes made with recycled aggregates (5/12.5 mm and 12.5/20 mm) are comparable to those of conventional concrete with conventional aggregates, provided that the workability is improved by the addition of a superplasticizer.
Keywords: demolition wastes, recycled coarse aggregate, concrete, workability, mechanical strength, porosity/water absorption
Procedia PDF Downloads 338
3319 Ultra-High Frequency Passive Radar Coverage for Car Detection in Semi-Urban Scenarios
Authors: Pedro Gómez-del-Hoyo, Jose-Luis Bárcena-Humanes, Nerea del-Rey-Maestre, María-Pilar Jarabo-Amores, David Mata-Moya
Abstract:
A study of the coverage achievable by passive radar systems in terrestrial traffic-monitoring applications is presented. The study includes the estimation of the bistatic radar cross section of different commercial vehicle models, which present challenging low values that make detection really difficult. A semi-urban scenario is selected to evaluate the impact of the excess propagation losses generated by an irregular relief. A bistatic passive radar exploiting the UHF frequencies radiated by digital video broadcasting transmitters is assumed. A general method of coverage estimation is applied, using electromagnetic simulators in combination with the estimated average bistatic radar cross section of a car. In order to reduce the computational cost, a hybrid solution is implemented, assuming free space for the target-to-receiver path but estimating the excess propagation losses for the transmitter-to-target one.
Keywords: bistatic radar cross section, passive radar, propagation losses, radar coverage
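The free-space part of the hybrid coverage estimate above follows the standard bistatic radar equation. The sketch below evaluates it; all numeric values (transmitter power, antenna gains, car RCS, ranges) are illustrative assumptions, and the excess propagation loss over irregular terrain would come from the electromagnetic simulator.

```python
import math

# Free-space bistatic radar equation: Pr = Pt*Gt*Gr*lambda^2*sigma / ((4*pi)^3 * Rt^2 * Rr^2).
# All numeric inputs below are illustrative assumptions, not the paper's scenario.

def bistatic_received_power(pt_w, gt, gr, wavelength_m, rcs_m2, r_tx_m, r_rx_m):
    """Received power (W) for a target at range r_tx from the transmitter
    and r_rx from the receiver, in free space."""
    num = pt_w * gt * gr * wavelength_m ** 2 * rcs_m2
    den = (4 * math.pi) ** 3 * r_tx_m ** 2 * r_rx_m ** 2
    return num / den

# A UHF DVB-T illuminator near 600 MHz (wavelength ~0.5 m) and a low car RCS of 1 m².
p = bistatic_received_power(pt_w=1e3, gt=1.0, gr=1.0, wavelength_m=0.5,
                            rcs_m2=1.0, r_tx_m=10e3, r_rx_m=2e3)
print(p > 0)  # True, though the absolute level is tiny, hence the coverage study
```

A terrain-dependent excess loss factor would simply divide the result for each map cell, which is where the hybrid simulator-based step enters.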
Procedia PDF Downloads 336
3318 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Research Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols as highly or weakly absorbing. From a mathematical point of view, the algorithm is based on truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution (PSD) function and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always challenging because very small measurement errors are most often amplified hugely during the solution process unless an appropriate regularization method is used. Even with a regularization method, difficulties remain, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration.
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles that is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6 in all modes, the accuracy limit ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
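The noise-amplification problem and the truncated-SVD remedy described above can be illustrated on the simplest possible case, a diagonal system, where the SVD is trivial and truncation amounts to dropping components whose singular values fall below a threshold. The singular values and noise level below are illustrative assumptions.

```python
# Minimal illustration of truncated SVD regularization on a diagonal system
# diag(s) x = b. For a general matrix, the same truncation is applied to the
# SVD components. Singular values and noise level are illustrative assumptions.

def tsvd_solve(singular_values, rhs, threshold):
    """Solve diag(s) x = b, zeroing components whose s falls below the threshold."""
    return [b / s if s >= threshold else 0.0
            for s, b in zip(singular_values, rhs)]

s = [1.0, 1e-6]               # the second singular value is nearly zero: ill-posed
b_noisy = [1.0, 1e-6 + 1e-4]  # exact data for x = [1, 1], plus a tiny error

naive = tsvd_solve(s, b_noisy, threshold=0.0)       # no regularization
regularized = tsvd_solve(s, b_noisy, threshold=1e-3)
print(naive[1])       # ~101: a 1e-4 perturbation is amplified a hundredfold
print(regularized)    # [1.0, 0.0]: the unstable component is suppressed
```

Choosing the truncation threshold plays the role of the regularization parameter, which is exactly the selection problem the hybrid triple-parameter scheme addresses.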
Procedia PDF Downloads 343
3317 A Novel Geometrical Approach toward the Mechanical Properties of Particle Reinforced Composites
Authors: Hamed Khezrzadeh
Abstract:
Many investigations of the micromechanical structure of materials indicate that fractal patterns exist at the micro scale in some of the main construction and industrial materials. A recently presented micro-fractal theory brings together the well-known periodic homogenization and fractal geometry to construct an appropriate model for determining the mechanical properties of particle-reinforced composite materials. The proposed multi-step homogenization scheme considers the mechanical properties of the different constituent phases in the composite, together with the interaction between these phases, through a step-by-step homogenization technique. The effect of fiber grading on the mechanical properties can also be studied with this method. The theoretical outcomes are compared with experimental data for different types of particle-reinforced composites, and very good agreement with the experimental data is observed.
Keywords: fractal geometry, homogenization, micromechanics, particulate composites
Procedia PDF Downloads 291
3316 Modelling and Optimization Analysis of Silicon/MgZnO-CBTSSe Tandem Solar Cells
Authors: Vallisree Sivathanu, Kumaraswamidhas Lakshmi Annamalai, Trupti Ranjan Lenka
Abstract:
We report a tandem solar cell model with silicon as the bottom-cell absorber material and Cu₂BaSn(S,Se)₄ (CBTSSe) as the absorber material for the top cell. As a first step, the top and bottom cells were modelled and validated by comparison with experiment. Once the individual cells were validated, the tandem structure was modelled with indium tin oxide (ITO) as the interconnecting layer between the top and bottom cells. The tandem structure yielded a better open-circuit voltage and fill factor; however, the efficiency obtained is 7.01%. The top and bottom cells are investigated with the help of electron-hole current density, photogeneration rate, and external quantum efficiency profiles. In order to minimize the various loss mechanisms in the tandem solar cell, the material parameters are optimized within experimentally achievable limits. The top cell was optimized first; then the bottom cell was optimized to maximize light absorption, and upon minimizing the current and photon losses in the tandem structure, the maximum achievable efficiency is predicted to be 19.52%.
Keywords: CBTSSe, silicon, tandem, solar cell, device modeling, current losses, photon losses
Procedia PDF Downloads 117
3315 Analysis of the Level of Production Failures when Implementing a New Assembly Line
Authors: Joanna Kochanska, Dagmara Gornicka, Anna Burduk
Abstract:
The article examines the process of implementing a new assembly line in a manufacturing enterprise in the household appliances industry. At the initial stages of the project, it was decided that one of its foundations should be the concept of lean management; because of that, eliminating as many errors as possible in the first phases of the line's functioning was emphasized. During the start-up of the line, all production losses were identified and documented (from serious machine failures, through any unplanned downtime, to micro-stops and quality defects). Over six weeks (the line start-up period), all errors resulting from problems in various areas were analyzed. These areas were, among others, production, logistics, quality, and organization. The aim of the work was to analyze the occurrence of production failures during the initial start-up phase of the line and to propose a method for determining their critical level once the line is fully functional. The repeatability of the production losses in the various areas and at different levels at this early stage of implementation was examined using statistical process control methods. Based on a Pareto analysis, the weakest points were identified in order to focus improvement actions on them. The next step was to examine the effectiveness of the actions undertaken to reduce the level of recorded losses. Based on the obtained results, a method for determining the critical failure level in the studied areas was proposed. The developed coefficient can be used as an alarm in case of production imbalance caused by an increased failure level in production and production-support processes during the standardized functioning of the line.
Keywords: production failures, level of production losses, new production line implementation, assembly line, statistical process control
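The Pareto step described above, ranking loss categories and focusing on the "vital few", can be sketched as follows. The category names and counts are illustrative assumptions, not the study's data.

```python
# Sketch of a Pareto analysis over production-loss categories: find the smallest
# set of categories that accounts for a target share (e.g. 80%) of all failures.
# The counts below are illustrative assumptions.

def pareto_vital_few(counts, share=0.8):
    """Return the categories covering at least `share` of the total,
    most frequent first."""
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    chosen, acc = [], 0
    for name, n in ranked:
        chosen.append(name)
        acc += n
        if acc / total >= share:
            break
    return chosen

losses = {'machine failure': 45, 'micro-stops': 25, 'quality defects': 15,
          'logistics': 10, 'organization': 5}
print(pareto_vital_few(losses))  # ['machine failure', 'micro-stops', 'quality defects']
```

Improvement actions would then target only the returned categories, and the analysis would be repeated after each start-up week to track whether the ranking shifts.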
Procedia PDF Downloads 128
3314 Experimental Study on Capturing of Magnetic Nanoparticles Transported in an Implant Assisted Cylindrical Tube under Magnetic Field
Authors: Anurag Gaur Nidhi
Abstract:
Targeted drug delivery is a method of delivering medication to a patient in a manner that increases the concentration of the medication in some parts of the body relative to others. It seeks to concentrate the medication in the tissues of interest while reducing its relative concentration in the remaining tissues, which improves the efficacy of the treatment while reducing side effects. In the present work, we investigate the effect of magnetic field, flow rate, and particle concentration on the capture of magnetic particles transported in a stent-implanted fluidic channel. Iron oxide (Fe3O4) magnetic nanoparticles were synthesized via the co-precipitation method and added to de-ionized (DI) water to prepare a suspension of Fe3O4 magnetic particles. This fluid was transported in a cylindrical tube of 8 mm diameter with the help of a peristaltic pump at different flow rates (25-40 ml/min). A ferromagnetic coil of SS 430 was implanted inside the cylindrical tube to enhance the capture of magnetic nanoparticles under a magnetic field. The capture of magnetic nanoparticles was observed at different magnetic fields, flow rates, and particle concentrations. It is observed that the capture efficiency increases from 47 to 67% as the magnetic field increases from 2 to 5 kG, at a particle concentration of 0.6 mg/ml and a flow rate of 30 ml/min. However, the capture efficiency decreases from 65 to 44% when the flow rate is increased from 25 to 40 ml/min. Furthermore, the capture efficiency increases from 51 to 67% when the particle concentration is increased from 0.3 to 0.6 mg/ml.
Keywords: capture efficiency, implant-assisted magnetic drug targeting (IA-MDT), magnetic nanoparticles, in-vitro study
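The capture-efficiency figure of merit reported above can be sketched as the fraction of circulating particles retained at the implant, computed here from inlet and outlet particle masses. The sample masses are illustrative assumptions chosen to echo the reported trend (higher field, higher capture), not measured values.

```python
# Sketch of the capture-efficiency calculation: the percentage of magnetic
# nanoparticles retained in the implanted region of the tube.
# The inlet/outlet masses below are illustrative assumptions.

def capture_efficiency(mass_in_mg, mass_out_mg):
    """Percent of particles captured between inlet and outlet."""
    return 100.0 * (mass_in_mg - mass_out_mg) / mass_in_mg

print(round(capture_efficiency(10.0, 5.3), 1))  # 47.0  (low-field-like condition)
print(round(capture_efficiency(10.0, 3.3), 1))  # 67.0  (high-field-like condition)
```

In the experiment, these masses would come from particle counts or concentration measurements upstream and downstream of the SS 430 coil.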
Procedia PDF Downloads 307
3313 Cleaner Production Framework for a Beverage Manufacturing Company
Authors: Ignatio Madanhire, Charles Mbohwa
Abstract:
This study explores ways to improve resource efficiency, reduce wastewater, and reduce losses of raw materials in a beverage manufacturing company. A number of cleaner production (CP) technologies are put forward in this work. It was also noted that cleaner production practices are not only desirable from the environmental point of view but also make good economic sense, contributing to the bottom line by conserving resources such as energy, raw materials, and manpower, improving yield, and reducing treatment/disposal costs. This work is a resource for promoting the adoption and implementation of CP in other industries for sustainable development.
Keywords: resource efficiency, beverages, reduce losses, cleaner production, energy, yield
Procedia PDF Downloads 416
3312 Effect of Impact Angle on Erosive Abrasive Wear of Ductile and Brittle Materials
Authors: Ergin Kosa, Ali Göksenli
Abstract:
Erosion and abrasion are wear mechanisms that reduce the lifetime of machine elements like valves, pumps, and pipe systems. Both wear mechanisms act at the same time, causing a “synergy” effect that leads to rapid damage of the surface. Different parameters affect the erosive abrasive wear rate. In this study the effect of particle impact angle on the wear rate and wear mechanism of ductile and brittle materials was investigated. A new slurry pot was designed for the experimental investigation. Silica sand with a particle size ranging between 200-500 µm was used as the abrasive. All tests were carried out in a sand-water mixture of 20% concentration for four hours. The impact velocity of the particles was 4.76 m/s. Steel St 37 with a Brinell Hardness Number (BHN) of 245 was used as the ductile material, and quenched St 37 with 510 BHN as the brittle material. After the wear tests, the morphology of the eroded surfaces was investigated using optical microscopy and scanning electron microscopy for a better understanding of the wear mechanisms acting at different impact angles. The results indicated that the wear rate of the ductile material was higher than that of the brittle material. Maximum wear of the ductile material was observed at a particle impact angle of 30°. On the contrary, the wear rate of the brittle material increased with increasing impact angle and reached its maximum at 45°. A high number of craters was detected on the ductile material surface, along with plastic deformation zones, which are typical failure modes for ductile materials. The craters formed by the particles were deeper than those on the worn brittle material surface, where the number of craters was lower. Microcracks around craters, a typical failure mode of brittle materials, were detected, and deformation wear was the dominant wear mechanism on the brittle material. In conclusion, the wear rate cannot be directly related to the impact angle of the hard particle, owing to the different responses of ductile and brittle materials. Keywords: erosive wear, particle impact angle, silica sand, wear rate, ductile-brittle material
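The opposite angle dependence reported for ductile and brittle materials is often rationalized with a two-term (cutting plus deformation) erosion model in the spirit of Finnie and Bitter. A hedged sketch with hypothetical coefficients, not the authors' data:

```python
import math

def erosion_rate(theta_deg: float, a_cut: float, b_def: float) -> float:
    """Two-term erosion model: cutting wear (dominant for ductile
    materials, peaking at shallow angles) plus deformation wear
    (dominant for brittle materials, increasing toward 90 degrees)."""
    t = math.radians(theta_deg)
    return a_cut * math.cos(t) ** 2 * math.sin(t) + b_def * math.sin(t) ** 2

# Ductile-like response (cutting dominates): peak at a shallow angle
ductile = {th: erosion_rate(th, a_cut=1.0, b_def=0.05) for th in range(5, 91, 5)}
print(max(ductile, key=ductile.get))
# Brittle-like response (deformation dominates): peak at normal incidence
brittle = {th: erosion_rate(th, a_cut=0.05, b_def=1.0) for th in range(5, 91, 5)}
print(max(brittle, key=brittle.get))
```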
Procedia PDF Downloads 401
3311 Model-Based Control for Piezoelectric-Actuated Systems Using Inverse Prandtl-Ishlinskii Model and Particle Swarm Optimization
Authors: Jin-Wei Liang, Hung-Yi Chen, Lung Lin
Abstract:
In this paper a feedforward controller is designed to eliminate the nonlinear hysteresis behavior of a piezoelectric stack actuator (PSA) driven system. The control design is based on an inverse Prandtl-Ishlinskii (P-I) hysteresis model identified using the particle swarm optimization (PSO) technique. From the identified P-I model, both the inverse P-I hysteresis model and the feedforward controller can be determined. Experimental results obtained using the inverse P-I feedforward control are compared with their counterparts using hysteresis estimates obtained from an identified Bouc-Wen model, and the effectiveness of the proposed feedforward control scheme is demonstrated. To improve control performance, feedback compensation using a traditional PID scheme is integrated with the feedforward controller. Keywords: Bouc-Wen hysteresis model, particle swarm optimization, Prandtl-Ishlinskii model, automation engineering
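The P-I model referred to above is a weighted superposition of play (backlash) operators. A minimal discrete-time sketch, with hypothetical thresholds and weights standing in for the PSO-identified parameters:

```python
def play_operator(x_seq, r, y0=0.0):
    """One backlash (play) operator with threshold r:
    y_k = max(x_k - r, min(x_k + r, y_{k-1}))."""
    y = y0
    out = []
    for x in x_seq:
        y = max(x - r, min(x + r, y))
        out.append(y)
    return out

def prandtl_ishlinskii(x_seq, thresholds, weights):
    """Discrete P-I model: weighted superposition of play operators."""
    ops = [play_operator(x_seq, r) for r in thresholds]
    return [sum(w * op[k] for w, op in zip(weights, ops))
            for k in range(len(x_seq))]

# Hypothetical thresholds/weights; in the paper these are identified by PSO.
x = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
y = prandtl_ishlinskii(x, thresholds=[0.0, 0.25, 0.5], weights=[0.5, 0.3, 0.2])
# Nonzero output when the input returns to zero shows the hysteresis loop.
```

Inverting such a model for feedforward control amounts to constructing another P-I operator with transformed thresholds and weights, which is what makes the structure attractive for real-time compensation.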
Procedia PDF Downloads 514
3310 Eco-Fashion Dyeing of Denim and Knitwear with Particle-Dyes
Authors: Adriana Duarte, Sandra Sampaio, Catia Ferreira, Jaime I. N. R. Gomes
Abstract:
With the fashion for faded, worn garments, the textile industry has moved from indigo and pigments to dyes that are fixed by cationization, using products that can be toxic, and that show this effect after washing down the dye with friction and/or treating with enzymes in a subsequent operation. Increasingly, garments are treated with bleaches such as hypochlorite and permanganate, both toxic substances. An alternative process is presented in this work for both garment and jet dyeing, without pre-cationization, through the use of “particle-dyes”. These are hybrid products, made up of an inorganic particle and an organic dye. With standard soluble dyes, it is not possible to avoid diffusion into the interior of the fiber without prior cationization; only in this way can diffusion be prevented, keeping the centre of the fibres undyed so that the faded effect can be produced by removing the surface dye to show the white fiber beneath. With “particle-dyes”, prior cationization is avoided. Applied at low temperatures, the dye does not diffuse completely into the fiber, since it is a particle and not a soluble dye, and can thus give the faded effect. Bleaching can still be used, but it can also be avoided, since friction and enzymes can be applied just as with other dyes. This fashion brought about new ways of applying reactive dyes through prior cationization of cotton, lowering the salt and temperatures that reactive dyes usually need for reacting, with the side benefit of a more environmentally friendly process. However, cationization can be problematic outside garment dyeing, for example in jet dyeing, where level dyeings are difficult to obtain. It should also be applied by a pad-fix or pad-batch process, due to the low affinity of the pre-cationization products, which makes the process more expensive and adds the risk of unlevelness in processes such as jet dyeing. With particle-dyes, since no pre-cationization is necessary, they can be applied in jet dyeing. The excess dye is fixed by a fixing agent, which fixes the insoluble dye onto the surface of the fibers. With the fixing agent applied, only 1-3 rinses in water at room temperature are necessary, saving water and improving the washfastness. Keywords: denim, garment dyeing, worn look, eco-fashion
Procedia PDF Downloads 537
3309 Correlation to Predict the Effect of Particle Type on Axial Voidage Profile in Circulating Fluidized Beds
Authors: M. S. Khurram, S. A. Memon, S. Khan
Abstract:
Bed voidage behavior across different flow regimes for Geldart A, B, and D particles (fluid catalytic cracking (FCC) catalyst, particle A, and glass beads) of diameter range 57-872 μm, apparent density 1470-3092 kg/m3, and bulk density range 890-1773 kg/m3 was investigated in a plexiglass gas-solid circulating fluidized bed of 0.1 m i.d. and 2.56 m height. The effects of gas velocity, particle properties, and static bed height on bed voidage were analyzed. The axial voidage profile showed a typical trend along the riser: a dense bed at the lower part, followed by a transition in the splash zone and a lean phase in the freeboard. Bed expansion and dense bed voidage increased with gas velocity, as expected. From the experimental results, a generalized model relationship based on the inverse fluidization number is presented for dense bed voidage from the bubbling to the fast fluidization regime. Keywords: axial voidage, circulating fluidized bed, splash zone, static bed
Procedia PDF Downloads 285
3308 Magnetomechanical Effects on MnZn Ferrites
Authors: Ibrahim Ellithy, Mauricio Esguerra, Rewanth Radhakrishnan
Abstract:
In this study, the effects of hydrostatic stress on the magnetic properties of MnZn ferrite rings of different power grades were measured and analyzed in terms of the magneto-mechanical effect; the core losses were modeled via the Hodgdon-Esguerra hysteresis model. The results show excellent agreement with the model and a correlation between the permeability drop and the core loss increase, depending on the material grade properties. These results emphasize the vulnerabilities of MnZn ferrites when subjected to mechanical perturbations, especially in real-world scenarios like under-road embedding for wireless power transfer (WPT). Keywords: hydrostatic stress, power ferrites, core losses, wireless power transfer
Procedia PDF Downloads 70
3307 Synthesis and Characterization of LiCoO2 Cathode Material by Sol-Gel Method
Authors: Nur Azilina Abdul Aziz, Tuti Katrina Abdullah, Ahmad Azmin Mohamad
Abstract:
Lithium-transition metal oxides such as LiCoO2, LiMn2O4, LiFePO4, and LiNiO2 have been used as cathode materials in high-performance lithium-ion rechargeable batteries. Among these cathode materials, LiCoO2 has the potential to be widely used in lithium-ion batteries because of its layered crystalline structure, good capacity, high cell voltage, high specific energy density, high power rate, low self-discharge, and excellent cycle life. This cathode material has been widely used in commercial lithium-ion batteries due to its low irreversible capacity loss and good cycling performance. However, several problems interfere with producing material with good electrochemical properties, including the crystallinity, the average particle size, and the particle size distribution. In recent years, the synthesis of nanoparticles has been intensively investigated. Powders prepared by the traditional solid-state reaction have a large particle size and broad size distribution, whereas solution methods can reduce the particle size to the nanometer range and control the particle size distribution. In this study, LiCoO2 was synthesized using the sol-gel preparation method, with lithium acetate and cobalt acetate as reactants. Stoichiometric amounts of the reactants were dissolved in deionized water. The solutions were stirred for 30 hours using a magnetic stirrer, followed by heating at 80°C under vigorous stirring until a viscous gel was formed. The as-formed gel was calcined at 700°C for 7 h in a room atmosphere. The structure and morphology of the LiCoO2 were characterized using X-ray diffraction and scanning electron microscopy. The diffraction pattern can be indexed on the α-NaFeO2 structure, and the clear splitting of the hexagonal doublets (006)/(102) and (108)/(110) indicates that the material formed a well-ordered hexagonal structure. No impurity phase can be seen in this range, probably due to the homogeneous mixing of the cations in the precursor. Furthermore, the SEM micrograph of the LiCoO2 shows an almost uniform particle size distribution, with particle sizes between 0.3-0.5 microns. In conclusion, LiCoO2 powder was successfully synthesized using the sol-gel method. The LiCoO2 showed a hexagonal crystal structure, and the prepared sample clearly indicates the pure LiCoO2 phase. The morphology of the sample showed that the particle size distribution is almost uniform. Keywords: cathode material, LiCoO2, lithium-ion rechargeable batteries, sol-gel method
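Peak broadening in such XRD patterns is commonly converted to a crystallite-size estimate with the Scherrer equation. A sketch with hypothetical peak data (the abstract reports no FWHM values):

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size from XRD peak broadening (Scherrer equation):
    D = K * lambda / (beta * cos(theta)), with beta in radians.
    Default wavelength is Cu K-alpha; K = 0.9 is the usual shape factor."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical (003) peak of LiCoO2 near 2-theta = 18.9 deg, FWHM = 0.2 deg
print(round(scherrer_size_nm(18.9, 0.2), 1))
```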
Procedia PDF Downloads 373
3306 The Utilization of Particle Swarm Optimization Method to Solve Nurse Scheduling Problem
Authors: Norhayati Mohd Rasip, Abd. Samad Hasan Basari, Nuzulha Khilwani Ibrahim, Burairah Hussin
Abstract:
Allocating working schedules, especially in shift environments, while ensuring fairness among staff is difficult. In the case of nurse scheduling, setting up the working timetable is time consuming and complicated, and must consider many factors, including rules, regulations, and human factors. The scenario is further complicated since most nurses are women, who may have personal constraints and maternity leave. An undesirable schedule can affect nurse productivity and social life, and the resulting absenteeism can significantly affect patients' lives as well. This paper aims to enhance the scheduling process by utilizing particle swarm optimization to solve the nurse scheduling problem. The results show that the generated schedules fulfil the requirements and produce the lowest cost of constraint violation. Keywords: nurse scheduling, particle swarm optimization, nurse rostering, hard and soft constraints
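The cost of constraint violation minimized by such a PSO is typically a weighted penalty over soft constraints. A minimal sketch for a single nurse, with hypothetical constraints and weights (the paper's actual rule set is not given):

```python
def schedule_cost(schedule, max_consecutive=3, weight_consecutive=10,
                  preferred_off=None, weight_pref=1):
    """Penalty for one nurse's schedule, given as a list of 0/1 per day
    (1 = working). Soft constraints: no long runs of consecutive shifts,
    and respect requested days off."""
    cost = 0
    run = 0
    for day, working in enumerate(schedule):
        run = run + 1 if working else 0
        if run > max_consecutive:
            cost += weight_consecutive  # each day beyond the limit
        if preferred_off and working and day in preferred_off:
            cost += weight_pref  # working on a requested day off
    return cost

# 7-day schedule: 5 consecutive shifts, plus one shift on a requested day off
print(schedule_cost([1, 1, 1, 1, 1, 0, 1], preferred_off={6}))  # -> 21
```

A PSO would encode one candidate roster per particle and minimize the summed penalty over all nurses, with hard constraints (minimum coverage per shift) enforced by repair or heavy weights.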
Procedia PDF Downloads 372
3305 Improving Coverage in Wireless Sensor Networks Using Particle Swarm Optimization Algorithm
Authors: Ehsan Abdolzadeh, Sanaz Nouri, Siamak Khalaj
Abstract:
Today, WSNs have many applications in different fields, such as the environment, military operations, discovery, and monitoring. Coverage and energy consumption are important challenges that these networks must face. This paper addresses the coverage problem with a k-coverage requirement and minimum energy consumption. To minimize energy consumption, visual sensor networks are used that observe and process only those targets located in their view direction; as a result, sensor rotations decrease, and energy consumption is minimized accordingly. Using particle swarm optimization, the coverage optimization is able to ensure the coverage requirement while minimizing sensor rotations, meeting the problem requirement of k≤14. Energy consumption thus decreases, which can subsequently extend the sensors' lifetime. Keywords: k-coverage, particle swarm optimization algorithm, wireless sensor networks, visual sensor networks
Procedia PDF Downloads 115
3304 Research on Ultrafine Particles Classification Using Hydrocyclone with Annular Rinse Water
Authors: Tao Youjun, Zhao Younan
Abstract:
The separation of fine coal can be improved by pre-desliming: the separation effect was significantly enhanced when the fine coal was processed in a Falcon concentrator after removal of the -45 μm coal slime. Ultrafine classification tests using a Krebs classification cyclone with annular rinse water showed that increasing the feeding pressure can effectively avoid heavy particles passing into the overflow and light particles slipping into the underflow. Increasing the rinse water pressure reduced the content of fine-grained particles while increasing the classification size. Increasing the feeding concentration had a negative effect on the classification efficiency, while it increased the classification size due to the enhanced hindered settling caused by the high underflow concentration. Optimization experiments based on an orthogonal design in Design-Expert software, with classification efficiency as the response indicator, showed that the optimal classification efficiency reached 91.32% at a feeding pressure of 0.03 MPa, a rinse water pressure of 0.02 MPa, and a feeding concentration of 12.5%. The corresponding classification size was 49.99 μm, in good agreement with the predicted value. Keywords: hydrocyclone, ultrafine classification, slime, classification efficiency, classification size
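A classification efficiency such as the 91.32% above is commonly computed as a Newton-type efficiency from size analyses of the feed and products. A sketch assuming it is the recovery of fines to the overflow minus the misplaced coarse fraction reporting to the overflow (the abstract does not give the definition used); the recoveries below are hypothetical:

```python
def newton_efficiency(fine_recovery_overflow: float,
                      coarse_to_overflow: float) -> float:
    """Newton classification efficiency (percent): recovery of fines to
    the overflow minus the fraction of coarse misplaced to the overflow.
    Both arguments are fractions in [0, 1]."""
    return 100.0 * (fine_recovery_overflow - coarse_to_overflow)

print(newton_efficiency(0.95, 0.04))  # hypothetical recoveries
```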
Procedia PDF Downloads 167
3303 A Comparison of Sequential Quadratic Programming, Genetic Algorithm, Simulated Annealing, Particle Swarm Optimization for the Design and Optimization of a Beam Column
Authors: Nima Khosravi
Abstract:
This paper describes an integrated optimization study using sequential quadratic programming (SQP), a genetic algorithm, simulated annealing, and particle swarm optimization for the design and optimization of a beam column, comparing the four optimization methods. All methods meet the required constraints, and the lowest value of the objective function is achieved by SQP, which is also the fastest optimizer. SQP is a gradient-based optimizer, hence its results are usually the same after every run; the only thing that affects them is the initial conditions. Since the initial conditions in the various test runs differed widely, the solution converged to different points. The remaining methods are heuristics, which produce different values for different runs even when every parameter is kept constant. Keywords: beam column, genetic algorithm, particle swarm optimization, sequential quadratic programming, simulated annealing
Procedia PDF Downloads 386
3302 Ramp Rate and Constriction Factor Based Dual Objective Economic Load Dispatch Using Particle Swarm Optimization
Authors: Himanshu Shekhar Maharana, S. K .Dash
Abstract:
Economic Load Dispatch (ELD) is a vital optimization process in electric power systems for allocating generation among units so as to compute the cost of generation and the cost of emission of global-warming gases like sulphur dioxide, nitrous oxide, and carbon monoxide. In this work, we apply ramp rate and constriction factor based particle swarm optimization (RRCPSO) to analyze several performance objectives, namely the cost of generation, the cost of emission, and a dual objective function involving both, through simulated results. A 6-unit, 30-bus IEEE test case system is utilized for simulating the results, with improved weight factors and ramp rate limit constraints, to optimize the total cost of generation and emission. The method increases the tendency of particles to venture into the solution space, improving their convergence rates. Earlier works using dispersed PSO (DPSO) and constriction factor based PSO (CPSO) give comparatively higher computational times and poorer optimal solutions than the present work. This paper uses the ramp rate and constriction factor based PSO to compute the various objectives (cost, emission, and the total objective) and compares the results with the DPSO and weight improved PSO (WIPSO) techniques, showing lower computational time and better optimal solutions. Keywords: economic load dispatch (ELD), constriction factor based particle swarm optimization (CPSO), dispersed particle swarm optimization (DPSO), weight improved particle swarm optimization (WIPSO), ramp rate and constriction factor based particle swarm optimization (RRCPSO)
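The constriction factor central to CPSO-type methods is the Clerc-Kennedy factor. A sketch of one velocity update, where the symmetric clamp is a stand-in assumption for the ramp-rate handling (the paper's exact formulation is not given):

```python
import math
import random

def constriction_chi(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction factor:
    chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, with phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def update_velocity(v, x, pbest, gbest, c1=2.05, c2=2.05,
                    ramp_limit=None, rng=random):
    """One constriction-factor PSO velocity update for a generator output.
    The optional symmetric clamp stands in for a ramp-rate limit."""
    chi = constriction_chi(c1, c2)
    v_new = chi * (v + c1 * rng.random() * (pbest - x)
                     + c2 * rng.random() * (gbest - x))
    if ramp_limit is not None:
        v_new = max(-ramp_limit, min(ramp_limit, v_new))
    return v_new
```

With the standard c1 = c2 = 2.05, chi is about 0.7298, which damps the swarm without an explicit inertia weight.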
Procedia PDF Downloads 382
3301 Comparison of Cyclone Design Methods for Removal of Fine Particles from Plasma Generated Syngas
Authors: Mareli Hattingh, I. Jaco Van der Walt, Frans B. Waanders
Abstract:
A waste-to-energy plasma system was designed by Necsa for commercial use to create electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high-efficiency cyclone separator was chosen for this purpose. During this study, two cyclone design methods were explored: the Classic Empirical Method (smaller cyclone) and the Flow Characteristics Method (larger cyclone). These designs were optimized with regard to efficiency, so as to remove at least 90% of the fly ash particles of average size 10 μm by 50 μm. Wood was used as the feed source at a concentration of 20 g/m3 of syngas. The two designs were then compared at room temperature, using Perspex test units and three feed gases of different densities, namely nitrogen, helium, and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas: helium, the least dense of the three gases, simulates higher temperatures, whereas air, the densest, simulates a lower temperature. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs; the lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved to be more robust, while the larger cyclone demonstrated a stronger correlation between its separation efficiency and the feed temperature, and can be assumed to achieve slightly higher efficiencies at elevated temperatures. Both design methods, however, led to good designs. At room temperature, the difference in efficiency between the two cyclones was almost negligible; at higher temperatures, these general tendencies are expected to be amplified, so that the difference between the two design methods becomes more obvious. Though the design specifications were met by both designs, the smaller cyclone is recommended as the default particle separator for the plasma system due to its robust nature. Keywords: cyclone, design, plasma, renewable energy, solid separation, waste processing
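Classic empirical cyclone design of the kind referred to above is typically anchored by a cut-diameter estimate such as Lapple's. A sketch with hypothetical geometry and operating values, not the Necsa design data:

```python
import math

def lapple_cut_diameter_um(mu_pa_s, inlet_width_m, n_turns, v_inlet_m_s,
                           rho_p, rho_g):
    """Classic Lapple cut diameter d50 (particle collected with 50%
    efficiency): d50 = sqrt(9*mu*W / (2*pi*Ne*vi*(rho_p - rho_g))),
    with gas viscosity mu, inlet width W, effective turns Ne, inlet
    velocity vi, and particle/gas densities in SI units."""
    d50_m = math.sqrt(9.0 * mu_pa_s * inlet_width_m /
                      (2.0 * math.pi * n_turns * v_inlet_m_s * (rho_p - rho_g)))
    return d50_m * 1e6

# Hypothetical values for an air-operated lab cyclone at room temperature
print(round(lapple_cut_diameter_um(1.8e-5, 0.02, 6, 15.0, 2500.0, 1.2), 2))
```

Evaluating d50 with the viscosity and density of each test gas is one way to see why helium and air runs bracket the expected hot-syngas behavior.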
Procedia PDF Downloads 214
3300 Development of Green Cement, Based on Partial Replacement of Clinker with Limestone Powder
Authors: Yaniv Knop, Alva Peled
Abstract:
Over the past few years there has been growing interest in the development of Portland composite cement by partial replacement of the clinker with mineral additives. The motivations to reduce the clinker content are threefold: (1) ecological, due to lower emission of CO2 to the atmosphere; (2) economical, due to cost reduction; and (3) scientific/technological, the improvement of performance. Among the mineral additives used and investigated, limestone is one of the most attractive, as it is natural, available, and low cost. The goal of the research is to develop a green cement by partial replacement of the clinker with limestone powder while improving the performance of the cement paste. This work studied blended cements with three limestone powder particle diameters: smaller than, larger than, and similar in size to the clinker particle. Blended cements with limestone of a single particle size distribution and with limestone combining several particle sizes were studied and compared in terms of hydration rate, hydration degree, and the water demand to achieve normal consistency. The performance of these systems was also compared with that of the original cement (without added limestone). It was found that an active material can be replaced by an inert additive, while achieving improved performance, by increasing the packing density of the cement-based particles. This may be achieved by replacing the clinker with limestone powders having a combination of several different particle size distributions. Mathematical and physical models were developed to simulate the setting history from initial to final setting time and to predict the packing density of blended cement with limestone of different sizes and various contents. Besides the effect of limestone, as an inert additive, on the packing density of the blended cement, the influence of the limestone particle size on three different chemical reactions was studied: hydration of the cement, carbonation of the calcium hydroxide, and the reactivity of the limestone with the hydration reaction products. The main results and developments are presented. Keywords: packing density, hydration degree, limestone, blended cement
Procedia PDF Downloads 285