Search results for: Particle Swarm Optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4676

3086 Isolation and Identification of Biosurfactant Producing Microorganism for Bioaugmentation

Authors: Karthick Gopalan, Selvamohan Thankiah

Abstract:

Biosurfactants are amphipathic lipid compounds produced by microbes, consisting of hydrophobic and hydrophilic domains. In the present investigation, ten bacterial strains were isolated from petroleum-contaminated sites near a petrol bunk. The oil collapsing test and haemolytic activity were used as criteria for the primary isolation of biosurfactant-producing bacteria, and all ten strains gave positive results. Among them, two strains were identified as good biosurfactant producers that utilize diesel as a sole carbon source. Biosurfactant production by the isolates was optimized over different parameters: temperature (20 ºC, 25 ºC, 30 ºC, 37 ºC, and 45 ºC), pH (5, 6, 7, 8, and 9), and nitrogen source (ammonium chloride, ammonium carbonate, and sodium nitrate). The biosurfactants produced were extracted, dried, and quantified. As a result of the parameter optimization, the highest biosurfactant yields by the isolated bacterial strains were observed at 30 ºC (0.543 g/L), at pH 7 (0.537 g/L), and with ammonium nitrate (0.431 g/L) as the sole nitrogen source.

Keywords: isolation and identification, biosurfactant, microorganism, bioaugmentation

Procedia PDF Downloads 344
3085 Aerodynamic Design of an Axisymmetric Supersonic Nozzle Using an Optimization Algorithm

Authors: Mohammad Mojtahedpoor

Abstract:

This paper studies a method for the optimal design of supersonic nozzles, capable of producing viscous axisymmetric nozzles with the desired outlet flow quality. In this method, the divergent section is optimized first: its initial contour is designed by the method of characteristics, with a suitable boundary-layer correction added to the inviscid contour. A proper grid is then generated, and the flow is simulated numerically with the AUSM+ scheme under the operating boundary conditions. Finally, the solution outputs are examined and the design is optimized. The numerical method has been validated against experimental results, and, to evaluate the effectiveness of the present method, the resulting nozzles were compared with those of previous studies. The comparisons show that the nozzles obtained by this method are better in several respects, such as flow uniformity, boundary-layer thickness, and the axial length of the nozzle. The design of the convergent section affects flow uniformity through its axial length and inlet diameter; the results show that increasing the length of the convergent part improves the uniformity of the output flow.
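The inviscid starting point for such a contour design is the isentropic area-Mach relation. As an illustrative sketch only (not the paper's method-of-characteristics code, and assuming a ratio of specific heats γ = 1.4), the exit Mach number implied by a given area ratio can be recovered by bisection:

```python
import math

def area_ratio(M, gamma=1.4):
    """Isentropic area ratio A/A* for Mach number M."""
    t = 1 + 0.5 * (gamma - 1) * M * M
    return (1.0 / M) * (2 * t / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1)))

def mach_from_area_ratio(ar, gamma=1.4, supersonic=True):
    """Invert A/A* on the supersonic (or subsonic) branch by bisection."""
    lo, hi = (1.0001, 50.0) if supersonic else (1e-4, 0.9999)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        f = area_ratio(mid, gamma) - ar
        if supersonic:          # A/A* increases with M on this branch
            lo, hi = (mid, hi) if f < 0 else (lo, mid)
        else:                   # and decreases with M on the subsonic branch
            lo, hi = (lo, mid) if f < 0 else (mid, hi)
    return 0.5 * (lo + hi)
```

For example, an area ratio of 1.6875 corresponds to an exit Mach number of 2.0 for γ = 1.4.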

Keywords: nozzle, supersonic, optimization, characteristic method, CFD

Procedia PDF Downloads 196
3084 Nurse Schedule Problem in Mubarak Al Kabeer Hospital

Authors: Khaled Al-Mansour, Nawaf Esmael, Abdulaziz Al-Zaid, Mohammed Al Ateeqi, Ali Al-Yousfi, Sayed Al-Zalzalah

Abstract:

In this project, we create a new nurse schedule according to the nurses' preferences. The project was carried out at Mubarak Al Kabeer Hospital in Kuwait and aims to optimize the nurses' schedule there. The existing schedule was studied and understood well, so that any modification would make the nurses as comfortable as possible. First, the constraints were identified to determine what can and cannot be changed: the hard constraints are the hospital and ministry policies, which cannot be altered, while the soft constraints are the preferences that make nurses more comfortable. Data were collected, and nurses were interviewed to learn what suits them best. All these constraints and data were then formulated as mathematical equations. This report first introduces the topic, including the problem definition. It also covers the optimization of a nurse schedule, its contents, and its importance; furthermore, it describes the data needed to solve the problem and how they were collected. The problem formulation and the methodology are also presented. The LINGO software was used to find the best schedule for the nurses. The resulting schedule follows the nurses' preferences while also respecting hospital policy.
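The hard/soft constraint structure described above can be sketched in miniature. The nurse names, preference scores, and one-shift-per-nurse cap below are illustrative assumptions, not the hospital's actual data; LINGO would solve the full-scale integer program, but a tiny instance can simply be enumerated:

```python
from itertools import product

# Hypothetical data: preference score of each nurse for each shift (higher = better).
nurses = ["A", "B", "C"]
shifts = ["day", "evening", "night"]
pref = {("A", "day"): 3, ("A", "evening"): 1, ("A", "night"): 0,
        ("B", "day"): 1, ("B", "evening"): 3, ("B", "night"): 2,
        ("C", "day"): 2, ("C", "evening"): 2, ("C", "night"): 3}

def best_schedule(max_shifts_per_nurse=1):
    """Enumerate assignments of one nurse per shift; keep the feasible one
    with the highest total preference (hard constraint: workload cap)."""
    best, best_score = None, -1
    for assign in product(nurses, repeat=len(shifts)):
        if any(assign.count(n) > max_shifts_per_nurse for n in nurses):
            continue  # violates the hard workload constraint
        score = sum(pref[(n, s)] for n, s in zip(assign, shifts))
        if score > best_score:
            best, best_score = assign, score
    return dict(zip(shifts, best)), best_score
```

With these toy scores, every nurse receives her top-rated shift (total score 9); the real model replaces brute force with integer programming.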

Keywords: nurse schedule problem, Kuwait, hospital policy, optimization of schedules

Procedia PDF Downloads 265
3083 Investigation of Optimized Mechanical Properties on Friction Stir Welded Al6063 Alloy

Authors: Lingaraju Dumpala, Narasa Raju Gosangi

Abstract:

Friction Stir Welding (FSW) is a relatively new, environmentally friendly, versatile, and widely used joining technique for soft materials such as aluminum. FSW has attracted considerable attention as a solid-state joining method that avoids many common problems of fusion welding and provides a faster, improved way of producing aluminum joints. It can be used in various aerospace, defense, automotive, and transportation applications. To use this joining technique in critical applications, it is necessary to understand friction stir welded joints and their characteristics. This study investigated the mechanical properties of friction stir welded aluminum 6063 alloy. FSW was carried out according to a design of experiments using an L16 mixed-level array, with tool rotational speed, tool feed rate, and tool tilt angle as process parameters. The process parameters were optimized by Taguchi-based regression analysis, and their significance was analyzed using ANOVA. It is observed that the considered process parameters strongly influence the mechanical properties of Al6063.
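The ranking step of a Taguchi analysis can be illustrated with a short sketch. The response values here are invented (the measured strengths are not reproduced in the abstract); for a strength-type response, the "larger is better" signal-to-noise ratio is used, and the level with the highest mean S/N is taken as optimal:

```python
import math

def sn_larger_is_better(values):
    """Taguchi signal-to-noise ratio (dB) for a 'larger is better'
    response such as tensile strength: -10*log10(mean(1/y^2))."""
    return -10 * math.log10(sum(1 / (y * y) for y in values) / len(values))

def mean_sn_by_level(results, factor_index, level):
    """Average S/N over all runs where the given factor is at the given
    level; results is a list of (factor_levels, replicate_values) pairs."""
    sns = [sn_larger_is_better(ys)
           for levels, ys in results if levels[factor_index] == level]
    return sum(sns) / len(sns)
```

For instance, two replicates of 10 MPa give an S/N of exactly 20 dB, and doubling the response raises the S/N, so higher mean S/N identifies the better factor level.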

Keywords: FSW, aluminum alloy, mechanical properties, optimization, Taguchi, ANOVA

Procedia PDF Downloads 130
3082 Dynamic Correlations and Portfolio Optimization between Islamic and Conventional Equity Indexes: A Vine Copula-Based Approach

Authors: Imen Dhaou

Abstract:

This study examines conditional Value at Risk by applying a GJR-EVT-copula model and finds the optimal portfolio for eight Dow Jones Islamic-conventional pairs. Our methodology consists of modeling the data with a bivariate GJR-GARCH model, extracting the filtered residuals, and then applying the peaks-over-threshold (POT) model to fit the residual tails in order to model the marginal distributions. We then use pair-copulas to model the portfolio's risk dependence structure. Finally, with Monte Carlo simulations, we estimate the Value at Risk (VaR) and the conditional Value at Risk (CVaR). The empirical results report the VaR and CVaR values for an equally weighted portfolio of Dow Jones Islamic-conventional pairs. In sum, we find that the optimal investment concentrates on the Islamic-conventional US market index pair, which receives a high investment proportion, while all other index pairs receive low proportions. These results have practical implications for portfolio managers and policymakers concerning optimal asset allocation, portfolio risk management, and the diversification benefits of these markets.
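The final Monte Carlo step can be sketched generically. The simulated returns below are plain Gaussian draws standing in for draws from the fitted GJR-GARCH-EVT-copula model (which the abstract does not specify numerically); VaR and CVaR are then simple order statistics of the loss sample:

```python
import random

def var_cvar(returns, alpha=0.95):
    """Simulated/historical VaR and CVaR (expected shortfall) of a
    return sample, expressed as positive loss numbers."""
    losses = sorted(-r for r in returns)
    k = int(alpha * len(losses))
    var = losses[k]                          # alpha-quantile of losses
    cvar = sum(losses[k:]) / len(losses[k:]) # mean loss beyond the VaR
    return var, cvar

random.seed(0)
# Equally weighted pair of two simulated return streams (stand-ins for an
# Islamic and a conventional index).
sims = [0.5 * (random.gauss(0, 0.01) + random.gauss(0, 0.012))
        for _ in range(10000)]
var95, cvar95 = var_cvar(sims)
```

By construction CVaR is at least as large as VaR, since it averages the losses in the tail beyond the VaR threshold.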

Keywords: CVaR, Dow Jones Islamic index, GJR-GARCH-EVT-pair copula, portfolio optimization

Procedia PDF Downloads 252
3081 Convergence Analysis of a Gibbs Sampling Based Mix Design Optimization Approach for High Compressive Strength Pervious Concrete

Authors: Jiaqi Huang, Lu Jin

Abstract:

Pervious concrete features a high water permeability rate. However, due to the lack of fine aggregates, its compressive strength is usually lower than that of other conventional concrete products. Optimization of the pervious concrete mix design has long been recognized as an effective mechanism for achieving high compressive strength while maintaining the desired permeability rate. In this paper, a Gibbs sampling based algorithm is proposed to approximate the optimal mix design for high compressive strength pervious concrete. We prove that the proposed algorithm efficiently converges to the set of global optimal solutions; the convergence rate and accuracy depend on a control parameter employed in the algorithm. The simulation results show that, using the proposed approach, the system converges to the optimal solution quickly, and the derived optimal mix design achieves the maximum compressive strength while maintaining the desired permeability rate.
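The mechanics of such a sampler can be shown on a toy mix-design problem. The strength surrogate, the variable grids, and the control parameter β below are invented for illustration (the paper's objective combines measured strength and permeability); the point is that resampling one design variable at a time from exp(β·objective) concentrates the chain on the optimum as β grows:

```python
import math, random

# Hypothetical surrogate: compressive strength (MPa) as a function of
# aggregate-to-cement ratio a and water-to-cement ratio w, peaked at
# (a, w) = (4.0, 0.30).
def strength(a, w):
    return 30 - 8 * (a - 4.0) ** 2 - 200 * (w - 0.30) ** 2

A = [3.0, 3.5, 4.0, 4.5, 5.0]      # candidate aggregate/cement ratios
W = [0.25, 0.30, 0.35, 0.40]       # candidate water/cement ratios

def gibbs_optimize(iters=2000, beta=5.0, seed=1):
    """Gibbs sampler: resample one design variable at a time from the
    conditional distribution proportional to exp(beta * strength);
    a larger beta (the control parameter) sharpens convergence."""
    rng = random.Random(seed)
    a, w = rng.choice(A), rng.choice(W)
    for _ in range(iters):
        wts = [math.exp(beta * strength(x, w)) for x in A]
        a = rng.choices(A, weights=wts)[0]
        wts = [math.exp(beta * strength(a, y)) for y in W]
        w = rng.choices(W, weights=wts)[0]
    return a, w
```

With β = 5 the conditional weights overwhelmingly favour the optimal levels, so the chain settles on (4.0, 0.30).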

Keywords: convergence, Gibbs Sampling, high compressive strength, optimal mix design, pervious concrete

Procedia PDF Downloads 175
3080 Optimization Analysis of the Controlled Cooling Process for H-Shaped Steel Beams

Authors: Jiin-Yuh Jang, Yu-Feng Gan

Abstract:

In order to improve the comprehensive mechanical properties of the steel, the cooling rate and the temperature distribution must be controlled during the cooling process. A three-dimensional numerical model was developed to predict the heat transfer coefficient distribution of an H-beam during controlled cooling, with the aim of obtaining a uniform temperature distribution and minimizing the maximum stress and maximum deformation after cooling. An algorithm based on a simplified conjugate-gradient method was used as the optimizer for the heat transfer coefficient distribution. The numerical results showed that, for the case of 5 seconds of air cooling followed by 6 seconds of water cooling with a uniform heat transfer coefficient, the cooling rate is 15.5 ℃/s, the maximum temperature difference is 85 ℃, the maximum stress is 125 MPa, and the maximum deformation is 1.280 mm. After optimizing the heat transfer coefficient distribution for the same cooling time, the cooling rate increases to 20.5 ℃/s, the maximum temperature difference decreases to 52 ℃, the maximum stress decreases to 82 MPa, and the maximum deformation decreases to 1.167 mm.
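The conjugate-gradient idea can be shown in its simplest form, minimizing a quadratic objective (equivalently, solving a symmetric positive-definite linear system). This is a generic sketch, not the authors' simplified conjugated-gradient optimizer coupled to the thermal model:

```python
def cg_solve(A, b, tol=1e-10, max_iter=100):
    """Conjugate-gradient solver for A x = b with symmetric positive
    definite A, using plain Python lists."""
    n = len(b)
    x = [0.0] * n
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [b[i] - Av for i, Av in enumerate(matvec(A, x))]  # residual
    p = r[:]                                              # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))  # exact line search
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # new direction is conjugate to all previous ones
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

On an n-dimensional quadratic the method converges in at most n iterations, which is why CG-type optimizers are attractive for large discretized fields such as a heat transfer coefficient distribution.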

Keywords: controlled cooling, H-Beam, optimization, thermal stress

Procedia PDF Downloads 364
3079 Optimization of Process Parameters using Response Surface Methodology for the Removal of Zinc(II) by Solvent Extraction

Authors: B. Guezzen, M.A. Didi, B. Medjahed

Abstract:

A factorial design of experiments and response surface methodology were implemented to investigate the liquid-liquid extraction of zinc(II) from acetate medium using 1-butyl-imidazolium di(2-ethylhexyl) phosphate [BIm+][D2EHP-]. The extraction parameters, initial pH (2.5, 4.5, and 6.6), ionic liquid concentration (1, 5.5, and 10 mM), and salt concentration (0.01, 5, and 10 mM), were optimized using a three-level full factorial design (3³). The results of the factorial design demonstrate that all these factors are statistically significant, including the quadratic effects of pH and ionic liquid concentration, with the order of significance: IL concentration > salt effect > initial pH. Analysis of variance (ANOVA), showing a high coefficient of determination (R² = 0.91) and low probability values (P < 0.05), confirms the validity of the predicted second-order quadratic model for Zn(II) extraction. At constant temperature (20 °C), initial Zn(II) concentration (1 mM), and an A/O ratio of unity, the optimum extraction conditions were: initial pH 4.8, extractant concentration 9.9 mM, and NaCl concentration 8.2 mM. Under the optimized conditions, the metal ion could be quantitatively extracted.

Keywords: ionic liquid, response surface methodology, solvent extraction, zinc acetate

Procedia PDF Downloads 371
3078 Improvement of Electric Aircraft Endurance through an Optimal Propeller Design Using Combined BEM, Vortex and CFD Methods

Authors: Jose Daniel Hoyos Giraldo, Jesus Hernan Jimenez Giraldo, Juan Pablo Alvarado Perilla

Abstract:

Range and endurance are the main limitations of electric aircraft due to the nature of their power source. Improving the efficiency of such systems is extremely meaningful for encouraging aircraft operation with less environmental impact. Propeller efficiency strongly affects the overall efficiency of the propulsion system; hence its optimization can have an outstanding effect on aircraft performance. An optimization method is applied to an aircraft propeller to maximize range and endurance by estimating the best combination of geometric parameters, such as diameter, airfoil, and chord and pitch distributions, for a specific aircraft design at a given cruise speed; the rotational speed at which the propeller operates at minimum current consumption is then estimated. The optimization is based on the Blade Element Momentum (BEM) method, corrected to account for tip and hub losses and for Mach-number and rotational effects. Furthermore, airfoil lift and drag coefficients are approximated from Computational Fluid Dynamics (CFD) simulations, supported by preliminary studies of grid independence and of the suitability of different turbulence models, and fed to the BEM method with the aim of achieving more reliable results. Additionally, vortex theory is employed to find the optimum pitch and chord distributions for a minimum-induced-loss propeller design. The optimization also takes into account the well-known brushless motor model, thrust constraints from take-off runway limitations, the maximum allowable propeller diameter given the aircraft height, and the maximum motor power. The BEM-CFD method is validated by comparing its predictions for a known APC propeller with both available experimental tests and the performance curves reported by APC, which are based on vortex theory fed with the NASA transonic airfoil code; the method shows an adequate fit with the experimental data, even better than the reported APC data.
The optimal propeller predictions are validated by wind tunnel tests, CFD propeller simulations, and a study of how the propeller would perform if it replaced that of a known aircraft. Trend charts relating a wide range of parameters, such as diameter, voltage, pitch, rotational speed, current, and propeller and electric efficiencies, are obtained and discussed. The implementation of CFD tools improves the accuracy of the BEM predictions. The results also show that a propeller reaches higher efficiency peaks when operating at high rotational speed, owing to the higher Reynolds numbers at which the airfoils exhibit lower drag. On the other hand, the behavior of the current consumption relative to the propulsive efficiency is counterintuitive: the best range and endurance are not necessarily achieved at an efficiency peak.
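The core BEM fixed-point iteration can be sketched for a single blade section. All numbers, the linear lift curve, and the constant drag coefficient below are illustrative assumptions; the paper's implementation adds tip/hub losses, Mach and rotational corrections, and CFD-derived airfoil polars:

```python
import math

def bem_section(r, chord, pitch, B, V, omega, rho=1.225,
                cl_slope=2 * math.pi, cd0=0.02, iters=200, relax=0.5):
    """Under-relaxed BEM iteration for one blade section: solve for the
    axial (a) and swirl (b) induction factors, then return thrust and
    torque per unit span. Linear lift curve, constant drag, no tip loss."""
    a = b = 0.0
    sigma = B * chord / (2 * math.pi * r)      # local solidity
    for _ in range(iters):
        Ua = V * (1 + a)                       # axial velocity at the disc
        Ut = omega * r * (1 - b)               # tangential velocity
        phi = math.atan2(Ua, Ut)               # inflow angle
        cl = cl_slope * (pitch - phi)          # angle of attack = pitch - phi
        cn = cl * math.cos(phi) - cd0 * math.sin(phi)
        ct = cl * math.sin(phi) + cd0 * math.cos(phi)
        ka = sigma * cn / (4 * math.sin(phi) ** 2)
        kb = sigma * ct / (4 * math.sin(phi) * math.cos(phi))
        a += relax * (ka / (1 - ka) - a)       # axial momentum balance
        b += relax * (kb / (1 + kb) - b)       # angular momentum balance
    W2 = (V * (1 + a)) ** 2 + (omega * r * (1 - b)) ** 2
    dT = 0.5 * rho * W2 * B * chord * cn       # thrust per unit span
    dQ = 0.5 * rho * W2 * B * chord * ct * r   # torque per unit span
    return dT, dQ, a, b
```

Summing dT and dQ over the blade sections gives total thrust and torque, from which the propulsive efficiency follows as T·V / (Q·ω).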

Keywords: BEM, blade design, CFD, electric aircraft, endurance, optimization, range

Procedia PDF Downloads 103
3077 Cost Analysis of Hybrid Wind Energy Generating System Considering CO2 Emissions

Authors: M. A. Badr, M. N. El Kordy, A. N. Mohib, M. M. Ibrahim

Abstract:

The basic objective of this research is to study the effect of a hybrid wind energy system on the cost of generated electricity, taking into account the cost of CO2 emission reduction. The system consists of small wind turbine(s), a storage battery bank, and a diesel generator (W/D/B). Using an optimization software package, different system configurations are investigated to reach the optimum configuration based on the net present cost (NPC) and cost of energy (COE) as economic optimization criteria. The cost of avoided CO2 is taken into consideration. The system is intended to supply the electrical load of a small community (six families) in a remote Egyptian area. The investigated system is not connected to the electricity grid and may replace an existing conventional diesel-powered electric supply system to reduce fuel consumption and CO2 emissions. The simulation results showed that the W/D energy system is more economical than diesel alone. The estimated COE is 0.308 $/kWh; subtracting the cost of avoided CO2, the COE reaches 0.226 $/kWh, an external benefit of the wind turbine, as there are no pollutant emissions during the operational phase.
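The two economic criteria are straightforward to compute. A minimal sketch with assumed cash flows and an assumed 6% discount rate over a 20-year project life (the paper's actual inputs are not reproduced in the abstract):

```python
def npc(capital, annual_cost, years=20, rate=0.06):
    """Net present cost: capital plus discounted annual O&M/fuel costs."""
    return capital + sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

def cost_of_energy(total_npc, annual_kwh, years=20, rate=0.06):
    """Levelized COE ($/kWh): NPC annualized by the capital recovery
    factor, divided by the energy served per year."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return total_npc * crf / annual_kwh

# Assumed example: $60k capital, $4k/year running cost, 40 MWh/year served.
coe = cost_of_energy(npc(60000, 4000), 40000)
```

A credit for avoided CO2 would enter as a negative annual cost, lowering the NPC and hence the COE, which is exactly the 0.308 → 0.226 $/kWh effect reported above.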

Keywords: hybrid wind turbine systems, remote areas electrification, simulation of hybrid energy systems, techno-economic study

Procedia PDF Downloads 395
3076 Application of Global Predictive Real Time Control Strategy to Improve Flooding Prevention Performance of Urban Stormwater Basins

Authors: Shadab Shishegar, Sophie Duchesne, Genevieve Pelletier

Abstract:

Sustainability, one of the key elements of smart cities, can be realized by employing real-time control strategies for a city's infrastructure. Nowadays, stormwater management systems play an important role in mitigating the impacts of urbanization on the natural hydrological cycle, and they can be managed in a way that meets smart-city standards. In fact, there is huge potential for the sustainable management of urban stormwater and for its adaptation to global challenges like climate change. Hence, a dynamically managed system that can adapt itself to unstable environmental conditions is desirable. This paper proposes a global predictive real-time control approach to optimize the flood-prevention performance of stormwater management basins. To do so, a mathematical optimization model is developed and then solved using a genetic algorithm (GA). Results show an improved system-level performance of the stormwater basins in comparison to a static strategy.
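The GA step can be illustrated on a toy single-basin problem. The inflow hydrograph, gate capacity, and downstream limit below are invented placeholders (the paper's model is a full system-level formulation); the chromosome is simply an open/closed gate schedule, and the fitness penalizes both peak storage (flood risk) and releases above downstream capacity:

```python
import random

INFLOW = [2, 5, 9, 12, 8, 4, 2, 1]   # m^3 per time step (toy hydrograph)
QMAX, CAPACITY = 6.0, 5.0            # gate discharge and downstream limit

def cost(schedule):
    """Peak storage plus quadratic penalty on over-capacity releases."""
    storage, peak, penalty = 0.0, 0.0, 0.0
    for inflow, gate in zip(INFLOW, schedule):
        out = min(QMAX * gate, storage + inflow)
        if out > CAPACITY:
            penalty += (out - CAPACITY) ** 2
        storage = storage + inflow - out
        peak = max(peak, storage)
    return peak + penalty

def ga(pop_size=40, gens=60, pmut=0.1, seed=3):
    """Elitist genetic algorithm over binary gate schedules."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in INFLOW] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, len(INFLOW))
            child = p1[:cut] + p2[cut:]       # one-point crossover
            children.append([1 - g if rng.random() < pmut else g for g in child])
        pop = elite + children
    return min(pop, key=cost)
```

The evolved schedule beats both static baselines (gate always closed, gate always open), which is the system-level improvement over a static strategy in miniature.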

Keywords: environmental sustainability, optimization, real time control, storm water management

Procedia PDF Downloads 172
3075 Hydrometallurgical Processing of a Nigerian Chalcopyrite Ore

Authors: Alafara A. Baba, Kuranga I. Ayinla, Folahan A. Adekola, Rafiu B. Bale

Abstract:

Due to the increasing demand for and diverse applications of copper oxide, as a pigment in ceramics, in cuprammonium hydroxide solution for rayon, as a p-type semiconductor, in dry cell battery production, and in the safe disposal of hazardous materials, the hydrometallurgical operations of leaching, solvent extraction, and precipitation were examined for recovering copper and producing high-grade copper oxide from a Nigerian chalcopyrite ore in chloride media. For a given set of experimental parameters (acid concentration, reaction temperature, and particle size), the leaching investigation showed that ore dissolution increases with increasing acid concentration and temperature and with decreasing particle diameter at moderate stirring. The kinetic data were analyzed and found to follow a diffusion-controlled mechanism. At optimal conditions, the extent of ore dissolution reached 94.3%. The recovery of total copper from the hydrochloric acid-leached chalcopyrite ore was undertaken by solvent extraction and precipitation techniques, prior to beneficiation of the purified solution as copper oxide. The leach liquor was first purified by precipitating total iron and manganese using Ca(OH)2 and H2O2 as oxidizer at pH 3.5 and 4.25, respectively. An extraction efficiency of 97.3% of total copper was obtained with 0.2 mol/L dithizone in kerosene at 25 ± 2 ºC within 40 minutes, and ≈98% of the Cu in the loaded organic phase was successfully stripped by 0.1 mol/L HCl solution. The recovered pure copper solution was beneficiated by crystallization through alkali addition, followed by calcination at 600 ºC to obtain high-grade copper oxide (tenorite, CuO: 05-0661). Finally, a simple hydrometallurgical scheme for the extraction procedure, amenable to industrial utilization and economic sustainability, is provided.
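The diffusion-control conclusion typically rests on a shrinking-core fit: for product-layer (ash) diffusion control, g(X) = 1 − 3(1 − X)^(2/3) + 2(1 − X) should be linear in time. A sketch with invented conversion data (the real kinetics come from the leaching experiments):

```python
# Hypothetical (time in minutes, fraction X dissolved) pairs.
data = [(10, 0.32), (20, 0.44), (40, 0.585), (60, 0.685), (80, 0.76)]

def shrinking_core_diffusion(X):
    """Ash-layer diffusion control: g(X) = 1 - 3(1-X)^(2/3) + 2(1-X)."""
    return 1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X)

def fit_rate_constant(pts):
    """Least-squares slope through the origin of g(X) vs t; a high R^2
    supports the diffusion-controlled mechanism."""
    k = (sum(t * shrinking_core_diffusion(x) for t, x in pts)
         / sum(t * t for t, _ in pts))
    g = [shrinking_core_diffusion(x) for _, x in pts]
    ss_res = sum((gi - k * t) ** 2 for (t, _), gi in zip(pts, g))
    mean_g = sum(g) / len(g)
    ss_tot = sum((gi - mean_g) ** 2 for gi in g)
    return k, 1 - ss_res / ss_tot
```

With these made-up points the transform is nearly perfectly linear (R² ≈ 1), which is the signature one looks for before declaring diffusion control; repeating the fit at several temperatures then yields the activation energy via an Arrhenius plot.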

Keywords: chalcopyrite ore, Nigeria, copper, copper oxide, solvent extraction

Procedia PDF Downloads 388
3074 Impact of Geomagnetic Variation over Sub-Auroral Ionospheric Region during High Solar Activity Year 2014

Authors: Arun Kumar Singh, Rupesh M. Das, Shailendra Saini

Abstract:

The present work is an attempt to evaluate sub-auroral ionospheric behavior under changing space weather conditions, especially during the high solar activity year 2014. To this end, GPS TEC and ionosonde data over the Indian permanent scientific base Maitri, Antarctica (70°46′00″ S, 11°43′56″ E), were utilized. The results suggest that the nature of the ionospheric response to geomagnetic disturbances depends mainly on the state of high-latitude electrodynamic processes and on the season of occurrence. Fortunately, in this study, both negative and positive ionospheric responses to geomagnetic disturbances were observed within a single year, in different seasons. The study reveals that the combination of equatorward plasma transport and ionospheric compositional changes causes a negative ionospheric response during the summer and equinox seasons, whereas the combination of poleward contraction of the auroral oval and particle precipitation may produce a positive ionospheric response during the winter season. In addition, ionosonde observations provided new experimental evidence of particle precipitation reaching low ionospheric altitudes, i.e., down to the E-layer, in the form of the sudden, strong appearance of an E-layer at 100 km altitude. This sudden appearance, along with a decrease in F-layer electron density, suggests the dominance of NO⁺ over O⁺ in the considered region under geomagnetically disturbed conditions. The strengthening of the E-layer is responsible for the modification of the auroral electrojet and the field-aligned current system. The present study provides good scientific insight into the sub-auroral ionospheric response to changing space weather conditions.

Keywords: high latitude ionosphere, space weather, geomagnetic storms, sub-storm

Procedia PDF Downloads 162
3073 Artificial Bee Colony Based Modified Energy Efficient Predictive Routing in MANET

Authors: Akhil Dubey, Rajnesh Singh

Abstract:

In recent years, the field of ad hoc networks has undergone many rapid modifications, which have brought revolutionary changes to routing. Predictive energy-efficient routing is inspired by the food-searching behavior of bees, a form of swarm intelligence, and improves routing efficiency from an energy point of view. Its main aims are minimum energy consumption during communication and maximized remaining battery power at intermediate nodes. The routing uses two types of bees: scout bees for the exploration phase and forager bees for the evaluation phase. The routing algorithm computes the energy consumption, the fitness ratio, and the goodness of each path. In this paper, we review the literature related to predictive routing, present a modified routing algorithm, and compare its simulation results with artificial bee colony based routing schemes in MANETs, examining the path fitness and the probability of fitness.
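The scout/forager division of labour maps onto the standard artificial bee colony loop. The following is a generic minimization sketch, with a placeholder fitness function rather than the energy/fitness-ratio objective of the routing scheme:

```python
import random

def abc_minimize(f, bounds, n_food=10, limit=20, iters=200, seed=7):
    """Minimal artificial bee colony: employed and onlooker (forager) bees
    perturb food sources (candidate solutions); scout bees replace sources
    that have not improved within `limit` trials. Assumes f(x) >= 0."""
    rng = random.Random(seed)
    dim = len(bounds)
    def rand_source():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    foods = [rand_source() for _ in range(n_food)]
    vals = [f(x) for x in foods]
    trials = [0] * n_food

    def try_improve(i):
        j = rng.randrange(dim)
        k = rng.choice([m for m in range(n_food) if m != i])
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(max(cand[j], bounds[j][0]), bounds[j][1])
        v = f(cand)
        if v < vals[i]:
            foods[i], vals[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                        # employed bees
            try_improve(i)
        weights = [1.0 / (1.0 + v) for v in vals]      # fitness weighting
        for _ in range(n_food):                        # onlookers (foragers)
            try_improve(rng.choices(range(n_food), weights=weights)[0])
        for i in range(n_food):                        # scouts
            if trials[i] > limit:
                foods[i] = rand_source()
                vals[i] = f(foods[i])
                trials[i] = 0
    best = min(range(n_food), key=vals.__getitem__)
    return foods[best], vals[best]
```

In a routing context, a "food source" would encode a candidate path and f would combine residual battery power and per-hop energy cost.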

Keywords: mobile ad hoc network, artificial bee colony, PEEBR, modified predictive routing

Procedia PDF Downloads 411
3072 Investigation of the Morphology of SiO2 Nano-Particles Using Different Synthesis Techniques

Authors: E. Gandomkar, S. Sabbaghi

Abstract:

In this paper, the effects of different synthesis methods, modified sol-gel and precipitation, on the morphology and size of silica nanostructures have been investigated. The resulting products were characterized by particle size analysis, scanning electron microscopy (SEM), X-ray diffraction (XRD), and Fourier transform infrared (FT-IR) spectroscopy. As a result, the SiO2 obtained by the sol-gel and precipitation methods was spherical, whereas the modified sol-gel method yielded a nanolayer structure.

Keywords: modified sol-gel, precipitation, nanolayer, Na2SiO3, nanoparticle

Procedia PDF Downloads 287
3071 Modelling and Optimization Analysis of Silicon/MgZnO-CBTSSe Tandem Solar Cells

Authors: Vallisree Sivathanu, Kumaraswamidhas Lakshmi Annamalai, Trupti Ranjan Lenka

Abstract:

We report a tandem solar cell model with silicon as the bottom-cell absorber material and Cu₂BaSn(S,Se)₄ (CBTSSe) as the absorber material for the top cell. As a first step, the top and bottom cells were modelled and validated by comparison with experiment. Once the individual cells were validated, the tandem structure was modelled with indium tin oxide (ITO) as the conducting layer between the top and bottom cells. The tandem structure yielded better open-circuit voltage and fill factor; however, the efficiency obtained is 7.01%. The top and bottom cells are investigated with the help of electron-hole current density, photogeneration rate, and external quantum efficiency profiles. To minimize the various loss mechanisms in the tandem solar cell, the material parameters are optimized within experimentally achievable limits. The top cell was optimized first; the bottom cell was then optimized to maximize light absorption, and upon minimizing the current and photon losses in the tandem structure, the maximum achievable efficiency is predicted to be 19.52%.

Keywords: CBTSSe, silicon, tandem, solar cell, device modeling, current losses, photon losses

Procedia PDF Downloads 110
3070 Mathematical Modeling Pressure Losses of Trapezoidal Labyrinth Channel and Bi-Objective Optimization of the Design Parameters

Authors: Nina Philipova

Abstract:

The influence of the geometric parameters of a trapezoidal labyrinth channel on the pressure losses along the labyrinth length is investigated in this work. The impact of the dentate height is studied at fixed values of the dentate angle and the dentate spacing. The objective of the work presented in this paper is to derive a mathematical model of the pressure losses along the labyrinth length as a function of the dentate height. Numerical simulations of the water flow are performed using the commercial codes ANSYS GAMBIT and FLUENT, with the dripper inlet pressure set to 1 bar. As a result, the mathematical model of the pressure losses is determined as a second-order polynomial by means of the commercial code STATISTICA. Bi-objective optimization is performed using the mean algebraic utility function, and the optimum value of the dentate height is defined at fixed values of the dentate angle and the dentate spacing. The derived model of the pressure losses and the optimum value of the dentate height serve as a basis for a more successful emitter design.
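The second-order fit and the optimum it implies can be sketched as follows. The (height, pressure-loss) pairs are invented placeholders for the CFD results; the fit is an ordinary least-squares quadratic via the normal equations, and the single-objective optimum is the vertex of the parabola:

```python
# Hypothetical (dentate height in mm, pressure loss in kPa) pairs.
data = [(0.6, 41.2), (0.8, 36.5), (1.0, 34.1), (1.2, 34.9), (1.4, 38.0)]

def fit_quadratic(pts):
    """Least-squares fit of y = a + b*x + c*x^2 via the normal equations
    (pure Python, Gauss-Jordan elimination on the 3x3 system)."""
    sx = [sum(x ** k for x, _ in pts) for k in range(5)]
    sy = [sum(y * x ** k for x, y in pts) for k in range(3)]
    m = [[sx[0], sx[1], sx[2], sy[0]],
         [sx[1], sx[2], sx[3], sy[1]],
         [sx[2], sx[3], sx[4], sy[2]]]
    for col in range(3):                       # partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [m[r][k] - f * m[col][k] for k in range(4)]
    return [m[r][3] / m[r][r] for r in range(3)]  # a, b, c

a, b, c = fit_quadratic(data)
h_opt = -b / (2 * c)   # vertex of the parabola = minimum-loss dentate height
```

For these toy data the fitted parabola opens upward (c > 0) and the minimum falls near a height of 1.06 mm; the paper's bi-objective step then trades this loss optimum against the second criterion through the utility function.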

Keywords: drip irrigation, labyrinth channel hydrodynamics, numerical simulations, Reynolds stress model

Procedia PDF Downloads 152
3069 Quantum Mechanics as a Limiting Case of Relativistic Mechanics

Authors: Ahmad Almajid

Abstract:

The idea of unifying quantum mechanics with general relativity remains a dream for many researchers, as physics offers only two paths: Einstein's, based mainly on particle mechanics, and that of Paul Dirac and others, based on wave mechanics. The incompatibility of the two approaches stems from the radical difference in their initial assumptions and in the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems. First, in quantum mechanics, despite its success, the measurement problem and the interpretation of the wave function remain obscure. Second, in special relativity, despite the success of the equivalence of rest mass and energy, the fact that the energy becomes infinite at the speed of light is contrary to logic, because neither the speed of light nor the mass of the particle is infinite. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light, and in order to solve them, one must understand how to move from relativistic mechanics to quantum mechanics, or rather how to unify them in a way different from Dirac's method, in order to go along with God or Nature, since, as Einstein said, "God doesn't play dice." From de Broglie's hypothesis of wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, we arrive at new equations in quantum mechanics. In this paper, the two problems of modern physics mentioned above are addressed; it can be said that this new approach to quantum mechanics may enable us to unify it with general relativity quite simply. If experiments prove the validity of the results of this research, we may in the future be able to transport matter at speeds close to the speed of light.
Finally, this research yielded three important results: 1) the quantum Lorentz factor; 2) Planck energy as a limiting case of Einstein energy; 3) new equations for quantum mechanics that match and extend Dirac's equations, reached in a completely different way from Dirac's method. These equations show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax: Bohr suggested that unobserved particles are in a probabilistic state, whereupon Einstein made his famous claim that "God does not play dice." Thus, Einstein was right, especially in not accepting the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. Indeed, the results of our research indicate that God really does not play dice: when the electron disappears, it turns into amicable particles or an elastic medium, according to the equations above. Likewise, Bohr was also right when he indicated that a science like quantum mechanics is needed to monitor and study the motion of subatomic particles; but the picture before him was blurry and unclear, so he resorted to the probabilistic interpretation.

Keywords: Lorentz quantum factor, Planck energy as a limiting case of Einstein energy, real quantum mechanics, new equations for quantum mechanics

Procedia PDF Downloads 72
3068 Study on the Electrochemical Performance of Graphene Effect on Cadmium Oxide in Lithium Battery

Authors: Atef Y. Shenouda, Anton A. Momchilov

Abstract:

CdO-graphene samples with different stoichiometric ratios of Cd(CH₃COO)₂ to graphene were prepared by hydrothermal reaction. The crystalline phases of pure CdO and of the 3CdO:1 graphene sample were identified by X-ray diffraction (XRD), and the particle morphology was studied by SEM. Furthermore, impedance measurements were applied. Galvanostatic cycling of the cells was carried out between potential limits of 0.01 and 3 V vs. Li/Li⁺ at a cycling current of 10⁻⁴ A. The specific discharge capacity of the 3CdO-1G cell was about 450 Ah·kg⁻¹ for more than 100 cycles.

Keywords: CdO, graphene, negative electrode, lithium battery

Procedia PDF Downloads 159
3067 Optimizing Logistics for Courier Organizations with Considerations of Congestions and Pickups: A Courier Delivery System in Amman as Case Study

Authors: Nader A. Al Theeb, Zaid Abu Manneh, Ibrahim Al-Qadi

Abstract:

The traveling salesman problem (TSP) is a combinatorial integer optimization problem that asks: what is the optimal route for a vehicle to traverse in order to deliver requests to a given set of customers? It is widely used by package carriers' distribution centers. The main goal of applying the TSP in courier organizations is to minimize the time it takes the courier on each trip to deliver or pick up shipments during a day. In this article, an optimization model is constructed to create a new TSP variant that optimizes routing in a courier organization while accounting for congestion in Amman, the capital of Jordan. Real data were collected by different methods and analyzed. Then, CPLEX (via Concert Technology) was used to solve the proposed model for some randomly generated data instances and for the real collected data. The results show a great improvement in time compared with the current trip times, and an economic study was subsequently conducted to assess the impact of using such models.
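For trips with only a handful of stops, a congestion-aware TSP can even be solved by exhaustive enumeration. The travel times and congested links below are invented for illustration (CPLEX handles the full-size instances); congestion is modeled crudely as a multiplier on selected links:

```python
from itertools import permutations

# Hypothetical travel-time matrix (minutes) between the depot (0) and
# four delivery points; CONGESTED scales links through the city centre.
BASE = [[0, 10, 15, 20, 12],
        [10, 0, 9, 14, 18],
        [15, 9, 0, 7, 11],
        [20, 14, 7, 0, 6],
        [12, 18, 11, 6, 0]]
CONGESTED = {(0, 1), (1, 0), (1, 2), (2, 1)}
FACTOR = 1.8  # assumed peak-hour slowdown on congested links

def travel(i, j):
    t = BASE[i][j]
    return t * FACTOR if (i, j) in CONGESTED else t

def best_tour():
    """Exhaustive TSP: cheapest depot-to-depot tour over all customers
    (fine for the handful of stops on a single courier trip)."""
    stops = range(1, len(BASE))
    best, best_t = None, float("inf")
    for perm in permutations(stops):
        route = (0, *perm, 0)
        t = sum(travel(a, b) for a, b in zip(route, route[1:]))
        if t < best_t:
            best, best_t = route, t
    return best, best_t
```

Enumeration grows factorially with the number of stops, which is precisely why the full model is handed to an integer-programming solver for realistic fleets.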

Keywords: travel salesman problem, congestions, pick-up, integer programming, package carriers, service engineering

Procedia PDF Downloads 424
3066 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. It can be done by numerical models or experimental measurements, but the numerical approach can be useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness for predicting inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and Brownian and turbulent deposition; three deposition models were implemented. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air change rates (ACH). The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, particles were generated until reaching the steady-state condition (emission period); generation then stopped, and concentration measurements continued until reaching the background concentration (decay period). The tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively.
The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared in the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model, although a difference between the actual and predicted values remains. In the emission period, the modified WMR results closely follow the experimental data. However, the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rate predicted by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, will affect the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
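The modification described above amounts to adding a first-order deposition loss to the standard WMR mass balance. A minimal sketch, with illustrative parameter values that are not the chamber's, integrates the balance dC/dt = S/V − (Q/V + k_dep)·C with forward Euler:

```python
def simulate_wmr(S, Q, V, k_dep, t_end, dt=0.01, C0=0.0):
    """Forward-Euler integration of a modified well-mixed room balance:
        dC/dt = S/V - (Q/V + k_dep) * C
    S: emission rate (µg/h), Q: ventilation flow (m³/h), V: room volume (m³),
    k_dep: first-order particle deposition rate (1/h)."""
    C = C0
    for _ in range(int(t_end / dt)):
        C += (S / V - (Q / V + k_dep) * C) * dt
    return C

# Illustrative chamber: 0.512 m³, ACH = 3 (so Q = 3*V), deposition 0.5 1/h.
V = 0.512
C_ss = simulate_wmr(S=51.2, Q=3 * V, V=V, k_dep=0.5, t_end=10)
# With deposition, steady state is S / (Q + k_dep*V), lower than S/Q alone.
```

Setting k_dep = 0 recovers the classic WMR prediction, which is why the classic model overestimates concentrations once deposition matters.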

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 98
3065 Personnel Selection Based on Step-Wise Weight Assessment Ratio Analysis and Multi-Objective Optimization on the Basis of Ratio Analysis Methods

Authors: Emre Ipekci Cetin, Ebru Tarcan Icigen

Abstract:

The personnel selection process is considered one of the most important and most difficult issues in human resources management. At the selection stage, applicants are evaluated according to certain criteria in an effort to select the most appropriate candidate. This process can be complicated for the managers who carry it out: candidates should be evaluated according to different criteria such as work experience, education, and foreign language level, and it is crucial that a rational selection process considers all the criteria in an integrated structure. In this study, the problem of choosing the front office manager of a 5-star accommodation enterprise operating in Antalya is addressed using multi-criteria decision-making methods. In this context, the SWARA (step-wise weight assessment ratio analysis) and MOORA (multi-objective optimization on the basis of ratio analysis) methods, which have relatively few applications compared with other methods, were used together. First, the SWARA method was used to calculate the weights of the criteria and sub-criteria determined by the business. After the weights were obtained, the MOORA method was used to rank the candidates using the ratio system and the reference point approach. Recruitment processes differ from sector to sector and from operation to operation, and there are a number of criteria that businesses must take into consideration in accordance with the structure of each sector. It is of utmost importance that all candidates are evaluated objectively within the framework of these carefully selected criteria.
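The MOORA ratio system mentioned above can be sketched in a few lines: each criterion column is vector-normalized, then weighted benefit ratios are added and weighted cost ratios subtracted. The candidate data, criteria, and weights below are hypothetical, and the SWARA step is assumed to have already produced the weights:

```python
import math

def moora_ratio_system(matrix, weights, benefit):
    """MOORA ratio system: vector-normalize each criterion column, then
    score = sum(weighted benefit ratios) - sum(weighted cost ratios)."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt)))
             for j in range(n_crit)]
    scores = []
    for row in matrix:
        s = 0.0
        for j, x in enumerate(row):
            r = weights[j] * x / norms[j]
            s += r if benefit[j] else -r
        scores.append(s)
    return scores

# Hypothetical candidates: (experience, language level, salary expectation);
# the first two are benefit criteria, the last is a cost criterion.
matrix = [[5, 3, 40], [3, 4, 35], [4, 5, 30]]
weights = [0.5, 0.3, 0.2]          # assumed to come from SWARA
scores = moora_ratio_system(matrix, weights, benefit=[True, True, False])
```

The candidate with the highest score ranks first; the reference point approach used alongside it in the study is a separate distance-to-ideal ranking not shown here.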

Keywords: accommodation establishments, human resource management, multi-objective optimization on the basis of ratio analysis, multi-criteria decision making, step-wise weight assessment ratio analysis

Procedia PDF Downloads 341
3064 Application of Homer Optimization to Investigate the Prospects of Hybrid Renewable Energy System in Rural Area: Case of Rwanda

Authors: Emile Niringiyimana, LI Ji Qing, Giovanni Dushimimana, Virginie Umwere

Abstract:

The development and utilization of renewable energy (RE) can not only effectively reduce carbon dioxide (CO₂) emissions but also mitigate electricity shortages in rural areas. Hybrid RE systems are a promising way to provide consistent and continuous power to isolated areas. This work investigated the prospects and cost-effectiveness of a complementary hybrid system combining a 100 kW solar PV system and a small-scale 200 kW hydropower station in the south of Rwanda. To establish the optimal size of the RE system with adequate sizing of components, electricity demand, solar radiation, hydrology, and climate data were used as system inputs. The average daily solar radiation in Rukarara is 5.6 kWh/m² and the average wind speed is 3.5 m/s. The ideal integrated RE system, according to HOMER optimization, consists of 91.21 kW PV, 146 kW hydropower, and 12 x 24 V li-ion batteries with a 20 kW converter. Such hybrid systems are refined through control, sizing, and choice of components to reduce the net present cost (NPC) of the system, the unmet load, the cost of energy, and CO₂ emissions. The power consumption varies according to the dominant source of energy in the system, the energy compensation being controlled depending on the generation capacity of each power source. The initial investment of the RE system is $977,689.25, and its operation and maintenance expenses are $142,769.39 over a 25-year period. Although the investment is very high, the targeted future profits are large, taking into consideration the high cost of implementing rural electrification infrastructure, rising electricity prices, and the five-year payback period. The study outcomes suggest that the standalone hybrid PV-hydropower system is feasible with zero pollution in the Rukara community.
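The cost figures above are outputs of HOMER's life-cycle accounting. A minimal sketch of the same kind of calculation is below; the discount rate is an assumption (the abstract does not state HOMER's economic inputs), and salvage and replacement costs, which HOMER also includes, are omitted:

```python
def net_present_cost(capital, annual_om, discount_rate, years):
    """Capital cost plus the discounted stream of annual O&M costs."""
    return capital + sum(annual_om / (1 + discount_rate) ** t
                         for t in range(1, years + 1))

def simple_payback(capital, annual_net_revenue):
    """Years to recover the capital cost, ignoring discounting."""
    return capital / annual_net_revenue

# Hypothetical check: at a 0% discount rate, NPC is just capital + total O&M.
print(net_present_cost(977689.25, 142769.39 / 25, 0.0, 25))
```

A positive discount rate shrinks the present value of future O&M, so the NPC at, say, 8% would be noticeably lower than the 0% figure printed here.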

Keywords: HOMER optimization, hybrid power system, renewable energy, net present cost, solar PV systems

Procedia PDF Downloads 58
3063 Cable De-Commissioning of Legacy Accelerators at CERN

Authors: Adya Uluwita, Fernando Pedrosa, Georgi Georgiev, Christian Bernard, Raoul Masterson

Abstract:

CERN is an international organisation, funded by 23 countries, that provides the particle physics community with excellence in particle accelerators and other related facilities. Founded in 1954, CERN operates a wide range of accelerators that allow groundbreaking science to be conducted. Accelerators bring particles to high levels of energy and make them collide with each other or with fixed targets, creating specific conditions that are of high interest to physicists. A chain of accelerators is used to ramp up the energy of particles and eventually inject them into the largest and most recent one: the Large Hadron Collider (LHC). Among this chain of machines is, for instance, the Proton Synchrotron, which was started in 1959 and is still in operation. These machines, called "injectors", keep evolving over time, as does the related infrastructure. Massive decommissioning of obsolete cables started in 2015 at CERN as part of the so-called "injectors de-cabling project phase 1". Its goal was to replace ageing cables and remove unused ones, freeing space for new cables necessary for upgrades and consolidation campaigns. To proceed with the de-cabling, a project coordination team was assembled. The start of this project led to the investigation of legacy cables throughout the organisation, and identifying cables stacked over half a century proved to be arduous. Phase 1 of the injectors de-cabling ran for three years and was completed successfully after overcoming some difficulties. Phase 2, started three years later, focused on improving safety and structure with the introduction of a quality assurance procedure. This paper discusses the implementation of this quality assurance procedure throughout phase 2 of the project and the transition between the two phases. Hundreds of kilometres of cable were removed in the injector complex at CERN from 2015 to 2023.

Keywords: CERN, de-cabling, injectors, quality assurance procedure

Procedia PDF Downloads 84
3062 Hybridized Approach for Distance Estimation Using K-Means Clustering

Authors: Ritu Vashistha, Jitender Kumar

Abstract:

Clustering using the K-means algorithm is a very common way to understand and analyze output data: grouping similar objects is the basis of clustering. Given N objects to be partitioned into K clusters, where K is always supposed to be less than N, each cluster has its own centroid; the major problem is how to identify whether a cluster is correct based on the data. Cluster formation is not a single-pass task over each tuple, row, record, or entity but an iterative process: each and every record is checked and its similarity or dissimilarity examined. This iterative process can be very lengthy and may fail to give an optimal clustering within an acceptable time. To overcome this drawback, we propose a formula to find the clusters at run time, so that the approach can give optimal results. The proposed approach uses the Euclidean distance formula to find the minimum distance between slots, which we technically call clusters; the same approach was also applied to the Ant Colony Optimization (ACO) algorithm, which results in the production of two- and multi-dimensional matrices.
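For reference, the iterative baseline the paper seeks to improve on is plain Lloyd's K-means with Euclidean distance. The sketch below uses toy 2-D data and does not reproduce the paper's run-time cluster formula:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign each point to the nearest centroid
    (squared Euclidean distance), then recompute centroids, until stable."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        new = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:   # converged
            break
        centroids = new
    return centroids, clusters

# Two well-separated toy blobs; k-means should recover them.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
cents, clusters = kmeans(points, 2)
```

Each full pass touches every record, which is exactly the cost the abstract's run-time formula aims to reduce.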

Keywords: ant colony optimization, data clustering, centroids, data mining, k-means

Procedia PDF Downloads 125
3061 Solving the Wireless Mesh Network Design Problem Using Genetic Algorithm and Simulated Annealing Optimization Methods

Authors: Moheb R. Girgis, Tarek M. Mahmoud, Bahgat A. Abdullatif, Ahmed M. Rabie

Abstract:

Mesh clients, mesh routers, and gateways are the components of a Wireless Mesh Network (WMN). In a WMN, gateways connect to the Internet using wireline links and supply Internet access services to users. Due to the limited wireless channel bit rate, multiple gateways are usually needed, which takes time and costs a lot of money to set up. WMN is a highly developed technology that offers end users wireless broadband access. It offers a high degree of flexibility compared to conventional networks; however, this attribute comes at the expense of a more complex construction. Therefore, the planning and optimization of WMNs is a challenge. In this paper, we address this challenge using a genetic algorithm and simulated annealing, which enable searching for a low-cost WMN configuration under constraints and determine the number of gateways used. Experimental results demonstrated the effectiveness of the genetic algorithm and simulated annealing in minimizing WMN costs while satisfying quality-of-service requirements; the proposed models significantly outperform existing solutions.
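A generic simulated annealing loop of the kind used for such topology searches can be sketched as follows. The toy cost function (a fixed cost per opened gateway site plus a heavy penalty per uncovered router) and all coordinates are invented for illustration and are far simpler than a real WMN planning model:

```python
import math
import random

def anneal(cost, init, neighbor, T0=10.0, alpha=0.95, steps=400, seed=1):
    """Minimize cost(state); accept worse neighbors with probability exp(-d/T)."""
    rng = random.Random(seed)
    cur, cur_c = init, cost(init)
    best, best_c = cur, cur_c
    T = T0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        cand_c = cost(cand)
        d = cand_c - cur_c
        if d <= 0 or rng.random() < math.exp(-d / T):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        T *= alpha   # geometric cooling schedule
    return best, best_c

# Toy instance: open a subset of candidate gateway sites to cover all routers.
routers = [(0, 0), (1, 2), (8, 8), (9, 6)]
sites = [(0, 1), (9, 7), (5, 5)]
RADIUS2, SITE_COST, PENALTY = 9, 10, 100

def covered(r, state):
    return any(state[i] and (r[0] - s[0]) ** 2 + (r[1] - s[1]) ** 2 <= RADIUS2
               for i, s in enumerate(sites))

def wmn_cost(state):
    return SITE_COST * sum(state) + PENALTY * sum(not covered(r, state) for r in routers)

def flip_one(state, rng):
    i = rng.randrange(len(state))
    return state[:i] + (1 - state[i],) + state[i + 1:]

best, best_c = anneal(wmn_cost, (0, 0, 0), flip_one)
```

A genetic algorithm would attack the same binary state with a population, crossover, and mutation instead of a single cooling trajectory.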

Keywords: wireless mesh networks, genetic algorithms, simulated annealing, topology design

Procedia PDF Downloads 456
3060 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks

Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton

Abstract:

Natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method which consists of a hybrid dual-optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified such that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution and competition phases, which, combined with an initialization strategy based on low-discrepancy sampling, allows for a more effective exploration of the corresponding solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully-connected neural network, resulting in regression models used to characterize the process of obtaining bricks using silicon-based materials. Installations in the raw ceramics industry, i.e., brick manufacturing, are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, to minimize the emissions represented by CO and CH₄. Our approach determines regression models which perform significantly better than those found using the traditional ICA for the aforementioned problem, resulting in better convergence and a substantially lower error.
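The two-stage idea (global metaheuristic, then local gradient refinement) can be sketched generically. The random-sampling global stage below is a deliberately simplified stand-in for the modified ICA, and the quadratic test function is arbitrary; the authors' actual setup optimizes neural network weights:

```python
import random

def numeric_grad(f, x, h=1e-6):
    """Central-difference gradient estimate of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def hybrid_optimize(f, dim, lo, hi, n_global=300, lr=0.05, n_local=300, seed=0):
    rng = random.Random(seed)
    # Stage 1: global exploration (simplified stand-in for the modified ICA).
    x = min(([rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_global)),
            key=f)
    # Stage 2: local gradient descent from the best global candidate.
    for _ in range(n_local):
        g = numeric_grad(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

f = lambda x: sum((xi - 1.0) ** 2 for xi in x)   # toy regression-style loss
x_opt = hybrid_optimize(f, dim=2, lo=-5.0, hi=5.0)
```

The division of labor is the point: the global stage avoids poor basins of attraction, while gradient descent polishes the solution far faster than the metaheuristic could on its own.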

Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions

Procedia PDF Downloads 77
3059 Chronic Impact of Silver Nanoparticle on Aerobic Wastewater Biofilm

Authors: Sanaz Alizadeh, Yves Comeau, Arshath Abdul Rahim, Sunhasis Ghoshal

Abstract:

The application of silver nanoparticles (AgNPs) in personal care products and various household and industrial products has resulted in an inevitable environmental exposure to such engineered nanoparticles (ENPs). Ag ENPs, released via household and industrial wastes, reach water resource recovery facilities (WRRFs), yet their fate and transport in WRRFs and their potential risk to biological wastewater processes are poorly understood. Accordingly, our main objective was to elucidate the impact of long-term continuous exposure to AgNPs on the biological activity of aerobic wastewater biofilm. The fate, transport and toxicity of 10 μg.L⁻¹ and 100 μg.L⁻¹ PVP-stabilized AgNPs (50 nm) were evaluated in an attached-growth biological treatment process, using lab-scale moving bed bioreactors (MBBRs). Two MBBR systems for organic matter removal were fed with a synthetic influent and operated at a hydraulic retention time (HRT) of 180 min and a 60% volumetric filling ratio of Anox-K5 carriers with a specific surface area of 800 m²/m³. Both reactors were operated for 85 days after reaching steady-state conditions to develop a mature biofilm. The impact of AgNPs on the biological performance of the MBBRs was characterized over a period of 64 days in terms of the filtered biodegradable COD (SCOD) removal efficiency, the biofilm viability and key enzymatic activities (α-glucosidase and protease). The AgNPs were quantitatively characterized using single-particle inductively coupled plasma mass spectrometry (spICP-MS), determining simultaneously the particle size distribution, particle concentration and dissolved silver content in influent, bioreactor and effluent samples. The generation of reactive oxygen species (ROS) and the resulting oxidative stress were assessed as the proposed toxicity mechanism of AgNPs.
Results indicated that a low concentration of AgNPs (10 μg.L⁻¹) did not significantly affect the SCOD removal efficiency, whereas a significant reduction in treatment efficiency (37%) was observed at 100 μg.L⁻¹ AgNPs. Neither the viability nor the enzymatic activities of the biofilm were affected at 10 μg.L⁻¹ AgNPs, but the higher concentration of AgNPs induced cell membrane integrity damage, resulting in a 31% loss of viability and reductions in α-glucosidase and protease enzymatic activities of 31% and 29%, respectively, over the 64-day exposure period. The elevated intracellular ROS in the biofilm at the higher AgNPs concentration over time was consistent with the reduced biological performance of the biofilm, confirming the occurrence of nanoparticle-induced oxidative stress in the heterotrophic biofilm. The spICP-MS analysis demonstrated a decrease in the nanoparticle concentration over the first 25 days, indicating significant partitioning of AgNPs into the biofilm matrix in both reactors. The concentration of nanoparticles increased in the effluent of both reactors after 25 days, however, indicating a decreased retention capacity of AgNPs in the biofilm. The observed significant detachment of biofilm also contributed to a higher release of nanoparticles, owing to the cell-wall-destabilizing properties of AgNPs as an antimicrobial agent. The removal efficiency of PVP-AgNPs and the biological responses of the biofilm were a function of nanoparticle concentration and exposure time. This study contributes to a better understanding of the fate and behavior of AgNPs in biological wastewater processes, providing key information that can be used to predict the environmental risks of ENPs in aquatic ecosystems.

Keywords: biofilm, silver nanoparticle, single particle ICP-MS, toxicity, wastewater

Procedia PDF Downloads 266
3058 A Fast Optimizer for Large-scale Fulfillment Planning based on Genetic Algorithm

Authors: Choonoh Lee, Seyeon Park, Dongyun Kang, Jaehyeong Choi, Soojee Kim, Younggeun Kim

Abstract:

Market Kurly is the first South Korean online grocery retailer to guarantee same-day and overnight shipping. More than 1.6 million customers place an average of 4.7 million orders and add 3 to 14 products to a cart per month. The company has sold almost 30,000 distinct products in the past 6 months, including food items, cosmetics, kitchenware, toys for kids and pets, and even flowers. The company operates and is expanding multiple dry, cold, and frozen fulfillment centers in order to store and ship these products. Due to the scale and complexity of the fulfillment, pick-pack-ship processes are planned and operated in batches, and thus the planning that decides the batching of customers' orders is a critical factor in overall productivity. This paper introduces a metaheuristic optimization method that reduces the complexity of batch processing in a fulfillment center. The method is an iterative genetic algorithm with heuristic creation and evolution strategies; it aims to group similar orders into pick-pack-ship batches to minimize the total number of distinct products per batch. With a well-designed approach to creating initial genes, the method produces streamlined plans, up to 13.5% less complex than the actual plans carried out in the company's fulfillment centers in the previous months. Furthermore, our digital-twin simulations show that the optimized plans can reduce the operation time for packing, the most complex and time-consuming task in the process, by 3%. The optimization method implements a multithreading design on the Spring framework to support the company's warehouse management systems in near real time, finding a solution for 4,000 orders within 5 to 7 seconds on an AWS c5.2xlarge instance.
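The batching objective (group similar orders so batches share products) can be sketched with a tiny GA. The chromosome encoding, fitness with a capacity penalty, operators, and data below are all invented for illustration and are far simpler than the production system described:

```python
import random

def fitness(assign, orders, n_batches, cap):
    """Total distinct products across batches, plus a penalty for overfull batches."""
    batches = [set() for _ in range(n_batches)]
    counts = [0] * n_batches
    for order, b in zip(orders, assign):
        batches[b] |= order
        counts[b] += 1
    penalty = 1000 * sum(max(0, c - cap) for c in counts)
    return sum(len(b) for b in batches) + penalty

def ga_batch(orders, n_batches, cap, pop_size=40, gens=60, seed=7):
    rng = random.Random(seed)
    n = len(orders)
    fit = lambda a: fitness(a, orders, n_batches, cap)
    pop = [tuple(rng.randrange(n_batches) for _ in range(n)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fit)
        elite = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = tuple(rng.choice(pair) for pair in zip(a, b))   # uniform crossover
            if rng.random() < 0.3:                                  # point mutation
                i = rng.randrange(n)
                child = child[:i] + (rng.randrange(n_batches),) + child[i + 1:]
            children.append(child)
        pop = elite + children
    return min(pop, key=fit)

# Four toy orders, two batches of capacity 2; pairing identical orders is optimal.
orders = [{1, 2}, {1, 2}, {3, 4}, {3, 4}]
best = ga_batch(orders, n_batches=2, cap=2)
```

On this toy instance the optimal grouping pairs the identical orders for a fitness of 4; the production system additionally uses heuristic gene creation and multithreading to hit its 5-to-7-second budget.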

Keywords: fulfillment planning, genetic algorithm, online grocery retail, optimization

Procedia PDF Downloads 80
3057 A Novel Approach towards Test Case Prioritization Technique

Authors: Kamna Solanki, Yudhvir Singh, Sandeep Dalal

Abstract:

Software testing is a time- and cost-intensive process. Scrutiny of the code and rigorous testing are required to identify and rectify putative bugs. The process of bug identification and consequent correction is continuous in nature, and often some bugs are removed after the software has been launched in the market. This process of validating the altered software during the maintenance phase is termed regression testing. Regression testing is ubiquitously subject to resource constraints; therefore, selecting an appropriate set of test cases from the ensemble of the entire gamut of test cases is a critical issue for regression test planning. This paper presents a novel method for designing a prioritization process that optimizes the fault detection rate and performance of regression testing under predefined constraints. The proposed test case prioritization method, m-ACO, alters the food source selection criteria of natural ants and is essentially a modified version of Ant Colony Optimization (ACO). The m-ACO approach has been coded in Perl, and results are validated on three examples by computing the Average Percentage of Faults Detected (APFD) metric.
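The APFD metric used for validation can be computed directly from a test-fault detection matrix: APFD = 1 − (ΣTFᵢ)/(n·m) + 1/(2n), where TFᵢ is the position of the first test revealing fault i, n is the number of tests, and m the number of faults. The matrix below is a made-up example, and the sketch assumes every fault is detected by at least one test:

```python
def apfd(order, detects):
    """APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n),
    where detects[t][f] is True if test t reveals fault f."""
    n, m = len(order), len(detects[0])
    total = 0
    for f in range(m):
        # 1-based position of the first test in the order that detects fault f
        total += next(pos for pos, t in enumerate(order, 1) if detects[t][f])
    return 1 - total / (n * m) + 1 / (2 * n)

detects = [
    [True,  False],   # test 0 finds fault 0
    [False, True],    # test 1 finds fault 1
    [True,  True],    # test 2 finds both faults
]
print(apfd([2, 0, 1], detects))  # 0.8333... : running test 2 first pays off
print(apfd([0, 1, 2], detects))  # 0.6666...
```

A prioritization technique like m-ACO is judged better when its orderings yield higher APFD values, i.e., faults are revealed earlier in the run.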

Keywords: regression testing, software testing, test case prioritization, test suite optimization

Procedia PDF Downloads 333