Search results for: optimization technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9311

8231 Adsorption of Cd(II) and Pb(II) from Aqueous Solutions by Using Pods of Acacia Karoo

Authors: Gulshan Kumar Jawa, Sandeep Mohan Ahuja

Abstract:

With increasing industrialization, the presence of heavy metals in wastewater streams has become a serious concern for the ecosystem. The metals diffuse through food chains, causing various health hazards. Conventional methods used to remove these heavy metals from water have limitations, such as cost, secondary pollution due to sludge formation, difficulty of metal recovery, and poor economic viability at low metal concentrations. Over the last two decades, many biomaterials have been investigated as an alternative adsorption technique for removing heavy metals from aqueous solutions, with promising results. In this paper, a batch study on the use of pods of Acacia karoo for the adsorption of Cd(II) and Pb(II) from aqueous solutions is reported. The effect of various parameters on the removal of metal ions, such as pH, contact time, stirring speed, initial metal ion concentration, adsorbent dose, and temperature, has been established to find the optimum conditions through one-parameter-at-a-time optimization. Further, kinetic, equilibrium, and thermodynamic studies have been conducted. The pods of Acacia karoo have shown great potential for the adsorption of Cd(II) and Pb(II) from aqueous solutions and have proven to be a more economical alternative for the purpose.
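
The equilibrium study mentioned above typically reduces to fitting an isotherm to batch data. The abstract does not give the fitted model or constants, so the sketch below is a generic illustration with invented sample data, assuming a Langmuir isotherm fitted by nonlinear least squares.

```python
# Hypothetical illustration: fitting a Langmuir isotherm to batch adsorption
# data. The concentrations/uptakes below are invented sample values, not
# results from the paper.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # equilibrium conc., mg/L
qe = np.array([8.2, 13.5, 19.8, 25.1, 28.7])   # uptake, mg/g

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.05])
ss_res = np.sum((qe - langmuir(Ce, qmax, KL)) ** 2)
ss_tot = np.sum((qe - qe.mean()) ** 2)
print(f"qmax={qmax:.2f} mg/g, KL={KL:.4f} L/mg, R^2={1 - ss_res/ss_tot:.4f}")
```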

Keywords: adsorption, heavy metals, biomaterials, Cadmium(II), Lead(II), pods of acacia karoo

Procedia PDF Downloads 36
8230 Polymorphic Positions, Haplotypes, and Mutations Detected in the Mitochondrial DNA Coding Region by Sanger Sequencing Technique

Authors: Imad H. Hameed, Mohammad A. Jebor, Ammera J. Omer

Abstract:

The aim of this research is to study the mitochondrial coding region by using the Sanger sequencing technique and to establish the degree of variation characteristic of a fragment. FTA® Technology (FTA™ paper DNA extraction) was utilized to extract DNA. A portion of the coding region encompassing positions 11719–12384 was amplified in accordance with the Anderson reference sequence. PCR products were purified by EZ-10 spin column, then sequenced and detected using the ABI 3730xL DNA Analyzer. Five new polymorphic positions, 11741, 11756, 11878, 11887 and 12133, are described; they may be suitable markers for identification purposes in the future. The calculated genetic diversity D = 0.95 and random match probability RMP = 0.048 should be understood as high in the context of the coding function of the analysed DNA fragment. A relatively high gene diversity and a relatively low random match probability were observed in the Iraqi population. The obtained data can be used to identify frequently occurring variable nucleotide positions, which are most promising for various identification applications.
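
The two reported statistics follow from haplotype frequencies alone. As a hedge, the snippet below is a generic illustration with invented haplotype counts, assuming the usual forensic definitions RMP = Σp_i² and D = n(1 − Σp_i²)/(n − 1).

```python
# Generic illustration of haplotype diversity (D) and random match
# probability (RMP); the haplotype counts are invented, not the paper's data.
from collections import Counter

haplotypes = ["H1"] * 12 + ["H2"] * 9 + ["H3"] * 5 + ["H4"] * 4  # sample
n = len(haplotypes)
freqs = [c / n for c in Counter(haplotypes).values()]

rmp = sum(p * p for p in freqs)          # random match probability
D = (n / (n - 1)) * (1.0 - rmp)          # unbiased haplotype diversity
print(f"n={n}, RMP={rmp:.3f}, D={D:.3f}")
```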

Keywords: coding region, Iraq, mitochondrial DNA, polymorphic positions, Sanger technique

Procedia PDF Downloads 430
8229 A Unique Exact Approach to Handle a Time-Delayed State-Space System: The Juice Extraction Process

Authors: Mohamed T. Faheem Saidahmed, Ahmed M. Attiya Ibrahim, Basma GH. Elkilany

Abstract:

This paper discusses the application of the Time Delay Control (TDC) compensation technique to the juice extraction process in a sugar mill. The objective is to improve the control performance of the process and increase extraction efficiency. The paper presents the mathematical model of the juice extraction process and the design of the TDC compensation controller. Simulation results show that the TDC compensation technique can effectively suppress the time-delay effect in the process and improve control performance. The extraction efficiency is also significantly increased with the application of the TDC compensation technique. The proposed approach, implemented in MATLAB, provides a practical solution for improving the juice extraction process in sugar mills.
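
The keywords point to a Smith predictor, the classic structure for delay compensation. The paper's own model and MATLAB implementation are not reproduced here; the sketch below is a minimal Python illustration using the python-control package, assuming a hypothetical first-order plant with dead time.

```python
# Minimal Smith-predictor sketch (hypothetical first-order plant with dead
# time; not the paper's extraction model). Requires the python-control package.
import control
import matplotlib.pyplot as plt

K, tau, theta = 2.0, 10.0, 4.0             # assumed plant gain, lag, delay
G0 = control.tf([K], [tau, 1])             # delay-free part of the plant
num, den = control.pade(theta, 5)          # 5th-order Pade delay approximation
G = G0 * control.tf(num, den)              # "true" delayed plant

C = control.tf([1.2 * tau, 1.2], [tau, 0]) # assumed PI controller

# Smith predictor: the inner loop feeds back the *undelayed* model output,
# so the controller effectively sees G0 instead of the delayed plant.
inner = control.feedback(C, G0 - G)        # C / (1 + C*(G0 - G))
closed = control.feedback(inner * G, 1)

t, y = control.step_response(closed, T=100)
plt.plot(t, y); plt.xlabel("time"); plt.ylabel("output"); plt.show()
```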

Keywords: time delay control (TDC), exact and unique state-space model, delay compensation, Smith predictor

Procedia PDF Downloads 82
8228 Data Analytics in Energy Management

Authors: Sanjivrao Katakam, Thanumoorthi I., Antony Gerald, Ratan Kulkarni, Shaju Nair

Abstract:

With increasing energy costs and their impact on business, sustainability has evolved from a social expectation into an economic imperative. Therefore, finding methods to reduce cost has become a critical directive for industry leaders. Effective energy management is the only way to cut costs. However, energy management has been a challenge because it requires a change in old habits and legacy systems followed for decades. Today, exorbitant volumes of energy and operational data are being captured and stored by industries, but they are unable to convert these structured and unstructured data sets into meaningful business intelligence. It must be noted that for quick decisions, organizations must learn to cope with large volumes of operational data in different formats. Energy analytics not only helps in extracting inferences from these data sets but is also instrumental in the transformation from old approaches to energy management to new ones. This in turn assists in effective decision making for implementation. Organizations need an established corporate strategy for reducing operational costs through visibility and optimization of energy usage. Energy analytics plays a key role in the optimization of operations. The paper describes how energy data analytics is now extensively used in different scenarios such as reducing operational costs, predicting energy demand, optimizing network efficiency, asset maintenance, improving customer insights, and device data insights. The paper also highlights how analytics helps transform insights obtained from energy data into sustainable solutions. The paper utilizes data from an array of segments such as retail, transportation, and water sectors.

Keywords: energy analytics, energy management, operational data, business intelligence, optimization

Procedia PDF Downloads 360
8227 Optimization of Tundish Geometry for Minimizing Dead Volume Using OpenFOAM

Authors: Prateek Singh, Dilshad Ahmad

Abstract:

Growing demand for high-quality steel products has inspired researchers to investigate the unit operations involved in manufacturing these products (slabs, rods, sheets, etc.). One such operation is the tundish operation, in which a vessel (tundish) acts as a buffer of molten steel for the solidification operation in the mold. It is observed that the tundish also plays a crucial role in the quality and cleanliness of the steel produced, besides merely acting as a reservoir for the mold. It facilitates the removal of dissolved oxygen (inclusions) from the molten steel, thus improving its cleanliness. Inclusion removal can be enhanced by increasing the residence time of molten steel in the tundish through the incorporation of flow modifiers like dams, weirs, turbo-pads, etc. These flow modifiers also help in reducing the dead or short-circuit zones within the tundish, which is significant for maintaining the thermal and chemical homogeneity of molten steel. Thus, it becomes important to analyze the flow of molten steel in the tundish for different configurations of flow modifiers. In the present work, the effect of varying positions and heights/depths of the dam and weir on the dead volume in the tundish is studied. Steady-state thermal and flow profiles of molten steel within the tundish are obtained using OpenFOAM. Subsequently, Residence Time Distribution analysis is performed to obtain the percentage of dead volume in the tundish. The Design of Experiments method is then used to configure different tundish geometries for varying positions and heights/depths of the dam and weir, and the dead volume for each tundish design is obtained. A second-degree polynomial with two-term interactions of the independent variables (positions and heights/depths of the dam and weir) is fitted using a Multiple Linear Regression model to predict the dead volume in the tundish. This polynomial is then used in an optimization framework to obtain the optimal tundish geometry that minimizes the dead volume using Sequential Quadratic Programming.
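
The regression-plus-SQP step can be illustrated compactly. The sketch below assumes hypothetical DoE samples (the paper's CFD data are not available here): it fits a second-degree polynomial with interaction terms and minimizes it with SciPy's SLSQP routine.

```python
# Hypothetical sketch: fit a quadratic response surface to DoE samples and
# minimize it with SQP (SLSQP). The sampled "dead volumes" are invented.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(25, 4))    # dam/weir positions & heights (normalized)
y = ((X - 0.4) ** 2).sum(axis=1) + 0.01 * rng.standard_normal(25)  # stand-in dead volume

poly = PolynomialFeatures(degree=2)        # includes two-term interactions
model = LinearRegression().fit(poly.fit_transform(X), y)

def dead_volume(x):
    return model.predict(poly.transform(x.reshape(1, -1)))[0]

res = minimize(dead_volume, x0=np.full(4, 0.5), method="SLSQP",
               bounds=[(0.0, 1.0)] * 4)
print("optimal geometry (normalized):", res.x, "predicted dead volume:", res.fun)
```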

Keywords: design of experiments, multiple linear regression, OpenFOAM, residence time distribution, sequential quadratic programming optimization, steel, tundish

Procedia PDF Downloads 200
8226 Median-Based Nonparametric Estimation of Returns in Mean-Downside Risk Portfolio Frontier

Authors: H. Ben Salah, A. Gannoun, C. de Peretti, A. Trabelsi

Abstract:

The Downside Risk (DSR) model for portfolio optimisation allows one to overcome the drawbacks of the classical mean-variance model concerning the asymmetry of returns and the risk perception of investors. This optimization model deals with a positive definite matrix that is endogenous with respect to the portfolio weights, which makes the problem far more difficult to handle. For this purpose, Athayde (2001) developed a new recursive minimization procedure that ensures convergence to the solution. However, when only a finite number of observations is available, the portfolio frontier is not very smooth. To overcome that, Athayde (2003) proposed a mean kernel estimation of the returns so as to create a smoother portfolio frontier. This technique provides an effect similar to the case in which continuous observations are available. In this paper, taking advantage of the robustness of the median, we replace the mean estimator in Athayde's model by a nonparametric median estimator of the returns. We then give a new version of the former algorithm of Athayde (2001, 2003). We finally analyse the properties of this improved portfolio frontier and apply the new method to real examples.
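
For orientation, Athayde's recursive procedure referenced above can be sketched as follows: at each iteration the semicovariance matrix is rebuilt from the periods in which the current portfolio underperforms the benchmark, and a minimum-semivariance portfolio is recomputed until the weights settle. This is a minimal illustration with random returns, not the paper's median-based refinement.

```python
# Minimal sketch of a recursive mean-downside-risk portfolio procedure in the
# spirit of Athayde (2001); returns are random stand-ins, benchmark B = 0.
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(0.005, 0.02, size=(500, 4))     # T periods x N assets (invented)
B = 0.0                                        # benchmark return
w = np.full(4, 0.25)                           # start from equal weights

for _ in range(100):
    shortfall = R @ w < B                      # periods below the benchmark
    D = R[shortfall] - B
    S = D.T @ D / len(R)                       # endogenous semicovariance matrix
    Si = np.linalg.inv(S + 1e-12 * np.eye(4))
    w_new = Si @ np.ones(4) / (np.ones(4) @ Si @ np.ones(4))  # min-risk, sum w = 1
    if np.allclose(w_new, w, atol=1e-10):
        break
    w = w_new

print("weights:", np.round(w, 4), "semivariance:", w @ S @ w)
```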

Keywords: downside risk, kernel method, median, nonparametric estimation, semivariance

Procedia PDF Downloads 485
8225 Performance Optimization on Waiting Time Using Queuing Theory in an Advanced Manufacturing Environment: Robotics to Enhance Productivity

Authors: Ganiyat Soliu, Glen Bright, Chiemela Onunka

Abstract:

Performance optimization plays a key role in controlling waiting time during manufacturing in an advanced manufacturing environment to improve productivity. Queuing mathematical modeling theory was used to examine the performance of a multi-stage production line. Robotics, as a disruptive technology, was implemented in a virtual manufacturing scenario during the packaging process to study the effect of waiting time on productivity. The queuing mathematical model was used to determine the optimum service rate required by robots during the packaging stage of manufacturing to yield an optimum production cost. Different rates of production were assumed in the virtual manufacturing environment, and the cost of packaging was estimated together with the optimum production cost. An equation was generated using queuing mathematical modeling theory, and the Newton-Raphson method was adopted for the analysis of the scenario. The queuing theory presented here provides an adequate analysis of the number of robots required to regulate waiting time in order to increase output. The arrival rate of the product was fast, which shows that the queuing mathematical model was effective in minimizing service cost and waiting time during manufacturing. At a reduced waiting time, there was an improvement in the number of products obtained per hour. The overall productivity was improved, based on the assumptions used in the queuing modeling theory implemented in the virtual manufacturing scenario.
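
To make the cost/service-rate trade-off concrete: for an M/M/1 station with arrival rate λ, a standard total-cost model is C(μ) = c_s·μ + c_w·λ/(μ − λ), whose minimiser can be found by Newton-Raphson on C′(μ). The cost coefficients below are invented for illustration; the paper's exact equation is not reproduced.

```python
# Hedged sketch: Newton-Raphson for the service rate minimizing total cost
# C(mu) = c_s*mu + c_w*lam/(mu - lam) at an M/M/1 packaging station.
lam, c_s, c_w = 8.0, 5.0, 12.0        # arrival rate and cost weights (assumed)

def dC(mu):   # C'(mu) = c_s - c_w*lam/(mu - lam)**2
    return c_s - c_w * lam / (mu - lam) ** 2

def d2C(mu):  # C''(mu) = 2*c_w*lam/(mu - lam)**3
    return 2.0 * c_w * lam / (mu - lam) ** 3

mu = lam + 1.0                        # start above lambda for stability
for _ in range(50):
    step = dC(mu) / d2C(mu)
    mu -= step
    if abs(step) < 1e-10:
        break

# Closed form for comparison: mu* = lam + sqrt(c_w*lam/c_s)
print(f"mu* = {mu:.4f}, time in system W = {1.0/(mu - lam):.4f}")
```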

Keywords: performance optimization, productivity, queuing theory, robotics

Procedia PDF Downloads 144
8224 Optimization Study of Adsorption of Nickel(II) on Bentonite

Authors: B. Medjahed, M. A. Didi, B. Guezzen

Abstract:

This work concerns the experimental study of the adsorption of Ni(II) on bentonite. The effects of various parameters, such as contact time, stirring rate, initial concentration of Ni(II), mass of clay, initial pH of the aqueous solution, and temperature, on the adsorption yield were investigated. The effect of the ionic strength on the adsorption yield was examined through the identification and quantification of the chemical species present in the aqueous phase containing the metallic ion Ni(II). The adsorbed species were investigated by a calculation program using CHEAQS V. L20.1 in order to determine the relation between the percentages of the adsorbed species and the adsorption yield. The optimization process was carried out using a 2³ factorial design. The individual and combined effects of three process parameters, i.e., initial Ni(II) concentration in aqueous solution (2×10⁻³ and 5×10⁻³ mol/L), initial pH of the solution (2 and 6.5), and mass of bentonite (0.03 and 0.3 g), on Ni(II) adsorption were studied.
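
A 2³ factorial design evaluates all eight low/high combinations of the three factors and estimates each main and interaction effect as the difference between the mean response at the factor's high and low levels. The yields below are invented placeholders for the paper's measurements.

```python
# Sketch of 2^3 factorial effect estimation; the adsorption yields are
# invented placeholders, and the factor levels follow the abstract.
import itertools
import numpy as np

levels = {"C0 (mol/L)": (2e-3, 5e-3), "pH": (2, 6.5), "mass (g)": (0.03, 0.3)}
runs = list(itertools.product((-1, +1), repeat=3))   # coded design matrix
yields = np.array([62, 48, 81, 70, 66, 51, 88, 74], float)  # fake responses

for j, name in enumerate(levels):
    signs = np.array([r[j] for r in runs])
    effect = yields[signs == +1].mean() - yields[signs == -1].mean()
    print(f"main effect of {name}: {effect:+.2f}")

# A two-factor interaction uses the product of the coded columns, e.g. C0 x pH:
s = np.array([r[0] * r[1] for r in runs])
print(f"C0 x pH interaction: {yields[s == +1].mean() - yields[s == -1].mean():+.2f}")
```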

Keywords: adsorption, bentonite, factorial design, Nickel(II)

Procedia PDF Downloads 156
8223 Ultrasound Therapy: Amplitude Modulation Technique for Tissue Ablation by Acoustic Cavitation

Authors: Fares A. Mayia, Mahmoud A. Yamany, Mushabbab A. Asiri

Abstract:

In recent years, non-invasive Focused Ultrasound (FU) has been utilized for generating bubbles (cavities) to ablate target tissue by mechanical fractionation. Intensities >10 kW/cm² are required to generate the inertial cavities. The generation, rapid growth, and collapse of these inertial cavities cause tissue fractionation, and the process is called Histotripsy. The ability to fractionate tissue from outside the body has many clinical applications, including the destruction of tumor masses. The process of tissue fractionation leaves a void at the treated site, where all the affected tissue is liquefied to particles of sub-micron size. The liquefied tissue is eventually absorbed by the body. Histotripsy is a promising non-invasive treatment modality. This paper presents a technique for generating inertial cavities at lower intensities (< 1 kW/cm²). The technique (patent pending) is based on amplitude modulation (AM), whereby a low-frequency signal modulates the amplitude of a higher-frequency FU wave. The cavitation threshold is lower at low frequencies; the intensity required to generate cavitation in water at 10 kHz is two orders of magnitude lower than the intensity at 1 MHz. The amplitude modulation technique can operate in both continuous wave (CW) and pulse wave (PW) modes, and the percentage modulation (modulation index) can be varied from 0% (thermal effect) to 100% (cavitation effect), thus allowing a range of ablating effects from hyperthermia to Histotripsy. Furthermore, changing the frequency of the modulating signal allows controlling the size of the generated cavities. Results from in vitro work demonstrate the efficacy of the new technique in fractionating soft tissue and solid calcium carbonate (chalk) material. The technique, when combined with MR or ultrasound imaging, will present a precise treatment modality for ablating diseased tissue without affecting the surrounding healthy tissue.
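
The modulation scheme itself is easy to visualise. The sketch below builds an AM focused-ultrasound drive signal with adjustable modulation index; the 1 MHz carrier and 10 kHz modulator are example values consistent with the frequencies discussed above, not the patented system's actual parameters.

```python
# Illustrative AM drive signal: a low-frequency modulator shaping a
# high-frequency FU carrier. m = 0 gives pure CW (thermal regime);
# m = 1 gives full modulation (cavitation regime).
import numpy as np

fc, fm = 1.0e6, 10.0e3          # carrier and modulator frequencies (example)
m = 0.8                         # modulation index, 0..1
fs = 20.0e6                     # sample rate
t = np.arange(0, 1.0e-3, 1.0 / fs)

envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)
signal = envelope * np.sin(2 * np.pi * fc * t)

# Peak amplitude grows with m, which is what lowers the mean intensity needed
# to cross the cavitation threshold at the modulator frequency.
print(f"peak/mean envelope: {envelope.max():.2f}/{envelope.mean():.2f}")
```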

Keywords: focused ultrasound therapy, histotripsy, inertial cavitation, mechanical tissue ablation

Procedia PDF Downloads 315
8222 A Study of Non-Coplanar Imaging Technique in INER Prototype Tomosynthesis System

Authors: Chia-Yu Lin, Yu-Hsiang Shen, Cing-Ciao Ke, Chia-Hao Chang, Fan-Pin Tseng, Yu-Ching Ni, Sheng-Pin Tseng

Abstract:

Tomosynthesis is an imaging system that generates a 3D image by scanning over a limited angular range. It provides more depth information than a traditional 2D X-ray single projection, and its radiation dose is less than that of computed tomography (CT). Because of the limited angular range of scanning, many image properties depend on the scanning direction. Therefore, a non-coplanar imaging technique was developed to improve image quality over traditional tomosynthesis. The purpose of this study was to establish the non-coplanar imaging technique for a tomosynthesis system and to evaluate this technique by the reconstructed image. The INER prototype tomosynthesis system contains an X-ray tube, a flat panel detector, and a motion machine. This system can move the X-ray tube in multiple directions during acquisition. In this study, we investigated three different imaging techniques: 2D X-ray single projection, traditional tomosynthesis, and non-coplanar tomosynthesis. An anthropomorphic chest phantom was used to evaluate image quality. It contained lesions of three different sizes (3 mm, 5 mm, and 8 mm in diameter). The traditional tomosynthesis acquired 61 projections over a 30-degree angular range in one scanning direction. The non-coplanar tomosynthesis acquired 62 projections over a 30-degree angular range in two scanning directions. A 3D image was reconstructed by an iterative image reconstruction algorithm (ML-EM). Our qualitative method was to evaluate artifacts in the tomosynthesis reconstructed image. The quantitative method was to calculate a peak-to-valley ratio (PVR), i.e., the intensity ratio of the lesion to the background; PVRs were used to evaluate the contrast of lesions. The qualitative results showed that in the reconstructed image of the non-coplanar scan, the anatomic structures of the chest and the lesions could be identified clearly, and no significant scanning-direction-dependent artifacts were discovered. In the 2D X-ray single projection, anatomic structures overlapped and lesions could not be discovered. In the traditional tomosynthesis image, anatomic structures and lesions could be identified clearly, but there were many scanning-direction-dependent artifacts. The quantitative results show no significant PVR differences between non-coplanar and traditional tomosynthesis; the PVRs of the non-coplanar technique were slightly higher than those of the traditional technique for the 5 mm and 8 mm lesions. In non-coplanar tomosynthesis, scanning-direction-dependent artifacts were reduced and the PVRs of lesions were not decreased. The reconstructed image showed more isotropic uniformity in non-coplanar tomosynthesis than in traditional tomosynthesis. In the future, scan strategy and scan time will be the challenges of the non-coplanar imaging technique.
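
The ML-EM update used for reconstruction has a compact multiplicative form: x ← x · Aᵀ(y / Ax) / Aᵀ1. Below is a toy numpy version with a random system matrix standing in for the tomosynthesis projector; it is a sketch of the algorithm, not of the INER system's reconstruction code.

```python
# Toy ML-EM iteration: x_{k+1} = x_k * A^T(y / A x_k) / A^T 1.
# A random non-negative system matrix stands in for the real projector.
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((62, 100))            # 62 projections x 100 voxels (toy sizes)
x_true = rng.random(100)
y = A @ x_true                       # noiseless projections for the demo

x = np.ones(100)                     # non-negative initial image
norm = A.T @ np.ones(62)             # sensitivity term A^T 1
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / norm

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```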

Keywords: image reconstruction, non-coplanar imaging technique, tomosynthesis, X-ray imaging

Procedia PDF Downloads 363
8221 PWM Harmonic Injection and Frequency-Modulated Triangular Carrier to Improve the Life of Transformers

Authors: Mario J. Meco-Gutierrez, Francisco Perez-Hidalgo, Juan R. Heredia-Larrubia, Antonio Ruiz-Gonzalez, Francisco Vargas-Merino

Abstract:

More and more applications use power inverters connected to transformers, for example, facilities connecting renewable generation to the power grid. It is well known that the output signal of power inverters is not a pure sine. The harmonic content produces negative effects, one of which is the heating of electrical machines, which in turn affects the life of the machines. The decrease in transformer life can be calculated by the Arrhenius or Montsinger equation; analyzing this expression, any long-term decrease of 6-7 °C in transformer temperature doubles its life expectancy. Methodology: This work presents a pulse-width modulation (PWM) technique with harmonic injection and a triangular carrier modulated in frequency. This technique is used to improve the quality of the output voltage signal of PWM-controlled power inverters. The proposed technique increases the fundamental term and significantly reduces low-order harmonics with the same number of commutations per period as sine PWM control. To achieve this, the modulating wave is compared to a triangular carrier whose frequency varies over the period of the modulator, so that more samples are obtained in the area with the greatest slope; it is therefore advantageous for the modulating signal to carry a large amount of sinusoidal 'information' in the areas of denser sampling. A power inverter controlled by the proposed PWM technique is connected to a transformer. Results: In order to verify the derived thermal parameters under different operating conditions, another ambient and loading scenario, sampled from the same power transformer, is included for further verification. Temperatures of different parts of the transformer are reported for each PWM control technique analyzed. The temperature is assessed for the different PWM control techniques, and hence the life of the transformer is calculated for each technique. Conclusion: This paper analyzes the transformer heating produced by this technique and compares it with other forms of PWM control. It can be seen that the reduced harmonic content produces less transformer heating and, therefore, an increase in the life of the transformer.
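
A minimal sketch of the carrier idea, assuming a simple sinusoidal frequency law for the carrier (the paper's exact frequency profile and harmonic-injection terms are not reproduced): the triangular carrier's instantaneous frequency rises where the modulator slope is greatest, and the PWM output is the usual sine-triangle comparison.

```python
# Sketch of sine-triangle PWM with a frequency-modulated triangular carrier.
# The carrier frequency law (denser sampling near the modulator's steepest
# regions, i.e., its zero-crossings) is an assumed illustration.
import numpy as np
from scipy import signal

f1 = 50.0                                 # modulator (fundamental) frequency
fs = 2_000_000.0
t = np.arange(0, 1.0 / f1, 1.0 / fs)

modulator = 0.9 * np.sin(2 * np.pi * f1 * t)

# Instantaneous carrier frequency: average near 2 kHz, raised where |cos| is
# large (steepest part of the modulator).
f_carrier = 2000.0 * (1.0 + 0.5 * np.abs(np.cos(2 * np.pi * f1 * t)))
phase = 2 * np.pi * np.cumsum(f_carrier) / fs     # phase accumulator
carrier = signal.sawtooth(phase, width=0.5)       # triangular carrier

pwm = np.where(modulator >= carrier, 1.0, -1.0)   # inverter leg output

# Spectrum check: fundamental amplitude via DFT bin at f1.
X = np.fft.rfft(pwm) / len(pwm)
k = int(round(f1 * len(t) / fs))
print(f"fundamental amplitude ~ {2 * abs(X[k]):.3f} (target 0.9)")
```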

Keywords: heating, power-inverter, PWM, transformer

Procedia PDF Downloads 407
8220 Thermodynamic Optimization of an R744 Based Transcritical Refrigeration System with Dedicated Mechanical Subcooling Cycle

Authors: Mihir Mouchum Hazarika, Maddali Ramgopal, Souvik Bhattacharyya

Abstract:

Thermodynamic analysis shows that the performance of the R744-based transcritical refrigeration cycle drops drastically at higher ambient temperatures. This is due to the peculiar s-shape of the isotherms in the supercritical region. However, subcooling the refrigerant at the gas cooler exit enhances the performance of R744-based systems. The present study analyzes an R744-based transcritical system with a dedicated mechanical subcooling cycle. Based on this proposed cycle, a thermodynamic analysis is performed, and optimum operating parameters are determined. The degree of subcooling and the pressure ratio of the subcooling cycle are the parameters which need to be optimized to extract the maximum COP from the proposed cycle. It is expected that this study will be helpful in implementing a dedicated subcooling cycle with an R744-based transcritical system to improve performance.
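
The sensitivity of a transcritical R744 cycle to its discharge pressure and gas-cooler exit state can be reproduced with property calls alone. The sketch below is a simplified baseline cycle using the CoolProp package, with assumed operating temperatures; a fixed subcooling is applied at the gas cooler exit and the dedicated subcooling loop's own power draw is not modeled.

```python
# Simplified transcritical R744 cycle COP vs. gas cooler pressure, with an
# ideal fixed subcooling at the gas cooler exit. Temperatures are assumed;
# the mechanical subcooling cycle's compressor work is NOT included.
from CoolProp.CoolProp import PropsSI

T_evap, T_gc_out, dT_sub, eta = 273.15, 313.15, 5.0, 0.75

def cop(p_gc):
    h1 = PropsSI("H", "T", T_evap, "Q", 1, "CO2")        # sat. vapor at evap
    s1 = PropsSI("S", "T", T_evap, "Q", 1, "CO2")
    h2s = PropsSI("H", "P", p_gc, "S", s1, "CO2")        # isentropic discharge
    h2 = h1 + (h2s - h1) / eta                           # real compression
    h3 = PropsSI("H", "P", p_gc, "T", T_gc_out - dT_sub, "CO2")  # subcooled exit
    h4 = h3                                              # isenthalpic expansion
    return (h1 - h4) / (h2 - h1)

for p in range(85, 131, 5):                              # bar
    print(f"p_gc = {p:3d} bar -> COP = {cop(p * 1e5):.3f}")
```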

Keywords: optimization, R744, subcooling, transcritical

Procedia PDF Downloads 303
8219 Krill-Herd Step-Up Approach Based Energy Efficiency Enhancement Opportunities in the Offshore Mixed Refrigerant Natural Gas Liquefaction Process

Authors: Kinza Qadeer, Muhammad Abdul Qyyum, Moonyong Lee

Abstract:

Natural gas has become an attractive energy source in comparison with other fossil fuels because of its lower CO₂ and other air pollutant emissions. Therefore, compared to the demand for coal and oil, that for natural gas is increasing rapidly worldwide. The transportation of natural gas over long distances as a liquid (LNG) is preferable for several reasons, including economic, technical, political, and safety factors. However, LNG production is an energy-intensive process due to the tremendous power requirements for the compression of refrigerants, which provide sufficient cold energy to liquefy the natural gas. Therefore, one of the major issues in the LNG industry is to improve the energy efficiency of existing LNG processes through a cost-effective approach: optimization. In this context, a bio-inspired Krill-herd (KH) step-up approach was examined to enhance the energy efficiency of a single mixed refrigerant (SMR) natural gas liquefaction (LNG) process, which is considered the most promising candidate for offshore LNG production (FPSO). The optimal design of natural gas liquefaction processes involves multivariable nonlinear thermodynamic interactions, which lead to exergy destruction and contribute to process irreversibility. As key decision variables, the optimal values of the mixed refrigerant flow rates and the process operating pressures were determined based on the herding behavior of krill individuals corresponding to the minimum energy consumption for LNG production. To perform the rigorous process analysis, the SMR process was simulated in Aspen Hysys® software, and the resulting model was connected with the Krill-herd approach coded in MATLAB. The optimal operating conditions found by the proposed approach reduced the overall energy consumption of the SMR process by up to 22.5% and also improved the coefficient of performance in comparison with the base case. The proposed approach was also compared with other well-proven optimization algorithms, such as genetic and particle swarm optimization algorithms, and was found to exhibit superior performance over these existing approaches.
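
As a rough illustration of the optimizer's shape (the full Krill-herd algorithm adds induced motion, foraging, and diffusion terms with time-varying coefficients, and the real objective is an Aspen Hysys flowsheet), here is a heavily simplified population-based search in the same spirit, applied to a stand-in objective over bounded, normalized decision variables.

```python
# Heavily simplified krill-herd-style search: each "krill" drifts toward the
# best individual (herding) plus a shrinking random diffusion. A stand-in
# objective replaces the Aspen Hysys SMR model.
import numpy as np

def energy(x):                         # stand-in for specific compression energy
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(3)
lo, hi = 0.0, 1.0                      # normalized flow rates / pressures
pop = rng.uniform(lo, hi, size=(30, 5))

for it in range(200):
    f = energy(pop)
    best = pop[np.argmin(f)]
    herd = 0.5 * (best - pop)                          # attraction to best krill
    diffusion = 0.1 * (1 - it / 200) * rng.uniform(-1, 1, pop.shape)
    pop = np.clip(pop + herd + diffusion, lo, hi)

print("best decision vector:", np.round(pop[np.argmin(energy(pop))], 3))
```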

Keywords: energy efficiency, Krill-herd, LNG, optimization, single mixed refrigerant

Procedia PDF Downloads 152
8218 Optimization of Electrical Discharge Machining Parameters in Machining AISI D3 Tool Steel by Grey Relational Analysis

Authors: Othman Mohamed Altheni, Abdurrahman Abusaada

Abstract:

This study presents the optimization of multiple performance characteristics (material removal rate (MRR), surface roughness (Ra), and overcut (OC)) of hardened AISI D3 tool steel in electrical discharge machining (EDM) using the Taguchi method and Grey relational analysis. The machining process parameters selected were the pulsed current Ip, pulse-on time Ton, pulse-off time Toff, and gap voltage Vg. Based on ANOVA, the pulse current is found to be the most significant factor affecting the EDM process. The optimized process parameters, which simultaneously lead to a higher MRR, lower Ra, and lower OC, are then verified through a confirmation experiment. The validation experiment shows improved MRR, Ra, and OC when the Taguchi method and grey relational analysis are used.
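
Grey relational analysis reduces the three responses to a single grade: each response is normalized (larger-the-better for MRR, smaller-the-better for Ra and OC), converted to a grey relational coefficient, and averaged. The experimental values below are invented placeholders; the distinguishing coefficient ζ = 0.5 is the customary choice.

```python
# Grey relational analysis sketch for MRR (maximize), Ra and OC (minimize).
# Response values are invented placeholders, zeta = 0.5 as is customary.
import numpy as np

data = np.array([   # rows: trials; cols: MRR, Ra, OC
    [12.1, 3.2, 0.18],
    [15.4, 2.9, 0.21],
    [18.7, 3.8, 0.25],
    [14.2, 2.5, 0.16],
])
maximize = np.array([True, False, False])
zeta = 0.5

lo, hi = data.min(axis=0), data.max(axis=0)
norm = np.where(maximize, (data - lo) / (hi - lo), (hi - data) / (hi - lo))

delta = 1.0 - norm                               # deviation from the ideal
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)                         # grey relational grade

print("grades:", np.round(grade, 3), "-> best trial:", int(grade.argmax()) + 1)
```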

Keywords: EDM parameters, grey relational analysis, Taguchi method, ANOVA

Procedia PDF Downloads 291
8217 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements

Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating

Abstract:

Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product remains at a fixed position while the area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other hand, varying areas for storage, machines, or working areas are temporarily required. This is an important factor when arranging products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Intended to be large enough to include the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all required areas of the product. In this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process; instead, a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products which complies with the dynamic area requirements will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects assuming dynamic, competing area requirements. First, the problem situation is extensively explained, and existing research on associated topics is described and evaluated regarding the possibility of adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas, considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, some test series results are presented, showing the possible increase in area utilization.
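
The core saving can be shown in one dimension: with time-varying extents, the separation two products need is the maximum over time of (right extent of product i plus left extent of product j), which can be well below the sum of their individual maxima when the peaks occur at different times. The sketch below compares dynamic and static (reserved-area) spacing for invented extent profiles; it is an illustration of the idea, not the paper's full model.

```python
# 1D illustration: static reservation vs. time-dynamic spacing of two
# assembly positions. Extent profiles over time are invented.
import numpy as np

# right-hand extent of product i and left-hand extent of product j over time;
# their peaks deliberately occur in different assembly phases.
right_i = np.array([4, 4, 5, 8, 8, 5, 4, 4, 4, 4], float)
left_j  = np.array([7, 7, 6, 3, 3, 3, 6, 7, 7, 7], float)

static_sep = right_i.max() + left_j.max()        # reserve maximum envelopes
dynamic_sep = (right_i + left_j).max()           # overlap-free at every t

print(f"static separation:  {static_sep:.1f} m")
print(f"dynamic separation: {dynamic_sep:.1f} m "
      f"({100 * (1 - dynamic_sep / static_sep):.0f}% less along this axis)")
```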

Keywords: dynamic area requirements, facility layout problem, optimization model, product assembly

Procedia PDF Downloads 224
8216 Analyzing Test Data Generation Techniques Using Evolutionary Algorithms

Authors: Arslan Ellahi, Syed Amjad Hussain

Abstract:

Software testing is a vital process in the software development life cycle; the quality of software is attained by passing it through the software testing phase. We have examined automatic test data generation techniques, a key research area of software testing, aimed at achieving test automation that can eventually decrease testing time. In this paper, we review some of the approaches presented in the literature which use evolutionary search-based algorithms like the Genetic Algorithm, Particle Swarm Optimization (PSO), etc., to validate the test data generation process. We also look into the quality of test data generation, which increases or decreases the efficiency of testing. We have proposed test data generation techniques for model-based testing and have worked on the tuning and fitness function of the PSO algorithm.
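
To ground the PSO discussion: in search-based test data generation, the fitness of a candidate input is typically a branch-distance measure (how far the input is from taking an uncovered branch). The sketch below is a generic PSO minimizing the branch distance for one hypothetical condition, x == 42 and y > 100, not the authors' model-based fitness function.

```python
# Generic PSO minimizing a branch-distance fitness for the hypothetical
# branch condition: x == 42 and y > 100.
import numpy as np

def branch_distance(p):
    x, y = p
    return abs(x - 42) + max(0.0, 101 - y)   # 0 iff the branch is taken

rng = np.random.default_rng(4)
pos = rng.uniform(-500, 500, (20, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([branch_distance(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                    # common PSO coefficients
for _ in range(300):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([branch_distance(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("test input covering the branch:", np.round(gbest, 2))
```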

Keywords: search based, evolutionary algorithm, particle swarm optimization, genetic algorithm, test data generation

Procedia PDF Downloads 184
8215 Fragment Domination for Many-Objective Decision-Making Problems

Authors: Boris Djartov, Sanaz Mostaghim

Abstract:

This paper presents a number-based dominance method. The main idea is to fragment the many attributes of the problem into subsets suitable for the well-established concept of Pareto dominance. Although other similar methods can be found in the literature, they focus on comparing the solutions one objective at a time, while the focus of this method is to compare entire subsets of the objective vector. Given the nature of the method, it is computationally costlier than other methods, and thus it is geared more towards selecting an option from a finite set of alternatives, where each solution is defined by multiple objectives. The need for this method was motivated by dynamic alternate airport selection (DAAS). In DAAS, pilots, while en route to their destination, can find themselves in a situation where they need to select a new landing airport. In such a predicament, they need to consider multiple alternatives with many different characteristics, such as wind conditions, available landing distance, the fuel needed to reach the airport, etc. Hence, this method is primarily aimed at human decision-makers. Many methods within the field of multi-objective and many-objective decision-making rely on the decision-maker to initially provide the algorithm with preference points and weight vectors; this method aims to omit this very difficult step, especially when the number of objectives is large. The proposed method will be compared to the Favour (1 − k)-Dom and L-dominance (LD) methods. The test will be conducted using well-established test problems from the literature, such as the DTLZ problems. The proposed method is expected to outperform the currently available methods in the literature and hopefully provide future decision-makers and pilots with support when dealing with many-objective optimization problems.
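
A minimal reading of fragment dominance, sketched below under the assumption that a solution wins if it Pareto-dominates the other on more of the objective fragments (the paper's exact counting rule may differ): split the objective vector into fixed subsets, apply ordinary Pareto dominance within each subset, and compare the counts.

```python
# Sketch of fragment dominance (an interpretation): partition the objective
# vector into fragments, Pareto-compare each fragment, and let the solution
# that wins more fragments dominate. All objectives are minimized.
from typing import Sequence

def pareto_dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fragment_dominates(a, b, fragments):
    wins_a = wins_b = 0
    for idx in fragments:
        fa, fb = [a[i] for i in idx], [b[i] for i in idx]
        wins_a += pareto_dominates(fa, fb)
        wins_b += pareto_dominates(fb, fa)
    return wins_a > wins_b

# Example: 6 airport-selection objectives split into 3 fragments
# (e.g., wind-related, distance/fuel-related, runway-related).
fragments = [(0, 1), (2, 3), (4, 5)]
a = [1.0, 2.0, 3.0, 1.0, 5.0, 2.0]
b = [2.0, 2.5, 2.9, 1.2, 4.0, 3.0]
print(fragment_dominates(a, b, fragments))  # True: a wins fragment (0, 1), b none
```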

Keywords: multi-objective decision-making, many-objective decision-making, multi-objective optimization, many-objective optimization

Procedia PDF Downloads 86
8214 On the System of Split Equilibrium and Fixed Point Problems in Real Hilbert Spaces

Authors: Francis O. Nwawuru, Jeremiah N. Ezeora

Abstract:

In this paper, a new algorithm for solving the system of split equilibrium and fixed point problems in real Hilbert spaces is considered. The equilibrium bifunction involves a finite family of pseudo-monotone mappings, which is an improvement over monotone operators. Moreover, the fixed point problem involves a finite family of nonexpansive mappings. The regularized parameters do not depend on Lipschitz constants. Also, the computation of the stepsize, which plays a crucial role in the convergence analysis of the proposed method, does not require prior knowledge of the norm of the involved bounded linear map. Furthermore, to speed up the rate of convergence, an inertial term technique is introduced in the proposed method. Under standard assumptions on the operators and the control sequences, using a modified Halpern iteration method, we establish strong convergence, a desired result in applications. Finally, the proposed scheme is applied to solve some optimization problems. The results obtained improve numerous results announced earlier in this direction.
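
For readers unfamiliar with the ingredients, a generic inertial Halpern-type step has the following shape (a schematic template, not the authors' exact scheme):

```latex
% Schematic inertial Halpern-type iteration (template, not the paper's scheme)
\begin{aligned}
  w_n     &= x_n + \theta_n\,(x_n - x_{n-1})   && \text{(inertial term)} \\
  y_n     &= T_n w_n                           && \text{(step toward the split equilibrium solution)} \\
  x_{n+1} &= \alpha_n u + (1-\alpha_n)\, S y_n && \text{(Halpern anchoring)}
\end{aligned}
\qquad \text{with } \alpha_n \in (0,1),\ \alpha_n \to 0,\ \sum_n \alpha_n = \infty.
```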

Keywords: equilibrium, Hilbert spaces, fixed point, nonexpansive mapping, extragradient method, regularized equilibrium

Procedia PDF Downloads 45
8213 Core Number Optimization Based Scheduler to Order/Map Simulink Applications

Authors: Asma Rebaya, Imen Amari, Kaouther Gasmi, Salem Hasnaoui

Abstract:

Over the last years, the number of cores has witnessed a spectacular increase in digital signal and general-purpose processors. Concurrently, significant research has been done to benefit from this high degree of parallelism. Indeed, this research is focused on providing efficient scheduling of hardware/software systems onto multicore architectures. The scheduling process consists of statically choosing one core to execute each task and specifying an execution order for the application tasks. In this paper, we describe an efficient scheduler that calculates the optimal number of cores required to schedule an application, gives a heuristic scheduling solution, and evaluates its cost. Our proposal's results are evaluated and compared with the Preesm scheduler results, and we show that ours allows better scheduling in terms of latency, computation time, and number of cores.
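
A common baseline for this kind of static ordering/mapping is critical-path list scheduling: rank tasks by their longest path to the exit, then greedily map each task to the core where it finishes earliest. The sketch below is such a baseline on a toy task graph (communication costs ignored), not the authors' scheduler.

```python
# Baseline list scheduler on a toy task DAG: rank by upward critical path,
# then map each task to the core with the earliest finish time.
tasks = {"A": 2, "B": 3, "C": 2, "D": 4, "E": 1}          # execution times
succs = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
preds = {t: [p for p in tasks if t in succs[p]] for t in tasks}

def rank(t):  # longest path from t to the exit (upward rank)
    return tasks[t] + max((rank(s) for s in succs[t]), default=0)

order = sorted(tasks, key=rank, reverse=True)              # topological priority
n_cores = 2
core_free = [0.0] * n_cores
finish = {}
schedule = {c: [] for c in range(n_cores)}

for t in order:
    ready = max((finish[p] for p in preds[t]), default=0.0)
    c = min(range(n_cores), key=lambda c: max(core_free[c], ready))
    start = max(core_free[c], ready)
    finish[t] = start + tasks[t]
    core_free[c] = finish[t]
    schedule[c].append((t, start, finish[t]))

print("makespan (latency):", max(finish.values()))
for c, slots in schedule.items():
    print(f"core {c}: {slots}")
```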

Keywords: computation time, hardware/software system, latency, optimization, multi-cores platform, scheduling

Procedia PDF Downloads 276
8212 An Approach to Electricity Production Utilizing Waste Heat of a Triple-Pressure Cogeneration Combined Cycle Power Plant

Authors: Soheil Mohtaram, Wu Weidong, Yashar Aryanfar

Abstract:

This research investigates the points with heat recovery potential in a triple-pressure cogeneration combined cycle power plant and determines the amount of waste heat that can be recovered. A modified cycle arrangement is then adopted to access these thermal potentials. Modeling of the energy system is followed by thermodynamic and energetic evaluation, and the price of the manufactured products is then determined using the Total Revenue Requirement (TRR) method and thermoeconomic analysis. The results of the optimization are then presented in a Pareto chart obtained with a new model having dual objective functions, namely power cost and produced heat. This model can be utilized to identify the optimal operating point for such power plants based on electricity and heat prices in different regions.
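
The Pareto chart step is generic enough to sketch: given sampled designs scored on the two objectives (power cost to be minimized, produced heat to be maximized), keep the nondominated set. The design samples below are invented.

```python
# Nondominated filtering for the dual objectives: minimize power cost,
# maximize produced heat. Design samples are invented placeholders.
import numpy as np

rng = np.random.default_rng(5)
cost = rng.uniform(40, 80, 200)           # $/MWh (invented)
heat = rng.uniform(10, 60, 200)           # MW of recovered heat (invented)

def nondominated(cost, heat):
    keep = []
    for i in range(len(cost)):
        dominated = np.any((cost <= cost[i]) & (heat >= heat[i]) &
                           ((cost < cost[i]) | (heat > heat[i])))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = nondominated(cost, heat)
for i in front[np.argsort(cost[front])]:
    print(f"cost = {cost[i]:5.1f} $/MWh, heat = {heat[i]:4.1f} MW")
```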

Keywords: heat loss, recycling, unused energy, efficient production, optimization, triple-pressure cogeneration

Procedia PDF Downloads 76
8205 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic Systems

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
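
The workflow described, a random forest surrogate searched by a genetic algorithm, can be shown in miniature. Everything below is a generic stand-in: the "emitter merit" function, the normalized layer-thickness bounds, and the GA settings are assumptions, not the paper's SiC/W/SiO2/W model.

```python
# Generic surrogate-based optimization: a random forest learns a stand-in
# emitter figure of merit from layer thicknesses; a tiny GA searches it.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)

def merit(x):                       # stand-in for spectral selectivity merit
    return -np.sum((x - 0.35) ** 2, axis=-1)

X = rng.uniform(0, 1, (400, 4))     # 4 normalized layer thicknesses
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, merit(X))

pop = rng.uniform(0, 1, (40, 4))
for _ in range(60):                 # simple GA: tournament, crossover, mutation
    fit = rf.predict(pop)
    idx = rng.integers(0, 40, (40, 2))
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    mask = rng.random((40, 4)) < 0.5                    # uniform crossover
    children = np.where(mask, parents, parents[::-1])
    children += 0.05 * rng.standard_normal((40, 4))     # Gaussian mutation
    pop = np.clip(children, 0, 1)

best = pop[np.argmax(rf.predict(pop))]
print("best thicknesses (normalized):", np.round(best, 3))
```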

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

Procedia PDF Downloads 56
8210 Foggy Image Restoration Using Neural Network

Authors: Khader S. Al-Aidmat, Venus W. Samawi

Abstract:

Blurred vision in a misty atmosphere is an essential problem which needs to be resolved. To solve this problem, we developed a technique to restore a foggy degraded image to its original version using a back-propagation neural network (BP-NN). The suggested technique is based on a mapping between a foggy scene and its corresponding original scene. Seven different approaches are suggested based on the type of features used in image restoration. Features are extracted from the spatial and spatial-frequency domains (using the DCT). Each of these approaches comes with its own BP-NN architecture depending on the type and number of features used. The weight matrix resulting from training each BP-NN represents a fog filter. The performance of these filters is evaluated empirically (using PSNR) and perceptually. By comparing the performance of these filters, the effective features that suit the BP-NN technique for restoring foggy images are recognized. This system proved its effectiveness and success in restoring moderately foggy images.
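
One of the seven approaches, mapping DCT features of a foggy patch to the clean center pixel, can be sketched generically. The patch size, network shape, and synthetic "fog" model below are assumptions for illustration.

```python
# Sketch: train a small back-propagation network to map DCT features of a
# foggy 8x8 patch to the clean center pixel. Fog model and sizes are assumed.
import numpy as np
from scipy.fft import dctn
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
clean = rng.random((128, 128))                       # stand-in clean image
fog = 0.6 * clean + 0.4                              # crude airlight fog model

patches, targets = [], []
for i in range(0, 120, 4):
    for j in range(0, 120, 4):
        p = fog[i:i + 8, j:j + 8]
        patches.append(dctn(p, norm="ortho").ravel())  # spatial-frequency features
        targets.append(clean[i + 4, j + 4])

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(np.array(patches), np.array(targets))        # weights act as a "fog filter"

mse = np.mean((net.predict(np.array(patches)) - np.array(targets)) ** 2)
print(f"training PSNR: {10 * np.log10(1.0 / mse):.1f} dB")
```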

Keywords: artificial neural network, discrete cosine transform, feed forward neural network, foggy image restoration

Procedia PDF Downloads 379
8209 Software Reliability Prediction Model Analysis

Authors: Lela Mirtskhulava, Mariam Khunjgurua, Nino Lomineishvili, Koba Bakuria

Abstract:

Software reliability prediction gives a great opportunity to measure the software failure rate at any point throughout system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor for estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. We focus on a software reliability model in this article, assuming that there is time redundancy, the value of which (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may suffer not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence, which consists of a random number of basic blocks. We consider the system software unreliable; the time between adjacent failures has an exponential distribution.

Keywords: exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability

Procedia PDF Downloads 461
8208 Suitable Die Shaping for a Rectangular Shape Bottle by Application of FEM and AI Technique

Authors: N. Ploysook, R. Rugsaj, C. Suvanjumrat

Abstract:

The characteristic requirement for producing rectangular-shaped bottles is a uniform thickness of the bottle wall. Die shaping is a good technique for controlling the wall thickness of bottles. An advanced technology, the finite element method (FEM) for blowing a parison into a rectangular bottle, was employed to reduce the plastic wasted by trial-and-error die shaping and parison control. Artificial intelligence (AI), comprising an artificial neural network and a genetic algorithm, was selected to optimize the die gap shape from the FEM results. The application of the AI technique could find the suitable die gap shape for parison blow molding without depending on the parison control method, producing rectangular bottles with uniform walls. In particular, this application can be used with inexpensive blow molding machines without a parison controller, thereby reducing the cost of production in the bottle blow molding process.

Keywords: AI, bottle, die shaping, FEM

Procedia PDF Downloads 234
8207 An Improved C-Means Model for MRI Segmentation

Authors: Ying Shen, Weihua Zhu

Abstract:

Medical images are important for identifying different diseases; for example, magnetic resonance imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. Image segmentation, in medical image analysis, is usually the first step to find out characteristics with similar color, intensity, or texture so that the diagnosis can be further carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model is based on information entropy to evaluate the segmentation results while achieving global optimization. Several contributions are significant. Firstly, a Genetic Algorithm (GA) is used to achieve global optimization in this model, which the fuzzy C-means clustering algorithm (FCMA) is not capable of doing. Secondly, the information entropy after segmentation is used to measure the effectiveness of MRI image processing. Experimental results show the outperformance of the proposed model compared with traditional approaches.
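
Two ingredients of the model are standard and easy to sketch: the fuzzy C-means membership update u_ik = 1/Σ_j (d_ik/d_jk)^(2/(m−1)) and the entropy H = −Σ p log p of the segmented label histogram. The snippet below computes both for toy data; the GA-driven search over cluster centers is omitted for brevity.

```python
# Fuzzy C-means memberships and post-segmentation entropy on toy data.
# A GA would evolve the cluster centers; here they are fixed for brevity.
import numpy as np

rng = np.random.default_rng(8)
pixels = rng.random(1000)                     # toy intensity values in [0, 1]
centers = np.array([0.2, 0.5, 0.8])           # candidate centers (GA genome)
m = 2.0                                       # fuzzifier

d = np.abs(pixels[:, None] - centers[None, :]) + 1e-12
u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)

labels = u.argmax(axis=1)                     # defuzzify to hard labels
p = np.bincount(labels, minlength=3) / len(labels)
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(f"cluster shares: {np.round(p, 3)}, entropy = {entropy:.3f} bits")
```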

Keywords: magnetic resonance image (MRI), c-means model, image segmentation, information entropy

Procedia PDF Downloads 221
8206 Planning a Supply Chain with Risk and Environmental Objectives

Authors: Ghanima Al-Sharrah, Haitham M. Lababidi, Yusuf I. Ali

Abstract:

The main objective of the current work is to introduce sustainability factors into the optimization of the supply chain model for process industries. Supply chain models are normally based on purely economic considerations related to costs and profits. To account for sustainability, two additional factors have been introduced: environment and risk. A supply chain for an entire petroleum organization has been considered for implementing and testing the proposed optimization models. The environmental and risk factors were introduced as indicators reflecting the anticipated impact of the optimal production scenarios on sustainability. The aggregation method used in extending the single-objective function to a multi-objective function proves quite effective in balancing the contribution of each objective term. The results indicate that introducing the sustainability factors would slightly reduce the economic benefit while improving the environmental and risk-reduction performance of the process industries.
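
The aggregation step, combining economic, environmental, and risk terms into one objective, usually requires normalizing each term before weighting so that no single unit dominates. A generic sketch, with invented scenario values and equal weights:

```python
# Generic weighted aggregation of profit, environmental, and risk indicators
# across candidate production scenarios (all values invented).
import numpy as np

profit = np.array([120.0, 135.0, 128.0])   # M$ (maximize)
impact = np.array([40.0, 55.0, 44.0])      # env. indicator (minimize)
risk = np.array([0.30, 0.45, 0.25])        # risk indicator (minimize)

def scale(v):                               # min-max normalization to [0, 1]
    return (v - v.min()) / (v.max() - v.min())

w = np.array([1 / 3, 1 / 3, 1 / 3])        # weights balancing the three terms
score = w[0] * scale(profit) + w[1] * (1 - scale(impact)) + w[2] * (1 - scale(risk))
print("scenario scores:", np.round(score, 3), "-> pick", int(score.argmax()) + 1)
```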

Keywords: environmental indicators, optimization, risk, supply chain

Procedia PDF Downloads 346
8205 Optimization of Process Parameters and Modeling of Mass Transport during Hybrid Solar Drying of Paddy

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying is one of the most critical unit operations for prolonging the shelf-life of food grains in order to ensure global food security. Photovoltaic-integrated solar dryers can be a sustainable solution for replacing energy-intensive thermal dryers, as they are capable of drying during off-sunshine hours and provide better control over drying conditions. However, the performance and reliability of PV-based solar dryers depend hugely on climatic conditions, thereby drastically affecting process parameters. Therefore, to ensure the quality and prolonged shelf-life of paddy, optimization of process parameters for solar dryers is critical. Proper moisture distribution within the grains is the most detrimental factor for shelf-life; therefore, modeling of mass transport can provide better insight into moisture migration. Hence, the present work aims at optimizing the process parameters and developing a 3D finite element model (FEM) for predicting the moisture profile in paddy during solar drying. Optimization of the process parameters (power level, air velocity, and moisture content) was done using the Box-Behnken model in Design-Expert software. Furthermore, COMSOL Multiphysics was employed to develop a 3D finite element model for predicting the moisture profile. The optimized conditions for drying paddy were found to be 700 W, 2.75 m/s, and 13% wb, with an optimum temperature, milling yield, and drying time of 42 °C, 62%, and 86 min, respectively, at a desirability of 0.905. Furthermore, a 3D finite element model (FEM) for predicting moisture migration in a single kernel at every time step has been developed. The mean absolute error (MAE), mean relative error (MRE), and standard error (SE) were found to be 0.003, 0.0531, and 0.0007, respectively, indicating close agreement of the model with experimental results. The above optimized conditions can be successfully used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality, and yield of the product and achieve global food and energy security.
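
For context on the mass-transport model: the FEM solves Fick's second law inside the kernel, and for the idealized spherical-kernel case the moisture ratio has the classical series solution MR = (6/π²) Σ n⁻² exp(−n²π²D_eff t/r²), often used as an analytical check. The diffusivity and radius below are assumed round numbers, not the paper's fitted values.

```python
# Classical Fickian moisture-ratio series for a sphere, often used as the
# analytical check for FEM drying models. D_eff and r are assumed values.
import numpy as np

D_eff = 1.0e-10        # effective moisture diffusivity, m^2/s (assumed)
r = 1.5e-3             # equivalent kernel radius, m (assumed)

def moisture_ratio(t_s, terms=50):
    n = np.arange(1, terms + 1)
    return (6 / np.pi**2) * np.sum(
        np.exp(-(n * np.pi / r) ** 2 * D_eff * t_s) / n**2)

for minutes in (0, 20, 40, 60, 86):
    print(f"t = {minutes:3d} min -> MR = {moisture_ratio(60 * minutes):.3f}")
```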

Keywords: finite element modeling, hybrid solar drying, mass transport, paddy, process optimization

Procedia PDF Downloads 135
8204 Construction Time - Cost Trade-Off Analysis Using Fuzzy Set Theory

Authors: V. S. S. Kumar, B. Vikram, G. C. S. Reddy

Abstract:

Time and cost are the two critical objectives of construction project management and are not independent but intricately related. The trade-off between project duration and cost is extensively discussed during project scheduling because of its practical relevance. Generally, when the project duration is compressed, the project calls for an increase in labor and more productive equipment, which increases the cost. Thus, construction time-cost optimization is defined as a process to identify suitable construction activities for speeding up to attain the best possible savings in both time and cost. As there is a hidden trade-off relationship between project time and cost, it might be difficult to predict whether the total cost would increase or decrease as a result of compressing the schedule. Different combinations of duration and cost for the activities associated with the project determine the best set in the time-cost optimization. Therefore, contractors need to select the best combination of time and cost to perform each activity, all of which will ultimately determine the project duration and cost. In this paper, fuzzy set theory is used to model the uncertainties in the project environment for time-cost trade-off analysis.
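
A standard way to carry such uncertainty is to model each activity's duration and cost as triangular fuzzy numbers and propagate them through alpha-cut interval arithmetic; the activity values below are invented for illustration.

```python
# Alpha-cut arithmetic on triangular fuzzy numbers (a, b, c) for an
# activity chain's total duration; activity values are invented.
def alpha_cut(tfn, alpha):
    a, b, c = tfn
    return (a + alpha * (b - a), c - alpha * (c - b))  # [lower, upper]

activities = [(5, 7, 10), (3, 4, 6), (8, 9, 13)]       # durations in days

for alpha in (0.0, 0.5, 1.0):
    lo = sum(alpha_cut(t, alpha)[0] for t in activities)
    hi = sum(alpha_cut(t, alpha)[1] for t in activities)
    print(f"alpha = {alpha:.1f}: total duration in [{lo:.1f}, {hi:.1f}] days")
```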

Keywords: fuzzy sets, uncertainty, qualitative factors, decision making

Procedia PDF Downloads 646
8203 Application of VE in Healthcare Services: An Overview of Healthcare Facility

Authors: Safeer Ahmad, Pratheek Sudhakran, M. Arif Kamal, Tarique Anwar

Abstract:

In healthcare facility design, efficient MEP services are crucial because the built environment affects not only patients and their families but also healthcare staff and their outcomes. This paper covers the basics of value engineering and the different phases that can be applied at the MEP design stage for healthcare facility optimization; VE can also improve product cost by removing the unnecessary costs associated with healthcare services. The paper explores healthcare facility services and their value engineering job plan. For successful application of the VE technique, a workshop with end-users, the design team, and associated experts is carried out using concepts, tools, methods, and mechanisms developed to select what is appropriate and ideal among the many value engineering processes and tools that have long proven their ability to enhance value, following the concept of total quality management while achieving the most efficient resource allocation to satisfy the key functions and requirements of the project without sacrificing the targeted level of service for all design metrics. A detailed study is discussed, with analysis carried out through this process to achieve a better outcome; various tools are used for the analysis of the product at different phases, and the results obtained after implementation of the techniques are discussed.

Keywords: value engineering, healthcare facility, design, services

Procedia PDF Downloads 189
8202 An Integrated Fuzzy Inference System and Technique for Order of Preference by Similarity to Ideal Solution Approach for Evaluation of Lean Healthcare Systems

Authors: Aydin M. Torkabadi, Ehsan Pourjavad

Abstract:

A decade after the introduction of Lean in Saskatchewan's public healthcare system, its effectiveness remains a controversial subject among health researchers, workers, managers, and politicians. Therefore, developing a framework to quantitatively assess the Lean achievements is significant. This study investigates the success of initiatives across Saskatchewan health regions by recognizing the Lean healthcare criteria, measuring the success levels, comparing the regions, and identifying the areas for improvement. This study proposes an integrated intelligent computing approach applying a Fuzzy Inference System (FIS) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). FIS is used as an efficient approach to assess the Lean healthcare criteria, and TOPSIS is applied to rank the regions by their level of leanness. Due to the innate uncertainty in decision-maker judgments on criteria, principles of fuzzy theory are applied. Finally, FIS-TOPSIS was established as an efficient technique for determining the Lean merit in healthcare systems.
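
The TOPSIS stage is fully standard and can be sketched directly: vector-normalize the decision matrix, weight it, measure each region's distance to the ideal and anti-ideal points, and rank by relative closeness. The scores and weights below are invented placeholders for the FIS outputs.

```python
# Standard TOPSIS ranking; criterion scores (e.g., FIS outputs per region)
# and weights are invented placeholders. All criteria are benefit-type here.
import numpy as np

X = np.array([          # rows: health regions; cols: Lean criteria scores
    [0.70, 0.55, 0.80],
    [0.60, 0.75, 0.65],
    [0.85, 0.60, 0.55],
])
w = np.array([0.5, 0.3, 0.2])

V = w * X / np.linalg.norm(X, axis=0)        # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)   # benefit criteria only

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

for r in np.argsort(-closeness):
    print(f"region {r + 1}: closeness = {closeness[r]:.3f}")
```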

Keywords: lean healthcare, intelligent computing, fuzzy inference system, healthcare evaluation, technique for order of preference by similarity to ideal solution, multi-criteria decision making, MCDM

Procedia PDF Downloads 153