Search results for: multi-objective combinatorial optimization problem
8204 Random Matrix Theory Analysis of Cross-Correlation in the Nigerian Stock Exchange
Authors: Chimezie P. Nnanwa, Thomas C. Urama, Patrick O. Ezepue
Abstract:
In this paper we use Random Matrix Theory (RMT) to analyze the eigen-structure of the empirical correlation matrix of 82 stocks that were consistently traded on the Nigerian Stock Exchange (NSE) over a 4-year study period, 3 August 2009 to 26 August 2013. We apply the Marchenko-Pastur distribution of eigenvalues of a purely random matrix to investigate the presence of investment-pertinent information contained in the empirical correlation matrix of the selected stocks. We use the hypothesised standard normal distribution of eigenvector components from RMT to assess deviations of the empirical eigenvectors from this distribution for different eigenvalues. We also use the Inverse Participation Ratio to measure the deviation of eigenvectors of the empirical correlation matrix from RMT results. These preliminary results on the dynamics of asset price correlations in the NSE are important for improving the risk-return trade-offs associated with Markowitz’s portfolio optimization in the stock exchange, which is pursued in future work.
Keywords: correlation matrix, eigenvalue and eigenvector, inverse participation ratio, portfolio optimization, random matrix theory
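The two RMT diagnostics named in this abstract, the Marchenko-Pastur eigenvalue bounds and the Inverse Participation Ratio, can be sketched in a few lines. This is a minimal illustration; the function names and the Q = T/N convention are ours, not the paper's:

```python
import math

def marchenko_pastur_bounds(n_stocks, n_obs, sigma2=1.0):
    """Theoretical eigenvalue bounds for a purely random correlation matrix.
    Empirical eigenvalues outside [lam_min, lam_max] may carry information."""
    q = n_obs / n_stocks  # Q = T/N, assumed > 1
    lam_min = sigma2 * (1 - math.sqrt(1 / q)) ** 2
    lam_max = sigma2 * (1 + math.sqrt(1 / q)) ** 2
    return lam_min, lam_max

def inverse_participation_ratio(eigvec):
    """IPR = sum of fourth powers of the normalised eigenvector components:
    ~1/N for a vector spread over all N stocks, ~1 for a localised one."""
    norm = math.sqrt(sum(c * c for c in eigvec))
    return sum((c / norm) ** 4 for c in eigvec)
```

Eigenvalues of the empirical correlation matrix falling outside these bounds, or eigenvectors with unusually large IPR, are the candidates for "investment-pertinent" structure.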
Procedia PDF Downloads 344
8203 A Bi-Objective Stochastic Mathematical Model for Agricultural Supply Chain Network
Authors: Mohammad Mahdi Paydar, Armin Cheraghalipour, Mostafa Hajiaghaei-Keshteli
Abstract:
Nowadays, in advanced countries, agriculture, as one of the most significant sectors of the economy, plays an important role in political and economic independence. Due to farmers' lack of information about product demand and the lack of proper planning for harvest time, a considerable amount of produce spoils annually. In this paper, we attempt to improve these unfavorable conditions by designing an effective supply chain network that minimizes the total costs of agricultural products along with minimizing shortages at demand points. To validate the proposed model, a stochastic optimization approach using the branch-and-bound solver of the LINGO software is utilized. To gather data for the parameters, a case study in Mazandaran province, in the north of Iran, is applied. Using the ɛ-constraint approach, a Pareto front is obtained, one of its Pareto solutions is selected as the best solution, and the related results are explained. Finally, conclusions and suggestions for future research are presented.
Keywords: perishable products, stochastic optimization, agricultural supply chain, ɛ-constraint
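The ɛ-constraint step can be illustrated with a toy post-processing sketch: for each bound ɛ on the shortage objective, keep the cheapest feasible solution. The solution pairs below are hypothetical, not the case-study data:

```python
def epsilon_constraint_front(solutions, epsilons):
    """solutions: list of (total_cost, shortage) pairs returned by the model.
    For each epsilon, keep the cheapest solution whose shortage <= epsilon;
    collecting these over a sweep of epsilons traces a Pareto front."""
    front = []
    for eps in epsilons:
        feasible = [s for s in solutions if s[1] <= eps]
        if feasible:
            best = min(feasible, key=lambda s: s[0])
            if best not in front:
                front.append(best)
    return front
```

In the paper the inner minimization is a stochastic mixed-integer model solved by branch and bound; here it is reduced to picking among precomputed candidate solutions.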
Procedia PDF Downloads 366
8202 Cost-Optimized Extra-Lateral Transshipments
Authors: Dilupa Nakandala, Henry Lau
Abstract:
Ever-increasing demand for cost efficiency and customer satisfaction through reliable delivery has been a mandate for logistics practitioners to continually improve inventory management processes. With cost optimization objectives, this study considers an extended scenario in which sourcing from the same echelon of the supply chain, known as lateral transshipment (instantaneous but more expensive than purchasing from regular suppliers), is considered by warehouses not only to reactively fulfill urgent outstanding retailer demand that could not be met from stock on hand, but also to preventively reduce back-order cost. Such extra lateral transshipments, as preventive responses, are intended to meet the expected demand during the supplier lead time in a periodic-review ordering policy setting. We develop decision rules to assist logistics practitioners in making a cost-optimized selection between back-ordering and combined reactive and proactive lateral transshipment options. A method for determining the optimal quantity of extra lateral transshipment is developed considering the trade-off between purchasing, holding, and backorder cost components.
Keywords: lateral transshipment, warehouse inventory management, cost optimization, preventive transshipment
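The trade-off behind the optimal extra transshipment quantity can be sketched as a newsvendor-style expected-cost minimization over lead-time demand. This is a simplified stand-in for the authors' decision rules; the parameter names and discrete demand model are illustrative:

```python
def optimal_extra_quantity(demand_pmf, holding_cost, backorder_cost, transship_premium):
    """Choose the extra transshipment quantity Q that minimises expected
    holding + backorder + transshipment cost over lead-time demand.
    demand_pmf: dict mapping demand level -> probability."""
    def expected_cost(q):
        cost = q * transship_premium           # pay the premium up front
        for d, p in demand_pmf.items():
            if q >= d:
                cost += p * holding_cost * (q - d)     # leftover stock
            else:
                cost += p * backorder_cost * (d - q)   # unmet demand
        return cost
    candidates = range(0, max(demand_pmf) + 1)
    return min(candidates, key=expected_cost)
```

When the backorder penalty dominates the transshipment premium and holding cost, the rule pushes Q toward the upper tail of lead-time demand, mirroring the reactive-versus-preventive choice discussed in the abstract.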
Procedia PDF Downloads 616
8201 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs
Authors: H. M. Soroush
Abstract:
The problem of scheduling products and services for on-time delivery is of paramount importance in today’s competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers must frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery, in order to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard; however, we explore three scenarios wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time and introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well, yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single-machine models.
Keywords: number of late jobs, scheduling, single server, stochastic
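For scenario (iii), deterministic processing times with stochastic due dates, the objective for a given sequence can be evaluated in closed form when due dates are normal. This sketch uses our own notation (it is not the paper's heuristic, only the objective evaluation a heuristic would compare sequences with):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def expected_weighted_tardy(sequence, proc, due_mean, due_sd, weight):
    """Expected weighted number of tardy jobs for one sequence on a single
    server: processing times deterministic, due dates ~ Normal(mean, sd).
    Job j is tardy iff its due date falls before its completion time."""
    t, total = 0.0, 0.0
    for j in sequence:
        t += proc[j]                                   # completion time of job j
        p_tardy = phi((t - due_mean[j]) / due_sd[j])   # P(due date < t)
        total += weight[j] * p_tardy
    return total
```

A heuristic can then search over sequences (e.g. pairwise swaps) to reduce this expectation.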
Procedia PDF Downloads 497
8200 Value Index, a Novel Decision Making Approach for Waste Load Allocation
Authors: E. Feizi Ashtiani, S. Jamshidi, M.H Niksokhan, A. Feizi Ashtiani
Abstract:
Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually aim to simultaneously minimize two criteria, total abatement cost (TC) and environmental violation (EV). If other criteria, such as inequity, need to be minimized as well, additional two-objective optimizations must be introduced through different scenarios. In order to reduce the calculation steps, this study presents the value index as an innovative decision-making approach. Since the value index contains both the environmental violation and the treatment costs, it can be maximized simultaneously with the equity index. This implies that defining different scenarios for environmental violations is no longer required. Furthermore, the solution is not necessarily the point with minimized total costs or environmental violations. The idea is tested on the Haraz River, in the north of Iran. Here, the dissolved oxygen (DO) level of the river is simulated by the Streeter-Phelps equation in MATLAB software. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-Inequity are plotted separately, as in the conventional approach. In the second, the Value-Equity curve is derived. The comparative results show that the solutions are in a similar range of inequity with lower total costs. This is due to the freedom in environmental violation attained by the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This shortens the process of reaching the best solutions and may yield a better classification for scenario definition.
It is also concluded that decision makers would do better to focus on the value index, weighting its contents to find the most sustainable alternatives based on their requirements.
Keywords: waste load allocation (WLA), value index, multi-objective particle swarm optimization (MOPSO), Haraz River, equity
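The Streeter-Phelps simulation mentioned above has a standard closed-form solution for the oxygen deficit, which can be sketched as follows (parameter names are generic, not the Haraz River calibration):

```python
import math

def streeter_phelps_deficit(t, L0, D0, kd, kr):
    """Classic Streeter-Phelps oxygen deficit D(t) downstream of a load:
    L0 initial BOD, D0 initial deficit, kd deoxygenation rate, kr reaeration
    rate (consistent time units)."""
    if abs(kr - kd) < 1e-12:   # limiting case kr -> kd
        return (kd * L0 * t + D0) * math.exp(-kd * t)
    return (kd * L0 / (kr - kd)) * (math.exp(-kd * t) - math.exp(-kr * t)) \
           + D0 * math.exp(-kr * t)

def dissolved_oxygen(t, DO_sat, **kw):
    """DO level = saturation minus deficit."""
    return DO_sat - streeter_phelps_deficit(t, **kw)
```

A WLA optimizer such as MOPSO repeatedly evaluates this DO profile at checkpoints along the river to score the environmental-violation objective of a candidate allocation.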
Procedia PDF Downloads 422
8199 Determinants of Financial Structure in the Economic Institution
Authors: Abdous Noureddine
Abstract:
The problem of funding in Algeria emerged as a subject requiring study after many Algerian researchers pointed out that the faltering of the Algerian public economic institution is due to imbalances in its financial structures and to low management and marketing efficiency, as well as to a severe expansion of borrowing caused by inadequate own resources. The consequent inability of this institution to repay loans and interest payments, in addition to its increasing reliance on overdrafts used to finance fixed assets, marks a deterioration that undoubtedly requires research into its causes and possible treatments, which the current study addresses.
Keywords: financial structure, financial capital, equity, debt, firm’s value, return, leverage
Procedia PDF Downloads 312
8198 Enhancing the Bionic Eye: A Real-time Image Optimization Framework to Encode Color and Spatial Information Into Retinal Prostheses
Authors: William Huang
Abstract:
Retinal prostheses are currently limited to low-resolution grayscale images that lack color and spatial information. This study develops a novel real-time image optimization framework and tools to encode maximum information into prostheses that are constrained by the number of electrodes. One key idea is to localize the main objects in images while reducing unnecessary background noise through region-contrast saliency maps. A novel color depth mapping technique was developed using MiniBatchKMeans clustering and color space selection. The resulting image was downsampled using bicubic interpolation to reduce image size while preserving color quality. In comparison to current schemes, the proposed framework demonstrated better visual quality on tested images. The use of the region-contrast saliency map showed improvements in efficacy of up to 30%. Finally, the computational time of this algorithm is less than 380 ms on tested cases, making real-time retinal prostheses feasible.
Keywords: retinal implants, virtual processing unit, computer vision, saliency maps, color quantization
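The colour depth mapping step rests on k-means clustering of pixel colours. The study uses scikit-learn's MiniBatchKMeans; the stdlib stand-in below shows the same idea on a handful of RGB tuples (cluster count, seed, and pixel format are illustrative):

```python
import random

def quantize_colors(pixels, k, iters=10, seed=0):
    """Tiny k-means colour quantiser: reduce an image's palette to k colours.
    pixels: list of (r, g, b) tuples; returns the k cluster-centre colours."""
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest centre (squared RGB distance)
            i = min(range(k),
                    key=lambda c: sum((p[j] - centers[c][j]) ** 2 for j in range(3)))
            clusters[i].append(p)
        for c, members in enumerate(clusters):
            if members:  # recompute centre as the mean colour of its members
                centers[c] = tuple(sum(m[j] for m in members) / len(members)
                                   for j in range(3))
    return centers
```

Each pixel is then replaced by its nearest centre colour, which is what lets the framework match a limited electrode palette.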
Procedia PDF Downloads 153
8197 Optimization of Dissolution of Chevreul’s Salt in Ammonium Chloride Solutions
Authors: Mustafa Sertçelik, Hacali Necefoğlu, Turan Çalban, Soner Kuşlu
Abstract:
In this study, Chevreul’s salt was dissolved in ammonium chloride solutions. All experiments were performed in a batch reactor, and the obtained results were optimized. The parameters used in the experiments were the reaction temperature, the ammonium chloride concentration, the reaction time, and the solid-to-liquid ratio. The optimum conditions were determined by the 2⁴ factorial experimental design method, and the best values of the four parameters were determined based on the experimental results. After evaluation of the results, all parameters were found to be effective under the selected experimental conditions. The optimum conditions for maximum Chevreul’s salt dissolution were an ammonium chloride concentration of 4.5 M, a reaction time of 13.2 min, a reaction temperature of 25 °C, and a solid-to-liquid ratio of 9/80 g·mL⁻¹. The best dissolution yield under these conditions was 96.20%.
Keywords: Chevreul's salt, factorial experimental design method, ammonium chloride, dissolution, optimization
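A 2⁴ factorial design simply runs every combination of low/high levels of the four factors, 16 runs in all. A minimal sketch (the low/high levels below are illustrative placeholders, not the paper's exact design table):

```python
from itertools import product

def two_level_factorial(factors):
    """Full 2^k factorial design: one run per combination of low/high levels.
    factors: dict mapping factor name -> (low, high)."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*[factors[n] for n in names])]

design = two_level_factorial({
    "temperature_C": (20, 25),            # illustrative levels only
    "NH4Cl_molarity": (3.0, 4.5),
    "time_min": (5, 15),
    "solid_liquid_g_per_mL": (5 / 80, 9 / 80),
})
```

Main effects are then estimated by contrasting the mean yield at each factor's high level against its low level across the 16 runs.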
Procedia PDF Downloads 246
8196 Design and Optimization of Spoke Rotor Type Brushless Direct Current Motor for Electric Vehicles Using Different Flux Barriers
Authors: Ismail Kurt, Necibe Fusun Oyman Serteller
Abstract:
Today, with the reduction in semiconductor system costs, Brushless Direct Current (BLDC) motors have become widely preferred. Based on rotor architecture, BLDC structures are divided into interior permanent magnet (IPM) and surface permanent magnet (SPM) types. Permanent magnet (PM) motors in electric vehicles (EVs) are still predominantly IPM motors, as their rotors do not require sleeves, the PMs are better protected by the rotor cores, and the air-gap lengths can be much smaller. This study discusses the IPM rotor structure in detail, highlighting its higher torque levels, reluctance torque, wide speed range operation, and production advantages. IPM rotor structures are particularly preferred in EVs due to their high-speed capability, torque density, and field weakening (FW) features. In FW applications, the motor becomes more suitable for operation at torques lower than the rated torque but at speeds above the rated speed. Although V-type and triangular IPM rotor structures are generally preferred in EV applications, the spoke-type rotor structure offers distinct advantages, making it a competitive option for these systems. The flux barriers in the rotor significantly affect motor performance, providing notable benefits in both motor efficiency and cost. This study utilizes ANSYS/Maxwell simulation software to analyze the spoke-type IPM motor and examine its key design parameters. Through analytical and 2D analysis, preliminary motor design and parameter optimization have been carried out. During the parameter optimization phase, torque ripple, a common issue especially for IPM motors, has been investigated, along with the associated changes in motor parameters.
Keywords: electric vehicle, field weakening, flux barrier, spoke rotor
Procedia PDF Downloads 8
8195 Optimization of Moisture Content for Highest Tensile Strength of Instant Soluble Milk Tablet and Flowability of Milk Powder
Authors: Siddharth Vishwakarma, Danie Shajie A., Mishra H. N.
Abstract:
Milk powder is very useful in areas with low milk supply, but measuring the exact amount to add for one glass of milk and handling the powder are difficult. Hence the idea of the instant soluble milk tablet, for its high solubility and easy handling. The moisture content of the milk tablets is increased by the direct addition of water, with no additives for binding. The variation of the tensile strength of instant soluble milk tablets and of the flowability of milk powder with moisture content is analyzed, and the moisture content is optimized for the highest tablet tensile strength and for flowability above a particular value, using response surface methodology. The flowability value is necessary for ease in metering the milk powder, as feed, in the designed tablet-making machine. The instant soluble nature of the milk tablets depends purely on the disintegration characteristics of the tablets in water, whose study is in progress. Conclusions: The optimization results are very useful for the commercialization of milk tablets.
Keywords: flowability, milk powder, response surface methodology, tablet making machine, tensile strength
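Once response surface methodology fits a response such as tensile strength as a quadratic in moisture content, the optimum moisture is the stationary point of the fitted polynomial. For a single factor (the coefficients below are hypothetical, not the study's fit):

```python
def quadratic_optimum(b0, b1, b2):
    """Stationary point of a fitted one-factor response surface
    y = b0 + b1*x + b2*x**2 (a maximum when b2 < 0).
    Returns (x_star, y_star)."""
    x_star = -b1 / (2 * b2)
    return x_star, b0 + b1 * x_star + b2 * x_star ** 2
```

With several factors, RSM does the same thing on the full quadratic model, setting all partial derivatives to zero.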
Procedia PDF Downloads 182
8194 Optimization of Biodiesel Production from Palm Oil over Mg-Al Modified K-10 Clay Catalyst
Authors: Muhammad Ayoub, Abrar Inayat, Bhajan Lal, Sintayehu Mekuria Hailegiorgis
Abstract:
Biodiesel, which comes from purely renewable resources, provides an alternative fuel option for the future, given limited fossil fuel resources as well as environmental concerns. The transesterification of vegetable oils for biodiesel production is a promising process to help avert this future energy crisis. The use of heterogeneous catalysts greatly simplifies the technological process by facilitating the separation of the post-reaction mixture. The purpose of the present work was to examine a heterogeneous catalyst, in particular Mg-Al modified K-10 clay, to produce methyl esters of palm oil. The prepared catalyst was well characterized by several modern techniques. In this study, the transesterification of palm oil with methanol was studied in a heterogeneous system in the presence of Mg-Al modified K-10 clay as a solid base catalyst, and the results were then optimized with the help of Design of Experiments software. The results showed that methanol is the best alcohol for these reaction conditions. The maximum conversion of triglyceride (88%) was noted after 8 h of reaction at 60 °C, with a 6:1 molar ratio of methanol to palm oil and 3 wt% of the prepared catalyst.
Keywords: palm oil, transesterification, clay, biodiesel, mesoporous clay, K-10
Procedia PDF Downloads 396
8193 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on crop mechanistic modeling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data, final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
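The two accuracy metrics used in the cross-validation are straightforward to state (plain definitions; the percentage normalisation applied in the paper's reported figures may differ):

```python
import math

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def maep(y_true, y_pred):
    """Mean absolute error of prediction."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

In 5-fold cross-validation these are computed on each held-out fold and averaged, so every record is predicted exactly once by a model that never saw it.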
Procedia PDF Downloads 231
8192 Optimized Deep Learning-Based Facial Emotion Recognition System
Authors: Erick C. Valverde, Wansu Lim
Abstract:
Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) and allow better human social interactions with smart technologies. The FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies the human expression into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. The system requires intensive research to address issues with human diversity, various unique human expressions, and the variety of human facial features due to age differences. These issues generally affect the ability of the FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffer from low accuracy due to their inefficiency in extracting the significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, like convolutional neural networks (CNN), have been proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational costs and reduce memory usage, respectively.
To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to incorporate advanced optimization techniques into the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the objectives of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system.
Keywords: deep learning, face detection, facial emotion recognition, network optimization methods
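Network pruning of the kind applied to the Xception model can be illustrated by simple magnitude pruning, zeroing the smallest-magnitude weights until a target sparsity is reached. This is a generic sketch, not the study's exact scheme; ties at the threshold may zero slightly more weights than requested:

```python
def prune_weights(weights, sparsity):
    """Magnitude pruning: zero out the smallest |w| so that roughly the
    requested fraction of weights becomes zero. weights: flat list of floats."""
    n_zero = int(len(weights) * sparsity)
    if n_zero == 0:
        return list(weights)
    # threshold = magnitude of the n_zero-th smallest weight
    threshold = sorted(abs(w) for w in weights)[n_zero - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

Quantization is complementary: instead of removing weights, it stores the survivors at reduced precision (e.g. 8-bit integers), cutting memory and often speeding up inference on supported hardware.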
Procedia PDF Downloads 118
8191 Production of Novel Antibiotics by Importing eryK and eryG Genes in Streptomyces fradiae
Authors: Neda Gegar Goshe, Hossein Rassi
Abstract:
The antibacterial properties of macrolide antibiotics (such as erythromycin and tylosin) depend ultimately on the glycosylation of otherwise inactive polyketide lactones. Among the sugars commonly found in such macrolides are various 6-deoxyhexoses, including the 3-dimethylamino sugars mycaminose and desosamine (4-deoxymycaminose). Some macrolides (such as tylosin) possess multiple sugar moieties, whereas others (such as erythromycin) have two sugar substituents. Streptomyces fradiae is an ideal host for the development of generic polyketide-overproducing strains because it contains three of the most common precursors used by modular PKS (malonyl-CoA, methylmalonyl-CoA, and ethylmalonyl-CoA) and is amenable to genetic manipulation. As patterns of glycosylation markedly influence a macrolide's drug activity, there is considerable interest in the possibility of using combinatorial biosynthesis to generate new pairings of polyketide lactones with sugars, especially 6-deoxyhexoses. Here, we report a successful attempt to alter the aminodeoxyhexose-biosynthetic capacity of Streptomyces fradiae (a producer of tylosin) by importing genes from the erythromycin producer Saccharopolyspora erythraea. The biotransformation of erythromycin-D into the desired major component erythromycin-A involves two final enzymatic reactions: EryK-catalyzed hydroxylation at the C-12 position of the aglycone and EryG-catalyzed O-methylation at the C-3 position of mycarose. This engineered S. fradiae produced substantial amounts of two potentially useful macrolides that had not previously been obtained by fermentation.
Keywords: Streptomyces fradiae, eryK and eryG genes, tylosin, antibiotics
Procedia PDF Downloads 325
8190 Engineered Bio-Coal from Pressed Seed Cake for Removal of 2, 4, 6-Trichlorophenol with Parametric Optimization Using Box–Behnken Method
Authors: Harsha Nagar, Vineet Aniya, Alka Kumari, Satyavathi B.
Abstract:
In the present study, engineered bio-coal was produced from pressed seed cake, which is otherwise non-edible in origin. The production process involves slow pyrolysis wherein, based on the optimization of process parameters, a substantial reduction of 77% in the H/C and O/C ratios was achieved with respect to the original ratios of 1.67 and 0.8, respectively. The bio-coal product was found to have a higher heating value of 29899 kJ/kg, a surface area of 17 m²/g, and a pore volume of 0.002 cc/g. The functional characterization of the bio-coal and its subsequent modification were carried out to enhance its active sites, and it was then used as an adsorbent material for the removal of the herbicide 2,4,6-trichlorophenol (2,4,6-TCP) from aqueous streams. The point of zero charge for the bio-coal was found at pH < 3, where its surface is positively charged and attracts anions, resulting in maximum 2,4,6-TCP adsorption at pH 2.0. The parametric optimization of the adsorption process was studied based on the Box-Behnken design with the desirability approach. The results showed optimum values of adsorption efficiency of 74.04% and uptake capacity of 118.336 mg/g for an initial 2,4,6-TCP concentration of 250 mg/L and a particle size of 0.12 mm at pH 2.0 and 1 g/L of bio-coal loading. Negative Gibbs free energy change values indicated the feasibility of 2,4,6-TCP adsorption on the bio-coal. Decreasing ΔG values with rising temperature indicated high favourability at low temperatures. The equilibrium modeling results showed that both isotherms (Langmuir and Freundlich) accurately predicted the equilibrium data, which may be attributed to the different affinities of the bio-coal's functional groups for 2,4,6-TCP removal.
The possible mechanisms for 2,4,6-TCP adsorption are found to be physisorption (pore diffusion, π-π electron donor-acceptor interaction, H-bonding, and van der Waals dispersion forces) and chemisorption (chemical bonding with phenolic and amine groups), based on the kinetic data modeling.
Keywords: engineered bio-coal, 2,4,6-trichlorophenol, Box-Behnken design, biosorption
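The two isotherms fitted to the equilibrium data have standard forms, sketched below with generic parameter names (the abstract does not report the fitted constants, so the values in the test are illustrative):

```python
def langmuir(ce, q_max, k_l):
    """Langmuir isotherm: monolayer uptake q_e as a function of equilibrium
    concentration Ce; saturates at q_max for large Ce."""
    return q_max * k_l * ce / (1 + k_l * ce)

def freundlich(ce, k_f, n):
    """Freundlich isotherm: empirical heterogeneous-surface uptake
    q_e = K_F * Ce^(1/n)."""
    return k_f * ce ** (1 / n)
```

Fitting both and comparing goodness of fit is the usual way to judge whether uptake looks monolayer-limited (Langmuir) or heterogeneous (Freundlich), and here both fit well.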
Procedia PDF Downloads 117
8189 Characteristic Function in Estimation of Probability Distribution Moments
Authors: Vladimir S. Timofeev
Abstract:
In this article, the problem of estimating distributional moments is considered. A new approach to moment estimation based on the use of the characteristic function is proposed. Using a statistical simulation technique, the author shows that the new approach has certain robustness properties. Numerical differentiation is used to calculate the derivatives of the characteristic function. The obtained results confirm that the proposed idea works efficiently and can be recommended for statistical applications.
Keywords: characteristic function, distributional moments, robustness, outlier, statistical estimation problem, statistical simulation
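The core idea, recovering raw moments from numerical derivatives of the empirical characteristic function at t = 0, can be sketched for the first two moments. The step size h and the central-difference stencils are our choices, not the article's:

```python
import cmath

def empirical_cf(sample, t):
    """Empirical characteristic function: phi(t) = mean of exp(i t x)."""
    return sum(cmath.exp(1j * t * x) for x in sample) / len(sample)

def moment_from_cf(sample, k, h=1e-3):
    """k-th raw moment via the k-th numerical derivative of phi at 0,
    using m_k = phi^(k)(0) / i^k with central differences of step h."""
    if k == 1:
        d = (empirical_cf(sample, h) - empirical_cf(sample, -h)) / (2 * h)
    elif k == 2:
        d = (empirical_cf(sample, h) - 2 * empirical_cf(sample, 0)
             + empirical_cf(sample, -h)) / h ** 2
    else:
        raise ValueError("sketch handles k = 1, 2 only")
    return (d / (1j ** k)).real
```

Because the characteristic function averages bounded complex exponentials rather than raw powers of the data, single extreme observations move it less than they move sample moments, which is the intuition behind the robustness claim.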
Procedia PDF Downloads 504
8188 Spherical Harmonic Based Monostatic Anisotropic Point Scatterer Model for RADAR Applications
Authors: Eric Huang, Coleman DeLude, Justin Romberg, Saibal Mukhopadhyay, Madhavan Swaminathan
Abstract:
High performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the RADAR Cross Section (RCS) of the targets being available in complex scenarios. Representing the RCS using tables generated from electromagnetic (EM) simulations is oftentimes cumbersome, leading to large storage requirements. This paper proposes a spherical harmonic based anisotropic scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least squares problem with a special sparsity constraint. This paper solves the problem using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical harmonic based scatterer model can effectively represent the RCS data of complex targets.
Keywords: RADAR, RCS, high performance computing, point scatterer model
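Orthogonal Matching Pursuit greedily selects the dictionary atom most correlated with the current residual, re-fits, and repeats. Specialised to an orthonormal dictionary, the least-squares re-fit collapses to an inner product, which makes the idea visible in a few lines (a didactic sketch, not the paper's modified algorithm):

```python
def omp_orthonormal(dictionary, signal, n_atoms):
    """OMP specialised to an orthonormal dictionary, where the least-squares
    step reduces to inner products. dictionary: list of atoms (lists of
    floats), signal: list of floats. Returns {atom index: coefficient}."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    residual = list(signal)
    coeffs = {}
    for _ in range(n_atoms):
        # pick the atom most correlated with the residual
        i = max(range(len(dictionary)),
                key=lambda j: abs(dot(dictionary[j], residual)))
        c = dot(dictionary[i], residual)  # orthonormal atoms: projection coeff
        coeffs[i] = coeffs.get(i, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, dictionary[i])]
    return coeffs
```

In the paper's setting the atoms encode candidate scatterer locations with spherical harmonic reflection profiles, and the sparsity constraint limits how many atoms may be selected.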
Procedia PDF Downloads 191
8187 Network Connectivity Knowledge Graph Using Dwave Quantum Hybrid Solvers
Authors: Nivedha Rajaram
Abstract:
Hybrid quantum solvers have recently received prime focus in industrial computational problem-solving applications. D’Wave quantum computers are one such paragon of systems built on the quantum annealing mechanism. Discrete Quadratic Models (DQM) are a hybrid quantum computing model class supplied by the D’Wave Ocean SDK, a real-time software platform for hybrid quantum solvers. These hybrid quantum computing modellers can be employed to solve classic problems. One such problem that we consider in this paper is finding a network connectivity knowledge hub in a huge network of systems. Using this quantum solver, we try to find the prime system hub, which acts as a supreme connection point for the set of connected computers in a large network. This paper establishes an innovative problem approach to generate a connectivity system hub plot for a set of systems using the D’Wave Ocean SDK hybrid quantum solvers.
Keywords: quantum computing, hybrid quantum solver, DWave annealing, network knowledge graph
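As a classical baseline for the hub-finding task (the DQM formulation submitted to the hybrid solver is not reproduced here), degree centrality already identifies a hub in a toy graph:

```python
def connectivity_hub(edges):
    """Classical stand-in for the hybrid-solver step: return the node with
    the highest degree as the network's prime connectivity hub.
    edges: list of (u, v) pairs in an undirected graph."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return max(degree, key=degree.get)
```

A DQM version would encode the same preference as a quadratic objective over discrete node variables and hand it to a sampler such as D'Wave's hybrid solver; the classical baseline is useful for sanity-checking the returned hub on small instances.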
Procedia PDF Downloads 127
8186 Minimization Entropic Applied to Rotary Dryers to Reduce the Energy Consumption
Authors: I. O. Nascimento, J. T. Manzi
Abstract:
The drying process is an important operation in the chemical industry and is widely used in the food, grain, and fertilizer industries. However, because it demands considerable energy consumption, such a process requires a deep energetic analysis in order to reduce operating costs. This paper deals with thermodynamic optimization applied to rotary dryers based on entropy production minimization, aiming to reduce energy consumption. To do this, the mass, energy, and entropy balances were used to develop a relationship that represents the rate of entropy production. The use of the Second Law of Thermodynamics is essential because it takes into account the constraints of nature. When the entropy production rate is minimized, optimal operating conditions can be established and the process can achieve substantial energy savings. The minimization was carried out using classical methods such as Lagrange multipliers and implemented on the MATLAB platform. As expected, the preliminary results reveal significant energy savings from applying the optimal parameters found by the entropy minimization procedure. It is worth noting that this method has proved easy to implement and low in cost.
Keywords: thermodynamic optimization, drying, entropy minimization, modeling dryers
Procedia PDF Downloads 258
8185 Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images
Authors: Enver Kucukkulahli, Pakize Erdogmus, Kemal Polat
Abstract:
2D image segmentation is a significant process for finding a region of interest in medical images such as MRI, PET, CT, etc. In this study, we have focused on 2D MRI images for the image segmentation process. We have designed a GUI (graphical user interface) written in MATLAB for 2D MRI images. In this program, there are two different interfaces, covering data pre-processing and image clustering or segmentation. In the data pre-processing section, there are a median filter, an average filter, an unsharp mask filter, a Wiener filter, and a custom filter (a filter designed by the user in MATLAB). As for image clustering, there are seven different segmentation algorithms for 2D MR images: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, the combination of Lloyd's algorithm and k-means, mean shift clustering, and finally BBO (Biogeography Based Optimization). To find a suitable cluster number in 2D MRI, we have designed a histogram-based cluster estimation method and then fed these numbers to the segmentation algorithms to cluster an image automatically. We have also selected the best hybrid method for each 2D MR image thanks to this GUI software.
Keywords: image segmentation, clustering, GUI, 2D MRI
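A histogram-based cluster-count estimate like the one in this GUI can be sketched as counting prominent peaks of the intensity histogram. The bin count and peak threshold below are our assumptions, not the paper's settings:

```python
def estimate_cluster_count(values, n_bins=16, min_frac=0.05):
    """Estimate a cluster number k for segmentation by counting local peaks
    of the intensity histogram that hold at least min_frac of the pixels."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1   # guard against a constant image
    hist = [0] * n_bins
    for v in values:
        i = min(int((v - lo) / width), n_bins - 1)
        hist[i] += 1
    floor = min_frac * len(values)
    peaks = 0
    for i, h in enumerate(hist):
        left = hist[i - 1] if i > 0 else -1
        right = hist[i + 1] if i < n_bins - 1 else -1
        if h >= floor and h > left and h > right:
            peaks += 1
    return max(peaks, 1)
```

The estimated k is then passed as the cluster count to k-means, PSO, GA, and the other clustering algorithms so the segmentation runs without manual tuning.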
Procedia PDF Downloads 377
8184 Statistical Optimization of Distribution Coefficient for Reactive Extraction of Lactic Acid Using Tri-n-octyl Amine in Oleyl Alcohol and n-Hexane
Authors: Avinash Thakur, Parmjit S. Panesar, Manohar Singh
Abstract:
The distribution coefficient, KD, for the reactive extraction of lactic acid from aqueous solutions using 10-30% (v/v) tri-n-octyl amine (extractant) dissolved in n-hexane (inert diluent) and 20% (v/v) oleyl alcohol (modifier) was optimized by using response surface methodology (RSM). A three-level Box-Behnken design was employed for experimental design, analysis of the results, and to depict the combined interactive effect of seven independent variables, viz. lactic acid concentration (cl), pH, TOA concentration in the organic phase (ψ), treat ratio (φ), temperature (T), agitation speed (ω) and batch agitation time (τ), on the distribution coefficient of lactic acid. The regression analysis indicated that the quadratic model is significant (R2 and adjusted R2 are 98.72% and 98.69%, respectively). Numerical optimization resulted in a maximum lactic acid distribution coefficient (KD) of 3.16 at the optimized values of the test variables cl, pH, ψ, φ, T, ω and τ of 0.15 [M], 3.0, 22.75% (v/v), 1.0 (v/v), 26°C, 145 rpm and 23 min, respectively. Good agreement between the predicted and experimentally obtained values of the distribution coefficient was exhibited under the optimized conditions.
Keywords: distribution coefficient, tri-n-octylamine, lactic acid, response surface methodology
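The run layout of a three-level Box-Behnken design in coded units can be sketched as follows (an illustrative construction, not the authors' software; for the paper's seven factors it yields 84 edge runs plus center points):

```python
from itertools import combinations, product

def box_behnken(k, center_points=1):
    # Box-Behnken design for k factors at coded levels (-1, 0, +1): each pair
    # of factors runs through the 2x2 factorial points while all remaining
    # factors are held at their midpoint, plus replicated center runs.
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([0] * k for _ in range(center_points))
    return runs

design = box_behnken(7, center_points=1)  # 4 * C(7,2) + 1 = 85 runs
```

A quadratic response surface is then fitted by regression on these coded runs, which is where significance measures such as R2 come from.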
Procedia PDF Downloads 456
8183 Production of Novel Antibiotics of Tylosin by Importing eryK and eryG Genes in Streptomyces fradiae
Authors: Neda Gegar Goshe, M. Moradi, Hossein Rassi
Abstract:
The antibacterial properties of macrolide antibiotics (such as erythromycin and tylosin) depend ultimately on the glycosylation of otherwise inactive polyketide lactones. Among the sugars commonly found in such macrolides are various 6-deoxyhexoses, including the 3-dimethylamino sugars mycaminose and desosamine (4-deoxymycaminose). Some macrolides (such as tylosin) possess multiple sugar moieties, whereas others (such as erythromycin) have two sugar substituents. Streptomyces fradiae is an ideal host for the development of generic polyketide-overproducing strains because it contains three of the most common precursors used by modular PKSs (malonyl-CoA, methylmalonyl-CoA and ethylmalonyl-CoA) and is amenable to genetic manipulation. As patterns of glycosylation markedly influence a macrolide's drug activity, there is considerable interest in the possibility of using combinatorial biosynthesis to generate new pairings of polyketide lactones with sugars, especially 6-deoxyhexoses. Here, we report a successful attempt to alter the aminodeoxyhexose-biosynthetic capacity of Streptomyces fradiae (a producer of tylosin) by importing genes from the erythromycin producer Saccharopolyspora erythraea. The biotransformation of erythromycin D into the desired major component erythromycin A involves two final enzymatic reactions: EryK-catalyzed hydroxylation at the C-12 position of the aglycone and EryG-catalyzed O-methylation at the C-3 position of mycarose. The engineered S. fradiae produced substantial amounts of two potentially useful macrolides that had not previously been obtained by fermentation.
Keywords: tylosin, eryK and eryG genes, Streptomyces fradiae
Procedia PDF Downloads 352
8182 A Two-Dimensional Problem Micropolar Thermoelastic Medium under the Effect of Laser Irradiation and Distributed Sources
Authors: Devinder Singh, Rajneesh Kumar, Arvind Kumar
Abstract:
The present investigation deals with the deformation of a micropolar generalized thermoelastic solid subjected to thermo-mechanical loading due to a thermal laser pulse. Laplace and Fourier transform techniques are used to solve the problem. Thermo-mechanical laser interactions are taken as distributed sources to describe the application of the approach. Closed-form expressions for the normal stress, tangential stress, couple stress and temperature are obtained in the transformed domain. A numerical inversion technique for the Laplace and Fourier transforms has been applied to obtain the resulting quantities in the physical domain after developing a computer program. The normal stress, tangential stress, couple stress and temperature are depicted graphically to show the effect of relaxation times. Some particular cases of interest are deduced from the present investigation.
Keywords: pulse laser, integral transform, thermoelastic, boundary value problem
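The abstract does not name the numerical Laplace inversion technique; one widely used option, shown purely as an illustration, is the Gaver-Stehfest scheme, sketched here and checked against the known pair F(s) = 1/(s+1), f(t) = e^(-t):

```python
import math

def stehfest_invert(F, t, N=12):
    # Gaver-Stehfest numerical inversion of a Laplace transform F(s) at t > 0.
    # N must be even; N = 10-14 is a common choice in double precision.
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            v += (j ** half * math.factorial(2 * j)) / (
                math.factorial(half - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        v *= (-1) ** (half + k)
        total += v * F(k * ln2 / t)
    return ln2 / t * total

# Sanity check on a known transform pair: L{e^(-t)}(s) = 1/(s + 1)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

The scheme works well for smooth, non-oscillatory originals; the paper's stress and temperature fields would require whichever inversion method the authors actually implemented.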
Procedia PDF Downloads 616
8181 The Effectiveness of Adaptive Difficulty Adjustment in Touch Tablet App on Young Children's Spatial Problem Solving Development
Authors: Chenchen Liu, Jacques Audran
Abstract:
Using tablet apps with a specific educational purpose to promote young children's cognitive development is quite common now. Developing an educational app for an iPad-like tablet, especially for a young child (ages 3-5), requires an optimal level of challenge to continuously attract children's attention and obtain an educational effect. Adaptive difficulty adjustment, which dynamically sets the difficulty of the challenge according to children's performance, seems to be a good solution. Since the concept of space plays an important role in young children's cognitive development, we made an experimental comparison in a French kindergarten between one group of 23 children using an educational app, 'Debout Ludo', with adaptive difficulty settings and another group of 20 children using the previous version of 'Debout Ludo' with classic incremental difficulty adjustment. The experimental results on spatial problem solving indicated that a significantly higher learning outcome was achieved by the young children who used the adaptive version of the app.
Keywords: adaptive difficulty, spatial problem solving, tactile tablet, young children
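The abstract does not detail the adaptive rule; a minimal sketch of the general idea, under the assumption of a simple staircase policy (step up after two consecutive successes, step down after a failure), might look like this:

```python
def next_level(level, outcomes, min_level=1, max_level=10):
    # Staircase difficulty adjustment (a hypothetical policy, not Debout
    # Ludo's actual rule). `outcomes` is the child's success history,
    # newest result last.
    if len(outcomes) >= 2 and outcomes[-1] and outcomes[-2]:
        return min(level + 1, max_level)   # two successes in a row: harder
    if outcomes and not outcomes[-1]:
        return max(level - 1, min_level)   # last attempt failed: easier
    return level                           # otherwise hold the level
```

For example, next_level(3, [True, True]) steps up to 4, while next_level(3, [True, False]) steps down to 2; the classic incremental version in the control group would instead raise the level unconditionally.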
Procedia PDF Downloads 444
8180 Optimization of Lubricant Distribution with Alternative Coordinates and Number of Warehouses Considering Truck Capacity and Time Windows
Authors: Taufik Rizkiandi, Teuku Yuri M. Zagloel, Andri Dwi Setiawan
Abstract:
Distribution and growth in the transportation and warehousing business sector decreased by 15.04%: the sector's contribution to Gross Domestic Product (GDP) fell from 4.41% (rank 7) in 2019 to 3.81% (rank 8) in 2020. This decline has led oil and gas companies to implement efficient supply chain strategies to ensure the availability of goods, especially lubricants. Fluctuating demand for lubricants and warehouse service time limits are essential considerations in determining an efficient route. Adding depot points is one solution to ensure that demand for lubricants is fulfilled (no stock-outs). However, adding a depot increases operating and storage costs. Therefore, it is necessary to optimize the addition of depots using the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). This case study was conducted at an oil and gas company that produces lubricants, covering 2019 to 2021. The study obtained the optimal route and the addition of a depot at minimum additional cost. The total cost remains efficient with the addition of a depot when compared to the single depot in Jakarta.
Keywords: CVRPTW, optimal route, depot, tabu search algorithm
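The time-window side of CVRPTW can be illustrated with a small feasibility-and-cost check for a single vehicle route (toy data, not the company's network; a tabu search would call a routine like this when evaluating candidate moves):

```python
def route_cost(route, travel, service, windows, depot=0):
    # Returns the travel cost of one route (depot -> customers -> depot),
    # or None if any customer's time window is violated.
    t = cost = 0.0
    prev = depot
    for node in route + [depot]:
        t += travel[prev][node]
        cost += travel[prev][node]
        if node != depot:
            open_t, close_t = windows[node]
            if t > close_t:
                return None        # arrived after the window closed
            t = max(t, open_t)     # early arrival: wait for the window
            t += service[node]
        prev = node
    return cost

travel = [[0, 2, 4],
          [2, 0, 3],
          [4, 3, 0]]               # symmetric travel times/costs (made up)
service = {1: 1, 2: 1}             # service durations at customers 1 and 2
windows = {1: (0, 5), 2: (4, 10)}  # (open, close) times per customer
cost = route_cost([1, 2], travel, service, windows)  # 2 + 3 + 4 = 9.0
```

Note that visiting the same customers in the reverse order [2, 1] is infeasible here, since customer 1's window has closed by the time the vehicle arrives; this asymmetry is exactly what the routing optimization has to navigate.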
Procedia PDF Downloads 136
8179 Cash Flow Optimization on Synthetic CDOs
Authors: Timothée Bligny, Clément Codron, Antoine Estruch, Nicolas Girodet, Clément Ginet
Abstract:
Collateralized Debt Obligations are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, there remains an enthralling challenge: optimizing the cash flows associated with synthetic CDOs. A Gaussian-based model is used here, in which the default correlation and the unconditional probabilities of default are highlighted. Numerous simulations are then performed based on this model for different scenarios in order to evaluate the associated cash flows given a specific number of defaults at different periods of time. Cash flows are not calculated solely on a single bought or sold tranche but rather on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow according to the default correlation and the maturities. The Gaussian model used is not realistic in crisis situations. Besides, the present system does not handle buying or selling a portion of a tranche, only the whole tranche. Nevertheless, the work provides the investor with relevant elements on how to know what and when to buy and sell.
Keywords: synthetic collateralized debt obligation (CDO), credit default swap (CDS), cash flow optimization, probability of default, default correlation, strategies, simulation, simplex
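The default-simulation step of such a Gaussian model can be sketched as a one-factor Monte Carlo (illustrative parameters, stdlib Python only; the inverse normal CDF is obtained by bisection for self-containment, and none of the numbers below come from the paper):

```python
import math
import random

def inv_norm_cdf(p, lo=-10.0, hi=10.0):
    # Inverse standard normal CDF by bisection on Phi (a stdlib-only sketch).
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def simulate_default_counts(n_names, p, rho, n_sims, seed=42):
    # One-factor Gaussian copula: name i defaults when
    # sqrt(rho)*M + sqrt(1-rho)*e_i < Phi^-1(p), with M the common factor.
    rng = random.Random(seed)
    c = inv_norm_cdf(p)
    counts = []
    for _ in range(n_sims):
        m = rng.gauss(0.0, 1.0)
        k = sum(1 for _ in range(n_names)
                if math.sqrt(rho) * m + math.sqrt(1 - rho) * rng.gauss(0.0, 1.0) < c)
        counts.append(k)
    return counts

counts = simulate_default_counts(n_names=100, p=0.05, rho=0.3, n_sims=2000)
mean_fraction = sum(counts) / (2000 * 100)  # close to the input p = 0.05
```

Each simulated default count maps to tranche losses and hence to the cash flows of a bought/sold tranche combination, which is the quantity the simplex step then maximizes.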
Procedia PDF Downloads 275
8178 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models
Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh
Abstract:
In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE), with an error function defined in the mean-square sense. The multidimensional and nonlinear nature of the problem arising in sinusoidal signal models, along with noise, makes it a challenging optimization task, which is addressed through the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), the unknown adjustable weights of the signal system identification model are optimized using the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices over a sufficiently large number of runs, in terms of estimation error, mean squared error and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual parameters of the system, with Genetic Algorithm based outcomes, and with various deterministic approaches at different signal-to-noise ratio (SNR) levels.
Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals
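The differential evolution loop underlying such a scheme can be sketched as below (a generic DE/rand/1/bin variant fitting a noiseless test sinusoid, not the paper's exact CDE formulation or its signal data):

```python
import math
import random

def differential_evolution(loss, bounds, pop_size=40, F=0.7, CR=0.9,
                           generations=300, seed=1):
    # Generic DE/rand/1/bin minimizer over a box (a sketch, not the paper's
    # exact continuous-DE variant).
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [loss(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # at least one gene is always mutated
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if j == jrand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                trial.append(min(max(v, lo), hi))  # clamp to the box
            s = loss(trial)
            if s <= scores[i]:  # greedy selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# Recover amplitude and frequency of a test signal y = 2*sin(1.5*t)
ts = [0.05 * k for k in range(100)]
ys = [2.0 * math.sin(1.5 * t) for t in ts]

def mse(p):
    # Mean-square error between the model A*sin(w*t) and the samples.
    A, w = p
    return sum((A * math.sin(w * t) - y) ** 2 for t, y in zip(ts, ys)) / len(ts)

params, err = differential_evolution(mse, [(0.0, 5.0), (0.0, 5.0)])
```

With noisy samples the same loop applies unchanged; only the residuals in mse grow, which is the multimodal, noise-perturbed setting the paper targets.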
Procedia PDF Downloads 302
8177 Process Optimization of Mechanochemical Synthesis for the Production of 4,4 Bipyridine Based MOFS using Twin Screw Extrusion and Multivariate Analysis
Authors: Ahmed Metawea, Rodrigo Soto, Majeida Kharejesh, Gavin Walker, Ahmad B. Albadarin
Abstract:
In this study, towards a green approach, we have investigated the effect of the operating conditions of a solvent-assisted twin-screw extruder (TSE) on the production of a 4,4'-bipyridine-based one-dimensional (1D) coordination polymer, using cobalt nitrate as the metal precursor at a 1:1 molar ratio. Different operating parameters such as solvent percentage, screw speed and feeding rate are considered. The resultant product is characterized using offline methods, namely powder X-ray diffraction (PXRD), Raman spectroscopy and scanning electron microscopy (SEM), in order to investigate product purity and surface morphology. A lower feeding rate increased the product's quality, as more residence time was provided for the reaction to take place. The most important influencing factor was the amount of liquid added: the addition of water helped facilitate the reaction inside the TSE by increasing the reaction surface area of the particles.
Keywords: MOFs, multivariate analysis, process optimization, chemometrics
Procedia PDF Downloads 159
8176 Wavelet Method for Numerical Solution of Fourth Order Wave Equation
Authors: A. H. Choudhury
Abstract:
In this paper, a highly accurate numerical method for the solution of the one-dimensional fourth-order wave equation is derived. This hyperbolic problem is solved by using semidiscrete approximations: the space direction is discretized by the wavelet-Galerkin method, and the time variable by Newmark schemes.
Keywords: hyperbolic problem, semidiscrete approximations, stability, wavelet-Galerkin method
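The Newmark time discretization can be sketched on a single-degree-of-freedom test problem (the average-acceleration variant beta = 1/4, gamma = 1/2; the wavelet-Galerkin space discretization, which produces the actual mass/stiffness system, is not reproduced here):

```python
def newmark_sdof(m, c, k, f, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    # Newmark time stepping for m*u'' + c*u' + k*u = f(t); returns the
    # displacement history. beta=1/4, gamma=1/2 is the unconditionally
    # stable average-acceleration scheme.
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m          # consistent initial acceleration
    history = [u]
    for n in range(1, steps + 1):
        t = n * dt
        # effective stiffness and load (standard Newmark algebra)
        keff = k + gamma * c / (beta * dt) + m / (beta * dt * dt)
        feff = (f(t)
                + m * (u / (beta * dt * dt) + v / (beta * dt)
                       + (0.5 / beta - 1.0) * a)
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                       + dt * (gamma / (2 * beta) - 1.0) * a))
        u_new = feff / keff
        v_new = (gamma / (beta * dt)) * (u_new - u) \
                + (1.0 - gamma / beta) * v \
                + dt * (1.0 - gamma / (2 * beta)) * a
        a_new = (u_new - u) / (beta * dt * dt) - v / (beta * dt) \
                - (0.5 / beta - 1.0) * a
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return history

# Undamped oscillator u'' + 4u = 0, u(0) = 1: exact solution is cos(2t)
history = newmark_sdof(m=1.0, c=0.0, k=4.0, f=lambda t: 0.0,
                       u0=1.0, v0=0.0, dt=0.01, steps=400)
```

For the fourth-order wave equation, k and m would be replaced by the stiffness and mass matrices from the wavelet-Galerkin semidiscretization, and the scalar division by keff by a linear solve.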
Procedia PDF Downloads 315
8175 A Mathematical Model for a Two-Stage Assembly Flow-Shop Scheduling Problem with Batch Delivery System
Authors: Saeedeh Ahmadi Basir, Mohammad Mahdavi Mazdeh, Mohammad Namakshenas
Abstract:
Manufacturers often dispatch jobs in batches to reduce delivery costs. However, sending several jobs in batches can have a negative effect on other scheduling-related objective functions, such as minimizing the number of tardy jobs, which is often used to rate managers' performance in many manufacturing environments. This paper aims to minimize the number of weighted tardy jobs and the sum of delivery costs for a two-stage assembly flow-shop problem with a batch delivery system. We present a mixed-integer linear programming (MILP) model to solve the problem. As this is an MILP model, a commercial solver (CPLEX) is not guaranteed to find the optimal solution for large-size problems in a reasonable amount of time. We present several numerical examples to confirm the accuracy of the model.
Keywords: scheduling, two-stage assembly flow-shop, tardy jobs, batch delivery system
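On a toy scale, the weighted-tardy-jobs part of the objective can be checked by brute force; the sketch below enumerates single-machine sequences only and ignores the assembly stages and batching that the MILP actually covers (the jobs and weights are made up):

```python
from itertools import permutations

def min_weighted_tardy(jobs):
    # Brute-force minimum weighted number of tardy jobs on one machine.
    # jobs = list of (processing_time, due_date, weight). Exponential in the
    # number of jobs, so usable only to cross-check tiny instances.
    best = None
    for order in permutations(range(len(jobs))):
        t = cost = 0
        for i in order:
            p, d, w = jobs[i]
            t += p
            if t > d:       # job i completes after its due date
                cost += w
        best = cost if best is None else min(best, cost)
    return best

# Jobs 0 and 1 are both due at t=2 but only one can finish by then,
# so the cheapest schedule pays the weights of jobs 1 and 2: cost 3.
jobs = [(2, 2, 5), (2, 2, 2), (3, 4, 1)]
best_cost = min_weighted_tardy(jobs)  # 3
```

In the paper's setting the MILP additionally decides batch composition and delivery times, trading such tardiness penalties off against the delivery costs.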
Procedia PDF Downloads 460